Exhibition at Yamanashi Prefectural Museum of Art, Feb. 2022
Exhibition at TUB, Jan. 2022
Exhibition at TIERS GALLERY, Sep. 2021
This work is an interactive installation in which viewers type in text of their own and watch it become a kanji character in a new series named “Compressed ideographs”. These characters are generated by DALL-E, a deep learning model, through a process unlike any of the six historical methods of kanji formation (pictograms, simple ideograms, compound ideograms, phono-semantic compounds, derivative characters, and phonetic loans).
Since the second century, kanji characters have been created and classified into six categories (Rikusho) according to how they were formed. New kanji are still coined today, for example for newly discovered chemical elements, but always by people using the existing methods. In today’s increasingly complex and diverse world, can the world be described using only kanji created by these conventional methods? In this work, we used a deep learning model to create a seventh category, which we named “Compressed ideographs” and which can be applied to any text.
To generate the kanji, we used a transformer model called DALL-E, which the authors trained on a large number of pairs consisting of a kanji character and a sentence describing its meaning. Given any string or sentence entered by the viewer, the model generates a kanji that compresses that meaning into a single character. At the same time, the input text is vectorized into 300 dimensions by a Doc2Vec model trained by the authors, and its position in a 3D space produced by the dimensionality-reduction algorithm UMAP is calculated. The newly generated kanji is then placed, alongside a huge number of existing kanji, in this 3D space that represents the meaning of strings and sentences. Finally, the relationship between the two is visualized by highlighting the existing kanji closest in meaning to the generated one, and by displaying its similarity to a large random sample of existing kanji.
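The nearest-neighbor step of this pipeline can be sketched in a few lines. This is an illustration only, not the installation's code: random vectors stand in for the authors' Doc2Vec embeddings, the kanji list is a placeholder, and cosine similarity is assumed as the distance measure.

```python
import numpy as np

# Placeholder embeddings: in the installation, a Doc2Vec model trained by
# the authors produces a 300-dimensional vector per kanji / input sentence.
rng = np.random.default_rng(0)
kanji = ["山", "川", "火", "水", "木"]
kanji_vecs = rng.normal(size=(len(kanji), 300))

def nearest_kanji(query_vec, vecs, labels, k=3):
    """Return the k kanji whose embeddings are closest (by cosine) to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    v = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = v @ q                    # cosine similarity to every existing kanji
    order = np.argsort(-sims)[:k]   # indices of the k most similar
    return [labels[i] for i in order]

query = rng.normal(size=300)        # stands in for Doc2Vec(viewer's input text)
print(nearest_kanji(query, kanji_vecs, kanji))
```

The same similarity scores could drive the visualization: the top-k results are the "closest in meaning" kanji shown next to the generated character, after UMAP has reduced all vectors to 3D for placement.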
Through the experience of plotting kanji that reflect the complex features of a text’s meaning as captured by AI, viewers can explore the gap between characters that have been created and fixed by humans and those generated by AI.
September 25th, 2021.
Scott Allen (Direction, Machine learning, Visual programming)
Keito Takaishi (Machine learning, Visual programming)
Asuka Ishii (Machine learning)
Kazufumi Shibuya (Machine learning, Visual programming)
Muhan Li (Support)
Atsuya Kobayashi (Sound)
Nao Tokui (Technical advice)
Keio University SFC Computational Creativity Lab (Nao Tokui Lab)
2nd edition (Exhibited at TUB, Jan. 2022)
Video by Asuka Ishii
Haruka Komano (Cast)
Tama Art University Bureau
1st edition (Exhibited at TIERS GALLERY, Sep. 2021)
Video by Asuka Ishii
Soshi Yamaguchi (Sound equipment cooperation)