Company Post

Human-Machine Reality in the Immersive Art of Weidi Zhang


Weidi Zhang is a new media artist and designer whose visually lush and elegant narratives contemplate and conceptualize a not-so-distant future human-machine relationship. Her interdisciplinary art and design research investigates A Speculative Assemblage at the intersection of interactive AI art, immersive media, and experimental data visualization. She is a recipient of multiple international awards, including SIGGRAPH's Best In Show Award, the Red Dot Design Award, and an Honorary Mention in Prix Ars Electronica, and has exhibited and performed immersive A/V works worldwide. Weidi is also an Assistant Professor of immersive experience design at the Media and Immersive eXperience (MIX) Center of Arizona State University. We had the opportunity to talk to Weidi about her art practice as well as her teaching experiences thus far and are excited to share this fascinating conversation with you.

Derivative: What attracted you to pursue a career in media arts, technology and design and what has been your trajectory?

Weidi Zhang: I began my pursuit of a career in media arts while attending graduate school at the California Institute of the Arts. During my time there I explored interactive media and immersive art installations that combined real-time visualization with data input. I was introduced to the fields of concept art, visual design, and media arts, and after graduating from CalArts, I worked in the industry creating commercial visuals. In 2017, I returned to school to pursue a PhD degree in the Media Arts and Technology program at UC Santa Barbara, where I began using TouchDesigner. My focus is on creating interactive immersive experiences at the intersection of interactive AI art, immersive media, and data visualization.

Derivative: I believe you first encountered TouchDesigner at USC when Jarrett Smith from Derivative and Jordan Halsey led a workshop? What interested you about the software first and how was the learning experience?

Weidi Zhang: I first came across TouchDesigner at USC during a workshop, and it left quite an impression on me. Before that, I had used Unity and Processing to create media art, but I had never used a node-based visual programming language. I was amazed at how clean and user-friendly the interface was and how intriguing the learning process was. It was easy to produce immediate visual results, which was very satisfying and motivating. I started with the basics and delved into GLSL when creating my work “Cangjie’s Poetry”. We developed an AI system that converts live streaming into a collection of new symbols for a real-time art experience.

Derivative: How is TouchDesigner useful in your practice and what other tools do you use alongside it?

Weidi Zhang: When I first began creating new media art, I used Processing and p5.js. However, I soon switched to using Unity for creating VR experiences. After being introduced to TouchDesigner, it quickly became my go-to tool for creating data-driven interactive computer graphics.

Creating daily generative visuals through TouchDesigner has become an addiction for me. It inspires me to explore new approaches to visual experiments, and I find it enjoyable because it promotes diverse perspectives on the visual-making process.

In addition, I utilize Cinema 4D and Rhino for modelling, as well as audio software such as Ableton Live. My research primarily involves integrating TouchDesigner with my customized AI systems to create unique art experiences. I have also used TouchDesigner to produce generative digital content and environments, both for aesthetic purposes and for immersive cultural production.

Derivative: You are also Assistant Professor of immersive experience design at the Media and Immersive eXperience (MIX) Center of Arizona State University. Can you talk a bit about what you are working on there, how it is to be on the teaching end of the equation, and how you structure your courses?

Weidi Zhang: I am conducting research into the creation of immersive art experiences that combine interactive AI system design, data visualization, and immersive media. The goal is to develop a new visual language using emerging technologies that can provide multi-sensory and empathetic art experiences. In addition, I teach courses on immersive media and digital content at the MIX Center. Together with my colleague Ana Herruzo, a talented new media practitioner and fellow TouchDesigner user, I teach students the theoretical and technical skills necessary to understand, design, prototype, and create immersive experiences. In one of our most impressive projects, our students worked together to explore the critical issue of climate change through large-scale interactive projects, web-based applications, and augmented reality. Their results received a lot of public attention.

(THE MELT PROJECT team: ASU students: Henry Beach, Xavier Nokes, Ankita Santhosh Kumar, Chetan Nagaraja, Clare Witt, Kimberly Tsen, Mary Kenny, Nandini Maya Thevar, Paul Amendola. Professors: Dr. Ana Herruzo, Dr. Weidi Zhang)

Another course I teach, entitled “Assembled Reality,” focuses on digital content creation through a generative approach. By building rule-based systems, students produced a range of compelling audio-visual projects. I am thrilled to see my talented students so enthusiastic about learning TouchDesigner. It is worth noting that the TouchDesigner community is exceptionally generous, with many creators posting tutorials and providing solutions to technical queries on the TouchDesigner forums. Students are excited to get involved in the community and share insights. In this class, our Ph.D. student Shomit Barua and graduate student Henry Beach produced an impressive final project entitled "Entrainment 718", a multichannel video installation with 8-channel spatial audio, which recreates the hypnotic and transcendental states produced by repetition and the visual, acoustic, and haptic polyrhythms that occur on a train.

Video Credit: Shomit Barua and Henry Beach

Derivative: What are some of the emerging or existing technologies that are influential in the scholastic environment right now? What are students (and faculty) excited about?

Weidi Zhang: In the contemporary scholastic environment, the landscape of influential technologies is both diverse and dynamic, often reflecting the unique interests and pursuits of individual researchers, faculty, and students.

A prominent trend that has garnered substantial enthusiasm is the integration of artificial intelligence systems for generative visualization. Another is the implementation of multi-channel multimedia systems for multi-sensory interactive immersive performance, which has been transformative, particularly in the realms of theatre arts and interactive demonstrations.

The exploration of new media art, encompassing generative AI, XR technologies, and interactive systems, continues to blur the lines between technology and artistic expression. These developments, coupled with rich audio/visual experiences, are not only enhancing academic pursuits but also cultivating an environment where students and faculty alike can engage, innovate, and inspire.

Derivative: You have a very pronounced and evocative stylistic and visual sense. What are some of your influences speaking on an artistic and design front?

Weidi Zhang: My artistic and design sensibilities are deeply rooted in a synthesis of various influences, both historical and cultural. I find myself particularly drawn to the avant-garde movements of the 20th century, such as Cubism, Surrealism, and Abstract Expressionism, where the boundaries of form and concept were continuously challenged and redefined. This fascination with breaking conventions extends to my interest in merging Eastern and Western aesthetics. By employing a generative approach, I strive to create a dialogue between these seemingly disparate traditions, weaving together the intricate patterns and philosophies of the East with the bold abstraction and dynamism of the West. This confluence of influences not only shapes my visual language but also fuels my ongoing exploration of new artistic horizons.

Derivative: You describe your fascinating work "RAY", which was awarded the prestigious Best In Show Award at SIGGRAPH 2022, as interactive AI art. In this piece, via a camera situated above the work, RAY observes participants and authors the live-streamed data into “… a novel semantic Rayograph that evolves in real-time”, citing the Dadaist and Surrealist artist Man Ray, who coined the term Rayograph to describe his cameraless photographic process. Can you tell us more about this work, what drew you to making it, and your process?

Weidi Zhang: "RAY" is an exploration that marries the cameraless photographic method of Man Ray with contemporary intelligent systems. It bridges the tactile, manual approach of traditional Rayographs with today's automated, AI-driven processes. This fascination led me to recontextualize Rayographs into a dynamic experience that raises critical questions about surveillance and camera culture.

The project employs Image-to-Image Translation with Conditional Adversarial Networks, trained on over 3000 pairs of Rayograms and human portraits, to translate human images into new Rayographs. This intricate process is visualized through light painting aesthetics using TouchDesigner. This software not only aids in developing the generated moving images but also connects to an external customized AI system, capturing live data and translating it into a generative Rayograph.
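The pix2pix approach named above trains a generator with two terms: an adversarial loss (the discriminator should judge the translated image as real) plus a weighted L1 loss pulling the output toward the paired target. As a rough sketch of that combined generator objective on toy NumPy arrays — the shapes and values are illustrative, not taken from the "RAY" training setup:

```python
import numpy as np

def pix2pix_loss(d_fake, generated, target, lam=100.0):
    """Combined pix2pix generator loss on toy arrays.

    d_fake    -- discriminator scores in (0, 1] for the generated patches
    generated -- generator output image, values in [0, 1]
    target    -- paired ground-truth image (here, a Rayograph), in [0, 1]
    lam       -- weight of the L1 reconstruction term (100 in the
                 original pix2pix paper)
    """
    eps = 1e-8
    # Adversarial term: the generator wants D to score its output as real.
    adv = -np.mean(np.log(d_fake + eps))
    # L1 term: keep the translation close to the paired target.
    l1 = np.mean(np.abs(generated - target))
    return adv + lam * l1

# Toy example: a 4x4 "portrait to Rayograph" pair.
rng = np.random.default_rng(0)
fake_scores = rng.uniform(0.4, 0.9, size=(4, 4))
gen_img = rng.uniform(0, 1, size=(4, 4))
gt_img = rng.uniform(0, 1, size=(4, 4))
loss = pix2pix_loss(fake_scores, gen_img, gt_img)
```

With a large `lam`, the reconstruction term dominates, which is what keeps pix2pix outputs faithful to the paired training targets while the adversarial term sharpens texture.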

TouchDesigner's real-time capabilities enhance audience interactivity, transforming participants' movements and interactions into an evolving visual experience. "RAY" metaphorically links the power of gaze with surveillance, creating a continuous dialogue between viewers and artwork. As a reflection on our relationship with images in a world mediated by intelligent systems, "RAY" invites viewers to consider images beyond mere visual representations and engage with them as data-based visualizations driven by automatic operations.

“Trained on over 9,000 Chinese characters, the AI system Cangjie by Weidi Zhang and Donghao Ren is creating a new language to converse with the spectator, creating an immersive data visualization spectacle in a multimodal installation. Cangjie's Poetry tackles the issue of language creation between human and machine in a sensitive, poetic, and fragile way. Cangjie's Poetry is an exceptional and far-reaching work bringing data visualization and the use of AI in co-creation between being and apparatus.”

- Prix Ars Electronica Honorary Mention

Derivative: To follow up on what Ars Electronica had to say about "Cangjie's Poetry", could you please detail for us how the work was made and some of the challenges you encountered?

Weidi Zhang: "Cangjie's Poetry" is an intricate blend of art, technology, and historical inspiration. The project began by training a neural network, dubbed Cangjie, on the principles of over 9,000 Chinese characters, a process inspired by the legendary historian Cangjie, who devised Chinese characters based on earthly characteristics. The challenge lay in teaching the network to interpret images through the lens of Chinese characters, generating new symbols from Chinese strokes, and crafting corresponding descriptive sentences. By utilizing a pre-trained model for localized natural language descriptions, we were able to create a symbolic system that translated real-world streaming imagery into abstract pixelated landscapes and flowing poetry. This was projected as part of an interactive art installation, intertwining the past and present, and conceptualizing a future human-machine relationship.
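The core idea — interpreting image regions through a vocabulary of stroke-like symbols — can be illustrated with a deliberately simplified sketch. The real Cangjie system is a trained neural network over 9,000+ characters; the five basic stroke names and the mean-brightness matching below are purely illustrative stand-ins:

```python
import numpy as np

# Toy stroke "codebook": each entry is a mean-brightness prototype.
# (Illustrative only -- the actual system learns this mapping.)
STROKES = {
    "heng (horizontal)":  0.1,
    "shu (vertical)":     0.3,
    "pie (left-falling)": 0.5,
    "na (right-falling)": 0.7,
    "dian (dot)":         0.9,
}

def image_to_symbols(img, grid=2):
    """Map each cell of a grid over the image to the nearest stroke
    prototype by mean brightness, yielding a 'sentence' of strokes."""
    h, w = img.shape
    ch, cw = h // grid, w // grid
    names = list(STROKES)
    protos = np.array([STROKES[n] for n in names])
    symbols = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            idx = int(np.argmin(np.abs(protos - cell.mean())))
            symbols.append(names[idx])
    return symbols

# A 4x4 "camera frame" with four distinct brightness regions.
frame = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [0.5, 0.5, 0.9, 0.9],
                  [0.5, 0.5, 0.9, 0.9]])
sentence = image_to_symbols(frame)
```

In the installation itself, a learned model would play the role of this lookup, and the resulting symbol stream would feed the real-time visualization.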

Creating "Cangjie's Poetry" presented multifaceted challenges, including the development of a multimodal intelligent system and the difficulties of remote collaboration during the pandemic. Striking a balance between technical accommodation and artistic expression was intricate. The project was further tested by the constraints of presenting an interactive installation during the pandemic, which led to an innovative global call for entry to collect daily footage from people worldwide. These submissions were creatively fed into the Cangjie system, which then output an animation presented as an evolving scroll of collective voices and conversations, culminating in a unique artwork that embodies both ancient wisdom and collective conversations. 

Derivative: What compels you to explore working with AI and what have you learned in the process?

Weidi Zhang: My exploration in working with AI is driven by my research entitled "A Speculative Assemblage," where the integration of automation and artistic decision-making serves to transform real-world data into cultural artifacts. These artifacts are then decoded by participants, culminating in immersive art experiences. Today's advancements in machine learning provide a fresh approach to automation, where the aesthetic essence of generative models heralds a new aesthetic frontier.

Prediction using machine learning algorithms can create personalized interactive experiences, while the uncertainty in these algorithms introduces new possibilities for chance and choice operations.

However, the ethical considerations, including questions of copyright, privacy concerns, and potential biases, must be approached with caution and integrity. Yet, these challenges present an opportunity to leverage new media art and design as platforms to not only pose these vital questions but also to foster a broader public awareness and engagement with these critical ethical issues.

Derivative: What is next on your horizons?

Weidi Zhang: I'm currently joining the Society for Arts and Technology (SAT) remotely as a collaborative artist, where I'm working on an immersive piece named "Wayfarer." It is scheduled for completion early next year, and I'm eagerly anticipating its reveal. Concurrently, I am developing a new interactive AI artwork, also targeted for completion next year. TouchDesigner will be the primary environment for creating both pieces.

Additionally, I recently completed a project entitled "ReCollection" in collaboration with Rodger Luo, an AI principal scientist at Autodesk. This work was recently showcased at SIGGRAPH 2023 in LA. It revolves around transforming participants' fragmented language of past memories into synthetic, real-time visual memories. We crafted this piece by synergizing several AI methodologies, including speech recognition, text auto-completion, and text-to-image modelling. By building an AI system and integrating it with TouchDesigner environments, we were able to further process and present the AI's output, transforming it into an interactive art experience. Inspired by my grandmother's experience with memory regression, I hope that "ReCollection" can have a future impact by providing comfort to people living with dementia through the fusion of generative AI and art, language, and narrative.
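The chaining described for "ReCollection" — speech recognition, then text auto-completion, then text-to-image — can be sketched as a pipeline of interchangeable stages. The stage functions below are placeholder stubs standing in for the actual models used in the work, and the sample strings are invented:

```python
def recognize_speech(audio: bytes) -> str:
    """Placeholder: a real system would run a speech-to-text model
    on the participant's spoken memory fragment."""
    return "summer rain on the old courtyard"

def autocomplete(fragment: str) -> str:
    """Placeholder: a real system would expand the fragment with a
    language model into a fuller memory description."""
    return fragment + ", lanterns flickering as the storm passes"

def text_to_image_request(description: str) -> dict:
    """Placeholder: a real system would hand this prompt to a
    text-to-image model and stream frames onward (e.g. to a
    real-time environment such as TouchDesigner)."""
    return {"prompt": description, "frames": 24}

def recollection_pipeline(audio: bytes) -> dict:
    """Chain the three stages: fragmented speech -> completed text ->
    image-generation request."""
    fragment = recognize_speech(audio)
    description = autocomplete(fragment)
    return text_to_image_request(description)

result = recollection_pipeline(audio=b"")
```

Structuring the stages this way keeps each model swappable, which matters when the recognizer, language model, and image model evolve at different speeds.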

Follow Weidi Zhang Website | Instagram