
A Night at the Getty: What I See Is What You Get

Earlier this year Woodbury's Applied Computer Science - Media Arts program was invited by the Getty Museum to design an installation and exhibition for their College Night event. The result, an interactive installation merging AI and visual arts, was remarkable in scope and complexity, meticulously executed yet emotive and playful - inspired to say the least. We spoke with ACS Program Chair Professor Ana Herruzo, an architect by training and a brilliant artist and programmer (whose Captured Atmospheres we wrote about here), who worked with the students in creating the exhibition.

Derivative: Ana, could you please tell us a bit about Woodbury's ACS program and how this Getty installation came about?

Ana Herruzo: The Applied Computer Science - Media Arts program is a Bachelor of Science that helps students become designers, thinkers, and leaders of the new digital age. It is an art and technology hybrid degree focusing on emerging digital practices by working with interactive environments, experiential design, and human interaction. The program uses computer science as a tool to innovate within the fields of design, entertainment, and media arts.

This year we were invited by the Getty Museum to design an installation for their College Night event. We decided to combine two classes for the development of the project.

With concept and project management by myself and faculty member Nikita Pashenkov, Woodbury's ACS students created an immersive, interactive installation merging AI and visual arts. Students from my Media Environments class led the project design and execution, and the students in Professor Pashenkov's Artificial Intelligence course led the machine learning development part of the project.

Derivative: Can you give us a brief overview of the TouchDesigner systems you built?

Ana Herruzo: There are four main areas that were developed in TouchDesigner. Students took leadership roles in each of these areas:

  1. Software, hardware, and networking - Zane Zukovsky
  2. Playback system - Ben Luker
  3. Interactive development - Sungmin Lee
  4. Art direction and overall looks and color palettes for emotions - Ka Kit Chiu

In addition to these responsibilities, each student designed at least three real-time TouchDesigner scenes. Students learned TD programming skills but also had creative roles, and learned how to create real-time generated visuals in TD. None of the students had previous TouchDesigner experience.

Derivative: That's really impressive! In context it would be very insightful to know a bit about your experiences teaching TouchDesigner.

Ana Herruzo: In the art and tech field, which is what our program focuses on, TouchDesigner is a very powerful tool. Students learn 3D modeling and programming in the first year, so when they arrive at the Media Environments class they pick TouchDesigner up quite fast. It’s amazing how much they learned this semester. As I mentioned they can now all make beautiful 3D scenes, but they can also script and write a show-system, build complex logic systems, state machines, parse interactive data, network with other applications and much more. 

“Learning TouchDesigner is like learning how to write a poem with logical ideas.” Student, Ka Kit Chiu

Getty Installation Brief 

Ana Herruzo: We designed and developed an experiential installation that creates live interactive visuals by analyzing human facial expressions and behaviors, accompanied by text generated using machine learning algorithms trained on the art collection of The J. Paul Getty Museum in Los Angeles.

The installation consists of a vertical video wall composed of three landscape-oriented screens. On top of the wall there are two embedded sensors: a Microsoft Kinect ONE (containing an RGB color VGA video camera, a depth sensor, and a multi-array microphone) and a USB web camera. These two sensors enable us to obtain live data from users with computer vision algorithms. We used two software platforms: PyCharm as an integrated development environment for the Python programming language, and Derivative TouchDesigner, a real-time rendering, visual programming platform. The two platforms communicated with each other via TCP/IP sockets sent over the network. The content displayed on the video wall consisted of real-time generated animations playing inside and outside the silhouettes of the users.
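To make that bridge concrete, here is a minimal sketch (our illustration, not the students' actual code) of the Python side packaging one detection result as JSON and pushing it over a TCP socket to a TCP/IP DAT listening in TouchDesigner. The host, port and field names are assumptions.

    # Hypothetical sketch of the Python (PyCharm) side of the pipeline.
    # Host, port and field names are illustrative assumptions.
    import json
    import socket

    TD_HOST = "127.0.0.1"   # machine running TouchDesigner
    TD_PORT = 7000          # port the TouchDesigner TCP/IP DAT listens on

    def send_detection(num_people, ages, emotions):
        """Send one detection result to TouchDesigner as a JSON line."""
        message = {
            "num_people": num_people,
            "ages": ages,          # e.g. [24, 31]
            "emotions": emotions,  # e.g. ["joy", "surprise"]
        }
        payload = (json.dumps(message) + "\n").encode("utf-8")
        with socket.create_connection((TD_HOST, TD_PORT), timeout=1.0) as sock:
            sock.sendall(payload)

    if __name__ == "__main__":
        send_detection(2, [24, 31], ["joy", "surprise"])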

 

The installation is equipped with a camera sensor and uses machine learning (AI) algorithms to detect the users' facial expressions. When users approach the screen, a new, unique animation is displayed on the video wall depending on how many of them are in front of the screen, their facial expressions, their ages and genders. From each live interaction with the piece we obtained the following data:

  • Number of people

  • Age

  • Emotion

Using those parameters, the students generated the graphics displayed on the screen - e.g. if the user expressed "joy" the screen would display a particle system that used colors associated with this feeling, and perhaps moved at a faster pace than if a "sad" emotion had been detected.
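As a simple illustration of this kind of mapping (a sketch under our own assumptions, not the students' network), a detected emotion label could be translated into a color palette and animation speed that drive custom parameters on a particle scene in TouchDesigner:

    # Hypothetical emotion-to-look mapping; palette values, speeds and the
    # component path /project1/scene_particles are invented for illustration.
    EMOTION_LOOKS = {
        "joy":     {"palette": (1.0, 0.85, 0.2), "speed": 1.5},
        "sadness": {"palette": (0.2, 0.3, 0.6),  "speed": 0.6},
        "anger":   {"palette": (0.9, 0.2, 0.2),  "speed": 2.0},
        "neutral": {"palette": (0.7, 0.7, 0.7),  "speed": 1.0},
    }

    def apply_emotion(emotion, num_people):
        """Drive assumed custom parameters on a particle scene from the detection."""
        look = EMOTION_LOOKS.get(emotion, EMOTION_LOOKS["neutral"])
        scene = op('/project1/scene_particles')
        scene.par.Speed = look["speed"]
        scene.par.Birthrate = 100 * max(1, num_people)  # more people, more particles
        r, g, b = look["palette"]
        scene.par.Colorr = r
        scene.par.Colorg = g
        scene.par.Colorb = b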

Examples of different color palettes applied to a scene depending on the user's facial expression. Different AI-generated titles and text descriptions, built from the user data, are displayed with each animation.

"TouchDesigner is addicting." student Sungmin Lee

In parallel, the students worked on "training a network" that generated a description of what was being displayed on the screen. In the artificial intelligence class with Professor Nikita Pashenkov, the students studied the Getty's art collection and selected all the art pieces that depicted humans. Their focus was on artifacts containing people, drawn from the total number of pieces currently on display at the Getty Center (1,276 results according to the website). The database was created using the following information from the artworks:

  • Title

  • Artist/Maker

  • Date

  • Description

  • Primary Sentence

  • Number of People

  • Gender

  • Age

  • Emotion

  • Image

In analyzing the Getty's art collection, students experimented with the deep learning language model GPT-2, released by the non-profit OpenAI in February of this year. The language model was trained by the students with existing descriptions on display at the Getty Center, then prompted by computer vision algorithms detecting participants' facial expressions in order to generate new synthetic descriptions to accompany the real-time visualizations.
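The article does not include the training code, but a minimal fine-tuning pass of the kind described might look like the sketch below, using the open-source gpt-2-simple package (our assumption; the class may have used different tooling). The file getty_descriptions.txt is a hypothetical text dump of the collected artwork descriptions.

    # Assumed sketch of fine-tuning GPT-2 on the collected descriptions with
    # gpt-2-simple; not the course's actual training script.
    import gpt_2_simple as gpt2

    MODEL = "124M"                       # smallest public GPT-2 checkpoint
    gpt2.download_gpt2(model_name=MODEL)

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  dataset="getty_descriptions.txt",  # hypothetical training file
                  model_name=MODEL,
                  steps=500)                          # a short fine-tuning pass

    # At run time, the detected user parameters seed the prompt for a new description.
    prompt = "Two people, joyful expressions, aged 20 and 25."
    texts = gpt2.generate(sess, prefix=prompt, length=80, return_as_list=True)
    print(texts[0])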

Having studied the text descriptions of the artworks in the Getty collection, the algorithm generated a new title and description for each new composition once users approached the screen and the parameters listed above were detected. This description was also displayed on the screen. Attendees were able to take a picture or video of the animation and description from their interaction with the piece.

There was one other component to the project. The students showed great interest in the installation being alive, changing and evolving with the users' interactions. To achieve this we designed an "idle state" animation in TouchDesigner that would come up after every interaction. Each time a user interacted with the piece, a new band would be added to the scene, and the color of the band would reflect the emotion of the user. Starting with no bands, by the end of the night we had a fully populated scene reflecting all the interactions and emotions of the evening.
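A minimal sketch of how that accumulation could be scripted in TouchDesigner (our assumption; the table name, colors and component path are invented): each completed interaction appends one row to a table, and the idle scene instances one band per row.

    # Hypothetical idle-state logic: one table row per interaction, one band per row.
    EMOTION_COLORS = {
        "joy": (1.0, 0.8, 0.2),
        "sadness": (0.2, 0.3, 0.7),
        "anger": (0.9, 0.2, 0.2),
        "neutral": (0.6, 0.6, 0.6),
    }

    def add_band(emotion):
        """Append a colored band; the idle scene instances one band per table row."""
        bands = op('/project1/idle/bands')   # assumed Table DAT driving instancing
        r, g, b = EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
        bands.appendRow([bands.numRows, r, g, b])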

CLASS CONTENT

 

Class CSMA 212 Media Environments (all development in TouchDesigner)

  • Create real-time generated graphics to be displayed across three seamless screens.
  • Calibrate and set up the video wall.
  • Generate system diagrams for the project.
  • Install the Kinect, parse the data and send it over the network.
  • Network machines and software.
  • Install software and manage media.
  • Troubleshoot, debug and optimize the installation's performance.
  • Deploy and build the installation.
  • Develop a production schedule and project management skills.

Media: 

  • Design and create animation effects for different emotions.
  • Track users and overlay animations based on location and gestures.
  • Design live modification of animations with data depending on the number of users, age, gestures, etc.
  • Display the machine learning-generated text.

 

"For the first time, I am expressing my ideas through a digital tool without being forced to express into certain ways because of the limitation of the tool itself." Student, Ka Kit Chiu

Class CSMA Artificial Intelligence

  • Manually studied the Getty's collection to obtain data from the images in their database (number of people, emotion, age, gesture, spatial position, etc.).
  • Gathered all text descriptions of the art pieces.
  • Created a new database to train the network on the Getty's art collection.
  • Worked with computer vision and facial expression detection algorithms.
  • Generated new titles and text descriptions using the data obtained from the users.
  • Wrote software to send this information to TouchDesigner.

In order to keep track of the full project we used a Trello board. You can see the different areas of the project and how students could upload their work, meet deadlines and assign each other tasks. I think it's quite interesting to be able to have a look at all the areas of a project at a glance.

Derivative: Incredibly well-organized and well-executed, but did you face any challenges during development and installation?

Ana Herruzo: We did in fact! One of the challenges was having 15+ scenes playing smoothly without encountering any performance issues. The students learned how to use the performance monitor, Probe or Anton Heestand's "cook_bar" to help check on their frame rates and cook times. We were using an NVIDIA RTX 2080 card, which made it quite easy to play heavy, real-time scenes across three screens. We were always rendering two scenes: one was used in the background and the other was used inside the silhouettes. Depending on the user, the number of people and their emotions, different scenes were triggered, so the students had to also implement that switching logic using the data that was received from PyCharm through TCP.
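As an illustration of that switching logic (a sketch under our own assumptions, not the students' file), the incoming detection data could be mapped to a scene name and used to set the index of an assumed Switch TOP:

    # Hypothetical scene-switching helper; scene names, the Switch TOP path and
    # the input ordering are illustrative assumptions.
    BG_SCENES = ["bg_joy", "bg_sadness", "bg_anger", "bg_neutral"]

    def switch(data):
        """Pick a background scene from the detected emotion and switch to it."""
        emotion = data.get("emotions", ["neutral"])[0]
        name = "bg_" + emotion
        if name not in BG_SCENES:
            name = "bg_neutral"
        # Assumed Switch TOP whose inputs follow the order of BG_SCENES.
        op('/project1/switch_bg').par.index = BG_SCENES.index(name)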

TouchDesigner was an excellent choice of software since this installation needed to run real-time interactive generated graphics, and the graphics needed to change depending on the users' emotions, ages and the number of people in the scene. We developed the machine learning portion in another application, so we used the TCP/IP DAT to send JSON messages in and out of TouchDesigner. In the end, one of the main challenges was cleaning up the noise from the Kinect's feed to create smooth silhouettes of the users - that took a bit of tweaking.
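On the TouchDesigner side, the TCP/IP DAT's callbacks can parse each JSON message and hand it to the playback logic; the sketch below assumes the hypothetical switch() helper above lives in a Text DAT named playback_logic.

    # Assumed callbacks for the TCP/IP DAT receiving messages from the Python process.
    import json

    def onReceive(dat, rowIndex, message, bytes, peer):
        """Parse one JSON message and pass it to the (assumed) playback module."""
        try:
            data = json.loads(message)
        except ValueError:
            return  # ignore malformed packets
        op('/project1/playback_logic').module.switch(data)
        return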

Derivative: Which part of this process had you not done before?

Ana Herruzo: Developing a project of this nature within an academic environment. We needed a lot of approvals, and everything is slower than in a production environment. Also, creating a piece for an event at a premier art institution in LA meant the project needed to be drafted clearly in advance in order to go through the necessary approval processes. So academic and art-institution production was a completely different scenario from what I was used to.

As for the installation itself, I had already done similar programming in a professional environment, but managing and guiding all students to collaborate and combine all the development in one TouchDesigner file was a new experience. There needed to be a lot of planning ahead to make sure that the workload for the students was not excessive, and that it could be done within reasonable weekly homework and class hours.

In TouchDesigner, it was important to teach the students how to work together on the same TouchDesigner file, and make sure that the toxes would connect to all the inputs and networking data once included in the overall playback system.

All students were starting from scratch with TD at the beginning of the semester, so there was amazing progress and a steep learning curve throughout the semester.

Derivative: Does this give you ideas for future work/exhibits/teaching?

Ana Herruzo: Actually, I’ve become quite interested in Machine Learning and creating databases in order to create new content. I will be focusing on this in my next projects. 

 

We'd like to give a very big thank you to Ana Herruzo for all the time she invested in talking to us about the Getty installation and for being such an amazing teacher and leader! For those interested in seeing it, the installation is now permanently showcased in the lobby of the Woodbury campus. The installation will also be exhibited at Woodbury's 56th Fashion Show in May 2020, taking place at the Petersen Automotive Museum in Los Angeles, CA.

 

“Hi, my name is Sungmin Lee. I just finished the second year in the Applied Computer Science - Media Arts major program.

When I was learning TouchDesigner and using it for different assignments, it felt like I was introduced to a new world, because I noticed that there are infinite possibilities for creation within the program. Also, learning TD from my instructor and other online sources helped me to think more broadly and creatively than I did before.

In the Getty project, I was in charge of implementing the Kinect and the user experience part. The Kinect part was especially challenging because I had to track only within a certain boundary, so that the Kinect only detected the people inside it and dismissed anyone outside it. Also, when I was making it, I had to consider a comfortable distance from which people could look directly at the video wall. In this process, I used various methods, like Python code, bubble sorts, and CHOPs. Later, with help from my instructor Ana Herruzo, I was able to implement the logic that solved the job; however, the other methods that I tried and failed with still made me a more experienced TouchDesigner user. In addition, it taught me that I should not be reluctant to try any possible method with the program - it has so much capability."
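As a footnote to Sungmin's description, a boundary filter of that kind could look something like the sketch below: count only the skeletons whose Kinect world-space position falls inside a comfortable zone in front of the wall. The channel names, distances and player count are illustrative assumptions about the Kinect CHOP setup, not the project's actual network.

    # Hypothetical interaction-zone filter over a Kinect CHOP.
    MIN_Z, MAX_Z = 1.2, 3.5     # assumed metres from the sensor
    MAX_ABS_X = 1.5             # assumed metres left/right of centre

    def count_people_in_zone(kinect_chop, max_players=6):
        """Return how many tracked skeletons are inside the interaction zone."""
        count = 0
        for i in range(1, max_players + 1):
            x_chan = kinect_chop.chan('p%d/head:tx' % i)   # assumed channel names
            z_chan = kinect_chop.chan('p%d/head:tz' % i)
            if x_chan is None or z_chan is None:
                continue
            x, z = x_chan.eval(), z_chan.eval()
            if MIN_Z < z < MAX_Z and abs(x) < MAX_ABS_X:
                count += 1
        return count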