Dalí Lives is unique in offering museum visitors a new way to be delighted and inspired. It is a remarkable example of art meeting artificial intelligence, and it showcases the possibilities created by this sometimes controversial technology.
We spoke to creative technologist Shan Jin of the San Francisco-based agency Goodby Silverstein & Partners (GS&P), which partnered with the museum to create the immersive exhibit. Shan Jin works at GS&P Labs, an internal innovation department at GS&P that experiments with emerging technology and collaborates with other creative departments to develop prototypes and fun experiences. Shan managed the Dalí Lives project and was responsible for the TouchDesigner development side of the implementation.
Derivative: Shan, can you describe the Dalí Lives experience for us?
Shan Jin: When you arrive at the museum, you find what looks like a free-standing door, but it's actually a person-height, vertically oriented screen. This is our first screen - the "welcome screen". Visitors first see a man in the distance, walking back and forth, holding a cane, reading a book, fixing his moustache, or painting. If they choose to press the button and interact with the screen, the man steps into the light and comes close to the visitor, and it becomes apparent that it's Dalí himself: alive and life-sized. Dalí then greets visitors and comments on current events, the day of the week, or the weather, for example.
Screen 2 sits on the second floor, where Dalí talks about his artwork, the motivations behind his work, his childhood, or his love for Gala. Dalí is just sitting there, or reading a newspaper, and suddenly begins to talk whenever the button is pressed. A new feature we're working on right now will render new videos every day with the front page of the Tampa Bay Times, so you'll always see Dalí reading the latest newspaper. We're hoping to roll that feature out in the next couple of weeks. Each visitor's interaction with Dalí might be a little different from the next, and not everyone will necessarily get the same experience. We added dynamic content and various different versions, so it feels like Dalí always has something different to say.
Image Description: The left split panel is the root path. The red node "Variable1" is a master operator (container) for the Replicator, and all the yellow containers are auto-generated (in each container Dalí does something different: talks about the weather, the temperature, the day of the week, the time of day, and so on). The right split panel is inside "Variable1". The red node "Condition1" is also a container for a Replicator, because under each variable there are different conditions (under temperature, for example, there are "warm", "mild" and "cold"), so for each generated container the conditions are also generated automatically. It reads the folder structure and does the work based on that.
Image Description: Under the container "Condition1" are the operators that do the real work, such as a container that plays different videos for the idle states and a switch between idle and content.
D: How was TouchDesigner used to bring Dalí to life?
SJ: For screens 1 and 2, TouchDesigner holds together all the face-swapped assets from the motion department. It reads the button input from serial and triggers the content. I used the Replicator so that playing the 125 videos we have stays organized: when the videos are placed in the right folder structure, all the containers are generated automatically. It also makes it really easy to update to a new version of the content - given the right folder path, TouchDesigner just picks up whatever video file is inside that folder. I actually have a Replicator inside a Replicator, because we label the content as variable - condition - idle/content, and it's very satisfying to see those containers generated recursively every time. Components also come in handy for reusing the module across the different screens.
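To make the folder-driven replication concrete, here is a minimal sketch of what a Replicator callbacks DAT could look like, pointing each generated container at its own video folder. The folder layout ("videos/Weather", etc.), the operator name "moviefilein1" inside the template container, and the one-clip-per-folder assumption are placeholders for illustration, not the exhibit's actual names or structure.

```python
# Sketch of a Replicator COMP callbacks DAT (hypothetical names throughout).
# Each replicant container is assumed to be named after the folder it represents,
# e.g. a container 'Weather' reads its clips from videos/Weather.
import os

def onReplicate(comp, allOps, newOps, template, master):
    """Called by the Replicator COMP after new replicant containers are created."""
    for c in newOps:
        folder = 'videos/{}'.format(c.name)   # assumed folder layout
        movie = c.op('moviefilein1')          # Movie File In TOP inside the template copy
        if movie is None or not os.path.isdir(folder):
            continue
        clips = sorted(f for f in os.listdir(folder) if f.lower().endswith('.mp4'))
        if clips:
            # Point the replicant's movie player at the first clip in its folder;
            # in the real project a nested Replicator handles the per-condition clips.
            movie.par.file = os.path.join(folder, clips[0])
    return

def onRemoveReplicant(comp, replicant):
    replicant.destroy()
    return
```

With this pattern, swapping in a new version of a video is just a file copy: the next replication pass picks up whatever file sits in the folder, which matches the workflow Shan describes.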
Image Description: This is the network inside corner_pinning. It loads the data from a CSV, checks whether the current frame is larger than the start frame, and loads the four corner positions based on the row number.
SJ: There are also Python scripts inside TouchDesigner to help with the dynamic content. For example, we pull from a real-time weather API so that on a rainy day you'll see Dalí walking with an umbrella, or on a Thursday Dalí will tell you the museum opens late that day. A timer triggers our logo animation every 30 seconds to prompt users to push the button, because we found out that people are really cautious about touching anything inside a museum!
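As an illustration of this kind of dynamic-content logic, the hedged sketch below picks a weather "condition" variant from an API response and switches the matching content. The endpoint URL, the JSON field names, and the operator paths are assumptions made for the example; they are not the exhibit's actual service or network layout.

```python
# Hedged sketch of weather-driven content selection inside TouchDesigner.
# Endpoint, response fields, and operator paths below are placeholders.
import json
import urllib.request

WEATHER_URL = 'https://example.com/api/current?city=st-petersburg-fl'  # placeholder

def fetch_condition():
    """Return a condition label ('rainy' or 'sunny') from the weather API."""
    with urllib.request.urlopen(WEATHER_URL, timeout=5) as resp:
        data = json.loads(resp.read().decode('utf-8'))
    desc = str(data.get('condition', '')).lower()   # assumed response field
    return 'rainy' if 'rain' in desc else 'sunny'

def update_weather_content():
    """Select the matching condition container, e.g. umbrella clips on rainy days."""
    cond = fetch_condition()
    # Assumes a Switch TOP inside the Weather container chooses between condition inputs.
    op('/project1/Weather/switch_condition').par.index = 0 if cond == 'rainy' else 1
```

In practice, update_weather_content() could be called periodically from a Timer CHOP callback, and a non-blocking request (a Web Client DAT or a background thread) would avoid stalling the frame while waiting on the network.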
Screen 3 sits in the gift shop, where Dalí takes a selfie with visitors - and they can text Dalí to get the image! For this one there was a lot of close collaboration with the motion department. We have seven different versions of the selfie, and each of the components has its own local timeline. When Dalí holds up the phone, we use corner-pinning to map the webcam feed onto the phone. The corner-pinning data is exported from After Effects; I set the local timeline's frame rate to match the frame rate of the video and lock the video playback to the timeline.
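Putting the pieces of the corner_pinning network together, the sketch below shows one way a per-frame script could look up the four tracked corner positions for the current local frame and write them to a Corner Pin TOP. The Table DAT name, column order, start-frame value, and the assumption of one CSV row per video frame (plus a header row) are illustrative, and the coordinate values are assumed to have been converted to the Corner Pin TOP's coordinate space already.

```python
# Sketch of per-frame corner pinning driven by a CSV exported from After Effects.
# Assumes a Table DAT 'cornerpin_data' with a header row, one data row per video
# frame, and eight columns: tlx, tly, trx, try, blx, bly, brx, bry (placeholders).

START_FRAME = 120   # local frame at which Dalí raises the phone (placeholder)

def onFrameStart(frame):
    table = op('cornerpin_data')
    pin = op('cornerpin1')               # Corner Pin TOP applied to the webcam feed
    local_frame = int(me.time.frame)     # the component's local timeline frame

    # Only start pinning once the tracked section of the clip has begun.
    if local_frame < START_FRAME:
        return

    row = min(local_frame - START_FRAME + 1, table.numRows - 1)  # +1 skips the header
    vals = [float(c.val) for c in table.row(row)]

    # Write the four tracked corners to the Corner Pin TOP parameters.
    names = ('topleftx', 'toplefty', 'toprightx', 'toprighty',
             'bottomleftx', 'bottomlefty', 'bottomrightx', 'bottomrighty')
    for name, val in zip(names, vals):
        setattr(pin.par, name, val)
    return
```

Because the component's local timeline runs at the video's frame rate and the video is locked to that timeline, the row lookup stays in sync with the motion-tracked footage, which is the point of the setup Shan describes.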
D: Which part of this process had you not done before?
SJ: The sheer volume of this project and the fact that it's a permanent installation in a museum is something I'd never done before; my past experience focused more on events. Managing the project and developing at the same time was also a really big challenge for me: I needed to talk to ten different people every day while trying to focus on software development. It was a tricky balance between what I tried to delegate to other people and what I wanted to take on myself. Time-management-wise, towards the end of the project there was a lot more fine-tuning on the design side than I expected, and it was hard to strike a balance between creative and engineering. Shall we keep making changes? Or shall we stop development now and focus on testing? I think that's always something for a tech lead to think about when managing a project.
Image Description: This is a screenshot from the selfie screen. The component has its own local timeline, and the red nodes here are all Logic CHOPs that trigger different events based on the timeline (events like when the LED turns on, when to corner-pin the camera feed onto the screen, and when to upload the image and get a code from the server).
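For the event triggering that the Logic CHOPs drive, a CHOP Execute DAT watching those channels might look roughly like the sketch below. The channel names, operator names, serial message, and the upload script are hypothetical placeholders; the sketch only illustrates the rising-edge pattern the screenshot describes, not the exhibit's actual event code.

```python
# Hedged sketch of a CHOP Execute DAT reacting to Logic CHOP channels on the
# selfie screen. All channel/operator names and messages are placeholders.

def onOffToOn(channel, sampleIndex, val, prev):
    if channel.name == 'led_on':
        # Tell the microcontroller driving the light to switch on.
        op('serial1').send('LED_ON\n')
    elif channel.name == 'capture_selfie':
        # Save the composited selfie frame, then hand it to a script that
        # uploads it and retrieves the code the server texts to the visitor.
        path = 'selfies/selfie_{}.jpg'.format(int(absTime.frame))
        op('selfie_composite').save(path)   # TOP.save() writes the image file
        op('upload_script').run(path)       # hypothetical upload/texting script
    return
```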
D: How did you first encounter TouchDesigner and what do you mostly use it for?
SJ: I was working at the design agency Fake Love before joining GS&P. Mary Franck, who was a tech lead there, introduced me to TouchDesigner. (Funny, I just found out she's one of the presenters at your Summit!) I was just so surprised by how fast it is to prototype in TD rather than writing code from scratch. It's really robust and great for production as well. I use it mostly for graphics, which is my main interest. I like that it's so easy to use and that each parameter changes the output so directly, which makes it really straightforward and helped me understand the higher-level concepts of graphics and the rendering pipeline. Basically, I entered the world of graphics with the help of TouchDesigner, and now I can carry that knowledge into other coding languages.
D: Can you tell us a bit about your background, experience and your focus as a creative technologist?

SJ: I majored in Software Engineering in college and did my postgrad at New York University's Tisch School of the Arts. They have a really fun program called ITP which brings together students from different backgrounds (mostly art, design and tech), and we learned a lot from each other, developing new skills we didn't possess. My main focus is interactive installations. Looking back, I feel I'm deeply influenced by an article that was the first reading in the first class I took at ITP. There's so much potential in what our hands and bodies can do, so why do we limit ourselves to just tapping and swiping screens? In my past five years as a creative technologist I've always tried to make something that's about more than a touchscreen.

D: What are some of the tools you use and projects you like to work on?

SJ: Like most creative technologists I use a bit of everything, but the ones that are most handy for me are C++ (openFrameworks), TouchDesigner, shaders, and the web (Python, HTML/JavaScript). My portfolio is here, and some past projects include:
Magic Mirror, or Oubliette, is a box that brings back the past. We made a box with a camera inside running object detection, and there is a selection of objects around it, like an iPod, a Walkman, a CD, a cassette, an 8-track and so on, each an iconic music object of its decade. Once an object is put inside the box it is recognized, and related images are pulled in real time from Google Images and shown on a transparent screen. So, for example, putting a Walkman into the box would bring up 90s culture, fashion and TV shows. We felt it was kind of nostalgic and also showed the current Internet landscape.
We hacked a pinball machine for Panorama Music Festival, installing a bunch of sensors inside it to track the game and generate visuals based on the gameplay. There was also an in-store sculptural installation for the cosmetic brand NYX's first store in New York City, at Union Square. It's made of a lot of tablets and phones, with a Kinect that picks up the colors of what people are wearing, and we pull images from NYX's social network based on the color match.
My thesis project, Muggles' Pensieve, borrows a reference from Harry Potter, where witches use a magical receptacle called a Pensieve to look at their memories. I was trying to recreate this experience for the modern age. An app would scan your photos, get a list of all the places you've been to, and project them as a circle around the outside of a bowl of water. As you shook your phone, your photo "fell" into the bowl, and you could then slide the knobs to choose different locations and see your memories.
D: What projects are coming up next?
SJ: I'm currently working on the new feature (Dalí reading the daily newspaper) and some improvements to the exhibit based on the museum's feedback. After that I'm trying to finish up an internal project for the company and do more graphics sketches using TouchDesigner. Unfortunately I can't say too much about projects/prototypes that are not yet public, but recently we made Herstory and Cheetos Vision.
Client
Hank Hine | Executive Director
Kathy Greif | Chief Operating Officer
Beth Bell | Marketing Director
GS&P
Jeff Goodby | Co-Chairman/Co-Founder
Roger Baran | Creative Director/Director
Nathan Shipley | Technical Director/Creative Director/Director
Otto Pajunk | Copywriter
Ricardo Matos | Art Director
Margaret Brett-Kearns | Co-Director of Production
Severin Sauliere | Producer
Tena Goy | Digital Executive Producer
Troy Lumpkin | Director Creative Technology
Shan Jin | Creative Technologist/Lead Developer
August Bjornberg | Creative Technologist
Matt Chiang | Junior Creative Technologist
Amanda Steigerwald | Line Producer
Michael Miller | Director of Photography
Emilio Diaz | 2nd Camera Operator
Andrew Butte | 1st Assistant Director
Luke Dillon | Executive Producer, E-Level
Steven Castro | Editor
David Michel-Ruddy | Sound Design/Mixing
Dave Baker | Sound Design/Mixing
Zachary Seidner | Motion Graphics
Anthony Enos | Compositing/VFX
Casey O’Brien | Account Manager
Casey Cooney | Assistant Account Manager
Meredith Vellines | Director of Communications
Sam Luchini | Creative
Thinking Box | Fabrication