Derivative: Emmanuel, could you share a bit about your background and how you first discovered TouchDesigner—what led you to learn, use, and ultimately teach it? How has that experience been so far?
Emmanuel Lugo: I began learning TouchDesigner about two years ago. I had mainly heard of it as a real-time Kinect tool and wanted to do some audio-reactive depth work with it. I learned almost entirely on my own time and on YouTube. I completed any tutorials I could wrap my head around, and toward the end of my undergrad, opportunities to use TD for experimental animation in class became abundant. TouchDesigner has quickly become my Swiss Army knife, and over time I just became the person around campus who knew how to use it.
Picking up TouchDesigner became a gateway to GLSL, Python, and JavaScript programming.
If someone were to ask my background, I would say I am an animator turned creative coder, and I have the TouchDesigner community to thank for that. I picked up teaching by first helping workshop attendees at the SudoMagic visit to the STUDIO for Creative Inquiry. Having worked closely with Golan Levin for a number of years as a student, in Fall 2023 he offered to have me as his TA, specifically focused on teaching TouchDesigner in the final unit. I have since been able to run some form of workshop, unit, or lesson on the basics of TouchDesigner every semester. I love teaching TouchDesigner! I am constantly finding small tips and tricks, and the more I help folks with specific fixes, the more .toxes I create that are helpful for my own workflow. Teaching this beginner TD course has inspired me to do daily work on my Instagram and to keep exploring the toolkit.
Derivative: How did you come to teach TouchDesigner at CMU, and how are you incorporating it into the Creative Coding class you teach with Professor Golan Levin?
Emmanuel Lugo: I am a CMU alum and a teaching assistant in the School of Art, where I teach Creative Coding with Professor Golan Levin, a course taught almost entirely in p5.js. The class is geared toward sophomore and junior students in the Art program, but also includes our Computer Science/Art and Design students. Rather unconventionally, Golan lets me steer the ship for the last unit of the course: I give a few lectures and homeworks, and the final project is a TouchDesigner intensive.
I officially introduced the unit with an overview lecture, having assigned the first 10 topics of the TouchDesigner Curriculum 101 Course as homework beforehand. This all led up to a final "interactive environment" with its crit on 12/10. The assignment was to create a TouchDesigner sketch that utilized external data, be it computer vision, OSC, MIDI, MediaPipe, or audio. There were a number of check-in stages where I consulted individually with students on the state of their projects.
Derivative: What was your experience using the TouchDesigner Curriculum and how did you use it in your teaching?
Emmanuel Lugo: The 101 Course was the part of the curriculum I relied on the most, as I had three weeks in total (11/20-12/10) to get students up and running. In that time I had to cover the basics of manipulating TOPs and the rendering pipeline. I knew I could only spend so much in-class time lecturing on these topics, so before my section of Creative Coding began, I had students watch the first 10 videos of the 101 series and submit a screenshot of the completed coursework, while encouraging them to go further into the 100 Course. This took a lot of weight off my shoulders: I spent much less time discussing how the runtime operated and could move straight to generating graphics.
The 100 Course was also an easy reference for how individual operators worked; relying on Matt and Zoe to explain an individual operator's use case made things very streamlined.
I didn't really get the chance to use the 200 Level Course as a reference, but I would love to in the future (especially since I chose to teach certain types of signal manipulation in my own words). Overall, the 100 Level Course allowed me to take a group of students from zero to using the software very quickly.
I have taught similar sections of this course in the past with TouchDesigner, each time focusing on audio reactivity. Previously, while each person's project was distinct, I could rely on the inner workings to use the same "audio analysis" component. This time I wanted to challenge myself by having everyone build a bespoke workflow. I ended up doing two days of direct tutorials and lectures on CHOP, TOP, and SOP basics. When students got back from Thanksgiving, we moved straight into 1:1 meetings, where each person was given individual advising time to outline the inputs they wanted to modify or the effects they sought to generate. For this section of the class I relied HEAVILY on Torin Blankensmith's tools. The MediaPipe .tox became a frequent go-to, and I spent a lot of time compiling YouTube resources for websockets, OSC, Python, point clouds, etc. On top of this, I made a number of small .toxes to solve individual problems for students and recorded myself using them (like creating a transformation matrix for Owen's box of responsive water).
Derivative: From your experience teaching TD so far what are some ideas for the next sessions?
Emmanuel Lugo: Regarding plans for the future, I would love to teach a workshop on time, motion detection, and frame differencing as the basis for an environment or piece. If I had a group for the same amount of time (~3 weeks) and could focus a bit more on a single topic, I would stay entirely within TOPs and teach more about how the Slope, Time Machine, GLSL, and Cache TOPs can activate their movement pieces all on the GPU.
TouchDesigner greatly accelerated my understanding of texture space and operations, and I would like to teach something that stays within the 2D realm.
The goal would be to create a reactive-imaging system. Additionally, TD removes a lot of the overhead of real-time motion extraction and makes it easier to use those metrics outside of TD. I have since taught a workshop where students used TD only for its fast OpenCV implementations, porting things like Blob Track values out to Unity. I love using TD as a place for generating visuals and generative art, but I would love to teach TD as more of a system-logic tool that can integrate into their other media arts workflows.
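To give a sense of what frame differencing actually computes: in TouchDesigner this would typically be a Difference TOP followed by a threshold, running on the GPU. Below is a rough CPU sketch in NumPy for readers unfamiliar with the idea; the function names are illustrative, not part of any TD API, and the single "motion amount" metric is the kind of value one might export over OSC.

```python
import numpy as np

def frame_difference(prev, curr, threshold=0.1):
    """Per-pixel absolute difference between two grayscale frames,
    thresholded into a binary motion mask (1.0 = motion)."""
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    return (diff > threshold).astype(np.float32)

def motion_amount(mask):
    """Fraction of pixels that changed: a single 'activity' metric."""
    return float(mask.mean())

# Two tiny synthetic 4x4 "frames": one bright pixel appears
prev = np.zeros((4, 4), dtype=np.float32)
curr = prev.copy()
curr[1, 2] = 1.0

mask = frame_difference(prev, curr)
print(motion_amount(mask))  # 1 of 16 pixels changed -> 0.0625
```

The same comparison against an older frame (what a Cache TOP makes easy) gives slower, smeared motion trails instead of instantaneous change.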
Derivative: For this assignment what was the students' design brief?
Emmanuel Lugo: The brief was to create a TouchDesigner patch that dynamically generates visuals in response to an external input. This input could be anything from a camera feed or computer vision data to OSC/MIDI signals, an Arduino sensor, Google MediaPipe body-tracking, audio, live screen captures, web pages, or even keyboard input. Their goal was to harness TouchDesigner’s real-time capabilities to transform incoming data into a captivating, interactive graphic display while exploring the endless possibilities of mapping data to visual output.
Below are eight student patches presented as video clips, each offering a unique and engaging response to the original brief. The students were asked two questions alongside their projects:
- Technical Explanation: What does the piece do, how does it work, and which nodes are used?
- Learning Experience: How did they find learning TouchDesigner, what was the class like, how did they use the TD curriculum page, and what prompted their creative choices?
Technical Explanation: Using the MediaPipe plugin and Point File In nodes, I made an interactive point cloud representation of my room that beats like a heart. Through hand gestures, the viewer can change the behavior of the points and sound, allowing them to conduct what the work depicts—room, feeling, or something in between. When viewed through red-cyan 3D glasses, the form pops off the screen.
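The red-cyan effect described here is classic anaglyph compositing: two slightly offset camera views packed into different color channels, which the tinted glasses then separate for each eye. A minimal NumPy sketch of the channel packing (in TouchDesigner this would typically be done by combining channels of two Render TOPs; the function name here is illustrative):

```python
import numpy as np

def anaglyph(left, right):
    """Pack two grayscale views into a red-cyan anaglyph image:
    left view -> red channel, right view -> green + blue (cyan)."""
    h, w = left.shape
    out = np.zeros((h, w, 3), dtype=left.dtype)
    out[..., 0] = left   # red   = left-eye view
    out[..., 1] = right  # green = right-eye view
    out[..., 2] = right  # blue  = right-eye view
    return out

# Two flat dummy "views"; in practice these are renders from two
# cameras offset horizontally by a small interocular distance.
left = np.full((2, 2), 0.5, dtype=np.float32)
right = np.full((2, 2), 0.25, dtype=np.float32)
img = anaglyph(left, right)
print(img[0, 0].tolist())  # [0.5, 0.25, 0.25]
```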
Learning Experience: At first it was quite intimidating getting into TD because the workflow was so different from anything I knew, but spending so much time with it in class reinforced the idea that it is learnable like any other language.
It was really exciting when the nodes became familiar and I started understanding how they could fit together; it felt like seeing and thinking in a new way.
This piece was an exploration of that new understanding.
Technical Explanation: My goal for WaterBoxTwinning was a controllable, virtual water surface. Instead of having the user puppeteer the water with a computer mouse, I made a specialty controller to enhance the user's engagement. This controller contained and monitored the water level of a real body of water, and a matrix of sensors relayed this data in real time via Arduino to TouchDesigner, where the virtual water surface was generated.
Learning Experience: The primary challenge of this project was connecting the physical world to a virtual rendering while simultaneously using multiple products. TouchDesigner's unique "sandbox"-style platform, however, enabled me to integrate a wide range of products, which was incredibly helpful. I was also able to achieve this with Emmanuel Lugo's help (he showed me how to interpolate in 3D to create many boxes from only 9 defined points).
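The interpolation trick mentioned above can be sketched as plain bilinear interpolation: nine measured heights (a 3×3 sensor grid) are enough to drive a much denser grid of box instances. A rough NumPy sketch under that assumption; the function name is illustrative, and in TouchDesigner this would more likely live in CHOPs or a GLSL TOP feeding instancing:

```python
import numpy as np

def upsample_grid(control, n):
    """Bilinearly interpolate a 3x3 grid of sensor heights
    to an n x n grid of box heights."""
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Map output coords into the 0..2 control-grid space
            u = i * 2.0 / (n - 1)
            v = j * 2.0 / (n - 1)
            i0 = int(min(u, 1.999))  # control cell index (0 or 1)
            j0 = int(min(v, 1.999))
            fu, fv = u - i0, v - j0  # fractional position in the cell
            out[i, j] = (control[i0, j0] * (1 - fu) * (1 - fv)
                         + control[i0 + 1, j0] * fu * (1 - fv)
                         + control[i0, j0 + 1] * (1 - fu) * fv
                         + control[i0 + 1, j0 + 1] * fu * fv)
    return out

# Nine measured water heights -> a 5x5 surface of box heights
sensors = np.array([[0., 1., 0.],
                    [1., 2., 1.],
                    [0., 1., 0.]])
heights = upsample_grid(sensors, 5)
print(heights[2, 2])  # centre of the surface -> 2.0
```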
Technical Explanation: I developed an interactive environment that dynamically morphs in response to rotational, acceleration, and directional OSC data. Utilizing the Bullet Solver COMP, Flex Solver COMP, and SOPs, I created a physics simulation featuring rigid-body collisions between boxes and a sea of spheres. Through this project, I aimed to explore the movement of mundane daily objects, reimagining their tangible behaviors and interactions. Made with the help of Seskamol's Bullet Solver and Flex Solver COMPs.
Learning Experience:
Learning TouchDesigner felt incredibly intuitive, as the node-based workflow made it seem like I was assembling visual building blocks.
The ability to preview outputs directly on each node offered instant feedback and made it easy to understand how changes impacted the system. I thoroughly enjoyed experimenting with different node values to explore various possibilities and outcomes. The pre-built components were particularly useful in helping me grasp how different nodes functioned, allowing me to focus more on creativity and experimentation.
Technical Explanation: I created a TouchDesigner project that dynamically changes colors based on user input. It uses computer vision techniques to locate a color wheel and a pen on that wheel, so the user can navigate this space in real life to choose a color. I followed a YouTube tutorial by Lake Heckaman on caustics (naturally occurring interference patterns usually caused by waves) to create a GLSL shader in TouchDesigner that uses these interference points and emulates a real surface, refracting light at each pixel location. Then I integrated my p5.js sketch with TouchDesigner using the WebSocket DAT. Another tutorial by Torin Blankensmith really helped me create a secure connection between the two ports in a way that facilitated live changes on both ends. Finally, converting the data being read between the servers from JSON to CHOPs and DATs allowed me to change the parameters of the Color op in the Geo COMP, rendering my caustics with dynamic color shifts in real time! (Made with the help of Lake Heckaman's GLSL caustics tutorial.)
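The JSON-to-parameter step described here can be sketched in a few lines of Python. The message shape below is an assumption for illustration, not the student's actual schema; in TouchDesigner, logic like this would typically run inside a WebSocket DAT callback and write the result into operator parameters.

```python
import json

def parse_color_message(message):
    """Parse a JSON text message (e.g. sent from a p5.js sketch over
    a WebSocket) into normalized r, g, b values.
    The {"r": ..., "g": ..., "b": ...} shape is assumed here."""
    data = json.loads(message)
    # p5.js colors are commonly 0-255; TD parameters are usually 0-1
    return {ch: data[ch] / 255.0 for ch in ("r", "g", "b")}

msg = '{"r": 255, "g": 128, "b": 0}'
color = parse_color_message(msg)
print(color["r"])  # 1.0
```

Going the other way (CHOP channels back out to the p5.js sketch) is the mirror image: serialize the values to JSON and send them as text over the same socket.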
Learning Experience: Learning TD in class was definitely a unique experience as I have never used a similar workflow before, but with the help of my amazing TA and Professor, the TD curriculum page, and a lot of Youtube tutorials I was able to create some projects that are really exciting for me.
I am interested in the metaphorical relations between UX design spaces and real life design, and how to subvert these expectations.
Instead of bringing a well-known design mechanic into the digital world, I wanted to bring color wheels (which are most dynamic in the digital space, as opposed to paint palettes in the real world) to reality through a "real-life color picker tool". Later on, I hope to experiment with other patterns while keeping my color-changing algorithm, and to play with TouchDesigner's sliders and select options so the user can change each RGBA parameter more dynamically.
Technical Explanation: For my TouchDesigner project I wanted to reference Tribal, a club music genre and subculture popular in Mexico, particularly Monterrey, in the early 2010s. In addition to club music that fused cumbia with techno and EDM, there was a DIY culture of customizing boots with extremely elongated toes as a form of self-expression. This project harkens back to that subculture by imposing digital, audio-reactive pointy boots on the person in view. I used MediaPipe pose tracking in conjunction with TouchDesigner's built-in audio processing and bespoke 3D models to create the project.
Learning Experience: The intuitive, plug-and-play nature of TouchDesigner made learning the software feel more like opening doors than scaling walls.
It invites exploration and meandering rather than rigid planning on how to accomplish a structured task, making it a blessing for creatives of all disciplines.
It was an awesome experience working with Emmanuel and Golan, and I'd like to further incorporate TouchDesigner into my practice in the future.
Technical Explanation: This TouchDesigner project explores the loss of conversational cues in digital spaces, inspired by the limitations of text-based communication with AI. Using MediaPipe hand recognition, the interaction detects the "shushing" gesture, condensing free-flowing text scattered dynamically across the screen into silence, reflecting on what it means to digitally suppress conversation. By focusing on visualizing dialogue through motion and gestures, the piece becomes a reflective tool for processing thoughts, allowing text to grow or be silenced interactively.
Learning Experience: Learning TouchDesigner, while daunting at first, truly became an intuitive process. I think the fundamentals of understanding how data comes in and how it comes out made the tool of TD all the more useful as a way to "mediate the inbetween."
It is a tool for happy accidents, and the art came from simply distorting different CHOPs and SOPs between the input and output of the pipeline.
Technical Explanation: I was really interested in system processes and exposing what my computer was doing under the hood. I wanted it to serve as an alternate visualization to the Activity Monitor/Task Manager and be something that actually looks nice while still providing some information. The project is more of an idea or suggestion to what a final product could look like.
Learning Experience: Having such an open-ended project let me really home in on what I was interested in. I enjoyed learning on the go and solving problems as they arose, rather than trying to learn a giant system all at once, and I think TD really lets us tinker. I definitely enjoyed it more than a guided project, though having last year's experience helped a lot with being familiar with the interface.
Technical Explanation: I made a karaoke program that displays audio visuals for the background music and for the person singing, in traditional karaoke style. My goal was to create a visual in which one could see both the music and the vocal input at the same time, yet be able to tell the two apart, and I think I mostly achieved this goal. Having the vocal layer in front and partly transparent seemed to work well. I could see this being used at actual karaoke nights and parties as a nice display of colors and visuals.
Learning Experience: Learning TouchDesigner in class was a nice change of pace. It was interesting to see all of the effects (both visual and audio) that we could implement. It wasn't too hard to get used to the interface, and I took a liking to visual audio. For the project, I wanted to do something different and thought of doing either a music or a vocal input, so I decided to do both. Thus, the idea for karaoke was born. Overall, it was a great unit in our class.
Follow Emmanuel Lugo: Website | Instagram
Download Mediapipe Plugin for TouchDesigner