
TouchDesigner Performance Interfaces

We are continually impressed by how many of you have created your own applications in TouchDesigner to operate and perform live shows. Large and small, these represent a tremendous assortment of personalized tools made with TouchDesigner.

Whether for video playback, generative visuals synchronized to audio, scripted shows driven by a cue list, improvised performances or for lighting control systems, we wanted to highlight the impressive range and variety of performance interfaces that have been designed.

So a few months ago we put a call out to the community with a few questions, and what follows below is a wonderful set of performance tools.

Mary Franck's Rouge

About Mary

I'm a new media artist, always moving between technical and aesthetic work. I studied Conceptual and Information Art at SFSU, after which I worked for three years at Obscura Digital. There I made custom projection mapping and show software in TouchDesigner for projects such as the Sheikh Zayed Grand Mosque illuminations and the YTSO Sydney Opera House projections. Recently I have been making laser-cut projection sculptures and doing live visuals for folks like the Do LaB and CCRMA. I occasionally teach TouchDesigner workshops: I enjoy enabling other artists and building community. I'm a geek: I love solving problems, but my real passion is making strange and beautiful things. Moving between installation and performance I create intimate, visceral experiences that stir the non-rational aspects of the human mind. These works instantiate new symbols and metaphors for visceral emotion in the era of the rational, binary machine.

D: Why did you build your own tool?

Mary Franck: This is a formalized version of a performance tool I have been developing for years. My first TouchDesigner-based shows and performances were custom TouchDesigner programs with each cue or effect individually mapped to my controller. It was time-consuming and not very reusable. I was determined to make a more flexible, reusable tool and set my sights on a VJ paradigm. I wanted to make generative and controllable 3D modules, and I launched the first iteration of this at Public Works for Nosaj Thing. Since then I have abstracted and codified this tool into Rouge, a live video performance tool and programming framework. I give it to my students so that they have something to start working with out of the box and an example of a complete TouchDesigner program.

D: Explain the idea of what you were trying to accomplish.

MF: This is a tool for live visuals performance and also a programming framework. I wanted to accomplish three things: first, to have my controller and output already neatly set up. There's no sense in redoing that every time. Second, I wanted to be able to have performable 3D modules. This means that I can perform something along with a dancer, set something on auto mode for an installation or make something audio reactive for an AV set; it's just a matter of which controls I hook up in the module. Third, I wanted something simple that I could teach with.

D: What did you learn from building your tool and how would you do your next-gen?

MF: This tool is already several generations in. I keep the trunk of this code clean and fairly generic, and for a particular show or installation I'll make modules that are specific to that show or set of problems. It has really shortened my development time, and allowed me to focus on making gorgeous performable content rather than the nuts and bolts of the setup.

D: From idea to product how did it change? Happy accidents, "wow" moments?

MF: Since this idea was based on years of making performance tools for myself and Obscura Digital, I had a very clear idea of what I wanted to do and it didn't change very much.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

MF: Since much of the intent with this tool is simplicity, it was not very difficult. The UI programming is not as simple or easy to read as I would like, but that was a necessary trade-off to make it easier to use. The UI is what I am still refining.

D: How many iterations are you at?

MF: I have taught this in two workshop sessions (18 people total), used it for more than a dozen nights of shows, and released Rouge to the TouchDesigner community in June.

Legend

  • Compositions: This is the heart of the software: generic controls for 3D modules I call compositions. I can parameterize the composition, run it off audio, and swap it out for other modules with the drop down menu.
  • MovieBin: Markus's clip player, with some small performance tweaks.
  • Audio In: A preview of the audio in to make it easier to adjust the levels.
  • Filters: Like the compositions, the filters are modules that can be switched out. The intensity "I" controls how much they are applied, FX alters the filter settings, and A is the channel alpha.
  • Output Preview: What I try to make look good.
  • Composite: Control for how the channels are composited together.
  • Control: The meta controls, final image, and audio controls.
  • Output Controls: Easily lets me change monitor or output resolutions and use cornerpinning or Kantan Mapper to set up my output.

Keith Lostracco's FragTool

About Keith

In 2002 I designed and built a three-room recording studio near Nelson, BC. At that time I also started to play drums and bass and began to produce electronic music with my brother Greg. Around 2006 I started getting into 3D animation (Maya) and doing live visuals for local festivals and shows and for my brother's live electronic act Psudaform (soundcloud/psudaform). In 2008, after using Max/Msp at a live event, I was told about TouchDesigner and soon after began using TouchDesigner exclusively. In 2012 my brother and I started Trypta, an audio-visual act intended to blend the mediums of live audio (live PA and dance performance) and synchronized live visuals with the feel and presence of a movie while using the audience in the performance - it's hard to describe in words!

FragTool has turned out to be a key element in Trypta's visuals. Using TouchDesigner to build FragTool has enabled us to create 3D fractal animations in time with music in a way that was not previously possible with other software.

With FragTool I created a component to simplify slider creation, naming, type, functionality and ranges, with the ability to receive preset changes while having as little impact on performance as possible. There are two main types of sliders: integer or float. The first type is just a value slider; the second is a value/mod slider with a button on the right that opens another two sliders and a drop-down menu for selecting a mod source and adjusting the gain and offset of that source.

Inside the component that contains each group of sliders is a table with all of the attributes of the sliders, and a couple of Replicator components create the sliders dynamically when a new parameter is added to the table. This is the beginning of a dynamic menu: eventually there will be a table for each fractal type and a switch that selects the appropriate table when a fractal type (or render engine) is selected; in turn, the replicator will recreate all the sliders.
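
As a rough illustration of the table-driven idea (plain Python, not Keith's actual component; the table columns and names below are invented), each row of a slider-definition table becomes one slider configuration, and in TouchDesigner the Replicator COMP then builds one slider per row:

    # Hypothetical sketch: turn rows of a slider-definition table into slider configs.
    # Column names and values are made up for illustration; a Replicator COMP would
    # build one slider UI component per row from a table shaped like this.

    SLIDER_TABLE = [
        # name,        type,    min,   max,   modulatable
        ("Iterations", "int",    1.0,  60.0,  False),
        ("Scale",      "float", -3.0,   3.0,  True),
        ("Julia",      "float", -2.0,   2.0,  True),
    ]

    def build_slider_configs(rows):
        """Return one config dict per table row; mod-capable sliders get extra fields."""
        configs = []
        for name, kind, lo, hi, modulatable in rows:
            cfg = {"name": name, "type": kind, "range": (lo, hi)}
            if modulatable:
                # value/mod sliders expose a source menu plus gain/offset controls
                cfg.update({"mod_source": None, "mod_gain": 1.0, "mod_offset": 0.0})
            configs.append(cfg)
        return configs

    for cfg in build_slider_configs(SLIDER_TABLE):
        print(cfg)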

There are over 100 parameters controlling the fractal and the other 4 pages: all your standard fractal, coloring, lighting and rendering parameters.

The controls on the top left area are for presets. The presets recall everything - animation keyframes and channels, mod sources/settings, and all other settings. Every parameter is saved out to a DAT table when storing a preset and scripts update all the UI elements back to their original settings.

FragTool - Legend Part I - Left Side (as pictured above)

1. Formula Settings

These parameters control the shape of the fractal. At the top the specific formula is selected, triggering a sequence of actions that automatically create a new set of sliders. Some settings common to most fractals are Iterations (the number of times the formula is iterated), Scale (the scale of the amount of change in each iteration) and Julia (the distance of the iterations in space, which in turn changes the shape).
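
For readers new to distance-estimated fractals, here is a plain-Python sketch of one well-known formula, the Mandelbox, just to show roughly what Iterations, Scale and a Julia-style offset control. FragTool's own formulas live in GLSL shaders and are not reproduced here; treat this only as an illustrative approximation.

    import math

    def mandelbox_de(p, iterations=12, scale=2.0, julia=None,
                     min_r2=0.25, fixed_r2=1.0):
        """Approximate distance estimate for a Mandelbox at point p = (x, y, z).
        'julia' fixes the per-iteration offset; None uses p itself."""
        offset = julia if julia is not None else p
        z = list(p)
        dr = 1.0                                  # running derivative for the estimate
        for _ in range(iterations):
            # box fold: reflect components back toward [-1, 1]
            z = [max(min(c, 1.0), -1.0) * 2.0 - c for c in z]
            # sphere fold: push small radii outward, invert mid radii
            r2 = sum(c * c for c in z)
            if r2 < min_r2:
                t = fixed_r2 / min_r2
            elif r2 < fixed_r2:
                t = fixed_r2 / r2
            else:
                t = 1.0
            z = [c * t for c in z]
            dr *= t
            # scale and translate by the (Julia) offset
            z = [c * scale + o for c, o in zip(z, offset)]
            dr = dr * abs(scale) + 1.0
        r = math.sqrt(sum(c * c for c in z))
        return r / abs(dr)

    print(mandelbox_de((0.4, 0.2, 0.1)))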

2. Render Settings

These settings for the render engine control the amount of detail, render quality and camera lens depth-of-field. This uses a trick I came up with recently to blur parts of the image based on distance. The distance is rendered out as a depth map, which is fed to the Luma Blur TOP along with the RGB rendered image, giving the effect of certain areas being out of focus. This technique can be used with Render/Depth TOPs as well.
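
Here is a hedged NumPy sketch of the idea behind that trick (the real network simply wires a depth render into the Luma Blur TOP; the function and parameter names below are invented): pre-blur the image once, then blend sharp and blurred versions per pixel according to how far each pixel's depth is from the focus distance.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def depth_of_field(rgb, depth, focus=0.5, max_sigma=8.0):
        """rgb: HxWx3 float image, depth: HxW floats in 0..1 (as from a depth pass)."""
        # per-pixel blur amount: 0 at the focus distance, 1 at maximum defocus
        amount = np.clip(np.abs(depth - focus) / max(1e-6, max(focus, 1.0 - focus)), 0.0, 1.0)
        blurred = np.stack([gaussian_filter(rgb[..., c], max_sigma) for c in range(3)], axis=-1)
        # blend sharp and blurred per pixel, like a luma-controlled blur
        return rgb * (1.0 - amount[..., None]) + blurred * amount[..., None]

    # toy usage with random data
    img = np.random.rand(64, 64, 3)
    dep = np.random.rand(64, 64)
    print(depth_of_field(img, dep, focus=0.4).shape)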

3. Camera Settings

Position and rotation of the camera, as well as some controls for a 3Dconnexion 3D mouse that is connected to the component via the Joystick CHOP. This allows realtime navigation in the scene using 6 degrees of freedom.

8. Presets, Outputs, Animation, Export, Resolution

Presets can be saved and recalled - all parameters, LFOs, mods, and animation channels and keyframes. You can open the animation editor to animate any parameter, or open the export movie dialog to export non-realtime renders. Resolution Multiply downscales the resolution in order to work at better framerates, or upscales it for final renders.

9. Timeline and Tempo Settings

BPM, time signature, range Start, range End and timeline End inputs control the current range and animation length, all based in bars at the current tempo.
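
The bar-based ranges reduce to simple arithmetic once converted to timeline frames. A minimal sketch, assuming a constant tempo and a 60 fps timeline:

    def bars_to_frames(bars, bpm=120.0, beats_per_bar=4, fps=60.0):
        """Convert a length in bars to timeline frames at a constant tempo."""
        seconds = bars * beats_per_bar * (60.0 / bpm)
        return round(seconds * fps)

    # e.g. a 16-bar range at 140 BPM, 4/4, 60 fps
    print(bars_to_frames(16, bpm=140.0))   # ~1646 frames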

10. Bottom Scroll bar

This is linked to the TouchDesigner timeline and has markers for bar position and a cursor to scrub or choose playback position.

FragTool - Legend Part II (right side above)

4. Color Settings

These settings control the color of the fractal. There are a few different types of shape coloring as well as some background options. The shape can be all one color or it can use "Orbit Traps". Orbit Traps color the fractal based on the distance from the center of the iteration as well as the direction from the center. This enables different areas of the fractal to be different colors (there are many types of Orbit Trap formulas, which are calculated in the formula area of the shader). Glow can be set here, as well as the background color and a background texture that can also drive an environment map for lighting.
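
To make the Orbit Trap idea concrete, here is a plain-Python 2D example (the real calculation happens in the GLSL formula code for the 3D fractal): during iteration, the closest approach of the orbit to a trap point is tracked and later mapped to a color.

    def orbit_trap(c, trap=0.0 + 0.0j, iterations=64):
        """Return the closest distance the orbit of z**2 + c came to the trap point.
        Smaller values -> the orbit passed near the trap -> a different color band."""
        z = 0.0 + 0.0j
        closest = float("inf")
        for _ in range(iterations):
            z = z * z + c
            closest = min(closest, abs(z - trap))
            if abs(z) > 4.0:       # escaped
                break
        return closest

    # points whose orbits pass near the trap get low values
    for point in (0.0 + 0.0j, -0.75 + 0.1j, 0.3 + 0.5j):
        print(point, round(orbit_trap(point), 4))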

5. Lighting Settings

These parameters control the lights, reflections and shadows. In this particular render engine there is 1 spotlight (specular and diffuse), 1 camera light (ambient light), ambient occlusion (because of the nature of distance estimation, real ambient occlusion is calculated almost for free), and an option for an environment map. There are also controls for shadows (soft and hard), reflections and fog.

6. LFO Settings

Here up to 10 LFOs can be activated (used for modulation). They have the standard waveforms (Sine, Triangle, Saw, Square), the period and phase can both be adjusted, and there is a unique parameter called "Hold". Hold scales up the waveform and then clamps it between -1 and 1 (or 0 and 1 for a Saw wave). This creates a waveform part-way between a Sine/Triangle wave and a Square wave. I've found this really useful because a parameter will move smoothly from one position to another and will hold that position for a moment (depending on the period).
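
The Hold behaviour is easy to express as a formula: scale the waveform up, then clamp it. A small sketch with my own naming, not FragTool's internals:

    import math

    def lfo_with_hold(phase, hold=1.0):
        """Sine LFO with a 'Hold' control: hold=1 is a pure sine,
        larger values flatten the peaks toward a square wave."""
        value = hold * math.sin(2.0 * math.pi * phase)
        return max(-1.0, min(1.0, value))   # clamp to [-1, 1]

    # hold=4 spends most of the cycle pinned at +/-1, easing between them
    for i in range(8):
        print(round(lfo_with_hold(i / 8.0, hold=4.0), 3))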

7. Mod Matrix

This is a new way of assigning modulation sources to parameters. The mod source is selected (LFO, audio, OSC, spectral band, etc...) and then a destination is selected (any parameter). Then there are gain and offset controls for each mod, and at this moment up to 20 parameters can be modulated.
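
A minimal sketch of what one mod-matrix routing amounts to, assuming each routing simply adds gain * source + offset onto the destination's base value (the exact combination in FragTool may differ, and the names below are invented):

    # Each routing: (source name, destination parameter, gain, offset)
    ROUTINGS = [
        ("lfo1",  "camera_z",   0.50, 0.0),
        ("audio", "glow",       2.00, 0.1),
        ("osc1",  "iterations", 4.00, 0.0),
    ]

    def apply_mod_matrix(base_params, sources, routings=ROUTINGS):
        """Return a copy of base_params with every routed modulation applied."""
        out = dict(base_params)
        for src, dst, gain, offset in routings:
            out[dst] = out.get(dst, 0.0) + gain * sources.get(src, 0.0) + offset
        return out

    params  = {"camera_z": -3.0, "glow": 0.2, "iterations": 12.0}
    sources = {"lfo1": 0.7, "audio": 0.35, "osc1": 0.0}
    print(apply_mod_matrix(params, sources))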

11. Audio and Transport

Play - playback control of the TouchDesigner timeline - starts at the current position

Start - starts playback at the beginning of the selected time range

Pause Anim - pause the animation component while the timeline is still playing

Audio Settings and level - sets levels for monitoring audio and opens a menu to control parameters for the audio mod source.

D: Why did you build your own tool?

Keith Lostracco: To animate 3D fractals in realtime.

D: Explain the idea of what you were trying to accomplish.

KL: I've found there are a few really good 3D fractal apps out there but they are very difficult and tedious to animate with. Initially I built it to do performances and render live, but after using it a while I came to realize that it would work really well to animate in realtime and then to crank up the settings/resolution to record a non-realtime render to disk. Usually I can animate a lower resolution scene at >= 30 fps, to audio, in time to a tempo, using various animation sources - keyframes, LFOs, FFT, OSC, MIDI, 3D controller. Using a tempo reference with a tempo-based timeline I can animate to a piece of music and easily navigate to particular moments in the song.

D: What did you learn from building your tool and how would you do your next-gen?

KL: I've learned a lot about GLSL, math, fractal formulas, distance functions, ray marching and how to build a UI that has very little impact on performance. The next generation is going to have a better render engine (quality and speed), tile rendering, and a dynamically changing UI (dependent on the fractal type and render engine).

D: From idea to product how did it change? Happy accidents, "wow" moments?

KL: Mostly the UI has changed and almost every week now I create/add a new fractal formula. Most of the happy accidents come from creating/modifying formulas and then getting blown away by the images they produce.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

KL: The biggest shortcoming is that even with using GLSL, 3D fractals can take a long time to render. The biggest road block has been understanding fractal math and distance estimation.

D: How many iterations are you at?

KL: 8 or 9. It seems there is always a better way to do something.

All the recent videos on Trypta's Vimeo page were created with FragTool.

Itaru Yasuda's TouchMixerII

About Itaru

I studied Art History and New Media Art in Japan and started making music and visuals around 2005. From the beginning my particular interest was how to make a perfect synthesis of music and visuals with computer programs like Cycling74's Max, Processing and SuperCollider. After graduation, I actively performed my audio-visual pieces in Berlin, Linz, Barcelona and Tokyo. But I also needed a good job to make a living.

I started working as a programmer, mostly on interactive projects in Tokyo. By chance, I got a connection with some of my favorite audio-visual artists, Richie Hawtin & Ali Demirel. They were looking for a young digital artist/programmer to be educated in TouchDesigner. I made it! Now I'm working with Rich & Ali on a lot of visual and interactive projects. Needless to say, TouchDesigner is at the heart of the projects.

TouchMixerII - Features:

  • 2 Channel tox Bank and Mixer: Stocks up to 256 .tox components (TouchDesigner projects) in the bank and smoothly loads them in realtime. Optimized implementation with procedural Python scripting and the Replicator COMP.
  • Simple File System: You can select your root folder, which can have 8 subfolders inside it. Each subfolder will appear as a tab on the bank, and a tab (folder) can contain 32 .tox files (see the folder-scanning sketch after this list).
  • Practical Audio Input: Available for audio-reactive compositions. You can control the incoming signal with a Band EQ and a Parametric EQ.
  • Parameter Auto-Assign Function: You can reserve up to 8 visual control parameters in each of your TouchDesigner projects and the parameters will be automatically assigned to the control section.
  • Output Management: You can define a master resolution and each project will follow that resolution when you load it from the bank. Transform and scale are supported.
  • Selected Visual Effects: Simple but useful.
  • Selected Composite Operators: Based on the “Blend” component in TouchDesigner’s Palette Browser, but tidier and optimized.
  • Flexible MIDI Input: You can activate/deactivate parameters dynamically. I use a KORG nanoKONTROL2, so a sample MIDI map file is included.
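
The file-system rule (one root folder, up to 8 subfolders shown as tabs, up to 32 .tox files per tab) is easy to express in plain Python. This is only a hedged sketch of the folder scan, not Itaru's implementation; in TouchMixerII a Replicator COMP and Python scripts build the actual bank buttons from a listing like this.

    from pathlib import Path

    MAX_TABS = 8          # subfolders shown as tabs
    MAX_SLOTS = 32        # .tox files per tab, 8 * 32 = 256 bank slots

    def scan_bank(root):
        """Return {tab_name: [tox paths]} for the first 8 subfolders of root."""
        bank = {}
        subfolders = sorted(p for p in Path(root).iterdir() if p.is_dir())[:MAX_TABS]
        for folder in subfolders:
            bank[folder.name] = sorted(folder.glob("*.tox"))[:MAX_SLOTS]
        return bank

    # e.g. bank = scan_bank("C:/visuals/tox_library")  # hypothetical path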

*To use all these features, you have to follow some rules when you make your TouchDesigner project. You can find the rules inside the template tox file. It’s simple. Download >

TouchMixerII - Legend

  1. Bank Select Button
  2. File Management Window Open Button
  3. Output Management Window Open Button
  4. Window Setting Window Open Button
  5. Audio Input Control Window Open Button
  6. FPS Monitor
  7. 2 Channel tox Bank
  8. Bank L Monitor
  9. Mixing Monitor
  10. Bank R Monitor
  11. Visual Control Parameter for Bank L (automatically assigned from the selected visual component)
  12. Visual Effects for Bank L
  13. Visual Effects for Bank R
  14. Visual Control Parameter for Bank R (automatically assigned from the selected visual component)
  15. MIDI Input activate/deactivate Button

D: Why did you build your own tool?

Itaru Yasuda: It's just because there was nothing like that. TouchDesigner is obviously great software for creating generative visuals. But there was no practical software/tool for generative visual performance with TouchDesigner.

First I made just a simple visual mixer with TouchDesigner (image below).

I actually performed visuals with this primitive tool at several shows in Europe. But after a big show, a more than 7-hour visual performance in Tokyo last December, I realised I needed a more compact and practical tool for better performances. Back then I was also trying to migrate from TouchDesigner 077 to 088 and I thought that I would probably be able to make what I had in mind with the new Python features in 088. That turned out to be right, and I managed to make my generative visual mixing software with TouchDesigner, which can be downloaded on the Derivative Forum.

D: Explain the idea of what you were trying to accomplish.

IY: I already knew how to make a visual mixer and controller section. But I had no clue how to make a media bank which is the '.tox' file bank in my case. I wanted to make a tidy grid layout .tox bank and combine everything into one simple window.

D: What did you learn from building your tool and how would you do your next-gen?

IY: I learned how powerful the Python and Replicator combination is!

D: From idea to product how did it change? Happy accidents, "wow" moments?

IY: Not really. I did a few sketches on paper and I actually had a quite solid plan. So I just followed my plan.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

IY: Python! I had never learned Python before this opportunity. Derivative's wiki and the help components were really helpful for understanding how Python works with TouchDesigner.

D: How many iterations are you at?

IY: It's at 2.0 - the name 'TouchMixerII' says it all!

Ivan DelSol's CueDesigner

About Ivan

Ivan DelSol started writing graphics-generating code at the age of 7... 40x48 resolution on an Apple II+. This continued as a hobby for many years until the 1990s when he started doing projection art for dance parties in Los Angeles. This soon morphed into doing projections with performance art and theater pieces with Dream Circus Theatre, LA and Bedlam Theater, Minneapolis. For the last decade, he has been creating multimedia bits for community theater shows at the small town art center he co-founded in Cottage Grove, Oregon. Inspired by the X-Box Kinect and TouchDesigner, he has a renewed interest in experimental visual and interactive creations. He is currently phasing himself out of his art center with the intention of re-immersing himself in the world of programmatically-generated art.

D: Why did you build your own tool?

IDS: CueDesigner is a tool made to create and execute a series of audio, video, or any other types of cues that TouchDesigner can implement. It is designed for performances, presentations, etc. and is made to be useable by people who are unfamiliar with TouchDesigner while remaining extremely flexible and extensible for experienced TouchDesigner users. Currently its features don't do much more than, say, PowerPoint, but because of its extensibility it can be grown to do everything TouchDesigner can do. I look forward to including Kinect and object-mapping effects in future theater shows.

D: Explain the idea of what you were trying to accomplish.

IDS: I made this thing because I'm usually the sound and video guy at the small community theater I run. I wanted a tool that would do everything I can imagine and at the same time be useable by others without a ton of training in the event I'm not available. Because a show is automatically generated based on a table of cues and scenes, it is very easy for a beginner to create a show as long as they're using features which are already implemented. I also wanted a cueing tool that had a separate screen for interface and output, which TouchDesigner is particularly suited for.

The tool I used for this sort of thing before I discovered TouchDesigner was built (by me) in Macromedia Director. You can imagine what a nice change it has been to upgrade. I never even bothered trying to teach anyone else to build a show in Director's mediocre 'Lingo' scripting language.

D: What did you learn from building your tool and how would you do your next-gen?

IDS: I built CueDesigner as a "learning TouchDesigner" project, and I did indeed learn a ton doing it. It was fairly fast to create the tool, and along the way I had a great time exploring the TouchDesigner process for video and audio effects, the GUI elements, and the Python scripting interface. In retrospect, there are a few design techniques I would do differently, but for the most part TouchDesigner is straightforward enough that I generally did things the first time in a way that continued to be functional.

D: From idea to product how did it change? Happy accidents, "wow" moments?

IDS: As far as changes from idea to product, there were not many at all. Things flowed nicely. There are, however, a few features I found difficult to implement in TouchDesigner and those I will detail later. Along the way, there were quite a few “wow” discoveries, generally revolving around “Oh, there's already an OP for that!” or “Hey, this OP already has a parameter that does what I need!”

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

IDS: I have come to enjoy TouchDesigner quite a bit, but as with any developing product, there are a few short-comings. I hope to see them addressed in coming releases. These are the main three (details in the forums, username 'sunspider'):

  • I want to capture every keyboard event when TouchDesigner has focus, no matter where the mouse is or what control it's over. Haven't figured out a way.
  • It would be really nice to have a way to edit tables in performance mode. A simple editable table panel component would do it.
  • The Python system falls short when it comes to inheritance and efficient reuse. I'd really like to be able to give operators member functions so I could do things like cueop.play() from a script. I'd also like operators to be able to inherit those member functions so that I could build a basic cue script and then derive all cues from it. I built an experimental inheritance system in CueDesigner but it turned out to be rather clunky and ineffectual. Back to the drawing board on that (a plain-Python sketch of the wrapper-class idea follows below).
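
To illustrate the kind of pattern Ivan is describing (this is not his CueDesigner code; the class and method names are hypothetical), one plain-Python workaround is to wrap an operator reference in a small class hierarchy so that cue behaviour can be shared and inherited:

    class Cue:
        """Base cue: wraps a reference to a cue network and exposes methods."""
        def __init__(self, cue_op):
            self.op = cue_op            # e.g. a TouchDesigner COMP path in a real project

        def play(self):
            print("playing cue", self.op)

        def stop(self):
            print("stopping cue", self.op)

    class VideoCue(Cue):
        """Derived cue: inherits play()/stop() and adds video-specific behaviour."""
        def play(self):
            super().play()
            print("starting movie playback for", self.op)

    cue = VideoCue("scene1/video_cue3")
    cue.play()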

D: How many iterations are you at?

IDS: I've used CueDesigner in two theater shows so far. It has been pretty easy to design new features as needed for each show, and of course I include those in future versions so if anyone else needs, for example, a video that shakes like there's an earthquake, they'll have it ready to go. The power and versatility of CueDesigner, especially given how quick and easy it was to develop even for a TouchDesigner beginner, is really pretty amazing. I'm looking forward to continuing to develop CueDesigner and have ideas for other, more experimental art-generating networks. I've also done some work on Mixxa, which is also a really great, versatile application.

CueDesigner - interface

Here above is CueDesigner in Perform Mode. You can see the list of cues in a scene called "urgrove".

  • A) Scene controls. Generally all an operator will do is hit Next at the right time, but there are Restart, Back and Forward controls as well to alleviate the occasional but inevitable mix-up. The user has the option of pressing the spacebar instead of the Next button.
  • B) This shows the onstage action that the operator is waiting for. When that happens, they'll hit "Next".
  • C) A sound cue in the list. It has, from left to right, a Play button (for jumping to that cue), Volume control, soundwave viewer, and the onstage action that indicates the time for that cue.
  • D) The projector preview area. It is a mini version of what is being projected.
  • E) A video cue in the list. Much like a sound cue, but with a video viewer instead of soundwave.
  • F) Two linked cues. Notice the link marker on the play buttons. This is a video file with a different sound file attached to it for a humorous mis-matched loop effect. The lit up Play buttons indicate that this is currently what's playing.
  • G) Purple highlight indicating the cue that will play when "Next" is pressed.
  • H) A next scene cue. This type of cue... you guessed it... goes to the next scene in the list.
  • I) The scene list. Shows the current scene highlighted. The very observant person may notice there is no next scene called "castle". That's because this is a condensed test version of the show.

Here is the show table used to generate the example shown above. As you can see, it contains general information about scenes and cues. It also has an "extra" column, where any special setting changes or features can be selected. These extras are written as Python dictionaries for ease of scripting, and end up being translated into TouchDesigner tables in their respective networks.
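
A hedged sketch of how an "extra" cell written as a Python dictionary could be parsed and flattened into table rows (the keys and helper names are invented; CueDesigner's actual translation step is not shown here):

    import ast

    def parse_extra(cell_text):
        """Safely evaluate an 'extra' cell like "{'fade_in': 2.0, 'shake': True}"."""
        if not cell_text.strip():
            return {}
        return ast.literal_eval(cell_text)

    def extra_to_rows(cell_text):
        """Flatten the dictionary into (setting, value) rows for a settings table."""
        return [(key, str(value)) for key, value in parse_extra(cell_text).items()]

    print(extra_to_rows("{'fade_in': 2.0, 'shake': True}"))
    # [('fade_in', '2.0'), ('shake', 'True')]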

These are the different archetypes of networks that are used to automatically generate a show from the show_table. On the left are the different kinds of cues that can currently be created. There are some utility functions and the scene archetype in the middle. On the right are some graphic elements (markers) and a spiffy little fader network I created with built-in test controls. The fader is designed in a way that I have found quite efficient... input and output functionality on the component itself, plus built-in python script functionality for all features, plus a very basic control panel for testing or showing on other interfaces.

Here are the inner workings of a scene network. You can see settings, utility functions, the cue table (similar to and generated from the show table, but only cues for this scene), control buttons, and the list of automatically created cue networks.

Here are the inner workings of a video cue network. Again we see settings, utility functions, and control buttons. In the lower left, there are some effects that can be applied, such as 'shake', 'flip', 'area-fit' and of course the fader.

Jim Ellis' Scratchola

Scratchola is a rough TouchDesigner prototype of a realtime video/audio player AND sampler created specifically for a multitouch environment.

It contains a UI where freewheeling, broad physical gestures can effectively scratch and sample a selected video clip easily. Its oversized, extremely wide virtual sliders make for a higher range of video scratch quality and a physicality more like that of a traditional musical instrument or turntable.

It is designed to be simple and to encourage dance and new improvisational patterns in sound/video collage.

D: How's it work?

JE: Select a video from the bins and begin sampling and scratching by simply placing your finger on the largest window near the top of the UI... it's actually a huge slider. Once this area is touched, the playbar snaps to your finger. Release your finger from the slider, and the Rainbow Play Bar moves forward again at normal speed from that point. By doing this you've also just made a looping sample of your finger gesture that is now repeating. The loop is not locked to BPM, but instead is the length of time that you had your finger pressed down inside the window/slider. To record over your previous loop, just move your finger around again.

Model: Slider length = Movie Duration. Left Edge of Slider = Start Frame of Movie Clip. Right Edge of Slider = End Frame of Movie Clip.
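
A toy sketch of that model in plain Python, assuming a normalized slider position u in 0..1 and a per-frame touch callback (the names are hypothetical, not Scratchola's internals): while the finger is down the frame positions are recorded, and on release the recorded gesture keeps looping.

    def slider_to_frame(u, total_frames):
        """Map normalized slider position (0..1) to a movie frame index."""
        u = max(0.0, min(1.0, u))
        return int(round(u * (total_frames - 1)))

    class GestureSampler:
        """Record slider positions while the finger is down, then loop them."""
        def __init__(self, total_frames):
            self.total_frames = total_frames
            self.loop = []

        def touch(self, u):                 # called every frame while touching
            self.loop.append(slider_to_frame(u, self.total_frames))

        def release_and_loop(self, playback_frames):
            """After release, replay the recorded gesture over and over."""
            if not self.loop:
                return []
            return [self.loop[i % len(self.loop)] for i in range(playback_frames)]

    sampler = GestureSampler(total_frames=1000)
    for u in (0.10, 0.12, 0.18, 0.15):      # a short scratch gesture
        sampler.touch(u)
    print(sampler.release_and_loop(10))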

1. Movie Select

This is where to begin with Scratchola. Drag and drop audio/video clips into a library. The currently-playing audio/video file is the performance movie clip. This gadget is a modified “Movie Bin” from the TouchDesigner Palette.

2. Trim Movie and Playback Phase Slider

This is a “Cropping and Looping” interface for the current movie clip. In the center is a long slider with multiple preview images from the selected performance movie clip, like cue points. This gadget also contains two small draggable video preview screens that display the thumbnail images of the In and Out edit points.

This UI can also globally slide the phase of the performance movie clip. This Global Slide allows the clip to be pushed backwards and forwards, through its own In/Out points, changing what section of the clip is played without altering its duration.

3. Scratch Current Playback Frame (A:) and Sample Finger Gesture (B:)

The performance really begins here. At the top of this slider are multiple preview frames (displayed chronologically left to right) representing the entire length of the now-trimmed version of the performance movie clip.

A) Scratch the Current Playback Frame with finger gestures

The output of this slider can be seen in gadget “4:”. A rainbow-colored playbar incrementally ticks throughout the trimmed performance movie clip. The current playback image is stretched out to the entire width of this slider.

Placing your finger within this slider shifts the current frame to your finger's position. Moving the finger back and forth gives the ScratchOla effect!!! When the finger is released from the slider, the movie's playback rate is determined by the radio buttons in the area labeled “AA”.

Finger gestures on the slider labeled “playback speed” (above the first set of radio buttons) are sampled and dynamically alter the performance movie clip speed. A graph of the recorded gesture is displayed in the slider itself. The playback rate of the recorded finger gesture can be multiplied further based on the radio buttons in the area “AA”. This recorded gesture can also be mirrored for smoother loops.

B) The Current Frame of the Sampled/Recorded finger gesture

A finger gesture on the Scratchola gadget “3:” is recorded and looped. The second rainbow-colored bar labeled “Sample” represents the current frame being played within that recorded gesture. The output is in gadget “4:”. When releasing your finger in “Sample" mode, the movie jumps to the frame corresponding to the start of the sampled finger gesture. This recorded gesture can also be mirrored for ping-ponging the transitions.

The playback rate of the recorded finger gesture can then be multiplied or divided based on values determined by the radio buttons in “BB”. The playback rate of the sampled loop may then be altered further by recording an additional finger gesture on the slider “Sample Loop Speed”.

4: Slider to cross-dissolve between “Playback Mode” and “Sample Mode”

D: Jim, why did you build your own tool and what is behind it?

JE: Hummmm, if you've seen the UI of Mixxa, then this may seem rather simplistic, and in many ways it is. The power is in the simplicity. The idea was not to duplicate the fine work that Greg had done with Mixxa, but instead try to increase the quality of gestural captures by gearing the UI design to be more accessible to a user's tendency to express time, thought, and emotion through gesture/dance. Oversized sliders on a multi-touch screen were the key to this.

The difference in gestural capture quality between moving/scratching a video/audio file with a large slider versus a small one cannot be overstated. One pixel's worth of movement on a 100 pixel slider is ten frames' movement in a 1000 frame movie. Make the slider 1000 pixels, and you have one pixel of slider movement equaling one frame of the 1000 frame movie. With this boost in quality, new worlds of expression open up for scratching and sampling video/sound with finger gestures.

D: What did you learn from building your tool and how would you do your next-gen?

JE: I'm learning how to play this new instrument. Future versions will be better optimized to allow the simultaneous mixing/scratching of multiple video clips. I'd like to be able to save out (to re-access) the waveforms of the loops, as well as save presets. Maybe the ability to render and access new video/audio clips while performing. Add multiple chromatic piano keyboards that alter the playback rate/pitch based on the incremental divisions of the Western musical scale.

D: From idea to product how did it change? Happy accidents, "wow" moments?

JE: Didn't change, it turned out exactly as I thought it would. Wow moments? When I first really got the Scratcher working, I melted my own brain. It takes practice to get this one working well, but when it happens, it really seems like a new world. It really is like an instrument.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

JE: Just learning Python.

D: How many iterations are you at?

JE: I've made many semi-related variations of this over the years. For this latest particular device, I'm currently at 830 versions.

VSquared Labs' EPIC

VSquared Labs' Epic system is a hybrid generative and media-based content router and VJ system. We have deployed parts of it in many projects, as the framework is turning into something that is reusable for many things. Its primary use, and what it was really made for, is running improvisational visuals and serving as an ultra-flexible mapping tool for large LED arrays at massive (typically EDM) concerts.

Shown below is our 3-display monitor arrangement:

What's Going On, Epic Performance Interface in 3 parts:

D: Why did you build your own tool?

Peter Sistrom of VSquared Labs: "Epic" has been organically developing at VSquared Labs since before I got there. First versions were pieced together by Bryant Place and the head of VSquared Labs, Vello Virkhaus, and used at many events. I came on shortly before the first Las Vegas Electric Daisy Carnival, and was tasked by Vello with making the system serve two HD displays' worth of pixels. From there we decided to get more serious and steer the underlying framework to meet our needs for stability and flexibility.

D: Explain the idea of what you were trying to accomplish.

VSL: It became a real "tool" when Jarrett Smith and I spent about a week developing a massive upgrade prior to Ultra Miami 2012 at Vello's request. This is where things became modular, a preset system was incorporated, and a sort of "rule book" gestated for how to work within the system. Shortly after that, the need for a networked master/slave computer arrangement came up and that component was added as well, again with some Derivative assistance.

D: What did you learn from building your tool and how would you do your next-gen?

VSL: Recently (prior to Ultra 2013 actually) I put in another session of full-on development, upgrading the system to 088, adding more robust external mapping controls (such as MIDI learn-ability and some control recording abilities), and extending the flexibility of the final output mapping system to allow for even more combinations of sources and spatial LED mapping. It is now Epic MK3.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

VSL: The system is great now because we can have multiple people working on it, creating custom 3D geometry and other assets for specific shows, while others conform it to the particular output arrangements, including projection mapping as well! MK3 just had its first show combining LED and projection mapping recently in Atlanta.

The Sahara tent at Coachella has had some form of Epic for the last 3 or 4 years; this year we were able to show off lots of the new toys and perform-ability during Moby's DJ set.

D: How many iterations are you at?

VSL: So this system is ever changing, which can be good and bad. I never stop thinking about how it can be better, and am always excited when I get the opportunity to reinforce its foundations. Maybe one day it will be in a place where it can actually see some distribution, we shall see!

Lukasz Furman's Vortex

About Lukasz

From an article we posted earlier this year: An Intermedia student at the Academy of Fine Arts in Krakow, Lukasz has been using TouchDesigner for just under a year and in that time has created an intriguing and investigative body of work. Much of this work is based on taking data from almost everything around him (to which Lukasz admits being 'addicted') and then using this data to build experiments using TouchDesigner.

Lukasz tells us "In my spare time I like to VJ at parties and of course it meant I wanted to use TouchDesigner for this as well. Actually, the very first thing I wanted to use TouchDesigner for were audio-reactive visualizations and I am happy to say that my visuals are now definitely audio reactive!

Vortex is a simple user interface based on a surface controlled by a Noise CHOP. I first started using TouchDesigner to perform audio-reactive live visuals and Vortex is my first performance tool."

Legend:

  1. Controls the exponent parameter of the Noise operator to change the turbulence of the Vortex.
  2. Controls the amplitude parameter of the Noise operator to change the dynamics of the Vortex.
  3. Controls contrast and feedback of the Vortex.
  4. Controls the harmonics parameter of the Noise operator to change the flow of the Vortex.
  5. --
  6. Controls the shape of the Vortex.
  7. Controls the distance of the Vortex in space (z axis).
  8. Output monitor.
  9. Controls the focus of the Vortex.
  10. Controls the focus range (DOF) of the Vortex in space (z axis).

D: Why did you build your own tool?

Lukasz Furman: Vortex was built as a realtime animation tool for live performance. It is intended for concerts. For now Vortex is not reactive to the music, but the way you can change parameters makes it intuitive to play with at live concerts.

D: Explain the idea of what you were trying to accomplish.

LF: I was trying to make a set of tools for live performances; the first one is Vortex.

D: What did you learn from building your tool and how would you do your next-gen?

LF: It was the first time I had constructed a UI, so this part was really important for me. I learned many things about optimization and I learned exactly how to use the CHOP to SOP operator.

D: From idea to product how did it change? Happy accidents, "wow" moments?

LF: The Vortex is closed in a kaleidoscopic loop. So from the moment I constructed the 3D base, I kept the focus on UI exports and the composition part.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

LF: For now the most exciting thing for me is optimization, and I'm stuck somewhere in the code at the moment. It will be a long road to understand programming and to use it in a creative way in TouchDesigner.

Greg Hermanovic's Deep Forest Remix

Although this was made pre-088, it is relevant to the theme of custom performance interfaces.

D: Why did you build your own tool?

Greg Hermanovic: I wanted to start from a blank slate and build the user interface as the ideas developed. I wanted to design interactive gadgets and position viewers however I felt most appropriate for performing. I wanted 3 video outs to projectors, and to use a touch screen as the main control monitor to make it easy for the 8 people (1 or 2 at a time) controlling it to learn and play with.

D: Explain the idea of what you were trying to accomplish.

GH: The idea was conceived by Isabelle Rousset and me to make spooky, trippy visuals for a Promise (Toronto) Hallowe'en party. We wanted to rear-project on a big sheet of spandex in one area, white fun fur at face-level in another, and a white circular screen embedded in part of a dark army parachute over the main dance area. All three screens were visible at the same time, so we wanted to create an integrated look where elements in one image appeared as elements in another image.

The slant of the piece was the ominous feeling of looking at shadows of off-camera dead vegetation illuminated from an unsteady hand-held light source, Hitchcock-style(!) Isabelle Rousset and I shot 16 two-minute video loops of said shadows. For the party, Brian Moore, Isabelle and I mixed monochromed pairs of these videos at varying speeds. The video was tinted using a color chooser that conformed to strict 2-color palettes (overview video here).

To combine with this, Isabelle created a series of animal icons, "totem buddies" as named by Brian, to play in the shadows. A bunch of superimposed icons were moved/rotated/delayed in Z using hand gestures, and mixed with the Difference option of the Composite TOP, giving trippy, colorful projections. The same icons were injected into the shadow videos. On top of this, a video sampler component could grab a few frames of video that, when recalled, gave the occasional colorful time-warp.

I pulled parts from other projects to make this quickly: The video player at the bottom left is from Mixxa, the 2-layer mixer in the middle is from InstantG(ratification). At the top right is Jarrett Smith's image sampler/looper Skanners. Some parts are from the UI library UIg, and the rest of the gadgets like the 2-color chooser at the top left and the icon grid at the bottom right were made from scratch.

D: What did you learn from building your tool and how would you do your next-gen?

GH: Well, as head of Derivative, I use projects like this to test and push new features of TouchDesigner. While I develop the TouchDesigner files, when I find a technique awkward to create, rather than settle for a workaround, it often results in some new feature of TouchDesigner that gets programmed into the current build. Then after the events I report problems and suggestions to Derivative R&D that get fixed and improved.

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

GH: No, it actually went really smoothly. It was quite amusing building the whole thing up, getting it playable by others at the party, and watching the results.

D: How many iterations are you at?

GH: The title suggests this comes from the deep forest, but I intend some day that the piece be installed in the deep forest with some interesting twists. So it will make a re-emergence then!

Scott Pagano's Video Mixing Tool

D: Why did you build your own tool?

Scott Pagano: I have been building video mixing tools with modular software for about a decade, starting with Max/nato.055, moving on to Max/Jitter, and then moving on to TouchDesigner. With tools such as Resolume in the world, re-inventing the grid-video-sampler wheel over and over is unnecessary, but TouchDesigner allows for deep customization and modularity, and I enjoy streamlined, purpose-built tools with procedural capabilities.

D: Explain the idea of what you were trying to accomplish.

SP: Video sampler with custom 2D/3D effects that allows for the incorporation of full 3D scene systems. Full control via TouchOSC on an iPad.

D: What did you learn from building your tool and how would you do your next-gen?

SP: The more I work with these tools and systems the more I learn that in a live context I prefer the simplest system with a small number of effective controls. I remember conceiving and building systems with buttons and sliders and controls for so many minute parameters - and in the end having one big slider per effect that may be doing ten things under the hood makes for a more impactful result that is more fun to control.
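
A small sketch of the "one big slider doing ten things" idea, assuming each underlying parameter gets its own range and response curve driven from a single macro value (the parameter names are invented):

    # one macro value in 0..1 drives several parameters, each with its own range/curve
    MACRO_MAP = {
        "feedback":   (0.0, 0.9,  1.0),   # (min, max, curve exponent)
        "blur":       (0.0, 24.0, 2.0),   # eases in slowly, ramps up late
        "saturation": (1.0, 0.2,  1.0),   # can also run "backwards"
    }

    def apply_macro(value):
        """Map a single 0..1 slider to every parameter in MACRO_MAP."""
        value = max(0.0, min(1.0, value))
        return {name: lo + (hi - lo) * (value ** curve)
                for name, (lo, hi, curve) in MACRO_MAP.items()}

    print(apply_macro(0.25))
    print(apply_macro(0.75))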

D: Any short-comings or road blocks you experienced in accomplishing your objectives?

SP: Jarrett Smith from Derivative was a massive help in getting my grid-based clip launching system up and running. At that point in time there wasn't a module available to do this, and it was immensely helpful to have this guidance; I learned a lot about scripting and deeper TouchDesigner thinking.

D: How many iterations are you at?

SP: I have been revising this system since 2009 and while I rarely do live shows at this point - when I do it usually involves an overall tweaking and upgrade. It is good to take the thinking from all the other visual work that I do and bring it back to TouchDesigner and figure out ways to incorporate those visual ideas into my real-time systems.
