
Matt Guertin's Epic, Never Ending, Oculus Rift, Kinect, TouchDesigner 3D Project

Oculus Rift - Kinect TouchDesigner 3D

This video is the culmination of a year and a half of 'work' (playing and having fun would be a better way of describing it...haha). The reason I ordered the Oculus Rift in the first place was to do exactly what you see here. All of the programming is done in TouchDesigner. The live interaction method I developed, along with the 3D drawing, is done using GLSL shaders that I wrote, which is what allows it to be so fast and responsive, with no noticeable delay.

Another thing worth mentioning is that the creation of this video involved a huge file with many components and a ton of rendering, which is the cause of some of the glitches (in the 3D drawing, for example) and the slower frame rate. I will release a couple of other videos soon which will focus on each of the individual components so that people can get an actual feel for how fast and responsive this setup actually is. The 3D drawing, for instance, when stripped down to nothing but its necessary parts, will run flawlessly at close to 50 fps and is much smoother than what you see in this video.

It is with great enthusiasm that we publish Matt Guertin's chronicle of the building of his ever-expanding project, which will not be finished for some time if our last email exchange concerning the release of the Oculus Rift's second developer kit is an indicator of things to come. Reinforcing this belief, 5 minutes before posting, Matt sent a new 'must include' video that he says will "speak for itself." "It's pretty crazy and really demonstrates the true speed/responsiveness of the real-time interaction." (video below)

By way of introduction, in August of 2012 Matt responded to an email I sent immediately after watching his quite strange video "I Like My Coffee Table" on YouTube. His response:

"I have been in love with gadgets, lighting, etc. ever since I was young but besides using Google Sketchup a little I have never used any other sort of 3D modelling software or done any kind of actual programming - besides what I have learned while using TouchDesigner which I started using almost a year ago. I guess that is a good testament to how intuitive the software is.

Currently I work as a lighting and stage designer in Minneapolis and have designed stuff for a wide range of shows - everything from a local modelling agency and corporate events to some of the biggest names in electronic music. I was introduced to TouchDesigner about a year ago, around the same time I saw videos of the Amon Tobin setup.

Needless to say I was amazed. I would have to mark that moment as the beginning of my quest to learn as much as I could about the program, and I have probably watched the video about the creation and design process 100 times or so. TouchDesigner is pretty much all I prefer to do now in my spare time when I am not setting up and running shows, and thanks to TouchDesigner I don't even have a cable television bill anymore. The 42" flat screen I used to watch Oprah on is now a giant computer monitor and much more interesting.

I have never really cared for video games much and cannot remember a time in my life when I looked up to an imaginary super hero. I feel like that may have changed though since discovering all of this. TouchDesigner is like an awesome video game that I finally enjoy and Vello Virkhaus is Superman."

If this account looks long at a glance, it is in fact very concise in view of what Matt is sharing here - the tools, research, process, trials and reconfigurations, introduction of new tools, new code... it goes on and the project evolves, improves. It is fascinating, at times very funny, and an excellent testament to deep prototyping. Enjoy!

Kinect TouchDesigner 3D Audio Warp

Real-time virtual interaction with 3D objects and audio using a Microsoft Kinect and TouchDesigner software.

From the Beginning

I still like my coffee table as you may be able to tell. Basically this has just been a continuation of that project as I learned more and more about how to achieve different things in TouchDesigner. Another way of looking at it would be as a never ending project...being that it still isn't complete by any means.

One of the hardest things about TouchDesigner for me is the fact that you can create pretty much anything you can think up, which is difficult sometimes because there are a million different directions to go at all times and a million more ideas that I am constantly thinking up (even while sitting at red lights or brushing my teeth...)

I Like My Coffee Table

The first video I made, which I very creatively named 'I Like My Coffee Table', involved augmented reality and 3D virtual reality in a way, but it was more of an illusion in that it looked 3D to anyone viewing it in the video I made but wasn't very realistic to me. This is what led me to start experimenting with anaglyph 3D at first, while at the same time I was messing around with various aspects of character rigging inside of TouchDesigner.

I guess the very first time that I really became interested in trying to interact with stuff in 3D was when I set up a rigged model of a head and some arms that I was controlling with the Kinect's skeletal tracking points. I had everything scaled so that it lined up with the real world and then once I added in a rectangular 3D box and angled it up to sort of resemble a control panel the obsession began. I was fascinated with the fact that there was a 3D shape floating in the middle of my living room that wasn't really there....and I guess I was sort of bothered by the fact that I couldn't interact with it or affect it even though I could see right there on my screen that I was touching it.

It was not long after this that I figured out the 'magical code' to line up the Kinect's 3D point cloud perfectly with real-world 3D space, and around this same time I discovered the Oculus Rift Kickstarter campaign. I was actually looking into a couple of other virtual reality headsets such as the Sony HMZ, while at the same time researching head-tracking solutions such as IR markers so that I would be able to get my head rotation into TouchDesigner somehow to control the camera.

One night while researching how to create my own version of the Oculus Rift, a little Google ad popped up on the right-hand side of my screen for the Oculus Rift Kickstarter campaign. $300? Yes please! The timing was actually pretty crazy, given that it was exactly what I was trying to piece together. I guess if there was one time that I could thank Google for knowing everything about me, that would have been it.

My Virtual Living Room

The virtual living room idea came to me as I started to mess around with the Kinect and make sure I was scaling the point cloud correctly to real-world points in the room. I had to plot in some reference points, so it started out with a simple model to represent my coffee table, and then that quickly grew to include my whole living room. Once I had a good deal of it completed and it was lined up perfectly with the Kinect, I had to accept the fact that I had permanently blocked my hallway off, and I began the practice of telling everyone who came into my house "OK - I HAVE ONE RULE - DO NOT TOUCH THAT LAMP, DO NOT GO NEAR IT, DO NOT BUMP IT, STAY AWAY!"

I have been walking through my kitchen to get to the rest of my place for the last year and a half instead of through my main hallway. The lamp stand that the Kinect is on is actually Gorilla-taped to the floor right now after I took care of a couple cats for a friend and was all worried that they were gonna start rubbing up against it...haha. My Kinect is very serious.

The couch is actually inserted back into the project as a static 3D point cloud in order to be able to achieve the realistic shadow effect. I did one scan where the Kinect is currently positioned and then did one more scan of the corner spot it can't reach, combined them in Photoshop, isolated the couch, and then fed it back into TouchDesigner through a Photoshop TOP.

I created all of the geometry for my living room model as well as the elevator and the Launchpad with Google SketchUp and then imported all of it into TouchDesigner as an .fbx file. The piano was downloaded into SketchUp and the keys were saved into individual groups which were then also imported into TouchDesigner as an .fbx file.

My Elevator (or 'Pod' as you like to call it)

Another random idea that came to me, which I found highly entertaining (meaning I sat in my place laughing quite a bit at first as I climbed up and down from my coffee table), was to involve my coffee table again somehow to keep with the theme of my other video....so why not turn it into an elevator?

I originally created the elevator before figuring out the Kinect shader code so the initial idea involved a lot of layers, alpha masks, and turning things on and off. It didn't work too well in the beginning but became a whole lot easier to complete once I figured out how to place myself into actual 3D space via the Kinect point cloud.

I know it's completely ridiculous....and that is why I like it. I figured I may as well make it even more ridiculous and rep TouchDesigner at the same time by throwing in a little control screen and a keyboard.

The sound effects are from Freesound.org.

Kinect Point Cloud Lined Up Perfectly In 3D Space (Magic Formula)

The original GLSL shader code that I ended up tweaking came from TouchDesigner user 'theotheo' and was posted on the forum in April of 2012. Although this shader was the most awesome thing I had come across so far for generating a Kinect point cloud, it still suffered from some distortion...meaning it wasn't lining up perfectly with the real world.

The 'magic moment' if you will came after I found a post on Microsoft's Kinect forum in which one of their engineers was explaining to someone how they arrive at their calculations for the real world skeletal points the Kinect outputs (as opposed to the uv/image space coordinates it also outputs).

The key to the GLSL shader code is the 'x_mult, y_mult, and z_mult' variables, which are plugged into 3 different uniforms in the 'Vectors 1' section of the GLSL TOP named 'depth2pos' in the .tox file I shared.

The x multiplier '1.12' and y multiplier '0.84' represent the 4:3 aspect ratio that the Kinect outputs as its depth map. These two multipliers are each multiplied by the corresponding UV coordinate after 0.5 has been subtracted from it (you end up with -0.5 to 0.5 instead of 0 to 1, which centers the x and y coords at 0), and the result is then multiplied by the depth value. The z multiplier '8.16327' is the main number responsible for scaling the point cloud correctly.
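In shader terms the math boils down to something like this - a minimal GLSL sketch of the idea rather than the exact shader from the shared .tox, with placeholder sampler and resolution names (only the three multiplier values come from the project):

uniform sampler2D sDepth;          // Kinect depth map (4:3), placeholder name
uniform vec2  uRes;                // resolution of the depth map, placeholder
uniform float x_mult;              // 1.12
uniform float y_mult;              // 0.84
uniform float z_mult;              // 8.16327

out vec4 fragColor;

void main()
{
    vec2  uv    = gl_FragCoord.xy / uRes;
    float depth = texture(sDepth, uv).r * z_mult;   // scale raw depth into world units

    // Center the UVs at 0 (-0.5 to 0.5), then multiply by the aspect
    // multiplier and the depth so points farther away spread out wider.
    float x = (uv.x - 0.5) * x_mult * depth;
    float y = (uv.y - 0.5) * y_mult * depth;

    fragColor = vec4(x, y, depth, 1.0);             // RGB = XYZ point position
}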

The Piano

The piano was my original goal as far as the real-time interaction is concerned. I figured it was a good starting point for what I was trying to accomplish. Basically XYZ becomes RGB, which is then processed and compared against the limits/extents of the objects to determine what zone you are in.

The zone is then assigned as an integer to the alpha channel of the image, which allows you to know which particular button/key the data corresponds to. I compare those zones to a static overhead shot of the piano (where each piano key's RGBA color value actually represents XYZn, where n = MIDI note #) and I use the GLSL distance function to output a TOP of k x 1 pixels, where k = keys/columns. That TOP is then run into a TOP to CHOP to retrieve the data I need.
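To make the zone idea a little more concrete, here is a simplified GLSL sketch of just the zone-assignment half. The uniform names and the key count are made up for the example, and the bounding boxes stand in for the actual key extents:

uniform sampler2D sPoints;     // point-cloud TOP, RGB = XYZ (placeholder name)
uniform vec2 uRes;             // resolution of the point-cloud TOP
uniform vec3 uKeyMin[16];      // per-key Min_X / Min_Y / Min_Z extents (assumed)
uniform vec3 uKeyMax[16];      // per-key Max_X / Max_Y / Max_Z extents (assumed)

out vec4 fragColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / uRes;
    vec3 p  = texture(sPoints, uv).rgb;

    float zone = 0.0;
    for (int k = 0; k < 16; k++) {
        if (all(greaterThanEqual(p, uKeyMin[k])) &&
            all(lessThanEqual(p, uKeyMax[k]))) {
            zone = float(k + 1);   // 1-based so 0 can mean 'no key'
            break;
        }
    }

    fragColor = vec4(p, zone);     // keep XYZ, zone index in alpha
}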

All of the interaction (including the sequencer grid and the Launchpad model) is done using the limits and boundaries of the buttons/keys. Max_X, Min_X, Max_Y, Min_Y, etc.

The piano is outputting a MIDI signal which is running into Propellerhead's Reason software via LoopBe virtual MIDI software. I started out using MIDI-OX but was limited to 32-bit. This project wouldn't even load in the 32-bit version of TouchDesigner after a while (50 MB .toe file), so I finally found LoopBe, which supports 64-bit, and all was well.

Kinect Piano Demo Running Into Reason

Finger Tracking and Drawing

The finger tracking is achieved by determining where the center of the hand is and then centering a 3D sphere on that point. Instead of using the Kinect's skeletal hand output, which is intermittent, I use the wrist points. I then figure out the hand's center point by getting the averages of all of the RGB pixel values which make up the hand (RGB = XYZ).

The 3D sphere that I center on that point - and I guess the technique overall - could best be described as a basketball with your hand perfectly in the center. I am searching in the front hemisphere for the five points closest to the inside edge of the ball. I do this by running my hand and the ball into a GLSL shader I wrote which compares the two against each other using the GLSL distance function.

Remember that the sphere and the hand's RGB color values actually represent XYZ. Once I get the distance computed I output it through the alpha channel. I'm left with the position of a 3D point and the distance of that point to the closest point on the sphere.
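As a rough sketch of that comparison (generic GLSL with placeholder names, not the original shader - it assumes the ball is described by a center and a radius rather than rendered geometry):

uniform sampler2D sHand;        // hand-only point cloud, RGB = XYZ (placeholder)
uniform vec2  uRes;             // resolution of the point-cloud TOP
uniform vec3  uHandCenter;      // averaged hand center position
uniform float uRadius;          // radius of the 'basketball'

out vec4 fragColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / uRes;
    vec3 p  = texture(sHand, uv).rgb;

    // distance() from the point to the sphere's center, minus the radius,
    // tells you how far the point sits from the sphere's surface.
    float dSurface = abs(distance(p, uHandCenter) - uRadius);

    fragColor = vec4(p, dSurface);   // keep XYZ, distance to the ball in alpha
}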

There is a lot of work left to do as far as the finger tracking setup goes though. It still doesn't differentiate between which finger is which, and the technique doesn't work very well unless your fingers are pointing forward a little bit. The one advantage it does have over the k-curvature algorithm is that it can still get the points of your fingers even when they are pointing pretty much straight forward (such as when you are playing a piano or drawing....)

To use the finger tracking output for the drawing functionality I am just averaging together the two points which are closest to the sphere. Being that I am only using two fingers for the drawing, there will normally only be one or two points coming out of the finger tracking setup anyway. I am then 'recording' the trail of points by feeding them into a FIFO (First In First Out) DAT > DAT to CHOP > Limit SOP.

Sequencer Grid and Custom Programming Interface

The sequencer grid is tilted at 45 degrees, which makes it a little trickier to program. It involves the same concept as the piano though: RGB = XYZ.

To program the button grid I created a custom programming interface which makes it modular and expandable as far as geometry and parameters are concerned. It allows you to create a single geometry object in a network window and then outputs all of the data needed to instance the geometry on the GPU automatically. The CHOP that it outputs contains all of the channels needed to create the sequencer.

You can add multiple 'EVENT' components to the GUI, and you can then click to add as many rows of 'FUNCTIONS' as you want to that particular 'Event'. Each Event component corresponds to a 'MINScale' value and a 'MAXScale' value, which are represented on the left-hand side of the GUI and which the Event component connects to via 'Strings' that are highlighted once that particular 'Event' is clicked on. The YMax and YMin values in the GUI are the minimum and maximum Y coordinate travel overall for the whole button grid. All of the names for the Events and Functions can be freely edited by clicking on the name and typing.

When you double-click on the little button to the right of the Functions label, it automatically opens up a network window where you can edit the value coming in. The value I speak of refers to the Kinect point position in that particular grid zone which is closest to the YMin value. The network window that opens up contains various CHOPs and DATs which all have the correctly scaled value as it relates to the MINScale/MAXScale value for that particular Event component.

Once you process the data using either CHOPs or DATs, all you have to do is name the output channel to reflect which geometry instancing parameter you want it to control. It automatically creates another CHOP network which outputs a CHOP with the same processing performed on it, but which now contains all 128 channels needed to set up the geometry instancing.

By only having to create a single piece of geometry that gets instanced, I am able to render it out and use it in the GUI as an interactive preview model. The reason I say it is interactive is because you can click on the different parts of the geometry to highlight them. Once highlighted, you can then drag the rectangular buttons/blocks to the right of the Functions title and drop them onto one of the parameters that you want it assigned to (the column of parameters underneath the button labeled 'container' on the GUI).

Oculus Rift Input

I was able to feed the Oculus Rift's head-tracking data into TouchDesigner by using a WebSocket DAT and an application called 'RiftServer' that someone released on the Oculus forum, bundled with a Google Street View demo for the Oculus. It outputs quaternions, so I had to run it through an Angle CHOP to get XYZ rotation values, which are then fed to the two cameras to control their rotation. To output the correctly distorted image that the Oculus needs, I wrote a shader using some code that was provided in the Oculus Rift documentation.
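That distortion pass looks roughly like the barrel-distortion sample from the DK1-era Oculus documentation. The sketch below follows that sample with placeholder uniform names and values, and is not the exact shader from the project:

uniform sampler2D sEye;        // rendered image for one eye (placeholder name)
uniform vec2 LensCenter;       // lens center in texture coords
uniform vec2 ScreenCenter;     // center of this eye's half of the screen
uniform vec2 Scale;            // scale back out of lens space
uniform vec2 ScaleIn;          // scale into lens space
uniform vec4 HmdWarpParam;     // k0..k3 distortion coefficients

in  vec2 vUV;                  // 0..1 texture coords from the vertex stage
out vec4 fragColor;

vec2 HmdWarp(vec2 uv)
{
    vec2  theta  = (uv - LensCenter) * ScaleIn;          // into [-1,1] lens space
    float rSq    = dot(theta, theta);
    vec2  warped = theta * (HmdWarpParam.x + HmdWarpParam.y * rSq +
                            HmdWarpParam.z * rSq * rSq +
                            HmdWarpParam.w * rSq * rSq * rSq);
    return LensCenter + Scale * warped;
}

void main()
{
    vec2 tc = HmdWarp(vUV);

    // Black out samples that warp outside this eye's half of the screen.
    if (any(notEqual(clamp(tc, ScreenCenter - vec2(0.25, 0.5),
                               ScreenCenter + vec2(0.25, 0.5)), tc)))
        fragColor = vec4(0.0);
    else
        fragColor = texture(sEye, tc);
}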

I also just noticed that TouchDesigner has supported the Oculus Rift since November. I wasn't paying attention apparently. Ha

 

And There's More! Other Projects From Matt...

Interactive LED Strip

Serial to Ethernet Converter, ArtNet to WS2811 Converter, Wireless Router, Ultrasonic Distance Sensor, and of course....TouchDesigner. 

ArtNet and serial are both running over the same wireless network. Serial data from the ultrasonic distance sensor is fed into TouchDesigner using a virtual COM port. TouchDesigner processes the serial data and then outputs ArtNet DMX, which is transmitted over WiFi back to the ArtNet to WS2811 converter box.

Interactive LED Bartop Using 5 Kinects

An interactive LED bartop I designed for a new video-game-themed club called Insert Coins in Minneapolis, MN.

Everything is controlled with TouchDesigner. The system uses a DMX input as well as a DMX output, allowing for remote control via LightJockey in the DJ booth or via an iPad running TouchOSC behind the bar. Five Kinects mounted around the bar above and pointing down provide the motion/depth sensing for the different zones (19 interactive zones in all).
