Led Pixel Mapping

Hello guys… I’ve been working in TouchDesigner for the last couple of months and I’ve started learning generative design… For the last few days I was working with MadMapper and Resolume Arena, sending visuals to addressable LEDs… Now I’m trying to use the same concept in TouchDesigner: create the visuals inside TouchDesigner and send them to my LED lights via an Art-Net output. But I’m not able to create the LED fixtures for pixel mapping… I’ve tried looking for examples and tutorials but with no success yet… Any help would be greatly appreciated… Based on the same concept I want to create a grid of volumetric lighting in the future…

The TOP to CHOP is a great way to get an image into channel data, and the DMX Out CHOP can be used to send data to LED control devices.
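Outside of Touch, the conversion those operators perform is easy to sketch in plain Python. This is just an illustration of the flattening idea, not TD or DMX library code; the `pixels_to_dmx` helper is hypothetical:

```python
# Sketch of the TOP-to-CHOP / DMX Out idea in plain Python.
# A TOP frame is a grid of RGB pixels; DMX wants a flat list of
# 0-255 channel values, three per pixel (R, G, B).

def pixels_to_dmx(pixels):
    """Flatten a row-major grid of (r, g, b) floats in 0..1
    into a flat list of 0-255 DMX channel values."""
    dmx = []
    for row in pixels:
        for r, g, b in row:
            dmx.extend(round(c * 255) for c in (r, g, b))
    return dmx

# Two "strips" of three pixels each:
frame = [
    [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)],
    [(1.0, 1.0, 1.0), (0.0, 0.0, 0.0), (0.5, 0.5, 0.5)],
]
print(pixels_to_dmx(frame))
```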

When it comes to light volumes take a look here:
matthewragan.com/2017/08/11/tou … ic-lights/

Hi Matthew,

I’m having a blast with your various TouchDesigner tutorials. Thanks so much for sharing!
I’m still very green, and I had a question on the following step regarding LED pixel mapping.
I made a very simple test LED setup that consists of 2 LED strips of 25 RGB pixels each.
I’m able to generate some simple visuals on the LED strips.
Like Tejaswi, I wanted to create visuals in a grid-like space. I dove into your tutorials that make use of the instancing technique. I find myself stuck and find it difficult to wrap my head around how to get the visual exactly mapped onto the LEDs. See the file attached.
I simplified the version and used only one strip of LEDs. I made a visual that consists of 1 rectangle that’s being instanced 25 times to have the same number of pixels as the LED strip.
I scaled the actual resolution to be 10 times higher in order to make the visual crisper.
To make the LED strip in TouchDesigner I used the Line SOP with 25 points. When I combine the visual and the LED strip in the TOP to SOP, the actual output doesn’t fit properly onto the strip. I find it difficult to figure out how to scale it so that the mapping is perfectly aligned. Hope you can give me a nudge in the right direction.

Thanks for taking the time.

Cheers,
ridder
Instancingtest1.6.toe (10.6 KB)

Hi ridder,

Nice work so far.

The thing to keep in mind here is that the instances conversion is just to help us get color data onto our geometry in Touch. When it comes to driving actual LEDs you’ll want to use the first pass of converted data (what’s in your instances).

Consider this - your render is just a way for you to visualize what’s happening with your data, but not at all useful for driving your LEDs. Your LEDs want / need the same information that you’re using to drive your instances.

Does that make sense?

Have a look here:

derivative.ca/Forum/viewtopic.ph … +uv+offset

UV offsetting can be a useful way to think through how you’re sampling from an image to get color, and how you might think about building something to both previs your LED installation, as well as what data needs to get sent out to your LEDs.
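As a plain-Python sketch of that sampling idea (the `sample_uv` helper is hypothetical, not part of Touch): each LED gets a normalized UV coordinate, and a half-pixel offset keeps each sample centered on a pixel, which is often the culprit when a mapping looks slightly misaligned:

```python
# Hypothetical sketch of sampling an image at per-LED UV coordinates.
def sample_uv(image, u, v):
    """Nearest-neighbor sample of a row-major pixel grid at
    normalized (u, v), where (0, 0) is the first pixel."""
    h = len(image)
    w = len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

# A 4x1 "image" and 4 LEDs spread across u with a half-pixel offset
image = [[(0.0,), (0.25,), (0.5,), (0.75,)]]
num_leds = 4
for i in range(num_leds):
    u = (i + 0.5) / num_leds   # half-pixel offset centers each sample
    print(sample_uv(image, u, 0.0))
```

Without the `+ 0.5` offset the samples land on pixel edges, which is one way a strip ends up visibly shifted from its source image.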

Hey Ridder,

I had 15 minutes to put together a fast example here:
base_pixels_to_LEDs_099.tox (2.41 KB)

The big picture idea is that you think of your grid as a screen where each point / vertex represents a pixel. We can use instancing to quickly preview what’s happening, but ultimately you’re just going to send the color vals to your LEDs. In this example that would be what’s in the shuffle1 CHOP.
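As a rough plain-Python analogy for what that Shuffle step amounts to (not the CHOP’s actual API): three parallel color channels, one sample per LED, interleaved into the R,G,B,R,G,B,… stream a controller expects:

```python
# Sketch of a Shuffle-style interleave: three parallel channels
# (one sample per LED) become one R,G,B,R,G,B,... stream.
r = [255, 0, 0]
g = [0, 255, 0]
b = [0, 0, 255]

interleaved = [v for rgb in zip(r, g, b) for v in rgb]
print(interleaved)
```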

Does that help?

Hi Matthew,

Thanks for taking the time to help out. Much appreciated!

Ah, great! I was thinking along the same lines, with the points and vertices being the same as the pixels of the LED screen :slight_smile: This was a vital piece of info though! “Consider this - your render is just a way for you to visualize what’s happening with your data, but not at all useful for driving your LEDs. Your LEDs want / need the same information that you’re using to drive your instances.” Thanks for making it visual in the file!
I used your example to see if it worked on my simple LED strip and it did perfectly :slight_smile:

To make sure I understand it correctly when doing the same with a three-dimensional LED setup: for example, in the touchdesigner_instancing_pixelmappingGeomotry file you shared in the quoted thread, you have the process and instancing bases where the content is being generated. In the instances base, is the null final the data you wanna use for driving the LEDs?

Sorry to bother you for so long, but I have a final question about setting up such a rig in 3D. Like you mentioned in the example, it depends on how the LEDs are addressed. But how can you address a 3D rig, since LED strips are addressed in snake mode? You’d have to flip every other LED strip 180 degrees digitally in order to make the content show up the proper way.
Is there a connection with the way you sorted the Box SOP in the example file I mentioned above?

Thnx!

Cheers,
Ridder

Yes and no. In this example I’m really thinking about how to play with the idea of visualizing this kind of effect, not how you’d drive a volumetric display. You could use this approach, but the addressing is going to be difficult since the width of the TOP is the width of the screen times the depth. To get this correctly mapped onto a volume of LEDs you’d have to think through how you shuffle the samples and channels, and how to account for your addressing schema.
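As a plain-Python sketch of that addressing problem, assuming the volume is rendered as one wide image whose width is the screen width times the depth (the `slice_volume` helper is hypothetical):

```python
# Sketch: a "volume" rendered as one wide row of samples,
# width = w * depth. To address a volume of LEDs you slice that
# wide row back into depth-many w-wide planes.
def slice_volume(row, w, depth):
    assert len(row) == w * depth
    return [row[d * w:(d + 1) * w] for d in range(depth)]

row = list(range(12))          # 4 wide, 3 deep
print(slice_volume(row, 4, 3))
```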

Volumetric displays are the wild west - and if you’re rolling your own solve here it’ll be a little bit of an adventure to find a solution you like. I’d probably think through how you could handle those flip or transform operations at the TOP level so you don’t have lots of crazy CHOP channel operations to get the orientation of the video right. You could probably write a simple shader to do those transform operations before you convert to CHOP channels - which is probably what I’d do these days as a solve.

Yes and no… again, the use of the Box SOP is really about creating points for instancing - which is just your way to visualize what you’re trying to make, but not really helpful for actually addressing your LEDs. Think of your instances as your simulator - what you could do is flip every other set of planes in your box (you could get there by grouping and transforming) - this could potentially get you a digital mirror of how your LEDs are addressed, so you can work through how to re-arrange your samples / channels without having the whole rig built.
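That flip-every-other-row idea is the same trick snake-mode addressing needs. As a hypothetical plain-Python sketch:

```python
# Sketch of serpentine ("snake mode") addressing: physical LED strips
# alternate direction, so every other row of samples is reversed
# before being sent out.
def serpentine(rows):
    return [row[::-1] if i % 2 else row for i, row in enumerate(rows)]

grid = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(serpentine(grid))
```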

Does that help?

Hi Matthew,

These are some great leads to work with. Thanks!
I’m going to deep dive right in. Will update when I’m there, hopefully without any further questions in between :wink:

Cheers,
Ridder

Here’s a quick look at the idea of flipping your texture at regular intervals based on a shader approach:

  • edit - better approach without an if statement
uniform float uNumReps;

out vec4 fragColor;
void main()
{
	// create some empty variables to change later
	vec2 newST  	= vec2(0.0);
	float index 	= 0.0;
	
	// index our space based on the number of repetitions
	float indexStep	= (1.0 / uNumReps);
	index 			+= step(indexStep, mod(vUV.s, indexStep * 2.0));

	// apply a coordinate reflection based on which rep
	// a given slice represents

	// this if statement approach was my first thought, though
	// it's generally bad form to use if statements in 
	// frag shaders.
	
	// if( index == 0.0){
	// 	newST 	= vUV.st;
	// } 
	// else{
	// 	newST 	= vec2(1-vUV.s, vUV.t);
	// }

	// instead, a mix function is a different way to approach this issue:
	// many thanks to Mike Walczyk for continued help and eyes on these things.
	vec2 uv 		= vUV.st;
	vec2 flop_uv 	= vec2(1.0 - vUV.s, vUV.t);

	newST 			= mix(uv, flop_uv, index);

	// grab the input color from our source texture based on 
	// our new ST coords
	vec4 fromInput 	= texture(sTD2DInputs[0], newST);

	// output color
	fragColor 		= TDOutputSwizzle(fromInput);
}

Here’s the thing in action:
base_texture_flip_099.tox (1.87 KB)

Hi Matthew,

That’s awesome! Will look into it this weekend.
Question out of curiosity: with my greenhorn knowledge, it seems to me that when you have a proper knowledge base of Touch you can set up the mapping of LED rigs quite fast, at least when they are straightforward in shape, and generate some awesome content.
Do you think it can compete with the amount of time it takes to set up such a rig, in terms of mapping, in for example Modul8/Resolume? Or is it a better workflow to generate the content in Touch and use Syphon/Spout to transfer it into Modul8/Resolume?

Thnx.

Cheers,
Ridder

I don’t think it can compete, if you’re building from the ground up every time. Turnkey or bespoke tools will almost always be faster to configure. The trade-off is that you’re forced to conform to the ideas of other programmers and developers - which can sometimes mean that realization of a design or idea is about changing your art, rather than changing the code. Some of that’s about style, and some of that is about design and functionality.

If you mean, can you build a tool to map LEDs with touch as fast as you might map them with another application, the answer is yes - absolutely. If you haven’t yet, you should look at Lucas Morgan’s killer project geopix:

enviral-design.com/geopix/
enviral-design.com/geopix/ge … el-mapper/

Does that answer your question?

Wow, a killer project indeed! Awesome.

Completely answered my question. Thnx.