UV Unwrapped Rendering

Please post on my RFE if you’d like this to be added as an official feature:
derivative.ca/Forum/viewtopi … =17&t=9582

So: many systems either render using unwrapped UVs, or let you ‘bake’ your feeds into a flat movie. The common approach is to use UV coordinates and unwrap the geometry.

Advantages:

  • A single-viewpoint render is restricted by its angle: it determines which parts of a geometry are visible and what the pixel density is at each point. This method lets you define the rendering resolution through the UV mapping.
  • If some of your content was produced by another artist or method, you can render to a map and perform simple video operations to transition or mix between your content and theirs.
  • In projection-mapping depth illusions (trompe l’oeil), you can make the best-quality intermediate render from a specific viewpoint, then map it onto an object for projection.
  • Some systems, like pixel-based LEDs, are inherently a pixel map, and you can render to that map from a physical representation of the LED structure/object.
  • You can use TouchDesigner as production software, delivering baked renders for use in Touch or other software.

Method:
The key is to ‘stash’ the UV coordinates you need in a custom vertex attribute. You can then re-texture and draw however you need, as normal (the Texture SOP sets UV coordinates).
When you render, you need a custom vertex shader. A starting point can be generated from a Phong MAT or PBR MAT. Make the render look as close to your final product as you can: adding some render features requires a change in the shader, so try to get close enough that you only need to tweak parameters and light position/angle, etc.
Once you have a GLSL MAT, apply textures and everything else you need, and make the render using it instead of the donor Phong or PBR MAT.
Now add this near the top of your custom vertex shader:

// Use the custom attribute we added to save orig coords
in vec3 uvMap;
And at the very end of your shader, right before the closing squiggle }:

// Leave the vVert stuff, but change position!
gl_Position = vec4((uvMap.st * 2.0) - 1.0, 0.0, 1.0);
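
Putting the two pieces together, here is a minimal sketch of what the finished vertex shader might look like. It assumes a shader exported from a Phong MAT of that era: the vVert block layout and the built-in names (P, Cd, uv, TDDeform, TDWorldToProj) follow TouchDesigner’s exported template and may differ in your build, so treat it as a guide rather than copy-paste code.

// Custom attribute we stashed on the SOP
in vec3 uvMap;

out Vertex
{
    vec4 color;
    vec3 worldSpacePos;
    vec3 texCoord0;
} vVert;

void main()
{
    // Usual vertex work, so the fragment shader still receives
    // correct world-space data for lighting and shading.
    vec4 worldSpacePos = TDDeform(P);
    vVert.worldSpacePos = worldSpacePos.xyz;
    vVert.color = Cd;
    vVert.texCoord0 = uv[0];

    // The normal camera projection the template generates...
    gl_Position = TDWorldToProj(worldSpacePos);

    // ...overridden at the very end: place each vertex at its
    // stashed UV location, remapped from [0,1] to clip space [-1,1].
    gl_Position = vec4((uvMap.st * 2.0) - 1.0, 0.0, 1.0);
}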
How it works:
TouchDesigner fragment shaders use the vVert structures to calculate all lighting and shading, effectively ignoring the built-in gl_Position; but the final output position of your vertex is still taken from gl_Position. Overriding it with the stashed UVs, remapped from the 0-1 UV range to the -1 to 1 clip-space range, makes each vertex land at its texture location, so lighting is computed as usual while the mesh is drawn flattened out.
It’s a hack (which is why I’d like it built into TouchDesigner) that gets the best of both worlds at the cost of some simple GLSL code (not that it was that simple to work it out!).

Check it out.

Bruce
UV unwrapped rendering.zip (11 MB)

Can’t wait to try this! +1 for official feature.

Hi Bruce, I’ve been looking for a way to bake textures in TouchDesigner. I was happy to find your script.

Specifically, I’m using a light as a projector (using the Projector Map attribute) to cast a Render TOP onto a Geo. I then want to integrate that cast projection into the texture of my Geo (bake it) so that I can use it in a projection-mapped project I’m working on. What’s tricky is that I’d like to do all this in real time as well. Your script is the only solution that seems to come close to letting me do this, but I can’t wrap my head around it.

I’m having trouble getting your script to work on my models. I have no idea why, but when I export my models with their UV sets as .FBX from Maya, import them into TD, and plug the mesh into your example program, the mesh doesn’t unfold and the “unwrap” container is blank. I took the boxes included in your example into Maya to see why they work; I tried distorting them in every way and changing names, and your mesh still works fine when I export it as .FBX and reimport into TouchDesigner. So I don’t know why mine don’t work. Is it something in the scripts? Nothing jumped out at me.

Thanks a lot for any advice you can throw my way!

Glad it’s helpful (assuming it works!).

The key to everything is UV coords in the mesh. I don’t know Maya, but possibly your mesh doesn’t have them? In Cinema 4D, the object would have a texture tag.

Try importing your model into Touch and, in a Geo, look at the viewer display options to turn on visible UV coordinates.

You should also be able to texture it in Touch (Texture SOP) and then stash those coordinates, but really you want them in the model.

Bruce

Thanks for your quick reply Bruce!

I ended up trying with Cinema 4D, and sure enough, my models are working now. The same model that works when exported as .FBX from Cinema 4D does not work when exported from Maya. The vertices are listed, and the model indeed shows correct UVs when applying a texture, and yet my Maya exports just don’t want to work with the unwrap container. I’ll use Cinema 4D for now.

If any Maya users find out what the reason for this is I’d love to know!

And thanks again, Bruce, for your reply. I’m breathing a sigh of relief to see a way forward :)

Sounds mostly like user error in Maya. Make sure in your FBX export options that you have UV Write selected. I’ve never had any issue bringing UV’d geo into Touch.

Hi,

Just jumping on the bandwagon…
So the idea would be to render the scene as seen by the projector? Meaning converting the projector to a camera and projecting it back through the real projector?

Rendering the scene as seen by the projector is a different issue, and there are threads about that; look for camschnappr. The question here is rendering ‘unwrapped’ so that you can manipulate the texture, then rewrap it onto your shape and use that for a projector setup.

This feature is going to be built in soon, I believe.

Bruce

Yep, it’s in the 2019.10000 series of builds.

Is it an official feature?
Where can I find this?
Palette?

Thanks

In the Render TOP.

This is the part I’m still a bit unclear about. After looking at and tweaking the examples posted here and in the RFE thread, I’m still not sure I see the advantages of this method or why/when you would use it. Why can’t you manipulate the texture/material without unwrapping it?

I think I understand that it’s useful for LED screen setups that aren’t a standard flat rectangular screen, so it can help with mapping content onto them, but I’m not understanding the workflow. Maybe it’s hard to imagine when I don’t have the actual physical output to see the differences.

If anyone has spare time, could they update this to use TD’s built-in UV Unwrap render mode and then, if possible, show the differences between using this method vs. not using it?

Cheers

In short, it turns a render into a texture: you can paint and light an object, then render it unwrapped. From there, you can place it back on a flat-shaded model to project it from multiple angles.

You can do something similar with other texture modes, but they won’t have the pixel density you want.

The key is that projection mapping (or a crazy-shaped LED setup) is a two-part process: a render of effects, maybe from a viewer position, and a second pass for display. This method gives you the best potential quality between those two steps.
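
In practice the display pass can be as simple as a Constant MAT with the baked TOP as its color map, but here is a sketch of what that second pass amounts to in GLSL. sBakedMap is an illustrative sampler name, and TDDeform, TDWorldToProj, and uv follow TouchDesigner’s GLSL MAT conventions, which may differ by build.

// Pass-two vertex shader: normal camera transform, pass the mesh's UVs through.
out Vertex
{
    vec3 texCoord0;
} vVert;

void main()
{
    vVert.texCoord0 = uv[0];
    gl_Position = TDWorldToProj(TDDeform(P));
}

// Pass-two fragment shader: a flat lookup of the baked map, so the
// lighting and effects rendered in pass one carry over unchanged.
uniform sampler2D sBakedMap; // the unwrapped render from pass one

in Vertex
{
    vec3 texCoord0;
} iVert;

layout(location = 0) out vec4 fragColor;

void main()
{
    fragColor = texture(sBakedMap, iVert.texCoord0.st);
}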

Bruce