NVIDIA SMP

I can imagine it would be great for CAVE-type setups.
youtube.com/watch?v=mq8T42ymiBw
Zero performance impact. What do you guys think?

Interesting. I watched the whole thing. It would be a performance boost for anything that relies on separate render passes where the only difference is the camera projection matrix. Relevant examples would be CAVEs or VR. Currently VR works by running a separate render pass for each eye, and each pass computes the world-space geometry (in the vertex shader and optionally in a geometry shader). The speakers in the video point out that this is redundant: the only difference between the two renders is the camera projection matrix. Wouldn’t it be nice if there were a hardware solution that computes the world once but applies different projection matrices for different screens or different VR outputs? Well, now there is! And they say the hardware supports up to 16 simultaneous projections that share the same world.

In TouchDesigner, the vertex shader might look like this:

[code]// Deform the vertex into world space (done once per vertex)
vec4 worldSpaceVert = TDDeform(P);

// Equivalent two-step version, via camera space:
// vec4 camSpaceVert = uTDMat.cam * worldSpaceVert;
// gl_Position = TDCamToProj(camSpaceVert);

// One-step version: straight from world space to clip space
gl_Position = uTDMat.camProj * worldSpaceVert;
[/code]

The new feature moves the uTDMat.camProj transformation to somewhere after the vertex shader (and after the optional geometry shader) but before the rasterization step, and therefore before the pixel shader. This info is in the last 8 minutes of the video. I hope this can be brought to TD as a new feature.
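For the curious, here is a rough sketch of how the raw GL extension exposes the stereo case, based on my reading of GL_NV_stereo_view_rendering (this is not TouchDesigner code, and the per-eye uniform names are made up). The world-space work runs once, and the hardware emits the same vertex a second time with a second projected position:

[code]#version 450
#extension GL_NV_viewport_array2 : require
#extension GL_NV_stereo_view_rendering : require

uniform mat4 uLeftCamProj;   // hypothetical per-eye projection matrices
uniform mat4 uRightCamProj;

in vec3 P;

void main()
{
    // World-space work happens once for both eyes
    vec4 worldSpaceVert = vec4(P, 1.0);

    // Primary and secondary projected positions, one per eye
    gl_Position            = uLeftCamProj  * worldSpaceVert;
    gl_SecondaryPositionNV = uRightCamProj * worldSpaceVert;

    // Route each position to its own viewport (bit 0 and bit 1)
    gl_ViewportMask[0]            = 1;
    gl_SecondaryViewportMaskNV[0] = 2;
}
[/code]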

Ya, I just got a 1070 with the intention of adding this at some point soon, I hope.

That’s really cool, malcolm. What do you think of the new “FastSync” buffer arrangement? Could it solve a lot of tearing problems?

I don’t think that solves any tearing issues; it just reduces latency. It also means that frames aren’t guaranteed to make it to the screen, which will cause stuttering playback if you are looking for perfect 30 or 60 fps smoothness.
FastSync is for gaming performance, not prettiness really.

Oh ok. I got the impression from the presentation that it was actually for eliminating tearing, but I’ll take your word for it.

It’s to eliminate tearing caused by having vsync off. The tearing that plagues multi-screen setups isn’t caused by having vsync off, but by other things such as Aero issues or mismatched EDIDs.

I’ll mention that you should always use the TD*() functions for outputting your gl_Position. For this feature I’ll need to do extra work on the vertex position, and I can only do it in the TD*() functions.
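For example, the vertex shader from the first post could hand the projection step back to TouchDesigner like this (a sketch using TDWorldToProj(), which comes up again later in the thread):

[code]// Let TouchDesigner own the world-to-projection step so it can
// inject the multi-projection work behind the scenes
vec4 worldSpaceVert = TDDeform(P);
gl_Position = TDWorldToProj(worldSpaceVert);
[/code]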

Cool. It looks like I’ll be buying a 1070 too :smiley:

So far the ‘Stereo’ optimization has been released in build 4500. The more general multi-projection will be coming in the next build.

Awesome!

This is brilliant for dome / 360 projections. Very curious about the real-life impact when using 16 different cameras. Thanks so much for integrating this so fast, Malcolm; it’s immediately super-helpful for our current project.
Also, that shaders will now have access to info about the current camera is fantastic. It will let me easily create a stereoscopic 360 GLSL MAT!

Hey Malcolm

The Render TOP wiki says multi-camera is only supported for 2D and cubemaps. Is 2D a typo? If cubemaps work, it seems that 3D should already work.

Is it technically possible to use multi-camera with variable render resolution, i.e. specify the resolution per camera and still get all the multi-camera benefits? The use case is VR and render picking. Ideally we could use a single Render TOP for the 2 VR views and for all the controller views (but at lower resolution).

Hey Achim,
When it says 2D, it means the output is 2D as opposed to a cubemap. Any render is always 3D.

The Render Pick nodes already render at a tiny resolution, just Pick Size x Pick Size, so by default the render is 1x1. The resolution of the Render TOP doesn’t matter, only the aspect ratio (since that’s how the UVs will be mapped into the scene).
The idea with the multi-camera rendering is exactly for VR cases. You can now have your VR Render TOP which will be doing 2 eyes in one pass. And then a Render Pick DAT which will be doing all the picking in one pass also, for example from 2 controllers and the head position all at the same time.

So that is, all in all, 2 passes: one for both eyes and one for all the controllers/picking.

I was wondering if all that could happen in one single pass.

And in addition, maybe also render a cubemap (which will need a different resolution than the VR renders) in the same single pass?

Correct, it’s all in 2 passes right now.

It’s possible that other extensions would allow efficiently rendering a cubemap at the same time as the main render, in the same pass. Picking I don’t think so, since picking is done with a different shader than a regular render.
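As a purely speculative illustration of that kind of extension (my assumption, not something TouchDesigner exposes): GL_NV_viewport_array2 can broadcast one vertex to several viewports and, with its viewport_relative qualifier, route each copy to a different layer of a layered target, e.g. the six faces of a cubemap. The per-face orientations would then have to be set up on the API side (e.g. with GL_NV_viewport_swizzle), and the uniform name below is made up:

[code]#version 450
#extension GL_NV_viewport_array2 : require

uniform mat4 uWorldToClip;  // hypothetical shared transform

in vec3 P;

// viewport_relative: the final layer is gl_Layer plus the viewport
// index, so viewports 0-5 land on cubemap faces 0-5
layout(viewport_relative) out highp int gl_Layer;

void main()
{
    // World-space and projection work happens once
    gl_Position = uWorldToClip * vec4(P, 1.0);

    gl_ViewportMask[0] = 0x3F;  // broadcast to viewports 0-5
    gl_Layer = 0;               // face = 0 + viewport index
}
[/code]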

I got my 099 license and Pascal card :smiley:. Can I still use TDWorldToProj in a geometry shader? In build 5580 I see: Geometry Shader Compile Results: 0(73) : error C1008: undefined variable "TDWorldToProj"

Not currently. I’m considering how automatic I want geometry shaders to be, since they allow lots of custom work to be done. Let me think about that a bit more.
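In the meantime, a possible manual workaround (an untested assumption on my part) is to do the projection yourself in the geometry shader with the matrix from the first post, keeping in mind that this bypasses whatever multi-projection work the TD*() functions would inject:

[code]// Inside the geometry shader, per emitted vertex; assumes the uTDMat
// block is visible in this stage. Note this skips the TD*() path.
gl_Position = uTDMat.camProj * worldSpaceVert;
EmitVertex();
[/code]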

AMD just released their equivalent features:
[url]http://www.roadtovr.com/amd-radeon-crimson-relive-adds-asynchronous-space-warp-latest-radeon-software-update/[/url]
So with some extra effort, maybe there could be cross-platform support.

Bruce

It’s actually been in AMD drivers for a while; they’ve just added some branding for it, it seems. We already support it on AMD as well; it’s the same GL feature.