3D volumetric rendering (using near/far clipping on the Camera COMP)

I made this patch for a specific type of 3D volumetric display, the LedPulse Dragon-O, which, unlike other volumetric displays, has an organic placement of the LEDs: every layer has an x/y offset (layers 1-4, then repeating) instead of a straight grid.

Because of the offset they can pack a higher virtual resolution (120x120 pixels over 3x3 meters) into a lower-resolution video signal (60x60 pixels) that drives the LEDs through an LED controller.

So rendering only at the spots where the physical LEDs are makes it possible to pack 2 virtual layers into 1 video layer, which gives a more accurate picture of the displayed content.
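The idea above can be sketched in a few lines of Python. The exact Dragon-O offset pattern isn't documented here, so the per-layer offsets below are an assumption chosen to illustrate the principle: each layer's 60x60 LEDs sit on different positions of the 120x120 virtual grid, so consecutive layers never collide.

```python
# Sketch of why the per-layer x/y offset lets layers share the virtual
# grid without collisions. The offset pattern below is ASSUMED, not the
# real Dragon-O layout.

VIRTUAL = 120  # virtual resolution of the render
VIDEO = 60     # resolution of the video signal per layer

# assumed per-layer (x, y) offsets in virtual pixels, repeating every 4 layers
LAYER_OFFSETS = [(0, 0), (1, 1), (0, 1), (1, 0)]

def led_positions(layer):
    """Virtual-grid coordinates of the physical LEDs in one layer."""
    ox, oy = LAYER_OFFSETS[layer % 4]
    return {(2 * c + ox, 2 * r + oy)
            for r in range(VIDEO) for c in range(VIDEO)}
```

With this pattern, any two of the four layer types cover disjoint virtual positions, and the four together tile the full 120x120 grid — which is what allows two virtual layers to share one video frame without overwriting each other.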

The engine slices the picture; by making the slices thinner and the render resolution higher, you can play with the accuracy of the model. I even went up to 480x480 pixels to get a more accurate representation of the physical LED placement.
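The slicing itself comes down to stepping the camera's near/far clipping planes through the depth of the volume, one thin slab per render pass. A minimal sketch, assuming a uniform slice thickness (function and parameter names are illustrative, not the patch's actual ones):

```python
# Sketch of the slicing step: each render pass narrows the camera's
# near/far clip planes to one thin slab of the scene.

def slice_planes(depth_start, depth_end, num_slices):
    """Return (near, far) clip distances, one pair per render pass."""
    thickness = (depth_end - depth_start) / num_slices
    return [(depth_start + i * thickness,
             depth_start + (i + 1) * thickness)
            for i in range(num_slices)]

# e.g. a volume 3 units deep, starting 1 unit from the camera, in 60 slices;
# each (near, far) pair would be fed to the Camera COMP before its pass
for near, far in slice_planes(1.0, 4.0, 60):
    pass
```

Thinner slices mean more render passes, which is why the slice count and the render resolution trade off against each other.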

This number of render passes is only possible at these low resolutions; the other bottleneck is the SOP (CPU) side. For my next version I am trying to accept point clouds, so everything stays in the TOP (GPU) world.

Normally you would need to render from 3 sides to get an accurate model. When rendering a box as a SOP, the faces that lie parallel to the view direction disappear, because the camera looks straight along their edges. That is why I render twice: once normally and a second time in wireframe.

The filter container filters out all the unwanted positions (those that do not represent LEDs) and then converts the result to a lower resolution (which is not possible by simply lowering the render resolution).
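The filter step can be sketched as picking every second pixel of the high-resolution render, starting at the layer's offset, and gathering those into the 60x60 frame. This also shows why plain downscaling won't work: a resize filter averages neighboring pixels, while here you need exactly the pixels that sit on the LEDs. (The offsets are again assumed, as above.)

```python
# Sketch of filtering a 120x120 render down to one layer's 60x60 frame
# by keeping only the pixels on that layer's (assumed) LED offsets.

def filter_and_pack(render, ox, oy):
    """render: 120x120 image as nested lists; returns the 60x60 frame."""
    return [row[ox::2] for row in render[oy::2]]

# a dummy 120x120 "render" where each pixel encodes its own position
frame = [[r * 120 + c for c in range(120)] for r in range(120)]
packed = filter_and_pack(frame, 1, 0)  # layer with x-offset 1, y-offset 0
```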

Another solution would be to use a SOP grid that represents the LEDs and then pixel-map every layer, but at higher resolutions this slows the computer down.

See example here:

Update: while making a UV map for an LED sphere I found out that when instancing with an orthographic camera you lose half a pixel at the edges of the render; I adopted the same compensation technique in the Dragon render patch.
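My reading of the half-pixel issue, sketched below under assumptions (the patch's actual compensation isn't shown here): an orthographic render samples the scene at pixel centers, so an instance placed exactly on the display edge falls half a pixel outside the outermost sample. Widening the camera by one pixel's worth realigns the edge with the outermost pixel center.

```python
import math

# Sketch of the half-pixel edge loss with an orthographic camera.
# All names and numbers are illustrative.

def pixel_centers(width, res):
    """x coordinates sampled by an orthographic render `width` units wide."""
    px = width / res
    return [-width / 2 + px / 2 + i * px for i in range(res)]

res, width = 60, 60.0          # units chosen so 1 pixel = 1 unit
edge = width / 2               # an instance sitting on the display edge
centers = pixel_centers(width, res)
gap = edge - centers[-1]       # the edge is half a pixel past the last sample

# compensation: widen the camera so the old edge positions coincide
# with the outermost pixel centers
compensated = width * res / (res - 1)
```

With the compensated width, `pixel_centers(compensated, res)[-1]` lands back on `edge`, so instances on the rim are no longer clipped.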