I made this patch for a specific type of 3D volumetric display, the LedPulse Dragon-O (https://www.ledpulse.com/technology/organic-system), which, unlike other volumetric displays, has an organic placement of the LEDs: every layer is offset in x and y (the offsets cycle through layers 1-4 and then repeat) instead of sitting on a straight grid.
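To make the offset idea concrete, here is a minimal sketch. The actual Dragon-O offsets aren't published, so the half-pitch values and the 0.05 m pitch (3 m / 60 LEDs) below are just illustrative guesses:

```python
# Hypothetical sketch of the "organic" LED placement: each layer's LEDs are
# shifted in x/y by one of four offsets that repeat every 4 layers. The real
# Dragon-O offsets aren't public; the half-pitch values here are placeholders.
LAYER_OFFSETS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]  # in LED-pitch units

def led_position(col, row, layer, pitch=0.05):
    """Physical x/y position (metres) of the LED at (col, row) on a given layer."""
    ox, oy = LAYER_OFFSETS[layer % 4]
    return ((col + ox) * pitch, (row + oy) * pitch)
```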
Because of the offsets they can pack a higher virtual resolution (120x120 pixels on 3x3 metres) into a lower-resolution video signal (60x60 pixels), which drives the LEDs through an LED controller.
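As a quick sanity check on those numbers, using the same hypothetical half-pitch offsets as in the sketch above: four consecutive 60x60 layers, each with a different offset, together land on every position of a 120x120 grid, which is how a 60x60 signal can carry a 120x120 virtual resolution.

```python
# Four hypothetical half-pitch offsets: four consecutive 60x60 layers together
# cover every position of a 120x120 virtual grid (measured in half-pitch units).
LAYER_OFFSETS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]

covered = {
    (2 * c + int(2 * ox), 2 * r + int(2 * oy))
    for ox, oy in LAYER_OFFSETS
    for c in range(60)
    for r in range(60)
}
assert len(covered) == 120 * 120  # 14400 distinct virtual positions
```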
So rendering only at the spots where the physical LEDs are makes it possible to pack 2 virtual layers into 1 video layer, which gives a more accurate picture of the displayed content.
The engine slices the picture, and by making the slices thinner and the render resolution higher you can tune the accuracy of the model. I even went up to 480x480 pixels to get a more accurate representation of the physical LED placement.
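The slicing itself boils down to splitting the model's depth range into slabs and doing one render pass per slab, roughly like this (the names and parameters are mine, not from the actual patch):

```python
def slice_bounds(z_min, z_max, n_slices):
    """Split the model's depth range into equal slabs; each slab gets its own
    render pass (e.g. with the camera's near/far clipping set to the slab)."""
    thickness = (z_max - z_min) / n_slices
    return [(z_min + i * thickness, z_min + (i + 1) * thickness)
            for i in range(n_slices)]

# More, thinner slices and a higher render resolution give a more accurate
# model, at the cost of more render passes.
slabs = slice_bounds(0.0, 3.0, 24)  # e.g. a 3 m deep volume cut into 24 slabs
```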
This many render passes is only feasible at these low resolutions; the other bottleneck is then the SOP (= CPU) side. I am trying to get my next version to accept point clouds so everything stays in the TOP (= GPU) world.
Normally you would need to render from 3 sides to get an accurate model: when rendering a box as a SOP, the faces that are edge-on to the camera disappear. That is why I render twice: once normally and a second time as a wireframe.
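Conceptually the two passes just get merged per pixel (in TouchDesigner this would typically be a Composite TOP); the numpy version below is only to show the idea, not how the patch actually does it:

```python
import numpy as np

def combine_passes(solid_pass, wire_pass):
    """Per-pixel maximum of the normal render and the wireframe render, so
    geometry that vanishes in the solid pass (faces edge-on to the camera)
    still shows up via the wireframe pass."""
    return np.maximum(solid_pass, wire_pass)
```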
The filter container removes all the unwanted positions that don't represent LEDs and then converts the result to a lower resolution (something you can't get by simply lowering the resolution).
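I can't show the internals of the filter container here, but the idea is roughly this: sample the high-res slice only at the pixels that correspond to physical LEDs and pack those samples into the small frame for the controller, instead of averaging the whole image down (which is what a plain resolution change would do). A hedged sketch, with made-up names:

```python
import numpy as np

def pack_layer(render, led_rows, led_cols, out_res=60):
    """render: one high-res slice (e.g. 480x480); led_rows/led_cols: the pixel
    coordinates of this layer's physical LEDs (out_res * out_res of them).
    Returns the low-res frame that goes to the LED controller."""
    samples = render[led_rows, led_cols]        # keep only the LED positions
    return samples.reshape(out_res, out_res)    # pack into the 60x60 video layer
```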
Another solution would be to use a SOP grid that represents the LEDs and then pixel-map every layer, but at higher resolutions this slows down the computer.
See example here: https://forum.derivative.ca/t/3d-artnet-mapping/8699/18
https://www.facebook.com/reel/916189039550480