hello there.
there has been some discussion on this forum about how to render fulldome-ready images, but no ready-to-use solution has been presented.
below you'll find a patch that demonstrates how to stitch a cubemap generated by the Render TOP into a fulldome image by means of a very simple glsl shader.
the trick is easy: for each output pixel, convert its cartesian position on the fisheye disc to spherical angles, convert those back to a 3d cartesian direction, feed the result into textureCube(), and voilà. i actually had some trouble believing it's that easy.
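for reference, the pixel shader could look roughly like this. this is a minimal sketch of the idea, not the exact code from the patch — the uniform name sCubeMap, the 180° aperture and the axis convention are my assumptions:

```glsl
// fragment shader sketch: map each fulldome (fisheye) pixel to a cubemap lookup.
// sCubeMap, the 180-degree aperture and the -z view axis are illustrative assumptions.
uniform samplerCube sCubeMap;
in vec2 vUV;            // 0..1 across the output image
out vec4 fragColor;

void main()
{
    // center the UVs so (0,0) is the middle of the dome image
    vec2 p = vUV * 2.0 - 1.0;
    float r = length(p);
    if (r > 1.0) { fragColor = vec4(0.0); return; }   // outside the fisheye circle

    // cartesian -> spherical: radius maps to the polar angle
    // (0 at the zenith, 90 degrees at the dome rim for a 180-degree fisheye)
    float theta = r * 3.14159265 * 0.5;
    float phi   = atan(p.y, p.x);

    // spherical -> cartesian: a direction vector for the cubemap lookup
    vec3 dir = vec3(sin(theta) * cos(phi),
                    sin(theta) * sin(phi),
                    -cos(theta));

    fragColor = texture(sCubeMap, dir);
}
```

(in older GLSL versions the lookup would be textureCube(sCubeMap, dir) instead of texture().)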
so you need to render a cubemap first, which is not very fast. there is another solution: bend the vertex positions with the same technique in a vertex shader. that needs only ONE render, but you have to tessellate the geometry heavily, which might lead to even longer render times.
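the vertex-shader variant could be sketched like this — again an illustration under my own assumptions (uModelView, uNear and uFar are made-up names, and straight edges only stay acceptable if the mesh is tessellated finely):

```glsl
// vertex shader sketch: bend vertices into fisheye space directly,
// avoiding the cubemap render. uniform names are illustrative assumptions.
uniform mat4 uModelView;
uniform float uNear;    // near clip distance
uniform float uFar;     // far clip distance
in vec3 P;              // object-space position

void main()
{
    vec3 v = (uModelView * vec4(P, 1.0)).xyz;   // camera-space position
    float d = length(v);
    vec3 dir = v / d;

    // spherical angles of the view direction (camera looks down -z)
    float theta = acos(-dir.z);                 // angle off the view axis
    float phi   = atan(dir.y, dir.x);

    // place the vertex on the fisheye disc; the radius encodes theta
    float r = theta / (3.14159265 * 0.5);       // 180-degree fisheye
    gl_Position = vec4(r * cos(phi), r * sin(phi),
                       (d - uNear) / (uFar - uNear) * 2.0 - 1.0, 1.0);
}
```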
have fun with this and create astonishing immersive content!
documentation is in the patch. cube_to_fisheye.toe (9.19 KB)
Can you post an example, Sheff? I'm unable to reproduce your problem. You shouldn't be able to see seams in the original HDR map, though; if you do, they are going to show up in your final render.
Could the cubemap-to-fulldome shader be used to convert pre-rendered cubemaps to fisheye masters? I'm experimenting with the new Corona Renderer in 3DS Max, which has no spherical camera yet.
You can get rid of the seams if you change the vertex shader for the surface you are rendering. The environment map and rim lighting are based on camSpaceNorm in the default PhongMAT.
You can change this snippet to have the envmap and rimlights based on the surface’s worldspace normals:
In vertex main():
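something along these lines — a sketch only, since uWorldMat and N are placeholder names; check the actual uniform and attribute names your PhongMAT exposes:

```glsl
// in vertex main(): derive a worldspace normal instead of camSpaceNorm.
// uWorldMat and N are placeholders for your MAT's actual uniform/attribute names.
vec3 worldSpaceNorm = normalize(mat3(uWorldMat) * N);
// then feed worldSpaceNorm (instead of camSpaceNorm) into the envmap and
// rim-light terms, so they stay fixed in world space and the seams disappear
```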
my problem is the opposite direction: how do i take fisheye fulldome images as input and turn them into a cubemap, i.e. five separate images i can then put through 5 projectors and map/mask to fit a dome?
i don’t understand why i would want to warp to fisheye then unwarp to projectors. there is a lot of distortion and detail lost at the edges of a fulldome image. i’d rather not warp to fisheye at all with 3d content… BUT with video i have captured with my 4.5 sigma lens, i would need to unwarp it to mix with the realtime objects, then use a camera rig of the corresponding projectors.
For a start you will be losing a ton of pixels projecting a square image from each unit, but the real issue is: how do you set up projectors to replicate cubemap positions? Splitting and warping from fisheye lets you choose projector placement, geometrically correct and warp the output, and specify what each unit should display, i.e. spanning multiple cubemap faces.
the problem with fulldome is that you are 1. putting a circle inside a rectangle (creating a lot of blank space) and 2. stretching the edge pixels to infinity, then unstretching them in the projection.
So I've been playing with Steve Mason's uber cool Evil Space Flame patch for fulldome projection, using Bergi's great cubemap-to-fisheye file. Occasionally, however, the cubemap faces become visible. I've attached a file if anyone wants to have a look. cube_to_fisheye 4.toe (12 KB)