camschnappr best practices

I have been using camschnappr for a while now and really appreciate the continued work updating it.

I have some questions about how to get the most accurate results and some assumptions over time that I have made that would be good to have clarified.

Are more points always better?
It seems like it only needs 5 points to resolve, but do additional points add more sources of error or more precision?

Does z-depth matter?
It seems the objects with the most success have some variance in z-depth towards the camera.
As opposed to using a flat pattern or checkerboard, something like a cube works better.

Lens shift and wide angle?
Historically I try to set the lens shift so the light exits as close to the center of the lens as possible. It seems like the distortions are non-uniform if the light is not exiting at the center of the lens. This is usually much less adjustable with short-throw lenses.
How does camSchnappr guess these intrinsics? Can it properly handle short-throw lensing?

Filling the scene, close or far
I have tried calibrating objects closer to the projector to get more of the projection area covered, but have come to the conclusion that a farther distance is better if you can fill the scene with reference points. What arrangement gives the best results?

I understand that for most use cases the mapping is close enough for static shapes. However, for my use case I need more accurate mapping for moving objects.
Any suggestions are appreciated. Perhaps an updated manual exists with some of these answers.

Hey,

Those are all great questions, and perhaps they should go into an FAQ eventually!

Whether more points give you a better result depends greatly on how precisely the virtual model fits the real one. If the fit is not very tight, more points could increase the error. The OpenCV calibrateCamera function returns a reprojection error; perhaps we should expose that so users can estimate the quality of the projection.
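For anyone curious, here is roughly what that reprojection error looks like in plain OpenCV - a minimal sketch, not camSchnappr's internal code, with made-up cube corners, pose and intrinsics so it runs on its own:

```python
import numpy as np
import cv2

# 3D reference points: the eight corners of a unit cube (stand-ins for camSchnappr model points).
obj = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
               dtype=np.float32)

# Synthesise the "clicked" 2D points by projecting with a known camera and pose.
K_true = np.array([[1500, 0, 960], [0, 1500, 540], [0, 0, 1]], dtype=np.float64)
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 4.0])
img, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K_true, None)
img = img.astype(np.float32)

# A single view of non-planar points needs an intrinsic guess (the same flag camSchnappr uses).
K_guess = np.array([[1200, 0, 960], [0, 1200, 540], [0, 0, 1]], dtype=np.float64)
flags = cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K3

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [obj], [img], (1920, 1080), K_guess, None, flags=flags)
print("RMS reprojection error (px):", rms)  # this is the number worth exposing to users

# Per-point residuals show which reference points hurt the fit most.
reproj, _ = cv2.projectPoints(obj, rvecs[0], tvecs[0], K, dist)
residuals = np.linalg.norm(img.reshape(-1, 2) - reproj.reshape(-1, 2), axis=1)
print("Worst point error (px):", residuals.max())
```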

Z-depth does matter - if you calibrate with a checkerboard pattern, you would usually take multiple calibration runs. This is not implemented in camSchnappr but can be done with calibrateCamera.
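The multi-run checkerboard workflow is plain OpenCV rather than anything in camSchnappr - roughly like the sketch below, where the folder name and board size are just assumptions:

```python
import glob
import numpy as np
import cv2

board = (9, 6)  # inner corners of the printed checkerboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for path in glob.glob("calib_shots/*.png"):  # several photos of the board at different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    objpoints.append(objp)
    imgpoints.append(corners)

# Several views of the flat board give calibrateCamera the depth variation
# that a single flat view cannot.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
```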

camSchnappr uses the intrinsic guess flag of the calibrateCamera function. There is a “Fix Principal Point” flag on the Advanced page of camSchnappr which should cover center-lens projectors.
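At the OpenCV level those two options look roughly like this - my reading of the flags, not camSchnappr internals, and the FOV number here is just a placeholder:

```python
import math
import numpy as np
import cv2

width, height = 1920, 1080
fov_h_deg = 30.0  # rough horizontal FOV of the projector lens (placeholder)
fx = fy = (width / 2) / math.tan(math.radians(fov_h_deg) / 2)

# Intrinsic guess: seed the solver with this matrix instead of letting it
# initialise blindly. Fix Principal Point: keep cx, cy pinned at the value in
# the guess (here the raster centre), which suits centre-lens projectors but
# not heavily shifted or short-throw lenses.
K_guess = np.array([[fx, 0, width / 2],
                    [0, fy, height / 2],
                    [0,  0, 1]], dtype=np.float64)

flags = cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_PRINCIPAL_POINT

# obj / img would be the model vertices and their clicked projector-space
# positions, as in the earlier sketch:
# rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#     [obj], [img], (width, height), K_guess, None, flags=flags)
```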

I think results from further away are better, as you have a bigger range of values to work with. You can also try playing with the Iterations and Precision parameters on the Advanced page - this might offer improvements.
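My assumption is that Iterations and Precision map onto the termination criteria that calibrateCamera accepts; raising them would look something like this in plain OpenCV:

```python
import cv2

max_iterations = 200  # more refinement steps before the solver gives up
precision = 1e-8      # stop once the parameter change drops below this

criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
            max_iterations, precision)

# Passed as the final argument, reusing the points and flags from the sketch above:
# rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
#     [obj], [img], (width, height), K_guess, None, flags=flags, criteria=criteria)
```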

Will update the wiki with the current parameters and some more in-depth info.

Does this help a bit?
Markus

D3 actually shows this in the viewer: the flag goes from red to green as it gets a better calibration. You can also change the calibration type, although 99% of the time everyone sticks to the standard Zhang calibration.

I’m guessing, Harvey, that you’re attempting a moving projection and getting errors as you move the model? I’ve found that the best practice, camSchnappr-wise, is to have the extremes dealt with. So if I have a box and I calibrate the corners of that box, everything will be perfectly aligned right up until I move that box out of its “bounds”, and then things go wrong. What is required is a set of known points in the environment around that object. This helps calibration massively.
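If it helps, a quick way to sanity-check a point layout before committing to it is to look at how much of the raster the clicked points span and how much depth range the reference geometry covers - a rough heuristic of my own, not a camSchnappr feature:

```python
import numpy as np
import cv2

def coverage_report(img_points, obj_points, width=1920, height=1080):
    """img_points: Nx2 clicked projector positions, obj_points: Nx3 model points."""
    hull = cv2.convexHull(np.asarray(img_points, dtype=np.float32))
    raster_fraction = cv2.contourArea(hull) / float(width * height)
    z = np.asarray(obj_points, dtype=np.float32)[:, 2]
    return raster_fraction, float(z.max() - z.min())

# Points bunched in one corner with little depth spread will report low numbers,
# which is roughly when calibrations start to drift outside their “bounds”.
```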

But what about the camSchnappr ‘Advanced’ Tab?

I love camSchnappr. It’s been a great tool for me on a few jobs now. Still, the mystery of the Advanced tab kinda baffles me. There’s nothing in the Wiki about these params. A few questions:

The main one: how come the camSchnappr wireframe calibration doesn’t change when I change parameters? For example, I would expect that when I change a parameter like “Precision”, I would see an update to the output calibration. This isn’t the case. When does it update?

Why is FOV a parameter? Isn’t that what camSchnappr is solving for? Should I just put in my best guess here?

What is an Intrinsic Guess? How does that relate to Fix Principal Point?

How does a higher Max Iterations affect the output calibration?

How does higher precision affect the output calibration?

Thanks for all your hard work on this tool!