Kinect2 - Projector Calibration

Thank you so much for releasing this. I have been trying to figure out a better way than importing the calibration from RoomAlive for a while now. This is a much cleaner solution.

Hey,

I posted an updated version (viewtopic.php?f=22&t=12895#p49372) that fixes the issue with offset grid points when running the Kinect at a lower resolution (mainly when trying this with the Non-Commercial version).

The issue was that I had a fixed offset and orthographic camera width that assumed a camera resolution of 1920x1080.
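For anyone curious what that fix amounts to: a pixel offset hard-coded for 1920x1080 just needs to be rescaled by the actual camera resolution. A minimal sketch, assuming a simple proportional scaling (the offset values and function name are hypothetical, not taken from the actual .tox):

```python
# Hypothetical sketch: scale a grid offset that was hard-coded for
# 1920x1080 to whatever resolution the Kinect is actually running at.
BASE_W, BASE_H = 1920, 1080  # resolution the original offsets assumed

def scaled_offset(offset_x, offset_y, actual_w, actual_h):
    """Scale fixed pixel offsets to the camera's actual resolution."""
    return (offset_x * actual_w / BASE_W,
            offset_y * actual_h / BASE_H)

# e.g. running the Kinect color stream at 1280x720:
print(scaled_offset(96, 54, 1280, 720))  # -> (64.0, 36.0)
```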

Cheers
Markus

PERFECT! THERE IT IS! I WILL TRY IT LATER!

THX

Hi Markus
during the last week I did a lot of calibrations at different locations using your .tox.
Everything worked fine: It is stable, the results are reproducible, the UI is good.
Thnx a lot, great job!
I use this camera for the Kinect directly, but also for the Vive tracker as an alternative to
CamSchnappr if I don't have a 3D model of the object.

I will now invest some time to understand what needs to be done in order to get the best results possible by using this checkerboard approach with the kinect.
The quality of the results is currently not easy to judge, because this can only be done by projecting
the skeleton onto a person and then visually inspecting the image (as far as I know).
I would expect that there is somewhere inside opencv some kind of value, that describes the quality or preciseness of the parameter estimation.
Is this true? Do you plan to make this value available through the UI?
Could you give me a starting point if I wanted to do that on my own?
thnx for your help
knut

Hi Knut,

the calibrateCamera call returns a value that indicates how precise the calibration is.
If you go to the DAT called Calibrate, find the function of the same name:

[code]
def Calibrate(self):
	fov = 180
	pWidth = int(op('monitors1')[parent.Kinect.par.Monitor+1,'width'])
	pHeight = int(op('monitors1')[parent.Kinect.par.Monitor+1,'height'])
	size = (pWidth,pHeight)
	ret, mtx, dist, rvecs, tvecs = self.calibrateCamera(self.objPoints, self.imgPoints)
	rot, jacob = cv2.Rodrigues(rvecs[0],None)

	extrinsic = self.returnExt(rot, tvecs[0])
	intrinsic = self.returnIntrinsics(mtx, size)[/code]

the “ret” should be this value, so if you add:

[code]parent.Kinect.par.Message = 'Calibration Error: {0}'.format(ret)[/code]

it should output it to the little Message field on the parameters.

[code]
def Calibrate(self):
	fov = 180
	pWidth = int(op('monitors1')[parent.Kinect.par.Monitor+1,'width'])
	pHeight = int(op('monitors1')[parent.Kinect.par.Monitor+1,'height'])
	size = (pWidth,pHeight)
	ret, mtx, dist, rvecs, tvecs = self.calibrateCamera(self.objPoints, self.imgPoints)
	rot, jacob = cv2.Rodrigues(rvecs[0],None)

	extrinsic = self.returnExt(rot, tvecs[0])
	intrinsic = self.returnIntrinsics(mtx, size)

	parent.Kinect.par.Message = 'Calibration Error: {0}'.format(ret)[/code]
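For reference, the "ret" value is OpenCV's RMS reprojection error in pixels: the root-mean-square distance between the detected grid points and where the estimated camera model reprojects them. A minimal numpy sketch of what that number measures (the point coordinates are made up for illustration):

```python
import numpy as np

def rms_reprojection_error(observed, reprojected):
    """RMS pixel distance between detected grid points and the points
    reprojected through the estimated camera model -- this is what
    cv2.calibrateCamera returns as its first value."""
    diff = np.asarray(observed, float) - np.asarray(reprojected, float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

# Toy example: every reprojected point is off by exactly 1 pixel in x.
obs = [(100.0, 100.0), (200.0, 100.0), (200.0, 200.0)]
rep = [(101.0, 100.0), (201.0, 100.0), (201.0, 200.0)]
print(rms_reprojection_error(obs, rep))  # -> 1.0
```

Smaller is better; as a rough rule of thumb, values well under a pixel indicate a solid solve.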

Will add this in a later release!

HI Markus
thnx. That just works. I will now play around for a while to find out what this error means in
my environment and whether this number allows me to find precise results faster… something
like a criterion to stop calibrating, a little more accurate than your advice to collect something like 10-12 point pairs.
Greetings knut

I’m a little new to TD. This is amazing and I was able to accurately sync the skeleton to myself in real time. I guess my question is - how do I bring this data into other work?

Would someone be able to create a very simple file that creates a circle TOP and then uses this projector calibration tox to align it to the right hand? I was able to successfully use it but wasn’t sure how to integrate it. Thanks so much!

Hey,

there is a skeleton inside the network. Go to kinectCalibration/projectorView: there you will see a Kinect CHOP called kinect1 which reads all the channels from a skeleton. Theoretically you can just select out the wrist channel and use the same network to do what you want. Alternatively, just copy that section out into your own scene.
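For anyone wanting to understand what "using the calibration as a camera" means mathematically: the calibration yields an intrinsic and an extrinsic matrix, and mapping a 3D joint (e.g. the wrist position from the Kinect CHOP) to projector pixels is a standard pinhole projection. A hedged numpy sketch - the function name, matrix shapes, and values are hypothetical, not the component's actual API; inside TouchDesigner this math happens implicitly when you render through the calibrated camera:

```python
import numpy as np

def project_joint(joint_xyz, intrinsic, extrinsic):
    """Project a 3D Kinect-space joint into projector pixel coordinates
    using a 3x3 intrinsic and 4x4 extrinsic matrix (hypothetical helper,
    illustrating the pinhole model the calibration is based on)."""
    p = np.append(np.asarray(joint_xyz, float), 1.0)  # homogeneous coords
    cam = (extrinsic @ p)[:3]                         # into camera space
    u, v, w = intrinsic @ cam                         # apply intrinsics
    return (u / w, v / w)                             # perspective divide

# Identity extrinsic (camera at origin), 500 px focal length,
# principal point at the center of a 1920x1080 image:
K = np.array([[500., 0., 960.], [0., 500., 540.], [0., 0., 1.]])
E = np.eye(4)
print(project_joint((0.1, 0.2, 1.0), K, E))  # -> (1010.0, 640.0)
```

In practice you would simply parent your circle TOP's geometry to the wrist position and render with the calibrated camera, as the example network does.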

Best Markus

Thank you for your reply! Do you mean, copy the section from kinect1 to null2?

I don’t know if you had a moment where you might be able to create a very quick file with this - maybe map the banana to the right hand, so that after the calibration is complete, the banana follows the hand? Just so I could reference and learn from it. I would super appreciate it if you could help with that.

If not, totally cool and I appreciate that this exists. Just trying to connect the dots that I’m missing to apply it properly.

Or what I meant to ask: once it’s calibrated, you said that it works as a camera. I was just wondering how to integrate it as such. I was assuming it would be able to track my hands and keep content accurately on my hand even if I move along the z axis a little bit?

If you happen to have an example of it being used in such a way, or any example files that demonstrate using this tox file, it would really be helpful as I could analyze it! Thanks so much!

Posted a crucial fix: for builds 2018.27550 and upwards, the resulting matrix had its rows and columns switched, creating a bad calibration result.
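In case anyone wants to sanity-check an old result: the symptom described is a plain matrix transpose. Illustrative only, in numpy terms:

```python
import numpy as np

# A calibration matrix written out row-major but read back column-major
# ends up transposed -- which scrambles the resulting camera entirely.
m = np.arange(16, dtype=float).reshape(4, 4)   # stand-in 4x4 matrix
swapped = m.T                                  # what the broken builds produced
print(np.allclose(swapped.T, m))               # transposing again restores it
```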

You can download the latest version from here: viewtopic.php?f=22&t=12895&p=49372#p49372

Best
Markus

Hey,

here is a little example showing how you would use the Kinect Calibration Component as a Camera.

Best
Markus
kinectRightHandExample.toe (21.9 KB)

thnx for the new version. Do I always need to use the latest TD version for that, or does it work for older versions as well?
knut

Hi Knut,

you will have to use 2018.27550 (latest official).

cheers
Markus

Hi Snaut,

Thank you for this tool.

This was my first post, but I found what I did wrong:

My data points were too close together. Once I spread the points all over the place, it worked.

Kind regards
Groeten
Gertjan

Thanks for this! It works fine with calibration and everything! Really good job!

But I experience a bit of delay. I guess that is hard to avoid, but is there a way to at least minimize it? At the moment, if I draw lines between all the joints to create a stick man and project it back onto my body, the delay is pretty noticeable unless you move really slowly.

thanks

It worked perfectly fine at home, but at the theatre where I will run a show I constantly get the error message “error too high, try again” when I try to calibrate.

Anyone who knows what that means?

Hey Acroscene,

I would first check the lighting conditions and whether the camera sees the checkerboard clearly. I also find it useful to project the checkerboard onto a black surface rather than a white one. You can play around with the Grid Level parameter to see if you get better results.
Also, projecting onto reflective material can be difficult.

Essentially the message tells you that the reprojection error is too high: the combined errors are so large that the algorithm can’t return a satisfactory solution for the camera position.

Regarding the delay, this is how TouchDesigner receives the data from the Kinect. The delay can be reduced by turning off Joint Smoothing in the Kinect CHOP though…

Best
Markus

Hi Markus
I would like to project the mask that I derive from the Kinect player index onto the moving player.
Is there a way that I can use the camera data from this calibration for that purpose?
greetings knut

Hi Knut,

that should be possible. Your Kinect is assumed to be positioned at the root (0,0,0), looking straight down the z axis. The trick would be to position a rectangle in front of the Kinect (straight down the z axis, no other translation or rotation) that holds the video texture from the Kinect. A good question is how far out it would have to be - it needs to be far enough to be seen by the TouchDesigner camera. You may have to play around with distance and size a bit to get it right…
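If it helps narrow down the "play around" part: for a pinhole camera, the size a rectangle must have to fill the view at a given distance follows directly from the field of view. A small sketch, assuming a known vertical FOV (the FOV value, aspect ratio, and helper name are assumptions, not taken from the component):

```python
import math

def rect_size_at_distance(distance, fov_v_deg, aspect=16/9):
    """Width and height a rectangle needs to exactly fill the view of a
    pinhole camera with vertical FOV fov_v_deg at the given distance.
    (Hypothetical helper -- the component derives its own camera FOV.)"""
    height = 2.0 * distance * math.tan(math.radians(fov_v_deg) / 2.0)
    return (height * aspect, height)

# A rectangle 2 m down the z axis, assuming a 60 degree vertical FOV:
w, h = rect_size_at_distance(2.0, 60.0)
print(round(w, 3), round(h, 3))
```

Starting from a value like this and then nudging distance and scale by eye should get the texture lined up faster than pure trial and error.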

Best
Markus