Kinect2 - Projector Calibration

Hi Markus,
thanks. That just works. I will now play around for a while to find out what this error means in
my environment and whether this number allows me to find precise results faster… something
like a criterion for stopping calibration, a little more accurate than your advice to collect something like 10-12 point pairs.
Greetings, Knut

I’m a little new at TD. This is amazing, and I was able to accurately sync the skeleton to myself in real time. I guess my question is: how do I transfer this data into other work?

Would someone be able to create a very simple file that creates a Circle TOP and then uses this projector calibration tox to align it to the right hand? I was able to use it successfully but wasn’t sure how to integrate it. Thanks so much!

Hey,

There is a skeleton inside the network. Go to kinectCalibration/projectorView: there you will see a Kinect CHOP called kinect1 which reads all the channels from a skeleton. Theoretically you can just select out the wrist channels and use the same network to do what you want. Alternatively, just copy that section out into your own scene.
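
A minimal sketch of that idea, assuming the Kinect CHOP’s usual channel naming (p1/wrist_r:tx etc.) and a hypothetical Geometry COMP called hand_geo holding whatever you want to render through the calibrated camera:

```python
# Hedged sketch: drive a geometry from the right-wrist joint each frame,
# e.g. from an Execute DAT's onFrameStart callback.
# 'kinect1' is the Kinect CHOP inside kinectCalibration/projectorView;
# 'hand_geo' is a hypothetical Geometry COMP of your own.
k = op('kinect1')
geo = op('hand_geo')

geo.par.tx = k['p1/wrist_r:tx'].eval()
geo.par.ty = k['p1/wrist_r:ty'].eval()
geo.par.tz = k['p1/wrist_r:tz'].eval()
```

Since the calibration component behaves like a camera, geometry placed at these world coordinates should land on the real hand once rendered and sent to the projector.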

Best Markus

Thank you for your reply! Do you mean copy the section from kinect1 to null2?

I don’t know if you might have a moment to create a very quick file with this - maybe map the banana to the right hand, so that after the calibration is complete, the banana follows the hand? Just so I could reference it and learn from it. I would super appreciate it if you could help with that.

If not, totally cool and I appreciate that this exists. Just trying to connect the dots that I’m missing to apply it properly.

Or rather, what I meant to ask: once it’s calibrated, you said that it works as a camera. I was just wondering how to integrate it as such. I was assuming it would be able to track my hands and keep the image accurately on my hand even if I move on the z axis a little bit?

If you happen to have an example of it being used in such a way, or any example files that demonstrate using this tox file, it would really be helpful as I could analyze it! Thanks so much!

Posted a crucial fix: for builds 2018.27550 and upwards, the resulting matrix had its rows and columns switched, creating a bad calibration result.

You can download the latest version from here: viewtopic.php?f=22&t=12895&p=49372#p49372
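
For context, a matrix with rows and columns switched is simply the transpose of the intended matrix. A minimal TouchDesigner Python illustration of the concept (the values and the component’s internals here are made up):

```python
# Hedged sketch: undoing a row/column swap on a 4x4 transform.
# 'vals' is a stand-in for the 16 values a solver might return.
vals = [1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        2, 3, 4, 1]
m = tdu.Matrix(*vals)  # fill a TouchDesigner matrix from 16 values
m.transpose()          # swap rows and columns in place
```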

Best
Markus

Hey,

here is a little example showing how you would use the Kinect Calibration Component as a Camera.

Best
Markus
kinectRightHandExample.toe (21.9 KB)

Thanks for the new version. Do I always need to use the latest TD version for it, or does it work with older versions too?
Knut

Hi Knut,

You will have to use 2018.27550 (latest official).

cheers
Markus

Hi Snaut,

Thank you for this tool.

This was my first post, but I found what I did wrong:

My data points were too close together; once I spread the points all over the place, it worked.

Kind regards
Greetings
Gertjan

Thanks for this! It works fine with calibration and everything! Really good job!

But I experience a bit of delay. I guess that is hard to avoid though… But is there a way to at least minimize the delay? At the moment, if I am just drawing lines between all the joints, creating a stick man, and projecting it back onto my body, the delay is pretty noticeable unless you move really slowly.

thanks

It worked perfectly fine at home, but at the theatre where I am going to run a show, I constantly get the error message “error to high, try again” when I am trying to calibrate.

Anyone who knows what that means?

Hey Acroscene,

I would first check the lighting conditions and whether the camera sees the checkerboard OK. I also find it useful to project the checkerboard onto a black surface rather than a white one. You can play around with the Grid Level parameter to see if you get better results.
Also, projecting onto reflective material can be difficult.

Essentially the message tells you that the reprojection error is too high, meaning the algorithm figures the combined errors are so large that it can’t return a satisfactory solution for the camera position.
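
For anyone curious what that number measures, here is a minimal sketch with plain OpenCV; all data below are made-up stand-ins for the collected point pairs and the solver output, not the component’s actual internals or threshold:

```python
import numpy as np
import cv2

# Hedged sketch: reprojection error is the RMS pixel distance between the
# recorded 2D points and the collected 3D points projected through the
# solved camera.
obj_pts = np.random.rand(10, 3)               # 3D points (stand-in)
camera_matrix = np.array([[1000., 0., 640.],
                          [0., 1000., 360.],
                          [0., 0., 1.]])      # projector intrinsics (stand-in)
dist_coeffs = np.zeros(5)
rvec = np.zeros(3)                            # solved pose (stand-in)
tvec = np.array([0., 0., 2.])

# Recorded projector pixels: here the true projections plus a little noise.
img_pts, _ = cv2.projectPoints(obj_pts, rvec, tvec, camera_matrix, dist_coeffs)
img_pts = img_pts.reshape(-1, 2) + np.random.randn(10, 2)

reprojected, _ = cv2.projectPoints(obj_pts, rvec, tvec, camera_matrix, dist_coeffs)
diff = reprojected.reshape(-1, 2) - img_pts
rms = np.sqrt((diff ** 2).sum(axis=1).mean())
print(f'reprojection error: {rms:.2f} px')    # "too high" = above some threshold
```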

Regarding the delay, this is just how TouchDesigner receives the data from the Kinect. The delay can be reduced by turning off Joint Smoothing in the Kinect CHOP though…
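
If you want to flip that switch from a script rather than the parameter dialog, one way that avoids guessing the parameter’s exact scripting name (which this sketch deliberately does not assume) is to match on its label:

```python
# Hedged sketch: disable Joint Smoothing on the Kinect CHOP by parameter
# label, since the scripting name is not assumed here.
k = op('kinect1')
for p in k.pars('*'):
    if p.label == 'Joint Smoothing':
        p.val = False
```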

Best
Markus

Hi Markus
I would like to project the mask that I derive from the Kinect player index onto the moving player.
Is there a way that I can use the camera data out of this calibration for that purpose?
greetings knut

Hi Knut,

That should be possible. Your Kinect is assumed to be positioned at the root (0,0,0), looking straight down the z axis. The trick would be to position a rectangle in front of the Kinect (straight down the z axis, no other translation or rotation) that holds the video texture from the Kinect. A good question is how far out it would have to be - it needs to be far enough to be seen by the TouchDesigner camera… Maybe you would have to play around with distance and size a bit to get it right…
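
One way to take some of the guesswork out of the size: at distance d, a rectangle must span 2·d·tan(FOV/2) per axis to fill the view. A minimal sketch, assuming the commonly cited Kinect2 depth-camera field of view of roughly 70.6 x 60 degrees and a hypothetical Geometry COMP named mask_geo holding the mask texture:

```python
import math

# Hedged sketch: size a rectangle at distance d so it fills the Kinect's
# view frustum. FOV values assume the Kinect2 depth camera; adjust for
# your device and check against the actual image coverage.
d = 2.0                                    # meters down the -z axis
h_fov, v_fov = 70.6, 60.0                  # degrees (assumed)
width = 2 * d * math.tan(math.radians(h_fov / 2))
height = 2 * d * math.tan(math.radians(v_fov / 2))

rect = op('mask_geo')                      # hypothetical Geometry COMP
rect.par.tz = -d                           # straight down the z axis
rect.par.sx = width                        # assumes a unit-sized rectangle SOP
rect.par.sy = height
```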

Best
Markus

Hi Markus
OK i understand. I will try that.
Thnx for your help
Knut

Does this work for Player Index? If so, any pointers please.

Thanks

Justin

I tried to follow the advice from Markus with no success.
I need a little bit of time to understand where the problem is, and will come back with questions if necessary.
Knut

I was trying to get this to work with the Intel RealSense for a while.

I couldn’t get any reliable results. I consider that two things may be the issue.
It may be that the camera FOV is different for the Kinect vs. the RealSense, so I tried the different OpenCV toggles to guess focal length and aspect; that didn’t help.

The next guess was that the RealSense RGB camera doesn’t have the same FOV as the 3D depth image. I would guess that when the point pairs are compared, it is sampling an incorrect position.
I tried the “depth aligned to color” setting, but it seems like the point cloud TOP doesn’t have this option. The “color aligned to depth” setting doesn’t find any checkerboards.
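
For reference, outside of TouchDesigner the RealSense SDK exposes exactly this registration step; a minimal pyrealsense2 sketch (stream resolutions are illustrative):

```python
import pyrealsense2 as rs

# Hedged sketch: align the depth stream to the color stream so that a pixel
# in the color image and in the depth image refer to the same point in space.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)        # map depth pixels into the color frame
frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth_frame = aligned.get_depth_frame()  # depth now registered to the color image
pipeline.stop()
```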

Any more suggestions about what to try to get the realsense to calibrate?

I was also guessing that the depth data would have different values, considering the RealSense can be used at closer proximity. But that shouldn’t matter; the calibration shouldn’t need actual distances, should it?

Sometimes it appeared that the depth data was not found on the white board I was holding. Would that record a 3D position of -1, or discard the point pair?

I don’t have a RealSense so I can’t comment on that.

But here are some learnings from using the Kinect a lot:

  1. 7 to 10 point pairs are normally sufficient with a Kinect 2.
  2. Lots of problems if I try to get data when using a short-throw projector. If I include data
    from checkerboards that are closer to the edges of the projection area, the data becomes unreliable - the projected image does not hit the person any more.
    I never saw that on projectors with a “normal” lens. It seems to be a problem of the projector’s lens,
    although I would expect the calibration to cover that…
  3. Still problems using the Kinect player index to project a mask onto the person. I can’t get it
    to work in my environment.

Good luck