Depth Camera for large area

Hi guys,

Can you lead me through a setup (with depth cameras, I suppose) that can track the positions of multiple users at the same time in a fairly large area (50 m²)? Hopefully something that is compatible with Mac and TD.

Thanks.

It depends on the size of the area you want to track, but for 50 m² you might consider something like OpenPTrack:

openptrack.org/

Ian Shelanskey made a C++ CHOP for it here:
ianshelanskey.com/2017/03/27/tr … nt-clouds/

That would offload all of your tracking to a dedicated system of machines built for just that, and in Touch you can just receive the locations of participants.
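If it helps to picture that handoff: the tracker nodes publish track data over the network, and on the Touch side you mostly just pull IDs and positions out of each message. A minimal parsing sketch, assuming a hypothetical JSON payload shape (check the OPT docs for the real wire format):

```python
import json

def parse_tracks(payload: str):
    """Extract (track_id, x, y) tuples from a tracker message.

    NOTE: this payload shape is a made-up example for illustration,
    not the exact OpenPTrack wire format.
    """
    msg = json.loads(payload)
    return [(t["id"], t["x"], t["y"]) for t in msg.get("tracks", [])]

# Hypothetical sample message from a tracker node:
sample = '{"tracks": [{"id": 1, "x": 1.2, "y": 3.4}, {"id": 2, "x": -0.5, "y": 2.0}]}'
print(parse_tracks(sample))  # [(1, 1.2, 3.4), (2, -0.5, 2.0)]
```

In Touch you'd do the equivalent in a DAT callback and write the positions into a table or CHOP channels.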

OpenPTrack seems really great, particularly for tracking groups of people. Another idea is the Leuze ROD4, which is particularly nifty if you have little infrastructure/time to set up sensors and only need 2D position tracking. derivative.ca/wiki088/index … _ROD4_CHOP

Thanks for the information guys.

To be honest, I didn’t think I would need a system of multiple machines for this. I’m only scratching the surface in this regard, so I apologise for asking noob questions.

OpenPTrack seems like the way to go for my needs.

Do you think the Kinect will do the job, since the other sensors tested are either discontinued or cost quite a lot?

I’d say that your sensor choice really depends on what your mounting situation is going to look like. Thinking through the FOV of the Kinect and the number of sensors needed to get the coverage you need will be a big piece of this. Large-area tracking is always more complicated than you imagine it’s going to be, especially if you want persistence of participant IDs across the whole area.

You might start by building out a simple model of your space, and then build out a few models of possible cameras with their FOV represented as a frustum - that will let you see in a model what kind of sensor density you’ll need in order to get the fidelity you want / need. I think the Kinect is a great solution as long as it’ll meet your needs in terms of what it can see (the height of your ceilings and where you mount cameras will be an important part of the puzzle).
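To make that model concrete, the floor footprint of a downward-facing camera is just trigonometry on its FOV and mount height. A rough sketch, using the commonly cited ~70.6° × 60° depth FOV for the Kinect v2 (verify against your sensor’s spec sheet and actual mounting angle):

```python
import math

def floor_footprint(height_m, hfov_deg, vfov_deg):
    """Floor coverage (width x depth, metres) of a camera pointed
    straight down from height_m. Real coverage shrinks with tilted
    mounts and the sensor's min/max depth range."""
    w = 2 * height_m * math.tan(math.radians(hfov_deg / 2))
    d = 2 * height_m * math.tan(math.radians(vfov_deg / 2))
    return w, d

# Assumed Kinect v2 depth FOV: ~70.6 x 60 degrees, mounted at 6 m
w, d = floor_footprint(6.0, 70.6, 60.0)
print(f"{w:.1f} m x {d:.1f} m per sensor")  # 8.5 m x 6.9 m per sensor
```

Sweeping the mount height and overlap margin in a loop like this gives a quick feel for how many sensors the 50 m² area actually needs before you model anything in 3D.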

I think OPT is also working on support for the NVIDIA Jetson - which would make for a slightly cheaper build-out.

Hope that gets you moving in the right direction!

Thanks again for the prompt and good information Matthew. I’m much closer now than where I was yesterday. Still a long way though.

I will definitely start building a model of space and cameras like you suggested and see from there. Will keep you posted if you guys don’t mind :slight_smile:

Is the NVIDIA Jetson a good compute board to have in each node (machine) that the sensors will be connected to? (But is its support still in progress?)

I couldn’t find contact details on the OPT website for more information.

The github wiki has a little bit more information that might be worth looking over:
github.com/OpenPTrack/open_ptrack/wiki
github.com/OpenPTrack/open_ptra … d-Hardware

The announcement about the GTC presentation:
openptrack.org/2016/03/openptrac … ed-at-gtc/

I think the TK1 is a little underpowered, but the TX2 looks like it might be right. I’d ping them on Twitter or GitHub to get a sense of the status of Jetson support. Regardless, I think you can only use those for nodes; you’ll still need a beefier box to run the calibration and operate the UI.

Ian is another good person to ping about a consultation, as I know he’s recently done some work with REMAP:

ianshelanskey.com/

Thanks a lot again, Matthew. All this information will catapult me towards the goals of this project. You’ve been very helpful. Thanks for everything.

One last thing - I was chatting with someone from REMAP last night and they were a little uncertain about Kinect placement at steep angles. I’d make sure to reach out to their team (you might have to do some digging) and see if you can chat with someone about possible issues you might encounter before spending any cash on hardware.

Hi guys,

I’m currently researching a similar project, so I’m curious: digitalnature, did you go with OPT or the Kinect v2?

Right now I have a 3x3 matrix of Kinect v2 sensors at 6 m height, but I’m trying to work out the computer hardware specs for the nodes. Microsoft recommends an i7 for the Kinect v2, but that seems like overkill for the task.

I’m planning to have the nodes grab the depth from the Kinect, do blob tracking based on a chroma TOP, and send the info via OSC to a master PC.
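For the OSC leg of that pipeline, here’s a minimal sketch of packing blob centroids into OSC 1.0 binary messages by hand, assuming a hypothetical /blob/<id> address pattern. In practice you’d likely use TD’s OSC Out CHOP/DAT or a library such as python-osc on the nodes, so treat this as illustration of the wire format rather than production code:

```python
import socket
import struct

def osc_message(address: str, *floats):
    """Pack a minimal OSC message with float32 arguments.
    OSC strings are null-terminated and padded to 4-byte boundaries;
    floats are big-endian."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    addr = pad(address.encode())
    tags = pad(("," + "f" * len(floats)).encode())
    args = b"".join(struct.pack(">f", f) for f in floats)
    return addr + tags + args

def send_blob(sock, host, port, blob_id, x, y):
    # Hypothetical address pattern -- match whatever your master patch expects.
    sock.sendto(osc_message(f"/blob/{blob_id}", x, y), (host, port))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_blob(sock, "192.168.1.10", 9000, 1, 0.25, 0.75)  # example node/port
```

Sending one message per blob (rather than one bundle per frame) keeps the master-side parsing simple, at the cost of a few more UDP packets.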

Cheers,
Rui