Two Kinects (v2)

You’re limited by the SDK to one Kinect v2 per machine:

“Kinect for Windows supports one sensor, which is called the default sensor”

msdn.microsoft.com/en-us/library/dn782033.aspx

I know some people working with lots of Kinect 2s who have put each one on one of those new NVIDIA Jetson TX1s.

Hi elburz

Thanks for your reply. I need to investigate nVidia Jetson TX1…

Best
In Dae

They will work, but they require a separate USB 3 card (a Rosewill USB 3 card was recommended to me) and some extra development. For simplicity, though, you can get an Intel NUC or a Zotac small-form-factor machine; they’ll be able to run the extra Kinect v2s, and since they already run Windows you can use the native Kinect SDK and TouchDesigner to retrieve and parse the data.

Thanks for the info, Elburz.

Here’s the project my colleagues at UCLA are working on which will have more info on using Jetson’s:
openptrack.org

But again, if you want the simple route, it’s probably just easier to buy some NUCs, Zotac EN970s, or similar machines.

Nice topic! How do you pass a Kinect texture from one computer to another?

Ian Shelanskey has been working on building an OPT C++ CHOP for easier parsing in Touch. I’ll try to get that up here once it’s live.

***Edit: Turns out I wasn’t paying attention, and the link below requires multiple machines as well. Never mind, but have a look anyway because it’s cool.

***original message below

I don’t know jack, but it seems to me that two sensors may be possible.
Brekel has a beta out right now, for its paying customers only, that supports multiple Kinect 2s. His current software requires Kinect SDK 2.0, so perhaps there is a way.

[video]https://youtu.be/1bodwMQ2zv8[/video]

brekel.com/multi-sensor/

Looks cool for calibration for sure. Not sure why MS limited devices in the SDK. A definite pain in the ass.

I’ve been playing with streaming a second Kinect pointcloud from an Intel NUC.

[video]2 Kinect Pointcloud Merge Test - TouchDesigner - YouTube[/video]

The best way I’ve figured out so far is Pack TOP → Touch Out TOP → [LAN] → Touch In TOP → Pack TOP… it has decent quality uncompressed, but it’s super laggy. Maybe it’s just the little i3 in my NUC, but I imagine that super-high-res image is heavy for a single stream. Any suggestions?
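
In case it helps anyone reproduce this, here’s roughly how I’d script both ends of that chain in TD Python. Treat the operator class names (kinectTOP, packTOP, touchoutTOP, touchinTOP) and the parameter names as assumptions from memory — check them against your build before relying on it:

```python
# TD Python sketch of the sender side (on the NUC).
# Operator class/parameter names are from memory; check them in your build.
sender = op('/project1')

kinect = sender.create(kinectTOP, 'kinect1')
# Set the Kinect TOP's Image parameter to "Color Point Cloud" in the UI
# (I'm not certain of the menu token, so I'm not setting it from script).

pack = sender.create(packTOP, 'pack1')
pack.inputConnectors[0].connect(kinect)

tout = sender.create(touchoutTOP, 'touchout1')
tout.inputConnectors[0].connect(pack)
tout.par.port = 10000          # parameter name assumed; match it on the receiver

# And the receiver side (on the desktop):
receiver = op('/project1')

tin = receiver.create(touchinTOP, 'touchin1')
tin.par.address = 'nuc.local'  # placeholder hostname/IP of the NUC
tin.par.port = 10000

unpack = receiver.create(packTOP, 'unpack1')
unpack.inputConnectors[0].connect(tin)
# Set unpack1's mode to "Unpack RGB" in the parameter dialog, then feed it
# into the point cloud renderer.
```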

What happens if you reduce the resolution?

What hardware are you using for the network?

Hey Will,
I actually have a Kinect hooked up to an Intel NUC (i7) setup for a current project. I’m running a lot of the computations I need on the NUC itself, but I’m sending RGB, IR, and tracking data feeds over Cat6 with no problems. I think at one point I had tried sending the packed depth data and it worked fine, but I’ll do a more intentional test later today and report back.

EDIT

So I tested the following setup:
Kinect v2 on Intel NUC (i7) → Cat6 → Desktop

TEST:
On the Intel NUC…
Kinect TOP (sending Color Point Cloud) → Pack TOP → Touch Out TOP.

On the Desktop…
Touch In TOP → Pack TOP (set to Unpack RGB) → Malcolm’s example pointcloud.toe (from the “Kinect Point Cloud texture” post).

Observations:

  • The Desktop only received usable data when the NUC’s Touch Out was set to Uncompressed. I’m assuming this is because the Pack TOP uses the alpha channel and HAP does not support alpha.
  • It was laggy, similar to the video you posted.
  • Maybe the 5760 x 1080 resolution from the Pack TOP is crushing the NUC’s iGPU? Performance on the NUC seems to tank once I add that to the mix. Sending the uncompressed 32-bit float RGB texture of the Color Point Cloud also seems to slow things down at the network level (rough bandwidth math after this list).
  • Halving the resolution somewhat improved performance, but it still wasn’t smooth (and that’s not ideal).
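
For what it’s worth, here’s the back-of-the-envelope bandwidth for that packed stream. Assumptions: the Pack TOP output is 8-bit RGBA, Touch Out is truly uncompressed, and a 30 fps target.

```python
# Back-of-the-envelope bandwidth for the uncompressed packed stream.
# Assumptions: Pack TOP output is 8-bit RGBA, no compression, 30 fps.
width, height = 5760, 1080         # packed Color Point Cloud resolution
bytes_per_pixel = 4                # 8-bit RGBA
fps = 30

frame_bytes = width * height * bytes_per_pixel       # ~24.9 MB per frame
stream_bits = frame_bytes * fps * 8                   # ~6 Gbit/s

gigabit = 1_000_000_000                                # 1 Gbit/s line rate
print(f"frame: {frame_bytes / 1e6:.1f} MB")
print(f"stream: {stream_bits / 1e9:.1f} Gbit/s (~{stream_bits / gigabit:.0f}x gigabit Ethernet)")
```

So even ignoring the iGPU, an uncompressed stream at that resolution needs roughly six times what a gigabit link can carry, which lines up with the lag we’re both seeing.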

Hope this is helpful!

What about trying the new NDI OPs, since they transmit alpha?

The NDI In and Out TOPs are really exciting. From an initial look at the NDI TOPs, it seems like they send an 8-bit fixed RGBA texture, which may still require the Pack TOP on the NUC and therefore the 5760x1080 resolution. We just deinstalled and are on another project now, but I’ll give it a shot on the NUC as soon as I get a chance. Regardless, I’ll definitely be using the NDI TOPs moving forward. :smiley:

Ben, is there any way to access the shader code and/or matrices used to align the Kinect v2’s 512x424 depth image with the 1920x1080 RGB image for the color point cloud image? I’m wondering if it might be possible to send the smaller depth and RGB textures over the network and do the alignment after receiving it, since the NUC seems to handle those alright.

NDI is a lossy compressed format as far as I can tell, so you can’t pack pixels with it. Pixel packing only works with uncompressed or lossless compression.
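
To make that concrete, here’s a toy Python illustration. I’m assuming a byte-splitting scheme for the sake of the example (not claiming that’s exactly what the Pack TOP does internally): with lossless transport the value comes back exact, but a one-step error from a lossy codec in the wrong channel lands in the float’s exponent bits and destroys the value.

```python
import struct

# Hedged illustration: pack the 4 bytes of a 32-bit float into the
# R, G, B, A channels of an 8-bit pixel (one possible scheme, not
# necessarily what the Pack TOP does internally).
depth_m = 1.2345                            # a depth sample in meters
r, g, b, a = struct.pack('<f', depth_m)     # 4 bytes -> 4 channels

# Lossless transport: bytes come back untouched, float is exact.
exact = struct.unpack('<f', bytes([r, g, b, a]))[0]

# Lossy transport: even a +/-1 nudge in one channel corrupts the value.
nudged = struct.unpack('<f', bytes([r, g, b, (a + 1) % 256]))[0]

print(exact)    # 1.2345...
print(nudged)   # ~4x too large here; that byte holds the sign/exponent bits
```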

I just checked, and unfortunately no, you can’t do the remapping on a second machine. The code that does it is hidden by Microsoft and comes from the sensor directly (since each sensor likely has a slightly different remapping).

So far it seems we’ve all been talking about sending the second point cloud over the network, but what about via a capture card? With a high-quality capture card, could you pack 1080p 32-bit to 4K 8-bit on the sending machine, then capture that with a 4K capture card and unpack it back to 32-bit?

I only have some 1080p Blackmagic Intensity cards, so I can’t test. I remember using a Datapath card once that had a “force 444” mode we used to ensure an accurate checkerboard mask for a 3D screen. Is there a capture card that would preserve the 8-bit image well enough to unpack it back to 32-bit?

My understanding of the lower-level process is lacking, but I’m wondering: is there a way to do this with pixel math as opposed to bitwise math? I.e., create four 8-bit images that would additively composite together to represent the original 32-bit image? Maybe this would be noisier, but I’ve had times when unpacking failed altogether because the bits didn’t line up(?), giving a blank image. I’d take noisy over nothing… just an idea.
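
To illustrate what I mean, here’s a rough NumPy sketch, purely hypothetical (the layer count, normalization, and depth range are just assumptions): each 8-bit layer refines the previous one, so a one-step error in any layer only shifts the result by that layer’s step size instead of producing garbage.

```python
import numpy as np

# Hedged sketch of the "pixel math" idea: encode depth as a sum of
# progressively finer 8-bit layers instead of splitting raw bits.
max_depth = 8.0                      # Kinect v2 range is roughly 0.5-8 m

def encode_layers(depth, n_layers=4):
    layers, remainder = [], depth / max_depth        # normalize to 0..1
    for i in range(n_layers):
        step = 1.0 / (256 ** (i + 1))                # each layer refines by 1/256
        q = np.clip(np.floor(remainder / step), 0, 255).astype(np.uint8)
        layers.append(q)
        remainder = remainder - q * step
    return layers

def decode_layers(layers):
    total = np.zeros(layers[0].shape, dtype=np.float64)
    for i, q in enumerate(layers):
        total += q.astype(np.float64) / (256 ** (i + 1))
    return total * max_depth

depth = np.array([[1.2345, 4.5678]])
layers = encode_layers(depth)
print(decode_layers(layers))          # close to the original values

layers[0] = layers[0] + 1             # corrupt the coarsest layer by one step
print(decode_layers(layers))          # off by max_depth/256 (~3 cm), not garbage
```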

I’ve made some progress, but it’s not a final answer,
and I’m very busy with this project right now. Maybe I’ll share the code after this week (after refining it) and someone else can refine it further.

[video]Derivative 05 29 2017 _TD_TWO_KINECT_V2 - YouTube[/video]
[video]Derivative 05 29 2017 05 33 40 28 - YouTube[/video]

(the inverted colors are the second Kinect)


Very nice! I’d be very interested to learn more about your workflow when you have the time. :nerd:

I’m curious about the EF EVE software; somehow they created a driver that can use two Kinects on the same machine: experimental-foundation.com/

I also found some information about plugging in two Kinects, each one on a different USB bus.