8K HapQ

About to start on an 8K video wall, a 4 x 3 grid of 1920x1080 outputs. I'd like to encode the entire video as one movie, but would I be better off splitting the image into sections? It will be running on a fairly decent machine: i7, GTX 780, loading from SSD.

You might need to RAID 0 some SSDs, but otherwise it should be fine. We generally split them for the convenience of not having a single gigantic file.
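For a rough sense of why RAID 0 comes up, here is a back-of-envelope estimate. It assumes HAP Q's DXT5 (YCoCg) payload of 8 bits per pixel and 30 fps playback (the frame rate wasn't stated); actual files are smaller because Snappy compresses the DXT data further, by a content-dependent ratio.

```python
# Rough disk-bandwidth estimate for a 4 x 3 wall of 1920x1080 outputs.
# Assumes HAP Q's DXT5 (YCoCg) payload of 1 byte per pixel, before Snappy.
width, height, fps = 4 * 1920, 3 * 1080, 30
bytes_per_frame = width * height              # DXT5 = 1 byte per pixel
mb_per_second = bytes_per_frame * fps / 1e6
print(f"~{bytes_per_frame / 1e6:.1f} MB/frame, ~{mb_per_second:.0f} MB/s at {fps} fps")
# ~24.9 MB/frame and ~746 MB/s of uncompressed DXT data, which is more than a
# single SATA SSD can sustain, hence pairing drives in RAID 0 for headroom.
```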

If you are using TD to encode the files with the newer official builds (20000+), I automatically split the file into 12 sections internally, so when they get decoded, 12 threads are used to decode them. There are some cases where that isn't enough, so using 2 files may potentially be worth it, but I wouldn't go beyond that for any performance reason.
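To make the "12 threads" idea concrete, here is a minimal sketch of the decode side, assuming the python-snappy package and a frame already read from disk as a list of compressed chunks. The names are illustrative, not TD internals, and how much real parallelism Python gives you depends on the extension releasing the GIL; the sketch is about the data layout, not performance.

```python
# Illustrative sketch of chunked decoding -- not TouchDesigner's actual code.
# Each chunk is an independently Snappy-compressed slice of the frame's DXT
# data, so the chunks can be decompressed in parallel and then reassembled.
import snappy                                    # pip install python-snappy
from concurrent.futures import ThreadPoolExecutor

def decode_chunked_frame(chunks: list[bytes]) -> bytes:
    with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
        # map() preserves chunk order, so the DXT slices join back correctly.
        pieces = list(pool.map(snappy.uncompress, chunks))
    return b"".join(pieces)                      # full DXT5 frame, ready for GPU upload
```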

Did anything else implement your HAP tweaks yet?

It’s using chunks to allow Snappy decompression on multiple threads, right?

So it starts on one thread reading from the disk, and ends on one thread uploading to the GPU anyway.

Side question - do the ‘async upload’ options take advantage of the dual copy engines in Quadro GPUs?

Bruce

PS - not just geeking out, but we're big Hap fans right now - at least HapQ.

PPS - Any word on HapQ+ Alpha?

Thanks Malcolm, can you explain your internal and automatic splitting process?

It's something I got added to the HAP spec a while back. The latest plugins they released can decode it, but nothing else encodes using it yet that I know of.
The idea is that HAP is a two-stage compression: Snappy CPU compression of DXT5 YCoCg frames. Originally the Snappy compression was a single large piece, but the new spec allows it to be split into any number of pieces, and each piece can be decompressed by a separate CPU thread. TouchDesigner will always split the files into 12 pieces.
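As a rough sketch of what that split looks like on the encode side (illustrative only, assuming the python-snappy package; a real encoder would split on DXT block boundaries, which this toy version ignores):

```python
# Illustrative sketch of the encode-side split -- not the actual HAP/TD encoder.
import snappy  # pip install python-snappy

def encode_chunked(dxt_frame: bytes, num_chunks: int = 12) -> list[bytes]:
    """Cut one frame's DXT data into num_chunks pieces, Snappy-compressing each
    independently so a decoder can hand one piece to each CPU thread."""
    size = max(1, -(-len(dxt_frame) // num_chunks))   # ceiling division
    return [snappy.compress(dxt_frame[i:i + size])
            for i in range(0, len(dxt_frame), size)]
```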
An unofficial HapQ+Alpha exists in TD, but it's very, very slow to encode: 4 seconds per frame for 1920x1080 video.