Community Post

Think and Sense

Think and Sense produced a remarkable large-scale point-cloud exhibit and media control system for an LED wall at the entrance of Tokorozawa Campus, a new office of KADOKAWA CORPORATION. The entire end-to-end process, other than the photogrammetry itself, was created in TouchDesigner. The team built "Ambience", their proprietary shader-based system in which everything from real-time visualization to point-animation effects and image effects can be edited in one place, and with it they succeeded in visualizing over 16 million points in real time. The point cloud imagery on display changes constantly based on weather, daylight hours, and other seasonal information obtained over the network, so the atmosphere of Sakuratown depicted on the wall shifts with the seasons of the real-world Sakuratown.

 

Derivative: What operators are you using to animate the turbulence and to blend the points with the captured data sets? Are you using custom GLSL shaders?

Shuhei Matsuyama: Custom GLSL shaders.
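A minimal CPU-side sketch may help picture what such a shader does per point: sample a time-varying noise field at each captured position and blend the offset in. The production version runs in GLSL on the GPU; the noise function and blend weight below are stand-ins, not their code.

    import numpy as np

    rng = np.random.default_rng(0)
    captured = rng.uniform(-1, 1, (10_000, 3)).astype(np.float32)  # stand-in for a scanned point set

    def noise3(p, t):
        # Cheap periodic pseudo-noise standing in for the shader's turbulence field.
        return np.stack([np.sin(p[:, 1] * 3.1 + t),
                         np.sin(p[:, 2] * 2.7 + t * 1.3),
                         np.sin(p[:, 0] * 3.7 + t * 0.7)], axis=1)

    t = 1.5          # animation time in seconds
    blend = 0.25     # 0 = pure captured positions, 1 = fully turbulent
    pos = captured + blend * 0.2 * noise3(captured, t)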

Derivative: How many points per file? How many files did you display at the same time?

Shuhei Matsuyama: We used 2048 x 2048 .EXR files and Ambience runs 3 or 4 files simultaneously. This means we had around 20 million points at 60 frames per second.

Derivative: If I understand correctly, you preload all the files you need when you start TouchDesigner, is that correct?

Shuhei Matsuyama: Yes, our system has a pre-load function. When TouchDesigner starts, all of the point cloud files are loaded into video memory.
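A hedged sketch of what such a pre-load step can look like in TouchDesigner's Python - the folder, operator names, and layout are illustrative, not necessarily theirs:

    # Runs once at startup (e.g. from an Execute DAT's onStart()).
    # Create one Point File In TOP per EXR so every point set is resident
    # on the GPU before playback begins; scene switches then never hit disk.
    import os

    folder = project.folder + '/pointclouds'   # hypothetical asset folder
    base = op('/project1')                     # hypothetical parent COMP

    def onStart():
        for i, f in enumerate(sorted(os.listdir(folder))):
            if not f.endswith('.exr'):
                continue
            top = base.create(pointfileinTOP, 'cloud{}'.format(i))
            top.par.file = folder + '/' + f
        return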

 

This is a clear step forward for the community and for us, as it cracks a few tough nuts:

They had a colored point cloud scanned externally from a series of drone images, and wrote it into a set of point cloud files that TouchDesigner understands. Presto, no modelling of a city required: they showed there is a working pipeline for this kind of thing.

The point cloud data sets are large, so they are stored, loaded and displayed in parts: loaded as needed and mixed as needed. They proved that large data sets can be managed in TouchDesigner, using our newish TOP-based point cloud rendering.

They blend parts of the point clouds together, including randomized, swirling points mixed with the original model (which is what the Fields COMP in Community Posts does), reminiscent of what Kurokawa has done in the past without TouchDesigner.

They use the Animation COMP for animating, built a special interface for photo shoots and other events, plus a UI for scheduling and timing it all. And it is a general system built for multiple projects, which is great: leveraging a built system.

Derivative: Can you explain your process?

Shuhei Matsuyama: The first step was capturing drone footage of the wide Tokorozawa Sakuratown area and then creating point cloud data of the town and its surroundings by photogrammetry from that footage. With that data we visualized over 16 million points in real time using a proprietary point cloud visualizing system built with TouchDesigner. We constructed a real-time system in which the point clouds could be displayed and edited, which is a specialty of THINK AND SENSE. We pursued the approach that would best express the high-definition 6K LED, making the color of the cherry blossom (the "Sakura" in Tokorozawa Sakuratown) the centerpiece of the representation.

Derivative: Can you explain how TouchDesigner is used in the process of creating the work? There are a few parts.

Shuhei Matsuyama: The entire process of point cloud creation was done in TouchDesigner except for the photogrammetry. The movie is rendered in real time, which means the whole process, from render to post-process, was done in TouchDesigner.

Our point cloud system has a number of proprietary functions focused on point expression: point size randomization, point color gradation based on camera distance, and so on.
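As a rough CPU illustration of those two functions - the production versions live in their GLSL, and the colors and ranges below are invented:

    import numpy as np

    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (100_000, 3)).astype(np.float32)  # stand-in cloud
    cam = np.array([0.0, 0.0, 3.0], np.float32)                # camera position

    # Point size randomization: jitter each sprite's size around a base value.
    size = 2.0 * (0.5 + rng.random(len(pos)))                  # 1x to 3x pixels

    # Color gradation by camera distance: near points tend toward sakura pink,
    # far points fade toward white (palette chosen here only for illustration).
    d = np.linalg.norm(pos - cam, axis=1)
    t = np.clip((d - d.min()) / (np.ptp(d) + 1e-6), 0, 1)[:, None]
    sakura = np.array([1.0, 0.72, 0.80]); white = np.ones(3)
    color = (1 - t) * sakura + t * white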

The most significant point of our workflow is that everything runs in real time.

Traditional movie creation is usually split into separate stages. This time, at the onset we created one scene of the whole Sakura Town point cloud that also included post effects. In the next step we considered the story, camera work, and development. At that point we already had high-quality visualizations very close to the final look. I simulated several rough versions of the movie's story using a 3D mouse to control the cameras. This meant there was no need for storyboards or movie editing, and no need for keyframed camera animation. We were able to create roughly edited movies using only a 3D mouse.

Derivative: Can you please explain the photogrammetry workflow?

Shuhei Matsuyama: We took drone footage of the Tokorozawa Sakuratown area, including the surrounding environment, and from those images created 3D point cloud data by photogrammetry. The imagery was imported into Reality Capture to carry out a 3D reconstruction from the multiple data sets, and the point cloud files were then exported for TouchDesigner.

Reality Capture exports .xyz files. We created a converter in TouchDesigner from .xyz (ASCII) to .EXR (binary) files. The .EXR files are read efficiently by the Point File In TOP in the visualization project.
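Outside of TouchDesigner, the same conversion can be sketched with the OpenEXR Python bindings. Their converter was built inside TouchDesigner itself, so the file names, the 2048-square layout, and the position-only channels here are assumptions:

    import numpy as np
    import OpenEXR, Imath

    RES = 2048                                   # one point per pixel, ~4.19M points per file
    pts = np.loadtxt('cloud.xyz', dtype=np.float32, usecols=(0, 1, 2))  # x y z per row

    buf = np.zeros((RES * RES, 3), np.float32)   # pad the tail with zeros
    buf[:min(len(pts), RES * RES)] = pts[:RES * RES]

    hdr = OpenEXR.Header(RES, RES)
    f32 = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))
    hdr['channels'] = {c: f32 for c in 'RGB'}    # XYZ stored in RGB, full float
    out = OpenEXR.OutputFile('cloud_000.exr', hdr)
    out.writePixels({c: buf[:, i].tobytes() for i, c in enumerate('RGB')})
    out.close()

If the .xyz rows also carry per-point color, those columns would go into extra channels or a second EXR.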

The point cloud data was stored as image files (EXR files) and loaded into TouchDesigner TOPs, where each pixel represents one point, to be visualized in real time. Since the point cloud data of Tokorozawa Sakuratown and the surrounding area was huge, we divided it into units the system could process and rendered from those. Switching points between scenes in real time was made possible by loading the point cloud files in advance and swapping them on the fly in TouchDesigner's shader system. We constructed a proprietary shader-based system in which everything from real-time visualization to point-movement effects and image effects can be edited, uniting these functions in one place.
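The switch itself can be pictured as a per-point mix between two resident position textures. A sketch under that assumption; the production blend runs in their GLSL:

    import numpy as np

    # posA, posB: two preloaded point sets of equal size (stand-ins here).
    rng = np.random.default_rng(1)
    posA = rng.uniform(-1, 1, (4096, 3)).astype(np.float32)
    posB = rng.uniform(-1, 1, (4096, 3)).astype(np.float32)

    def switch(posA, posB, w):
        """w ramps 0 -> 1 over the transition; each point slides between scenes."""
        return (1.0 - w) * posA + w * posB

    pos = switch(posA, posB, 0.35)   # a mid-transition frame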

Derivative: How are you rendering the points?

Shuhei Matsuyama: We didn't use geometry shaders, only point sprites and custom shaders to change the point shape from square to circle.
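One standard way to do this, though not necessarily their exact shader, is to discard fragments of the square sprite that fall outside an inscribed circle. A CPU sketch of that per-pixel test:

    import numpy as np

    S = 16                                        # sprite size in pixels (arbitrary)
    v, u = np.mgrid[0:S, 0:S] / (S - 1)           # sprite-local coords in [0, 1],
                                                  # like gl_PointCoord in a fragment shader
    inside = (u - 0.5) ** 2 + (v - 0.5) ** 2 <= 0.25   # keep radius 0.5 around center
    print(inside.astype(int))                     # 1 = shaded pixel, 0 = discarded corner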

Derivative: What did you do to make the point clouds look as good as possible on the 6K LED display? Did you do a lot of adjustment on-site to make it look and feel better, and did you make any adjustments after the public experienced it?

Shuhei Matsuyama: We tried different styles and looks. The 6K LED is very high-definition and has beautiful color. In the end we decided to use a simple post effect to enhance each point.

Derivative: What were the main concerns and guidelines that you got from the client before and during the project development? What did they ask you to make or propose?

Shuhei Matsuyama: This high-definition 6K LED was the first of its kind installed in Asia, and our client wanted to show off its spec and beauty, so point clouds were well suited to their request. The output of the point cloud imagery changes constantly based on weather, daylight hours, and other seasonal information obtained over the network. Thus, the atmosphere of Sakuratown depicted on the display changes with the seasons of the real-world Sakuratown.
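A hedged sketch of that kind of network-driven adjustment - the endpoint, fields, and parameter mapping are placeholders, not the production data source:

    import json, urllib.request

    def fetch_conditions():
        # Placeholder endpoint; the real system's weather/season source is not public.
        with urllib.request.urlopen('https://example.com/tokorozawa/weather') as r:
            return json.load(r)

    c = fetch_conditions()                       # e.g. {'cloud_cover': 0.3, 'daylight': 0.8}
    warmth = 1.0 - 0.5 * c['cloud_cover']        # duller palette on overcast days
    brightness = 0.4 + 0.6 * c['daylight']       # track the shortening winter days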

Derivative: It seems the client wanted something that untrained operators could adjust, schedule, etc. Was that a large part of the work?

Shuhei Matsuyama: We already have Ambience, our media control system made with TouchDesigner, so for this project we added some customized functions. Ambience can control media files and TouchDesigner .tox component files in the same field - this is the most important point. Untrained operators can use media files and interactive programs with simple drag and drop.

The high-definition 6K/2K LED is controlled by a single control system, tailor-made for the Tokorozawa campus and based on "Ambience", an original media control system by THINK AND SENSE. It makes possible dot-by-dot video playback and, for interactive content, automatic scaling through recognition of the content's resolution.
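That scaling rule can be sketched as: play dot-by-dot when the content already matches the wall, otherwise fit it uniformly. The wall dimensions below are invented for the example:

    def scale_for(content_w, content_h, wall_w, wall_h):
        # Dot-by-dot replay when resolutions match exactly; otherwise a uniform
        # fit that preserves the content's aspect ratio.
        if (content_w, content_h) == (wall_w, wall_h):
            return 1.0
        return min(wall_w / content_w, wall_h / content_h)

    print(scale_for(1920, 1080, 5760, 2160))   # hypothetical 6K wall -> 2.0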

 

Timeline and Event List

Ambience makes building playlists easy by dragging and dropping. It adapts to various resolutions and aspect ratios, automatically recognizing a content item's resolution and scaling it for optimal display. It can not only play back video but also load interactive content and work with external inputs. Furthermore, on the Event List, content can be played at the desired time by setting a time on each item imported into the list.
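Internally, an event list like this can be thought of as timed entries polled each frame. A sketch; the entry fields and file names are invented:

    import datetime as dt

    events = [(dt.time(9, 0),  'morning_loop.mov'),
              (dt.time(13, 0), 'photo_cell.tox')]   # hypothetical playlist entries

    def due(prev, now):
        # Fire every entry whose scheduled time was crossed since the last poll.
        return [item for t, item in events if prev < t <= now]

    print(due(dt.time(8, 59), dt.time(9, 1)))       # -> ['morning_loop.mov']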

 

Photo Cell Mode

In Photo Cell mode, patterns of images and logos are arranged for display. These are used as a photo panel that can be changed dynamically during photo sessions at an event.

 

Window Designer

Window Designer lays out presentation images on the displays at the desired locations. With this function, layouts can be changed freely with a mouse and made to suit various presentation styles. It is also possible to support more external inputs by adding capture hardware, covering event needs such as multi-screen display for e-sports.

 

Credit List

Technical Direction / Movie Edit: Shuhei Matsuyama

Point Cloud System Design / Media Control System Design: Takamitsu Masumi

Photogrammetry Engineering: Naoya Takebe

Sound Design: Intercity-Express (Tetsuji Ohno)

Media Control System Design: Yuki Hikita

Media Control System Graphic Design: Yuki Soejima
