Derivative: What work were you brought on to produce for the Winter Olympics opening ceremony?
CaoYuxi: We were responsible for all the early creative development and the in-depth visual design, including the pre-ceremony performances, such as the snowflake convergence, the five rings display, the ice cube laser interaction, and other program segments, as well as the complete post-production of the pre-ceremony performance videos.
Derivative: Can you detail how you used TouchDesigner to produce the work and the other tools involved?
CaoYuxi: We used TouchDesigner, Nuke, Cinema4D, Blender and Unreal Engine throughout the project, with TouchDesigner handling environment simulation, effects compositing and rendering output. We took advantage of TD's real-time rendering: after importing models of the China National Stadium and the LED screen stages, we could see the effects immediately. You can simulate the view cone of the 3D perspective from the viewing platform position and observe the real situation without waiting for renders, which greatly improved our efficiency. TouchDesigner also lets you view the output of every node in real time, so you can quickly build dynamics and composite effects.
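The viewing-platform check described above boils down to simple geometry. A minimal sketch of that idea follows; all positions and distances here are illustrative assumptions, not the production values, and the real check ran on the full 3D stadium model inside TouchDesigner:

```python
import math

def subtended_angles(viewer, near_edge, far_edge):
    """Angles of depression (degrees) from a viewer's eye down to the
    near and far edges of a flat ground screen; their difference is the
    angular height the screen occupies in the viewer's field of view."""
    def depression(point):
        dx = point[0] - viewer[0]   # horizontal distance to the edge
        dz = point[1] - viewer[1]   # vertical offset (screen below viewer)
        return math.degrees(math.atan2(-dz, dx))
    return depression(near_edge), depression(far_edge)

# Hypothetical numbers: a platform 30 m above a ground screen whose near
# edge is 20 m away and far edge 120 m away, as (distance, height) pairs.
near_ang, far_ang = subtended_angles((0.0, 30.0), (20.0, 0.0), (120.0, 0.0))
print(round(near_ang, 1), round(far_ang, 1))  # → 56.3 14.0
```

Previewing content through a camera placed at this kind of position is what lets the anamorphic ground-screen illusion be judged without waiting for full renders.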
Derivative: Can you talk about the process of working with the designer and director?
CaoYuxi: The content we made was ultimately presented on a 10,000-square-meter ground screen, while during production we could only see it on a regular computer monitor, so the effect was very different. Director Zhang Yimou ultimately had to judge the content against the live scene. For example, a line one pixel thick on our monitors could be as thick as a person's arm on the ground screen. So the production team and the performers tested as they went along, and after each test we met to discuss the direction of the changes. I then translated the demands the director raised in those meetings into a specific work schedule and assigned tasks to the team members responsible for different parts.
The resolution of the ground LED screen is very large, greater than 16K, equivalent to sixteen 4K outputs, and we needed to produce 30 minutes of content. The video content had to run at 50Hz, so on a project of this volume the rendering pressure was quite high, and TouchDesigner's rendering efficiency was a big help. When the project is optimized well enough, unnecessary dynamic animation can be baked into static frames for compositing; we got to roughly 5 seconds to render one frame at 16K x 7K resolution, which made for a very fast output pipeline. The content could also be encoded directly on the GPU using the Movie File Out component, outputting the four H.265 files that conform to the official backstage playback format.
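Some back-of-the-envelope arithmetic shows why the rendering pressure was so high. The 30 minutes, 50Hz and 16K x 7K figures come from the interview; the exact pixel counts below are rounded assumptions for illustration:

```python
# Rough arithmetic for the output volume described above.
FPS = 50                       # required playback frame rate
DURATION_S = 30 * 60           # 30 minutes of content
W, H = 16_000, 7_000           # ~16K x 7K ground-screen canvas (approximate)

frames = FPS * DURATION_S                    # total frames to render
pixels_per_frame = W * H
raw_bytes_per_frame = pixels_per_frame * 3   # 8-bit RGB, uncompressed

print(frames)                                # → 90000 frames
print(pixels_per_frame / (3840 * 2160))      # ≈ 13.5x a single 4K frame
print(raw_bytes_per_frame / 1e9)             # ≈ 0.34 GB per raw frame
```

At this scale even modest per-frame savings compound across 90,000 frames, which is why baking static content and GPU-side H.265 encoding mattered so much.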
Derivative: Which part was fully pre-rendered, and which part was live, like the lasers?
CaoYuxi: Elements like the logo and the ice cube were all pre-rendered, but the compositing of all those elements was generated live in TouchDesigner. The lasers interact with the LED cube to simulate the carving effect; those parts are synced on the timeline with the laser vector playback. In the Peace Dove segment we also used a camera-based AI system to track the children's locations and generate snow-like particles on the ground during their live performance.
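The tracking-to-particles idea can be sketched in a few lines. Everything here is a simplified assumption (the function names, coordinate ranges and particle behavior are hypothetical); the real system ran inside TouchDesigner, fed by the camera AI tracker:

```python
import random

LED_W, LED_H = 16_000, 7_000   # assumed ground-screen pixel dimensions

def spawn_particles(tracked_positions, per_performer=5, jitter=40.0):
    """Map normalized (0..1) tracked performer positions to ground-screen
    pixel coordinates and scatter a few snow-like particles around each."""
    particles = []
    for nx, ny in tracked_positions:
        cx, cy = nx * LED_W, ny * LED_H
        for _ in range(per_performer):
            particles.append((cx + random.uniform(-jitter, jitter),
                              cy + random.uniform(-jitter, jitter)))
    return particles

# Two hypothetical tracked children at normalized stage positions.
snow = spawn_particles([(0.25, 0.5), (0.8, 0.3)])
print(len(snow))  # → 10 particles
```

In a real-time system a pass like this runs every frame, with the particle positions then driving the rendered "snow" trailing each performer.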
Derivative: Is there anything you can think of that you were able to achieve that surprised even yourself?
CaoYuxi: Under such heavy rendering pressure, we didn't expect that continuous optimization would bring the render time for 30 minutes of 16K content down to 8 hours, letting us make changes and debug in almost semi-real time. Under a traditional workflow this could take more than 10 hours or even dozens of hours on a render farm, making such rendering tasks very difficult to complete. TD's rendering efficiency not only helped us meet the requirement of updating content within a 48-hour turnaround during the final rehearsal stage of the opening, but also gave us time to help with content slicing and output for other segments.
Secondly, after a year of work, the large artistic ice block, which had been thought difficult to realize on a real live stage for an on-site audience and to feed into the live broadcast signal to the world, was built from scratch with the texture of real ice and a warm Chinese New Year atmosphere. The illusion of this 3D perspective experience for the audience was realized quite successfully.
Derivative: What else is on your horizons these days?
CaoYuxi: During the Winter Olympic opening ceremony project, many great concept drafts never made it to the final stage. Some of these we find interesting enough to deepen into complete works, such as a Chinese landscape painting rendered with ice and frozen textures, and all of these are being refined. At the same time, we have a new audio-visual project in the works, and we look forward to completing and releasing it as soon as possible. We will also revive the artwork ORIENS (which was featured on the Derivative blog in 2017) as a new performance series in collaboration with a range of Chinese traditional instrument performers, with an experimental ambient music flavor.
We wrote about CaoYuxi's Oriens in 2017, read here.