Company Post

Leviathan's Extraordinary 150 Media Stream: A Living Digital Sculpture

Gracing the expansive lobby of 150 North Riverside in Chicago is an elegant and impressively large "dynamic digital sculpture" conceived and produced by talented innovators and home-team creative agency Leviathan.

The scale and technical feat are indeed remarkable. Made up of 89 LED "blades," the video canvas, dubbed 150 Media Stream, spans 125 feet wide by 25 feet high, rendering a total of 33 million pixels at a steady 60 fps!

The installation exhibits commissioned works by artists from across the globe via an intelligent content library designed to transform continually over time, keeping the content from becoming repetitive and giving 150 Media Stream's curators maximum visual flexibility.

As kindly and cogently detailed below by the Leviathan creative and technical team, TouchDesigner was integral to Leviathan's realisation of the entire project and was used for all aspects of its creation: prototyping and perfecting its real-time generative visuals, building user interfaces, interoperating with devices and other software tools, scheduling the visual scenes, and running the multi-computer system that drives the installation.

It's a phenomenal achievement and a beautiful work of art that entices and inspires. We are delighted to share this report with our readers, with many, many thanks to Leviathan.

Tech Specs, Hardware, Performance

Adam Berg, Lead Engineer and Director of Research & Engineering: The 89 LED blades of the Riverside video wall are driven by four active PCs that run in sync. There are TouchDesigner runtime licenses on five computers, one of which serves as a hot-swappable backup. A custom video switcher (by McCann Systems) routes four of the five PCs' video outputs to the LED processors and on to the LED blades.

The LED wall is about 125 feet wide by 25 feet tall, with a 3mm pixel pitch. The total canvas we render to is 15360x2160 pixels (33 million pixels), with about 10 million active pixels at the wall. We are running at 60 fps.

PCs are Dell Precision 7910 rackmount servers:

  • Windows 10 Pro
  • Single Xeon E5-2637 v3 CPU (4 cores, 3.5GHz)
  • Single Quadro M6000 24GB

Each of the 4 active computers outputs one-quarter (3840x2160) of the total display canvas (15360x2160). We built the 5th computer as a hot-swappable backup that replaces any downed system in seconds.
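
As a quick sanity check on the canvas math above, here is a trivial Python sketch of how the 15360x2160 canvas splits into four per-PC quadrants; the numbers come straight from the specs, and the code itself is purely illustrative:

```python
# Canvas dimensions from the specs above.
CANVAS_W, CANVAS_H = 15360, 2160
NUM_QUADRANTS = 4
QUAD_W = CANVAS_W // NUM_QUADRANTS  # 3840 pixels per active PC

def quadrant_rect(index):
    """Return the (x, y, w, h) canvas region rendered by PC `index`."""
    return (index * QUAD_W, 0, QUAD_W, CANVAS_H)

print(CANVAS_W * CANVAS_H)            # 33,177,600 -- the "33 million pixels"
for i in range(NUM_QUADRANTS):
    print(f"PC {i + 1} renders {quadrant_rect(i)}")
```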

Most scenes run smoothly at 60 fps, though a couple we had to drop to 30 fps to maintain consistent performance.

TouchDesigner Project Structure

David Braun, Lead Engineer: TouchDesigner's software environment is so flexible that we used it for both generating real-time visuals and scheduling all of the visual scenes.

All five computers run the same server-client software, and one of those computers takes priority in running the scene-managing server. This server instructs all available clients to open, fade in, fade out, and close Windows programs such as TouchDesigner or Unity. It also tells the clients to run those programs with particular settings, such as the color palette, which quadrants to render, and whether the scene project itself is a server or a client.
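
To make that concrete, here is a hedged sketch of what such a server-to-client instruction could look like. The JSON fields, hostname, and port are all invented for illustration; Leviathan's actual protocol isn't published:

```python
import json
import socket

# Hypothetical launch instruction: which program to run, and with which
# settings (palette, render quadrants, server/client role), per the text above.
launch_cmd = {
    "action": "launch",
    "app": "TouchDesigner",       # could also be "Unity", etc.
    "project": "city_builder",
    "settings": {
        "palette": "dusk",
        "quadrants": [0, 1],      # which quadrants of the canvas to render
        "role": "client",         # whether this scene instance is server or client
    },
}

def send_command(host, port, cmd):
    """Send one newline-delimited JSON command to a render client."""
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall((json.dumps(cmd) + "\n").encode("utf-8"))

# send_command("render-pc-1.local", 9000, launch_cmd)  # hostname/port invented
```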

Chris Hall, Lead Engineer: Due to the size and modular nature of the programs we were building, we relied heavily on source control. Each generative concept's configurations and presets were managed in separate text files so they could be versioned in parallel with the backend development. We used BitTorrent Sync to keep every computer's assets up to date and to ensure that no broken or missing content was shown.

Since we were designing and building the system before construction was even finished, we used VR to view the project at scale. We were able to take the architects inside their creation before it was even built!

The Server Project: Coordinating the Opening and Closing of Scene Projects

Adam Berg: Certainly a major challenge of this project was a request to not only build our own library of scenes, but also to allow other generative artists to run their software, not necessarily built in TouchDesigner, on the same system. It was a scary proposition – the last thing you want to do in a permanent installation is to let people touch the show computers and settings.

We ended up building a framework in Python and TouchDesigner that manages launching and closing scenes according to a schedule, whether those scenes are built in TouchDesigner, Unity, or any other framework. The icing on the cake is that we used Windows' desktop compositing features to actually crossfade from one scene to the next. So even though each scene is an independent piece of software, they transition from one to the next as if they're one.
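
The post doesn't spell out which compositing feature is used, but Windows does expose per-window alpha on layered windows, which the Desktop Window Manager composites for free. Here is a minimal ctypes sketch of that mechanism (an illustration of the OS feature, not Leviathan's actual code):

```python
import ctypes
import time

user32 = ctypes.windll.user32  # Windows-only, naturally

GWL_EXSTYLE = -20
WS_EX_LAYERED = 0x00080000
LWA_ALPHA = 0x00000002

def fade_window(hwnd, start, end, duration=1.0, steps=60):
    """Animate a top-level window's alpha from start to end (0-255)."""
    style = user32.GetWindowLongW(hwnd, GWL_EXSTYLE)
    user32.SetWindowLongW(hwnd, GWL_EXSTYLE, style | WS_EX_LAYERED)
    for i in range(steps + 1):
        alpha = int(start + (end - start) * i / steps)
        user32.SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA)
        time.sleep(duration / steps)

# Fading the outgoing scene's window out while the incoming one fades in
# yields a crossfade even though the scenes are separate programs:
# hwnd = user32.FindWindowW(None, "Scene A")  # window title is illustrative
# fade_window(hwnd, 255, 0)
```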

Our "Master Controller" software, built in TouchDesigner, pulls schedule data from our web-based CMS, and manages configuring, launching, and transitioning between scenes on all the other computers.

The Scene Projects

Adam Berg: We built a total of 10 generative concepts for the wall, plus the capability to simply play video. The techniques we employed are radically different from concept to concept, so the flexibility of TouchDesigner was a huge asset.

Each of our scenes has a standardized structure to allow synchronization and rendering to each quadrant of the wall. We used a standard MVC structure (Model, View, Controller) to cleanly separate the control elements from the render elements, which made it simple to assign Master/Slave roles and render quadrants on the fly. We used Touch In/Out DATs, CHOPs, and TOPs to communicate data across computers and keep the computers in sync.

Configuration Utilities: Give Previews of Scene Projects with Configurable Settings

David Braun: As engineers we can play around in the TouchDesigner network editor all day, but we wanted our client to enjoy the same sense of discovery from adjusting visual parameters. That's why we made configuration utilities that expose only the parameters that actually influence the visual output.

For example, one might use a configuration utility to build a playlist of images and see an accurately composited image of what would appear on the full wall. If it looks good, the user just needs to press a button to copy those settings as text to the Windows clipboard, and then paste them in the Content Management System (CMS) website. The CMS works like Google Calendar with an inventory of scene settings and playlists of those settings.
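
As a rough sketch of that copy-to-clipboard step, assuming an invented JSON preset format (the CMS schema isn't public), Windows' stock clip.exe can receive the serialized settings:

```python
import json
import subprocess

# Hypothetical preset: the field names here are illustrative only.
preset = {
    "scene": "picture_window",
    "playlist": ["skyline_01.jpg", "dunes_04.jpg"],
    "speed": 0.5,
}

def copy_settings_to_clipboard(settings):
    """Serialize settings and put them on the Windows clipboard via clip.exe."""
    subprocess.run("clip", input=json.dumps(settings, indent=2),
                   text=True, check=True)

copy_settings_to_clipboard(preset)  # now paste into the CMS website
```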

Adam Berg: All of our scenes (except one) have extensive customization options, ranging from new image or video assets to color, speed, and endless simulation settings. We provided five to twenty or so presets for each scene, but since everything is generative, there is no reason to limit the configurations to those.

We built custom configuration utilities for each scene to allow the client to explore different settings and save their own presets. The client can adjust colors and speeds, add new images and videos, and even add new line drawings for the "robots" to trace in City Builder. This is important for the long-term evolution of the wall, and it makes it easy to customize the wall for special events or VIP guests.

Where media is concerned, we use the configuration utility to check important specs like resolution, to ensure that the best quality images are used.

One scene we built simply plays video content. Using TouchDesigner, we built a powerful configuration utility that allows the client to preview new media, ensure that it lines up with the pixel map, and encode it to HAPQ. The utility gives flexibility to the content creators, as it can ingest both image sequences and video in many formats. We've found HAPQ to be an extremely fast codec, so encoding a piece of media even as massive as 15360x2160 pixels (150 Media Stream's full resolution) takes only a few minutes.
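
Leviathan's utility does this inside TouchDesigner; for reference, one standalone way to produce HAPQ is ffmpeg's hap encoder (in builds compiled with snappy support), roughly:

```python
import subprocess

def encode_hap_q(src, dst):
    """Encode a movie to HAP Q using ffmpeg (one possible pipeline, not
    necessarily the one inside Leviathan's utility)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", "hap", "-format", "hap_q",  # hap encoder, HAP Q variant
         dst],
        check=True,
    )

encode_hap_q("full_canvas_15360x2160.mov", "full_canvas_hapq.mov")
```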

GLSL Shaders

David Braun: Many of our generative concepts rely on custom GLSL shaders for simulations such as fluid motion, iterative collage effects, and "robot cars" moving around city roads. For the "City Builder" concept, each RGBA pixel represents a car with an XY position, an angle of rotation, and a lifetime value. In order to decide how to steer, we used a computer-vision technique for 2D motion so that each car looks ahead of itself by sampling the texture of a grayscale roadmap. Additionally, any black and white image can work as a roadmap, such as vector art or op art.
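
Here is a CPU analogue of that look-ahead steering rule, written with numpy purely for illustration; the look distance, turn angles, and data layout are invented, and the production version runs per-texel in GLSL:

```python
import numpy as np

def steer_cars(cars, roadmap, look=4.0):
    """cars: (N, 4) array of [x, y, angle, lifetime], mirroring the RGBA
    layout described above. roadmap: 2D grayscale array, bright = road.
    Each car samples ahead-left, ahead, and ahead-right, then turns toward
    the brightest sample, which keeps it on the road."""
    h, w = roadmap.shape
    for car in cars:
        x, y, angle = car[0], car[1], car[2]
        best, best_turn = -1.0, 0.0
        for turn in (-0.3, 0.0, 0.3):           # candidate steering angles
            a = angle + turn
            sx = int(np.clip(x + np.cos(a) * look, 0, w - 1))
            sy = int(np.clip(y + np.sin(a) * look, 0, h - 1))
            if roadmap[sy, sx] > best:
                best, best_turn = roadmap[sy, sx], turn
        car[2] += best_turn                     # steer toward the road
        car[0] += np.cos(car[2])                # advance one step
        car[1] += np.sin(car[2])
        car[3] -= 1.0                           # age the car
```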

TouchDesigner was essential in rapidly developing and debugging the shader network. It allowed us to inspect every step of the shader pipeline and build 3D textures that represent trails of car positions, all while rendering thousands of cars at once. Adjusting parameters for the motion or appearance of the cars was simple thanks to TouchDesigner's UI tools and close integration with Python. There isn't another GLSL shader playground as fun or as powerful as TouchDesigner.

Generating Artworks and Content

Bradon Webb, Senior Creative Director: For 150 Media Stream, we used a wide variety of programs to ensure the best creative outcome. Project elements were exported from software such as Houdini, Maya, openFrameworks, and C4D, letting us leverage a wide toolkit. There's no shortage of creative coding techniques when it comes to I/O and TouchDesigner; we used any and all processes and formats to get the job done.

TouchDesigner's strength is its ability to turn one format into another. 3D can convert to 2D, which allows for very sophisticated GPU-accelerated techniques. It's about building bridges: discovering new ways of working and linking different software together.

In our concept "Vertical / Triangle Generator", we used point data attributes for every LED panel that we created with Houdini. The geometry imported into TouchDesigner as bhclassic, and then a SOP to DAT drives a GLSL shader which affects the geometry based on the point data attributes. This approach created a procedural mosaic where the shader texture would pixelize in real time using different triangle or vertical patterns. The best part is that the content management tool allows our clients to import their own imagery which keeps the content fresh.

David Braun: "Vertical / Triangle Generator" is a concept that iteratively subdivides high-resolution source images into smaller pieces until each piece is only a single pixel. One mode subdivides triangles, and another mode subdivides rectangles. In both modes, a separate grayscale texture specifies how much to subdivide in a local region. This grayscale texture animates over time, so all regions fluctuate between different degrees of subdivision. The most satisfying visual characteristic of the concept is that the largest triangle or square shown on each panel perfectly fits the width of that panel.

Bradon Webb: In other concepts such as "Picture Window", we used pre-fractured geo from Houdini. The geo, animated in TouchDesigner, opens a wall of refractive glass with a custom GLSL shader and reveals panoramic footage of nature and cityscapes. For the footage, we created a video slicing tool that takes the full 15K-resolution canvas and renders 89 individual smaller videos, excluding the gaps in the LED wall. The HAPQ codec was crucial to an efficient playback system, allowing us to seamlessly play back and transition very large video files at 60 fps without dropping a frame.
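
Here is a sketch of how such a slicing pass could work, pairing per-blade crops with a HAP Q encode. The uniform blade layout below is invented; the real 89 blades vary in size, with irregular gaps:

```python
import subprocess

CANVAS_W, CANVAS_H = 15360, 2160

# Hypothetical layout: 89 identical blades with uniform gaps, as (x, y, w, h).
blades = [(i * 172, 0, 150, CANVAS_H) for i in range(89)]

def slice_blade(src, index):
    """Crop one blade's region out of the master video and encode it to HAP Q."""
    x, y, w, h = blades[index]
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", f"crop={w}:{h}:{x}:{y}",
         "-c:v", "hap", "-format", "hap_q",
         f"blade_{index:02d}.mov"],
        check=True,
    )

for i in range(len(blades)):
    slice_blade("full_canvas.mov", i)
```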

We also created an HDR panoramic conversion render system for the glass and water shaders. With it, we can import environment maps and reflection cards in various formats and convert them on the fly for moving, panning reflections in real time.

We created a scheduler that can run data-driven concepts for rain, wind, snow, and clouds. Weather data is pulled from the web service Wunderground, and the scheduler automatically pairs content with current conditions. In the "Natural Forces Rain" concept, this data drives the intensity and size of the raindrops.
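
A sketch of that weather hookup, using the shape of the classic Wunderground conditions endpoint (since retired); the API key, field names, and the mapping to raindrop parameters are all placeholders:

```python
import json
import urllib.request

# Placeholder key and location; Wunderground's classic API followed this shape.
API = "https://api.wunderground.com/api/YOUR_KEY/conditions/q/IL/Chicago.json"

def rain_params():
    """Fetch current conditions and map precipitation to raindrop settings."""
    with urllib.request.urlopen(API) as resp:
        obs = json.load(resp)["current_observation"]
    precip_mm = float(obs.get("precip_1hr_metric") or 0.0)
    intensity = min(precip_mm / 10.0, 1.0)      # clamp to [0, 1]
    return {"drop_rate": intensity, "drop_size": 0.5 + 0.5 * intensity}

print(rain_params())
```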

For some of the motion patterns in "Natural Forces Winds" and "Clouds", we used simulations in Houdini and Maya to render 32-bit velocity textures. Other patterns such as the "digital circuits" pattern were made inside TouchDesigner.

The openFrameworks add-on ofxFlowTools is used as a sub-process in the "Natural Forces Clouds" concept. TouchDesigner sends the velocity and pressure maps out to an OF app (via Spout texture sharing), which runs the fluid simulation and sends the rendered image back to TouchDesigner for additional post-processing, all in real time.

In our concept "City Builder", there is an autonomous agent simulation that is driven by high resolution black and white image maps. Within a GLSL shader, particle agent points spawn from the road positions and follow a series of forward lookup rules to stay on course and steer their behavior which makes them look very similar to traffic. However, it's not only limited to map images, the system can accept any black and white line art or op art patterns for input, and the agents will trace the image and generate a new course.

Our concept "Picture Window" uses two variations of a GLSL shader to displace the geometry of the LED panels, which reveals video content such as landscape timelapses. By inserting a simple black-and-white mask, it instructs the "Picture Window" project to displace the panels—either vertically or horizontally—in order to reveal video in the white region of the mask.

The "Pixel Fountain" concept treats each of the 89 LED panels as a particle-emitting fountain. We have a bank of animations that are triggered in increasing frequency over time. A GLSL shader samples the active animations and launches particles. While developing the project, we used a MIDI keyboard to drive the fountain in real-time.

Documentation Guide for Artists to Create Their Own Work

David Braun: Leviathan has created instructions and tools for artists to add new content to 150 Media Stream. Generative-media artists can follow instructions to communicate with our scene management framework. This ensures that their projects fade video and audio in and out like any of Leviathan's generative concepts. It also ensures that if a computer fails, the fifth computer immediately works as a backup no matter which generative concept is active. We're all very excited to see more art debut on 150 Media Stream in the near future.

Artists interested in submitting artwork for 150 Media Stream should send a portfolio and resume to project curator and creative director Yuge Zhou at info@150mediastream.com. Those under consideration to become a featured artist will be contacted.

Credits

Client: Riverside Investment & Development Company

Executive Vice President: Anthony Scacco

Creative Director: Yuge Zhou

Creative Agency: Leviathan

Senior Creative Director: Bradon Webb

Senior Producer: Ellen Schopler

Director of Research and Engineering: Adam Berg

Lead Engineers: Adam Berg, David Braun, Chris Hall

Web CMS Director: Austin Mayer

UX Design: Austin Mayer, Adam Berg, Billie Pate

Web Developer: Fujio Harou

Content Development: Bradon Webb, Gareth Fewel, Alexis Copeland, Anthony Malagutti

2D Design/Animation: Gareth Fewel, Alexis Copeland, Ely Beyer, Nik Braatz, Andrew Butterworth, Jesse Willis, Matt Burton, Becka Riccio, Yuan Chen, Dakota Hopkins

Editor: Kirill Mazor

Engineers: Scott Pagano, Elburz Sorkhabi, Mary Franck

Contributing Photography and Video: Chris Pritchard, Jay Worsley, Nick Ulivieri

Executive Producer: Chad Hutson

Executive Creative Director: Jason White

150 Media Stream: Building and System Development

Building Developer & Project Commission: Riverside Investment & Development Company

Building & Lobby Design: Goettsch Partners

Video Blade Design: McCann Systems; Digital Kitchen

Software: Leviathan

Systems Engineering: McCann Systems, Leviathan

150 Media Stream Branding: The Narrative

“150 Media Stream” Video: Leviathan

Creative Director: Jason White

Directors: Daniel Ryan, Jason White, Bradon Webb

Director of Photography: O’Connor Hartnett, Mike Bove

Associate Producer: Brittany Maddock, Erica Grubman

Editor // Colorist: Kirill Mazor

Assistant Editor: Alexander Ward

2D Animation: Gareth Fewel, Nik Braatz

Design: Gareth Fewel, Nancy Hu

VFX: Anthony Malagutti

On-Site Engineer: David Braun, Adam Berg

Lighting // Gaffer: SEM-Q Productions, LLC

Camera Assistant: Mitch Buss, Matthew Bowie

Original Score and Sound Design: Joel Corelitz, Waveplant

Video Content: Leviathan

Talent: Enza Lappo, David Keohane, Ellen Schopler, Kirill Mazor, David Braun

©2017 Produced by Leviathan

lvthn.com
