Company Post

Julien Bayle's the.collapse

the.collapse is a live performance created by the prolific multi-disciplinary artist Julien Bayle in early 2021, based on his eponymous album released on ETER Lab in 2020 during the pandemic crisis. Impressed by TouchDesigner's offline rendering, which let him raise the quality of his visual output, and already familiar with Python and its efficiency, Bayle made the.collapse his first foray into adding TouchDesigner to his long-term toolkit of Max/MSP, Ableton Live and, of course, machines.
The work's visual aesthetic is built on old failed Polaroids scanned by the artist at very high resolution so as to reveal their frozen chemical processes. By granularly zooming into and scanning this raw material, which failed in its primary directive to bring forth depictions of reality, the.collapse addresses the idea of deconstructing processes and structures and, as Bayle puts it, shows us a whole abstract world illustrating the permanent failure of the building process. It is our pleasure to bring you this fascinating deep dive into the artistic and technical acumen of Julien Bayle.

Derivative: Julien can you tell us a bit about your background, the tools you use and what really interests you?
Julien Bayle: I studied computational biology and have a Master's in Computer Science with a strong focus on CG. I mainly focused on solving and visualizing the 3D structures of biological macromolecules. Besides this I was always composing, using computers first for MIDI control and then as very early DAWs. Around 1996 I used FastTracker II a lot, as it provided a nice way to trigger sounds and to have a kind of control over them. Then, around 1998, I used early versions of Cubase with hardware sound modules, and as computers became powerful enough to process signal & audio I discovered ReBirth & Reason by Propellerhead Software, and finally Generator, a very early version of Reaktor by Native Instruments.

I discovered Max/MSP around 2000 and it was a massive notch in my creative timeline. Generator had already let me build very flexible, modulate-able contexts for creating whatever I wanted, from trains of triggers to the sound itself, using custom rules, stretching and shrinking time, applying non-linear rules to generate sounds and more, but Max/MSP was a total blast for me. I learned Max/MSP by myself and ended up teaching it around 2008, after having used it for many personal & collaborative projects. I also played with early Ableton Live versions around 2004, which led me to become an Ableton Certified Trainer in 2010, which I still am today.

Since 2010 I have mainly been creating, and I also teach quite often. I have my own studio, now smaller and more compact than before, where I work on my own projects. I merge visual arts, music composition and a physical approach to sound art by creating advanced programmed installations and audio/visual live performances. My work is based on both experimentation and programming, with the concepts of time expansion & contraction as the main guidelines. I'm interested in microsound and granular synthesis and use a lot of field-recording-based sounds in my practice, besides algorithms and machines.

 

My scientific background allows me to work with research labs and I feel very comfortable there. Scientists and artists have the same motivation: trying to reveal what we don't know or can't see and hear. I used to work with LMA (Laboratoire de Mécanique et d'Acoustique – Mechanics & Acoustics Research Lab) in Marseille, France; this prestigious institution was the home of Jean-Claude Risset, one of the most incredible scientist/composers ever. I recorded 2 hours of silence there in the lab's anechoic chamber, one of the most silent in the world, and it ended up as the Violent Grains of Silence album released on ELLI Records: https://julienbayle.net/works/violent-grains-of-silence/. ELLI Records is one of the labels I have released on the most. It is an experimental music label founded in 2014, an artistic platform open to all forms of creative experimentation. I also facilitated the creation of one of the main Ableton Certified Training Centers at the core of this lab.

Technically speaking, I'm currently working with Max/MSP, TouchDesigner, Ableton Live and some machines, including a very compact modular system. I connect sound and visuals in a way that the sound always alters the visuals: I typically extract perception-based sound descriptors (loudness, centroid, transients and more) and feed my visual systems with them. I do a lot of field recording and track sounds and vibrations with microphones & contact mics. I like to capture the parasites and buzz from hardware and electrical wires.

I recently started to use TouchDesigner. I had known about it for years but never got a chance to dive into it. I knew some people from the legacy raster-noton label and from the Leviathan company who were using it, and in 2011 I was amazed by Amon Tobin's ISAM, which I knew had been designed with TouchDesigner too.

Derivative: How did you come to use TouchDesigner for the.collapse?

Julien Bayle: One of the first things that drew me to TouchDesigner was the offline rendering option. I didn't have it with other software and I wanted to be able to increase the quality of my renders (higher FPS and larger visuals), even if I mainly focus on minimalism, devoting a kind of cult status to the primitive line. The second thing was Python at the core of the system, as I already knew this language and its efficiency. The third thing was the inspiring projects designed with it that I had previously watched.

I officially started to work with TouchDesigner in the middle of January 2020. I watched some tutorials and, as I usually do, I learned from the documentation as my first source of knowledge. I started to post on Derivative's forum and I really enjoyed sharing ideas and learning from people there. I met a very dense community of skillful, expert people and I always got answers, ideas and paths to follow.

Derivative: What did you set out to do for the.collapse project (the genesis of the visual side of the project, the polaroid scanning, the feedback textures, etc.)?

Julien Bayle: the.collapse was initially a very special project. It was recorded and mastered in early 2020, just before the pandemic, and released on ETER Lab in the middle of the French lockdown in late April 2020. I usually design small systems for composing. I need tools that let me play with variable tempo, slice time and reorder slices almost on the fly, and even if I reuse a lot of setups, I tend to build new ones very often. The idea behind this is control: being able to generate something, but something I can shape and change later by deconstructing and reconstructing. the.collapse was very different: I performed it live and just pushed the record button. I thought it was very important to get this idea of irreversibility for a project about tensions & ruptures.

The Mois Multi festival in Québec, Canada got in touch with me in late 2020 and invited me to play a remote audiovisual live performance in early February 2021, as well as to exhibit FRGMENTS online. As soon as I got this proposition I decided to design the audiovisual version of the.collapse, and I aimed at TouchDesigner as I had the opportunity to get a new computer for my studio. I built it for TouchDesigner with a big amount of RAM and an NVIDIA GPU.

As the.collapse concept is omnipresent in the album's sounds, I wanted to use it everywhere in the live version, and mainly in the visual aesthetic.

This idea of a permanent collapse loop (without ever touching the ground), combined with the concept of irreversibility, drove me back to my past. It was easy to find my stock of failed polaroids from my (almost) 10 years of photography practice, from 2000 to 2010. I fell into this idea of reusing them, addressing (again) the cut-up concept I follow in 99% of my creations: take something from the past and use it to create something new.

I scanned all the polaroids at quite a high resolution. I wanted to use the pictures' grain and crackled matter as the raw material for my visuals.

Derivative: Can you explain the process by which you were analyzing the audio in real time to alter the visuals?

Julien Bayle: For three years I had wanted to design a program able to scan and dig into the polaroid matter, which I could then use for video feedback. I wanted my program to scan the polaroid texture as if it were taking a walk across it.

The goal was to build that in no more than 3 weeks, without having previously programmed anything in TouchDesigner. It was a challenge but it ended well. Coming from Max/MSP, which I still use on a daily basis, I had to open some new mental pathways and start to think not only in a push-system paradigm but, for the first time, in a pull-system one. Without going into too much detail, it means that in one paradigm source objects push messages to destination objects, while in the other, destination objects ask their sources whether there is anything new they need to know about. That was the first small challenge.
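To make that distinction concrete, here is a minimal plain-Python sketch of the two paradigms (these classes are purely illustrative, not Max or TouchDesigner code): the push source decides when its destinations do work, while the pull node only computes when something downstream asks for a result, which is closer to how TouchDesigner cooks its networks.

```python
class PushSource:
    """Max/MSP-style: the source actively sends messages to its destinations."""
    def __init__(self):
        self.listeners = []

    def connect(self, listener):
        self.listeners.append(listener)

    def emit(self, value):
        # The source decides when work happens: it pushes to every destination.
        for listener in self.listeners:
            listener(value)


class PullNode:
    """TouchDesigner-style: the destination asks ("cooks") its inputs on demand."""
    def __init__(self, compute, *inputs):
        self.compute = compute
        self.inputs = inputs

    def cook(self):
        # The destination decides when work happens: it pulls fresh values upstream.
        return self.compute(*(node.cook() for node in self.inputs))


# Push: a trigger arriving at the source drives the whole chain.
source = PushSource()
source.connect(lambda v: print("push received:", v))
source.emit(0.5)

# Pull: nothing happens until the final node is asked for a frame.
loudness = PullNode(lambda: 0.5)
brightness = PullNode(lambda x: 0.1 + 0.7 * x, loudness)
print("pull result:", brightness.cook())
```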

In the past I have built reusable systems, even basic ones. For the.collapse, I wanted to build my very own custom (and easy) framework allowing me to do what I needed at each step in the process, from the creation part to the performance part.


I created a kind of frame with these features:    

  • able to get MIDI data (notes, CC, program change) from another computer,
  • able to listen to the sound from another computer,
  • able to switch through a set of previously programmed contexts,
  • with a single visuals engine exposing a bunch of parameters, sometimes externally controlled by the sound or MIDI flows, sometimes moving "by themselves", sometimes static.

This is almost always the frame I need in my projects.
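As a rough picture of how such a frame hangs together, here is a plain-Python outline under assumed names (Context, VisualsEngine and the parameter names are hypothetical, not the actual TouchDesigner network): external MIDI/audio arrives, a program change selects one of several pre-built contexts, and a single visuals engine exposes parameters that each context may drive or leave static.

```python
class Context:
    """One pre-programmed context: a set of actions linking incoming channels to visual parameters."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions        # maps channel name -> function(engine, value)

    def handle(self, channel, value, engine):
        action = self.actions.get(channel)
        if action:
            action(engine, value)


class VisualsEngine:
    """The single visuals engine, reduced here to a dictionary of exposed parameters."""
    def __init__(self):
        self.params = {"feedback": 0.0, "brightness": 0.5, "zoom": 1.0}


class Frame:
    def __init__(self, contexts):
        self.contexts = contexts
        self.current = contexts[0]
        self.engine = VisualsEngine()

    def on_program_change(self, index):
        # MIDI program change from the audio computer selects the context.
        self.current = self.contexts[index]

    def on_channel(self, channel, value):
        # MIDI CC or audio-descriptor value arriving from the other computer.
        self.current.handle(channel, value, self.engine)


# Example wiring with two hypothetical contexts:
frame = Frame([
    Context("track1", {"noisiness": lambda e, v: e.params.update(feedback=v)}),
    Context("track2", {"loudness":  lambda e, v: e.params.update(brightness=0.1 + 0.7 * v)}),
])
frame.on_program_change(1)          # program change from the Live computer
frame.on_channel("loudness", 0.6)   # descriptor value arriving as a channel
```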

 

I used two computers for the.collapse live performance: 

  • one driven by Ableton Live (and Max for Live) for all sound generation and sound analysis,
  • and one TouchDesigner-based machine for visuals generation.

 

At first, I designed some parts such as:

  • something able to zoom/crop and travel across all my polaroid scans on the fly,
  • a very custom texture feedback system (both sketched below).
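Here is a compressed numpy sketch of those two parts, under loose assumptions about the actual network: a slow "walk" that crops zoomed windows out of a large polaroid scan, and a simple texture feedback loop that blends each new crop into a decayed copy of the previous frame. Window size, speed and decay are placeholder values; in the piece the feedback amount is one of the parameters driven by the sound.

```python
import numpy as np

scan = np.random.rand(2048, 2048)            # stand-in for a high-resolution polaroid scan
win = 256                                     # crop size, i.e. the zoom level
pos = np.array([1024.0, 1024.0])              # current position of the "walk"
vel = np.array([3.0, 1.5])                    # drift speed across the scan
feedback = np.zeros((win, win))
decay = 0.92                                  # feedback amount

def step():
    """Advance the walk one frame and blend the new crop into the feedback trail."""
    global pos, feedback
    pos = (pos + vel) % (np.array(scan.shape) - win)   # keep the window inside the scan
    y, x = pos.astype(int)
    crop = scan[y:y + win, x:x + win]
    feedback = decay * feedback + (1.0 - decay) * crop
    return feedback

frame = step()   # one video frame of the "walking" texture with feedback applied
```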

Then I decided which parameters could be exposed for further control by the sound or by other endogenous TouchDesigner processes. There were a lot. And as I usually let my systems explore territories and generate matter by themselves, it was very hard for me to decide to limit/constrain the system a bit more in order to be sure it would always produce interesting matter.

After this important step I put my network into a container, a Base COMP in TouchDesigner. I designed a gate-like/switch system able to route the external MIDI and audio flows to the right context; a context can be defined as a set of actions connecting triggers and visual parameters. I'll describe it and show some pictures below.

Each the.collapse’s songs has its corresponding context on the TouchDesigner side.

I just start a scene in my Live set and use a MIDI program change to switch TouchDesigner from one context to another.

At the switch, my flows are routed from the previous context to the new one, leaving the first context quiet and not cooking to save CPU/GPU performance. All the visual parameters are also updated with static values, like resetting initial conditions.
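As a rough idea of what such a switch can look like in TouchDesigner Python, here is a hedged sketch; the COMP names, the custom parameter names and the initial-value table are assumptions for illustration, not the actual network. In the real setup this would be triggered by the incoming MIDI program change.

```python
# Assumed names: context Base COMPs 'context1'..'context5', a Switch CHOP 'switch1',
# and a visuals engine COMP 'visualsEngine' with custom parameters.

CONTEXTS = ['context1', 'context2', 'context3', 'context4', 'context5']

# Static "initial conditions" restored on every switch (hypothetical values).
INITIAL_VALUES = {
    'Feedback':   0.2,
    'Brightness': 0.5,
    'Zoom':       1.0,
}

def switch_context(index):
    # Route flows to the selected context and stop the other contexts cooking.
    for i, name in enumerate(CONTEXTS):
        comp = op(name)
        comp.allowCooking = (i == index)

    # Point the switch/gate at the new context.
    op('switch1').par.index = index

    # Reset the visuals engine's exposed parameters to their static initial conditions.
    engine = op('visualsEngine')
    for par_name, value in INITIAL_VALUES.items():
        setattr(engine.par, par_name, value)
```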

In the current context, my MIDI and audio flows drive many visual parameters.

I can increase a sound effect on Live's side. For instance, if I add harsh white noise to a track, my sound analysis in custom Max for Live devices catches it and fires new values of the noisiness sound descriptor to TouchDesigner. The current context has a specific action for that and continuously increases the feedback level. If I stop the harsh noise generation, TouchDesigner reacts and decreases the feedback level.
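One way to read that behaviour is as a small accumulator rather than a direct mapping. The sketch below (plain Python, with assumed thresholds and rates) pushes the feedback level up while the noisiness descriptor is high and lets it relax back down when the noise stops.

```python
feedback_level = 0.0

def update_feedback(noisiness, dt=1.0 / 60.0):
    """Integrate the descriptor per frame: rise while noisy, decay when quiet."""
    global feedback_level
    if noisiness > 0.3:                       # assumed "harsh noise present" threshold
        feedback_level += 0.4 * noisiness * dt
    else:
        feedback_level -= 0.2 * dt            # relax once the noise stops
    feedback_level = min(max(feedback_level, 0.0), 0.95)
    return feedback_level
```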

Having designed reacting systems like this one for many years, I knew it would be hard work to set it up and to subtly tune each value.

When I use recorded sound material in my performances or installations, I usually do a pre-analysis with a big graph showing me how my sound descriptors evolve over time. This way I can see minimum and maximum values, and whether the graph evolves quickly with peaky values or more smoothly. It helps me connect the external flows to visual parameters and gives me, more or less, the range in which I can use them. With generated material it is harder, as I don't want to constrain the composition/live performance too much just to fit further controls (visual parameters here), and at the same time I don't want to go beyond thresholds that would make the visuals uninteresting aesthetically speaking (for instance, pushing the noise I mentioned before beyond a limit and ending up with the whole final texture totally white, with nothing visible).

I observe the systems I build as if they were totally outside of me. They are like animals I can observe in their own territories: I see whether they go to one place often, or push their travels further into another part. I used to do that in Max with a multislider object with Slider Style set to Reverse Point Scroll, making it act like a value-history graph. In TouchDesigner I do it with a Trail CHOP: I can observe the global history while external sound & MIDI are flowing in. And I use a lot of Math CHOPs to scale incoming external value ranges to the visual parameter ranges I need. If loudness is used to control the global brightness of the pictures, I don't want my visual to go totally black when the loudness is near 0 (on a linear scale), nor the opposite, so I use a Math CHOP with an incoming range of 0. to 1. and an output range set to, for instance, 0.1 to 0.8. Sometimes it takes more scaling/offset/threshold nodes. It really depends on how I want my sound and MIDI to influence the visuals.
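The "from range / to range" behaviour of the Math CHOP described here boils down to a linear remap; a tiny Python equivalent, using the loudness-to-brightness example values above, looks like this.

```python
def remap(value, in_lo=0.0, in_hi=1.0, out_lo=0.1, out_hi=0.8):
    """Map loudness in [in_lo, in_hi] to brightness in [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

print(remap(0.0))   # 0.1 -> never fully black
print(remap(1.0))   # 0.8 -> never fully white
```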

After this huge amount of work for each context, I ended the project with 5 contexts corresponding to the 5 tracks of the album.

The way I’m working takes time and hard work because I want to explore and explore again. I could have followed some shortcuts if I had decided some parts before, but my work is also based on exploration and I like to see what a system can render for me. I build a wide outline, even if it’s a very custom and specific one, following very precise ideas at the beginning, but in this outline the system can give very different results.

Derivative: TDAbleton links TouchDesigner closely to Ableton Live using MIDI Remote Scripts which are an unofficial but powerful Ableton feature. Your website hosts one of the best resources for Remote Script documentation and examples and was instrumental in the development of TDAbleton. How did you get into MIDI Remote Scripts and what did you use them for?

Julien Bayle: Yes, Ivan DelSol, who designed TDAbleton, told me recently that he used this documentation and these sources a lot. Surprisingly, I know I was one of the first to have decompiled the MIDI Remote Scripts .pyc files. I don't really host them, as they are on GitHub and others host them too. And I know a lot of people have built everything from hardware to software using these, forking them, and asking me for custom parts too. I actually did that in early 2011 and kept it updated to better understand how the system worked. We didn't even have Max for Live at that time. I even had a discussion with Robert Henke about this in the South of France after an Ableton showcase. They said, "We know people can do that and we don't want to stop it." I think it also helped get more people involved in coding custom programs for Ableton. If people are happy and create/extend what I have (just) decompiled, I'm more than happy.

Curiously, I don’t really use these. I use Max for Live and JS coding for programming Ableton Live when it is required.

A year ago I was thinking about Python for my composition tool GRID, which ended up producing a first album made only with it. This album has also been released on our precious ELLI Records: https://ellirecords.bandcamp.com/album/grid. GRID is a custom Max patch that can be reused and which communicates with Max for Live devices based on the seq~ object. But in the end, it was only Max and Ableton Live with Max for Live. I shared a short video on the GRID page, which is still under construction.

Derivative: Have you used TDAbleton?

Julien Bayle: Being new to TouchDesigner, I have used it a bit. It is very interesting as it pulls a bunch of data from Live that can then be used in TouchDesigner.

Through someone on the Derivative forum I discovered a developer who was working on a very low-latency system based on shared memory. I recently worked on a prototype of a global system (basically a template) that I could use for my live performances and audio-reactive design rendering with Ableton Live and TouchDesigner, and the more I tested it, the faster and more stable the system felt. Da Xu designed some Max externals for sharing data with custom TouchDesigner nodes he also coded. Basically, as the system uses direct memory access to share data from one piece of software to another, I could even remove OSC links or virtual MIDI buses from the equation, saving CPU and removing a potential bottleneck. I also don't need a virtual sound card or inter-application audio routing, as his system is able to share 8 audio signals from one program to the others! It sounds really crazy and it works incredibly well. I can't say much more as I didn't code these custom nodes and Max externals, it's Da's work, but what I think I can say is that Da seems to plan to release a port for VCV Rack, and also to extend the system as VSTs.

It allows a lot of channels to be shared from one application to another. Basically, we can send float numbers as channels, as in the figure below.

I designed very custom Max for Live devices using these externals, focused on my own use, which I can use in the tracks of my different Live sets. One captures all MIDI notes and fires them to TouchDesigner. It also contains 8 automation lanes I can use to send values from Live to TouchDesigner with very high accuracy, from 0. to 10000 for instance. Another one runs different sound analyses on the incoming stereo signal. It can fire normalized values of different sound descriptors, like loudness, centroid and more, to TouchDesigner, directly as channels here too, as shown in the next figure.

I also have one which is more for global use:

  • automating the switch from one visual context to another, including a way to have sub-contexts by simply automating some of the device's parameters directly in clips,
  • getting Live's transport beat and sending a message to TouchDesigner each time a beat occurs,
  • an automatable parameter I call "timeline", which I often use to "inform" the visuals system that I'm in this or that part of my song; it is usually automated by a ramp from 0 to 10000, with 0 and 10000 being the beginning and the end of the song (see the sketch after this list).
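As a small illustration of how such a timeline channel can inform the visuals (the section boundaries and names below are hypothetical), the ramp automated in Live from 0 to 10000 simply gets quantized into song parts on the TouchDesigner side.

```python
import bisect

SECTION_STARTS = [0, 2500, 6000, 9000]            # assumed boundaries on the 0-10000 ramp
SECTION_NAMES  = ['intro', 'build', 'rupture', 'collapse']

def section_for(timeline_value):
    """Return the song section the incoming timeline value falls into."""
    i = bisect.bisect_right(SECTION_STARTS, timeline_value) - 1
    return SECTION_NAMES[max(i, 0)]

print(section_for(0))       # 'intro'
print(section_for(7000))    # 'rupture'
```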

Then I have another one which is very nice: it fires 8 audio channels, with dynamic routing made possible by the recent versions of Max for Live. I can choose 4 tracks in my Live set and send their audio directly from Live to TouchDesigner. I tend to do that when I need to convert audio into textures (or a matrix, in Max/MSP) in a raw way.
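As a rough illustration of that raw audio-to-texture idea (the block size and layout are arbitrary choices here, not his actual settings), a block of samples can simply be folded into a 2D array and normalized so it reads as a grayscale image.

```python
import numpy as np

def audio_block_to_texture(samples, width=256):
    """samples: 1-D float array in [-1, 1]; returns a (rows, width) texture in [0, 1]."""
    usable = len(samples) - (len(samples) % width)      # drop the ragged tail
    tex = samples[:usable].reshape(-1, width)
    return (tex + 1.0) * 0.5                             # map [-1, 1] -> [0, 1]

block = np.sin(np.linspace(0, 200 * np.pi, 65536))       # stand-in for one audio block
texture = audio_block_to_texture(block)                  # shape (256, 256)
```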

This new prototype, which is becoming my global template for all projects I want to design with Live & TouchDesigner, is a kind of extension of the.collapse system, but it is now lighter and more efficient. I really thank Da Xu and the people from Derivative's forum who answered my questions, both broad and sometimes very specific, about the work on this prototype.

I basically use the same idea with incoming flows, which I can route to the right Base COMP. Each Base COMP is a context as described before. Whether I need only one visual system or multiple ones, I have them in other Base COMPs.

While I’m composing music I used to assemble bits and elements on the visuals side too. With my new prototype/template I can very easily do that without thinking too much about the infrastructure side because it is already created and I just have to duplicate global nodes and eventually connect them.

For instance, I create a visuals generator in the right place in my prototype/template. I choose a couple of parameters I'm interested in tweaking: for instance, a color (I name the channel which will control it "colorValue"), a scale (I name it "scale") and the camera position, which will be controlled by 3 channels: "camX", "camY" and "camZ".

In my visuals-engine Base COMP, which holds all the nodes related to visuals generation, I have a network ending in a Null CHOP. This is the single node that exports to the visual parameters, and it is fed by the global channel flow coming into the visuals engine. These global channels come from the current context Base COMP, whose actions link triggers and flows from Live on one side to their influence on visual parameters on the other.
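As a hedged sketch of how those channels could be pushed onto the visual parameters, a CHOP Execute DAT watching that final Null CHOP might look like this; the mapping from channel names to operator paths and parameter names is invented here for illustration and is not the actual network.

```python
# Assumed layout: channel name -> (operator path, parameter name).
PARAM_MAP = {
    'colorValue': ('visualsEngine/constant1',  'colorr'),
    'scale':      ('visualsEngine/transform1', 'sx'),
    'camX':       ('visualsEngine/cam1',       'tx'),
    'camY':       ('visualsEngine/cam1',       'ty'),
    'camZ':       ('visualsEngine/cam1',       'tz'),
}

def onValueChange(channel, sampleIndex, val, prev):
    # Called by the CHOP Execute DAT whenever a channel value changes.
    target = PARAM_MAP.get(channel.name)
    if target:
        path, par_name = target
        setattr(op(path).par, par_name, val)
    return
```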

Below is a context. On the left are the incoming flows, on the right the channels dedicated to controlling the visuals, and in the middle my custom actions related to the current context.

This system allows me to work on the visuals without sound and improve my networks. As soon as I'm ready to connect the sound, I just have to copy-paste my whole visuals-engine network into my template and I can connect all the controls in a few minutes. I shouldn't say this now but rather after a year of use, yet as I already did this same kind of global template/infrastructure work in Max 10 years ago, I already know that this is the kind of template I'll use and reuse every time. It is actually very flexible, as it can be used to produce visuals for a track release, to design a live performance, or to control visuals by storing all controls/automations in Live so I can rapidly recall what I want.

Derivative: What are your next ideas and projects?

Julien Bayle: I’m currently working and writing a couple of pre-ideas, pre-projects.

I have album project(s) on the table. They will revolve around the concepts of microsounds, parasites and algorithms. I'd like to generate visuals from sound triggers and also go back from visuals to sound, as a kind of mutual influence. This is a work in progress at the moment.

I am also working on point cloud ideas and I have this idea of mutual influences in mind. The sound is always the starting point in my work. I'd like to have sounds triggering things and influencing a point cloud, and to have the point cloud, depending on the density of points around the camera, influence the sound in return.

I was interested in GANs for generation, but I'd need a proper research residency in order to build what I have in mind. I'd like a light system that could generate pictures in a controllable and influenceable way, almost on the fly. At the same time, I feel this field could drive me too far from the emotion I need to trigger while I'm working on my own creations, and that is what I'm trying to avoid.

And of course I’m also writing ideas for a new live performance design. I really miss playing live and I’d like to go back to my first very abstract and minimal art forms. I need to trigger very low-level (in the sense of very physical, sensitive, perceptive) more than using very figurative things. I’d like to extend and produce something with the same kind of energy as ALPHA. I feel definitely concerned by the way the world is going - especially with the pandemic - involving information saturation and data manipulation. I’d really like to be able to sense new art spaces and create more, and look forward to travel again. Let’s hope!

 

Follow Julien Bayle
Art website: http://julienbayle.net
Tech website: https://structure-void.com
Twitter: https://twitter.com/julienbayle
FB: https://www.facebook.com/julien.bayle
Soundcloud: https://soundcloud.com/protofuse
Vimeo: https://vimeo.com/julienbayle
Insta: https://www.instagram.com/julienbayle
Github: https://github.com/gluon
LinkedIn: https://www.linkedin.com/in/julienbaylethereal/
