Derivative: Marian, can you tell us a bit about your background and experience, and the things you like to do? Tools you use, projects you work on, things that inspire and motivate you…
MONOCOLOR: I grew up in a very creative and artistic family, always surrounded by contemporary music as well as contemporary visual art. I think both these sonic and visual stimuli during my upbringing had a huge influence on me and my work.
I used to play the piano, later played in a band and also started to produce and perform electronic music. I think my artistic practice really started to develop when I discovered analogue photography. I was fascinated by the immediate contact you could have with the material, which became even more apparent once I started to experiment with developing my own films. I was never really interested in depicting reality; rather, I was fascinated by creating abstract images using unconventional methods, like experimenting with the photo chemicals.
I got more into digital approaches when I started studying media technology in 2013. These studies opened my eyes to the huge range of possibilities in the digital world and introduced me to a lot of exciting tools. Just like with photography, I started to disassemble these tools to try to create something different from what each tool was originally intended to produce.
I was lucky enough to meet some very inspiring artists and mentors during that time. This was also when I realized that audiovisual and media art is the perfect way to combine all of the different interests I have developed over time - visual and sonic art, design and performing. This was a huge revelation for me and once I finished my studies in media technology I started studying Digital Arts which I am still doing today.
Right now, I would definitely consider myself an audiovisual artist. The interrelation between sound and image is at the core of my work and so I try to explore this field in both performance-based and installation-based works. I like to work as a solo artist as well as in collaborations with other artists like musicians and performers. I am really interested in building my own tools, so working with visual programming as well as electronics is at the heart of my practice.
Derivative: How did you first encounter TouchDesigner and how does it fit into your art practice? Has it changed the way you think about and make your work and if so can you give us some examples?
MONOCOLOR: I first encountered TouchDesigner back in 2014 during my studies of media technology when I was taking a course on DSP with Patrik Lechner. We were using Max/MSP as a tool to explore different techniques for digital signal processing and as a side-note he mentioned Jitter, the extension of Max that is used to create realtime graphics. I remember him saying that “Another great tool to create some really complex stuff is TouchDesigner”. This was the first time I heard about Touch and when I looked it up I was very excited to see what people were making with the software.
At that time, I was mainly working with rendered material but I was very fascinated by generative realtime graphics. However, many of the tools I had encountered felt very difficult to get into and it seemed like a lot of the output just couldn’t reach the level of complexity I was looking for. TouchDesigner seemed to be a tool that could solve these problems - the interface felt very familiar because I was already using node-based tools like DaVinci Resolve, and the performance and level of visual quality also convinced me. I kept it in the back of my mind for some time, but I only really got into it in 2017 when I participated in a beginner’s workshop with Patrik that helped me get over the first learning curve.
When I’m working this way, I feel like I often take on the role of a curator. I design a process, the process creates an outcome, and based on my artistic vision and experience I evaluate this outcome and make changes to the process. This feedback-loop approach feels extremely natural in TouchDesigner. I feel like it strikes a good balance between being a very open, creative tool and being very technical and analytical (and stable!) when you need it to be.
Derivative: How did Latent Space come to be? And second part to that question is how did it evolve over the iterations over the course of a year and in different domes?
MONOCOLOR: It was actually quite serendipitous! The department of Digital Arts at the University of Applied Arts in Vienna, where I am currently studying, was holding an open call among its students for an exhibition on immersive media called “#fuckreality”. They were looking for works for VR, AR, mixed reality and also the fulldome. At that point I had only been studying there for a few months and had never worked in the fulldome, yet I was very fascinated with the medium. At the same time I was in Berlin for the TouchDesigner summit, participating in a workshop on fulldome techniques with Mathieu Le Sourd. There, I learned the fundamentals of creating images for the dome in TouchDesigner. With this newfound knowledge I decided to participate in the call. I only submitted a couple of sketches and concepts, but I was lucky enough to have professors there who saw some potential in my approaches. So I continued working very intensively on the piece and created a 5-minute rendered version of Latent Space that ended up being shown in the fulldome at the exhibition.
After the exhibition was over, I had the strong feeling that I hadn’t really exhausted the potential of Latent Space’s material. I put the work aside and thought I would revisit it some day when the opportunity presented itself. And it did! A few months later I was very lucky to be invited to do a residency at the SAT in Montreal. They were looking for live performances for the iX Symposium and Elektra Festival, so I decided to adapt Latent Space as a performance.
When I arrived in Montreal and started working in the massive 18-meter Satosphere, I realized that due to this huge scale I had to change almost all of the fundamental parameters of the piece, like scaling and speed, to create the impact that I wanted to achieve. The version I ended up presenting at iX Symposium and Elektra Festival was definitely a step in the right direction, but still, I feel like there is a lot of hidden potential there, so for each new performance of the piece I tend to change a lot of things to keep it fresh and exciting for me.
Derivative: In the description of Latent Space you say: “The omnipresence of the virtual realm is transposed into the physical space of the dome to unmask the often proclaimed boundlessness of digital space.” Can you talk a bit more about that in relation to working in the dome?
MONOCOLOR: Nowadays all of us are constantly surrounded by invisible, all-encompassing digital layers. These layers promise endless possibilities and options, yet can also be extremely restrictive and limiting. I wanted to convey this by creating a virtual space within the fulldome that can seem very vast at times but can also feel quite claustrophobic. For me, the fulldome is the perfect space for this kind of topic since, unlike many other immersive media technologies, it is a social space where people can experience things together.
Latent Space is as much of a research project as it is an artwork. I wanted to explore what it really means to create images in this very specific space. First, I worked a lot with the actual, physical form of the dome, working with images that highlight the hemispherical shape of it.
This, combined with the horizon and the extension of the floor you get in a 210-degree dome like the Satosphere, can create a very powerful physical effect. I think all of these things can only be achieved in the dome, so I feel like Latent Space can only exist in this space.
Derivative: Can you take us through the practical part of creating and performing this work with TouchDesigner?
MONOCOLOR: Latent Space consists of two banks of three grids that are stacked on top of each other. One grid per bank is used for the rows, one for the columns and one for the particles. I can control the displacement, opacity, lighting and transform parameters for each of these grids. The idea is to create an abstract space that is constantly changing, morphing and mutating, so there is no fade to black during the performance. For me, creating a highly dynamic and fluid piece is very important. On stage, I basically blend between different states of this space to create a coherent composition in realtime.
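Blending between stored states of a parameter system like this boils down to interpolating each parameter between two snapshots. A minimal sketch in Python (TouchDesigner's scripting language) - the parameter names and values here are invented for illustration, not taken from the actual Latent Space patch:

```python
def blend_states(a: dict, b: dict, t: float) -> dict:
    """Linearly interpolate between two parameter snapshots (t in [0, 1])."""
    return {name: a[name] * (1.0 - t) + b[name] * t for name in a}

# Two hypothetical states of one grid in the system
state_a = {"displace": 0.2, "opacity": 1.0, "scale": 1.0}
state_b = {"displace": 0.8, "opacity": 0.4, "scale": 2.5}

# Halfway between the two states
halfway = blend_states(state_a, state_b, 0.5)
```

Sweeping `t` from 0 to 1 over time produces the continuous morph with no fade to black: the space is always in some weighted mixture of two states rather than jumping between them.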
Derivative: What was your inspiration for the sound, which is beautiful and gripping all on its own... and did the sound precede the visuals, or what was your process here?
MONOCOLOR: I am always very interested in the tension between analogue and digital aesthetics. This is something I seek out in my visual work, which very often shifts between architectural, rigid structures and organic textures. I try to use the same approach in my sound work. I work a lot with recordings that I then manipulate using various techniques to create sounds that almost feel synthesized, even though the basis of it all is a sample.
I wanted to keep the sonic level quite minimalistic for this work since I was aiming to create an almost hypnotic environment. Also, in a space this massive even small gestures can have a huge impact. The sound mainly consists of slowly shifting soundscapes and drones that, like the virtual spaces I am constructing on a visual level, morph into each other. Of course, another important aspect of working with sound in the fulldome is spatialization. Creating sonic spaces in conjunction with visual spaces can be an extremely powerful tool for making highly immersive works. This is also an area I want to investigate further in the future.
For Latent Space, the visual concept came first. When I knew that I wanted to work with a very restricted set of materials - lines and points - I created a set of different states of this system that all only used these two basic elements. Once I had created these building blocks, I started working on the sound. From then on I was going back and forth between sound and image. I usually like to work this way because this allows both sound and image to influence each other, since I always strive to create an audiovisual unit, rather than images that follow sound or vice versa.
From a technical standpoint, I run TouchDesigner on one machine and Ableton Live on another. The two computers are linked with a wired connection and communicate over OSC. The audio machine acts as the master, sending all the sound and control data, and the visual machine as the slave, meaning that I only need to interact and perform with the audio machine, which frees up a lot of mental capacity on stage. I tried to create a system that is very robust and stable yet also allows for experimentation and exploration on stage. This is a very difficult balance to strike and something I definitely want to improve in the future. For the OSC controls I use LiveGrabber, a great set of Max for Live devices that let you easily send OSC data out of Ableton. I actually haven’t looked into TDAbleton yet because this approach has worked great for me so far, but seeing what is possible with TDA makes me want to try it soon. Maybe for my next AV performance!
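As a rough illustration of what travels over that wired link, here is a hand-rolled encoder for a minimal OSC 1.0 message carrying a single float, following the padding rules of the OSC specification. The address `/grid/displace` is a made-up example, not one of the actual control addresses in this setup:

```python
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC 1.0 message with one 32-bit float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    # address string + type tag string ",f" + big-endian float32
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

packet = osc_message("/grid/displace", 0.5)  # ready to send over UDP
```

In practice, a library (or TouchDesigner's built-in OSC In operators and LiveGrabber on the Ableton side) handles this encoding; the point is just that each control value arrives as a small, self-describing UDP packet, which is what keeps the master/slave link so lightweight.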
Derivative: Can you talk about what is attractive to you about working in real-time, with audio and visuals at the same time and creating “immersive” experiences. And how do you find TouchDesigner to be useful in these pursuits?
MONOCOLOR: Working in realtime allows me to be very flexible. Of course, actually presenting something in realtime is not always necessary but it makes the whole process of creating feel very natural and fluid. I really love to work in a procedural way, building systems and processes that are predictable yet unpredictable at the same time.
The basic concept of parameter mapping, which is at the core of digital audiovisual work, is so deeply embedded in the software that it is very easy to create the kind of responsiveness I want. While I don’t think it is always about synchronicity (working with counterpoints can also be very interesting), having the possibility to be very tight and to have a very direct connection between sound and image is very important to me.
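At its simplest, parameter mapping of this kind is a range remap from a control signal to a visual parameter. A generic sketch - the specific ranges here are invented for illustration, and in TouchDesigner itself a Math CHOP's range parameters do the same job:

```python
def map_range(x: float, in_lo: float, in_hi: float,
              out_lo: float, out_hi: float, clamp: bool = True) -> float:
    """Remap x from [in_lo, in_hi] to [out_lo, out_hi], clamped by default."""
    t = (x - in_lo) / (in_hi - in_lo)
    if clamp:
        t = min(1.0, max(0.0, t))
    return out_lo + t * (out_hi - out_lo)

# e.g. map a normalized audio level (0..1) onto a displacement amount (0..4)
displacement = map_range(0.25, 0.0, 1.0, 0.0, 4.0)
```

Chaining a few of these mappings, with smoothing in between, is usually all it takes to get a tight, direct connection between an audio feature and a visual parameter.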
Derivative: Do you have projects in development you would like to mention here?
MONOCOLOR: I still feel like I have only scratched the surface of what’s possible in the dome. I am currently working on a new fulldome performance that builds and expands on both the conceptual and aesthetic qualities of Latent Space, which I hope to premiere in mid to late 2020. While the dome is great, it is also limiting in that you need such an elaborate technical apparatus to show the piece. AV performances for regular screens are much more versatile and adaptable to different situations and spaces.
I am also exploring more physical, sculptural works. I have been quietly working on some kinetic objects over the last few years and hope to build further on this practice in the coming months.
Derivative: What has been useful to you in learning TouchDesigner? How did you learn the software?
MONOCOLOR: After the beginner’s workshop I participated in, I just started experimenting and making stuff. For me, this is the best way to learn a new tool: just trying out different things and then troubleshooting once I hit a wall. The TouchDesigner summits in Berlin and Montreal have also been a great way to learn new approaches, as has the fantastic and hugely helpful online community.