
Pauric Freeman On Working with Sound, Image, Input Devices and Human Perception

We recently caught up with Dublin-based creative developer and sound and media artist Pauric Freeman, who first came to our attention via his very compelling series of live a/v Instagram posts made with TouchDesigner and a Eurorack. His career path, like that of many artists in this community, started with playing and programming electronic music, which led to working with synthesizers and generative graphics, and eventually to designing interactive installations, mobile applications, and various interactive projects for clients and studios. We are very pleased to share with you our recent conversation.

Derivative: Can you tell us a bit about yourself and your artistic and professional background? How did you get into this field?

Pauric Freeman: My name is Pauric Freeman; I am a creative developer and media artist based in Dublin. In my professional work I freelance with different studios and clients, designing and developing interactive experiences for mobile, web, installations, and AR & VR. As an artist I explore sound and visual perception through generative systems and audio-visual performance.

I got into media technology through music. I started collecting records at age thirteen, began DJing shortly afterwards, and eventually moved into synthesis and sound production. Growing up in rural Ireland at that time I had little access to art, so it wasn’t until I started college that I was exposed to other areas that interested me. I was introduced to interaction design through programming applications with Flash and playing with Arduinos and input devices. During my master’s I started working with Processing and studying abstract animation, visual music, psychoacoustics, and musique concrète, which in turn inspired an interest in visualising sound and the psychology behind audio-visual experience.

After my master’s I moved to New York for a year and then Melbourne for two years. During this time I was collecting a lot of synthesizers and playing around with Processing, Max/MSP and MIDI data, using them to generate visuals in realtime and occasionally creating artworks for installations and festivals, later focusing on live audio-visual projects. I was also working in production and media art installation at venues like the National Gallery of Victoria, but I soon realised I wanted to move more into the development side of things. When I returned to Ireland I began freelancing as a creative developer.

Derivative: How did you encounter TouchDesigner and how does it complement your toolkit? What have you been using it for?

Pauric Freeman: I started playing around with TouchDesigner at the end of 2019 as a way to prototype installations, but it quickly became my software of choice for audio-visual performance, commercial projects, and teaching.

In my live performances, I use it to create interaction between sound and visuals. As seen in the clips I share online, I create music with my modular synth and Ableton, which then feeds into TD to generate visuals that respond in realtime to what I am playing. I use colour, shapes, texture and motion to visually represent the music. The flexibility of the system is very important: I can easily switch my input data between CV (control voltage), MIDI, TDAbleton, or audio inputs.
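
To give a concrete sense of that flexibility, here is a minimal sketch of the kind of source switching described above, written as TouchDesigner Python. The operator names ('switch1', 'level1') and the callback wiring are illustrative assumptions, not taken from Pauric's project files.

```python
# Minimal TouchDesigner sketch of input-source switching, not the actual
# performance network. Assumes a Switch CHOP named 'switch1' whose inputs
# are, in order, CV (via a DC-coupled interface), MIDI, TDAbleton, and an
# audio analysis chain, all normalized to 0..1.

SOURCES = ['cv', 'midi', 'tdableton', 'audio']

def set_source(name):
    # Point the Switch CHOP at the chosen input chain; everything
    # downstream keeps working regardless of where the data comes from.
    op('switch1').par.index = SOURCES.index(name)

# CHOP Execute DAT callback: map the active control channel to a visual
# parameter, e.g. the opacity of a Level TOP in a feedback chain.
def onValueChange(channel, sampleIndex, val, prev):
    op('level1').par.opacity = max(0.0, min(1.0, val))
    return
```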

Designing in TouchDesigner is a very fluid experience which makes working with the technology enjoyable; I can easily spend hours getting lost in a network and that’s an important part of the process for me.

In my professional work I use TouchDesigner to build unique user experiences. To achieve this I work a lot with input devices like Kinect depth cameras, Leap Motions, webcams, microphones etc. It’s so quick to prototype and develop ideas in TD that it’s become my platform of choice. Each project is different and sometimes you are forced to work with a specific engine or platform to build an experience, but if the project can be developed in TD it means quicker development time which is crucial for commercial projects.

Derivative: We first came to know you via your very popular series of modular synth/TouchDesigner/Ableton audio-visualisation Instagram posts. Curious to know how that series came about, how these works are created, and how you are getting data to and from TouchDesigner?

Pauric Freeman: The videos are excerpts from my live performance exploring cognition and perceptual experience. Focusing on elements like motion, temporality, punctuation and repetition within the sound, and translating those into the visuals, gives you a lot of room to bend and twist the perception of what viewers are experiencing. Because the sound and visuals are actually separate entities, you have to work carefully to merge the two in the viewer’s perception. Of course, there is a limit and, beyond that, you can lose the believability of the experience, but I like playing with that to see how far it can be pushed.

At first I was using CV to MIDI converters to transfer data between the synth and the computer, but it was quite a convoluted process, so I started using DC-coupled audio interfaces. I use the Expert Sleepers modules for this and it’s great, as I can input CV and audio directly into TD as CHOP data from my Eurorack synth. I use this alongside the TDAbleton package, and I have edited the TDAbleton Max patches a little to get additional data from TDAbleton that matches my modular, like triggers and pitch as a continuous value. This makes it easier for me to switch between data sources during the creation process.
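
As a rough illustration of what that CV looks like once it arrives as CHOP data, here is a small sketch converting raw samples from a DC-coupled interface into volts and 1V/oct pitch. The ±10 V full-scale figure and the operator name are assumptions; calibration varies by interface.

```python
# Sketch only: scaling CV read through an Audio Device In CHOP ('cv_in'
# is an assumed name). A sample value of 1.0 is assumed to equal +10 V;
# calibrate against your own hardware.

VOLTS_FULL_SCALE = 10.0

def cv_volts(channel_index=0):
    # Raw CHOP samples arrive in the -1..1 audio range.
    return op('cv_in')[channel_index].eval() * VOLTS_FULL_SCALE

def pitch_hz(volts, base_hz=261.63):
    # 1 V/oct: each additional volt doubles the frequency
    # (the middle-C base frequency is an arbitrary choice).
    return base_hz * (2.0 ** volts)
```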

Derivative: You've been performing live recently, can you tell us more about the performances and how you approach them?

Pauric Freeman: I had two audio-visual performances at the end of last year. The first, at the Light Moves festival in Ireland, was in support of Beatrice Dillon, which was amazing; her work is really incredible. Following that, I had a performance at The Complex Gallery in Dublin for Dublin Modular. The performance is constantly a work in progress and takes quite a bit of preparation each time, but at its core it is about designing an audio-visual space and navigating through it.

I work on the overall theme and direction first and then build a body of sound around this, before applying a loose structure for how it will progress and move through sections. I then design visuals in TouchDesigner and control them with data from the synth and Ableton. I’ve usually got a lot to choose from here, like triggers, gates, pitch, LFOs, S&Hs, FFT analysis, the audio waveform, etc.

I’m interested in human perception, how tangible our cognitive processes are and how you can shape them with the right sensory information, so choosing the right data source plays an important role in this.

Generating graphics with an audio waveform vs a gate signal will have very different results, so it’s about designing the experience and carefully selecting data that will help you achieve that.
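
A small sketch of that difference, with hypothetical operator and channel names: a gate is treated as a discrete event that punctuates the image, while a smoothed audio envelope drives continuous motion.

```python
# Illustrative TouchDesigner CHOP Execute DAT callback, not from the
# performance patch. 'gate' and 'envelope' are assumed channel names;
# 'noise1' is an assumed Noise TOP.

def onValueChange(channel, sampleIndex, val, prev):
    if channel.name == 'gate':
        # Gate: a rising edge is a discrete event -> hard visual cuts.
        if val > 0.5 and prev <= 0.5:
            op('noise1').par.seed = int(absTime.frame)
    elif channel.name == 'envelope':
        # Envelope follower: continuous energy -> smooth, breathing motion.
        op('noise1').par.amp = val
    return
```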

Derivative: Please tell us about the inspiration for River Poem, and could you detail your creation process?

Pauric Freeman: My colleague at the time, Dr. Jeneen Naji, approached me with the idea, as she was interested in exploring the work of James Joyce through contemporary technology and knew I had some experience with machine learning models. At the time, GPT-2 was the publicly available OpenAI language model, so I retrained it on Finnegans Wake, the final and most complex piece of writing by Joyce. I used the trained model to generate new Joycean prose, and with a bit of tweaking of the variables the results were pretty fascinating.
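
For readers curious what that looks like in practice, here is a minimal generation sketch using the Hugging Face transformers library. The checkpoint path is a placeholder for a locally fine-tuned GPT-2 model, and the sampling settings are illustrative rather than the ones used for River Poem.

```python
# Hypothetical sketch: generating Joycean prose from a GPT-2 checkpoint
# fine-tuned on Finnegans Wake. The model path is a placeholder.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('./gpt2-finnegans-wake')
model = GPT2LMHeadModel.from_pretrained('./gpt2-finnegans-wake')

prompt = "riverrun, past Eve and Adam's"  # the book's opening words
inputs = tokenizer(prompt, return_tensors='pt')

outputs = model.generate(
    **inputs,
    max_new_tokens=80,
    do_sample=True,
    temperature=0.9,   # "tweaking of the variables": higher = wilder prose
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```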

I developed a generative system in TouchDesigner to continuously and dynamically select sections from the generated text, creating an endless flow of generative poetry. This was projection-mapped onto a large 3D model of Dublin city and animated in realtime along the River Liffey. This referenced themes of flow and motion adapted from Joyce’s writing style, and the entire book is cyclic in nature (the last sentence recirculates back to the first), so the element of endless variation played an important role in representing that within the artwork.
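
The selection logic itself can be very small. A sketch of the idea, with the wrap-around standing in for the book recirculating on itself (the function name and span sizes are hypothetical):

```python
import random

def next_fragment(lines, max_span=3):
    # Pick a random contiguous run of generated lines; wrapping past the
    # end back to the start echoes the book's cyclic structure.
    n = random.randint(1, max_span)
    start = random.randrange(len(lines))
    return '\n'.join(lines[(start + i) % len(lines)] for i in range(n))
```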

Derivative: You are also teaching interactive media at the University of Melbourne. It would be interesting to know how you structure your courses and teach TouchDesigner and how the students respond and learn.

Pauric Freeman: I teach interactive media with the Fine Art Animation students at the University of Melbourne. The module helps students move from non-realtime animation to realtime systems, allowing their work to respond to user input. The lectures explore how we communicate with technology through interaction and the benefits of realtime animation. Through this lens we examine generative art, interactive installations, audio-visual performance, gaming environments, and augmented and virtual reality.

The tutorials are designed to allow the students to develop their own realtime projects. I teach them the fundamentals of TouchDesigner, creating visuals and using data as an animation source. I then introduce them to different input devices, starting with mouse input, then audio inputs, webcams, Leap Motions, Kinects, TDAbleton, etc.
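
A first-tutorial-style example of that starting point might look like the following: an Execute DAT frame callback moving a Circle TOP with Mouse In CHOP data. The operator names are assumptions for illustration.

```python
# Sketch of a first lesson: drive a visual directly from mouse CHOP data.
# 'mousein1' (Mouse In CHOP) and 'circle1' (Circle TOP) are assumed names.

def onFrameStart(frame):
    op('circle1').par.centerx = op('mousein1')['tx'].eval()
    op('circle1').par.centery = op('mousein1')['ty'].eval()
    return
```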

Derivative: Have your students surprised you much?

Pauric Freeman: Yes, massively. One week I’m teaching them what CHOP data is with mouse input, and the next they come back showing me spatial synthesizers they’ve made with a Kinect.

I think one of the strongest points for TouchDesigner is the consistency in how different input devices work; if you understand how to input data from a mouse you can easily work with data from a Leap Motion, Kinect, or even a modular synth.

The students understand that very quickly, and it’s a great way to introduce them to interaction design.
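
That consistency can be made explicit in code. A sketch of the pattern, with all operator and channel names and working ranges as assumptions: once each device's channel is normalized to 0..1, the downstream mapping never changes.

```python
# Device-agnostic input sketch; names and ranges are illustrative only.
RANGES = {
    ('mousein1', 'tx'): (0.0, 1.0),            # Mouse In CHOP
    ('leapmotion1', 'hand1:x'): (-0.2, 0.2),   # Leap Motion, metres (assumed)
    ('kinect1', 'p1/hand_r:tx'): (-1.0, 1.0),  # Kinect skeleton (assumed)
}

def normalized(opname, chan):
    # Map any device's channel into 0..1 so downstream visuals never
    # need to know where the data came from.
    lo, hi = RANGES[(opname, chan)]
    val = op(opname)[chan].eval()
    return min(1.0, max(0.0, (val - lo) / (hi - lo)))
```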

Derivative: Are there any new technologies or developments in our field that really excite you right now?

Pauric Freeman: I’m very interested in the tools we use to make art and media and how we interact with them, and I think there’s a lot more to be explored in the area of machine learning and interface design. Projects like Never Before Heard Sounds, the technology behind the Holly+ project, offer realtime timbral transfer across instruments or performers, allowing musicians to interact with voices in a way not previously available to them. I’m interested to see more applications like this in our field.

Derivative: What do you have on the horizon?

Pauric Freeman: I have two performances coming up at the end of summer: the first in Kassel, Germany during the documenta 15 festival, and the other in Dublin in September. The Kassel performance has a unique projection setup, so I’m developing site-specific work for that. I’m also just back from two months in Melbourne where I was teaching and co-curating an exhibition of audio-visual works, which is happening in July during Open House Melbourne. There are also some collaborations with sound artists and visual designers, which will be ready to show later in the year. I’ll be based out of central Europe for the next few months, so I’m looking forward to a new environment for the next while.

Follow Pauric Freeman: Website | Instagram
