Company Post

Commons, Cameras, and Code: Art as Resistance with Matthew Biederman

Montreal media artist Matthew Biederman has spent the past two decades wielding data, light, and open-source code as instruments of cultural resistance. An early TouchDesigner adopter, he has folded AI and machine learning into his projects, using GANs, computer-vision pipelines, and other generative models to critique surveillance, perception, and algorithmic control. In our conversation, Biederman unpacks the technical and conceptual engines behind his practice. We take a deep dive into Situational Compliance, the public sculpture that hijacks the childhood game “Simon Says” to expose the mechanics of pervasive surveillance, revealing in the process how effortlessly we surrender autonomy to the authority of machines.

Derivative: You have a prolific, fascinating and quite diverse art practice that explores themes of perception, media saturation, and data systems through video, performance, and installation work, as well as the concepts of science and community demonstrated in your Arctic Perspective Initiative. Can you share a bit about this trajectory, how your art practice has evolved, and the factors that influence the direction of your future work?

Matthew Biederman: I was always drawn to ‘new media’. When I was studying art as an undergraduate, I took a class in "intermedia – time-based art" and ended up majoring in that.

It appealed to me because it wasn’t just one medium; it was everything that had to do with time, so we had classes in electronic music composition, 16mm film, video, performance art, and stacks of 35mm slide projectors.

Even more important was a particular instructor. Her name was Mary Zerkel; she was a recent graduate of SAIC and had a very non-technical bent to her teaching, which was a lesson that stuck with me.

It’s never about the technology itself—it’s about what becomes possible, and the relationship between artist and viewer.

When I began almost 30 years ago, I thought video as an art medium was interesting since I was someone who grew up with television. So I ran with that for a long while. After graduating I moved to San Francisco and started working at Artists’ Television Access – an artist-run center dedicated to all sorts of underground and avant-garde video and film practices. We ran production classes and had a computer lab where local community members could do all sorts of projects. On Friday and Saturday nights there would be video screenings, expanded media performances, music+image performances, and so on.

Around this time I started getting a little bored with video-making and slaving over minutiae in editing, and discovered that some of the tools I used for electronic music composition were being expanded to video. I was able to translate that to real-time generative video and I haven't looked back. But, like I mentioned, it is never really about the technology in and of itself; it's about how we connect (or don’t) through technology, where it comes from, and how it's used in the world that interests me, not only the latest techniques.

That's how I ended up working in the Arctic and starting the Arctic Perspective Initiative (API) with a group of friends and colleagues. To keep it short: working in the north was a way to extend these ideas of autonomy through technology – the idea that if people have access to the latest technologies, with an understanding of how to use them for their own means, then we might understand each other a little better.

So all of these experiences have created what I now think of as the simultaneous trajectories of my own work – from audio-visual performance, to community-oriented "tactical media" practices, to more considered visual work that stems from an engagement with the art world and its ideas of beauty and philosophy, where I find inspiration.

I won't elaborate on my 15 years of working as a VJ in the club and rave scene in San Francisco, nor my time at SFMOMA caretaking the media art collection and installing new works there, but collectively these experiences all had a great influence on where I’m at today.

Derivative: AI plays a central role in a few of your recent works: Who is Afraid of Dreaming in Red, Yellow and Blue, Star Valley (Sirius), and most recently Situational Compliance, the interactive audiovisual installation you showed at MUTEK 2025, in which you've constructed a fun yet sinister new version of the game Simon Says, inviting participants to engage with AI-mediated gameplay.

Can you explain a bit about what interests you in working with Artificial Intelligence and Machine Learning, and how AI and ML are used as mediums in your art practice?

Matthew Biederman: I’ve been working with AI since about 2016 or so and showed my first GAN-based work in 2017. I think that AI is an oversized elephant to try and talk about at the moment, entangled with economic and ecological issues, societal biases, and inequalities. In a way it kind of reflects everything going awry in the world today, and oddly enough maybe that is its role at the moment – but of course Silicon Valley would like us to believe it’s going to save us when we have AGI. As an artist who regularly engages with technology and how it’s embedded in our world, I can’t not look at it.

Star Valley (Sirius) (in collaboration with Marko Peljhan) is a work that was basically an early LLM, but I trained it on a corpus of documents from the US military and NATO consisting of code names and their operation descriptions.  In the room are two 80,000 volt spark gap generators connected to this LLM.  One side generates, or imagines a new code name and the spark gap blasts it out over Morse code and then invents a description for it. The other spark gap picks up the transmission and generates a different description for the same code name, highlighting the machine's ability to continually imagine and reimagine even with the same inputs, in this case new descriptions for military operations.  

The work marries the oldest form of wireless communication, the spark gap, to the latest, highlighting how foundational technologies shape seismic shifts in human exchange.

The military-industrial complex birthed many of the technologies we celebrate in digital art; as artists, we should never lose sight of that lineage.

Another reason we chose the code names and descriptions is that in 1975 the DOD wrote software called NICKA, the “Code Word, Nickname, and Exercise Term System”. Essentially, it was a generative algorithm that created nicknames based on factors like location, branch, and prior use – which sounds like an artwork!
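A NICKA-like generator makes for a tiny thought experiment. The sketch below is pure invention – the word pools and selection rules are made up; only the branch and prior-use ideas come from the description above:

```python
import random

# Toy NICKA-style nickname generator (illustrative only; the word pools
# and selection rules are invented, not the DOD's actual system).
FIRST_WORDS = {
    "army": ["IRON", "VALIANT", "GRANITE"],
    "navy": ["COBALT", "TRIDENT", "HARBOR"],
    "air_force": ["FALCON", "ZENITH", "VAPOR"],
}
SECOND_WORDS = ["LION", "RESOLVE", "SENTINEL", "MIRAGE", "ANVIL"]

used: set[str] = set()  # prior-use check, as the article describes

def nickname(branch: str) -> str:
    """Draw a two-word nickname for a branch, avoiding prior use."""
    while True:
        name = f"{random.choice(FIRST_WORDS[branch])} {random.choice(SECOND_WORDS)}"
        if name not in used:
            used.add(name)
            return name

print(nickname("navy"))  # e.g. "TRIDENT MIRAGE"
```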

Who Is Afraid of Dreaming in Red, Yellow, and Blue? emerged just as diffusion-based generative models were arriving, and I became fascinated with pushing these models beyond their design. It was also a time when there was (and still is) considerable hand-wringing about AI’s potential to kill creative work.

AI can be a tool in art-making. No tool is ever neutral, but it is a tool nonetheless, and I was curious to engage with this particular aspect of it. It struck me that the notion that it would kill art echoed many art movements, beginning with Alexander Rodchenko’s basic color planes of 1921, where he painted separate red, yellow, and blue canvases and declared painting dead. Well, painting still isn’t dead, but so many artists have picked up the idea and made it their own over the years: Barnett Newman’s massive color "zips" titled Who’s Afraid of Red, Yellow and Blue, Robert Irwin’s massive red, yellow, and blue sculptures, both frightening and beautiful, and Tony Conrad's The Flicker, which reduces analog film to a binary (if you don’t count the title and warning frames), essentially "killing" that format as well.

So I thought, "Who Is Afraid of Dreaming?" – because the idea of AI “hallucination” was also in the zeitgeist at the time. I set out to create a video, or film if you prefer, using AI diffusion to produce an abstract piece the AI itself wouldn’t recognize as abstract, yet one that still bore my own hand. Using tools built by @dotsimulate and TouchDesigner, I was able to build a system that prompted and guided each frame as an homage to those earlier works. I invited Pierce Warnecke to create a soundtrack and I couldn’t be happier with the result.
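For readers who want a concrete picture of frame-by-frame guided diffusion, here is a rough standalone sketch using Hugging Face's diffusers library. This is an assumed stand-in for illustration; the actual work used @dotsimulate's tools inside TouchDesigner, not this code:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Sketch of prompt-guided, frame-by-frame diffusion: each output frame
# seeds the next, so the sequence drifts rather than jumping.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.new("RGB", (512, 512), color=(180, 30, 30))  # seed: a red field
prompts = ["flat red color field", "flat yellow color field",
           "flat blue color field"]

frames = []
for prompt in prompts:
    # Low strength keeps each frame close to the previous one.
    frame = pipe(prompt=prompt, image=frame, strength=0.35,
                 guidance_scale=7.5).images[0]
    frames.append(frame)

frames[0].save("frame_000.png")
```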

Derivative: What prompted you to start working with TouchDesigner and how has the software affected your work/workflow?

Matthew Biederman: TouchDesigner is such a great tool; anything you can think of, you can pretty much do in it. It’s the openness that is critical for me to have; it's more of a Lego-style building block for the digital world that is extendable to the physical world.

I grew tired of making linear videos and re-editing them until I no longer wanted to watch them.  TouchDesigner let me create rule-based artworks that still surprise me; it’s more of a creative partner. 

I never trained as a coder, but I have trained myself to code to make art, and TouchDesigner reduces friction and lets me focus on the ideas rather than debugging.  It rarely takes long to figure out something new, and that speed is invaluable.

Derivative: “Situational Compliance” places viewers inside a live, rule-based system of cameras, commands, and scores. For someone who hasn’t seen it, can you describe what actually happens when a participant approaches the piece and how the experience unfolds from start to finish?

Matthew Biederman: The sculpture resembles an urban security pole outfitted with cameras, antennas, and multiple displays. Two screens face the players, while a third addresses onlookers. Each player’s display shows a skeletal outline – how pose detection “sees” the body – along with a threat rating, voice-wave visualizations, and a running score. Spectators, however, are shown even more: cropped faces, “threat” ratings, statistics, recent pose snapshots, and a gimbal-camera feed that randomly tracks people in the crowd.

Situational Compliance turns a children’s game into a commentary on ubiquitous surveillance—you obey a machine before you even realize it.

Step into view and a voice explains the rules. The system issues a command, with or without "Simon says," evaluates your pose, updates your score, and assigns a threat level. The more faithfully you comply, the darker the experience becomes: instructions grow authoritarian ("kneel and put your hands behind your head"), and the soundtrack turns menacing. I'd hoped players would balk, but most try to "win," surrendering autonomy to the algorithm.

Once a compliance or threat threshold is reached, the LEDs flash statistics and a final message: "Thank you for your Compliance" or "Your recalcitrance has been recorded." Walking away, hopefully, you realize you've just obeyed a machine without question – or you realized it somewhere along the way while ‘playing’ and exercised your autonomy. The work comes out of a desire to invert the usual interactive setup: rather than a visitor moving in front of a screen and the work reacting to that motion, I wanted to make a piece that directed the visitors. AI, surveillance, and tracking all added up to Situational Compliance.
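A minimal sketch of that round logic – command with or without "Simon says," pose check, score and threat updates, and threshold-triggered endings. All commands, weights, and thresholds here are invented for illustration, not the installation's actual values:

```python
import random

# Toy version of the Simon Says compliance loop described above.
COMMANDS = ["raise your arms", "kneel and put your hands behind your head",
            "face the camera", "stand still"]

def play_round(score: int, threat: int, detected_pose: str) -> tuple[int, int]:
    simon = random.random() < 0.7            # most commands carry "Simon says"
    command = random.choice(COMMANDS)
    print(("Simon says: " if simon else "") + command)

    complied = detected_pose == command      # the pose classifier's verdict
    if simon and complied:
        score += 10                          # obeyed a legitimate command
    elif not simon and complied:
        threat += 5                          # obeyed when you shouldn't have
    elif simon and not complied:
        threat += 10                         # refused a direct order
    return score, threat

score, threat = 0, 0
while score < 50 and threat < 30:
    # A real system would read the pose from the camera; we fake it here.
    score, threat = play_round(score, threat, random.choice(COMMANDS))

print("Thank you for your Compliance" if score >= 50
      else "Your recalcitrance has been recorded")
```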

Derivative: Situational Compliance frames surveillance through play. What inspired you to take this childhood game as the structure for examining such a serious theme?

Matthew Biederman: My work and research in AI and computer vision converged. I wanted to reverse the usual dynamic: instead of viewers waving their arms so the computer reacts, the computer tells the visitor what to do. It is also, simultaneously, a way to talk about how these systems perceive us and about their proliferation to devices as common as doorbells. I also like taking current technology and translating it into terms everyone can understand, much like the old idea of a TV in everyone’s home. See for example The Paper Cup Telephone Network (PCTN).

Using a children’s game to discuss surveillance and AI seemed perfect. I had been dreaming about this piece for years, and when the call came from Mois Multi+MUTEK to create a work for public space it was the perfect opportunity to finally do it.

Derivative: How does exposing the technical methods of the installation help audiences better understand real-world surveillance systems?

Matthew Biederman: AI, computer vision and surveillance technologies are typically hidden "black boxes".  I wanted something as exposed as possible: the cables connecting all the LED modules are exposed, the cameras are exposed, and how computer vision sees you is exposed. So you see yourself being watched and monitored, and hopefully the next time you see a little camera you will think about how you are being observed and who is observing you.

Derivative: Why was TouchDesigner the right environment for building a system that combines AI, computer vision, and real-time audiovisual feedback?

Matthew Biederman:  TouchDesigner is not only very deep but also intuitive and lets you create fully functioning projects very quickly.  For this project, TD was ideal for several reasons: it's the tool I know best, it runs Python scripts, and community-made components made the work possible. On the audio side, the ability to run VST synths directly within TouchDesigner kept everything clean and well integrated.

Derivative: Could you walk us through the use of computer vision in this work – how are “compliance” and “non-compliance” detected?

Matthew Biederman: The pipeline consists of a couple of VideoDeviceIn TOPs, one for each player, that send video over Syphon/Spout to a script running MediaPipe to detect keypoints. These keypoints are normalized and compared with an SVM trained on a set of poses. The script then returns the pose detected by the SVM; if it matches what “Simon” requested, the viewer is marked “compliant” or “non-compliant.” The keypoints are also sent back over OSC and used to process the camera inputs.
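As a rough standalone sketch of that pipeline – reading a webcam directly rather than receiving video over Syphon/Spout, and assuming a pre-trained classifier on disk (see the training sketch further down) – it might look like this:

```python
import cv2
import joblib
import numpy as np
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

# Sketch: camera -> MediaPipe keypoints -> SVM pose class -> compliance
# verdict and keypoints sent back out over OSC. File name, port, and the
# "requested" command are assumptions for illustration.
clf = joblib.load("pose_svm.joblib")
osc = SimpleUDPClient("127.0.0.1", 7000)     # e.g. an OSC In in TouchDesigner
pose = mp.solutions.pose.Pose()

def normalize(landmarks) -> np.ndarray:
    """Center keypoints on the hips and scale so poses compare across bodies."""
    pts = np.array([[lm.x, lm.y] for lm in landmarks])
    pts -= pts[23:25].mean(axis=0)           # landmarks 23/24 are the hips
    scale = np.linalg.norm(pts, axis=1).max() or 1.0
    return (pts / scale).flatten()

cap = cv2.VideoCapture(0)
requested = "raise your arms"                # what "Simon" just asked for
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        feats = normalize(result.pose_landmarks.landmark)
        label = clf.predict([feats])[0]
        osc.send_message("/player1/pose", label)
        osc.send_message("/player1/compliant", int(label == requested))
        osc.send_message("/player1/keypoints", feats.tolist())
```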

Derivative: Simon reacts to the player's actions, changing the course of the game – Simon might even get a bit testy with participants when they stumble along. Additionally, various analytics are shown in what feels like an intentional overload of players and onlookers with information. How did you implement this game logic and the management of all the infographics?

Matthew Biederman: I couldn’t have done it without AlphaMoonbase's BananaMash Finite State Machine component, an exceptionally functional state machine that lets you build states and their connections graphically and use callbacks to trigger actions when states are entered, exited, or transitioned between. Without this component, handling the logic of the entire system would have been much more difficult. Once this was functional, I could use a series of components that were triggered through the system. Graphically, it was important to use only the keypoints, extracting and rearranging the user's body based on those keypoints to highlight how these systems perceive us and to share that with viewers.
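In plain Python, the state-machine pattern that component provides graphically looks something like this minimal sketch with enter/exit callbacks (the states and events below are invented; this is not the BananaMash component itself):

```python
# Minimal finite state machine with enter/exit callbacks.
class StateMachine:
    def __init__(self):
        self.handlers = {}      # state -> {"enter": fn, "exit": fn}
        self.transitions = {}   # (state, event) -> next state
        self.state = None

    def add_state(self, name, on_enter=None, on_exit=None):
        self.handlers[name] = {"enter": on_enter, "exit": on_exit}

    def add_transition(self, src, event, dst):
        self.transitions[(src, event)] = dst

    def fire(self, event):
        dst = self.transitions.get((self.state, event))
        if dst is None:
            return                              # event not valid in this state
        if self.state and self.handlers[self.state]["exit"]:
            self.handlers[self.state]["exit"]()
        self.state = dst
        if self.handlers[dst]["enter"]:
            self.handlers[dst]["enter"]()

fsm = StateMachine()
fsm.add_state("attract", on_enter=lambda: print("show attract loop"))
fsm.add_state("command", on_enter=lambda: print("issue Simon command"))
fsm.add_state("evaluate", on_enter=lambda: print("score the pose"))
fsm.add_state("verdict", on_enter=lambda: print("flash final message"))
fsm.add_transition("attract", "player_detected", "command")
fsm.add_transition("command", "pose_window_closed", "evaluate")
fsm.add_transition("evaluate", "continue", "command")
fsm.add_transition("evaluate", "threshold_reached", "verdict")

fsm.state = "attract"
fsm.fire("player_detected")   # -> "issue Simon command"
```

A graphical FSM editor adds the visual wiring and TouchDesigner callbacks on top of this same enter/exit/transition pattern.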

Derivative: From a sensor standpoint the setup is fairly simple using video cameras and MediaPipe to track players and onlookers as well as to get skeleton data. But while MediaPipe can return positions, there must have been a whole process of actually recognizing player poses. Can you explain a bit what was necessary for this to work outside and inside of TouchDesigner?

Matthew Biederman: I tried it a couple of different ways. The prototype version used a script that didn’t use MediaPipe at all. It used a set of different libraries and a trained YOLO model, which then used the shared CUDA TOPArray by intentDev, with the script running inside a Script TOP. It worked very well, but it was heavy and slow (because of the TensorFlow implementation, not TD); even with a desktop 4090, the best I could get was about 15 fps... workable, but I had hoped for better.

I met with a programming consultant at the SAT in Montreal who explained that my approach didn’t need such a complex deep-learning model. Instead, I could use MediaPipe and an SVM, which train and run very quickly, even with multiple detections. This let me support multiple people, exactly as I had envisioned.

Both versions do require training data, so I played a lot of Simon Says in the studio to supply enough of it. MediaPipe’s robustness really helped: it works flawlessly even in challenging or changing lighting conditions, and it sticks to people without jumping around, which had been another issue with the YOLO implementation.
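Training such a pose classifier is quick with scikit-learn. In this sketch the file layout and labels are assumptions, and the feature vectors are presumed to be normalized keypoints like those in the detection sketch above:

```python
import glob
import joblib
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Sketch of training the pose SVM from recorded keypoint sessions.
# Assumes .npy files of normalized keypoint vectors, one folder per pose
# label (e.g. data/raise_your_arms/*.npy) -- the layout is invented here.
X, y = [], []
for path in glob.glob("data/*/*.npy"):
    label = path.split("/")[-2]          # folder name is the pose label
    X.append(np.load(path))
    y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", probability=True)    # small model, trains in seconds
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

joblib.dump(clf, "pose_svm.joblib")          # loaded by the detection loop
```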

Derivative: How did you design the audiovisual layer in TouchDesigner to achieve such a convincing sense of surveillance?

Matthew Biederman: I knew from the beginning that I wanted to use many different screens and have this sort of dystopian ‘light pole’ look. I had some LED modules left over from a previous project, so TouchDesigner’s ability to quickly and easily map its outputs to all these different panels, enabling rapid prototyping and adjustments, was a major advantage. For the particular ‘look’ of things, I kept trying to be as basic as possible and not to overly ‘gamify’ it. While it looks like a game and plays like a game, at the end of the day it's not a game, so that kept me focused.
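TouchDesigner handles this natively with TOP crops and pixel mapping; as a generic illustration of the idea outside TD (panel names and geometry are invented), the mapping amounts to slicing one composed frame into per-panel buffers:

```python
import numpy as np

# Generic illustration of pixel-mapping one rendered frame across several
# LED panels. Panel positions/sizes are invented for this sketch.
frame = np.zeros((256, 512, 3), dtype=np.uint8)   # the composed output image

PANELS = {                     # name -> (x, y, width, height) in the frame
    "player_left":  (0,   0, 128, 256),
    "player_right": (384, 0, 128, 256),
    "spectator":    (128, 0, 256, 128),
}

def panel_buffers(frame: np.ndarray) -> dict[str, bytes]:
    """Crop each panel's region and serialize it for its LED controller."""
    out = {}
    for name, (x, y, w, h) in PANELS.items():
        out[name] = frame[y:y + h, x:x + w].tobytes()
    return out

buffers = panel_buffers(frame)
print({k: len(v) for k, v in buffers.items()})    # bytes per panel
```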

As I mentioned above, the audio (programmed by Lucas Paris) is all set up to run generatively/interactively through VSTs, along with a set of voice prompts that I scripted and had output from ElevenLabs. This is further processed to sound a fair bit like the voice in Jean-Luc Godard’s "Alphaville", helping to convey the authoritarian feel.

Visually, I just wanted to display statistics about the interaction. My least-favorite moments in films are when a hacker shows up and an over-the-top, purely CGI interface appears, so I definitely wanted to go in the opposite direction. The lower-resolution LEDs were a great obstruction to work with (see Lars von Trier’s “The Five Obstructions”).

I really had a limited palette to work with. Being able to design directly on the LEDs with pixel-mapped graphics in real time through TouchDesigner was extremely flexible and helped define the overall look.

I also looked at the interfaces used in law enforcement, surveillance, and data processing and tried to mimic them as much as possible. I used the Time Machine TOP extensively to store different values collected in a series of DAT tables.

Derivative: How have you found the typical response when people realize their movements are being tracked and judged, and what kind of conversations or reflections do you hope audiences take away after engaging with Situational Compliance?

Matthew Biederman: Some people treat it entirely as a game – they do the poses I'd included that I hoped no one would do.

It’s funny that people really want to “win,” yet when I created this, I always felt the real way to win was to walk away, remain as non-compliant as possible, and tell the machine to go to hell. 

I've seen some people dance, make up their own poses, turn around so as not to face the machine – little acts of rebellion – and when I see those, I know they get it.

The people who actually “win”, performing enough poses in succession to satisfy the machine, make a statement, as do the statistics about cameras in public spaces worldwide. I hope it sparks debate about the camera networks watching us and shows that the “AI takeover” is already here. Having complied in this safe setting, viewers might react differently in the real world.

Derivative: What's coming up on your horizons?

Matthew Biederman: I’m working on a couple of new performance pieces, both developed with long-time collaborator Pierce Warnecke. One uses AI very directly and is called IRRRI – the Intertemporal Research Retrieval and Reflection Institute. StyleGAN models for both audio and visual output will be used alongside TouchDesigner, which will handle all the processing of the models’ internals.

The other is a continuation of a work we started a few years ago in Northern Portugal that addresses environmental concerns and ideas around technology and the environment. We worked with local activists to map out large portions of an ancient collective cattle-herding area that was going to become a lithium mine. We created massive point clouds of the area through aerial photography; Pierce wrote a composition for electronics and a local quartet of musicians, and I did real-time processing of the point clouds along with their music.

While researching that work we found an old BYTE magazine article in which the idea of using topology as an instrument for synthesis is described. We fed the 3D scans and their height data into TouchDesigner to synthesize audio for the piece, working with the land instead of simply extracting from it. We wanted to push back against the extractive mindset that’s still common, even in the arts.
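The terrain-as-instrument idea can be sketched in a few lines of Python: scan a row of height data and loop it as a waveform. This is a generic illustration with a synthetic heightmap, not the piece's actual TouchDesigner network:

```python
import wave
import numpy as np

# Terrain sonification sketch: treat one row of a heightmap as a repeating
# wavetable. The "terrain" here is synthetic; the piece used real
# point-cloud height data inside TouchDesigner.
rate = 44100
x = np.linspace(0, 1, 2048)
heights = np.sin(2 * np.pi * 3 * x) + 0.3 * np.random.randn(2048)

wavetable = heights - heights.mean()
wavetable /= np.abs(wavetable).max()              # normalize to [-1, 1]

seconds, freq = 3.0, 110.0                        # play the row at 110 Hz
phase = np.arange(int(rate * seconds)) * freq * len(wavetable) / rate
samples = wavetable[(phase % len(wavetable)).astype(int)]

with wave.open("terrain.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                             # 16-bit PCM
    f.setframerate(rate)
    f.writeframes((samples * 32767).astype(np.int16).tobytes())
```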

We were recently invited by the Skaņu Mežs festival in Riga, Latvia, to create a follow-up piece, so we scaled things down. I’ve been experimenting with Gaussian splatting ever since Tim Gerritsen released his TouchDesigner renderer, and we built a large database of macrophotographed peat-bog scenes and nearby environments in eastern Latvia – bogs that are among Europe’s largest carbon sinks, yet are still harvested for fertilizer. The landscape is otherworldly, formed over hundreds of years but dug up in weeks. The new work, Phytomorphic Topologies, “plays” the land – moss and fungus alike – both aurally and visually.
 

Derivative: What emerging technologies have you excited in terms of being useful in your work?

Matthew Biederman: I’m excited by a wide range of technologies and am constantly reading papers, both on the technology itself and critiques of it, considering them in direct lineage with one another – wondering, for instance, how radio still connects to AI. Photogrammetry’s boom fascinates me: from remote sensing using UAVs, as we did in the Arctic in 2004, to NeRFs, real-time point-cloud manipulation, and now Gaussians (4D Gaussians are next).

AI is the 9000-pound gorilla in the room, so I'll keep learning and using it; while it’s become as general as “computer,” certain cases remain compelling.

I’m still driven by what it means for a computer to synthesize images from vast sets of statistics, and what that means for artists and humanity. The key question is who owns the models and the compute; I think we really need to reclaim the idea of a commons again.

 

Follow Matthew Biederman: Website | Vimeo | Instagram