TouchDesigner Developer – Audio-Reactive Rendering Engine
I’m looking for a highly skilled TouchDesigner developer to build a structured, automated audio-reactive visual engine.
This is not a live VJ setup or an experimental art piece. This is a clean, scalable rendering system designed to run in the background and integrate with external systems.
If you enjoy building robust, modular TouchDesigner systems that can operate headless and be triggered programmatically — this is for you.
What I’m Looking For
- Advanced TouchDesigner experience (production-level)
- Strong understanding of audio analysis:
  - Frequency bands
  - RMS / energy
  - Transients
  - Structured parameter mapping
- Experience building automated render workflows
- Comfortable with:
  - Python inside TouchDesigner
  - Command-line launches
  - Config-driven setups (JSON or similar)
- Ability to create clean, modular .toe files built for scalability
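For reference, the kind of audio analysis meant here can be sketched in plain Python/NumPy. Inside TouchDesigner these values would more typically come from built-in CHOPs (e.g. Audio Spectrum / Analyze) feeding a Script CHOP; the function names below are illustrative, not part of any TD API:

```python
import numpy as np

def rms(samples: np.ndarray) -> float:
    """Root-mean-square energy of one audio buffer."""
    return float(np.sqrt(np.mean(samples ** 2)))

def band_energy(samples: np.ndarray, sample_rate: int,
                lo_hz: float, hi_hz: float) -> float:
    """Energy within one frequency band, via an FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mask = (freqs >= lo_hz) & (freqs < hi_hz)
    return float(np.sum(spectrum[mask] ** 2))

def is_transient(prev_rms: float, cur_rms: float,
                 threshold: float = 1.5) -> bool:
    """Crude transient detector: a sudden jump in energy between frames."""
    return cur_rms > prev_rms * threshold
```

Values like these would then be smoothed and remapped onto visual parameters, which is the "structured parameter mapping" part of the brief.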
Bonus:
- Experience integrating TD with backend systems or APIs
- Strong GPU optimization mindset
- Interest in electronic music / visual culture
The Goal
Build a stable, automated system where:
- Audio input drives dynamic visual parameters
- Rendering is triggered programmatically
- Output is consistent and production-ready
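As a sketch of the config-driven side, a render job might be described in JSON and validated in Python before it touches the network. All field names here are assumptions for illustration, not a required schema:

```python
import json

# Hypothetical render-job config; every key name is illustrative.
CONFIG = """
{
  "audio_file": "input/track.wav",
  "output_dir": "renders/",
  "resolution": [1920, 1080],
  "fps": 60,
  "mappings": {
    "low_band": {"target": "bloom_intensity", "range": [0.0, 2.0]},
    "rms":      {"target": "camera_shake",    "range": [0.0, 0.5]}
  }
}
"""

def load_job(raw: str) -> dict:
    """Parse a render-job config and fail fast on missing keys."""
    cfg = json.loads(raw)
    for key in ("audio_file", "output_dir", "fps"):
        if key not in cfg:
            raise ValueError(f"missing config key: {key}")
    return cfg

job = load_job(CONFIG)
```

Inside TouchDesigner, a startup script (e.g. in an Execute DAT) could load a file like this and push each mapping onto component parameters; an external system can then trigger renders programmatically by writing a new config and launching the .toe from the command line.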
No AI model training required. No generative video pipelines. Just intelligent, well-structured audio-reactive systems.
This is an early-stage build with clear direction and room for longer-term collaboration if it’s a strong fit.
Please share:
- Relevant TouchDesigner projects
- Examples of automation / headless workflows
- Availability
- Hourly or project rate
If this sounds like you, reach out directly:
WhatsApp: +61 412 391 290