https://www.youtube.com/watch?v=tPMSUUKUDSA Bridging the realms of sound, live performance, and literature, I've created a system in TouchDesigner that transforms real-time audio data into unique AI-generated poetry, recited aloud as it is generated.
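As a rough sketch of the real-time loop, not the project's actual code: an audio-analysis CHOP value can steer the wording of the GPT prompt. The operator name, channel name, and mood mapping below are all hypothetical.

    # Hypothetical sketch: fold a live audio level into a poetry prompt.
    # 'analysis1' is assumed to be a CHOP exposing a normalized 'rms' channel.
    def build_prompt():
        level = op('analysis1')['rms'].eval()
        mood = 'quiet and sparse' if level < 0.3 else 'loud and ecstatic'
        return f"Write two lines of {mood} poetry about the sound in this room."

The returned prompt can then go to the GPT call (see the sketch under the next entry), and the generated text to a text-to-speech engine for the recitation.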
You’ve probably heard about OpenAI’s GPT-3 or its sibling, ChatGPT, as they’ve been making headlines recently for their ability to generate convincingly human-like text.
Generating Text with GPT3 in TouchDesigner
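For context, here is a minimal sketch of the kind of request involved, assuming the pre-1.0 openai Python package (the GPT-3-era API) is installed for TouchDesigner's Python; the operator name at the end is hypothetical.

    # Minimal sketch, not the tutorial's exact code.
    import openai

    openai.api_key = 'YOUR_API_KEY'  # placeholder

    def generate(prompt):
        response = openai.Completion.create(
            engine='text-davinci-003',  # one GPT-3 model; the choice is an assumption
            prompt=prompt,
            max_tokens=100,
            temperature=0.9,
        )
        return response.choices[0].text.strip()

    # e.g. drop the result into a Text DAT named 'result'
    op('result').text = generate('Describe a sunrise in one sentence.')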
Hey! In this tutorial we'll go over two new components I developed to run OpenAI's Whisper (speech to text) and ChatGPT within TouchDesigner. The components work without any setup; just add your OpenAI API key.
Custom ChatGPT and Whisper (Speech to Text) Plugins for TouchDesigner
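The components need no code of your own, but as a rough sketch of the two API calls they presumably wrap (pre-1.0 openai package assumed; the file path is a placeholder):

    import openai

    openai.api_key = 'YOUR_API_KEY'  # placeholder

    # Speech to text: transcribe a recorded clip with Whisper
    with open('speech.wav', 'rb') as f:
        transcript = openai.Audio.transcribe('whisper-1', f)

    # Send the transcript to ChatGPT and read back the reply
    reply = openai.ChatCompletion.create(
        model='gpt-3.5-turbo',
        messages=[{'role': 'user', 'content': transcript['text']}],
    )
    print(reply.choices[0].message.content)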
Hey! In this tutorial, we'll go over how to do video-to-video style transfer with Stable Diffusion using a custom component in TouchDesigner.
Video to Video AI Style Transfer with Stable Diffusion and Keyframing in TouchDesigner - Tutorial
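The tutorial itself uses a custom component, but the underlying technique is image-to-image diffusion applied frame by frame. Here is a standalone sketch with Hugging Face's diffusers library, where the checkpoint and strength value are assumptions:

    # Standalone sketch of per-frame img2img style transfer.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        'runwayml/stable-diffusion-v1-5',  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to('cuda')

    def stylize(frame_path, prompt, strength=0.5):
        frame = Image.open(frame_path).convert('RGB')
        # lower strength keeps more of the source frame, which reduces flicker
        return pipe(prompt=prompt, image=frame, strength=strength).images[0]

    stylize('frame_0001.png', 'a watercolor painting of a city street').save('out_0001.png')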
Hey! In this tutorial, we'll go over how to use Stable Diffusion with a custom component to generate audio-reactive animations in TouchDesigner. Runs on Mac + PC without the need for a fancy GPU.
Audio Reactive Animations with Stable Diffusion and TouchDesigner
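As a sketch of the audio-reactive hookup only, not the component itself: a CHOP Execute DAT can push the incoming audio level into whatever parameter drives the diffusion. The operator and parameter names here are hypothetical.

    # CHOP Execute DAT sketch: map an audio level onto a custom
    # 'Strength' parameter of a Stable Diffusion component.
    def onValueChange(channel, sampleIndex, val, prev):
        # val is the current audio level, assumed normalized to 0..1
        op('sd_component').par.Strength = 0.3 + val * 0.5
        return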
Hey! In this tutorial, we'll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect. The project file is available on my Patreon: https://patreon.com/blankensmithing
Generate AI Images with Stable Diffusion + Audio Reactive Particle Effects - TouchDesigner Tutorial Part 2
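For the audio-reactive particles, one idiomatic route is a parameter expression rather than a callback: typing a CHOP reference straight into, say, a Particle SOP's Birth parameter. The operator name is hypothetical.

    # Parameter expression sketch, typed into the Particle SOP's Birth field:
    op('audio_level')[0] * 500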
Hey! In this tutorial, we'll go over how to use Stable Diffusion with a custom component I created to generate images in TouchDesigner. The project supports two forms of input, prompt-based generation and image-to-image, so you can use any TOP in TouchDesigner as a starting point.
Generate AI Images with Stable Diffusion using Image to Image Generation with any TOP
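The "any TOP as a starting point" part comes down to getting pixels out of TouchDesigner. One way, a sketch rather than necessarily the component's method, is TOP.save(); the operator name is hypothetical.

    # Sketch: hand any TOP to an image-to-image pipeline by writing it to disk.
    # 'null1' is assumed to be the TOP holding the starting image.
    input_path = op('null1').save('img2img_input.jpg')  # returns the saved path
    # the file can then feed an img2img call (see the video-to-video sketch
    # above) or be uploaded to an image-generation API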
Through the use of OpenAI’s DALL-E 2 API and TouchDesigner, I’ve created a, let’s say, MIDI sequencer that captures the RGB data of AI-generated images in real time and uses it to trigger MIDI events in Ableton Live.
[Walkthrough] Transforming DALL-E 2 generated images into sound [MIDI events]
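As a sketch of the sampling step (all operator names hypothetical): TouchDesigner can read pixels out of a TOP and fire notes through a MIDI Out CHOP, which Ableton Live receives like any other MIDI device.

    # Sketch: sample one pixel of the incoming DALL-E 2 image and turn
    # its red value into a MIDI note.
    def fire_note():
        r, g, b, a = op('dalle_image').sample(x=0, y=0)  # pixel at (0, 0)
        note = int(r * 127)      # map red 0..1 onto MIDI notes 0..127
        velocity = int(g * 127)  # let green drive velocity
        op('midiout1').sendNoteOn(1, note, velocity=velocity)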
Running the TinyYOLO object detection model on the Oak-D camera, and sending detection data and video frames to TouchDesigner using Python-OSC and NDI-Python so they can be processed further and used in a creative audiovisual workflow.
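On the sending side, here is a sketch of the OSC half using the python-osc package; the address pattern and message layout are assumptions. An OSC In CHOP or DAT in TouchDesigner receives these messages, while NDI-Python carries the video frames separately.

    # Sketch of sending detection data; address and fields are assumptions.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient('127.0.0.1', 9000)  # TouchDesigner's OSC In port

    def send_detection(label, confidence, x, y, w, h):
        # one message per detected object, with normalized box coordinates
        client.send_message('/yolo/detection', [label, confidence, x, y, w, h])

    send_detection('person', 0.92, 0.41, 0.33, 0.18, 0.52)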