Hey! In this tutorial, we'll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect. The project file is available on my Patreon: https://patreon.com/tblankensmith
Hey! In this tutorial, we'll go over how to use Stable Diffusion with a custom component I created to generate images in TouchDesigner. The component supports two forms of input, text prompts and image-to-image, so you can use any TOP in TouchDesigner as a starting point.
Using OpenAI’s DALL-E 2 API and TouchDesigner, I’ve created a MIDI sequencer of sorts that captures RGB data from AI-generated images in real time and uses it to trigger MIDI events in Ableton Live.
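The core idea can be sketched as a color-to-note mapping plus a MIDI send. This is a hypothetical illustration, not the project's actual mapping: red picks the pitch and green the velocity (my assumption), with output going to Ableton Live through a virtual MIDI port via the mido library.

```python
# Hypothetical RGB-to-MIDI sketch. The channel mapping (red -> pitch,
# green -> velocity) is an assumption, not the one used in the video.

def rgb_to_note(r, g, b):
    """Map 0-255 RGB channels to a MIDI (note, velocity) pair.
    Red spans roughly four octaves (C2..C6); green sets velocity."""
    note = 36 + int(r / 255 * 48)
    velocity = max(1, int(g / 255 * 127))  # keep at least velocity 1
    return note, velocity

def send_note(r, g, b, port_name=None):
    """Send one note_on to Ableton Live (e.g. via an IAC or loopMIDI port)."""
    import mido  # pip install mido python-rtmidi; imported here so rgb_to_note stays dependency-free
    note, velocity = rgb_to_note(r, g, b)
    with mido.open_output(port_name) as port:
        port.send(mido.Message('note_on', note=note, velocity=velocity))
```

In a real patch, the sampled pixel values would come from a TOP (for example via `topTo` CHOPs or a Script DAT) on each sequencer step.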
Running the TinyYOLO object detection model on the Oak-D camera, and sending the detection data and video frames to TouchDesigner using python-osc and ndi-python so they can be processed further in a creative audiovisual workflow.
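The OSC side of this can be sketched in a few lines with python-osc. This is a minimal illustration, not the video's code: the `/detection` address and port 7000 are assumptions, so match them to an OSC In DAT or CHOP in your TouchDesigner network.

```python
# Hypothetical sketch: forwarding one Oak-D detection to TouchDesigner over OSC.

def detection_to_osc_args(label, confidence, bbox):
    """Flatten one detection into OSC-friendly arguments.
    bbox is (xmin, ymin, xmax, ymax), normalized 0..1 as DepthAI reports it."""
    xmin, ymin, xmax, ymax = bbox
    return [label, float(confidence), float(xmin), float(ymin), float(xmax), float(ymax)]

def send_detection(label, confidence, bbox, host="127.0.0.1", port=7000):
    """Send a single detection to an OSC In DAT/CHOP listening on the given port."""
    from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc
    client = SimpleUDPClient(host, port)
    client.send_message("/detection", detection_to_osc_args(label, confidence, bbox))
```

In the actual pipeline, `send_detection` would be called inside the DepthAI loop for each detection the on-device TinyYOLO pass returns.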
How to combine example code from the ndi-python library and DepthAI examples to send depth and RGB video streams from the Oak-D camera to TouchDesigner.
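The combination boils down to converting a DepthAI frame into the BGRX layout ndi-python expects and publishing it as an NDI source. This is a sketch under my own assumptions (source name, 5 m depth range), not the video's code; the NDI calls follow ndi-python's send example.

```python
import numpy as np

def depth_to_bgrx(depth_frame, max_depth_mm=5000):
    """Map a uint16 depth frame (millimetres) to the 8-bit BGRX layout NDI carries.
    max_depth_mm=5000 is an assumed clipping range, not from the video."""
    d = np.clip(depth_frame.astype(np.float32) / max_depth_mm, 0.0, 1.0)
    gray = (d * 255).astype(np.uint8)
    return np.ascontiguousarray(np.dstack([gray, gray, gray, np.full_like(gray, 255)]))

def send_frame(bgrx, source_name="OAK-D Depth"):
    """Publish one frame as an NDI source that TouchDesigner's NDI In TOP can list."""
    import NDIlib as ndi  # pip install ndi-python; imported here so depth_to_bgrx needs only numpy
    ndi.initialize()
    settings = ndi.SendCreate()
    settings.ndi_name = source_name  # hypothetical source name
    sender = ndi.send_create(settings)
    frame = ndi.VideoFrameV2()
    frame.FourCC = ndi.FOURCC_VIDEO_TYPE_BGRX
    frame.data = bgrx
    ndi.send_send_video_v2(sender, frame)
    ndi.send_destroy(sender)
    ndi.destroy()
```

In the real pipeline the depth frame comes from a DepthAI `StereoDepth` output queue, and the sender stays alive in a loop rather than being created per frame as in this single-shot sketch.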
First steps in setting up and using an Oak-D AI-powered camera by Luxonis. This series will explore how to use various AI models running on the Oak-D hardware, then send video frames and data to TouchDesigner for further processing in a creative audiovisual workflow.
In this TouchDesigner tutorial, you're going to learn how to create a visual system that draws you in real-time. For more tutorials, experiments, and project files, head over to: https://linktr.ee/uisato
WE PRESENT A NEW ONLINE EVENT: PERSON IN TOUCH. Practicing media artists will present their work. Artificial neural networks have invaded our lives and occupied nearly every area with great success; the pace of the corresponding news and changes becomes overwhelming every now and then.