
DIY Stable Diffusion API ↔ TouchDesigner

Stable Diffusion is one of several popular text-to-image deep learning models released in the last few years, and is capable of producing highly detailed images from text prompts. Say you want to see what a painting of a cellphone by Picasso would’ve looked like: Stable Diffusion can generate a fairly believable approximation. Like other deep learning models, Stable Diffusion is accessible via an API and has its own Python package, which makes it a great candidate for use in TouchDesigner. In this video, Jack DiLaura walks you through creating a component that sends requests to the Stable Diffusion API and saves the resulting images to the project folder. The tutorial was created with beginners in mind, offering a great chance to practice working with APIs and building reusable components in TouchDesigner. You’ll take a look at API keys and documentation, build a custom component complete with an interface for sending requests to the API, and implement the underlying request functionality by adapting example Python code. Finally, you’ll look at the Engine COMP and how it can help avoid performance hiccups by offloading tasks like API calls to a separate process.
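
To give a rough sense of the kind of request the component wraps, here is a minimal Python sketch that posts a text prompt to a Stable Diffusion text-to-image endpoint and writes the returned image to disk. The endpoint path, engine name, payload fields, and base64 response format are assumptions based on Stability AI’s v1 REST API and are not taken from the tutorial; check the current API documentation, as these details change over time.

```python
# Minimal sketch: send a text-to-image request and save the result.
# Assumes Stability AI's v1 REST API; endpoint, engine name, payload
# fields, and response format may differ from the current docs.
import base64
import os

import requests

API_KEY = os.environ["STABILITY_API_KEY"]   # keep keys out of the .toe file
ENGINE_ID = "stable-diffusion-v1-6"         # hypothetical engine name
URL = f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image"

payload = {
    "text_prompts": [{"text": "a painting of a cellphone by Picasso"}],
    "width": 512,
    "height": 512,
    "samples": 1,
}
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Accept": "application/json",
}

response = requests.post(URL, headers=headers, json=payload, timeout=120)
response.raise_for_status()

# Each generated image arrives as a base64-encoded PNG in the JSON body.
for i, artifact in enumerate(response.json()["artifacts"]):
    with open(f"sd_output_{i}.png", "wb") as f:
        f.write(base64.b64decode(artifact["base64"]))
```

In the tutorial, logic along these lines lives inside the component (and can be run in an Engine COMP), so a slow API response doesn’t stall the main TouchDesigner process.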
