Bridging the realms of sound, live performance, and literature, I've created a system in TouchDesigner that transforms real-time audio data into unique AI-generated poetry, recited aloud as it is created.
Through the use of AI and data-sonification techniques, this artistic installation aims to give life to a digital bard that accompanies musical compositions in a (hopefully) beautiful and meaningful way. I’ve been working on this for the past several months, and let me tell you, there have been a significant number of interesting moments while using this system in my live performances. I hope the two examples shown in this post give you a glimpse of what I’m writing about.
In short, the process works as follows: Real-time Audio Source ➨ Frequency Analysis ➨ Poetic Element Mapping ➨ Poem Generation [ChatGPT API] ➨ Text-to-Speech [ElevenLabs API]
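To make the pipeline above concrete, here is a minimal Python sketch (outside TouchDesigner, using NumPy) of what the frequency-analysis and poetic-element-mapping stages could look like. The band thresholds, mood table, and function names are my illustrative assumptions, not the project's actual mapping; the two API calls at the end are only indicated in comments.

```python
import numpy as np

def analyze_bands(samples, rate=44100):
    """Split an audio buffer's FFT magnitudes into normalized low/mid/high band energies."""
    mags = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    bands = {
        "low": mags[freqs < 250].sum(),
        "mid": mags[(freqs >= 250) & (freqs < 4000)].sum(),
        "high": mags[freqs >= 4000].sum(),
    }
    total = sum(bands.values()) or 1.0
    return {name: energy / total for name, energy in bands.items()}

def map_to_poetic_elements(bands):
    """Map the dominant frequency band to an illustrative mood/imagery prompt."""
    moods = {
        "low": ("brooding", "deep earth and distant thunder"),
        "mid": ("warm", "voices carried across a room"),
        "high": ("luminous", "glass, birdsong, and morning light"),
    }
    dominant = max(bands, key=bands.get)
    mood, imagery = moods[dominant]
    return f"Write a short poem with a {mood} tone, evoking {imagery}."

# Example: one second of a 100 Hz sine wave is low-band dominant.
t = np.linspace(0, 1, 44100, endpoint=False)
prompt = map_to_poetic_elements(analyze_bands(np.sin(2 * np.pi * 100 * t)))

# In the full pipeline, this prompt would be sent to the ChatGPT API,
# and the returned poem passed to the ElevenLabs API for narration.
```

In the real system this analysis would run continuously on short audio buffers, so the generated prompts drift with the music rather than describing a single static snapshot.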
Filled with excitement, I can’t stop thinking about possible uses for a system with these characteristics: from a member of a live band, to a 24/7 online radio station, to a beautiful interactive installation.
PS: I’ve recently booked my plane tickets to Europe for this fall, and I’d love to bring Auratura with me. If you are interested in hosting this installation, or know of cool places/venues, please do get in touch.
For more experiments, tutorials, and project files, you can head over to: https://linktr.ee/uisato