For the past two years, I've been on an exhilarating journey with Delenda from San Antonio. Our collaboration melds her haunting vocals and raw storytelling with my AI-enhanced surreal visuals. From making music videos to designing live visuals, we're exploring new frontiers together.
Catalyst
We merged AI-generated animations with analog video techniques for Catalyst's music video.
Fine-tuned models produced surreal visuals, which we then recaptured on vintage TVs to create a uniquely tailored visual language.
Animated results using Runway's Gen-2 model.
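For reference, generating surreal stills from a fine-tuned Stable Diffusion 1.5 checkpoint with the diffusers library might look roughly like the sketch below; the checkpoint filename and prompt are hypothetical placeholders, not the actual project assets.

```python
# Minimal sketch: batch-generating stills from a fine-tuned Stable Diffusion 1.5
# checkpoint with the diffusers library. "catalyst_finetune.safetensors" and the
# prompt are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "catalyst_finetune.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "surreal analog dreamscape, haunting figure, film grain"
for i in range(8):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"still_{i:03d}.png")  # stills to feed into the animation step
```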
Luminaria
For Delenda's show at the Luminaria Contemporary Arts Festival, I transformed live footage of her through custom AI processing to create a surreal visual backdrop, then performed real-time visuals in TouchDesigner alongside Delenda and her band.
Two projectors driven by TouchDesigner created a dynamic visual environment, letting us run fluid, ever-changing lighting that interacted in real time with the performer's various outfits.
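TouchDesigner networks are built visually, so there is no single script behind this setup; as a loose illustration, an Execute DAT could drift the lighting each frame by animating a couple of Level TOPs. The operator names ('level1', 'level2') and parameter choices below are hypothetical, not the actual show network.

```python
# Rough TouchDesigner sketch (runs inside an Execute DAT, not as standalone Python).
# Operator names 'level1' and 'level2' are hypothetical stand-ins for the real network.
import math

def onFrameStart(frame):
    t = absTime.seconds  # TouchDesigner's global timeline clock
    # Slowly drift the lighting on each projector's branch so the projected
    # environment keeps shifting against the performer's outfits.
    op('level1').par.opacity = 0.6 + 0.4 * math.sin(t * 0.25)
    op('level2').par.opacity = 0.6 + 0.4 * math.sin(t * 0.25 + math.pi / 2)
    return
```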
I curated two distinct image sets: one from our live-action video shoot and another of visually striking reference images.
These were used to train a custom Stable Diffusion 1.5 checkpoint and LoRA models, enabling us to generate AI visuals that synthesized and amplified Delenda's visual identity.
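The training itself used off-the-shelf tooling; as a rough sketch of the data-prep side, many SD 1.5 and LoRA trainers expect square 512x512 images with sidecar .txt captions. The folder names and caption stub below are placeholders, not the real dataset.

```python
# Minimal sketch: preparing a fine-tuning image set in the common
# "image + sidecar caption .txt" layout used by many SD 1.5 / LoRA trainers.
# Folder names and the caption stub are hypothetical placeholders.
from pathlib import Path
from PIL import Image

SRC = Path("curated_images")   # curated stills and reference images
DST = Path("train_512")        # output folder for the trainer
DST.mkdir(exist_ok=True)

for i, src in enumerate(sorted(SRC.glob("*.jpg"))):
    img = Image.open(src).convert("RGB")
    # Center-crop to square, then resize to SD 1.5's native resolution.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((512, 512))
    img.save(DST / f"{i:04d}.png")
    # Sidecar caption; in practice each image gets a hand-edited description.
    (DST / f"{i:04d}.txt").write_text("delenda style, surreal portrait")
```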
I applied an AI style pass to the original footage using Stable WarpFusion and our custom-trained LoRA models, tracking movement across frames with generated optical flow and consistency maps. I also developed a Python script for frame glitching and leveraged TouchDesigner's feedback network to achieve the final aesthetic.
TouchDesigner live-visual setup and frame grabs.
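The glitching script itself is not reproduced here; as a rough stand-in for the idea, one common approach is to slide random horizontal bands and mis-register a color channel on each frame. File names and parameters below are placeholders.

```python
# Hypothetical sketch of a frame-glitching pass (not the production script):
# randomly offset horizontal bands and mis-register the red channel.
import numpy as np
from PIL import Image

def glitch_frame(path_in, path_out, bands=12, max_shift=40, seed=None):
    rng = np.random.default_rng(seed)
    frame = np.array(Image.open(path_in).convert("RGB"))
    h = frame.shape[0]
    for _ in range(bands):
        y0 = int(rng.integers(0, h - 8))
        y1 = min(h, y0 + int(rng.integers(4, h // bands + 4)))
        shift = int(rng.integers(-max_shift, max_shift))
        frame[y0:y1] = np.roll(frame[y0:y1], shift, axis=1)  # slide the band sideways
    frame[..., 0] = np.roll(frame[..., 0], 6, axis=1)        # mis-register red channel
    Image.fromarray(frame).save(path_out)

glitch_frame("frame_0001.png", "frame_0001_glitched.png", seed=7)
```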
Pathetic
This project began as a live-action music video, traditionally shot and edited. We then applied multiple passes of Stable WarpFusion AI to the footage, creating a surreal, dream-like version.
Project Breakdown
After filming, I created a fully color-corrected live-action edit of the music video.
This served as the foundation for our subsequent AI-enhanced visual treatments.
I then gathered and curated images to train a custom Stable Diffusion 1.5 checkpoint.
I processed sections of the live-action music video through Stable WarpFusion using my custom model, running both on Google Colab and locally on a high-performance gaming laptop.
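Stable WarpFusion runs as a notebook, so its internals are not shown here; to illustrate the optical-flow and consistency-map idea it builds on, a forward-backward flow check with OpenCV might look like the sketch below (file names and the error threshold are placeholders, and WarpFusion's own implementation differs).

```python
# Sketch of the optical-flow consistency idea behind warp-based stylization
# (illustrative only; not Stable WarpFusion's actual code).
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_0001.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_0002.png"), cv2.COLOR_BGR2GRAY)

# Dense forward and backward flow between consecutive frames.
fwd = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
bwd = cv2.calcOpticalFlowFarneback(curr, prev, None, 0.5, 3, 15, 3, 5, 1.2, 0)

h, w = prev.shape
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
# Sample the backward flow at each pixel's forward-warped position.
map_x = (xs + fwd[..., 0]).astype(np.float32)
map_y = (ys + fwd[..., 1]).astype(np.float32)
bwd_warped = cv2.remap(bwd, map_x, map_y, cv2.INTER_LINEAR)

# Pixels where forward and warped-backward flow don't cancel out are occluded
# or unreliable; mask them so the stylized frame can be refreshed there.
err = np.linalg.norm(fwd + bwd_warped, axis=-1)
consistency = (err < 1.5).astype(np.uint8) * 255
cv2.imwrite("consistency_0001.png", consistency)
```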
The final step involved compositing the original live footage over the AI-generated imagery in After Effects, followed by a final pass in DaVinci Resolve.