For the past two years, I've been on an exhilarating journey with San Antonio artist Delenda. Our collaboration melds her haunting vocals and raw storytelling with my AI-enhanced surreal visuals. From making music videos to designing live visuals, we're exploring new frontiers together.
Catalyst
2024
We merged AI-generated animations with analog video techniques for Catalyst's music video.
Fine-tuned models produced surreal visuals, which we recaptured on vintage TVs for a uniquely tailored visual language.
Project Breakdown
Engineered a multi-layered AI workflow combining four custom datasets:
Runway generated close-up moth imagery.
Midjourney crafted mood and tone elements.
Curated Delenda selfies, portraits, and music-video frame pulls.
A hybrid dataset merged the moth imagery with the creative vibe photos.
LoRA fine-tuned Stable Diffusion 1.5 models on these datasets, enabling a custom animation aesthetic.
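To give a sense of how four curated image sets could be assembled into one LoRA training set, here is a minimal sketch. The folder names, captions, and file layout are hypothetical stand-ins, not the actual project data; the image-plus-text-caption pairing shown is the format common SD 1.5 LoRA trainers expect.

```python
from pathlib import Path
import shutil

# Hypothetical source folders and caption text for the four curated
# datasets; the real project's paths and trigger words are not public.
SOURCES = {
    "runway_moths": "moth macro, wings, antennae",
    "midjourney_mood": "surreal mood, grainy texture",
    "delenda_refs": "delenda, portrait",
    "hybrid_vibe": "delenda, moth, surreal",
}
DEST = Path("lora_dataset")
DEST.mkdir(exist_ok=True)

for folder, caption in SOURCES.items():
    for i, img in enumerate(sorted(Path(folder).glob("*.png"))):
        stem = f"{folder}_{i:04d}"
        shutil.copy(img, DEST / f"{stem}.png")
        # Pair each image with a plain-text caption file, the layout
        # most SD 1.5 LoRA trainers expect as input.
        (DEST / f"{stem}.txt").write_text(caption)

print(f"Assembled {len(list(DEST.glob('*.png')))} training images")
```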
Processed all footage through a custom circuit-bent glitch board and vintage TVs, then re-captured it with a Blackmagic Pocket 6K for high-resolution analog artifacts.
Final frames were extracted during color correction in DaVinci Resolve.
AnimateDiff: first tests.
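Early AnimateDiff tests like these can be reproduced with roughly the following pipeline. This is a minimal sketch using the diffusers library; the LoRA path and prompt are stand-ins, not the project's actual assets.

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

# Public SD 1.5 motion module; the LoRA path and prompt below are
# hypothetical placeholders for the project's custom assets.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
pipe.load_lora_weights("path/to/custom_moth_lora")  # hypothetical LoRA

frames = pipe(
    prompt="surreal moth wings unfolding, analog grain",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
).frames[0]
export_to_gif(frames, "animatediff_test.gif")
```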
Luminaria
2023
For Delenda’s Luminaria Contemporary Arts Festival show, I transformed live footage of Delenda through custom AI processing into a surreal visual backdrop, then ran real-time visuals in TouchDesigner on stage alongside Delenda and her band.
Project Breakdown
I built a dynamic visual environment with two projectors driven by TouchDesigner, letting us run fluid, ever-changing lighting conditions that interacted with Delenda's various outfits in real time.
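As a rough illustration of the kind of per-frame control this involves, here is a minimal TouchDesigner Execute DAT callback. The operator names are hypothetical placeholders, not the actual network; it only shows the general idea of drifting two projector feeds out of phase.

```python
# TouchDesigner Execute DAT callback (a sketch; 'level_projA' and
# 'level_projB' are hypothetical Level TOPs, one per projector output).
import math

def onFrameStart(frame):
    t = absTime.seconds
    # Drift the two feeds' brightness out of phase so the stage
    # lighting feels fluid rather than synchronized.
    op('level_projA').par.opacity = 0.6 + 0.4 * math.sin(t * 0.5)
    op('level_projB').par.opacity = 0.6 + 0.4 * math.sin(t * 0.5 + math.pi)
    return
```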
I curated two distinct image sets: one from our live-action video shoot and another of visually striking reference images.
These were used to train a custom Stable Diffusion 1.5 checkpoint and LoRA models, enabling us to generate AI visuals that synthesized and amplified Delenda's visual identity.
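Generating stills from a custom checkpoint stacked with a LoRA could look like the following sketch with diffusers; the file paths and prompt are hypothetical stand-ins for the trained models.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a custom-trained SD 1.5 checkpoint and stack a LoRA on top.
# Both file paths and the prompt are hypothetical placeholders.
pipe = StableDiffusionPipeline.from_single_file(
    "checkpoints/delenda_sd15.safetensors",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("loras/delenda_identity_lora")

image = pipe(
    "delenda, stage lighting, surreal backdrop",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("backdrop_frame.png")
```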
I applied an AI style pass to the original footage using Stable WarpFusion and our custom-trained LoRA models, tracking movement with generated optical flow and consistency maps. I developed a Python script for frame glitching and leveraged TouchDesigner's feedback network to achieve the final aesthetic.
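The glitching script itself isn't published, but a minimal sketch of the general technique (channel offsets plus displaced row bands with NumPy) might look like this; the offsets and band sizes are arbitrary choices, not the project's values.

```python
import numpy as np
from PIL import Image

def glitch_frame(path, out_path, seed=0):
    """Shift color channels and displace random row bands; a sketch of
    the general technique, with arbitrary offsets and band sizes."""
    rng = np.random.default_rng(seed)
    px = np.array(Image.open(path).convert("RGB"))

    # Offset the red and blue channels horizontally for chromatic tearing.
    px[..., 0] = np.roll(px[..., 0], rng.integers(4, 16), axis=1)
    px[..., 2] = np.roll(px[..., 2], -rng.integers(4, 16), axis=1)

    # Displace a few horizontal bands, like dropped scanlines.
    h = px.shape[0]
    for _ in range(rng.integers(3, 8)):
        y = rng.integers(0, h - 12)
        band = slice(y, y + rng.integers(4, 12))
        px[band] = np.roll(px[band], rng.integers(-60, 60), axis=1)

    Image.fromarray(px).save(out_path)

glitch_frame("frame_0001.png", "frame_0001_glitched.png")
```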
TouchDesigner live-visual setup and frame grabs.
Pathetic
2023
This project began as a live-action music video, traditionally shot and edited. We then applied multiple passes of Stable WarpFusion AI to the footage, creating a surreal, dream-like version.
Project Breakdown
I directed a two-day shoot at AV Expression in San Antonio, capturing Delenda, her friends, and fans.
After filming, I created a fully color-corrected live-action edit of the music video.
This served as the foundation for our subsequent AI-enhanced visual treatments.
I then gathered and curated images to train a custom Stable Diffusion 1.5 checkpoint.
I processed sections of the live-action music video through Stable WarpFusion using my custom model, running both on Google Colab and locally on a high-performance gaming laptop.
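Carving the edit into sections for that kind of per-section processing could be done with a small ffmpeg wrapper like the sketch below, since WarpFusion-style workflows typically consume numbered frame sequences. The timecodes, section names, and source filename are hypothetical, not the actual edit's.

```python
import subprocess
from pathlib import Path

# Hypothetical in/out points for the sections that got the AI pass;
# the actual edit's timecodes are not public.
SECTIONS = [
    ("verse1", "00:00:12", "00:00:31"),
    ("bridge", "00:01:48", "00:02:10"),
]

for name, start, end in SECTIONS:
    frames_dir = Path(f"warp_input/{name}")
    frames_dir.mkdir(parents=True, exist_ok=True)
    # Trim the section and dump it as a numbered PNG sequence,
    # the input format frame-based AI video tools typically expect.
    subprocess.run([
        "ffmpeg", "-ss", start, "-to", end, "-i", "pathetic_master.mp4",
        str(frames_dir / "%05d.png"),
    ], check=True)
```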
The final step involved compositing the original live footage over the AI-generated imagery in After Effects, followed by a final pass in DaVinci Resolve.