Creative Coding
This is a collection of my explorations in machine learning, computer vision, real-time graphics, and live-coding performances. 



Temporal Weaving with TidalCycles

2020-2024

TidalCycles code on the left generates MIDI notes, visualized in real-time with TouchDesigner on the right.
TOPLAP Live-Stream: 

Real-time coding with TidalCycles and TouchDesigner.

Part of TOPLAP’s mission to make live coding inclusive and accessible, this session highlights the creative possibilities of programming music and visuals in the moment.  
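Under the hood the pipeline is simple: TidalCycles emits MIDI note events, and the visual patch listens for them and maps pitch and velocity onto visual parameters. Below is a minimal Python sketch of that listening side using the mido library; the port name and the hue/brightness mapping are placeholder assumptions, not the actual TouchDesigner network used in the stream.

import mido

PORT_NAME = "TidalCycles"  # hypothetical virtual MIDI port receiving Tidal's notes

with mido.open_input(PORT_NAME) as port:
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            # Map pitch class to a hue and velocity to brightness for a visualizer.
            hue = (msg.note % 12) / 12.0
            brightness = msg.velocity / 127.0
            print(f"note={msg.note} hue={hue:.2f} brightness={brightness:.2f}")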




Using Thresho as my digital tape recorder, I capture raw, time-stamped takes of live-coded music.


Hydra
Live-Coding Visuals

2022-2024

Hydra is a browser-based visual synthesizer for live coding visuals through simple functions and feedback loops. 
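Hydra itself is live-coded in JavaScript in the browser, so the following is only a rough Python analogue of its core idea: an oscillator source blended with a transformed copy of the previous output, so the image keeps feeding back into itself. Resolution, zoom factor, and blend weights are arbitrary choices for the sketch.

import numpy as np
import cv2

H, W = 360, 640
prev = np.zeros((H, W, 3), dtype=np.float32)  # feedback buffer, like Hydra's o0
x = np.linspace(0, 4 * np.pi, W, dtype=np.float32)
t = 0.0
while True:
    # Simple oscillator source, comparable to Hydra's osc().
    row = 0.5 + 0.5 * np.sin(x + t)
    osc = np.tile(row, (H, 1))
    frame = np.dstack([osc, np.roll(osc, 40, axis=1), osc[:, ::-1]])
    # Feedback: blend the new source with a slightly zoomed copy of the last frame.
    zoomed = cv2.resize(prev, None, fx=1.01, fy=1.01)[:H, :W]
    prev = (0.6 * frame + 0.4 * zoomed).astype(np.float32)
    cv2.imshow("feedback", prev)
    if cv2.waitKey(16) & 0xFF == 27:  # Esc to quit
        break
    t += 0.1
cv2.destroyAllWindows()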





Screen-captured live coding in Hydra






Runway's Creative Partners Program

2024
As a selected member of Runway's Creative Partners Program, I'm able to explore AI-generated art more deeply. The CPP Discord server keeps me in constant conversation with amazing artists from around the world, sharing the latest cutting-edge AI tools and models that Runway continues to update at a rapid pace.

The maiden voyage of Runway's Gen-3 Alpha model: a fusion of surreal AI-generated imagery with a custom score and sound-design-forward audio.

Represented San Antonio in Runway's worldwide series of community-led meetups, hosting an interactive session at Texas Public Radio's theater. Presented on AI's evolution in art, demonstrated Runway's applications in personal projects, and facilitated live AI art creation with audience participation.



Gen-3 Alpha FLUX.1-Dev (Images)

Runway Frames





Third Echo

2023

An exploration of the evolution of AI-generated art, using Stable Diffusion (a text-to-image model) and AnimateDiff (a motion module that animates its output). It builds on my earlier AI art experiments in StyleGAN.
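For context, a present-day version of that pairing can be sketched with the diffusers library's AnimateDiff integration: a Stable Diffusion 1.5 checkpoint plus a motion adapter renders a short frame sequence from a prompt. The model IDs, prompt, and settings here are illustrative placeholders, not the project's actual configuration.

import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# The motion adapter adds temporal layers on top of a standard SD 1.5 checkpoint.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

result = pipe(
    prompt="surreal bioluminescent flowers dissolving into static, film grain",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
)
export_to_gif(result.frames[0], "third_echo_test.gif")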



Project Breakdown
Looking Glass input images


Training frames came from a notebook called Looking Glass (original notebook by Sber AI, @ai_curio). It implements an image-to-image generation technique that fine-tunes ruDALL-E; using this method, I was able to create new images that closely resembled the given input images.


Looking Glass 2022
The input video was made in StyleGAN, a generative adversarial network (GAN) that creates highly realistic images with a style-based generator architecture, allowing fine-grained control over various aspects of the generated images.
For these datasets I used real scans of floral lumen prints and generative studio flower photography created in RunwayML Gen-2.




Scyphozoa


2021

Created using early machine-learning vision tools: StyleGAN and Pix2PixHD next-frame prediction, an image-to-image translation model that generates synthesized images.
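The next-frame-prediction idea is worth spelling out: the network is trained to map frame t to frame t+1, then run autoregressively so each generated frame becomes the next input, letting the model hallucinate new motion on its own. The sketch below uses a tiny placeholder generator just to show the rollout loop; it is not the actual Pix2PixHD architecture or the trained jellyfish model.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in image-to-image network (Pix2PixHD would go here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

def rollout(generator, seed_frame, steps=30):
    """Feed each generated frame back in to hallucinate a sequence."""
    frames = [seed_frame]
    frame = seed_frame
    with torch.no_grad():
        for _ in range(steps):
            frame = generator(frame)
            frames.append(frame)
    return frames

generator = TinyGenerator().eval()
seed = torch.rand(1, 3, 256, 256) * 2 - 1  # a real run would start from a training frame
sequence = rollout(generator, seed)
print(len(sequence), sequence[-1].shape)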

Project Breakdown
Input images

This run of the machine-learning script was trained on jellyfish imagery; training the model took 10 hours.
Output images

Epoch 200
Color-corrected output image, upscaled in Topaz Gigapixel and printed on Hahnemühle Photo Rag 308 gsm at Hare & Hound Press by master printer Gary Nichols. This project represents a full-circle moment: I discovered that my father, artist Carlos Chavez, collaborated with Gary Nichols and Hare & Hound Press 30 years ago.
Features original music live-coded using TidalCycles, a programming language for music creation.

21.04.2022-20.44.45.mp3
Delenda

For the past two years, I've been on an exhilarating journey with Delenda, an artist from San Antonio. Our collaboration melds her haunting vocals and raw storytelling with my AI-enhanced surreal visuals. From making music videos to designing live visuals, we're exploring new frontiers together.


Catalyst

2024

We merged AI-generated animations with analog video techniques for the Catalyst music video.

Fine-tuned models produced surreal visuals, which were then re-captured from vintage TVs for a uniquely tailored visual language.



Project Breakdown
Engineered a multi-layered AI workflow combining four custom datasets:
Runway-generated close-up moth imagery.
Midjourney-crafted mood and tone elements.
Curated Delenda selfies, portraits, and music-video frame pulls.
A hybrid set merging the moth imagery with the creative vibe photos.
These datasets drove LoRA fine-tuned Stable Diffusion 1.5 models, enabling a custom animation aesthetic (see the sketch below).
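As a rough illustration of that last step, here is a minimal diffusers sketch of loading a LoRA on top of Stable Diffusion 1.5. The LoRA path, weight file name, prompt, and strength are hypothetical placeholders, not the project's actual fine-tuned moth/Delenda models.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach fine-tuned LoRA weights (hypothetical local path and file name).
pipe.load_lora_weights("./loras", weight_name="moth_style.safetensors")

image = pipe(
    prompt="close-up moth wings dissolving into analog video static, portrait",
    num_inference_steps=30,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("catalyst_frame_test.png")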

Custom objects and characters generated via Runway's Text to Image. 

Animated results using Runway's Gen-2 model.

 
Processed all footage through a custom circuit-bent glitch board and vintage TVs, then re-captured it with a Blackmagic Pocket 6K for high-resolution analog artifacts.









Final frames extracted during color correction in DaVinci Resolve.
AnimateDiff - 1st Tests



Luminaria 


2023

For Delenda’s Luminaria Contemporary Arts Festival show, I transformed live footage of Delenda through custom AI processing, creating a surreal visual backdrop. I ran real-time visuals using TouchDesigner during the performance, performing alongside Delenda and her band. 



Project Breakdown
A dynamic visual environment was created using two projectors running TouchDesigner. This setup allowed us to create fluid, ever-changing lighting conditions that interacted with the performer's various outfits in real time.
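A hedged sketch of what that kind of per-frame drive can look like in a TouchDesigner Execute DAT follows; the operator names (transform_left, transform_right, level_master) are hypothetical, not the actual network used at the show.

import math

def onFrameStart(frame):
    t = absTime.seconds
    # Slowly counter-rotate the two projector feeds so the light never sits still.
    op('transform_left').par.rotate = t * 3.0
    op('transform_right').par.rotate = -t * 3.0
    # Breathe overall brightness with a slow sine wave.
    op('level_master').par.opacity = 0.6 + 0.4 * math.sin(t * 0.5)
    return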





I curated two distinct image sets: one from our live-action video shoot and another of visually striking reference images. 

These were used to train a custom Stable Diffusion 1.5 checkpoint and LoRA models, enabling us to generate AI visuals that synthesize and amplify Delenda's visual identity.


I applied an AI style pass to the original footage using Stable WarpFusion and our custom-trained LoRA models. This process tracked movement with generated optical-flow and consistency maps. I developed a Python script for frame glitching (sketched below) and leveraged TouchDesigner's feedback network to achieve the final aesthetic.
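The glitch pass below is only a sketch in the spirit of that script, not the script itself: it applies random RGB channel shifts and displaced scanline bands to every frame in a folder, with placeholder input and output paths.

import random
from pathlib import Path

import numpy as np
from PIL import Image

def glitch_frame(img, blocks=12):
    out = img.copy()
    h, w, _ = out.shape
    # Shift each color channel horizontally by a few pixels.
    for c in range(3):
        out[:, :, c] = np.roll(out[:, :, c], random.randint(-8, 8), axis=1)
    # Displace random horizontal bands to fake dropped scanlines.
    for _ in range(blocks):
        y = random.randint(0, h - 12)
        band = slice(y, y + random.randint(2, 12))
        out[band] = np.roll(out[band], random.randint(-60, 60), axis=1)
    return out

in_dir, out_dir = Path("frames_in"), Path("frames_out")  # placeholder folders
out_dir.mkdir(exist_ok=True)
for path in sorted(in_dir.glob("*.png")):
    frame = np.array(Image.open(path).convert("RGB"))
    Image.fromarray(glitch_frame(frame)).save(out_dir / path.name)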
TouchDesigner live-visual setup and frame grabs.   


Pathetic 

2023

This project began as a live-action music video, traditionally shot and edited. We then applied multiple passes of Stable WarpFusion AI to the footage, creating a surreal, dream-like version.


Project Breakdown
I directed a two-day shoot at AV Expression in San Antonio, capturing Delenda, her friends, and fans. 

After filming, I created a fully color-corrected live-action edit of the music video. 

This served as the foundation for our subsequent AI-enhanced visual treatments.
I then gathered and curated images to train a custom Stable Diffusion 1.5 checkpoint.
I processed sections of the live-action music video through Stable WarpFusion, using my custom model. This was done both on Google Colab and locally on a high-performance gaming laptop.
The final step involved compositing the original live footage over the AI-generated imagery in After Effects, followed by a final pass in DaVinci Resolve.
A.M. Architect
Audio + Visual Art Project by Diego Chavez & Daniel Stanush

For the past decade, I've created multimedia experiences with A.M. Architect alongside Daniel Stanush. Our collaboration weaves Stanush's melodic sensibility with my soundscape manipulation, using TouchDesigner and machine learning to transform performances into interactive installations. Through beats, generative visuals, and audience interaction, we're redefining the boundaries between electronic music and digital art.


Hydra

2024

Hydra offers a glimpse into a place where technology has evolved to offer near-limitless creation, and a group of elusive tech-savants who turn their abilities inward to create a new vision of themselves.

Project Breakdown
Hydra's creation began with Midjourney-generated storyboards, producing over 1,500 still images. These were transformed into video using Runway Gen-2 and Stable Diffusion 1.5. A TouchDesigner network then synchronized video transitions with the music. Finally, the output was processed through a circuit-bent BPMC Analog video mixer for added glitch effects.
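The music sync itself lived inside the TouchDesigner network; as a hedged offline companion, one way to precompute transition cues is to pull beat times from the track with librosa and hand them to the video-switching logic. The file name and the one-cut-per-bar choice here are placeholder assumptions.

import librosa

y, sr = librosa.load("hydra_mix.wav")  # placeholder path to the track
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Keep every fourth beat as a cut point, i.e. roughly one transition per bar in 4/4.
cut_points = beat_times[::4]
print(f"tempo ~{float(tempo):.1f} BPM, {len(cut_points)} transition cues")
for t in cut_points[:8]:
    print(f"cut at {t:.2f}s")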


Cynatica Conductor II

FL!GHT Gallery, San Antonio, TX — 2022

Cynatica Conductor invites the viewer to interact with light and sound, playing the conductor overseeing an ensemble of sonic texture and fractured melody. 

As the conductor (the viewer) interacts with the art, the light and sound grow in complexity and vibrancy, becoming an immersive collaboration between the artwork and the audience.



Installation Review: Glasstire Art Magazine:
A Technological Dream: "Cynatica Conductor"

Cynatica Conductor I

Sacramento, CA — 2022

An immersive art experience featuring site-specific installations by thirty-five artists in the former PAC:SAT satellite news headquarters. It was a limited run before the building was demolished to make way for new development.

Prototype for audio-reactive visuals built in TouchDesigner and Ableton Live.

Avicenna

2017

Avicenna is featured on the Territories LP compilation album from 79Ancestors, released in May 2017. The album includes tracks from other electronic artists such as Telefon Tel Aviv, Deru, and Shigeto. 

Pre-Production - Treatment Frames

Color Field

2017

Color Field is the companion film to A.M. Architect's 2017 release of the same name on 79 Ancestors.

BTS


SATX