Creative Coding
This is a collection of my explorations in machine learning, computer vision, real-time graphics, and live-coding performances. 


Temporal Weaving with TidalCycles

2020-2024
In traditional music-making, I thought of a song as a progression from start to finish, but live coding opened my eyes to new dimensions of time. Each pattern and layer I code is like a thread, looping and interlocking in dynamic, evolving sequences. 



This process allows me to build sonic landscapes where rhythms and harmonies emerge, intersect, and dissolve.  

TidalCycles has redefined how I express myself musically, transforming time itself into a creative medium that I can sculpt, expand, and reshape in the moment.
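
A minimal sketch of what this layering looks like in TidalCycles code (the sample and synth names here are illustrative, not from a specific performance):

    -- three independent threads, each looping at its own rate
    d1 $ sound "bd*2 [~ sn] bd [sn cp]"   -- a drum pattern cycling once per bar
    d2 $ slow 2 $ note "c e g a"          -- a slower melodic thread, one pass every two cycles
       # sound "superpiano"
       # gain 0.8
    d3 $ every 4 rev $ sound "hh*8"       -- hats that play reversed every fourth cycle
       # pan sine                         -- panning swept continuously by a sine wave

Because each connection (d1, d2, d3) can be re-evaluated on its own while the others keep running, the threads can be rewoven mid-performance.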


TidalCycles code on the left generates MIDI notes, visualized in real-time with TouchDesigner on the right.

This setup allows the audience to see the real-time code while enjoying reactive, river-like visuals.

SuperCollider handles the audio synthesis, with Ableton Live mixing in an ambient sound layer.
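
A simplified sketch of the MIDI side of this setup, assuming a SuperDirt MIDI target named "midi" has been defined in SuperCollider and pointed at a virtual port that TouchDesigner listens to (the target name and channel are illustrative):

    -- send a pattern as MIDI notes instead of triggering samples,
    -- so the visuals and the music are driven by the same events
    d1 $ note "0 3 7 12 7 3"
       # s "midi"       -- route to the SuperDirt MIDI target
       # midichan 0     -- channel to match on the TouchDesigner side

Keeping audio and visuals on the same note stream is what lets the river-like visuals react directly to what is being played.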


TOPLAP
Live-Stream: 

Real-time coding with TidalCycles and TouchDesigner.

As part of TOPLAP’s mission to make live coding inclusive and accessible, this session highlights the creative possibilities of programming music and visuals in the moment.
Hydra is a browser-based visual synthesizer for live coding. Through simple functions and feedback loops, I create fluid animations and evolving patterns, transforming code into real-time visuals.






Using Thresho as my digital tape recorder, I captured my raw journey into live-coded music. Stripped of familiar DAW workflows, this process challenged me to rediscover music-making from first principles—each recording embracing both intentional paths and unexpected sonic detours as equal parts of the composition.



Hydra
Live-Coding Visuals

2022-2024
Hydra is a browser-based visual synthesizer for live coding visuals through simple functions and feedback loops. 

I layer it with TouchDesigner during music performances to create dynamic visual responses in real-time.

Free and open-source, it's accessible to both beginners and experts. 




Screen captures of live-coded fluid animations inspired by natural phenomena and mathematics.









Runway's Creative Partners Program

2024
As a selected member of Runway's Creative Partners Program, I've gained exclusive access to cutting-edge AI tools and models. 

This opportunity opens doors to collaborations and dedicated events, fueling my artistic journey as I dive deeper into AI-generated art.

This maiden voyage with Runway's text-to-video Gen-3 Alpha model showcases a fusion of surreal AI-generated imagery and a custom score with sound-design-forward audio.

Represented San Antonio in Runway's worldwide series of community-led meetups, hosting an interactive session at Texas Public Radio's theater. I presented on AI's evolution in art, demonstrated Runway's applications in my personal projects, and facilitated live AI art creation with audience participation.


Black Forest Labs' Flux Dev and Runway Gen-3 Alpha tests. Flux Dev is a 12-billion-parameter model I use in my ComfyUI workflow for local inference.

Third Echo

2023
An exploration of the evolution of AI-generated art, using Stable Diffusion (a text-to-image model) and AnimateDiff (an AI animation tool). It builds on my earlier AI art experiments with StyleGAN.

Project Breakdown

Looking Glass input images


Training frames came from a notebook called Looking Glass (original notebook by Sber AI / @ai_curio). The notebook implements an image-to-image generation technique that fine-tunes ruDALL-E; using this method, I was able to create new images that closely resembled the given input images.


Looking Glass 2022

The input video was made with StyleGAN, a generative adversarial network (GAN) that creates highly realistic images using a style-based generator architecture, allowing fine-grained control over various aspects of the generated images.
For these datasets I used real scans of floral lumen prints and generative studio flower photography created in RunwayML Gen-2.

Final frames 


Scyphozoa


2021
Created using early machine-learning vision tools: StyleGAN and Pix2PixHD next-frame prediction, an image-to-image translation model that generates synthesized images.
Project Breakdown

Input images

This run of the machine-learning script was trained on jellyfish imagery; training the model took 10 hours.
Output images

Epoch 200
Color-corrected output image, upscaled with Topaz Gigapixel and printed on Hahnemühle Photo Rag 308 gsm at Hare & Hound Press by master printer Gary Nichols. This project represents a full-circle moment, as I discovered that my father, artist Carlos Chavez, collaborated with Gary Nichols and Hare & Hound Press 30 years ago.
Features original music live-coded using TidalCycles, a programming language for music creation.

21.04.2022-20.44.45.mp3

SATX