Creative Coding
This is a collection of my explorations in machine learning, computer vision, real-time graphics, and live-coding performances. 

Temporal Weaving with TidalCycles

2020-2024

TidalCycles code on the left generates MIDI notes, visualized in real-time with TouchDesigner on the right.
TOPLAP Live-Stream: 

Real-time coding with TidalCycles and TouchDesigner. 

As part of TOPLAP's mission to make live coding inclusive and accessible, this session highlights the creative possibilities of programming music and visuals in the moment. 
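For readers unfamiliar with TidalCycles, its core idea is that a pattern is a function from cyclic time to events. The toy Python sketch below is illustrative only — real Tidal is a Haskell DSL, and none of these function names are Tidal's actual API — but it mimics how the mini-notation "bd sn" spreads events over a cycle and how a transformation like `fast` reshapes them:

```python
# Toy sketch of TidalCycles' core idea (illustrative only, not Tidal's API):
# a pattern maps one cycle of time to a list of (onset, duration, value) events.

def pattern(words):
    """Mimic mini-notation like "bd sn": n events spread evenly over one cycle."""
    names = words.split()
    n = len(names)
    return [(i / n, 1 / n, name) for i, name in enumerate(names)]

def fast(factor, events):
    """Squeeze a cycle's events into 1/factor of the cycle and repeat, like Tidal's `fast`."""
    out = []
    for rep in range(factor):
        for onset, dur, val in events:
            out.append(((rep + onset) / factor, dur / factor, val))
    return out

drums = fast(2, pattern("bd sn"))
# -> [(0.0, 0.25, 'bd'), (0.25, 0.25, 'sn'), (0.5, 0.25, 'bd'), (0.75, 0.25, 'sn')]
```

In real Tidal the equivalent would be closer to `d1 $ fast 2 $ sound "bd sn"`, with the scheduler turning those timed events into MIDI or synth triggers each cycle.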

Using Thresho as my digital tape recorder, I capture raw, time-stamped takes of live-coded music. 


Hydra
Live-Coding Visuals

2022-2024

Hydra is a browser-based visual synthesizer for live coding visuals through simple functions and feedback loops. 
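The "simple functions and feedback loops" idea can be sketched outside the browser. In Hydra itself a patch might read something like `osc(10).modulate(o0, 0.5).out(o0)`; the toy Python analogy below (not Hydra's actual JavaScript API) renders a 1-D "frame" from an oscillator and blends the previous frame back in as feedback:

```python
import math

# Toy analogy of Hydra's oscillator + feedback chain (illustrative, not Hydra's API).

WIDTH = 8

def osc(freq, width=WIDTH):
    """One 'frame' of a sine oscillator, values in [0, 1]."""
    return [0.5 + 0.5 * math.sin(freq * x / width) for x in range(width)]

def feedback(frame, previous, amount):
    """Blend the new frame with the previous one -- Hydra-style feedback."""
    return [(1 - amount) * f + amount * p for f, p in zip(frame, previous)]

frame = [0.0] * WIDTH
for _ in range(3):                 # three 'render' passes
    frame = feedback(osc(10), frame, 0.5)
```

Because each pass mixes in the previous output, the image never fully settles — which is exactly what makes feedback loops so fertile for live visuals.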

Screen-captured live coding in Hydra

Runway's Creative Partner Program

2024
As a selected member of Runway's Creative Partners Program, I've been able to explore AI-generated art more deeply. The CPP Discord server connects me with amazing artists worldwide who have been part of my journey, and together we keep up with the cutting-edge tools and models that Runway continues to release at a massive scale. 

The maiden voyage of Runway's Gen-3 Alpha model showcases a fusion of surreal AI-generated imagery with a custom score and sound-design-forward audio. 

Represented San Antonio in Runway's worldwide series of community-led meetups, hosting an interactive session at Texas Public Radio's theater. I presented on AI's evolution in art, demonstrated Runway's applications in my personal projects, and facilitated live AI art creation with audience participation. 

Black Forest Labs' Flux Dev and Runway Gen-3 Alpha tests, color graded in DaVinci Resolve.



Third Echo

2023

An exploration of the evolution of AI-generated art, using Stable Diffusion (a text-to-image model) and AnimateDiff (an AI animation tool). It builds on my earlier AI art experiments with StyleGAN. 

Project Breakdown

Looking Glass input images


The training frames came from Looking Glass, a notebook originally by Sber AI (@ai_curio) that implements an image-to-image generation technique by fine-tuning ruDALL-E. Using this method, I was able to create new images that closely resembled the given input images.


Looking Glass 2022

The input video was made in StyleGAN, a generative adversarial network (GAN) that creates highly realistic images using a style-based generator architecture, allowing fine-grained control over various aspects of the generated images. 
For these datasets I used real scans of floral lumen prints and generative studio flower photography created in Runway Gen-2. 

Scyphozoa


2021

Created using early machine-learning vision tools: StyleGAN and Pix2PixHD next-frame prediction, an image-to-image translation model that generates synthesized images.
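Next-frame prediction works by training an image-to-image model to map frame t to frame t+1; at generation time the model is fed its own output, hallucinating an arbitrarily long sequence from a single seed frame. A minimal sketch of that generation loop, with a trivial stand-in function in place of a trained Pix2PixHD generator (all names here are illustrative):

```python
# Toy sketch of the next-frame-prediction generation loop (illustrative only).

def stand_in_model(frame):
    """Stand-in for a trained generator: here, a trivial per-pixel transform."""
    return [min(1.0, 0.9 * v + 0.05) for v in frame]

def generate(seed_frame, model, n_frames):
    """Roll the model forward by feeding it its own output each step."""
    frames = [seed_frame]
    for _ in range(n_frames):
        frames.append(model(frames[-1]))   # model eats its own previous frame
    return frames

video = generate([0.0, 0.5, 1.0], stand_in_model, 4)   # seed + 4 generated frames
```

With a real model the same loop drifts and mutates over time, which is what gives these pieces their dreamlike, slowly morphing quality.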

Project Breakdown

Input images

This run of the machine-learning script was trained on images of jellyfish; training the model took 10 hours.
Output images

Epoch 200
Color-corrected output image, upscaled in Topaz Gigapixel and printed on Hahnemühle Photo Rag 308 gsm at Hare & Hound Press by master printer Gary Nichols. This project represents a full-circle moment: I discovered that my father, artist Carlos Chavez, collaborated with Gary Nichols and Hare & Hound Press 30 years ago.
Features original music live-coded using TidalCycles, a programming language for music creation.

21.04.2022-20.44.45.mp3

SATX