Creative Coding

This is a collection of my explorations in machine learning, computer vision, real-time graphics, and live-coding performances.

Temporal Weaving with TidalCycles

2020-2024

TidalCycles code on the left generates MIDI notes, visualized in real-time with TouchDesigner on the right.
TOPLAP Live-Stream: 

Real-time coding with TidalCycles and TouchDesigner.

Part of TOPLAP’s mission to make live coding inclusive and accessible, this session highlights the creative possibilities of programming music and visuals in the moment.  


Using Thresho as my digital tape recorder, I capture raw takes of live-coded music, each with a time stamp.


Hydra
Live-Coding Visuals

2022-2024

Hydra is a browser-based visual synthesizer that enables live coding of visuals through simple functions and feedback loops. 





Screen capture of live coding in Hydra



Art In the Open:
Flow State Workshop Series

Workshop Doc

Slides
Code
Notes
Hydra Website 

https://hydra.ojack.xyz/



Run all code - Runs all code on the page (ctrl + shift + enter).

Clear all - Resets the environment and clears text from the editor.

Load library or extension - Loads community extensions for Hydra-Synth.

Show random sketch - Loads a random example sketch.

Make random change - Modifies a single value automatically.

Upload to gallery - Uploads a sketch to Hydra's gallery and creates a shorter URL.

Show info window - Shows an overlay window with help text and links.



Inspired by analog modular synthesizers, these tools are an exploration of using streaming over the web for routing video sources and outputs in real-time.

Core principles: Input + Modify + Output

osc() is the input.

.rotate() and .repeat() are the modifiers.

Modify could be lots of things like:

.brightness
.rotate
.kaleid
.invert


Output = o0

The output o0 is like a TV channel, and .out(o0) is telling Hydra, "Put my visuals on this channel so they show up."

Send this visual to screen number 0 so I can see it.

The visual you build goes into a “pipe.”
.out(o0) opens the pipe and shows it on the main display. Without .out(o0), Hydra made something—but you wouldn’t see it.
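
Putting the three ideas together, a minimal sketch of Input + Modify + Output might look like this (the exact values here are just a starting point, not part of the workshop code):

osc(20, 0.1, 0.8)   // Input: an oscillator
.rotate(0.5)        // Modify: tilt the lines
.repeat(3, 3)       // Modify: tile the pattern
.out(o0)            // Output: show it on channel o0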


An oscillator is a signal that creates a repeating wave or pattern that goes back and forth — like a smooth pulse.

And in Hydra, we’re going to use it as an Input.



osc()
.out(o0)

Hydra expects an input and an output.

osc()     // Input
.out(o0)  // Output

Hydra is written in JavaScript, a programming language and a core technology of the web, alongside HTML and CSS.

JavaScript enables dynamic and interactive content on websites and web applications.

Syntax is the grammar of code. It’s how we tell the computer what to do step by step.

In Hydra, the code is read from left to right, top to bottom.

osc(20, 0.2, 0)
.out(o0)


We have 3 arguments in osc():

osc( frequency, sync, offset )

osc( frequency = 20, sync = 0.2, offset = 0 )

In coding, an argument is information you give to a function so it knows how to behave.

•    osc() is a function — it creates something.
•    The dot . chains actions together.
•    Each pair of parentheses () passes arguments (values) that control behavior.

Think of a function like a machine, and arguments as the settings or ingredients you give it. Each function has its own arguments.

A parameter is a blank spot a function expects, and an argument is the actual value you fill that spot with when you use the function.
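
For example (a small sketch; the specific numbers are arbitrary):

osc()              // parameters left empty, so Hydra falls back on its defaults
.out(o0)

//

osc(40, 0.1, 0.8)  // arguments fill the parameter slots: frequency, sync, offset
.out(o0)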

The first argument, 20, is the frequency of the lines.

osc(20, 0.2, 0)
.out(o0)

The second argument is the sync, which controls the speed.

At 0.2, the lines move quite slowly.

Change it to 0.8 and they move a lot faster.

Warning: too fast and it will flash!
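
For example, run each of these in turn and watch the speed change (only the second number is different):

osc(20, 0.2, 0)  // slow
.out(o0)

//

osc(20, 0.8, 0)  // much faster; careful, this can flash
.out(o0)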

osc(20, 0.2, 2)
.out(o0)

The last number is what's known as the offset.

Three oscillators are moving simultaneously: one red, one green, and one blue.

If they line up exactly, they mix to white and black.

The offset pushes them slightly out of sync with each other, which reveals the colors of each oscillator and mixes them all together.
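
To see the offset on its own, compare these two (no color transform yet; the values are just examples):

osc(20, 0.2, 0)  // offset 0: the three copies line up, so the wave reads as black and white
.out(o0)

//

osc(20, 0.2, 2)  // offset 2: the copies drift apart and the colors separate
.out(o0)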

osc(20, 0.2, 2)
.color(1, 0, 0) // Red
.out(o0)

//

osc(20, 0.2, 2)
.color(0, 1, 0) // Green
.out(o0)

//

osc(20, 0.2, 2)
.color(0, 0, 1) // Blue
.out(o0)

//

osc(20, 0.2, 2)
.color(1.1, 0.5, 1) // MIX
.out(o0)

.color( red, green, blue, alpha )

Pro Tip
// Two forward slashes create a comment. //

Add a note to the line of code without breaking the syntax.


osc(10, 0.2, 2)
.color(1.1, 0.5, 0.5)
.rotate(1)
.out(o0)

//

osc(10, 0.2, 2)
.color(1.1, 0.5, 0.5)
.rotate(90 * Math.PI / 25)
.out(o0)

//

osc(10, 0.2, 2)
.color(1.1, 0.5, 0.5)
.rotate(90 * 4)
.out(o0)

.rotate( angle = 10, speed )

Rotation is measured in radians; a radian is like a slice of a pie.

3.14 is pi, and a full 360-degree rotation is pi times 2 (about 6.28).

JavaScript / Hydra accepts math expressions, so you can write angles as formulas.
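
A handy trick: multiply degrees by Math.PI / 180 to get radians (plain JavaScript math, nothing Hydra-specific):

osc(10, 0.2, 2)
.rotate(90 * Math.PI / 180)  // 90 degrees written as radians (about 1.57)
.out(o0)

//

osc(10, 0.2, 2)
.rotate(2 * Math.PI)         // a full 360-degree turn (about 6.28)
.out(o0)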




osc(10, 0.2, 2)
.color(1.1, 0.5, 0.5)
.rotate(90 * 4)
.pixelate(25, 25)
.out(o0)

.pixelate( pixelX = 25, pixelY = 25 )

osc(10, 0.2, 2)
.color(1.1, 0.5, 0.5)
.rotate(90 * 4)
.pixelate(50, 50)
.kaleid(3)
.out(o0)

.kaleid( nSides = 3 )

osc(10, 0.2, 2)
.kaleid(3)
.color(1.1, 0.5, 0.5)
.rotate(90 * 4)
.pixelate(50, 50)
.out(o0)

Change the order: move .kaleid(3) to the top.

The order of the modifiers matters in Hydra.

Code in Hydra moves from left to right and top to bottom.


osc(10, 0.2, 2)
.pixelate(25, 25)
.kaleid(3)
.color(1.1, 0.5, 0.5)
.rotate(90 * Math.PI / 180)
.out(o0)

Moving modulation effects around and playing with math.

osc(10, 0.2, 2)
.kaleid(2)
.rotate(90 * Math.PI / 90)
// .pixelate(15, 25)
.color(1.5, 0.5, 0.5)
.out(o0)

Remix the order and jam out.

Turn lines on and off by commenting them out with two forward slashes //




Shortcuts

Run all
ctrl + shift + enter

Hide code
ctrl + shift + h

noise(1.5, 0.2)
.out(o0)

In Hydra, the noise() function is one of the core source generators. 

It creates a dynamic field of random pixel values that shift and flow over time. 

You can think of it like digital static or clouds that move and evolve. 

It’s often used as a base texture or as a modulator to distort other visuals.

noise(1.5, 0.2)
.out(o0)




A smaller noise value gives us cloud-like shapes that move and evolve.

noise(2.5, 0.3)
.color(1, 0, -1)
.saturate(1.1)
.out(o0)

Bring in color with .color under the noise input, and use .saturate to pump up the saturation.

noise(1.5, 0.2)
.color(1, 0, -1)
.saturate(1.3)
.blend(o0, 0.99)
.out(o0)

Using the output o0 inside .blend creates a feedback loop. It's like pointing a camera at its own output, or like layering two images with transparency. Instead of clearing the screen every frame, the new image is built on top of the old image. Each frame leaves a ghost of itself behind.
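
The blend amount sets how long those trails last. As a quick sketch to compare (the numbers are just examples), the closer the amount is to 1, the more of the previous frame survives:

noise(1.5, 0.2)
.color(1, 0, -1)
.saturate(1.3)
.blend(o0, 0.7)   // shorter trails: the old frame fades quickly
.out(o0)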



noise(1.5, 0.2)
.color(1, 0, -1)
.saturate(1.3)
.blend(osc(10, 0.2, 2), 0.50)
// .blend(o0, 0.75)
.out(o0)


.blend( texture, amount = 0.50 )

Since the first argument can take a texture, we can use the oscillator as a texture input.

A texture is just a graphical source.

Comment out the feedback loop to see what's happening.

noise(1.5, 0.2)
.color(1, 0, -1)
.saturate(1.3)
.blend(osc(10, 0.2, 2), 0.50)
.add(noise(5000, 0.78), 0.05)
.blend(o0, 0.75)
.out(o0)

.add( texture, amount = 1 )


Adding noise with .add() at a very high frequency adds grain-like texture.

When you "add a texture," you're not just stacking an image; you're passing a texture as an input into another function. That function then uses the texture to modify, mask, or mix visuals.

noise(1.5, 0.2)
  .color(1, 0, -1)
  .saturate(1.3)
  .blend(osc(10, 0.2, 2), 0.50)
  .modulate(osc(15, 0.3), 0.4)  // more modulation (0.1–1 range)
  .add(noise(5000, 0.78), 0.05)
  .blend(o0, 0.75)
  .out(o0)

For more of a watery feel and less repetition, we can use .modulate.

Modulate functions use the colors from one source to affect the geometry of the second source. This creates a sort of warping or distorting effect. 

.modulate() does not change color or luminosity; it distorts one visual source using another visual source.

A real-world analogy would be looking through a textured glass window or water.

You can add a second argument to the modulate() function to control the amount of warping: modulate(o1, 0.9).

In that case, the red and green channels of the modulating source are converted into x and y displacement of the base image.
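
As a sketch of that multi-output routing (assuming the default outputs o0 and o1 are available), you could render an oscillator to o1 and use it to displace whatever is drawn to o0:

osc(8, 0.1, 1)
.out(o1)             // the modulator lives on its own channel

noise(1.5, 0.2)
.color(1, 0, -1)
.modulate(o1, 0.9)   // o1's red and green channels push the noise around in x and y
.out(o0)             // o0 is what shows on screen by default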


Think of it like bending one image with another image’s energy — the second texture becomes a map that tells Hydra how to distort the first one.

noise(1.5, 0.2)
  .color(1, 0, -3)
  .saturate(1.3)
  .blend(osc(10, 0.2, 5), 0.50)
  .blend(o0, 0.75)
  .add(noise(5000, 0.78), 0.05)
  .modulate(noise(1.5, 0.7))
  .out(o0)

Final 


Runway's Creative Partners Program

2024
As a selected member of Runway's Creative Partners Program, I get to explore AI-generated art more deeply. The CPP Discord server keeps me in constant conversation with amazing artists worldwide who have been on this journey with me, tracking the latest cutting-edge AI tools and models that Runway continues to update at a massive scale.

The maiden voyage of Runway's Gen-3 Alpha model showcases a fusion of surreal AI-generated imagery with a custom score and sound-design-forward audio.

Represented San Antonio in Runway's worldwide series of community-led meetups, hosting an interactive session at Texas Public Radio's theater. I presented on AI's evolution in art, demonstrated Runway's applications in personal projects, and facilitated live AI art creation with audience participation.



Gen-3 Alpha FLUX.1-Dev (Images)

Runway Frames





Third Echo

2023

Exploring the evolution of AI-generated art, using Stable Diffusion (text-to-image AI) and AnimateDiff (an AI animation tool). Builds on my earlier AI art experiments with StyleGAN.



Project Breakdown
Looking Glass input images


Training frames came from a notebook called Looking Glass, originally by Sber AI (@ai_curio). The notebook implements an image-to-image generation technique that fine-tunes ruDALL-E. Using this method, I was able to create new images that closely resembled the given input images.


Looking Glass 2022
The input video was made in StyleGAN, a generative adversarial network (GAN) that creates highly realistic images using a style-based generator architecture, allowing fine-grained control over various aspects of the generated images.
For these datasets, I used real scans of floral lumen prints and generative studio flower photography created in RunwayML Gen-2.




Scyphozoa


2021

Created using the early machine-learning vision tools StyleGAN and Pix2PixHD next-frame prediction, an image-to-image translation model that generates synthesized images.

Project Breakdown
Input images

This run of the machine learning script was trained on jellyfish. It took 10 hours to train this model.
Output images

Epoch 200
Color-corrected output image, upscaled in Topaz Gigapixel and printed on Hahnemühle Photo Rag 308 gsm at Hare & Hound Press by master printer Gary Nichols. This project represents a full-circle moment: I discovered that my father, artist Carlos Chavez, collaborated with Gary Nichols and Hare & Hound Press 30 years ago.
Features original music live-coded using TidalCycles, a programming language for music creation.

21.04.2022-20.44.45.mp3
Delenda

Instagram
Soundcloud

For the past two years, I've been on an exhilarating journey with Delenda. Our collaboration melds her haunting vocals and raw storytelling with my AI-enhanced surreal visuals. From making music videos to designing live visuals, we're exploring new frontiers together.





Catalyst

2024

We merged AI-generated animations with analog video techniques for Catalyst's music video.

Fine-tuned models produced surreal visuals, which were then recaptured on vintage TVs for a uniquely tailored visual language.



Project Breakdown
Engineered a multi-layered AI workflow combining four custom datasets:
Runway-generated close-up moth imagery.
Midjourney-crafted mood and tone elements.
Curated Delenda selfies, portraits, and music-video frame pulls.
A hybrid dataset merging moth imagery with creative vibe photos.
LoRA fine-tuned Stable Diffusion 1.5 models, enabling a custom animation aesthetic.

Custom objects and characters generated via Runway's Text to Image. 

Animated results using Runway's Gen-2 model.

 
Processed all footage through a custom circuit-bent glitch board and vintage TVs. Recaptured using a Blackmagic Pocket 6K for high-resolution analog artifacts.









Final frames extracted during color correction in DaVinci Resolve.
AnimateDiff - 1st Tests



Luminaria 


2023

For Delenda’s Luminaria Contemporary Arts Festival show, I transformed live footage of Delenda through custom AI processing, creating a surreal visual backdrop. I ran real-time visuals using TouchDesigner during the performance, performing alongside Delenda and her band. 



Project Breakdown
A dynamic visual environment was created using two projectors running TouchDesigner. This setup allowed us to run fluid, ever-changing lighting conditions that interacted with the performer's various outfits in real time.





I curated two distinct image sets: one from our live-action video shoot and another of visually striking reference images. 

These were used to train custom Stable Diffusion 1.5 checkpoint and LoRA models, enabling us to generate AI visuals that could synthesize and amplify Delenda's visual identity.


I applied an AI-style pass to the original footage using Stable WarpFusion and our custom-trained LoRA models. This process tracked movements through generated optical flow and consistency maps. I developed a Python script for frame glitching and leveraged TouchDesigner's feedback network to achieve the final aesthetic.  
TouchDesigner live-visual setup and frame grabs.   


Treatment

Pathetic 

2023

This project began as a live-action music video, traditionally shot and edited. We then applied multiple passes of Stable WarpFusion AI to the footage, creating a surreal, dream-like version.


Project Breakdown
I directed a two-day shoot at AV Expression in San Antonio, capturing Delenda, her friends, and fans. 

After filming, I created a fully color-corrected live-action edit of the music video. 

This served as the foundation for our subsequent AI-enhanced visual treatments.
I then gathered and curated images to train a custom Stable Diffusion 1.5 checkpoint.
I processed sections of the live-action music video through Stable WarpFusion, using my custom model. This was done both on Google Colab and locally on a high-performance gaming laptop.
The final step involved compositing the original live footage over the AI-generated imagery in After Effects, followed by a final pass in DaVinci Resolve.
A.M. Architect

Audio + Visual Art Project by Diego Chavez & Daniel Stanush

For the past decade, I've created multimedia experiences with A.M. Architect alongside Daniel Stanush. Our collaboration weaves Stanush's melodic sensibility with my soundscape manipulation, using TouchDesigner and machine learning to transform performances into interactive installations. Through beats, generative visuals, and audience interaction, we're redefining the boundaries between electronic music and digital art.


Hydra

2024

Hydra offers a glimpse into a place where technology has evolved to offer near-limitless creation, and a group of elusive tech-savants who turn their abilities inward to create a new vision of themselves.

Project Breakdown
Hydra's creation began with Midjourney-generated storyboards, producing over 1,500 still images. These were transformed into video using Runway Gen-2 and Stable Diffusion 1.5. A TouchDesigner network then synchronized video transitions with the music. Finally, the output was processed through a circuit-bent BPMC Analog video mixer for added glitch effects.


Cynatica Conductor II

FL!GHT Gallery, San Antonio, TX — 2022

Cynatica Conductor invites the viewer to interact with light and sound, playing the conductor overseeing an ensemble of sonic texture and fractured melody. 

As the conductor (viewer) interacts with the art, it grows in complexity and vibrancy, becoming an immersive collaboration between the art and the audience.



Installation review in Glasstire art magazine:
A Technological Dream: "Cynatica Conductor"

Cynatica Conductor I

Sacramento, CA— 2022

An immersive art experience created with site-specific installations by thirty-five artists in the former PAC:SAT satellite news headquarters. It was a limited-run show before the building was demolished for new development.

A prototype for audio-reactive visuals in TouchDesigner responding to Ableton Live.

Avicenna

2017

Avicenna is featured on the Territories LP compilation album from 79Ancestors, released in May 2017. The album includes tracks from other electronic artists such as Telefon Tel Aviv, Deru, and Shigeto. 

Pre-Production - Treatment Frames

Color Field

— 2017

Color Field is the companion film to the 2017 A.M. Architect release, Color Field, with 79 Ancestors.

BTS


Rokovoko

As a co-partner, videographer, editor, animator, and music producer, I help create vibrant media that inspires and educates.  


Nathan’s story

2023

We aimed to portray Nathan as a multi-faceted person - an athlete, designer, and family man - not defined by SMA. This project celebrates living life forward, embracing one's whole self beyond others' expectations.



Portfolio

My career has been a colorful tapestry of visual storytelling, each thread a different medium. From the raw emotion of documentary filmmaking to the pixel-perfect precision of motion graphics, I've learned to speak the universal language of human experience through visuals.


Media Capabilities


Production
Camera operation, lighting design on location, on-set sound recording, and directing interviews and performances.

Post-Production
Editing for narrative, branded, and documentary formats. Color correction, captioning, and final delivery.

Motion Design
2D animation and visual design using After Effects — title sequences, explainers, animated branding, and compositing for live-action footage.

Photography
Portrait, product, and lifestyle photography with professional retouching and color grading.

Interactive
Projection mapping and interactive installations.

Audio
Live audio recording, voice-overs, and interviews. Custom music scoring and sound design.



Video Production


Matt Kleberg - Berggruen Gallery 2025


Strangers Salon at Ranch Motel


Vinilious - Angélica-Garcia


Artemis - Go Fund Me and Dinner


Muck & Fuss – Branding Video



Motion Graphics



Biogen - HHA


CNN 2020 Visionaries – Yunji Chen



Photos

Strangers Salon at Ranch Motel

 EVO Kitchen


SATX