Temporal Weaving with TidalCycles
2020-2024
TidalCycles code on the left generates MIDI notes, visualized in real-time with TouchDesigner on the right.
Real-time coding with TidalCycles and TouchDesigner.
In the spirit of TOPLAP’s mission to make live coding inclusive and accessible, this session highlights the creative possibilities of programming music and visuals in the moment.
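As a rough sketch of the bridge between the two sides (not my actual patch): TidalCycles sends out MIDI notes, and a small listener can map pitch and velocity onto visual parameters, which is essentially what the TouchDesigner network does. The port name and mappings below are placeholders, using the Python mido library.

```python
# Hypothetical MIDI listener: maps incoming TidalCycles notes to visual parameters.
import mido

with mido.open_input("TidalCycles Out") as port:     # placeholder virtual MIDI port
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0:
            brightness = msg.velocity / 127.0         # velocity -> brightness
            hue = (msg.note % 12) / 12.0              # pitch class -> hue
            print(f"note {msg.note}: brightness={brightness:.2f}, hue={hue:.2f}")
```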
Hydra
Live-Coding Visuals
2022-2024
Hydra is a browser-based visual synthesizer for live coding visuals through simple functions and feedback loops.
Screen-captured live coding in Hydra.
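Hydra itself is live-coded in JavaScript in the browser; as a rough illustration of what a feedback loop means here, this NumPy sketch blends each incoming frame with a transformed copy of the previous output, the same idea Hydra expresses in a single chained function call.

```python
import numpy as np

def feedback_step(new_frame, prev_out, decay=0.92, shift=2):
    """Blend the new frame with a slightly shifted, faded copy of the previous
    output; repeating this produces trailing, self-referencing visuals."""
    recycled = np.roll(prev_out, shift, axis=(0, 1)) * decay
    return np.clip(new_frame * (1 - decay) + recycled, 0.0, 1.0)

# toy loop on random noise standing in for a video source
out = np.zeros((240, 320, 3))
for _ in range(100):
    out = feedback_step(np.random.rand(240, 320, 3), out)
```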
Runway's Creative Partners Program
2024
As a selected member of Runway's Creative Partners Program, I get to explore AI-generated art more deeply. The CPP Discord keeps me in constant conversation with amazing artists from around the world who share the journey of finding the latest cutting-edge AI tools and models, which Runway continues to update at a remarkable pace.
This maiden voyage with Runway's Gen-3 Alpha video model showcases a fusion of surreal AI-generated imagery and a custom score with sound-design-forward audio.
Gen-3 Alpha (video), FLUX.1-Dev (images)
Runway Frames
Third Echo
An exploration of how AI-generated art evolves, using Stable Diffusion (text-to-image AI) and AnimateDiff (an AI animation tool), building on my earlier AI art experiments with StyleGAN.
Training frames came from a notebook called Looking Glass (original notebook by Sber AI / @ai_curio), which implements an image-to-image generation technique that fine-tunes ruDALL-E. Using this method, I was able to create new images that closely resembled the given input images.
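The Looking Glass notebook itself is not reproduced here; as a rough analogue of the image-to-image idea, this sketch uses the Hugging Face diffusers img2img pipeline, where a low strength value keeps the output close to the input frame. The model ID, prompt, and file names are placeholders.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder model ID and files; illustrates the general img2img technique only.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("training_frame.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="surreal, dreamlike reinterpretation",   # illustrative prompt
    image=init,
    strength=0.45,        # lower strength -> closer to the input image
    guidance_scale=7.5,
).images[0]
result.save("img2img_output.png")
```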
The input video was made with StyleGAN, a generative adversarial network (GAN) that creates highly realistic images using a style-based generator architecture, allowing fine-grained control over various aspects of the generated images.
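Videos like this come from walking through the generator's latent space. The sketch below, in plain NumPy and independent of any particular StyleGAN implementation, builds a looping path of latent vectors that a generator would render frame by frame.

```python
import numpy as np

def latent_walk(n_keyframes=5, steps=30, z_dim=512, seed=0):
    """Interpolate between random latent vectors to form a closed loop;
    each interpolated vector corresponds to one rendered video frame."""
    rng = np.random.default_rng(seed)
    keys = rng.standard_normal((n_keyframes, z_dim))
    keys = np.vstack([keys, keys[:1]])               # return to the start
    frames = []
    for a, b in zip(keys[:-1], keys[1:]):
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            frames.append((1 - t) * a + t * b)
    return np.stack(frames)                          # shape: (n_keyframes*steps, z_dim)

latents = latent_walk()
# each latents[i] would be passed to the StyleGAN generator to produce one frame
```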
Scyphozoa
2021
For this run, the machine-learning model was trained on images of jellyfish; training took 10 hours.
Epoch 200
Features original music live-coded using TidalCycles, a programming language for music creation.
21.04.2022-20.44.45.mp3
For the past two years, I've been on an exhilarating journey with Delenda from San Antonio. Our collaboration melds her haunting vocals and raw storytelling with my AI-enhanced surreal visuals. From making music videos to designing live visuals, we're exploring new frontiers together.
Catalyst
We merged AI-generated animations with analog video techniques for Catalyst's music video.
Fine-tuned models produced surreal visuals, which were then recaptured on vintage TVs for a uniquely tailored visual language.
Animated results using Runway's Gen-2 model.
Luminaria
For Delenda’s Luminaria Contemporary Arts Festival show, I transformed live footage of Delenda through custom AI processing, creating a surreal visual backdrop. During the show I ran real-time visuals in TouchDesigner, performing alongside Delenda and her band.
Two projectors driven by TouchDesigner created a dynamic visual environment, producing fluid, ever-changing lighting conditions that interacted with the performer's various outfits in real time.
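As a sketch of how that kind of reactivity can be wired inside TouchDesigner (operator names here are placeholders, not the actual network), a CHOP Execute DAT callback can map an incoming audio level onto a TOP parameter:

```python
# Runs inside TouchDesigner as a CHOP Execute DAT callback; 'feedback_level'
# is a placeholder Level TOP name, and the incoming audio level is assumed 0..1.

def onValueChange(channel, sampleIndex, val, prev):
    level = max(0.0, min(1.0, val))
    # swell the projected light with the music by driving a Level TOP's opacity
    op('feedback_level').par.opacity = 0.6 + 0.4 * level
    return
```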
I curated two distinct image sets: one from our live-action video shoot and another of visually striking reference images.
These were used to train a custom Stable Diffusion 1.5 checkpoint and LoRA models, enabling us to generate AI visuals that synthesized and amplified Delenda's visual identity.
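A minimal sketch of how a checkpoint and LoRA pair like this can be loaded for generation with the diffusers library; the file paths and prompt are illustrative, not the actual trained files:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder paths standing in for the custom-trained checkpoint and LoRA.
pipe = StableDiffusionPipeline.from_single_file(
    "checkpoints/custom_sd15.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras/custom_style.safetensors")

image = pipe(
    "portrait on a dark stage, surreal lighting",   # illustrative prompt
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("concept_frame.png")
```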
I applied an AI-style pass to the original footage using Stable WarpFusion and our custom-trained LoRA models. This process tracked movements through generated optical flow and consistency maps. I developed a Python script for frame glitching and leveraged TouchDesigner's feedback network to achieve the final aesthetic.
TouchDesigner live-visual setup and frame grabs.
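The frame-glitching script itself isn't published; the sketch below shows the general kind of pass it performed, assuming a simple band-shift glitch: horizontal slices of each frame are displaced and one color channel is offset before the frames feed TouchDesigner's feedback network.

```python
import numpy as np
from PIL import Image

def glitch_frame(in_path, out_path, bands=12, max_shift=40, seed=None):
    """Displace horizontal bands and offset the blue channel of a single frame."""
    rng = np.random.default_rng(seed)
    img = np.array(Image.open(in_path).convert("RGB"))
    edges = np.linspace(0, img.shape[0], bands + 1, dtype=int)
    for top, bottom in zip(edges[:-1], edges[1:]):
        shift = int(rng.integers(-max_shift, max_shift + 1))
        img[top:bottom] = np.roll(img[top:bottom], shift, axis=1)        # slide the band
    img[..., 2] = np.roll(img[..., 2], int(rng.integers(2, 8)), axis=1)  # offset blue channel
    Image.fromarray(img).save(out_path)

# glitch_frame("frames/f_0001.png", "glitched/f_0001.png", seed=1)
```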
Pathetic
This project began as a live-action music video, traditionally shot and edited. We then applied multiple passes of Stable WarpFusion AI to the footage, creating a surreal, dream-like version.
Project Breakdown
After filming, I created a fully color-corrected live-action edit of the music video.
This served as the foundation for our subsequent AI-enhanced visual treatments.
I then gathered and curated images to train a custom Stable Diffusion 1.5 checkpoint.
I processed sections of the live-action music video through Stable WarpFusion, using my custom model (the flow-warping idea is sketched after this breakdown). This was done both on Google Colab and locally on a high-performance gaming laptop.
The final step involved compositing the original live footage over the AI-generated imagery in After Effects, followed by a final pass in DaVinci Resolve.
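WarpFusion's own code isn't reproduced here; this OpenCV sketch only illustrates the flow-warping idea behind it: optical flow estimated from the original footage warps the previous stylized frame, so each new frame is stylized from a temporally consistent starting point.

```python
import cv2
import numpy as np

def warp_previous_result(prev_orig, next_orig, prev_stylized, blend=0.6):
    """Warp the previous stylized frame along the footage's motion, then blend
    it with the new original frame to form the init image for the next pass."""
    g_prev = cv2.cvtColor(prev_orig, cv2.COLOR_BGR2GRAY)
    g_next = cv2.cvtColor(next_orig, cv2.COLOR_BGR2GRAY)
    # backward flow (new frame -> previous frame), which is what remap expects
    flow = cv2.calcOpticalFlowFarneback(g_next, g_prev, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g_next.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
    return cv2.addWeighted(warped, blend, next_orig, 1 - blend, 0)
```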
For the past decade, I've created multimedia experiences with A.M. Architect alongside Daniel Stanush. Our collaboration weaves Stanush's melodic sensibility with my soundscape manipulation, using TouchDesigner and machine learning to transform performances into interactive installations. Through beats, generative visuals, and audience interaction, we're redefining the boundaries between electronic music and digital art.
Hydra
Cynatica Conductor II
Cynatica Conductor invites the viewer to interact with light and sound, playing the conductor overseeing an ensemble of sonic texture and fractured melody.
As the art interacts with its conductor (the viewer), the visuals and sound grow in complexity and vibrancy, becoming an immersive collaboration between the art and the audience.
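The installation's actual sensing and sound engine aren't detailed here; as a rough sketch of the idea, a camera's motion energy (the viewer's "conducting") can be smoothed into a single complexity value for the visuals and score to respond to.

```python
import cv2

cap = cv2.VideoCapture(0)                  # assumes a webcam as the sensor
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
complexity = 0.0
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev).mean() / 255.0   # how much the viewer moved
    complexity = 0.95 * complexity + 0.05 * motion    # smooth it over time
    prev = gray
    # 'complexity' would modulate layers of sonic texture and visual density
cap.release()
```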
Installation review in Glasstire art magazine:
A Technological Dream: "Cynatica Conductor"