Into the beast (update on visuals)

Following our previous work documented at /blog/i-o-in-2025/, we’ve continued developing a more robust and flexible visual system — and above all, one that can disappear to let us be more creative.

Technical Foundation

Our visual system is built on Cables GL, giving us a solid WebGL foundation that balances flexibility with performance. It’s still a work in progress, but it will be tested live for the first time on the 15th of March.
It sure doesn’t have the performance of our previous UI, which ran on openFrameworks in C++, but creative iteration is much faster here, and the system is easier to keep up to date.

The system operates with three permanent layers that work together to create depth and complexity. We could have gone full VJ style and added tons more flexibility, but the idea is to do the creating before a live gig and have less control live. We might adjust that though :)

  • Two generative layers powered by JavaScript code patches that create dynamic, evolving visuals (they can load 3D objects, generate all kinds of visuals)

  • One dedicated video layer that handles MP4 playback for situations where pre-rendered content is needed

  • One optional, contextual CSS-based text layer to display our live coding and other information, such as CPU usage, when needed.
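To make the layer setup concrete, here’s a minimal sketch of how such a fixed layer stack could be described and composited in order. This is plain JavaScript for illustration, not the actual Cables GL patch — all the names here are hypothetical:

```javascript
// Hypothetical description of the fixed layer stack; in our setup the
// real rendering lives inside Cables GL patches.
const layerStack = [
  { id: 'gen-a', kind: 'generative', enabled: true },  // JS code patch
  { id: 'gen-b', kind: 'generative', enabled: true },  // JS code patch
  { id: 'video', kind: 'video',      enabled: true },  // MP4 playback
  { id: 'text',  kind: 'css-text',   enabled: false }, // optional overlay
];

// Render order: only enabled layers, back to front.
function visibleLayers(stack) {
  return stack.filter((layer) => layer.enabled).map((layer) => layer.id);
}

console.log(visibleLayers(layerStack)); // ['gen-a', 'gen-b', 'video']
```

The text layer starts disabled and gets toggled on contextually, which is why it isn’t counted among the three permanent layers.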

We’re currently refining a preset system that will allow quick switching between different visual environments. This makes live performance more manageable while still maintaining creative flexibility. It should easily let us swap out the layers and give us some control over each layer’s content. For example, a generative 3D scene could be audio-reactive and have a camera movement triggered by our evaluation of code, while we invert the colors or change the model through live coding.
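One way to picture the preset idea: a preset is a bundle of per-layer settings, with live-coding tweaks merged on top. This is a rough sketch under that assumption — the preset names, `applyPreset`, and the settings keys are all invented for illustration, not our actual system:

```javascript
// A preset bundles per-layer settings; live-coding overrides win.
const presets = {
  calm:  { 'gen-a': { model: 'sphere', invertColors: false }, video: { clip: 'fog.mp4' } },
  harsh: { 'gen-a': { model: 'skull',  invertColors: true  }, video: { clip: 'noise.mp4' } },
};

// Merge a named preset with overrides typed during live coding.
function applyPreset(name, liveOverrides = {}) {
  const base = presets[name];
  if (!base) throw new Error(`unknown preset: ${name}`);
  const out = {};
  for (const [layer, settings] of Object.entries(base)) {
    out[layer] = { ...settings, ...(liveOverrides[layer] || {}) };
  }
  return out;
}

// e.g. switch to "calm" but invert the colors from the live-coding side:
const state = applyPreset('calm', { 'gen-a': { invertColors: true } });
```

The point of the merge order is exactly the example above: the preset sets up the environment, and live coding can still flip individual knobs without rebuilding the whole scene.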

The entire system responds to audio input, creating visual representations of sound in real time. This creates a natural synchronization between audio and visuals without manual intervention. It’s a must-have, as our hands are already quite full with the coding.
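In practice, audio reactivity usually boils down to reducing an FFT frame to a few scalar values and smoothing them before they drive visual parameters. Here’s a minimal sketch of that reduction step — the band split and smoothing constant are assumptions for the example, not our actual settings:

```javascript
// Reduce an FFT magnitude frame (values 0..1) to a bass/treble pair.
function bands(spectrum, split = Math.floor(spectrum.length / 4)) {
  const avg = (arr) => arr.reduce((sum, v) => sum + v, 0) / arr.length;
  return { bass: avg(spectrum.slice(0, split)), treble: avg(spectrum.slice(split)) };
}

// Exponential smoothing so the visuals don't flicker frame to frame.
function smooth(prev, next, k = 0.8) {
  return prev * k + next * (1 - k);
}

let bassLevel = 0;
const frame = [0.9, 0.8, 0.7, 0.6, 0.1, 0.1, 0.2, 0.1]; // fake FFT frame
bassLevel = smooth(bassLevel, bands(frame).bass); // could drive scale, brightness, etc.
```

In a browser, the `frame` would come from something like a Web Audio `AnalyserNode` every animation frame; the smoothed value is what actually gets patched into the visuals.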

We’ll have to make some compromises as our next gig is in 10 days, but I think it should be pretty darn good for our first tryout. After that, it’s going to be iterative.

Of course, there is some shit-fuckery that makes us lose a lot of time, as always in dev-time. For example, making Cables GL play videos at a decent speed without crashing is a big pain in the butt. My colleague figured out some black wizardry using `ffmpeg -i input.mp4 -movflags +faststart -brand mp42 output.mp4` to encode the video files. It works better, though still not perfect. We’ll figure it out!

Live Control System

Everything runs through websockets, allowing for live control during performances. The preset system serves as a foundation, letting us:

  • Trigger complete visual environments instantly

  • Modulate and modify these environments through various inputs including:

    • Live code adjustments

    • Audio analysis

    • Webcam input

    • Scanned data

    • Other digital inputs

This approach gives us both structure and spontaneity, making the system practical for live use while maintaining creative possibilities.
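The message handling behind that control flow can be sketched as a small dispatcher: JSON messages come in over the socket and get routed by type. This is a hand-wavy illustration — the message shapes and handler names are invented, not our actual protocol:

```javascript
// Route incoming control messages by type; malformed or unknown
// messages are ignored rather than crashing mid-performance.
function makeDispatcher(handlers) {
  return function dispatch(raw) {
    let msg;
    try { msg = JSON.parse(raw); } catch { return false; }
    const handler = handlers[msg.type];
    if (!handler) return false;
    handler(msg.payload);
    return true;
  };
}

const log = [];
const dispatch = makeDispatcher({
  preset:   (p) => log.push(`preset:${p.name}`),          // trigger a full environment
  modulate: (p) => log.push(`mod:${p.param}=${p.value}`), // live code / audio / webcam / etc.
});

dispatch(JSON.stringify({ type: 'preset', payload: { name: 'calm' } }));
dispatch(JSON.stringify({ type: 'modulate', payload: { param: 'hue', value: 0.3 } }));
// Wired to a real WebSocket it would look like:
//   socket.onmessage = (event) => dispatch(event.data);
```

Keeping the dispatcher ignorant of where messages come from is what lets the same path serve presets, live-code adjustments, and sensor-style inputs alike.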

I’ll keep this post updated :). BTW, the pics here are just random screenshots of the visuals generated while I was working on the code.

> if you recognize the next pic as an extract from Lynch’s Dune (1984), that’s not a coincidence… more on that soon :) !