I. RECAP
Crash Server feels like it’s in a constant state of change, always evolving, but this very moment feels special, and not only because of the new tech (though it helps!). It seems like the perfect time for a little retrospective: a chance to look back and share some of the journey with you. It will focus mostly on the visual side of things, although our visuals are always connected to the rest, the lore, and our history. Let’s dig in.
Crash Server has always been a project about interaction, especially in live concerts: when we play, we try to be as close to the public as we can. As an inheritance of live coding, we have to display our code (TOPLAP manifesto) in one way or another, and be transparent about what we are doing on stage. I’m not going deep into the transparency/comprehension debate here, but basically we adhere to these core concepts:
– show our code live: the idea is not for the public to understand everything we’re doing, but we have to show the link between the code and us, even if it is sometimes blurred
– play 100% live and improvised: the public makes or kills our sets; we listen to and see you while we play. We’re not here to unwrap a premade set. We may still use some pre-existing code, but less and less so, and when we do, it is there to be twisted as we see fit.
– feelings, all of them. While we might offer the autobahn you wish for, or adherence to a specific style, in the end, after all these years, I think we play what we are feeling.
Visuals are a tricky question with regard to those points. We are less reactive on them, as we are already quite focused on the sound; still, the visuals are there to enable and enhance those core concepts. The code is displayed, yet interpreted through our visual language. The visuals are improvised and hashed, just as our live set is. This has led us to an approach that is less VJ-like; we tried that for a while, but it required something we were not looking for.
II. Searching for Alternatives
So it all came down to two things: parameters and narration. Parameters are everything you can extract from your gig, that is, sound (FFT, analysis, structure, …); narration is how to order, structure, and tell a sort of story with the visuals. We opted for a parameter-driven system, but with real-time control over the narration. Narration covers a lot; we’ll get there later. For the purpose of this introduction, it’s more about sequencing than storytelling.
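To make the parameter side concrete, here is a minimal browser-side sketch of pulling FFT data out of an audio stream with the standard Web Audio API. It is an illustration of the idea, not our actual setup; the names and the way the energy value is derived are placeholders.

```typescript
// Minimal sketch: extracting FFT "parameters" from an incoming audio stream
// with the Web Audio API. Purely illustrative, not our historical vvvv/openFrameworks setup.
async function startAnalysis(): Promise<void> {
  const audioCtx = new AudioContext();
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = audioCtx.createMediaStreamSource(stream);

  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 1024;               // 512 frequency bins
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);

  function frame(): void {
    analyser.getByteFrequencyData(bins); // current spectrum, 0..255 per bin
    const energy = bins.reduce((a, b) => a + b, 0) / bins.length / 255; // rough overall loudness
    // "energy" (and any individual bins) become parameters that drive the visuals.
    requestAnimationFrame(frame);
  }
  frame();
}
```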
When we first ventured into the world of interactive visuals, we began with vvvv, a powerful graphics programming environment. For the first few months, it served as our foundation for creating our first interactive UI. Our project was still in the works, and on the amateur side. The setup was audio-reactive, responding to both the music and inputs we carefully selected, which made it an exciting start.
However, while vvvv offered immense potential, it was also quite complicated to run in a live concert setting. Since it relied on several local servers and routing to forward data, it proved challenging to manage, particularly on Windows, with no open-source option available. After that, we went full Linux (that could be a topic on its own!).
Determined to find a better solution, we explored alternatives, including, but not limited to, Le Biniou, an open-source generative visuals tool. Though it showed promise, the search for a more stable and efficient system continued, as we needed something more suited to live performances.
The Pandemic Shift: Streaming Concerts and 3D Exploration
When the pandemic hit, we shifted focus towards streaming concerts. This gave us the opportunity to dive deeper into 3D modeling and rendering, fields we hadn’t previously explored in depth, as we were more at ease with real-time work. We started using Mozilla Hubs to create virtual environments, experimenting with interactive spheremaps, triggered videos, and complex sequences to enhance our performances. For a time during Covid, we were streaming live in a 3D environment (Mozilla Hubs is dead, and so are a lot of our 3D projects that depended on that technology, but check this link to see what we did back then). Also, every asset will be used again in another form 🙂
This period of experimentation wasn’t just about technology—it was when we began to develop our lore and the visual narrative that would define our performances. At the same time, we began laying the groundwork for a new interface, one that would be more stable, flexible, and open-source.
Enter Open Frameworks: Our First Functional UI
After much trial and error, we decided to develop our own solution in Open Frameworks. It wasn’t an easy journey—there were countless hours spent debugging, refining, and making it work for live performances. But in 2021, after much effort, we finally released Crash OS, our first fully functional and open-source UI. This new setup brought together real-time audio analysis, pre-rendered videos in HAP format (courtesy of Blender and Cinema 4D), and a highly stable environment for live performances.
The system could handle everything from DMX to Arduino, triggering lights and inputs such as our own panic button. It felt like we were entering a new era of possibilities.

Sidenote: I’m talking here about the visuals we CAN project during a live concert.
Sometimes, depending on the place and the public, we have chosen to display our code, raw; at other times, the visuals are just details, or not projected at all in some venues.
Interactive Artwork and Multichannel Audio-Visuals
In 2021, we took a major step forward with an interactive artwork project (“Inextricable”) that involved 4-channel audio, multiple video outputs, and everything synchronized, mostly via OSC. This piece marked a new level of complexity and interactivity for us, and it allowed us to integrate a wide array of hardware: DMX, lights, sensors (including IR), LED strips, Kinects, and more.
We continued to refine this setup, tuning Open Frameworks and further developing our Mozilla Hubs environment for more immersive experiences.
The Realization: Challenges of C++ and the Shift to New Tech
As time passed, we started realizing that C++, while powerful, was a beast to tame. Between the complexity and the endless grind of keeping everything up to date, it became clear we needed a change. We missed that quick prototyping vibe, like what vvvv or Pure Data offered.
But hey, we didn’t ditch our Open Frameworks setup overnight. It stuck around for a while, did its job, and gave us a solid understanding of programming, hardware, drivers… and, let’s be real, a whole lot of random knowledge about power supplies, USB detection order, and some straight-up black magic debugging. And don’t worry—everything we learned and built in that time is still with us. You’ll see where it all led next.

III. 2024: New Beginnings with Cables.GL and JS Interfaces
In late 2024, we took a bold step forward. We integrated Cables.GL for the visuals, allowing for a more flexible and accessible approach to interaction and visuals. At the same time, we transitioned Troop and FoxDot to JavaScript, creating a new interface that is easier to maintain and build upon. We’ve only tested Cables.GL three times for the visuals so far, but I can confidently say it’s our way forward for the moment. There might be a question of merging our WebTroop with Cables in one way or another, but we’re not there yet.
Here’s where the magic happens: our new Cables.GL-based interface gets all its data through WebSockets. We set it up so our WebTroop (yep, JS-based FoxDot) can feed data straight into the system. We’re pulling info from SuperCollider: BPM, CPU levels, everything we write into the system, and everything the “server” spits out. We get triggers when things are evaluated, and we use that to make everything audio-reactive.
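To give an idea of what that looks like in practice, here is a minimal sketch of a visual client subscribing to such a WebSocket feed. The endpoint, the message fields (bpm, cpu, code) and the helper functions are assumptions for illustration, not our exact protocol.

```typescript
// Sketch of a browser-side client receiving live-coding data over WebSockets.
// Endpoint and message shape are hypothetical, for illustration only.
type LiveMessage =
  | { type: "tick"; bpm: number; cpu: number }   // periodic state from the audio server
  | { type: "eval"; code: string };              // fired when a line of code is evaluated

const socket = new WebSocket("ws://localhost:8080/live"); // assumed local endpoint

socket.onmessage = (event: MessageEvent<string>) => {
  const msg: LiveMessage = JSON.parse(event.data);
  if (msg.type === "tick") {
    // Feed BPM and CPU into whatever drives the visuals (e.g. Cables.GL variables).
    updateVisualParams(msg.bpm, msg.cpu);
  } else if (msg.type === "eval") {
    // Use evaluation events as triggers, e.g. flash or spawn an element.
    triggerEvalPulse(msg.code);
  }
};

// Placeholders standing in for the actual visual bindings.
function updateVisualParams(bpm: number, cpu: number): void { /* ... */ }
function triggerEvalPulse(code: string): void { /* ... */ }
```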
Looking Ahead
Everything described hereafter is controlled by coding in our webTroop environment. This is just one example of how our system can be used; it can be modulated to behave differently. We’re cooking up some cool new features, and I’m not gonna spoil everything (you’ll have to see it live), but here’s a sneak peek of the new features.

Live 2025
Before the chaos begins, we’ve got an Init Sequence that runs through every system we have: I/O checks, making sure everything’s ready to go, all systems “green.” No more frantic last-minute checks before the show.
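As a rough idea of what an init sequence like this boils down to, here is a sketch under assumed subsystem names; it is not our actual code.

```typescript
// Sketch of an init-sequence check: each subsystem exposes an async ping,
// and the show only starts when everything reports green. Names are hypothetical.
interface Subsystem {
  name: string;
  ping: () => Promise<boolean>;
}

async function initSequence(systems: Subsystem[]): Promise<boolean> {
  let allGreen = true;
  for (const sys of systems) {
    const ok = await sys.ping().catch(() => false); // a failed ping counts as "red"
    console.log(`${sys.name}: ${ok ? "green" : "RED"}`);
    if (!ok) allGreen = false;
  }
  return allGreen;
}
```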
We can then trigger the “screensaver”: a generative background that builds the tension, with messages like “Concert starts in 10 minutes” popping up. It’s the moment to launch some generative sounds or drones, or to connect to feeds from the public, for example.
Then we connect to our home. It’s loosely based on our Mozilla Hubs environment, and it’s our virtual space before every concert; it has evolved a lot over time. It’s like entering our home base, a part of the lore we’ve been building for years. You’ll know it when you see it. It’s an ever-changing landscape; each concert shapes it.
Then, we hit the Boot Sequence. This is where the virtual O/S comes to life. Everything clicks in, systems start humming, and we’re ready to roll. We can see the code on screen but it’s still open to interpretation.


Next, we run a Scan Sequence. This grabs all the local data we need—Wi-Fi, device signals, data feeds—and it helps us map out our environment. We can see what’s going on around us in real-time, making sure we’re locked into everything we need.
And now, the real fun begins. The battle with the server is on. We’ve got a map of our “target,” and with sound, we go after every node we’ve detected. Destroy ‘em one by one.

When the time is right, we hit the Warp Sequence. It’s like a tunnel, zipping us between scenes in a flash. Things get wild. No one knows where we’re going next. Multiple variations are possible.

We’ve got a Target Sequence that shows the “battle” in real-time. We can see where we’re at, how much progress we’ve made. It’s like checking the score, but way cooler.
And finally, we wrap things up with the Shutdown Sequence. After the show’s done, we get all the stats: keys pressed, lines evaluated, BPM, CPU usage, and more. It’s a rundown of everything that went down, like a post-gig debrief.
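For the curious, here is roughly how such end-of-show stats could be accumulated from the same data stream; the field names and report format are illustrative, not our exact implementation.

```typescript
// Sketch of accumulating show statistics for a shutdown report.
// Counters and report shape are assumptions, not the real format.
interface ShowStats {
  keysPressed: number;
  linesEvaluated: number;
  bpmSamples: number[];
  cpuSamples: number[];
}

const stats: ShowStats = { keysPressed: 0, linesEvaluated: 0, bpmSamples: [], cpuSamples: [] };

function onKeyPress(): void { stats.keysPressed += 1; }
function onEval(): void { stats.linesEvaluated += 1; }
function onTick(bpm: number, cpu: number): void {
  stats.bpmSamples.push(bpm);
  stats.cpuSamples.push(cpu);
}

function shutdownReport(): string {
  const avg = (xs: number[]) => (xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0);
  return [
    `Keys pressed: ${stats.keysPressed}`,
    `Lines evaluated: ${stats.linesEvaluated}`,
    `Average BPM: ${avg(stats.bpmSamples).toFixed(1)}`,
    `Average CPU: ${avg(stats.cpuSamples).toFixed(1)}%`,
  ].join("\n");
}
```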
Yeah, that’s a pretty solid rundown of what you might see in our next shows, but the real heart of it? It’s in the scenes and the sound. That’s what’s getting hammered out right now. Our code isn’t just spitting out numbers; it’s got something to say, something to show. And getting that across? Not always easy, but that’s what the next lives are all about: pushing ideas, pushing boundaries, and making sure the live show has more to say. But we’re not gonna spoil it here. Stay tuned though: there might be an article on this site soon, diving into the nitty-gritty of live coding from our point of view.
Example 1: Visualizing the Server
On another note, our server isn’t just crunching numbers; it’s composing sound in real time. Here is a patch that helps visualize its mood swings (a rough sketch of the mapping follows the list):
⚡ Thicker lines = intense sonic generation
🔄 Chaotic turns = CPU spikes
💾 Text & markers = code written by the machine itself
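As a rough illustration of the kind of mapping behind this patch (a sketch only; the actual Cables.GL graph is node-based, and the constants here are arbitrary):

```typescript
// Sketch of mapping the server's "mood" to drawing parameters, assuming
// normalized inputs (0..1). Constants and names are illustrative, not the real patch.
interface ServerMood {
  sonicIntensity: number; // how much sound the server is generating
  cpuLoad: number;        // current CPU usage
  lastLine: string;       // last line of code written by the machine
}

function moodToDrawing(mood: ServerMood) {
  return {
    lineThickness: 1 + mood.sonicIntensity * 8,               // thicker lines = intense generation
    turnJitter: mood.cpuLoad > 0.8 ? Math.random() * Math.PI  // chaotic turns on CPU spikes
                                   : mood.cpuLoad * 0.3,
    label: mood.lastLine,                                      // text & markers = machine-written code
  };
}
```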
