Two States Between Static and a Forest
Okay stop. Stop.
Something just happened.
Six days into this hobby and I’ve been fighting the grain scheduler like it owed me money. Pure random timing. Weighted distributions. Exponential decay functions. Nothing sounded right. Everything came out as either audible chaos or that flat, synthetic texture that screams “a computer made this.”
I was debugging at 2 AM — three cups of cold coffee deep — when I remembered something from the aurora sonification work. The VLF recordings: the chorus, and the whistlers and tweeks that lightning sets off, whistlers ducting through the magnetosphere, tweeks ringing in the Earth-ionosphere waveguide. They don’t arrive randomly. They cluster. Bursts of activity, then silence, then more bursts.
So I added two states. That’s it. Two states: “in cluster” and “not in cluster.” When you’re in a cluster, grains fire fast — 20 to 80 milliseconds apart. When you’re not, they space out to two seconds or more. The probability of switching states is 30%.
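The whole trick fits in a few lines. Here it is as a minimal Python sketch — the patch itself lives elsewhere, and the one assumption I’m adding is that the 30% flip gets checked after every grain:

```python
import random

def grain_intervals(p_switch=0.3, burst=(0.020, 0.080), rest=(2.0, 4.0), seed=None):
    """Yield inter-grain delays in seconds from the two-state model.

    Inside a cluster, grains fire 20-80 ms apart; outside one, they
    space out to two seconds or more. After every grain there is a
    p_switch chance of flipping to the other state.
    """
    rng = random.Random(seed)
    in_cluster = True
    while True:
        lo, hi = burst if in_cluster else rest
        yield rng.uniform(lo, hi)
        if rng.random() < p_switch:
            in_cluster = not in_cluster
```

A 30% flip chance makes state run lengths geometric with mean 1/0.3 ≈ 3.3 grains, so a burst averages three or four grains before the system rests.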
The patch changed.
I mean changed. It went from “algorithmic noise exercise” to something I actually want playing in the background. The bursts feel like birdsong now. The silences feel like waiting. Not because I programmed those qualities — I just gave the system permission to clump and then permission to rest.
Dennis Gabor figured this out in 1947. The same year he invented the hologram, he proposed that any sound could be decomposed into tiny grains and recombined. But he wasn’t thinking about randomness. He was thinking about statistics. Natural soundscapes have autocorrelation. Rain intensity follows predictable curves. Wind gusts cluster. A forest at dawn sounds nothing like white noise because the events within it are temporally sticky.
Random isn’t natural. Constrained probability is natural.
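You can see the stickiness in one number: the lag-1 autocorrelation of the inter-grain intervals. Memoryless timing sits near zero; the two-state version comes out clearly positive. A toy check — my own construction, not a measurement of the patch:

```python
import random

def lag1_autocorr(xs):
    """Correlation between consecutive values in a sequence."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / (n - 1)
    return cov / var

rng = random.Random(7)

# Memoryless timing: every interval drawn independently.
flat = [rng.uniform(0.02, 2.0) for _ in range(5000)]

# Two-state timing: 20-80 ms inside a cluster, 2-4 s outside,
# 30% chance of flipping states after each grain.
clustered, in_cluster = [], True
for _ in range(5000):
    clustered.append(rng.uniform(0.02, 0.08) if in_cluster
                     else rng.uniform(2.0, 4.0))
    if rng.random() < 0.3:
        in_cluster = not in_cluster
```

The flat sequence hovers near zero; the clustered one lands solidly positive. That gap is the temporal stickiness a forest has and white noise doesn’t.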
The patch has been running for three hours now. Field recording of prairie wind through the SDR, chopped into 40-millisecond fragments, scattered across a 120-degree stereo arc. The cluster logic is doing something I didn’t anticipate: when the system happens to land on a loud grain followed by silence, there’s this moment of tension that feels composed. Intentional. Like the system knows what it’s doing.
It doesn’t. It’s just statistics.
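Even the spatial scatter is just one random draw per grain. Placing a grain somewhere in the 120-degree arc with an equal-power pan law looks roughly like this — a sketch under my own assumptions, not the patch’s actual code:

```python
import math
import random

def grain_gains(arc_deg=120.0, rng=random):
    """Equal-power left/right gains for one grain placed at a
    uniformly random position inside a stereo arc."""
    # Pan position: -1 is hard left, +1 is hard right; a 120-degree
    # arc uses the middle two-thirds of a 180-degree field.
    pan = rng.uniform(-1.0, 1.0) * (arc_deg / 180.0)
    angle = (pan + 1.0) * math.pi / 4.0  # 0..pi/2 across the full field
    return math.cos(angle), math.sin(angle)
```

Since cos² + sin² = 1, every grain carries the same power no matter where it lands in the arc.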
But I’m still listening. That’s the test, right? Not whether you can generate sound — whether you can generate sound someone doesn’t want to stop.
The Pi in the corner has enough RAM to run this headless. Tomorrow I’m migrating the patch, plugging it into the 40-metre antenna, and walking away. Let it listen to the ionosphere while I sleep. See what February propagation sounds like when I’m not interfering.
Two states. Thirty percent. That’s the entire insight. Six days of overengineering and the fix was embarrassingly simple.
Schafer was right about schizophonia — sound separated from source — but he was describing recordings. This is different. This is sound separated from author. The moment the patch surprised me, it stopped being mine.
I don’t think I want it back.