Teaching My Computer to Dream in Sound

Generative Soundscape Composition

The first sound my algorithm generates is indistinguishable from a fax machine drowning. Progress, apparently.

My hobby is collecting hobbies, and hobby number three is Generative Soundscape Composition—writing code that creates ambient audio environments from field recordings, radio signals, and synthesized tones. Not composing music. Designing systems that compose music forever, slightly different each time, never looping.

The Concept Clicks

Yesterday I watched the balloon telemetry climb to 30 kilometres while my laptop tracked its position packets. GPS to encoder to transmitter to antenna to receiver to map—a pipeline of transformations, each stage reshaping data into something new. Somewhere around 22,000 metres, I realized audio could work the same way. Sound in, algorithm in the middle, different sound out. Forever.

Brian Eno coined “generative music” in 1995, but the idea predates him by centuries. Johann Kirnberger published a dice-based music generator in 1757. Mozart’s publisher sold a system that could produce 759 trillion different waltzes from pre-composed bars. The dice weren’t the point—the constraints were. Every random selection pulled from options a composer had already validated as musically coherent.
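The mechanism is simple enough to sketch. Here is a minimal Python version of a Würfelspiel-style generator; the table entries are invented placeholders, not Mozart's actual bar numbers, but the structure is the same: two dice pick one pre-validated option per bar.

```python
import random

# Hypothetical dice table: for each of 16 bars, eleven pre-composed
# options, one per possible sum of two dice (2 through 12). Mozart's
# real table maps dice sums to bar numbers in a printed score; these
# labels are placeholders.
NUM_BARS = 16
OPTIONS_PER_BAR = 11  # dice sums 2..12

table = [[f"bar{b:02d}_opt{o:02d}" for o in range(OPTIONS_PER_BAR)]
         for b in range(NUM_BARS)]

def roll_waltz(rng: random.Random) -> list[str]:
    """Pick one pre-validated option per bar by rolling two dice."""
    waltz = []
    for bar in range(NUM_BARS):
        dice_sum = rng.randint(1, 6) + rng.randint(1, 6)  # 2..12
        waltz.append(table[bar][dice_sum - 2])
    return waltz

print(roll_waltz(random.Random(1791)))
```

All the musicality lives in the table; the dice only index into it. That division of labor is the whole trick.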

Chess taught me this. Roughly 29,300 games have drilled into my brain that good positions have structure, and structure comes from rules, not from memorizing every possible arrangement. Generative music works identically. Pure randomness sounds random. Systems with guardrails sound composed.

Pure Data and the Patcher’s Mind

I’m starting with Pure Data, the free sibling of Max/MSP. Both are Miller Puckette’s work: Max began at IRCAM in Paris in the 1980s, and Pd followed in the mid-1990s as its open-source successor. Pd uses visual programming—you connect boxes with virtual patch cables, routing audio and control messages through a flowchart of processing nodes.

My first patch:

[osc~ 220] → [*~ 0.3] → [dac~]

An oscillator at 220 Hz (A below middle C), attenuated to 30%, sent to the digital-to-analog converter. It drones. Endlessly. Three boxes and I’m already bored of what they produce—which is exactly the point. A generative system needs to evolve, or it’s just a loop wearing a costume.
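For readers who think in code rather than patch cables, here is roughly what those three boxes compute, sketched offline with numpy. This is my translation, not Pd's internals, and it writes a file instead of streaming to the sound card.

```python
import numpy as np
import wave

SR = 44100          # sample rate, Hz
freq = 220.0        # [osc~ 220]: A below middle C
gain = 0.3          # [*~ 0.3]: attenuate to 30%
seconds = 2.0

t = np.arange(int(SR * seconds)) / SR
drone = gain * np.sin(2 * np.pi * freq * t)

# Stand-in for [dac~]: write 16-bit mono PCM to disk instead of the speakers.
with wave.open("drone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((drone * 32767).astype(np.int16).tobytes())
```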

Grains and Ghosts

The technique I’m chasing is granular synthesis, and it operates at the microsound timescale: particles of audio lasting 1 to 100 milliseconds. Dennis Gabor theorized this in 1947—the same Gabor who later won the Nobel Prize for holography. Composer Iannis Xenakis implemented it first by physically splicing magnetic tape with a razor blade. Barry Truax achieved real-time granular synthesis in 1986 using a signal processor that cost more than most houses.

Now the same processing runs in a browser tab. The cognitive shift is harder than the technical one.

I load a field recording—wind through the poplars behind my house, captured last October—and scatter it into fragments. Playback speed randomized between 0.8× and 1.2×. Grain duration: 50 milliseconds. Density: 30 grains per second. Spatial position: anywhere in a 180° stereo arc.

What comes out is wind that never existed. It has the texture of my recording but none of the specific gusts. Play it for an hour; it won’t repeat, because it isn’t a recording anymore. It’s a process with my recording as fuel.
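The whole grain cloud fits in a short numpy sketch using the parameters above. The smoothed-noise source is a stand-in for my field recording, and every name here is mine, not from any particular granular library; a real patch would also randomize grain duration and density over time.

```python
import numpy as np

SR = 44100
rng = np.random.default_rng()

# Stand-in source: five seconds of smoothed noise. In practice, load
# the field recording from disk here instead.
source = np.convolve(rng.standard_normal(5 * SR), np.ones(64) / 64, mode="same")

GRAIN_SEC = 0.050        # grain duration: 50 ms
DENSITY = 30             # grains per second
SPEED_RANGE = (0.8, 1.2) # playback-speed randomization
OUT_SEC = 10.0

grain_len = int(GRAIN_SEC * SR)
window = np.hanning(grain_len)           # fade each grain in and out
out = np.zeros((int(OUT_SEC * SR), 2))   # stereo output buffer

for onset in np.arange(0.0, OUT_SEC, 1.0 / DENSITY):
    speed = rng.uniform(*SPEED_RANGE)
    # Read grain_len output samples from a random spot, resampled by `speed`.
    max_start = len(source) - grain_len * SPEED_RANGE[1] - 1
    idx = rng.uniform(0, max_start) + np.arange(grain_len) * speed
    grain = np.interp(idx, np.arange(len(source)), source) * window
    # Scatter across the stereo arc with an equal-power pan.
    pan = rng.uniform(0.0, np.pi / 2)    # 0 = hard left, pi/2 = hard right
    i = int(onset * SR)
    n = min(grain_len, len(out) - i)     # don't write past the buffer
    out[i:i + n, 0] += grain[:n] * np.cos(pan)
    out[i:i + n, 1] += grain[:n] * np.sin(pan)

out /= np.abs(out).max() + 1e-9          # normalize to avoid clipping
```

The Hann window matters more than it looks: without per-grain fades, every grain boundary is a click, and thirty clicks per second is a buzz, not a breeze.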

The Vocabulary I Didn’t Know I Needed

Soundscape ecology splits environmental audio into three categories: biophony (animal sounds), geophony (weather and earth), and anthropophony (human-made noise). Bernie Krause coined the terms, building on R. Murray Schafer’s World Soundscape Project, which treated sonic environments as ecosystems worth documenting back in the 1970s.
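The taxonomy maps neatly onto code. A hypothetical tagging scheme for a sample library (file names invented) would let a patch balance categories instead of picking source files blindly:

```python
from enum import Enum

class SoundSource(Enum):
    BIOPHONY = "animal"       # birdsong, insects, frogs
    GEOPHONY = "earth"        # wind, rain, thunder, water
    ANTHROPOPHONY = "human"   # traffic, voices, machinery

# Tagged source library, so a generator can draw grains
# proportionally from each category.
library = {
    "poplar_wind_oct.wav": SoundSource.GEOPHONY,
    "dawn_chorus.wav": SoundSource.BIOPHONY,
    "distant_highway.wav": SoundSource.ANTHROPOPHONY,
}
```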