
Machine Listening, Environments 12: new concepts in acoustic enrichment
A conversation between Nele Möller, Joel Stern, and James Parker
Microphones and devices listen constantly to our conversations and harvest our data. And while we may not agree with, let alone consent to, this form of corporate and governmental eavesdropping, we are at least somewhat aware that it takes place. More-than-human entities in our environments, by contrast, have no such awareness of the microphones, recording devices, and loudspeakers pointed at them by corporations, governments, scientists, and artists. Methods like “acoustic monitoring” or “environmental field recording” collect data and material for policies, research, and artistic projects, while “acoustic enrichment” uses loudspeakers to play recordings of “healthy environments” into degraded areas in the hope of restoring them. Although these practices often aim to counteract environmental destruction, their methods of data accumulation, automation, and enforcement also link them to extractivism, colonialism, and control. These listening technologies weave through our environments just as they weave through the devices we use every day.
To pay critical and artistic attention to this technoscientific, AI-driven, and data-processing form of listening, artist-researchers Sean Dockray, James Parker, and Joel Stern established Machine Listening in 2020 as a platform for collaborative research and artistic experimentation.
In one of their current projects, Machine Listening relates recent developments in acoustic enrichment and computational audition to earlier iterations of environmental listening and recording. Under the title Environments 12: new concepts in acoustic enrichment, Machine Listening created a speculative addition to the once-popular Environments series by Irv Teibel, one of the first commercially distributed series of environmental recordings, released in the late 1960s and 70s.
In Machine Listening’s addition to the series, soundscapes are reproduced, synthesized, and managed on a planetary scale, creating a cybernetic ecology in which the boundaries between humans, technology, and the biosphere blur. The record and its more-than-human chorus imagine a world in which organic life depends on synthetic echoes of itself, “a reproduction of a replica,” as described in the record’s liner notes.
Environments 12 originated as an 8-channel sound installation presented at RMIT Design Hub and was published in 2025 as a record by the label Futura Resistenza. As co-runner of the label, together with my partner Frederic Van de Velde, I had the pleasure of moderating a conversation with James Parker and Joel Stern during the record launch at QO2 in Brussels in June 2025. I am glad to have the opportunity to expand on our conversation within the framework of this issue.
Throughout the interview, you can listen to and read excerpts from the record.
Nele Möller: The Environments series was the first publicly available psychoacoustic recording and sparked a wave of interest in environmental sound. The heavily processed field recordings were marketed as aids for work, sleep, and relaxation. What made you want to add a speculative twelfth edition to the Environments series by Irv Teibel, around fifty years later?
James Parker: We didn’t actually start with Environments. It wasn’t like, “Let’s make the twelfth record in the series.”
Joel Stern: That idea came later.
JP: What we really began with was wanting to explore how machine listening (audio AI) was being applied to environmental sound. We’d come across some scientific papers and projects that felt like fertile ground for an artwork. From there, we started thinking about how to tell this bigger story about machine listening and environmental systems, using the Environments series as a form or medium for telling that story.
Environments 1
JS: The way James, Sean, and I work is that we share a flood of papers, links, and references over months, sometimes years, building a web of interconnected ideas. There was one piece of research we kept dwelling on: the acoustic enrichment paper about playing the sound of coral reefs back into coral reefs in order to recuperate them. There was this image of speaker arrays installed on corals, playing sounds to help fish and corals recover, almost like the reef singing itself back to health.
One of the scientists involved in that project, Timothy Lamont, actually agreed to share with us the audio recordings he had made and that were being played back into the reef. At the same time, we were sharing things like Interspecies Internet, this project weirdly co-founded by Peter Gabriel about developing large language models to allow different animals to speak to one another, humans to speak to animals, and so forth. There were all these different stories.
And one thing they all had in common was bioacoustics, interspecies language, AI, and environmental sound being instrumentalised in weird ways for different reasons.
Our Machine Listening projects often mix audio composition with a publishing element. Once we realised we could make a record and reproduce the aesthetic dimensions of the Environments series, that gave the project its form.
Reef mega mix
NM: The first time I listened to Environments 12, I felt quite strange because I didn’t know how to place it. The combination of AI voice clones with environmental sounds that are themselves partly computerised felt unsettling. There’s a repulsion to it, using technology to mimic the human voice alongside natural sound. But then, the more I listened, the more I realised that this repulsion is part of the experience.
JP: When Joel sent me the first rough edit, I had exactly that reaction: repulsion. Something about it felt grotesque.
JS: Or a bit abject.
JP: Lots of the record’s weirdness comes from the voice clones. They don’t have anything directly to do with Environments, but there are a couple of connections. One is the myth that Teibel encoded his own voice, intentionally or not, in The Psychologically Ultimate Seashore. We asked writer and podcaster Mack Hagood, who has written on Environments, about it, and he said someone is trying to do a forensic analysis, but no one knows if it’s true. Still, the myth was enough to work with.
Psychologically ultimate seashore
JS: Since the piece is about synthetic environments, it made sense to extend that idea to the voice, turning it into its own synthetic or cybernetic environment, where traces of a real voice are recomposed into something synthetic or hybrid.
JP: And then there’s the circular, cybernetic thing: the clones were trained on performances from this piece. So the work is trained on itself. It’s made weird and alien, but it’s still a kind of optimisation, aesthetic or affective optimisation, of the same material.
JS: Right, imagine taking a person’s reading of a story, cloning their voice from that reading, then having the clone reproduce the same text. It keeps traces of the original performance.
JP: Exactly. They’re not just “voice clones,” they’re performance clones. Voice puppets. They replicate a specific delivery. And there is also this whole thread about speaking to animals using computers that is woven into the story.
Conversations we can't understand
JP: And some of the singing and chanting has this mantra-like quality, which fits obliquely with the original Environments records—coming out of the ’60s and the early New Age culture. One of the Environments recordings we interpolated is from a hippie “be-in,” with chanting.
Often, the live, human voices on the record are doing something sung, chanted, or intoned. We’ve got this mantra element, which feels a bit hippie-ish, but also strangely corporate.
There is this hopeful–desperate logic behind certain scientific projects. Like, if you think you’re going to save a coral reef by putting loudspeakers on the seabed and playing remixed reef sounds, that’s a kind of semi-religious act. It’s desperation, but also absolute commitment against impossible odds.
Coral reefs, for example, are basically completely destroyed. So I have a huge amount of admiration for scientists who still try to turn things around. In a way, they’re watching the reefs die for us. There’s something deeply committed, almost spiritual, about that, and I think some of that makes its way into the record.
JS: On the record, the dynamic is mostly that humans, friends of ours, do the singing and chanting, while the storytelling is handled by the clones. The exception is the final story, with the record spinning in the forest after nature has reclaimed the technology—that’s all human voices.
It’s deliberately mixed. Ideally, a listener who hasn’t read the notes shouldn’t always know which voices are human and which are synthetic.
NM: I wanted to ask more about the mode of listening that’s enabled here, through acoustic enrichment or machine listening: this extractive, sometimes governmental mode of listening that big corporations use, but that is also present in research projects and artistic works. There are so many different modes of listening tied up in this, and I think James wrote about Dylan Robinson’s idea of “hungry listening” in relation to the project as well. Could you expand on this?
JS: I see it as having a few dimensions. When we started the Machine Listening project, we were naming a type of listening we wanted to analyse and critique: algorithmic, extractive, computational listening. It grew out of our earlier work on eavesdropping, which was about state and corporate capture of our sonic world. Machine Listening is like the algorithmic extension of that.
JP: Right, but it also seems really important to notice that the phrase “machine listening” comes out of computer music and experimental music, basically. The first written use I can find is by Robert Rowe, a composer at the MIT Media Lab.
So there’s a very direct line from avant-garde music to Silicon Valley tech. Composers at MIT were literally teaching and learning alongside the people who went on to run programs at companies like Google.
It’s interesting because people tend to assume music or listening is inherently benign. But the phrase “machine listening” already contains this entwined history of experimental art and corporate technology. You don’t have to project that onto it—it’s baked in.
The thing about Teibel is he was really clear that he was pioneering a new kind of listening. Reading those liner notes, seeing him on TV, it’s wild to imagine that this way of listening wasn’t “natural” yet. It had to be taught.
If you watch that old interview, the host just finds it strange, almost funny. She’s never heard anything like it before, and the idea that you’d put on a record just to manage your mood? That is totally kooky to her. Now, of course, it’s completely normal.
But the way it’s “normal” now is bound up with capital in a much deeper way. Teibel, in addition to hanging out with Stockhausen and so on, is clearly a capitalist. He’s a marketing guy, and a bit of a huckster. But he’s still not as bad as Daniel Ek, the CEO of Spotify, an ad guy who invests in military companies. Listening is tied to power. Capital has taken up not just the forms of music production but also the forms of listening they encourage, because those forms are incredibly profitable. Spotify specifically wants us to listen to mood music; that’s a huge part of its business model, as Liz Pelly points out in her recent book, Mood Machine. Spotify is heavily invested in encouraging what she calls “lean back listening”, because it’s much more profitable than more attentive, critical forms of listening.
With Environments, there’s this interesting arc: first, the form of listening had to be invented; later, it got captured by capital. That’s not exactly the same as the kinds of listening being projected back onto the environment now—but there’s overlap.
The big point I keep making about machine listening, especially as it’s used in bioacoustics, is that the scientists doing it are, frankly, politically naïve. They rarely engage with the political, legal, or economic implications of what they’re building. They call it passive acoustic monitoring, but once you automate analysis, then automate responses and interventions, you’re creating a monitoring system that inevitably becomes a control system.
That’s the core of acoustic enrichment: you feed sounds back into an environment to modulate it. This isn’t just “analysis.” It’s governance, on a potentially planetary scale. No forest is isolated; once you start, you’re thinking about entire continents. The UN is even pushing for environmental monitoring to underpin biodiversity markets. So we’re looking at the financialisation of nature as a governance infrastructure in the age of climate change.
It also feels deeply imperial. Most of the leading work is coming from the US and UK, reliant on big-tech infrastructure, AWS, Google, and so on. And the logic of machine listening itself is imperial: it flattens the world. In their systems, there’s no meaningful difference between one place and another, no space for indigenous or local knowledge—just minor variations in data. The whole premise is scalability and transferability: you can “drop” the system anywhere and expect it to work.
That’s a long way from the situated listening practices that people like Dylan Robinson talk about: practices rooted in centuries of care for a specific place. There’s a hubris in thinking you can capture all that in a scalable, transferable model.
Could there be a more plural, situated form of machine listening? Some indigenous scholars say yes, but only if the entire pipeline is transformed so that indigenous ways of knowing shape it from the ground up. To me, that’s basically saying that machine listening, as we currently understand it, would have to stop being itself. Maybe you could rebuild it into something else entirely, but it wouldn’t be the same thing. In some fundamental way, the premise is transferability, scalability, and the outsourcing of decision-making to a machine that is incapable of knowing locality in the way that people who have lived with the land for hundreds or thousands of years do.
JS: That’s pretty heavy for a vinyl record, isn’t it? But no, seriously, that was a great articulation of the bigger political context.
JP: I mean, that’s not literally in the record—but it’s also not absent. If you’re trying to understand our project, you can’t separate the artworks from the discursive work.
JS: Yeah, all the different parts of the project feed into each other. Making records gives us this freedom to make work that’s stranger, less resolved. Essays demand clarity and precision, and, in my case, take forever to get published.
JP: Honestly, having the artwork as a conversation framework is great.
Reef lament
Machine Listening is a platform for collaborative research and artistic experimentation, established in 2020 by artist-researchers Sean Dockray, James Parker, and Joel Stern.
Joel Stern is an artist, curator, and researcher living in Naarm/Melbourne whose work focuses on practices of sound and listening and how these shape our contemporary worlds.
James Parker is an Associate Professor at Melbourne Law School who works across legal scholarship, art criticism, curation, and production.
Nele Möller is an artist and PhD researcher at LUCA/KU Leuven. She focuses on acoustic ecologies, environmental histories, and intersubjective relations between humans and more-than-humans.