AI bands like The Velvet Sundown represent the early tremors of a cultural earthquake that signals a future where machine-made content becomes so seductive and abundant that we lose the ability to distinguish between human creativity and algorithmic manipulation. A Skynet of the arts, if you will, or autocomplete for emotions. And that’s bad, right? Right?
Meet The Velvet Sundown, a band with over 1.3 million monthly Spotify listeners (and counting, as of 15th July 2025), three released albums, and a devoted fanbase. There’s just one problem: they are not real. The musicians, their origin stories, even their carefully crafted photos, are all AI-generated fiction. When the truth finally emerged, it sparked exactly the wrong kind of conversation. Instead of celebrating what AI-enhanced creativity could achieve, we received a masterclass in how not to do it.
Here’s what makes this so maddening: The Velvet Sundown didn’t mess up because they used AI. They messed up because they lied about it. As Roberto Neri, chief executive of the Ivors Academy, put it: “AI-generated bands like Velvet Sundown that are reaching big audiences without involving human creators raise serious concerns around transparency, authorship and consent.”
This wasn’t innovation, it was a cheap trick. Create some fictional personas, upload the songs, watch the streams roll in, then act surprised when people feel deceived. It’s sonic catfishing, and precisely the approach that gives AI-enhanced music a bad name straight out of the gate before it’s had a chance to make friends and influence people.
They had a genuine opportunity to show what happens when technology amplifies human creativity, and that was squandered for a few manufactured headlines.
A different path
We’ve taken a fundamentally different approach here at Meatbag, though I’m prepared for charges of hypocrisy. We are real people, for starters: The Meatbag project is me, George Hopkin, and Adam Krajczynski. Our musical project, Artichoke FM, might have Gio and Odam as frontmen, but we hope people will allow us that modest artistic conceit, even if it’s not exactly Ziggy Stardust. We’ve never hidden our involvement in any of these endeavours.
Adam and I have collaborated on several video productions, including “Hello Computer,” a series of interviews we conducted in 2023 when ChatGPT first gained attention, exploring the human side of the AI revolution during that pivotal moment in technological history.
Our backgrounds are in traditional media and storytelling. I’ve spent more than three decades translating complex technological concepts into compelling narratives, working across financial technology, AI, cybersecurity, and Industry 4.0 topics (check out my LinkedIn newsletter, The Freesheet, if you’d like to doomscroll AI headlines). Adam brings his own extensive experience in media productions; our collaboration has always been about exploring how technology can enhance human creativity, not replace it.
In early 2024, we were planning our next video projects. We decided to explore AI-enhanced music creation for the backing tracks, allowing us to use unique sounds that fit our nonsensical sensibilities, a lo-fi, dreamy vibe inspired by trip hop, psychedelia, funk, and, dare I say it, soul.
We went looking for some handy musical loops. What we discovered was an amazing new world of opportunities.
What excites us most about AI-enhanced music creation is its democratising potential. This is, in many ways, the new punk, a movement that allows heathens like Adam and me to make creative noises for entertainment, even without traditional musical training.
As writer Eric Hal Schwartz noted on TechRadar (the website where Eric’s colleague Graham Barlow broke the original story): “AI could be a boon to music as a whole. Imagine a rural teenager with few resources who can’t hire a band, a piano teacher, or a recording studio. With a phone and some imagination, they could use free AI music tools to experiment and share the music in their minds.”
We are all that rural teenager. With a safety pin through our collective noses. That can’t be hygienic.
This isn’t about replacing human musicians. It’s about expanding the creative possibilities available to everyone. Just as punk rock democratised music creation by showing that you didn’t need years of classical training to make meaningful sounds, AI-enhanced music tools are opening doors for people who have always had music in their heads but lacked the technical skills to get it out into the world.
The cultural singularity
Without revealing the exact process, most of the tracks we’ve produced to date begin with an original melody – a fragment we capture from the sonic dimensions we’re exploring – and then we construct each piece bar by bar until something rewarding emerges from the creative chaos.
More recently, we’re taking more time and effort, not less. We’re introducing layers of additional edits and revisions using more traditional audio production methods, which we apply with growing confidence and, we hope some people will agree, growing ability. Anyone who thinks they can create meaningful art with a single prompt is on the wrong track. It takes hours of switching between traditional tools, including old-school audio apps like Audacity (because punk) and image apps such as Photoshop, in addition to the wealth of new generative AI services. And meatbags like us need to want the whole thing to happen; AI sits on the sidelines until human agency is introduced. For now.
However, based on the work we’ve done and the research we’ve carried out, Adam and I can see that the cultural singularity is not just approaching. It’s here. The next few years will bring changes that make the current Velvet Sundown controversy look adorably naive by comparison.
We’re witnessing the early stages of a fundamental transformation in how creative content is conceived, produced, and consumed. This is the moment when the remixing of culture accelerates beyond the point of human prediction, driven by technology, the internet, and the fact that anyone with a smartphone can now drop a viral hit or spark a global movement before breakfast.
The phrase itself riffs off the “technological singularity,” a term popularised by sci-fi author Vernor Vinge in the 1980s and later by futurist Ray Kurzweil, to describe the point where machines outpace human intelligence and everything changes forever.
The cultural singularity borrows this logic but swaps out AI for the infinite scroll of human creativity: as tech lowers every barrier to entry, from art apps to meme generators, culture starts creating more culture, at speeds and scales that make yesterday’s pop stars and blockbuster movies seem quaint.
What does this look like? Memes mutating hourly, outpacing any attempt at explanation. Punk creators using free tools to make games, art, and music that reach millions without ever setting foot in a studio. Platforms like TikTok and SoundCloud turning niche obsessions into global phenomena overnight.
It’s not just audio, of course. A new wave of AI generators can create lifelike video complete with dialogue. These tools, with Google’s Veo 3 leading the pack at the time of writing, are producing viral doomscroll slop, satirical commentary, and deepfakes of disputed events, including riots and elections. We’ve already used this kind of thing ourselves.
The worrying result: an ever-accelerating feedback loop where culture begets culture, originality is both everywhere and impossible to pin down, and the “mainstream” is just another tribe with a good PR team. The old gatekeepers are out; the crowd is in. And as with the technological singularity, nobody really knows what comes next, except that it’ll probably be weird, wonderful, and impossible to predict.
The numbers are already staggering. French streaming platform Deezer reports that 18% of songs now being uploaded to its service are entirely AI-generated. That’s over 20,000 robot-made tracks per day, twice the number from just a few months ago.
But this is just the beginning. What we need now is not resistance to these changes, but thoughtful engagement with them.
Move quickly and make things
We’re entirely on board with governments moving quickly to update copyright, licensing, and other legal frameworks to accommodate these new realities. The current situation, where streaming platforms have no obligation to identify AI-generated content, serves no one well: not artists, not consumers, not the platforms themselves.
Clear labelling doesn’t diminish the value of AI-enhanced creativity; it empowers consumers to make informed choices while protecting the rights of human creators whose work may have been used in training data.
Beyond 2030, frankly, it’s anyone’s guess what the creative landscape will look like. The exponential pace of change means that our current frameworks for understanding art, authorship, and creativity may need complete reimagining. Government teams should be convened to think about this today, because la-la-la-I-can’t-hear-you won’t cut it.
I do hope they’re like us, eager to hear from anyone who has solid thoughts about what the world of 2030 and beyond will look like. How will we all define artistic value when the barriers to creation have pretty much disappeared? How will we maintain genuine human connection and proper meaning in a new era of overabundant synthetic content? And at the same time, what new forms of creative expression will emerge that we can’t even imagine today?
Our own AI exploration and incurable curiosity led to four distinct musical projects (so far; we have more in the pipeline):
Artichoke FM was our first experiment, created around 18 months ago to explore these possibilities with a backstory built from our collective pop cultural obsessions. Who doesn’t love mind control conspiracies and cats? The project channels otherworldly frequencies from The Backrooms into laid-back beats, creating what we describe as “signals from realities beyond our peripheral vision”. That sounds like genuine art-house bollocks to me. I don’t need AI to produce copious amounts of entertaining rhetorical nonsense.
Bloobeam emerged as our “cosmic prophecy” project, weaving darker downtempo beats and ambient soundscapes on the album “The Emperor’s New Skies.” It’s our attempt to create transmissions from the edge of tomorrow, exploring themes of transformation and cosmic consciousness. We also hope to cash in when Project Blue Beam kicks off, because it’s totally going to happen and we all know it, right? You know. I see you.
Tronotron is an old-school house experiment, with retro-futuristic soundscapes. The first album is called “A Singular Singularity,” because it will be. “Technology will eat itself, and Tronotron’s bleeps and bloops will be playing in the restaurant” is one of many lines that now leave me wondering: did I come up with that one, or was it an AI contribution?
The Meatbag Light Orchestra (mlo) emerged around a year ago, when we were experimenting with synthetic voices, primarily to create jingles and bumpers for our new radio station, Meatbag Radio. (That’s right, we have a radio station. Why not join the thousands of happy listeners worldwide who tune in every day at meatbag.fm and on Radio Garden?) mlo combines vintage synthesisers with psychedelic elements and lo-fi textures, but it’s the haunting vocals from spirit session singers that make this a spooky listen.
All of these projects are transparently AI-enhanced. We don’t pretend to be seasoned musicians in the traditional sense. But we have ears, we know what we like when we hear it, and we understand how to craft compelling audio experiences that resonate with listeners.
The path forward
The Velvet Sundown controversy has highlighted the tensions inherent in this period of transition, but it’s also shown us what not to do. The path forward isn’t through deception or by trying to replicate the past, but through honest exploration of new creative possibilities.
At Meatbag, our projects may use AI as a creative tool, but they’re guided by human vision, shaped by human sensibilities, and created with human audiences in mind. We’re not trying to fool the world into thinking we’re something we’re not. Even if we wanted to, we couldn’t. The cats wouldn’t let us.
We’re excited to showcase what’s possible when technology enhances rather than replaces the human spark of creativity. Bands will exist in new ways, embracing both the technological possibilities of our moment and the enduring human need for connection, meaning, and transcendence through sound.
Never mind the bollocks, though. Here’s the future.
Meatbag Radio
On Spotify
On Apple
George Hopkin is a technology writer and creative force behind the Meatbag multimedia project. Along with collaborator Adam Krajczynski, he explores the intersection of technology and storytelling through various creative ventures. For more information about Meatbag Radio and our various musical projects, visit meatbag.fm.