In January 2024, folk artist Murphy Campbell made a startling discovery: AI-generated versions of her performances of traditional ballads had been uploaded to Spotify without her knowledge or consent. The imposters, which included her rendition of the 19th-century folk song 'Four Marys,' featured vocals that mimicked her distinctive style but were not her actual recordings. Campbell’s ordeal escalated when she became the target of a separate copyright trolling scheme, where falsified YouTube videos were weaponized to claim ownership of her public domain compositions—including the classic 'In the Pines,' famously recorded by Lead Belly and Nirvana. These incidents, unfolding within days of each other, have thrust Campbell into the center of a growing crisis at the intersection of artificial intelligence, music distribution, and copyright law, exposing glaring vulnerabilities in systems millions of artists rely on.
- Folk artist Murphy Campbell discovered unauthorized AI-generated imitations of her recordings of public domain ballads on Spotify in January 2024.
- A separate incident involved false copyright claims on YouTube using fabricated videos to siphon revenue from Campbell’s videos.
- The disputes center on songs like 'In the Pines' and 'Four Marys,' both long-established in the public domain.
- The incidents highlight systemic failures in AI detection, content moderation, and copyright enforcement across major platforms.
How AI-Generated Music Is Exploiting Artists on Streaming Platforms
Campbell’s troubles began when she noticed unauthorized AI covers of her YouTube recordings appearing on Spotify. These tracks, created using voice-cloning technology, replicated her vocal style so convincingly that two AI detection tools flagged them as likely AI-generated. ‘I was kind of under the impression that we had a little bit more checks in place before someone could just do that,’ Campbell told The Verge in an interview. ‘But, you know, a lesson learned there.’ The ease with which these deepfakes were produced and distributed underscores a troubling trend: generative AI tools, now widely accessible, are being deployed to create convincing impersonations of artists, often without their consent.
The Spotify Loophole: Why AI Covers Slip Through Detection Systems
Spotify currently relies on a combination of automated scanning and user reporting to identify fraudulent content. However, the platform’s detection systems are not foolproof, particularly when AI-generated vocals closely resemble an artist’s real recordings. Campbell’s experience reveals a critical gap: while Spotify has begun testing a manual approval system for artists to vet songs before they appear on their profiles, Campbell remains skeptical. ‘I feel like every time an entity that large makes a promise like that to musicians, it seems to just not be what they made it out to be,’ she said. ‘But I’ll be curious to try it out in the future.’ The system, while a step in the right direction, arrives years too late for Campbell, who spent weeks navigating removals and confusion over which tracks were hers.
Public Domain Songs: The Unintended Victims of AI Exploitation
The songs at the heart of Campbell’s disputes—'Four Marys' and 'In the Pines'—are traditional folk ballads, long considered part of the public domain. This means their melodies and lyrics are free for anyone to use, but Campbell’s recordings of these songs are her intellectual property. The AI-generated covers exploited this distinction: by cloning her voice to sing the public domain material, perpetrators created new, derivative works that could be monetized without her involvement. ‘The songs at the center of these claims are all in the public domain,’ Campbell noted, ‘including the classic *In the Pines*, which dates back to at least the 1870s.’ The irony is stark: while the underlying compositions are free to use, Campbell’s unique interpretations—and the revenue they generate—were being siphoned off by third parties.
YouTube’s Copyright Troll Nightmare: Revenue Claims on Public Domain Music
Just as Campbell was grappling with the AI deepfakes, she faced another assault: false copyright claims on YouTube. On the same day a *Rolling Stone* article about her AI ordeal was published, a series of videos was uploaded to YouTube via the distributor Vydia. These videos, which Campbell has never seen, were used to stake claims on her existing content, redirecting ad revenue to an unknown entity named Murphy Rider. The notice Campbell received read: ‘You are now sharing revenues with the copyright owners of the music detected in your video, Darling Corey.’ The absurdity of the situation was not lost on her. ‘The songs in question are in the public domain,’ Campbell said. ‘How can you claim copyright over something that’s been freely used for 150 years?’
Inside Vydia’s Content ID System: Accuracy and Backlash
Vydia, the distributor behind the disputed uploads, has since released the claims and banned the user associated with the videos. According to spokesperson Roy LaManna, the company filed over 6 million Content ID claims in 2023, with only 0.02% deemed invalid—a rate LaManna called ‘amazing by industry standards.’ ‘We pride ourselves on doing this the right way,’ he stated. However, Campbell and other artists argue that the system’s flaws extend beyond a 0.02% error rate. The backlash against Vydia has been severe, with LaManna revealing that the company has received ‘literal death threats,’ prompting office evacuations. Campbell, though critical of Vydia’s role, places equal blame on the broader ecosystem of generative AI and copyright enforcement. ‘It’s not solely to blame,’ she said. ‘The worlds of generative AI, music distribution, and copyright are complex with multiple points of failure and opportunities for abuse.’
The Role of Distributors and Platforms in Policing AI Abuse
Distributors like Vydia act as intermediaries between artists and streaming platforms, handling metadata, claims, and monetization. While they are not legally responsible for every claim filed, their systems are often the first line of defense against fraud. Yet, as Campbell’s case demonstrates, these systems can be gamed. YouTube declined to comment for this story, leaving questions unanswered about how the falsified videos evaded detection long enough to trigger revenue claims. Spotify’s manual approval system, though in testing, may help prevent future AI deepfakes—but it does nothing to address the existing backlog of fraudulent content. For Campbell, the dual failures of detection and enforcement highlight a systemic issue: the digital music industry was not built to handle the scale and sophistication of AI-driven exploitation.
Why Public Domain Music Is a Hotbed for AI Exploitation
Public domain songs are particularly vulnerable to AI abuse because their underlying compositions are free to use, but their recordings are protected. This creates a legal gray area: while Campbell cannot claim copyright over 'In the Pines' itself, her specific recording of it is her intellectual property. AI tools can clone her voice to sing the song, creating a new work that appears to be hers but is not. ‘I think it goes way deeper than we think it does,’ Campbell said, pointing to the broader implications for artists who rely on traditional folk or classical arrangements. The rise of AI voice-cloning technology has turned centuries-old melodies into potential tools for copyright trolling, revenue theft, and reputational harm.
The Human Cost: How AI and Copyright Abuse Affect Real Artists
Beyond the financial losses, Campbell’s ordeal has taken an emotional toll. The invasion of her artistic identity—her voice, her style, her legacy—has left her feeling powerless. ‘Obviously, I was thrilled by that,’ she said sarcastically, referring to the proliferation of fake Murphy Campbells on streaming platforms. The psychological impact of seeing AI-generated versions of her work circulating online is compounded by the bureaucratic nightmare of reclaiming her content. It took some time before Campbell managed to get the fake songs removed. ‘I became a pest,’ she recounted. For independent artists without the resources of major labels, navigating these systems can feel like an impossible battle.
Could New Regulations or Technologies Solve This Crisis?
The incidents involving Campbell reflect broader challenges in regulating AI and copyright in the digital age. In the U.S., the Copyright Office has begun examining the implications of AI-generated works, while the EU’s AI Act seeks to impose transparency requirements on generative AI systems. However, these efforts are still in their infancy, and enforcement remains inconsistent. On the technological front, companies like Spotify are exploring watermarking and digital fingerprinting to detect AI-generated content, but these tools are not yet universally adopted. Campbell remains unconvinced that current solutions will fully protect artists. ‘Every time a large entity makes a promise to musicians, it falls short,’ she said. ‘I’ll be curious to try it out, but I’m not holding my breath.’
What’s Next for Murphy Campbell and the Music Industry?
Campbell’s legal options remain limited. While she can file takedown requests and dispute false claims, the process is time-consuming and often requires repeated interventions. The music industry, meanwhile, is grappling with how to adapt. Streaming platforms are under pressure to improve detection systems, while distributors like Vydia face scrutiny over their Content ID practices. Campbell’s ordeal may serve as a case study for policymakers, platforms, and artists alike, highlighting the urgent need for systemic changes. ‘I think we’re at a crossroads,’ she said. ‘Either we figure this out now, or we’re going to see a lot more artists getting hurt.’
Frequently Asked Questions
- What are AI-generated music deepfakes, and how do they work?
- AI-generated music deepfakes use voice-cloning technology to replicate an artist’s vocal style. Tools like ElevenLabs or Descript can analyze a singer’s recordings and generate new songs that sound like them. These deepfakes are then uploaded to streaming platforms without the artist’s consent, often performing public domain songs or original compositions in the cloned voice.
- How does YouTube’s Content ID system allow copyright trolling with public domain music?
- YouTube’s Content ID system automatically scans uploaded videos for copyrighted material. However, it can be exploited by uploading falsified videos of public domain songs to stake claims on unrelated content. Since the underlying compositions are not copyrighted, the system may not flag the abuse, allowing trolls to redirect ad revenue.
- What can artists do to protect themselves from AI exploitation and false copyright claims?
- Artists can monitor their streaming profiles regularly, use watermarking tools on their recordings, and advocate for stronger platform verification systems. Some are also exploring legal action against repeat offenders, though enforcement remains inconsistent. Engaging with industry groups to push for regulatory changes is another strategy.