Spotify Cracks Down on AI “Slop” Impersonating Real Artists

by Romario Parra | 1 week ago | 9 min read

In a significant escalation of its fight against AI-generated “slop” and impersonation, Spotify has begun testing a new “Artist Profile Protection” tool designed to stop low‑quality or fraudulent AI tracks from being automatically attached to real artists’ profiles. The beta feature gives artists something they have been demanding for years: the power to review and approve releases before they appear under their name on the world’s largest music streaming platform.

A direct response to AI ‘slop’ and impersonation

The test arrives as AI-generated music floods streaming services, making it easier than ever for bad actors to upload convincing deepfakes and low‑effort tracks that mimic well‑known artists. “Music has been landing on the wrong artist pages across streaming services, and the rise of easy-to-produce AI tracks has made the problem worse,” Spotify acknowledged in a blog post announcing the beta.

“That’s not the experience we want artists to have on Spotify, and that’s why we’ve made protecting artist identity a top priority for 2026,” the company added. “Today, we’re announcing a first-of-its-kind solution to a problem that’s affected streaming for years.” Industry observers say the move is overdue after a series of high‑profile incidents where AI clones or mislabeled tracks landed in users’ Release Radar and Discover Weekly playlists, undermining trust in the platform’s recommendations.

How ‘Artist Profile Protection’ works

Under the new system, artists included in the beta can toggle on “Artist Profile Protection” from their Spotify for Artists settings on desktop and mobile web. Once enabled, Spotify will email them whenever new music is delivered to the platform with their name attached, regardless of who uploaded it.

At that point, the artist or their team can review each incoming release and either approve it or decline it before it goes live under their profile. Only releases that they approve “will appear on their artist profile, contribute to their stats, and show up in users’ recommendations,” Spotify explained. This effectively creates a manual checkpoint before tracks can be associated with an established name, closing a loophole that previously allowed mislabeled or malicious uploads to slip through via distributors and open upload systems.
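The checkpoint described above, where a delivered track stays in limbo until the artist approves or declines it, behaves like a small state machine. The sketch below is purely illustrative: the class and field names are hypothetical and do not reflect Spotify's actual implementation, only the workflow as reported.

```python
from dataclasses import dataclass, field
from enum import Enum

class ReleaseStatus(Enum):
    PENDING = "pending"    # delivered, awaiting artist review
    APPROVED = "approved"  # appears on profile, counts toward stats
    DECLINED = "declined"  # never attached to the artist's profile

@dataclass
class Release:
    title: str
    uploader: str
    status: ReleaseStatus = ReleaseStatus.PENDING

@dataclass
class ProtectedArtistProfile:
    """Hypothetical model of the 'Artist Profile Protection' checkpoint."""
    name: str
    protection_enabled: bool = True
    releases: list = field(default_factory=list)

    def deliver(self, release: Release) -> str:
        """A distributor delivers a track with this artist's name attached."""
        self.releases.append(release)
        if self.protection_enabled:
            # Per the announcement, the artist is emailed and the track
            # stays pending until someone on their team reviews it.
            return f"Review email sent to {self.name} for '{release.title}'"
        # Legacy behavior: the upload attaches to the profile automatically.
        release.status = ReleaseStatus.APPROVED
        return f"'{release.title}' attached automatically"

    def review(self, release: Release, approve: bool) -> None:
        """The artist or their team approves or declines a pending release."""
        release.status = ReleaseStatus.APPROVED if approve else ReleaseStatus.DECLINED

    def visible_catalog(self) -> list:
        """Only approved releases show on the profile and in recommendations."""
        return [r for r in self.releases if r.status is ReleaseStatus.APPROVED]
```

The key design point the article describes is that declined tracks never reach the visible catalog at all, so they cannot pollute stats, Release Radar, or search results.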

Spotify says the feature is particularly aimed at artists who have experienced repeated incorrect releases, share a common or generic name with others, or simply want more control over what appears under their profile. “We know how frustrating this can be for both artists and fans alike and one of the top requests we’ve heard from artists over the past year is that you want more visibility before music appears under your name,” the company wrote.

A growing problem: AI ‘slop’ on streaming

The beta tool lands against a tense backdrop: labels, artists and listeners have been warning for months that "AI slop", a term used to describe low-quality, mass-generated content, is degrading the listening experience and eroding trust in streaming platforms. In September 2025, Spotify disclosed that it had already removed more than 75 million "spammy" tracks and committed to a new spam-filtering system targeting mass uploads, duplicates, SEO gaming and ultra-short tracks designed purely to farm streams.

Sony Music recently revealed that it has requested the removal of more than 135,000 AI‑generated songs impersonating its artists across streaming services, underscoring the scale of the issue. “This change is about strengthening trust across the platform, it’s not about punishing artists who use AI responsibly,” Spotify said previously as it rolled out broader AI protections last year.

Music writer and cultural critic Ted Gioia argued that the new review mechanism was not optional but essential. "Spotify finally responds to the rapid spread of impersonation tracks on the platform," he wrote in a recent post. "It is testing a tool that would let artists review any new music attributed to them before it gets released. This is absolutely necessary. Fake tracks are everywhere on Spotify now."

Building on earlier AI and spam policies

Artist Profile Protection is the latest piece in a wider framework Spotify has been building to manage AI in music rather than ban it outright. In 2025, the company introduced stronger impersonation rules, clarifying that unauthorized AI voice clones, deepfakes and other vocal replicas intended to impersonate an artist are not allowed and will be removed.

Spotify also announced a dedicated music spam filter and a disclosure system to label AI‑involved tracks using an industry standard for credits. In a press release at the time, the company stated that “proactively safeguarding against the most detrimental aspects of generative AI is crucial to unlocking its potential for artists and producers.”

“This standard provides artists and rights holders with a means to clearly express the role AI played in the creation of a track whether it involves AI-generated vocals, instrumentation, or post-production work,” Spotify said, stressing that the goal was “to foster trust across the platform, not to penalize artists who responsibly incorporate AI or to lower the ranking of tracks that disclose their production methods.”

How Spotify’s move compares to rivals

Other major streaming platforms are also racing to define their approach to AI in music, but they are taking different routes. Apple Music, for example, has announced “Transparency Tags,” a system that relies on labels and distributors to disclose when AI has been used in a track, pushing the responsibility upstream. YouTube Music has faced vocal backlash from users and musicians over the influx of AI‑generated songs, with some creators accusing it of not moving fast enough to address the problem.

By contrast, Spotify’s new tool hands final approval power directly to the artists whose names are at risk of being misused. As TechRadar noted, “this marks the first time where the company gives the artist an active role in preventing AI fraud as well as avoiding common mix-ups in the release process.” While the system does not remove AI-generated music from the platform entirely, it “tightens the screws on the approval process and is a step towards preventing the increase of AI fraud.”

Spotify itself has framed its strategy as a balance between protection and openness. The company has repeatedly said it does not intend to ban AI-generated music wholesale, emphasizing instead that the aim is to shield “genuine artists from spam, impersonation, and deception” while still allowing them to use AI tools creatively if they choose.

Artists push for stronger safeguards

The pressure to act has come not just from labels and commentators, but from artists who have found themselves unexpectedly cloned or misrepresented on the platform. In one notable case last year, Australian band King Gizzard & The Lizard Wizard discovered AI-generated music impersonating them on Spotify, prompting frontman Stu Mackenzie to lash out at the company.

A Spotify spokesperson said the offending content “was removed for violating our platform policies, and no royalties were paid out for any streams generated,” but acknowledged that such incidents highlight how hard it is to keep AI slop at bay when enforcement is mostly reactive. Critics argue that this kind of cat‑and‑mouse approach is unsustainable as generative tools make it trivial to spin up new fake tracks at scale.

For many artists, the ability to block suspicious releases before they ever touch their profile is a meaningful shift. It not only protects their catalog and stats but also prevents fans from being misled by false associations in algorithmic playlists and search results. As Spotify itself warned in its announcement, when tracks land on the wrong page “it can impact your catalog, your stats, your Release Radar, and how fans discover your music.”

Not a silver bullet, but a signal

Industry analysts caution that Artist Profile Protection, while important, is unlikely to be a complete solution to AI misuse in music. The feature currently targets a specific attack vector, misattribution and impersonation via artist profiles, but does not address all forms of low-quality AI "slop" that may still be uploaded under new or obscure names.

Digital rights advocates point out that the system also relies on artists or their teams having the time and capacity to review incoming releases, which could be challenging for those with large catalogs or highly common names that frequently attract false attributions. Others worry that, rolled out too conservatively, the tool may fail to catch a significant portion of problematic content.

Still, the beta marks a clear signal about where the industry is heading: towards giving artists more direct control over how their identity is used in the age of generative AI. “These updates represent the latest in a series of adjustments we are making to foster a more reliable music ecosystem for artists, rights holders, and listeners,” Spotify has said of its broader AI policies. “We will continue to roll out enhancements as technology progresses, so stay tuned.”

For now, Artist Profile Protection is available only to a subset of artists in beta, but Spotify says it is continuing to test and refine the system with leading distributors before considering a wider rollout. If it works as intended, the tool could become a new norm across the industry, forcing rival platforms to offer similar safeguards as AI-generated music and AI slop continue to reshape the soundscape of streaming.