Alamy Stock Photo/Eric Carr

Could AI music be the industry’s next Napster moment?

By María L. Vázquez, Dean of Law School, Universidad de San Andrés, Argentina

October 2, 2025


“Have Everything. Own Nothing,” Napster once claimed. Today’s generative AI models seem to say, “Scrape Everything. Credit Nothing.” Still, fairer frameworks may yet emerge, says Professor María L. Vázquez. Working for Virgin Music in the 1990s, the Harvard-educated lawyer saw early file-sharing give rise to legitimate streaming platforms. For WIPO Magazine, Vázquez explores the copyright lessons we may learn from past disruptions.

In the 1990s, as a young in-house lawyer at Virgin Music in London, I had a front-row seat to the music industry at its peak. The offices at Kensal House buzzed as the label churned out recording and publishing contracts almost weekly. Virgin famously signed the Rolling Stones for USD 45 million in 1991, a testament to its confidence that it could recoup that sum from record sales. Yet the industry stood on the brink of unprecedented disruption.

Napster burst onto the scene in 1999, changing the way music was consumed. The peer-to-peer sharing platform allowed users to exchange digital music files directly. For the first time, anyone with an internet connection could access music instantly, effortlessly and at no cost, threatening the industry’s entire business model. Record and CD sales plummeted, while file-sharing services blossomed.

The Recording Industry Association of America (RIAA) initially responded to digital piracy with a legal strategy that included filing thousands of lawsuits against individual users. One of the most well-known cases was that of Jammie Thomas-Rasset, who was ordered to pay USD 222,000 for downloading and sharing 24 copyrighted songs on the file-sharing service Kazaa.

Yet the music industry was unable to prevent illegal downloads. Napster reached 80 million users before being shut down in 2001. Virtually every song ever recorded was now available online and, more importantly, consumers had become accustomed to this new way of accessing music.

Just as species must adapt to survive, so too must industries

Apple’s introduction of the iPod in 2001 – the same year Napster was shut down – followed by the iTunes Store in 2003, proved transformative. By offering licensed digital songs for USD 0.99, Apple demonstrated that consumers were willing to pay for music online, as long as it was affordable and delivered via a user-friendly platform.

Alamy Stock Photo/Chris Willson
First generation iPod, released in 2001.

This laid the groundwork for the next major shift: streaming. Platforms such as Spotify, introduced in 2008, gave users access to extensive music libraries via a subscription-based model, no ownership required.

This time, the industry did not fight the change. While many labels had initially clung to physical formats such as CDs, they later came to accept streaming. Today, streaming drives the majority of industry revenue and teaches a clear lesson in evolutionary theory: just as species must adapt to survive, so too must industries.

The coming of AI

Fast-forward to November 30, 2022. The release of OpenAI’s ChatGPT triggered the same industry panic that Napster had sparked some 20 years earlier. This time, though, the stakes were even higher.

Some early “creative AI” companies licensed data throughout the 2010s, and ethical AI companies still do. However, as many other commercial generative AI companies rushed to develop their systems, vast volumes of data – including many copyrighted works and works protected by related rights – were scraped with little concern for tracking the sources that went into training their models. In music, this means existing musical works and sound recordings, synthesized beats, lyrics, chord progressions and musical patterns have all been used.

Perhaps this was a digital gold rush – collect now, ask later. Yet the sheer scale of the data grab has made it almost impossible to trace or credit original creators, let alone compensate them. This has sparked a growing conflict between generative AI companies and content owners.

While Napster challenged the way music was distributed and sold, AI-generated compositions, tracks and deepfake performances are threatening the very foundations of music creation and authorship. In both cases, the creative community pushed back, raising concerns about the unauthorized use of their work and the erosion of intellectual property rights.

At the heart of these lawsuits lies a question: does AI training constitute fair use of copyrighted material?

As they had in the wake of Napster, the lawsuits came swiftly. The release of “Heart on My Sleeve” in April 2023, which featured unauthorized deepfakes of Drake and The Weeknd’s voices, was a wake-up call for the entire industry. Many complaints followed. The song was removed from platforms shortly after its release, but its impact continues to reverberate.

In April 2024, prominent musicians and artists, including Billie Eilish, Nicki Minaj and Pearl Jam, signed an open letter denouncing irresponsible AI training as a direct attack on human creativity. Then, in June 2024, the RIAA announced that Universal Music Group, Sony Music Entertainment and Warner Records had filed lawsuits against AI startups Suno and Udio, accusing them of using copyrighted content to train their models.

Alamy Stock Photo/ZUMA Press
Napster founder Shawn Fanning (center) during a Senate hearing on online entertainment in Washington, D.C., 2001.

At the heart of these lawsuits lies a fundamental question: does AI training constitute fair use of copyrighted material? Tech giants argue it does, comparing AI training to humans reading books. However, unlike the US and other common-law countries, most civil-law countries have a closed catalog of exceptions that only justify unconsented use in very limited instances. Still, the outcome of key US cases such as New York Times v. OpenAI, as well as those of music labels suing AI music companies, will have global repercussions and probably influence licensing and industry norms worldwide.

Yet, even as these legal battles unfold, the industry continues to explore a different path that echoes its eventual accommodation of streaming platforms. Rather than trying to halt the rise of AI, some artists and music professionals have been seeking ways to use it to their advantage.

Survival strategies in the AI age: litigate, license or legislate?

In April 2023, Grimes announced that she would split 50% of the royalties with creators of “any successful AI-generated song” that uses her voice. The Financial Times reported in June 2024 that the likes of Sony, Warner and Universal were in talks with Google-owned YouTube to license their catalogs for training purposes, potentially in exchange for substantial lump-sum payments. More recently, in June 2025, Bloomberg reported that some labels are in talks to settle with Suno and Udio, much to the disappointment of companies that have always licensed training data and continue to do so.

Napster’s unauthorized peer-to-peer sharing paved the way for legitimate platforms. Today’s unregulated use of copyrighted material in generative AI, however, has yet to show what kind of authorized frameworks may break through on a major scale to ensure that AI training respects creators through attribution and compensation.

“Have Everything. Own Nothing,” Napster once claimed. Today’s generative AI models seem to say, “Scrape Everything. Credit Nothing.” The difference lies in scale and traceability. Where Napster kept individual songs distinguishable and accessible, and Spotify offers discoverability, AI training renders them invisible.

This issue of invisibility – or more precisely, discoverability – matters. Despite tens of thousands of new tracks being uploaded daily to platforms like Spotify, these services still offer discoverability, helping artists build an audience. As generative AI drives music creation to an unprecedented scale, artists’ individuality risks being lost in the training process.

If AI systems aim to establish genuine partnerships with creators, they should leverage technology that enhances discoverability for human artists to remain visible and competitive. Artists may be more inclined to opt in to AI training datasets when their contributions are attributed and recognized.

CMOs could play a pivotal role in negotiations with generative AI companies on behalf of their members

As well as expecting AI firms to ensure attribution, creators who negotiate such voluntary licenses also expect to retain some control over their works and receive fair compensation for them. In an ideal world, these licenses would respect creators’ rights and foster creativity, while providing AI developers with access to content without legal uncertainty. However, given the vast scale of data required to train AI models and the lack of standardized frameworks and collaborative mechanisms, securing voluntary licenses for each and every work used in data scraping seems practically impossible.

Therefore, collective management organizations (CMOs) could play a pivotal role in negotiations with generative AI companies on behalf of their members. Blockchain technology, already employed by some CMOs to enhance data accuracy for members, has also been praised for its potential to monitor training data, streamline licensing and support fair compensation.

Voluntary licensing continues to advance but, if we hope to avoid being entirely dependent on a slow and complex process, some scholars suggest that a statutory license for machine learning could be another option. A statutory license could set a standard for accessing protected works, thereby potentially reducing transaction costs, providing legal clarity and ensuring fair compensation. However, rightsholders and creators have voiced opposition, and any “catch-all” solution would need to be carefully balanced to encourage AI innovation while protecting the vital role of human authors.

In any case, we should learn from the lessons of the past. For the music industry, the challenge is to avoid resisting innovation while shaping it in ways that respect creativity, reward talent and build trust between artists and technology.

And for the stakeholders behind today’s AI systems, perhaps they could use their technological savvy to solve the very conundrum they’ve created, developing tools that help artists understand, manage and license their work for AI training in ways that are transparent, equitable and empowering. Just as the disruption of Napster eventually gave rise to models such as iTunes and Spotify, long-term success will depend on forging thoughtful responses that honor creators’ rights. To echo Otis Redding, all artists are asking “is for a little respect.”

About the author

Professor María L. Vázquez serves as the Dean of the Law School at the University of San Andrés (UdeSA) in Buenos Aires, Argentina. She is also the Director of the UdeSA-WIPO Joint Master Program in IP & Innovation and the Director of the UdeSA Regional Center in IP & Innovation (CPINN). She studied at Harvard Law School and worked for Virgin Music in London and EMI Records in New York before becoming a partner at Marval O’Farrell & Mairal in Buenos Aires.

Disclaimer: WIPO Magazine is intended to help broaden public understanding of intellectual property and of WIPO’s work and is not an official document of WIPO. Views expressed here are those of the author, and do not reflect the views of WIPO or its Member States.