19.11.2025
10 min read

The Future of Creative Rights: Protecting Artists, Authors, and Publishers in an AI-Driven Entertainment Landscape

Artificial intelligence is rewiring how stories, music, and visuals are made. Its dramatic rise and pervasive impact on media and entertainment have caused disquiet among the creative class, given the unauthorized use of copyrighted materials to train AI models. As AI systems learn from millions of songs, images, videos, and scripts, often without the owners' consent, they blur the line between inspiration and imitation, threatening not only to erode creative ownership but to redefine the underlying economics of the media industry.

Article by

Doron Fagelson

Until just a few years ago, this issue was largely theoretical, but no longer. From music to film to publishing, generative AI models are already capable of producing content rivaling that of human creators, and in some cases, using the very work of those creators. The result is an industry caught between creative acceleration and accountability, and facing a critical question: how do we protect creative rights in an era when creative works are the source material for AI-generated content?

Ethical and Legal Issues in the Media Industry

Over the past year, the ethical and legal questions of training generative AI models on copyrighted work have moved firmly into the public limelight. Across the creative class, artists, publishers, music labels and film studios are challenging the “fair use” legal arguments of AI firms both in court and through public denouncements in efforts to protect their work from what they see as theft and exploitation.

Across the Media & Entertainment industry there have been several high-profile cases in this category:

Music Business

Anthropic vs. Music Publishers (2025)

Music publishers accused Anthropic (developer of the AI chatbot “Claude”) of training its model on copyrighted lyrics, claiming millions of songs were acquired without permission. Courts have allowed parts of the case to proceed, marking one of the first major tests of fair use in the age of generative AI.

Céline Dion’s voice used without permission (2025)

Céline Dion publicly condemned AI-generated songs that claimed to feature her voice, a stark example of how voice-cloning technology can exploit an artist's identity and creative essence without consent.

RIAA vs. Suno & Udio (2024)

Major record labels including Universal, Sony, and Warner sued AI-music startups Suno and Udio for allegedly training their generative AI systems on copyrighted recordings without permission. The models were reportedly capable of recreating songs that mirrored the sound, structure, and even the voices of famous artists, all allegedly drawn from unlicensed media.

Art Tech

Artists call out “mass theft” in AI-art auctions (2025)

Thousands of artists petitioned Christie’s to cancel an AI art auction, arguing that the works were generated using models trained on copyrighted images without consent. Their protest was centered around the core concern that AI training on copyrighted works can replicate artistic styles without attribution or compensation.

Film and Video

Disney & Universal vs. Midjourney (2025)

Disney and Universal filed lawsuits against Midjourney, alleging its generative AI model used copyrighted movie stills, character likenesses, and logos in its training data. They also claim that the model could reproduce some of the distinctive characters without permission.

Publishing and News Media

Penske vs. Google (2025)

Penske Media, owner of Variety and Rolling Stone, sued Google over the financial impact of AI-generated summaries that allegedly repurpose and diminish original reporting. The case reflects a growing concern for publishers whose traffic and revenue depend on the visibility of their original content.

Encyclopedia Britannica and Merriam-Webster vs. Perplexity (2025)

Both publishers accused the AI search startup Perplexity of copying their proprietary material for use in generative AI responses to human prompts. The complaint claims the company “free rides” on decades of editorial work while diverting user traffic.

News Corp vs. Perplexity (2024)

News Corp (publisher of The Wall Street Journal and New York Post) filed a similar lawsuit, calling the AI firm’s use of its content “massive freeriding.” Together, these cases frame the central legal AI question of our time: can AI systems lawfully use copyrighted content to train their generative AI models without explicit permission?

Strategies to Tackle the Unauthorized Use of Copyrighted Materials

As legal and ethical disputes unfold, the Media & Entertainment industry is beginning to define what accountability looks like in an AI-driven landscape. The solution will not come from a single ruling or regulation; it will instead emerge from a combination of legal clarity, monetization frameworks, platform governance, and technical innovation.

Legal Actions: Setting Precedents for the Decade Ahead

The courts are now the front line for determining how AI firms can lawfully use copyrighted materials to train models.

Anthropic’s landmark $1.5 billion settlement with authors and publishers marked a turning point: an explicit, public acknowledgment that large-scale data ingestion of copyrighted work without consent carries real financial and reputational risk. Similar class actions from music labels, visual artists, and publishers are following the same path. The intent is not to halt innovation, but to create a legal standard for what constitutes fair use, licensing, and implied consent in the era of generative AI.

Cases like RIAA vs. Suno & Udio and Disney & Universal vs. Midjourney will likely serve as reference points for future policy. Their outcomes could shape how "derivative learning" is treated under copyright law: whether AI training on copyrighted works falls within fair use, or whether it demands explicit licensing agreements, much as sampling does in music.

Monetization and Partnership Models: Turning Risk into Revenue

Many companies are shifting from confrontation to collaboration. Recognizing that AI training on copyrighted works cannot be fully eliminated, media leaders are exploring revenue-sharing and licensing models that compensate rights holders while enabling continued AI development.

In 2024 and 2025, a series of deals between publishers and AI companies began to set exactly this precedent:

  • Time, Der Spiegel, Fortune, Entrepreneur, and The Texas Tribune partnered with Perplexity in a revenue-sharing program tied to advertising within AI-generated answers. Publishers gain transparency through analytics and access to enterprise-grade AI tools.
  • Axios expanded its local newsroom network using funding from OpenAI, designed to enhance distribution and monetization of local journalism without automating human-driven news reporting.
  • Condé Nast and Axel Springer signed multi-year agreements with OpenAI and Microsoft, granting permission to display and reference their content within AI products in exchange for licensing fees and traffic attribution.

These partnerships reflect a new commercial logic: AI firms need quality creative content to train and improve their models, and rights holders want to be compensated for the use of their work and explore new revenue streams. When such deals are structured fairly and respectfully, both can benefit, shifting the narrative from exploitation to collaboration.

Platform Policies: Setting Guardrails for an Ethical AI Ecosystem

Beyond individual deals, media and data platforms are beginning to codify rules that address the ethical and legal challenges of building generative AI models for the Media and Entertainment world. Social networks, streaming services, and cloud providers are moving toward explicit consent frameworks that give creators more control over how their content is used.

The debate between opt-out and opt-in models sits at the heart of this shift. The UK's proposed opt-out policy, which would allow AI training unless rights holders actively object, sparked an outcry from artists and publishers who argue it reverses the burden of consent. A growing coalition instead supports opt-in models requiring explicit approval from rights holders before copyrighted material is used to train AI.

This approach also aligns with the core principles of copyright law and mirrors mechanisms like YouTube's Content ID, which emerged in response to the DMCA's notice-and-takedown regime. It also inspires new frameworks: Cloudflare's AI platform now operates on an opt-in model that lets publishers and marketers dictate how their data is crawled and reused. It's a small but meaningful step toward a more transparent and ethical AI ecosystem.
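In practice, many of these crawl-and-reuse signals are expressed at the crawler level. The `robots.txt` fragment below is a minimal illustration: the user-agent tokens shown (GPTBot, CCBot, PerplexityBot, Google-Extended) are publicly documented AI-crawler identifiers, while the overall layout is just one possible policy, not a prescription.

```txt
# Disallow known AI-training crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search crawlers remain unaffected
User-agent: *
Allow: /
```

A robots.txt file is advisory: it works only when a crawler chooses to honor it, which is precisely why enforceable platform-level opt-in frameworks like Cloudflare's go a step further.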

Bespoke Technical Solutions: Defending Rights with Technology

Technology is also becoming a line of defense for rights holders against unauthorized use of copyrighted work.

In 2025, Universal Music Group and Sony Music partnered with Stanford spin-out SoundPatrol to deploy "neural fingerprinting," a form of AI that can detect when generative AI models reproduce copyrighted melodies or styles. Unlike traditional audio-matching systems, it analyzes musical semantics, allowing it to spot derivative or synthetic copies that would otherwise elude detection.

Similar detection frameworks are emerging across art, publishing, and video, signaling a new phase in which copyright protection itself becomes algorithmic. For rights holders, these tools offer much-needed transparency into how their work circulates through AI model training and output pipelines, a prerequisite for both enforcement and monetization.
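The intuition behind fingerprint matching can be sketched in a few lines. The toy example below is not SoundPatrol's actual method (which relies on learned neural representations); it simply fingerprints audio by the dominant frequency bins of each short frame and scores the overlap between fingerprints, so a lightly perturbed copy scores higher against the original than unrelated audio does.

```python
import numpy as np

def spectral_fingerprint(signal, frame=256, top_k=3):
    """Toy fingerprint: the strongest frequency bins per frame.

    Real systems learn far richer representations; this only
    illustrates the match-by-overlap idea.
    """
    fp = set()
    for i, start in enumerate(range(0, len(signal) - frame, frame)):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        peaks = np.argsort(spectrum)[-top_k:]       # dominant bins
        fp.update((i, int(p)) for p in peaks)       # (frame, bin) pairs
    return fp

def similarity(fp_a, fp_b):
    """Jaccard overlap between two fingerprints, in [0, 1]."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# A pure 440 Hz tone, a noisy "derivative" of it, and unrelated noise.
t = np.linspace(0, 1, 8192, endpoint=False)
original = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
derivative = original + 0.05 * rng.standard_normal(t.size)
unrelated = rng.standard_normal(t.size)

fp = spectral_fingerprint(original)
# The derivative overlaps the original far more than unrelated audio.
assert similarity(fp, spectral_fingerprint(derivative)) > \
       similarity(fp, spectral_fingerprint(unrelated))
```

The same ranking logic, applied to learned embeddings instead of raw spectral peaks, is what lets modern detectors flag derivative or synthetic copies rather than only bit-exact matches.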

The Path Forward: Balancing Innovation with Integrity

The media and entertainment industry is standing at a familiar crossroads, one it has faced at every major technological shift, from analog to digital to streaming. But this time, the stakes are higher. Generative AI doesn’t just allow creative work to be distributed efficiently and conveniently; it learns from it, reshapes it, and competes with its very originators.

To protect the value of human creativity and intellectual property, the next phase of AI evolution must be built on three commitments:

  1. Transparency — every model should disclose what data it’s trained on and provide attribution to give creators visibility into when and how their work is being used.
  2. Consent — rights holders must have meaningful control over inclusion in training datasets, with clear mechanisms for opting in or out.
  3. Compensation — when creative work fuels model performance, its originators should be credited and share in the economic value they produce.

These are the foundations for the sustainable growth of an AI-powered Media and Entertainment industry. AI systems built and trained ethically and legally will ultimately outperform those built on contested ground, because they operate with trust, clarity, and collaboration with industry players at their core.

The Future of Creative Rights

As commitments to transparency, consent and compensation increasingly shape the evolution of the AI-driven entertainment landscape, a new infrastructure is likely to emerge incorporating content provenance standards, machine-readable licensing, and traceable AI datasets that embed attribution directly into media files. This shift will mirror how digital rights management and streaming royalties matured after the early chaos of online file sharing and distribution.
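One building block of such an infrastructure is easy to picture: a machine-readable record that binds a cryptographic hash of a media file to its creator, license, and training permissions. The sketch below is illustrative only; the field names are invented for this example, and real provenance standards such as C2PA's Content Credentials define far richer, cryptographically signed manifests.

```python
import hashlib
import json

def provenance_manifest(media_bytes: bytes, creator: str,
                        license_terms: str,
                        ai_training_permitted: bool) -> dict:
    """Build a minimal, machine-readable provenance record.

    The SHA-256 hash ties the record to the exact media bytes, so a
    downstream dataset audit can verify which file the terms apply to.
    """
    return {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "license": license_terms,
        "ai_training_permitted": ai_training_permitted,
    }

# A rights holder publishes the record alongside the media file.
record = provenance_manifest(b"\x89PNG...example image bytes...",
                             creator="Jane Artist",
                             license_terms="all-rights-reserved",
                             ai_training_permitted=False)
print(json.dumps(record, indent=2))
```

A dataset builder that honors such records can filter out any file whose manifest withholds training permission, turning the opt-in principle into a mechanical check rather than a legal afterthought.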

For technology providers, the challenge is to architect AI-powered solutions that recognize creative input as a protected private asset, not a public good. For creators and copyright holders, it’s to collaborate with tech companies to find ways to generate new revenue streams and to embrace the tools that can expand artistic expression without having to surrender ownership.

The answer to the problem of protecting creative rights in the AI era will be a shared ecosystem where artists, publishers, creators, and technology partners build solutions together under new terms of integrity and trust.

At DataArt, we work closely with media and entertainment companies to turn complex business and copyright challenges into robust, scalable, future-ready solutions. Our teams combine business domain expertise with technical excellence in data governance, cloud engineering, and AI strategy to design systems that respect ownership rights while unlocking new opportunities for personalization, data attribution, and monetization.

Looking to maximize the value of your creative work and copyright ownership with the help of generative AI systems and tools? Our Data & Analytics, AI, and Cloud experts design state-of-the-art technology platforms and solutions that enforce ownership rights, track provenance, govern access, and embed transparency into every layer of the creative content value chain.