Theme #1: Licensing Content to AI Firms as a New Revenue Stream
AI firms have employed the “fair use” argument to justify training their AI models on copyrighted media content, whether news articles, films, music, artworks, or stock images, without the consent of creators. Increasingly, however, we’re seeing licensing deals come to fruition between media content owners and AI tech providers, whereby AI firms compensate creators and content owners for the use of their work in the datasets used to train AI models.
Meta, for example, recently signed its first AI licensing deal for news with Reuters, and both Perplexity and OpenAI have signed licensing deals in the last year with major news media and publishing players, among them the Financial Times, The Atlantic, The Wall Street Journal, and the Dotdash Meredith group. These deals translate into new revenue streams for media firms and copyright owners through the ongoing usage, training, development, and evolution of GenAI models, benefiting both content rights holders and AI tech firms. While the New York Times and others have chosen to sue AI firms for copyright infringement, the prospect of unlocking new revenues through licensing means similar deals between content owners and AI firms in other sectors of the M&E industry, such as music and the art market, could follow in 2025.
Theme #2: Advertising Will Benefit from Incorporating AI-Powered Tools and Solutions
In marketing and advertising, GenAI can be used to quickly develop creative approaches for different contexts and scenarios—and then to iterate and refine rapidly in response to consumer feedback and uptake. There is already evidence this year of palpable excitement about the potential efficiency, scaling and hyper-personalization benefits of GenAI in the advertising space.
For example, while examining the major trends at this year’s CES 2025, Max Lenderman of AdWeek noted how “artificial and augmented intelligence applied to marketing and advertising dominated the discussions on the stages and in the suites.” Lenderman argues that GenAI can emulate and automate much of the typical workflow of the marketing agency creative process: “AI-enabled iterative loops can discover insights, make creative, test the work, refine it, and deploy it with constant adaptability and scale”, serving as a powerful collaboration tool for marketers.
Theme #3: AI is Set to Enhance Human Creativity in TV & Film
Concerns that GenAI will eliminate creative jobs in the TV & film industry should be balanced with expectations that GenAI will redefine creative roles and spark new synergies between creative teams and technology. According to AlixPartners, 2025 will see a shortage of creatives with the expertise and skills required to use the new AI tools available. This gap will drive demand and create new opportunities for those with the skills and the desire to leverage AI tools to enhance the creative process and bring stories to life faster and at lower cost.
These opportunities will manifest across a range of core functions in the TV & film sector. AI-assisted scriptwriting, for example, involves the use of AI tools to help writers generate compelling story arcs and dialogue, suggest casting options, and automate or diversify scripts based on data analysis of genre trends, viewer preferences, and narrative success factors. AI technology can also be used in the pre-visualization stages of the film production process, such as generating 2D images, 3D models, mood boards, storyboards, and animatics that convey a project's vision, style, scope, and narrative arc. These are just a few of the ways that AI tools can add value in the TV & film industry: enhancing and accelerating the creative process, speeding up feedback rounds, and automating repetitive tasks.
Theme #4: Conversational Search for Content Will Take Center Stage
For over two decades, Google defined how we search for and discover things: typing keywords into a box, sifting through links, and finding information dispersed across the web. In 2025, we'll step into a new search chapter. Thanks to advances in GenAI embedded in tools like Google's AI Overviews, Microsoft's Bing, and OpenAI's ChatGPT, we're moving beyond keyword-based searches to conversational interfaces capable of delivering direct, nuanced answers in natural language.
Conversational search results go beyond the medium of pure text. With the advent of multi-modal machine learning models, AI tools like ChatGPT can deliver search results in many modalities: images, graphs, and even video. Companies with a specific domain focus, like Cyanite, which operates in the music discovery space, go even further, enabling users to interact with AI as naturally as they would with a person, describing the mood, vibe, or context to surface the perfect track. Conversational search will offer a much richer and more interactive experience than keyword search, with more flexibility in how we search for and discover content of any kind.
Very often, technological evolution disrupts existing business models and raises important concerns. Publishers for example fear traffic loss from conversational search due to "zero-click" searches as users consume AI-generated content without ever clicking through results. The AI models powering conversational interfaces can hallucinate or misrepresent facts, which threatens to undermine trust in this new search paradigm. The AI-driven future of search is undeniably exciting, but still in its early stages, so users must be alert to the dangers of inaccurate results delivered at speed, in a conversational format, couched in sophisticated and nuanced language.
Theme #5: Tackling the Issue of Deepfakes
Over the last couple of years, advances in GenAI have ushered in the era of synthetic media, thanks to platforms like Midjourney, Speechify, and deepfake tools like Reface. Widespread access to such tools and platforms creates its own set of challenges, such as diminishing trust in digital content, potential misuse by malicious actors, legal constraints, and fears that artistic expression will become more formulaic and less authentic.
The AI technology underpinning deepfakes continues to improve while its marginal cost is falling. In parallel, the need to detect and combat fake content escalates, laying the cost of maintaining a credible internet on the shoulders of consumers, creators, tech firms and advertisers.
One approach to the deepfake and synthetic media problem involves efforts to label generative content as the product of an AI tool. For example, some GenAI tools automatically add watermarks to the media they create, and major social media platforms are starting to take action to identify synthetic media.
A more advanced way to confirm an asset's authenticity, whether an image, video, or piece of audio content, is authentication embedded in the asset itself. For instance, SWEAR offers technology that hashes every byte generated while video or audio is being recorded and then records those hashes on a blockchain, enabling users to verify whether any part of the recording has been altered since its capture.
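The per-chunk hashing idea behind this kind of embedded authentication can be illustrated in a few lines. The sketch below is a minimal illustration, not SWEAR's actual implementation: the chunk size and the in-memory `ledger` list standing in for a blockchain anchor are assumptions made for demonstration only.

```python
import hashlib

CHUNK_SIZE = 4096  # bytes hashed per segment (illustrative value, not SWEAR's)

def fingerprint(media: bytes) -> list[str]:
    """Hash fixed-size chunks of a recording; the resulting list stands in
    for the hashes a real tool would anchor on a blockchain."""
    return [
        hashlib.sha256(media[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(media), CHUNK_SIZE)
    ]

def altered_chunks(media: bytes, ledger: list[str]) -> list[int]:
    """Compare a copy of the file against its anchored hashes and return
    the indices of chunks that no longer match."""
    current = fingerprint(media)
    return [i for i, (a, b) in enumerate(zip(ledger, current)) if a != b]

# At recording time: hash the original and "anchor" the hashes.
original = bytes(range(256)) * 64          # stand-in for recorded video bytes
ledger = fingerprint(original)

# At verification time: an untouched copy matches; tampering is localized.
tampered = bytearray(original)
tampered[5000] ^= 0xFF                     # flip one byte inside chunk 1
print(altered_chunks(original, ledger))        # untouched copy
print(altered_chunks(bytes(tampered), ledger)) # tampered copy
```

Because each chunk is hashed independently, verification not only detects that a file changed but localizes the edit to a specific segment of the recording, which is what makes this approach stronger than a single whole-file checksum.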
The most reliable and comprehensive solution, as I see it, requires collective action spanning technology, industry, government, and civil society. One of the best examples of such collective action that is gaining momentum and support from industry leaders is the Content Authenticity Initiative. It combines cryptographic methods and a trust list to provide a transparent, tamper-evident history of digital content. Working within a well-defined regulatory framework, the Content Authenticity Initiative can offer a feasible and viable solution to some of the most urgent challenges posed by synthetic media and deepfakes.
Theme #6: The Rise of Ethical AI in Music
Last year saw several examples of AI technology posing significant challenges for music labels and publishers, especially around AI-generated music and training AI models on copyrighted music. In August 2024, Universal Music Group, Sony Music Entertainment, and Warner Music Group sued Suno and Udio, two AI start-ups, for copyright infringement. The crux of the issue revolved around the legality of songs generated with the help of GenAI models, as Spotify's Co-President, CTO and CPO Gustav Söderström articulated in his comments on the case: "With these models and the way they were trained, will that be considered legal or not? For example, in the US, these companies are now being sued. So I think that question will be decided by legislation."
The key question, therefore, is whether the same copyright protections afforded to new music should apply to tracks created using AI. The question is further complicated by a lack of consensus on the extent to which music created using GenAI models differs from music created using other kinds of digital tools and algorithms. As of now, music generated using GenAI tools and uploaded to streaming platforms passes through detection systems. Spotify's system, for example, analyzes whether a song is derivative of existing work; if it is, and therefore counts as copyright infringement, the system takes the track down.
These concerns have led some companies to join forces to implement ethical AI practices in their AI products. Thus, for instance, Beatoven.ai and Musical AI teamed up to create an ethical, licensed AI music generator trained on a dataset of properly licensed songs, loops, samples, and sounds. Beatoven.ai and Musical AI committed to managing the rights and handling payments to copyright holders in similar fashion to music streaming services. We may see more moves like this from other AI technology companies operating in the music space in 2025.
Theme #7: The Need to Enforce Licensing Protections
Just as in the music industry, other M&E domains need solutions for licensing protection standards for AI-generated content. With no clear standards in place, these open questions complicate the use of AI across the creative landscape. A partial solution to the problem of copyright infringement by open-source LLMs, and the uncertain legal status of their outputs, may be the implementation of proprietary LLMs trained on a discrete set of copyrighted, in-house content. Some digital publishers and news media have already adopted this strategy.
In a broader sense, the current debate over the legal status of AI-generated content harks back to 2023 when the US Copyright Office stated that for a work to be afforded copyright protection in the US, it must have a human author and demonstrate considerable human involvement. This definition is quite vague, open to interpretation and difficult to enforce. As AI technology tools evolve and their adoption accelerates across the various sectors of the M&E industry in 2025, perhaps we'll see more clarity emerge in terms of licensing protection standards for AI-generated content.
Final Thoughts
The integration of Generative AI into the Media and Entertainment industry has ushered in both remarkable opportunities and complex challenges. From creating new revenue streams through licensing agreements to enhancing creative roles in TV and film, AI is driving value, efficiency, innovation, and personalization across the sector. At the same time, ethical concerns over the rise of deepfakes and copyright issues in music highlight the need for clear regulations, ethical AI practices and collaborative solutions to ensure fairness and authenticity. The ongoing dialogue between AI technology providers, creators, and regulators will define the M&E landscape in 2025 and beyond.