December 20, 2024

The New Frontier of Human-AI Collaboration in the News Media Industry

"What kind of writer is ChatGPT?" This question served as the title of an arresting piece in the New Yorker penned by the writer and computer scientist Cal Newton a couple of months ago. The question is a pertinent one for the current moment, not least because ChatGPT has become one of the most popular technology tools on the planet, but also due to the fact that the underlying technology powering ChatGPT, Generative AI, has already permeated a great many fields and industries, including the news media business. Understanding the implications of the still relatively recent but growing adoption of ChatGPT and other Gen AI tools like it, including both the benefits they offer and the risks they pose, is critical to the future of the news industry and the field of journalism.

Newport notes in the piece that for those he interviewed, "ChatGPT was not so much a perfect plagiarism tool as a sounding board. The chatbot couldn’t produce large sections of usable text, but it could explore ideas, sharpen existing prose, or provide rough text for the student to polish. It allowed writers to play with their own words and ideas." In other words, the tool was being used in a highly collaborative fashion, with humans guiding and supervising the process and the machine output. The leading computer scientist and thinker Jaron Lanier has described Generative AI in similar terms: as a new form of social collaboration in which AI tools help us generate what amounts to a giant mash-up of human expression.

New Styles of Collaboration

The notion that Generative AI tools offer a new form of social collaboration and human-machine interaction could mark the beginning of significant changes, and potential benefits, for the news media business, and many companies in the industry are already working through them. The Financial Times tested an AI chatbot trained on decades of its own articles; the Telegraph developed an internal tool, Pulse AI, for newsroom workflow aids, consumer-facing AI services, and internal data discovery; and Reuters signed a deal with Meta to bring news-related answers to Meta's AI chatbot, with citations linking to Reuters content. These are just a few examples of how news and digital media organizations incorporated AI as a collaboration tool for their reporters, editors, and consumers in 2024.
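
In practice, a chatbot "trained on" an outlet's archive is usually built with retrieval-augmented generation (RAG): the most relevant articles are retrieved first, and only those are supplied to the model as grounding context, rather than retraining the model itself. The sketch below illustrates that pattern under stated assumptions; the toy archive, scoring function, and prompt format are hypothetical, not details of the FT's or any other outlet's actual system.

```python
# Minimal retrieval-augmented generation (RAG) sketch: answer questions
# against a news archive by first retrieving the most relevant articles,
# then grounding the language model's prompt in only those articles.
# The toy archive, scoring, and prompt format are illustrative
# assumptions, not any outlet's actual implementation.

from collections import Counter
import math

ARCHIVE = [  # stand-in for decades of an outlet's own articles
    {"id": 1, "text": "The central bank raised interest rates by 50 basis points."},
    {"id": 2, "text": "Quarterly earnings at the retailer beat analyst expectations."},
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; production systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list:
    """Return the k archive articles most similar to the question."""
    q = embed(question)
    return sorted(ARCHIVE, key=lambda art: cosine(q, embed(art["text"])), reverse=True)[:k]

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved articles and require citations."""
    context = "\n".join(f"[{art['id']}] {art['text']}" for art in retrieve(question))
    return (
        "Answer using ONLY the articles below, citing article ids.\n"
        f"{context}\nQuestion: {question}"
    )

print(build_prompt("What did the central bank do to interest rates?"))
```

Grounding answers in retrieved, citable articles is also one way an outlet keeps the chatbot's claims traceable to its own journalism rather than to the open web.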

While some AI use cases in this space look promising, it’s still early days, and media executives need to focus on which ones make business sense from an ROI standpoint. The uncertainty has led some, such as the Telegraph's director of technology Dylan Jacques, to take an experimental approach to Generative AI. Jacques advocates being “quite agnostic...across the technology and trying different things all the time. Hitching your flag to one vendor or another is probably a mistake in this area.” Given the pace of change in generative AI technology today, this position is both practical and wise counsel.

Hard Questions, Inherent Risks and Upsides

As news media organizations experiment further with generative AI tools and explore possible use cases, important questions have surfaced: whether some AI-powered solutions are simply too expensive, whether and how AI tools can be integrated into existing business workflows, and to what extent the content produced in collaboration with these tools can be trusted.

The importance of the human element in this new form of human-AI collaboration, and of human direction and control over how AI tools are used, cannot be emphasized enough. The infamous AI chatbot “hallucination” problem and the prevalence of factual inaccuracies are particularly noxious for news media organizations, whose success depends on maintaining their credibility and high standards of journalism and reporting.

OpenAI's GPT-4, for example, has a reported hallucination rate of 1.5%, while several of the most popular AI chatbots, including Meta AI, xAI's Grok, Microsoft's Copilot, Google's Gemini, and Perplexity, have all at times failed fact-checking tests. According to the AI misinformation tracker run by NewsGuard, AI chatbots failed to provide "accurate information" nearly 57% of the time after the assassination attempt on Donald Trump during the US presidential race, "either because the AI models repeated a falsehood (11.11%) or declined to provide any information on the topic (45.56%)". Strict human oversight and guidance in the use of these AI tools is therefore critical if media outlets are to filter out the misinformation and hallucinations that AI-powered bots continue to produce.
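
One practical shape such oversight can take is a hard gate in the publishing pipeline: AI-drafted copy is quarantined until a named editor has fact-checked and signed off on it. The sketch below is a minimal, hypothetical illustration of that idea; the Draft structure and function names are invented for this example and do not describe any newsroom's actual system.

```python
# Hypothetical human-in-the-loop gate: AI-drafted copy is quarantined
# until a named editor has fact-checked it, and each review is recorded
# for audit. An illustrative sketch, not any newsroom's actual pipeline.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Draft:
    text: str
    ai_generated: bool
    approved_by: Optional[str] = None
    review_log: list = field(default_factory=list)

def submit_review(draft: Draft, editor: str, factcheck_passed: bool, note: str = "") -> None:
    """Record a human fact-check; only a passing review approves the draft."""
    draft.review_log.append((editor, factcheck_passed, note))
    if factcheck_passed:
        draft.approved_by = editor

def publishable(draft: Draft) -> bool:
    # Human-written copy follows the normal desk process; AI-drafted copy
    # is blocked outright until an editor has signed off on it.
    return (not draft.ai_generated) or (draft.approved_by is not None)

draft = Draft(text="AI-assisted summary of the court filing...", ai_generated=True)
assert not publishable(draft)  # blocked before any human review
submit_review(draft, editor="j.doe", factcheck_passed=True, note="quotes verified")
assert publishable(draft)      # cleared only after editor sign-off
```

The key design choice is that the gate fails closed: an AI-assisted draft that nobody has reviewed simply cannot be published.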

In fact, the problem of misinformation in AI chatbot content, when used uncritically by news professionals and businesses, is so pervasive that Harvard Business Review (HBR) coined a specific term for it: "botshit." Given the obvious risks of overreliance on such content, it is remarkable that some media outlets are pivoting to have almost all of their content generated by AI. Adopting a "botshit"-fueled content strategy can lead to highly commoditized, meaningless content that, while far cheaper to publish at scale, is likely to erode an organization's professional standards and jeopardize its credibility. This is what happened in the Cody Enterprise case, where a reporter used Generative AI to help write a news story that included fake quotes from Wyoming's governor and a local prosecutor.

Another journalistic concern that highlights why the use of Gen AI tools in the newsroom should be collaborative, with humans firmly in control, is plagiarism. AI chatbots can lift fresh, high-quality news content and regurgitate it as their own. In June 2024, for example, Perplexity reportedly scraped the work of Forbes journalists, "not just summarizing... but with eerily similar wording, some entirely lifted fragments — and even an illustration...", and it later faced allegations of content replication from other media outlets, including News Corp's Dow Jones and the New York Post. Nor is Perplexity the only Generative AI tool facing plagiarism challenges: the New York Times' lawsuit against OpenAI centers on a similar issue, with ChatGPT responses containing "near-verbatim excerpts" from the outlet's articles.

The risks of chatbot plagiarism and misinformation can be minimized with a thoughtful, careful approach to Generative AI. News media businesses should establish clear policies that define how Generative AI may and may not be used in their organizations, policies that safeguard journalistic standards while still leaving room to explore the best ways to benefit from human-AI collaboration.

One of the hot-button issues with Generative AI in the news media domain today is copyright infringement: the tech companies behind Generative AI models have scraped the internet without the permission of content owners and without compensating media outlets. AI technology providers should work with government regulators and media outlets to agree on how proprietary content used to train AI models is accessed and licensed. Handled this way, the issue could even yield a significant upside for media companies: a new revenue stream from licensing their content.

Some progress on this issue has been made. In 2024, OpenAI established media partnerships with News Corp (The Wall Street Journal, the New York Post, and The Daily Telegraph), Axel Springer (Business Insider and Politico), DotDash Meredith, the Financial Times, and The Associated Press to license training data. While the intentions behind the licensing partnerships are good and there are mutual benefits to be had — tech companies get to continue to train their AI models on evergreen content, and media companies establish a new source of revenue and can shape a healthier and fairer information ecosystem — there's also been a backlash against them. Some media executives and reporters criticize OpenAI for striking partnerships in bad faith, only to avoid lawsuits, and criticize media companies for "failing to defend their own intellectual property ... trading their own hard-earned credibility for a little cash from the companies that are simultaneously undervaluing them and building products quite clearly intended to replace them."

One of the challenges is the lack of a common regulatory framework, which can result in misalignment between lawmakers and news media outlets. In August 2024, for instance, California lawmakers opted for a deal with Google in which the company agreed to pay $172 million to support local media outlets and launch an AI program. The money will be overseen by news industry groups and distributed among media outlets, including smaller and ethnic-media publications. The deal represents a significant departure from the bill pushed by news publishers and media employee unions earlier in 2024.

Long-Term Implications

The fear of professional skill atrophy from the use of Generative AI tools is another big challenge, both in media and journalism and in the broader professional context.

"Are we deskilling employees by removing creativity from work…or upskilling by enhancing and augmenting human creativity?" was a central question of the recent Emerging Tech Forum at the MIT Museum.

The answer is not obvious, but there is clearly a need for balance in the use of these tools. It's fair to ask how much of the writing reporters should do themselves, versus delegate to AI, to get the most from human-AI collaboration without compromising the key skills that good journalists need to develop and practice. Relying on AI-generated content for routine or time-consuming tasks is enticing because it's faster and easily automated. Used uncritically and naively, however, these tools can diminish problem-solving and critical thinking, and lead to plagiarism or undetected inaccuracies.

These questions must also be weighed against some demonstrable AI-powered productivity gains for reporters and news media companies, such as transcribing interviews, metadata tagging of content including images and videos, and having an AI model perform complex data analysis in the blink of an eye.
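
As a concrete illustration of the first two of those tasks, here is a brief sketch using the OpenAI Python SDK. The model choices and the tagging prompt are illustrative assumptions, and any comparable provider's API would serve equally well; note that the tagging helper only proposes tags, which an editor would still confirm.

```python
# Sketch of two routine newsroom tasks, transcription and metadata
# tagging, using the OpenAI Python SDK (pip install openai). Model
# names and the tagging prompt are illustrative choices, not a
# recommendation of any specific vendor or configuration.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_interview(path: str) -> str:
    """Speech-to-text for a recorded interview."""
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text

def tag_article(text: str) -> str:
    """Ask a model to propose metadata tags; a human reviews the output."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Suggest 5 short metadata tags (topic, people, places) "
                        "for the article, as a comma-separated list."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example usage (paths and files are hypothetical):
# transcript = transcribe_interview("interview.mp3")
# print(tag_article(transcript))
```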

Concluding Thoughts

While some see recent advances in AI technology as approaching a new kind of intelligence, akin even to human intelligence, Generative AI may actually be helping us to understand better how we differ from machines. The more we explore the wonders of Generative AI, the more we learn about human intelligence, and how fundamentally different it is from machine intelligence. This learning surely reinforces one of Generative AI’s core benefits for the news media business and so many others: its ability to take human-machine collaboration to a new level.

Generative AI is most effective when it assists journalists through a new type of collaboration rather than replacing them. In this model, human journalists guide and orchestrate the process, review and edit the generative output, and ensure it adheres to ethical and journalistic standards. They can push AI to pitch ideas for new stories, and they get to focus more of their time on the human journalism AI cannot do, such as observing a courtroom battle, or interviewing an imprisoned defendant or the grieving parent of a school shooting victim.

Assuming progress continues toward closer cooperation and alignment between news media outlets, AI tech companies, and government regulators, journalism and the news industry at large can embrace and benefit from the best that Generative AI technology has to offer.
