Amid shrinking audiences, dwindling advertising revenue, and rounds of cost-cutting, journalists are understandably mistrustful. The spectre of AI stealing their jobs is merely the latest thing to fear.
And now, that spectre has a name: ChatGPT.
Recent headlines have demonised the creation. It is a woke, leftwing puppet. It will take your job. It can convincingly lie and spread disinformation.
To date, the only real-world example of a media platform adopting ChatGPT-style technology for content creation is BuzzFeed. It announced on January 27 that it would use AI to build quizzes, provide data for editorial brainstorming and personalise content for audiences.
True as that may be, the announcement was doubtless timed to maximise press and social media mentions of BuzzFeed just as the company appeared close to a multi-million-dollar deal with Meta.
However, the roles it will assign to generative AI align broadly with what experts believe the first generation of AI-newsroom integration will look like. The reality is that, for the moment, AI is not going to steal many journalists’ jobs; instead, it will help them counter the threats they face.
Let’s consider three essential positives that can help newsrooms implement AI rapidly.
ChatGPT is extremely cautious by design. When engaging with almost any topic related to history, culture, politics, and more, its responses will almost always include a caveat urging further consideration of the topic.
For some, this is proof that generative AI has gone “woke”, as it is unwilling simply to engage with any given subject.
This is an accusation newsrooms are all too familiar with. As Russia’s invasion of Ukraine nears its second year, for example, accusations of bias have become a standard line of attack to discredit reporting on the conflict. On issues that affect the lives of millions so personally, is it fair to expect objectivity, even from trained reporters and correspondents? Is it even possible?
And can AI help in this quest?
The AIJO project, created by eight media organisations, launched in 2020 to study and help correct newsroom biases. These ranged from abstract biases in how news is framed to very direct ones, such as gender representation in text and images.
Instead of investing in costly and time-consuming human reviews of potentially biased content, even reviews aided by data scraping, newsrooms can let machine learning programs do the job better. A program can be trained to detect multiple types of biased reporting across text, photo, video, audio and social media.
“…But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” said Sil Hamilton, a researcher at McGill University in Canada, who helped develop a machine learning program which analysed bias in reporting by the Canadian Broadcasting Corporation (CBC).
Versions of such tools are already available. Dbias, for example, is a Python-based program that detects biased text in articles and automatically suggests better phrasing. Other tools comment on the diversity of sources in a text or report which ethnicities appear in photos across a media organisation’s output.
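To illustrate the principle (this is not Dbias’s actual API, which differs), here is a minimal sketch of transformer-based bias flagging using the Hugging Face transformers library; the model name is a placeholder for any classifier fine-tuned to label sentences as biased or neutral:

```python
# A minimal sketch of ML-based bias flagging, in the spirit of tools
# like Dbias -- not its actual API. The model name is a placeholder
# for any classifier fine-tuned to label text as biased vs. neutral.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/bias-detection-model",  # hypothetical model
)

draft_sentences = [
    "The council approved the budget after a two-hour debate.",
    "The council rammed through yet another bloated budget.",
]

for sentence in draft_sentences:
    result = classifier(sentence)[0]
    # Surface high-confidence "biased" labels for an editor to review
    if result["label"] == "BIASED" and result["score"] > 0.8:
        print(f"Review: {sentence!r} (confidence {result['score']:.2f})")
```

The point is not that the model replaces editorial judgement, but that it can scan an entire archive and queue candidate sentences for a human to review.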
In response to the BuzzFeed announcement, the company’s former head of quizzes and games, Matthew Perpetua, complained that the human touch is what makes such interactions fun. Time will tell whether ChatGPT quizzes, set to appear in February, will be as engaging as the human-made version.
But recent history suggests it will take a long time for AI-written articles, beyond perhaps short blurbs, to be accurate enough to pass muster. In November 2022, technology news website CNET began testing AI article-writing software, publishing 77 articles with it. Each article started from a human-created outline fed into the AI; after it produced a draft, editors had to expand, fact-check, and edit the stories.
The results were not encouraging. Basic facts such as company names and figures were incorrect, the writing was often vague, and the built-in plagiarism checker was not used properly. CNET paused the test and plans to retool the AI before restarting at an unspecified later date.
“It’s related to the idea of ‘modular journalism’, where a story is seen not as one monolithic whole but rather as a collection of individual parts that can be mixed and served in different combinations. From what I’ve seen personally, AI is getting much better at putting the different parts together in a way that appears superficially as if it had been produced by a human,” explains Joe Litobarski, head of training and events at the European Journalism Centre.
This is where the industry stands. Curating a series of content for a particular user, based on their specific preferences and history, is something machine learning can already do better than humans. The YouTube and TikTok algorithms, controversial as they can be, are the best-known examples of this.
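At a far smaller scale than YouTube or TikTok, the underlying idea can be sketched as a content-based recommender: score each article against a profile built from what the reader has already read, and serve the closest matches. A toy example using scikit-learn, with invented article texts and reading history:

```python
# A toy content-based recommender: rank articles by cosine similarity
# between their TF-IDF vectors and a profile built from the reader's
# recent reading. Real personalisation systems are far more elaborate;
# this only illustrates the principle. All texts here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "ai-newsrooms": "How newsrooms are testing AI tools for reporting",
    "meta-deal": "BuzzFeed nears a content deal with Meta",
    "energy-grid": "Ukraine's energy grid under strain this winter",
}
reading_history = ["Generative AI and the future of journalism jobs"]

# Vectorise the candidate articles together with the reader's history
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(articles.values()) + reading_history)
article_vecs, profile_vec = matrix[:-1], matrix[-1]

# Higher cosine similarity = closer to what the reader already engages with
scores = cosine_similarity(article_vecs, profile_vec).ravel()
for slug, score in sorted(zip(articles, scores), key=lambda item: -item[1]):
    print(f"{slug}: {score:.2f}")
```

Production systems add engagement signals, freshness, and diversity constraints on top, but the ranking-by-similarity core is the same.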
“I see risks around AI hype, where people ascribe abilities to these models that they just do not have. Obviously, this relates to questions around accuracy and disinformation. There are real risks here not just to journalism as an industry, but to society more broadly as our filter bubbles and digital echo chambers risk being reinforced by AI generation,” warned Litobarski.
Since the Panama Papers leak, multi-platform, multi-country investigations assembled collaboratively by prominent media outlets have become commonplace.
The London School of Economics’ JournalismAI project aims to extend that collaborative spirit. For the last three years, its cohorts of reporters from around the world have jointly developed machine learning solutions to accurately assign quotes to sources, to help apply satellite imagery to storytelling, or to monitor misogynistic statements by politicians on social media.
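To give a flavour of one of those tasks, here is a deliberately naive, rule-based sketch of quote attribution. The cohort projects use machine learning; this toy regex only handles the simplest pattern of a quote followed by a speech verb and a name:

```python
import re

# A deliberately naive, rule-based sketch of quote attribution. The
# JournalismAI cohort projects use machine learning; this toy regex
# only catches the simplest pattern: "...," said Firstname Lastname.
QUOTE_PATTERN = re.compile(
    r'"([^"]+)",?\s*'                      # the quoted span
    r'(?:said|says|explained|warned)\s+'   # a speech verb
    r'([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)'    # a capitalised name
)

def attribute_quotes(text: str) -> list[tuple[str, str]]:
    """Return (quote, speaker) pairs found in the text."""
    return [(m.group(1), m.group(2)) for m in QUOTE_PATTERN.finditer(text)]

sample = '"We want to understand how the bias comes to be," said Sil Hamilton.'
for quote, speaker in attribute_quotes(sample):
    print(f"{speaker}: {quote}")
```

Rules like this break on indirect speech, pronouns, and quotes split across paragraphs, which is exactly why the cohorts turned to machine learning.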
If the BBC and the Associated Press have seen the value of such collaborations, it stands to reason that smaller newsrooms will too.
“This is precisely where the most value will lie, in exploring collaborative approaches that bring together the strengths of both human journalism and AI models. This is why I’m, overall, optimistic about the impact of AI on journalism,” concluded Litobarski.
This is where newsrooms should be acting: creating a media sandbox. Waiting is not a viable strategy. As Mattia Peretti, manager of the JournalismAI project, explained in an article for the London School of Economics, “New tools are released seemingly every other day, new scientific papers are published daily. Trying to keep up with everything is just not realistic.”
Instead, newsrooms need to plan ahead: understand how AI can enhance the work of their journalists rather than replace them; seek out the right partners to achieve that vision (many newsrooms may share the same one); and take the time to test before rolling anything out.
After all, not everybody can be BuzzFeed, chasing a quick PR boost at an opportune moment.
Further reading:
The Handbook on Disinformation and Media Manipulation – European Journalism Centre
JournalismAI Project – London School of Economics
JournalismAI Discovery Course – London School of Economics
Newsroom AI for Beginners – Knight Lab at Northwestern University