The emergence of publicly and commercially available AI tools like GPT-4 presents newsrooms with new ethical challenges, from navigating the potential of this revolutionary technology to upholding the public trust that journalism depends on. AI holds vast promise for reimagining how audiences interact with news, from content production and distribution to potentially transforming news organisations' business models. In this context, we've documented how AI tools are already making their mark in daily news, generating and classifying text and tailoring content for individual readers.
But this promise is shadowed by concerns: the potential misuse of large language models, the reliability of their outputs, copyright quandaries, and privacy matters.
In response to these challenges, many newsrooms have developed pioneering standards for AI use in journalism. One continuously updated review of existing guidelines, maintained by the Nieman Lab team, highlights recurring themes and patterns that can guide other newsrooms in crafting bespoke AI practices.
Take The Associated Press (AP) for example. They were among the first newsrooms to offer directives for generative AI tools. They spotlighted core values of accuracy, fairness, and speed, underscoring the importance of human editing to counteract risks like misinformation propagation and privacy breaches. In a collective initiative, figures like ICIJ executive director Gerard Ryle and journalist Maria Ressa are joining forces to frame guiding principles for AI system use.
Other organisations are also working on guidance for newsrooms looking to use generative AI. The Partnership on AI, a nonprofit aimed at addressing the impact of AI across industries, is developing a draft of its AI Procurement and Use Guidebook for Newsrooms. At 27 pages, the document delves much deeper into the use of AI in the newsroom.
A recent study from Kim Björn Becker, Felix M. Simon, and Christopher Crum conducted an in-depth analysis of 52 guideline sets from 12 countries, aiming to understand the complex relationship between AI’s potentials and pitfalls in journalism.
Although there is plenty of literature on media self-regulation, the spotlight has predominantly been on social media. There, policies often display a high degree of inconsistency and ambivalence. AI guidelines paint a different picture: the underlying assumption for the study by Becker, Simon, and Crum was that media outlets respond to AI’s challenges and opportunities in a fairly consistent manner. This assumption stems from the theory of institutional isomorphism, suggesting that organisations within a specific domain tend to emulate each other when faced with uncertainties, especially in challenging times.
The inherent ambiguity of AI’s capabilities and implications might push organisations to mirror established entities. Early AI guideline setters, such as the BBC, likely served as templates for subsequent adopters. Professionalism and the quest for legitimacy further fuel this trend.
Will we witness a future of standardised AI protocols, or an array of customised ones? The theory of institutional isomorphism suggests a lean towards standardisation, and the observations of study co-author Felix Simon echo this.
The study's dataset is broad, spanning 12 countries and 52 publishers. However, it is neither balanced nor fully representative: German publishers are overrepresented, while countries such as India and Brazil are covered only thinly. The collection process also faced hurdles, with some organisations not having guidelines ready or being unwilling to share their policies. When asked why many newsrooms have yet to establish guidelines for AI usage, Simon drew on his background conversations: simply put, “there are plenty of news organisations who do not yet use AI extensively. Some also seem to have adopted a ‘wait and see approach’, or have established guidelines but are not making them public. We can expect to see more guidelines come out in the next weeks and months”.
These findings should be read alongside a survey published by the World Association of News Publishers in May, which found that about half of the 101 respondents were using generative AI tools in their newsrooms, but only 20% had guidelines in place to govern their use. This suggests that publishers still take a relaxed approach to generative tools, but that guidelines will be developed over time.
Through a blend of qualitative and quantitative analyses, the study found that publishers often use similar terminology. Themes like accountability, allowed and prohibited AI applications, and intended audiences were consistent across two-thirds of the analysed categories.
A notable distinction emerged between public and private media outlets. Public media displayed a heightened awareness of human oversight over algorithms, likely due to their structured organisation and specialised editorial teams.
This and other areas remain open for further enquiry, including technical systems, technological dependency, and audience engagement in AI discussions. Sustainable AI and its broader environmental and societal impacts were also notably absent in the guidelines. Furthermore, AI’s influence on existing power disparities, especially in terms of cultural and local diversity, was only sporadically mentioned.
The authors of the study believe newsrooms would benefit from audience feedback, for example to identify further blind spots in AI guidelines and to inform decisions around the disclosure of AI use. According to Simon, academic and subject experts could also help on topics such as oversight of algorithms and AI sustainability, as “the expertise exists, but is up to news organisations to make use of it and up to academics to engage in a hands-on manner”.
Yet, we must tread carefully. A recent study examining 41 policies that prescribe human oversight of government algorithms questions whether people are actually able to perform this function, and warns that, if they are not, human oversight policies could legitimise the use of faulty and controversial algorithms, providing a false sense of security rather than protecting against the potential harms of algorithmic decision-making. It is also important to remember that large technology platforms, too, need robust content moderation policies to foster a safe and healthy information ecosystem for news organisations.
One of the most notable shifts in AI has been the increasing emphasis on interdepartmental collaborations. The link between academia and newsrooms promises to redefine the way information is disseminated and consumed. Yet, as these collaborations forge new paths, it’s essential to scrutinise the power dynamics emerging amongst the editorial, business, and tech teams. How these groups navigate their respective roles will substantially influence the AI guidelines that emerge from these collective dialogues.
However, the AI community stands divided. While some organisations have thrown open their doors to embrace these guidelines, others peer through the keyhole, hesitant and unsure. Understanding further motivations behind such diverse reactions will not only enrich our understanding of perceived AI benefits and risks but might also light the way for broader acceptance.
The role of publishers in these phenomena is also overestimated. As the CSIS notes, the evolution of newsrooms to accommodate technological advancements is not without repercussions, and the sustainability of news cannot fall on publishers alone. Different scenarios could arise in the future. Search engines may reduce web traffic to external news websites by answering user queries with AI, and news posts, even if curated with state-of-the-art technologies, may still be de-prioritised by social media platforms whose AI often favours fake, spammy or manipulative user-uploaded content. Add to this the impact of AI on the workforce, which heightens labour uncertainty. All of this points to the fact that both technology platforms and newsrooms need formal guardrails to promote ethics, fairness and transparency in the development and deployment of AI.
Amidst this intricate web of technology and guidelines, human oversight remains crucial. It’s not merely about supervising algorithms; it’s about intertwining human values, ethics, and expertise with machine logic. Ensuring this harmony is not just a technical challenge but a societal one, marking the path ahead for AI’s integration into our world.
Source of the cover photo: https://unsplash.com/