With advancements in AI tools and technology, distinguishing between AI-generated and human-made content is becoming increasingly difficult. It is one of the many problems newsrooms have to solve in the age of AI.

Laurence Dierickx, a researcher of AI and journalism at the University of Bergen, says: “The widespread use of artificial intelligence systems in journalism highlights the need for clear guidance. Self-regulation is important to avoid external rules enforced by, for instance, the state.”

A step in this direction was recently taken by Reporters Without Borders (RSF) along with 16 partners. A commission of 32 members from 20 countries, chaired by Nobel Peace Prize laureate Maria Ressa, published the Paris Charter on AI and Journalism.

The Charter aims “to determine a set of fundamental ethical principles to protect the integrity of news and information in the AI age.” Its 10 guidelines uphold essential principles of journalism: accuracy, accountability, independence, trust and transparency.

The Charter instructs media organisations to be transparent with their audiences about AI-generated content. It asks editorial teams to label such content as AI-generated and to provide traceability to its sources. It also asks media organisations to evaluate AI systems against journalistic values, and AI companies to credit sources and financially compensate their authors.
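For illustration only, here is a minimal sketch of what such a machine-readable disclosure label could look like in practice. The field names and structure below are hypothetical; they are not defined by the Charter or by any existing metadata standard.

```python
# Hypothetical example of a machine-readable AI-disclosure label.
# Field names are illustrative only, not drawn from the Charter
# or from any existing metadata standard.
import json
from datetime import date

disclosure = {
    "ai_generated": True,                    # explicit labelling of AI-generated content
    "tools_used": ["large language model"],  # class of AI system involved
    "human_review": "edited and fact-checked by newsroom staff",
    "sources": [                             # traceability back to original sources
        "https://example.org/primary-report",
    ],
    "disclosed_on": date.today().isoformat(),
}

# A newsroom CMS could embed this label in the article's metadata.
print(json.dumps(disclosure, indent=2))
```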

The Charter is one of the first attempts to set the tone internationally for the use of AI in journalism. Prominent media organisations and unions were part of the team that drafted it, including the Committee to Protect Journalists, DW Akademie, the Pulitzer Center and 13 others. The commission’s members come from geographically diverse backgrounds, including Asia, Africa and Latin America.

A critical look at the Charter

Despite being a significant step towards laying the foundation for the ethical use of AI in journalism, the Charter lacks input from other stakeholders. The commission in charge of preparing the document consisted of directors of journalism unions and organisations, along with researchers in the field of AI use in journalism.

Dierickx explains, “AI in journalism is not only a matter of journalists, insofar as they are likely to work together with data and computational scientists. Hence, all the stakeholders should have been involved in the discussions on ethical guidelines.”

She adds, “From this perspective, the Paris Charter fails to meet two needs – more data and AI literacy and promoting an inclusive approach that fosters interdisciplinarity by considering tech scholars. Developing data and AI literacy is essential for journalists to be able to report and investigate data and algorithmically driven stories.”

Dierickx notes the omission of key points from the Charter: “Responsible AI is also about considering the environmental impact of these technologies.” She highlights the example of Finland’s Yle, whose guidelines state: “We understand that the computing requirements of AI also burden the environment and we make choices to minimise these impacts.”

Flaws of the guidelines

The World Association of News Publishers (WAN-IFRA) was one of the 16 partners involved in preparing the Charter. Despite its involvement, WAN-IFRA welcomed the Charter but decided not to promote it. Its CEO, Vincent Peyrègne, stated that the organisation cannot endorse the Charter due to “the unrealistic prospects of section 3.”

Section 3 of the Charter advocates that AI systems used in journalism undergo prior, independent evaluation: “This evaluation must robustly demonstrate adherence to the core values of journalistic ethics. These systems must respect privacy, intellectual property and data protection laws”.

Dierickx also critiques this section, pointing out that ChatGPT relies on data that can be unreliable, biased and copyrighted. “Also who are the humans who will evaluate the systems? Besides, there are new tools that appear almost every day and other tools that disappear.”

Another problematic section is Section 8, which advocates that AI-driven content personalisation and recommendation be based on journalistic values: “Such systems should respect information integrity and promote a shared understanding of relevant facts and viewpoints. They should highlight diverse and nuanced perspectives on various topics, fostering open-mindedness and democratic dialogue.”

Dierickx notes, “This article tries to preserve the diversity of information while ignoring two fundamental ethical aspects of data privacy and the informed consent of using data. In addition, research highlights that ethical issues related to news recommendations and personalisation are less related to filter bubble or echo chamber effect than exacerbating the polarisation of opinions.”

Sections 2 and 4 of the Charter urge editorial teams to assess the ethical suitability of every AI system and to assume responsibility for the use of such tools at every stage of the journalistic process. But this should not be the sole responsibility of editorial leaders. As Dierickx noted earlier, other stakeholders such as journalists and AI and data scientists should have a say in the final decision.

The Charter also discourages the use of generative AI by setting unrealistic standards that media organisations would need to meet before using AI tools. A survey conducted by the Google News Initiative and LSE’s Polis found that more than 75% of media professionals use AI tools in their news production process, with 73% believing generative AI offers important new opportunities for journalism.

Hence, Dierickx says: “It is better to prioritise education rather than prescription, particularly in the field of AI. This is absent from the Charter. A risk-based approach, as promoted by the [European Union’s proposed] AI Act and the French Press Council, would have been more relevant because all AI systems used in journalism do not have the same potential for creating harm.”

AI tools can be incorporated into each stage of journalistic production, from collecting information and cleaning large datasets to publishing the story. At every stage, AI can help journalists polish their work. But it is important for journalists and news organisations to use these tools in an ethical manner that fits the values of the profession.

As Dierickx concludes, “Ethics is first and foremost a matter of practice. Ethical dilemmas arise every day and they are not always easy to solve based solely on a few lines from a document.”

Source of the cover photo: Justice.RSF, CC BY-SA 4.0, via Wikimedia Commons

