Three quarters of the news industry sees generative AI as an opportunity for journalism, according to a survey conducted by Google and the LSE. Yet scepticism about adopting generative AI tools like ChatGPT and Google Bard remains high. Publishers such as The Telegraph treat the use of AI-generated text in an article without the knowledge of their editors and legal team as plagiarism.

At the same time, publishers that have used AI tools responsibly have seen a positive impact. After a steady drop in traffic from social media sites like Facebook, BuzzFeed started experimenting with AI tools, which led to “above average engagement.” Sweden’s Aftonbladet has also seen an increase in time spent on articles whose summaries were generated with AI.

Separating AI tools from journalism entirely would be highly challenging, and it should not be the goal either. As Laurence Dierickx, a researcher of AI and journalism at the University of Bergen, puts it, “all AI systems used in journalism do not have the same potential for creating harm.”

Using AI in journalistic production

AI tools can save time and increase speed at every stage of the journalistic work process. They can help at the collection stage by finding information, assist in processing large volumes of data, improve the structure and quality of a news piece, and distribute the content effectively.

The French journalistic ethics council CDJM divides AI uses into three categories based on their potential risk of breaching journalistic ethics.

The first is “low risk uses”, a category of tools journalists can use without informing the public. It includes using AI for grammatical corrections, SEO generation and fact-checking or research.

“Moderate risk uses” covers tools such as translation, summary generation and the creation of audio and visual content; here the public needs to be informed. Last is the “prohibited” category of uses that are “intrinsically incompatible with respect for journalistic ethics”. The authors recommend not publishing generative images, videos or audio that might lead the audience to believe they are real, and not using AI-generated content without human supervision.

Guidelines on using AI 

There has been a flurry of activity around creating guidelines for the use of generative AI in the media sector. Several international and regional media organisations have come forward with their recommendations. One prominent document in this field is the Global Principles for AI.

Endorsed by the European Publishers Council, WAN-IFRA, News Media Finland, the Danish Media Association and many other international press organisations, it provides guidelines for the field of media and journalism.

The ‘Global Principles for AI’ places more responsibility on AI developers to establish an ethical framework. Another set of guidelines that specifically addresses the responsibility of news publishers is the RSF-initiated Paris Charter on AI and Journalism.

European take on AI guidelines

Alongside such international guidelines, many regional and national press bodies and organisations in Europe have drafted their own recommendations. Many of these suggest using AI tools ethically and under human supervision to improve journalistic work.

Northwestern University’s Nicholas Diakopoulos and the University of Amsterdam’s Hannes Cools studied the guidelines of European and US news organisations. They found that the guidelines of Sweden’s Aftonbladet, the Netherlands’ De Volkskrant, Switzerland’s Heidi.News and France’s Le Parisien recommend using AI tools as long as a human approves the content.

Norway’s VG uses AI tools in the creation of graphs, illustrations and models but has banned the use of photorealistic images. Switzerland’s Heidi.News and France’s Le Parisien have also stated that they use AI tools for ‘illustrative purposes’.

The German Journalists’ Association (DJV) guidelines propose that lawmakers make the labelling of AI-generated content mandatory. They also call for an option for end-users to opt out of AI-driven personalised distribution of content.

Another study of European guidelines was conducted by Dierickx. Together with Professor Carl-Gustav Lindén, she analysed 34 guidelines from 11 European news organisations for a paper that is currently under peer review.

According to their research, the Press Council of Catalonia produced “the most comprehensive guidelines”. Dierickx explains: “[it] is based on solid research to highlight challenges as much linked to the intrinsic characteristics of systems powered by large volumes of data, as well as responsible practices of AI technologies.”

They also found that the BBC is the only news publisher whose guidelines focus on machine learning engineers, providing six principles and a self-audit checklist to help develop ethical technology. “BBC’s principles underscore that the responsibility for AI lies with those who build the systems,” says Dierickx.

A similar idea emerged in Germany. “Responsible engineering was also at the heart of the guidelines published by the Bavarian public broadcaster Bayerischer Rundfunk (BR), which also emphasised the importance of developing a ‘data culture,’” says Dierickx.

AI is an ever-changing field, with new tools appearing every day and others disappearing. Guidelines will therefore have to evolve constantly while staying true to widely accepted journalistic principles.

“New technologies in journalism pose new challenges, highlighting the necessity for ethical practices founded on principles of accuracy, fairness and transparency,” Dierickx concludes.

Source of the cover photo: https://unsplash.com/
