“It is easy to support transparency as a theoretical ideal, but much harder to specify its nuances and relevance in practice”, Agnes Stenbom says.

Stenbom is the co-founder of the Nordic AI Journalism network and served as project lead for the AI Transparency project, an initiative that brought major Swedish publishers together to formulate specific recommendations on AI transparency.

The project rests on two basic assumptions: transparency around how news publishers operate is important, but at the same time “there are risks associated with excessive AI transparency”, such as an unwarranted negative impact on audience trust in news. The report offers an early take on how to strike this balance.

AI transparency recommendations 

The AI transparency report provides recommendations on how Swedish media companies can inform their audiences about the use of AI in their editorial products.

The authors offer seven recommendations, labelling the first three as “fundamental” and the other four as “practical”:

1. “AI with ‘significant journalistic impact’ requires transparency”. What does “significant journalistic impact” entail in practice? That’s up to each newsroom to decide. (Presumably publishing AI-generated text or images would count in most cases, but there are blurrier cases where it’s the team’s call.)

2. “Other internal AI-tooling does not require transparency”. For example, using ChatGPT to proofread an article or an automatic transcription tool to transcribe an audio interview does not need to be disclosed.

“Of course we need to be very clear about the instances where the tools have significant journalistic impact, but for many of the other current use cases, it is almost like informing the user that ‘the internet was used while creating this story’”, Stenbom says.

3. “AI transparency must be approached as an iterative theme”. AI development is still at an early stage, and publishers should be ready to evolve their transparency principles.

4. “Be specific about the type of AI tool applied”. The authors advise publishers to “demystify the concept of AI” and specify which kind of tool was used, such as text generation or image analysis.

5. “Share information in connection with consumed content”. In other words, when you publish AI-generated text, it’s not enough to rely on a general AI policy buried in your website’s footer; AI usage should be mentioned alongside the specific article.

6. “Harmonise the industry’s language around generative AI”. As a first step, the authors suggest employing the phrase “created with the support of [an AI tool]” when disclosing AI usage to emphasise the involvement of human reporters and editors in the process.

7. “Avoid visual labels (icons) for AI in editorial media”, as they could wrongly imply that content created with the support of AI tools is less credible than human-created content.

Setting an example for other markets 

The Swedish news media industry is among the most advanced in Europe, for instance in the share of readers paying for digital news. When it comes to AI transparency, too, Stenbom hopes the Swedish collaboration can inspire others.

“We have gotten great response in and beyond our Swedish market so far, with even organisations not involved in the process pledging to follow the recommendations”, Stenbom notes.

As the report notes, AI is a rapidly developing field, and publishers need to be ready to iterate. Stenbom says the authors will follow up with project participants in six months to review who has adopted the recommendations and update them as needed.

Source of the cover photo: Dominik Hofbauer for Unsplash

