How Italy’s journalists’ association is regulating AIs — and why it might not be enough

In June 2025, a new ethical code for journalists will come into force in Italy. The updated Testo Unico dei Doveri del Giornalista introduces Article 19, a short but striking set of rules on the use of artificial intelligences. It’s one of the first national-level professional guidelines in Europe to directly address the integration of generative AIs into journalism. But in a country known for its regulatory enthusiasm — at least on paper — the gap between codified norms and practical enforcement is often as wide as the Tiber. Article 19 is no exception.

From charters to unification: a new era in journalism ethics

The Italian journalistic profession is no stranger to deontological rules. Over the decades, various documents have been crafted to regulate sensitive areas — from reporting on minors (Carta di Treviso) to migration (Carta di Roma), from economic reporting to gender discrimination (Carta di Firenze). Today, most of these texts converge in a single framework: the Testo unico dei doveri del giornalista, in force since January 2021. This unified code was created to harmonise previous charters and provide clearer, more consistent ethical standards across the board. It integrates the documents mentioned above and others, such as the Carta dei doveri del giornalista (1993), the Carta di Gubbio, the Carta dell’informazione economica e finanziaria, the Codice in materia di sondaggi, and many more. A new version of this unified code has now been approved, and with it comes Article 19.

What the article says — and what it doesn’t

Article 19 introduces three clear principles:

  • AIs cannot replace journalism: The human journalistic act is irreplaceable, no matter how advanced the tech.
  • Transparency is mandatory: Journalists must explicitly declare when and how AIs were used, and remain fully responsible for the content.
  • Ethical obligations remain: Using AIs doesn’t exempt anyone from verifying facts, sources, or respecting professional duties.

So far, so good. On paper, these principles are solid. They reaffirm the centrality of the human journalist and signal that AIs are a tool, not a substitute. But the real challenge lies in how these rules will be interpreted — and by whom.

Please note that the use of the plural AIs is mine. It is not just a trivial affectation: I think it is necessary to remind ourselves of the plurality of these tools and of the fact that they are not human. Article 19, however, like most journalistic coverage of AIs, talks about a generic “artificial intelligence”.

Where things get blurrier: practical challenges and systemic risks

I’ve identified several grey areas in Article 19. What does “using AI” even mean? If a journalist drafts an article with ChatGPT, that probably counts as AI use (though, as we will see, it depends).

But what about using Grammarly to fix grammar? Pinpoint to transcribe an interview? Gemini to summarise a report? Should every single interaction be declared? How should disclosure work? Should the use of AIs be tagged in metadata? And what happens when these tools are fully integrated into CMSs and hardware? Is a note at the bottom enough?
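To make the metadata question concrete, here is a minimal, purely hypothetical sketch of what machine-readable disclosure could look like. The field names are invented for illustration; the IPTC “digital source type” vocabulary mentioned in the comments is one existing candidate standard, but Article 19 prescribes no format at all.

```python
# A purely illustrative sketch of machine-readable AI disclosure in article
# metadata. Field names are invented; nothing here is mandated by Article 19.
import json

article_metadata = {
    "headline": "Example article",
    "byline": "Jane Reporter",
    "ai_use": [
        {
            "tool": "speech-to-text transcription",  # hypothetical entry
            "task": "transcribing an interview",
            "human_verified": True,
        },
        {
            "tool": "generative chatbot",
            "task": "summarising a public report",
            "human_verified": True,
            # One real candidate vocabulary: IPTC's "digital source type"
            # NewsCodes, e.g. "compositeWithTrainedAlgorithmicMedia" for
            # human-edited, partly AI-generated material.
            "digital_source_type": "compositeWithTrainedAlgorithmicMedia",
        },
    ],
}

print(json.dumps(article_metadata, indent=2))
```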

There’s no guidance on formats or expectations. Journalists are also asked to “verify the sources and truthfulness” of AIs’ outputs. But which outputs? If a chatbot suggests a quote from a press release it scraped somewhere, journalists must already have the skills and tools to verify it, AIs or not.

Moreover, Article 19 puts the burden entirely on the individual journalist. But decisions about integrating AIs — from editorial automation to audience chatbots — are made by publishers and newsroom leaders. Without institutional responsibility, this rule risks becoming a disciplinary trap for freelancers and staffers, not a shield for journalistic integrity.

Voices from within: councillor Gianluca Amadori

To understand how Article 19 is meant to work in practice — and to test some of its ambiguities — I reached out to one of the councillors of the Italian Order of Journalists, Gianluca Amadori, who was directly involved in the process. Here is a translated version of our exchange. 

Alberto Puliafito: The first paragraph of Article 19 tackles a crucial issue: substitution. But shouldn’t this primarily concern publishers, not journalists?

Gianluca Amadori: Article 19 establishes a fundamental obligation of transparency and a principle of responsibility for journalists: the audience must know whether AI has been used to create a journalistic piece. In any case, the journalist’s duty to verify the information remains intact. The Journalists’ Order can only bind its members to follow ethical rules, not publishers. To them, we can only issue invitations and recommendations. During the recent national contract negotiations, the FNSI [Italy’s main journalists’ union, Ed.] listed the AI issue among the non-negotiable priorities.

A.P.: Right now, AI cannot replace source verification, human judgment, or journalistic method — though it can be a powerful assistant. So what exactly does “substitution” mean?

G.A.: Technology has accustomed us to revolutionary changes becoming reality at an astonishing pace. The first paragraph of Article 19 states that a journalist cannot be replaced by AI in producing journalistic content. This principle is also aimed at publishers — particularly at editorial leadership, who are bound by professional rules and therefore must ensure the journalist’s role remains central.

A.P.: In what form, and in which cases, must a journalist explicitly disclose the use of AI? If, for instance, AI-powered translation tools, transcription, summarisation software, or even AI-integrated search engines (like the evolving Google) are used — and verification is still performed — does this need to be specified? And how precisely?

G.A.: The current Code already requires journalists to cite their sources — unless there’s a clear need for confidentiality. Why should the use of AI be treated differently? Personally, I believe transparency is always a value. Therefore, citing the tools used for research, synthesis, or translation should be considered a professional obligation in every journalistic piece.

A.P.: The reference to ethical responsibilities seems essential. Why did the Order feel it was necessary to include a specific reminder in the context of AI?

G.A.: The media world is changing at an extraordinary speed. Technology brings huge opportunities — but also new risks. And we’re dealing with sensitive material here: news touches constitutionally protected rights like freedom of expression and human dignity. That’s why it must be the journalist who governs the information, takes responsibility for it, and ensures maximum transparency.

Voices from within: the president’s responses

Shortly after the exchange with councillor Gianluca Amadori, I also received a detailed reply from the President of the Italian Order of Journalists, Carlo Bartoli. At the time of writing, the organisation is undergoing leadership renewal following recent elections — it remains to be seen whether the current president will stay in office.

Here is a translated version of his full response to the same questions:

On publishers and substitution:

There’s no doubt it also addresses publishers, who might be tempted to use generative AI solely as a way to cut costs. Just like what happened with the arrival of the web and social media, which led to disastrous consequences for the publishing sector — as it threw itself into the arms of the big platforms and ended up with nothing but crumbs. As the Order, however, we can only address registered professionals. The first paragraph explicitly asks our colleagues — editors above all — to resist the temptation of doing everything “automatically”, abandoning the core principles of the profession.

On the meaning of “substitution”:

When writing a rule, the goal is to outline a reference framework that looks ahead — in this case, toward the future evolution of these tools. As with any regulation, you can’t go into too much detail. AI, like the internet and social media before it, is evolving so quickly that by the time you describe one application, it’s already outdated and a newer version has taken its place.

On the form disclosure should take:

The rule is general in nature and specifically refers to generative AI. As stated in paragraph 3, if a journalist produces text, images, audio, or anything else using AI, they must inform the audience so that readers are aware. But disclosure alone doesn’t solve the problem: a revealing example is the experiment by Il Foglio, which launched “Il Foglio AI” [see “Il Foglio AI and the automation of opinion journalism”, Ed.]. They clearly stated that the content was generated entirely by AI — but the result wasn’t journalism. An article isn’t just a summary or a compilation: it’s research, verification, and the deliberate construction of a story or investigation. Even more concerning is the use of AI to generate images, videos, and sounds entirely on its own.

On the reference to ethics and the need to be specific about AIs:

The ethics of our profession must always adapt to the current reality. That was true when we used typewriters, then when personal computers arrived, and it applies now with AI. Technology evolves — digital tools advance even faster — but the core of journalism stays the same: investigating facts, verifying sources, using appropriate and balanced language, respecting people, rejecting all forms of discrimination, and seeking truth within the limits of the context. These foundations cannot be ignored, even when using generative AI. Generative AI can be useful if used as a tool — as support — but it can also mark the end of journalism if allowed to operate according to rules of profit or special interests, often opaque ones, instead of our own.

The real problem with AI in news — especially with audio and video content — is what happens outside the boundaries of professional journalism. The entire digital ecosystem is affected. If we are flooded with fake videos that look increasingly real, public disorientation will grow. That’s why journalists — and the entire professional news system, from publishers to institutions — must fully play their role: to offer citizens information that is ethically and professionally sound. That is the only way to build trust. And trust is essential for democracy.

How I used AIs to write this article — transparently

With a certain effort, I’ll try to be as transparent as possible about how I used AIs to write this piece. But it’s a long story. First of all, several months ago I built my own personalised AI assistant, fine-tuned on dozens of articles I’ve written — in both Italian and English — on topics like digital literacy, media literacy, AI literacy, and media analysis. I designed it so that it could analyse my writing style and progressively learn the logic, structure, and rhythm I tend to follow, distilling recurring patterns.

Once ready, I began using this assistant in multiple ways: to test the strength of an idea; to have it challenged; to draft a pitch; to structure a potential article; to identify logical gaps; to polish translations and my English, which is not my mother tongue; to proofread drafts; or to highlight stylistic inconsistencies. Sometimes I use it to help surface additional sources or perspectives I might have overlooked. Once an article is finished, it feeds back into the assistant’s rules and knowledge base. In my freelance work, my personal AI assistant has become a tool I can truly work “together” with. And I have more than one.
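To give a concrete, if deliberately simplified, idea of this general pattern (an illustration, to be clear, not the actual setup described above): distilled style rules sit in a system prompt, and a few past articles are injected as context. Every name, prompt, and model choice below is a placeholder.

```python
# A deliberately simplified sketch of the "personal assistant" pattern:
# distilled style rules live in a system prompt, and a few past articles are
# injected as context. Model, prompts, and texts are illustrative placeholders.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

# In a real setup these would be distilled from dozens of published articles.
STYLE_RULES = "Short paragraphs. Plural 'AIs'. Hedge claims. Cite sources."
PAST_ARTICLES = [
    "Example past article about AI literacy...",
    "Example past article about newsroom workflows...",
]

def challenge_pitch(pitch: str) -> str:
    """Ask the assistant to stress-test an article idea against past work."""
    context = "\n\n---\n\n".join(PAST_ARTICLES)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": f"You are my writing assistant. Style rules:\n{STYLE_RULES}",
            },
            {
                "role": "user",
                "content": (
                    f"Some of my past articles:\n{context}\n\n"
                    f"Challenge this pitch and list its logical gaps:\n{pitch}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

print(challenge_pitch("Italy's Article 19 regulates AI use, but enforcement is unclear."))
```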

I said it would be a long story. And the thing is, I can’t even say exactly how many ways I used my assistant in this article. But I have full control over it, because I’m the one who thought this piece up, structured it, refined it, checked it, and reread it before sending it to the editors of The Fix.

This workflow allows me to speed up what I would call the non-human parts of writing — and gives me back time for the most human ones. Like reaching out to Amadori and Bartoli and asking for an interview. Like digging deeper into ethical questions. Like taking care of how a piece is constructed, step by step, word by word.

As you can tell, AIs in this process are never a source. They are assistants. If that’s what needs to be disclosed — fine. But the real issue lies elsewhere: we need to teach journalists how to use these tools properly, if they choose to. And we need to focus on the bigger structural threat: the mass production of underpaid content to feed a click-driven business model.

Rules are necessary, but not sufficient

Compared to other countries, Italy’s move is pioneering. Most press councils across Europe are still debating how to incorporate AI into existing frameworks. The EU’s AI Act will provide a legal framework, but not an ethical one tailored to journalism. In that sense, Italy’s Article 19 is a much-needed first step. Yet it also reveals a deeper problem: journalism still lacks shared operational standards for AIs. The code offers no practical examples, no shared glossary, no protocol templates — and no support for journalists trying to navigate this complexity in real time.

To be effective, deontological rules need to be discussed in newsrooms, taught in journalism schools, shared with readers and audiences, and applied with collective responsibility, not just individual liability.

Article 19 signals that Italian journalism wants to take AIs seriously — and that’s a good thing. But like many ethical rules, it currently exists in a vacuum. Without newsroom policies, training, and open conversation, its impact will be limited. Both Gianluca Amadori and the President of the Italian Journalists’ Association reveal an encouraging level of awareness.

Both acknowledge — albeit in different tones — that AIs are not just a tool but a cultural shift. Both stress that disclosure alone is not enough. Both recognise that the risks of automation are as much economic as they are ethical. And the President speaks openly of opaque interests, of editorial temptation, and of a flood of synthetic media threatening the very conditions for democratic trust. Still, this openness raises the stakes even further.

AIs are not a threat to journalism per se. But unregulated, opaque, and editorially imposed AIs might be. If we want ethical and sustainable integration, we need more than rules. We need culture, shared knowledge and competences, and collective agency.

Otherwise, the risk is clear: good intentions may end up becoming another layer of pressure for those who already carry the weight of a struggling system. And let’s be clear: as my 2024 surveys revealed, journalists are already using AIs. The question is not whether, but how. The ethical conversation on the adoption and application of AIs has started. But we need to make sure it doesn’t stop at declarations. Because the issue is no longer just how AIs are used in journalism. The real question is what kind of journalism — and what kind of media ecosystem — we are willing to build.

Source of the cover photo: generated by OpenAI’s ChatGPT, DALL·E

