Editor’s note: we are republishing one of the emails from The Fix’s new AI newsletter course by Alberto Puliafito, which offers perspective and practical advice on artificial intelligence for news leaders. You can subscribe for free to access the whole course.
I believe that fact-checking must be – and indeed is – part of the whole journalistic process.
I’m not alone in these thoughts: “Journalists often describe the essence of their work as finding and presenting ‘the facts’ and also ‘the truth about the facts.’ They also describe using certain methods – a way of working – which Bill Kovach and Tom Rosenstiel describe in The Elements of Journalism as a scientific-like approach to getting the facts and also the right facts. Called the Discipline of Verification, its intellectual foundation rests on three core concepts – transparency, humility, and originality”. And more: “In the end, the discipline of verification is what separates journalism from entertainment, propaganda, fiction, or art. Journalism alone is focused first on getting what happened down right.”
Verification is the foundational pillar of journalism; fact-checking is the backbone of credible journalism. In an era when information spreads rapidly online, journalists must be equipped with the tools and techniques to verify facts swiftly and accurately. Today, we’ll explore how AI can enhance the fact-checking process, discuss the tools available, and consider the ethical implications of relying on AI for this work.
AI can help by automating parts of this process, allowing journalists to focus on more complex verification tasks that require human judgement. However, while AI can be a valuable tool, it’s essential to use it responsibly, ensuring that it complements rather than replaces traditional fact-checking methods.
Is automated verification of facts a thing? Let’s start from a caveat: generative AI can’t do the verifying for journalists, says Andrew Dudfield, the head of AI at Full Fact, a UK independent fact-checking organisation. “In reviewing past fact checks from his own organization,” Angela Fu wrote for Poynter, “Dudfield found that the vast majority involved ‘brand new information.’ Fact-checkers had to consult experts and cross-reference sources to produce those stories. AI tools, which draw upon existing knowledge sources, can’t do that.”
But AI tools can assist in the rapid verification of factual claims by cross-referencing them with reliable databases, news archives, and other sources. This can be particularly useful for checking the accuracy of statements made by public figures or in viral content, even with brand new information.
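The cross-referencing idea can be sketched in a few lines of code. The snippet below is a deliberately simplified, hypothetical illustration – the archive, headlines and claim are invented, and real systems would use a proper search index or AI embeddings rather than word overlap – but it shows the basic mechanic: retrieve the archive items most related to a claim so a human fact-checker can review them.

```python
# Hypothetical sketch: retrieve archive articles that overlap with a claim,
# so a journalist can review possible corroborating or contradicting sources.
# The archive and the claim are invented examples; real systems would use a
# proper search index or AI embeddings instead of simple word overlap.

STOPWORDS = {"a", "an", "the", "is", "to", "by", "on", "of", "about", "s"}

def tokenize(text: str) -> set[str]:
    """Lowercase a text and keep its meaningful words."""
    words = "".join(c if c.isalnum() else " " for c in text.lower()).split()
    return {w for w in words if w not in STOPWORDS}

def rank_sources(claim: str, archive: list[dict]) -> list[dict]:
    """Rank archive entries by word overlap with the claim, most relevant first."""
    claim_words = tokenize(claim)
    scored = []
    for article in archive:
        overlap = len(claim_words & tokenize(article["headline"]))
        if overlap:
            scored.append({**article, "score": overlap})
    return sorted(scored, key=lambda a: a["score"], reverse=True)

archive = [
    {"headline": "Harris proposes higher capital gains tax on wealthy investors"},
    {"headline": "Fact check: no plan caps a person's net worth at $100 million"},
    {"headline": "Local election results announced in three districts"},
]

claim = "Kamala Harris is about to cap a person's net worth at $100 million"
for hit in rank_sources(claim, archive):
    print(hit["score"], hit["headline"])
```

The point of the sketch is the division of labour: the machine narrows the haystack, the journalist still reads and judges the sources.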
Let’s see an example using Perplexity. I fabricated a brand new claim to verify: “Kamala Harris is about to tax the rich by putting a $100 million cap on a person’s net worth”.
Is it true or false? Before starting any other verification work, I put the question to Perplexity.
As you can see, Perplexity collects a set of sources and provides me with a summary of its findings. The first thing to note: “there is no evidence to support the claim”. The answer then goes on from there.
Now, it’s true that I still have to do my job as a journalist and check the sources. But this is a solid base to start from, and it can save a lot of time: I have been given both a summary and the sources to check.
Tracking statements – If you want to use AI tools to track a statement, you can do something like this:
“Trace the origin of this statement [insert statement] and provide a timeline of when and where it was first mentioned. Identify any changes or alterations in the way it has been reported over time.”
Real-time monitoring and alerts – AI can monitor news and social media in real time, identifying potentially false or misleading information as it spreads. This allows journalists to respond quickly to emerging misinformation before it gains traction. It is worth experimenting with AI-driven monitoring tools like Dataminr, which can alert you to trending topics or viral posts that may require fact-checking.
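At its core, this kind of alerting combines two signals: does a post match something on the newsroom’s watchlist, and is it spreading fast enough to matter? The sketch below illustrates that logic with invented posts and a made-up share threshold; real tools like Dataminr operate at far larger scale with machine learning, not simple keyword rules.

```python
# Minimal sketch of the monitoring idea, with invented data: flag viral posts
# that mention a claim on the newsroom's watchlist. Real monitoring tools
# work at far larger scale with machine learning, not keyword rules.

WATCHLIST = ["net worth cap", "$100 million", "vaccine microchip"]
VIRALITY_THRESHOLD = 10_000  # shares before a post is worth a fact-checker's time

def flag_for_review(posts: list[dict]) -> list[dict]:
    """Return watchlisted posts whose share count crosses the threshold."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if post["shares"] >= VIRALITY_THRESHOLD and any(
            term.lower() in text for term in WATCHLIST
        ):
            flagged.append(post)
    return flagged

stream = [
    {"text": "BREAKING: new net worth cap of $100 million announced!", "shares": 25_000},
    {"text": "Cute dog does a backflip", "shares": 80_000},
    {"text": "Net worth cap rumour spreading again", "shares": 300},
]

for post in flag_for_review(stream):
    print("Review:", post["text"])
```

Note that the viral dog video is ignored despite its share count, and the low-traction rumour is not flagged yet: the filter is about prioritising a fact-checker’s limited time, not catching everything.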
Assisting in deep research – AI tools can help with the deep research needed to fact-check complex stories. By sifting through large amounts of data, including historical records, scientific studies, and legal documents, AI can surface relevant information that supports or refutes a claim. To do so, you can experiment with several LLMs such as ChatGPT, Claude or Gemini, with Perplexity again, or with specific academic search engines like Consensus or Scholar. These are AI-powered academic search engines grounded in scientific research: they use large language models and search technology to surface the most relevant papers, then synthesise both topic-level and paper-level insights, with everything connected to real research papers.
Let’s say you’re working on a story about climate change and need to fact-check recent claims about the impact of global warming on sea levels. You can use a prompt like this in Consensus:
“Search for the most recent peer-reviewed studies on the impact of global warming on sea levels. Provide a summary of the key findings from the top papers, and include direct links to the research.”
This will help you quickly gather scientific data, summarise the research, and verify the information with reliable sources.
Language processing, translation and summarisation – Of course, you can use AI tools’ natural language processing capabilities to fact-check content in multiple languages, broadening the scope of your verification efforts. AI can also translate foreign-language content, allowing journalists to verify claims from international sources, and it can help you understand and contextualise a claim or a fact from a culture different from your own. Moreover, LLMs can help you summarise large documents and answer specific questions about them.
Detecting deep fakes and manipulated media – As I wrote in this guide for The Fix about manipulated media, content manipulation is as old as content itself. AI can help detect deep fakes and other forms of manipulated media by analysing inconsistencies in video, audio, and images. As deepfake technology becomes more sophisticated, AI tools are increasingly essential for verifying the authenticity of multimedia content. But they are not magic wands: sometimes they can only raise doubts about a video, an image or an audio file. One of the most interesting tools we have as journalists to check videos and pictures is the InVID Project.
I used InVID’s forensic analysis to check for possible signs of manipulation in the famous Kate Middleton family portrait, and found some interesting results. InVID’s forensic toolset is designed to help detect alterations in manipulated images. It is best not to use it on screenshots, scanned documents, or collaged images, as they may already be altered; use the highest-resolution image available for best results. The tool provides several filters, and the more filters that highlight the same area, the more suspicious that area is. Note that forensic filters detect any digital signal changes, not just semantic manipulations, which can lead to false positives: complex textures or excessive brightness can also alter signals unintentionally. You can get false negatives too. Again, verification can’t be reduced to one tool: you’ll need to put the content in context, and sometimes you won’t reach a definitive answer.
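The “more filters agreeing, more suspicion” heuristic can be illustrated with a toy example. The grids below are invented 4×4 boolean maps standing in for the output of different forensic filters – this is not real image forensics, just the aggregation logic: count how many filters flag each pixel and look hardest at the regions where several agree.

```python
# Toy illustration of the principle that agreement between forensic filters
# matters: each filter yields a map of suspicious pixels, and regions flagged
# by several filters at once deserve the closest look. The filter outputs
# here are invented 4x4 boolean grids, not the result of real forensics.

def overlap_map(filter_maps: list[list[list[bool]]]) -> list[list[int]]:
    """Count, for each pixel, how many filters flagged it as suspicious."""
    rows, cols = len(filter_maps[0]), len(filter_maps[0][0])
    return [
        [sum(fm[r][c] for fm in filter_maps) for c in range(cols)]
        for r in range(rows)
    ]

def hotspots(counts: list[list[int]], min_agreement: int = 2) -> list[tuple[int, int]]:
    """Pixels flagged by at least `min_agreement` filters."""
    return [
        (r, c)
        for r, row in enumerate(counts)
        for c, n in enumerate(row)
        if n >= min_agreement
    ]

F, T = False, True
ela    = [[F, F, T, F], [F, T, T, F], [F, F, F, F], [F, F, F, F]]
noise  = [[F, F, T, F], [F, F, T, F], [F, F, F, T], [F, F, F, F]]
blocks = [[F, F, F, F], [F, T, T, F], [F, F, F, F], [T, F, F, F]]

counts = overlap_map([ela, noise, blocks])
print(hotspots(counts))  # pixels where at least two filters agree
```

The isolated flags from a single filter are exactly the false positives the article warns about; the overlapping region is where a human should zoom in.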
You can also ask an LLM to describe an image in depth: this is very useful because models tend to write cautiously, so you get a description of the image that is free of your personal biases.
Three things to do according to Poynter – Olivia Sohr is news initiatives director and Franco Piccato is executive director at Chequeado, a fact-checking organisation based in Buenos Aires, Argentina. They shared three uses of AI tools for fact-checking:
While AI can automate parts of the fact-checking process, human oversight is crucial to ensure accuracy and context. AI tools may not always understand nuance, sarcasm, or cultural references, which can lead to misinterpretation. Therefore, if you decide to use AI tools for your fact-checking process, you should use them as a first layer of fact-checking, followed by thorough review and analysis by human journalists. This ensures that the final judgement on accuracy remains with trained professionals.
Moreover, as we saw in the past episodes of this course, AI tools are only as good as the data they are trained on and they can hallucinate. If the training data is biased, the AI’s fact-checking results may also be biased. If you ask a conversational chatbot without guardrails about a fact, it may fabricate the answer. It’s important to understand the limitations of the tools being used and to be critical of their outputs.
As for other aspects of our job, I recommend transparency about the role AI plays in our fact-checking process. This transparency helps build trust with the audience and clarifies how conclusions were reached. When publishing fact-checked content, consider including a note explaining how AI was used in the process, and outline the steps taken to ensure the information’s accuracy.
Alberto Puliafito is an Italian journalist, director and media analyst, Slow News’ editor-in-chief. He also works as digital transformation and monetisation consultant with Supercerchio, an independent studio.