Editor’s note: We are republishing one of the emails from The Fix’s new AI newsletter course by Alberto Puliafito, which offers perspective and practical advice on artificial intelligence for news leaders. You can subscribe for free to access the whole course.
AI can assist journalists in uncovering stories, analysing large datasets, and improving the accuracy and speed of research. Let’s look at specific ways to achieve this – and consider the ethical implications of using these powerful tools.
Research and news gathering are at the heart of journalism. Traditionally, these tasks have been time-consuming and labour-intensive, but AI has the potential to streamline these processes, allowing journalists to focus more on storytelling and analysis. AI can sift through massive amounts of data, identify patterns, and even predict newsworthy events before they happen.
This doesn’t in any way replace the human approach to sources. Instead, it can save you time that you can dedicate to what will always remain human: relationships with sources, connecting the dots, talking to people.
Gemini, Claude and ChatGPT are not search engines. They can hallucinate, fabricate facts and make mistakes. But you can force them to search the web and cite their sources – provided those sources haven’t blocked AI crawlers through robots.txt (a topic we dig into in the eighth and final episode of this course) – and then double-check those sources. As output, you will receive both answers and links.
For example, you could use a prompt like this:
“Search the web for recent articles on [insert topic]. Provide a summary of the key points and include the links to the sources, ensuring they are up-to-date and reliable. Cross-reference them with one another for accuracy.”
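As mentioned above, chatbots can only quote pages whose publishers haven’t blocked AI crawlers through robots.txt. If you want to check this yourself for a given site, a few lines of code are enough. The sketch below is a minimal example in Python; the site and article URLs are hypothetical placeholders, and the crawler names are user agents documented by some AI vendors at the time of writing (they may change).

```python
# A minimal sketch, in Python, for checking whether a site's robots.txt
# blocks common AI crawlers. The URLs below are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://example-news-site.com"    # hypothetical publisher
PAGE = SITE + "/2024/05/some-article"     # hypothetical article URL

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # downloads and parses the robots.txt file

# User agents used by some AI crawlers; check each vendor's documentation,
# as these names can change over time.
for agent in ["GPTBot", "ClaudeBot", "Google-Extended", "*"]:
    status = "allowed" if rp.can_fetch(agent, PAGE) else "blocked"
    print(agent, "->", status)
```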
You can also use tools like Perplexity that are made for search.
Perplexity is designed to minimise hallucinations, to provide a list of links to reliable sources, and to give you a summary of those sources. You can converse with the results, use them to go further, to find related information and so on. You still have to double-check both the sources and the summary. I use it for Artificiale, the newsletter I write for Internazionale: the newsletter has a content curation section, and Perplexity is my assistant for that section, significantly cutting my search time.
In general, AI tools can handle routine research tasks such as collecting background information, summarising documents, or pulling together statistics. This allows journalists to focus on more complex tasks that require human judgement and creativity.
In short, while AI tools can assist in gathering information, it remains crucial to manually verify their outputs for accuracy.
AI can process vast quantities of information far more quickly than a human can. This includes scanning documents, social media, public records, and other data sources to identify trends, patterns, or anomalies that could lead to a story.
Tools like ChatGPT or other chatbots can be used to analyse spreadsheets or documents, extract patterns and so on. For example, I’m using my ChatGPT account to analyse the results of a FOIA request I filed, looking for connections among the documents.
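If you prefer to script this kind of analysis rather than work in the chat interface, the same idea can be expressed in a few lines of code. The sketch below is a minimal illustration, assuming the official openai Python library and placeholder file names for plain-text copies of the documents; it is not the exact workflow described above.

```python
# A minimal sketch: asking a language model to surface possible connections
# among documents obtained through a FOIA request. File names are placeholders;
# assumes the official `openai` library and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

files = ["foia_reply_1.txt", "foia_reply_2.txt", "foia_reply_3.txt"]
documents = []
for path in files:
    with open(path, encoding="utf-8") as f:
        documents.append("--- " + path + " ---\n" + f.read())

prompt = (
    "You are assisting an investigative journalist. Read the documents below "
    "and list possible connections among them: recurring names, dates, contracts "
    "or inconsistencies. Quote the exact passages you rely on.\n\n"
    + "\n\n".join(documents)
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whatever your account offers
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

# Every connection the model suggests still has to be verified against
# the original documents before it goes anywhere near a story.
```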
At Slow News, we used Pinpoint to dig into a large collection of documents that served as sources for a podcast about the Bologna massacre. We also made the collection available to the public.
Of course, you always have to evaluate whether the documents you are analysing are public, whether you can analyse them with online tools, or whether you need offline tools to protect your sources.
AI can assist in the verification of information by cross-referencing data from multiple sources, checking for consistency, and identifying potential falsehoods. This can significantly speed up the fact-checking process, which is crucial in today’s fast-paced news environment.
For example: “Trace the origin of this statement [insert statement] and provide a timeline of when and where it was first mentioned. Identify any changes or alterations in the way it has been reported over time.”
AI can analyse the sentiment of social media posts, articles, and public statements to gauge public opinion on various issues. This can give journalists a deeper understanding of how different topics are perceived by the public. You can use generative AI tools to interpret open-ended answers and to conduct in-depth sentiment analysis.
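As an illustration, here is a minimal sketch of what LLM-based sentiment analysis of open-ended answers could look like in code. It assumes the official openai Python library and a few invented example answers; a real project would also need sampling, a clear codebook and validation against human coding.

```python
# A minimal sketch: classifying the sentiment of open-ended answers with an LLM.
# The answers are invented examples; assumes the official `openai` library.
from openai import OpenAI

client = OpenAI()

answers = [
    "The new tram line finally makes my commute bearable.",
    "Another promise from the city council that will never be kept.",
    "I don't really have an opinion about it.",
]

for answer in answers:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                "Classify the sentiment of this answer as positive, negative "
                "or neutral, and explain in one sentence why:\n\n" + answer
            ),
        }],
    )
    print(answer, "->", response.choices[0].message.content)
```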
AI can help uncover connections between seemingly unrelated pieces of information. By analysing patterns and relationships within data, AI can reveal new angles on a story or identify key players that might not have been apparent initially. This can be done by asking chatbots to find connections among documents or to evaluate them.
For example, after uploading several documents about the Smart Control Room in Venice (a topic I’m investigating) to my newsroom’s ChatGPT account, I asked the AI whether the SCR has in some way been “oversold” as a magic solution to Venice’s problems.
Ensuring data privacy, ethical use and transparency – when using AI to gather and analyse data, it’s essential to consider the privacy implications. Journalists must ensure that the data they are using has been ethically sourced and that they are not violating privacy laws or journalistic ethics. Always verify the source of the data and ensure it was obtained legally and ethically. Be transparent with your audience about how AI was used in your research.
Ethical use case: An acceptable use of AI in journalism could be using AI tools to analyse publicly available datasets, such as government reports, open-source research, or public domain archives. For instance, using AI to cross-reference public financial records to uncover discrepancies in corporate tax filings would be a responsible and ethical use of AI, provided that the data is legally accessible and used transparently. Another ethical use is applying AI to open-source intelligence (OSINT) to track down critical information available in the public domain, such as geolocated videos or satellite imagery, to verify facts in investigative reporting.
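As a concrete illustration of the financial-records example, the sketch below cross-references two public datasets with pandas and flags large gaps between the figures. The file names, column names and the 20% threshold are hypothetical; real records would need cleaning and far more careful matching before any conclusion could be drawn.

```python
# A minimal sketch: cross-referencing two public datasets with pandas.
# File names, column names and the threshold are hypothetical placeholders.
import pandas as pd

# Revenue declared in public tax filings: company_id, declared_revenue
tax = pd.read_csv("public_tax_filings.csv")
# Revenue reported in another public source: company_id, reported_revenue
register = pd.read_csv("procurement_register.csv")

merged = tax.merge(register, on="company_id", how="inner")

# Flag companies whose figures diverge by more than 20% – a starting point
# for reporting, not a finding in itself.
merged["gap_ratio"] = (
    (merged["declared_revenue"] - merged["reported_revenue"]).abs()
    / merged["reported_revenue"]
)
discrepancies = merged[merged["gap_ratio"] > 0.20]

print(discrepancies.sort_values("gap_ratio", ascending=False).head(20))
```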
Unethical use case: Using AI tools to attempt to access protected or private accounts, such as trying to penetrate password-protected databases or social media profiles, would violate ethical standards and potentially break privacy laws. Journalists must respect the privacy of individuals and should not use AI or other tools to hack or bypass security measures. There are, of course, situations where journalists might need to push the boundaries, but always within the framework of core ethical principles. These cases typically involve a high public interest that justifies a deeper investigation or the use of unconventional techniques, but with careful consideration and transparency.
Avoiding over-reliance on AI – while AI can significantly enhance research, it’s important not to become overly reliant on these tools. AI is not infallible, and its findings should always be corroborated with human judgement and additional research. Use AI as an assistant, not a replacement. Ensure that all AI-generated insights are thoroughly checked and validated by journalists before being integrated into a story.
Alberto Puliafito is an Italian journalist, director and media analyst, and Slow News’ editor-in-chief. He also works as a digital transformation and monetisation consultant with Supercerchio, an independent studio.