In journalism, the where and the when of a photo or video aren’t just metadata — they’re often the key to the entire story.
Geolocation and chronolocation – pinning down the time something happened – are fundamental to journalism and investigative reporting. Whether you’re verifying footage from a war zone, debunking a viral video, or cross-referencing witness testimonies, being able to anchor visual content to a specific place and time turns speculation into fact.
To do that, journalists rely on a series of visual clues: fixed geographic markers like buildings, natural features, architectural styles; human infrastructure such as road signs, shop fronts, graffiti, and public transport. For the when, we often turn to shadow angles, weather conditions, traffic patterns, or even the position of the sun — all of which can be used to estimate the time of day or cross-check the authenticity of footage.
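Shadow-based chronolocation can also be cross-checked computationally. As a minimal sketch, assuming the third-party pysolar library and purely illustrative coordinates and timestamps (none of these values come from the article), you can compare the sun’s computed position at a claimed capture time with the shadow direction visible in the image:

```python
# Minimal sketch: cross-check a claimed capture time against the sun's position.
# Assumes the third-party "pysolar" library (pip install pysolar); the coordinates
# and timestamp below are illustrative placeholders.
from datetime import datetime, timezone, timedelta

from pysolar.solar import get_altitude, get_azimuth

# Approximate coordinates of Bologna's Santo Stefano complex (illustrative).
LAT, LON = 44.4920, 11.3480

# Claimed capture time: 15:00 local time (UTC+2 in summer).
claimed_time = datetime(2024, 6, 21, 15, 0, tzinfo=timezone(timedelta(hours=2)))

elevation = get_altitude(LAT, LON, claimed_time)  # degrees above the horizon
azimuth = get_azimuth(LAT, LON, claimed_time)     # degrees clockwise from north (recent pysolar versions)

print(f"Sun elevation: {elevation:.1f} deg, azimuth: {azimuth:.1f} deg")

# Shadows point roughly opposite the sun's azimuth; if the shadows in the photo
# disagree with this direction, the claimed time (or the image itself) is suspect.
if elevation <= 0:
    print("The sun is below the horizon: a daylight photo at this time is impossible.")
```

This is the kind of quick sanity check an investigator can run against any photo whose time and place are claimed but not proven.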
These methods have powered some of the most groundbreaking investigations in recent years — from uncovering war crimes to exposing state propaganda. But they all rest on one crucial assumption: that the image itself is a reliable witness — and that if it isn’t, we can at least spot something strange in it. A glitch in the pixel pattern, an inconsistent shadow, a distorted reflection — something that tells us: look closer.
With AI-generated visuals, that assumption is no longer safe.
We already know deepfakes are a problem, and with the rise of multimodal generative AI models we’ve entered a new phase of visual realism at very low cost.
With the arrival of ChatGPT 4o’s image generation capabilities, the already murky waters of AI-generated visual content just got deeper. One of the most intriguing — and worrying — use cases? Geolocation and chronolocation hacks. You can take a photo at night, ask an AI to make it daylight, wait a few seconds – or minutes, if the servers are full – and then watch search engines confidently confirm the fake as if it were the real thing.
Let’s see how this works, what risks it creates for journalists and audiences, and what media organisations can do to navigate this new terrain.
Here’s the trick: take a real photo of a well-known location at night, like this one, in Bologna’s Santo Stefano complex.
Then, in ChatGPT 4o, you upload the photo and simply ask: “Make this picture daylight”. It’s a multimodal prompt: you can combine different content formats (in this case, an image and text) in a single request.
And voilà — the model hallucinates sunlight, fixes shadow orientation, adds blue skies, and smoothes away lighting inconsistencies.
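Worth noting: the same relighting trick doesn’t even require the chat interface. As a rough sketch, assuming access to OpenAI’s Images API with the gpt-image-1 model (the API counterpart to ChatGPT’s image generation; the model name, parameters and file names here are illustrative and may differ from the article’s workflow), the whole manipulation fits in a few lines:

```python
# Sketch only: a programmatic version of the "make it daylight" prompt.
# Assumes the official "openai" Python SDK, the gpt-image-1 image model, and
# an OPENAI_API_KEY environment variable; file names are placeholders.
import base64

from openai import OpenAI

client = OpenAI()

with open("santo_stefano_night.jpg", "rb") as night_photo:
    result = client.images.edit(
        model="gpt-image-1",
        image=night_photo,
        prompt="Make this picture daylight",
    )

# The API returns the edited image as base64-encoded data.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("santo_stefano_daylight.png", "wb") as out:
    out.write(image_bytes)
```

The point of the sketch is scale: what takes a few clicks in a chat window can just as easily be scripted over hundreds of images.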
You’re left with an image that’s not just believable, but recognisable. Look closely and there are still artifacts that suggest something is wrong with the image, and in a way that’s the good news. Magnify, for example, the person in the red skirt who appears to be standing in front of the church’s door: that figure is a telltale sign that something is off.
But here’s the point: if you reverse image search the AI-generated daylight version on Google, the platform “confirms” the location. The AI didn’t just create a fake; it created a fake that passes as real to another machine.
This is not theoretical. I tested it. It works, and you can test it by yourself.
Just go to the Google homepage and click the stylised camera icon (Google Lens) on the right of the search box.
Then drag and drop the image. If the place is well known, Google’s answer will be almost instantaneous.
This kind of manipulation raises obvious red flags for disinformation, but it also highlights subtler, more insidious issues for our industry: these tools are spreading fast, and producing this kind of fake is cheaper than ever, a trend that will probably continue.
We’re not powerless. But we do need to update our practices, fast. Here are a few suggestions for newsrooms, editors, and anyone working with visuals, setting aside the idea of watermarking fake images. Yes, I know it’s the majority view. That doesn’t make it a good idea: after all, should the watermark go on fake banknotes or on authentic ones?
So we need to create an ecosystem of properly tagged images.
Even if an image is accurate, showing when it was taken helps create accountability and prevents misinterpretation.
Many services claim to identify AI-generated images, but they’re far from perfect. Use them as signals, not verdicts.
If you publish a photo, explain where it came from, who took it, and whether it has been modified. This builds trust — and helps other journalists verify your work.
The more we talk about this openly — not just as a tech story, but as a journalistic challenge — the more resilient we’ll be.
To truly address the issue at scale, we need more than newsroom best practices: we need infrastructure. One path worth exploring? A certification system for authentic images, not for fake ones! If the focus is on watermarking generated images, we have to keep in mind that a screenshot, or re-sharing the picture as an image rather than as the original file, strips all the metadata. ChatGPT says that 4o Image Generation embeds a C2PA code inside its images. But, again, it’s very easy to get around.
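It’s easy to see why a screenshot or a simple re-save defeats embedded provenance data. As a minimal illustration, assuming the Pillow imaging library and a hypothetical tagged.jpg file, merely re-encoding an image discards its EXIF metadata; a screenshot, likewise, never contains the original file’s C2PA manifest at all:

```python
# Minimal illustration: re-saving an image silently drops its embedded metadata.
# Assumes the Pillow library (pip install Pillow); "tagged.jpg" is a placeholder
# for any photo that carries EXIF/provenance metadata.
from PIL import Image

original = Image.open("tagged.jpg")
print("Original EXIF tags:", len(original.getexif()))   # e.g. dozens of tags

# Save the pixels to a new file without explicitly passing the metadata along.
original.save("reshared.jpg", quality=90)

reshared = Image.open("reshared.jpg")
print("Re-shared EXIF tags:", len(reshared.getexif()))  # typically 0

# A screenshot behaves the same way: the pixels survive, the provenance does not.
```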
Imagine instead an open, interoperable standard, something akin to a Creative Commons for credibility, where metadata is hashed, timestamps are locked, and any post-processing or generative elements are clearly flagged.
This wouldn’t stop disinformation outright — but it would create a new layer of accountability and traceability, which is desperately needed.
Journalism has always relied on sources. Maybe now it’s time to treat images like sources, too — and demand the same level of transparency.
By anchoring metadata — such as timestamp, GPS coordinates, camera device ID, and any edits applied — to a public, tamper-resistant ledger, blockchain could offer a way to cryptographically prove the authenticity and integrity of an image from the moment it was captured. This doesn’t mean every photo would become an NFT or need to live on the blockchain. Rather, it would involve registering a hash (a unique digital fingerprint) of the image and its metadata at the point of capture. Any manipulation or modification would change the hash, making tampering instantly detectable.
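As a minimal sketch of that idea (no particular blockchain or registry implied: the “ledger” below is just an in-memory dictionary standing in for a tamper-resistant registry, and the file names and metadata values are placeholders), registering a fingerprint at capture time and re-checking it later could look like this:

```python
# Sketch: fingerprint an image plus its metadata at capture time, verify later.
# The "ledger" is a plain dictionary standing in for a public, tamper-resistant
# registry (blockchain or otherwise); file names and values are placeholders.
import hashlib
import json

def fingerprint(image_path: str, metadata: dict) -> str:
    """SHA-256 over the raw image bytes plus a canonical JSON dump of the metadata."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    canonical_meta = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(image_bytes + canonical_meta).hexdigest()

# At capture time: register the fingerprint.
ledger = {}
metadata = {
    "timestamp": "2024-06-21T15:00:00+02:00",
    "gps": [44.4920, 11.3480],
    "device_id": "example-camera-001",
}
ledger["santo_stefano.jpg"] = fingerprint("santo_stefano.jpg", metadata)

# At verification time: recompute and compare. Any change to the pixels or the
# metadata changes the hash, so tampering is immediately detectable.
def verify(image_path: str, metadata: dict) -> bool:
    return ledger.get(image_path) == fingerprint(image_path, metadata)

print("Authentic:", verify("santo_stefano.jpg", metadata))
```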
Projects like C2PA (Coalition for Content Provenance and Authenticity), backed by Adobe, Microsoft, BBC and others, are already working toward open standards that could integrate such cryptographic proof chains — with or without blockchain as the underlying infrastructure. Blockchain isn’t a silver bullet. But it could be a valuable piece in a broader infrastructure of source transparency, especially when used in combination with secure hardware and verified device identities.
Source of the cover photo: generated by OpenAI’s ChatGPT, DALL·E
Alberto Puliafito is an Italian journalist, director and media analyst, and Slow News’ editor-in-chief. He also works as a digital transformation and monetisation consultant with Supercerchio, an independent studio.