Tech

Deepfakes and disinfo technology: An interview with Google News Lab fellow Marek Miller

We asked Google News Lab fellow Marek Miller about the risks presented by deepfakes, doctored videos so realistic even experts can’t tell them from reality. He wasn’t optimistic.

Marek Miller is a Google News Lab fellow focused on training media on fact-checking and disinformation tools. Before joining Google, he spent almost 15 years in media and journalism, working for some of Poland’s biggest publishers, primarily Polska Press Group. He has collaborated with the Dallas-based media organisation INMA (International News Media Association) and was a Coaching and Leadership Fellow at the Poynter Institute in Florida.

We asked Marek about the risks presented by deepfakes, doctored videos so realistic even experts cannot distinguish them from the real thing, as well as the threats and opportunities of technology in the field of counter-disinformation.

This interview has been edited and condensed.

MM: I always tell people in my trainings that the best fact-checking tool they have is their own mind. In the end, there will always be a human behind it.

So how far can technology help us? It depends on the kind of work you do, because sometimes you don’t need any technology at all, just your own critical thinking. Then there are things like well-crafted fake images or videos, where technology becomes necessary, especially for people who have no idea how to deal with them. Finally, there are things the human eye cannot catch, like the deepfakes you mentioned.

There was a big story in the Washington Post about the work of the most advanced AI researchers in the United States. They say they are totally outgunned in the fight against deepfakes. Unfortunately, that sounds very pessimistic. Of course, technology will play a big role in the fight against deepfakes, but it still seems to lag behind the technology that allows people to create them.

Of course, there will be attempts to build an algorithm, a so-called deepfake antivirus or whatever you want to call it. But as far as I’m concerned, where there is action, there is always a reaction. Even if we build something to spot deepfakes, there will be a counter-reaction against it, so I’m rather pessimistic.

I’m speaking theoretically here; I don’t want to sound like I see no solution to this problem. I can see how dangerous deepfakes are to journalism. But I’m not in the group of people who are totally scared of deepfakes.

Everybody is so scared of deepfakes, warning that dark times are coming and that there is no solution. But I think we should also focus on dangers coming from other directions, like the slowed-down video of Nancy Pelosi, which had nothing to do with deepfakes. It was an easy way to manipulate people and play with their emotions. Considering that deepfakes are still rather expensive and time-consuming to create, I wonder whether they will be the real threat, or whether we should focus on other ways of being manipulated. The danger is coming from every direction.

ZP: Currently, deepfakes are more of a show. Will they be scaled and become more disruptive?

MM: For me, it’s not a matter of scale but of money. I can imagine deepfakes becoming a weapon for high-profile business groups or others interested in lobbying for something or influencing large groups of people. We may see deepfakes linked to government actions from countries like Russia, Iran or China. For now, though, they probably won’t be an everyday occurrence.

It also depends on what you mean by scale. If you mean a huge flood of deepfakes pouring in from everywhere at some point, I don’t believe that will happen. But I do believe we will see a deepfake that totally messes with our minds: something close to perfect that pops up just one day before a major election, with no time for the person featured to react. That could completely change the outcome.

This could happen in a situation where there are two candidates and one is leading by 4% of the vote. Such deepfakes would appear from time to time. That’s just my prediction, but it’s how I see the threat developing.

ZP: Let’s move to the other side. How can technology help fight misinformation and disinformation?

MM: I hear Facebook and Google talking about AI for tracking fake information in text and elsewhere. But as far as I know, the main tool Facebook has is run by the man who owns the website Lead Stories. He created an algorithm, I believe it’s called Trendolizer, that follows trending stories on Facebook. His team follows the top stories and verifies whether they are true.

But even with an algorithm tracking trending stories, there is still a huge human factor behind it: people checking and looking deeply into stories to see whether they are true. Humans are still highly involved.
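The division of labour Marek describes, an algorithm surfacing the fastest-spreading stories while humans do the actual verification, can be sketched roughly like this. All names, data and thresholds here are illustrative; Trendolizer’s real implementation is not public.

```python
# Toy sketch of a trend-surfacing pipeline: rank stories by share velocity
# and queue only the fastest-growing ones for human fact-checkers.
# Everything here is hypothetical, not Trendolizer's actual logic.

def shares_per_hour(story):
    """Share velocity: total shares divided by hours since publication."""
    return story["shares"] / max(story["age_hours"], 1)

def stories_for_review(stories, top_n=3):
    """Return the top-N fastest-spreading stories for human verification."""
    return sorted(stories, key=shares_per_hour, reverse=True)[:top_n]

stories = [
    {"title": "A", "shares": 12000, "age_hours": 4},   # 3000 shares/hour
    {"title": "B", "shares": 500,   "age_hours": 1},   # 500 shares/hour
    {"title": "C", "shares": 90000, "age_hours": 48},  # 1875 shares/hour
]
queue = stories_for_review(stories, top_n=2)  # → stories A and C
```

The point of the sketch is the hand-off: the algorithm only prioritises; whether a story is true or not is still decided by the people working through `queue`.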

I remember an attempt by OpenAI, a company co-founded by Elon Musk and backed by Microsoft. They were working on an algorithm that would automatically track fake information in text. As far as I remember, they traced two million articles and the algorithm backfired: it learned how to create fake information on its own. The project was killed. I don’t know how much of Elon Musk’s press conference was PR, but he said he was killing the project because we as people are not ready for such a solution.

This was a kind of “deeptext”, I would say: an example of a machine thinking and creating articles on its own.

I still believe there is huge space over the next 4-5 years for humans, not algorithms, to deal with fake news in text. We will be backed up by algorithms.

For example, in finding the most-shared stories. I believe Facebook and Google will focus on this: tracking the most popular stories within their ecosystems, with humans verifying them. I cannot yet imagine a machine that could follow everything in text and tell us whether it is true.

ZP: How does Google counter misinformation and disinformation?

MM: Alright, but I’m speaking from the Google News Lab perspective, not for Google in general. Some background: I’m a Google News Lab teaching fellow, which is just one part of Google. So I can’t give you a lot of insight, because I’m not directly inside Google; others may be better placed to discuss this.

There are several ways Google deals with fake information. One is partnering with fact-checking organisations all over the world; Poynter’s International Fact-Checking Network is a huge partner of Google News Lab. Take a look at what was formerly known as the DNI, the Digital News Initiative: it was also designed to help fact-checking organisations.

So Google is fighting fake news and giving people the means to build tools against disinformation. But these are rather passive ways of fighting it. In terms of real tools, two come to mind. One is the Google Fact Check Explorer: a special search bar where you can look up news that has been debunked. It works through Google’s cooperation with fact-checking organisations.

When you come across some information that bothers you and you don’t remember whether it was debunked or not, you can use this special search bar. It looks exactly like the Google search bar: you just type a query resembling the title of the article you want to trace, and Google, in partnership with different fact-checking organisations, helps you find out whether the article has been debunked.
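The same debunk lookup is also exposed programmatically through Google’s Fact Check Tools API (the `claims:search` endpoint). A minimal sketch of building such a request follows; the API key is a placeholder, and the function only constructs the request URL rather than sending it, so the query string and parameter names can be inspected without network access.

```python
from urllib.parse import urlencode

# Public endpoint of the Google Fact Check Tools API.
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def fact_check_url(query, api_key="YOUR_API_KEY", language="en"):
    """Build a claims:search request URL for a claim you want to check.

    A real call (e.g. via urllib.request.urlopen) returns JSON with
    matching claims and their published fact-check reviews.
    """
    params = urlencode({"query": query, "languageCode": language, "key": api_key})
    return f"{ENDPOINT}?{params}"

url = fact_check_url("vaccines cause autism")
```

With a valid key, fetching that URL returns the fact-check articles partner organisations have published about the claim, which is the machinery behind the Fact Check Explorer search bar described above.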

And there is something that already operates in the United States and is coming to Europe, though not soon, called the Google fact-check tag. For example, if you are in the US and search for something like “the effect of vaccines on autism”, since there is no link between the two, Google has decided to clear out every search result that would lead you to think there is a connection. If you type such a query into the search bar, you will see how this information was debunked and what the science says about vaccines, but no articles claiming a connection between vaccines and autism.

That’s how Google is fighting disinformation: once science and very credible sources debunk a story, it disappears from the search results. Take the flat-earth stuff. Everybody knows the earth is not flat; it’s a crazy movement, but it’s huge. On Google, you will find no information promoting the flat earth, just information about how crazy the story is, how it was debunked, and so on.

It’s called the Google fact-check tag and it will come to Europe. I don’t know when, but it’s coming; it’s already fully operational in the US.
