Recently, Russian oligarch and close Putin ally Yevgeny Prigozhin half-jokingly admitted to meddling in US elections. However big or small the impact of the Prigozhin-funded troll farm has been, it’s obvious that disinformation threats are on the rise, and malicious actors have an increasingly sophisticated toolkit.
One of the tools developed to track and combat disinformation campaigns and hate speech worldwide is Beam, a capability developed by the UK-based Institute for Strategic Dialogue (ISD) and CASM Technology. Beam works in 17 languages and reveals the scale of harmful campaigns on different continents. While Beam’s capabilities are not available publicly, the team behind it is prepared to cooperate with journalists, inviting them to get in touch and share ideas.
The Fix spoke to Carl Miller, founder of CASM Technology and co-developer of the product, about how Beam works and how it can power journalists’ investigations.
Technically speaking, Beam is branded “a multi-lingual, multi-platform capability to expose, track and confront information threats”. Carl Miller says that the product is “a number of different things all coming together”, but it all boils down to three main layers.
Beam allows journalists to perform multiple functions: to gather data, analyse it, aggregate it into groups, and visualise it.
Depending on the journalist’s goal, they can deploy Beam in two different ways. The first is continuous monitoring of online platforms, certain forms of language, channels and spaces to identify emerging information threats. For example, you may set up alerts to get notified of any spikes above a set threshold, or maintain a regularly updated dashboard to monitor, for instance, hate speech against women on Twitter.
Carl Miller provides an example the team made with the BBC: “We built a dashboard which looks at Chinese state media and diplomatic accounts. Over a very long time with BBC Monitoring, we trained algorithms across four languages to split the narratives as accounts and messages into ten top-level themes. And that shows you then how those accounts have spoken about those narratives over the last year, which regions talked about geopolitics, which regions talked about the economy, which regions talked about Chinese culture and issues to do with its people and language, and so on. And then it lets journalists tunnel into all that data and find what’s important to them to write a story about”.
The second is a deeper investigation, where specific influence campaigns are researched and scrutinised to identify their targets, effects, methods and possible success.
So far, the technology is only text-based and cannot yet find patterns in pictures. Miller says that the team is aware it needs to work on this feature, but for the foreseeable future, Beam will be text-based.
Beam is not open to the public for two main reasons, Carl Miller says. First, the team prefers to work closely with journalists, training them to use the technology and helping with their investigations, so it cannot support thousands of users. The second reason is security: the team doesn’t want adversaries to adapt to the technology’s detection methods.
As for joining Beam, Miller encourages journalists to “just get in touch” and discuss whether there are any ideas where the technology would be helpful (a contact form is available at the link, and Carl Miller’s email address is firstname.lastname@example.org). “It’s an open invitation because the whole point really was to skill up and equip civic society responses to information threats”, says Carl Miller. Now, as Beam becomes more public, the team will work on a more structured form of engagement, onboarding and training.
Not really. As Carl Miller tells The Fix, the most important requirement is to be a subject matter expert who cares about their topics and knows a lot about them. “We can’t do anything about climate disinformation without climate experts. We can’t do anything about pro-Russian influence operations without people that understand the myriad geopolitics, political and military issues at stake”, says Miller.
He believes that data journalists have the most valuable skill set for Beam. Miller clarifies that it may not be data scientists who “spend their whole day manipulating data” but people who can ask the right questions and know how to generate a lead and then chase it down.
Another group of people essential for the project are open source intelligence (OSINT) experts who may conduct targeted investigations using a separate suite of tools after finding a story in the data Beam provides. They can uncover the possible identities, motivations, and structures behind online campaigns.
The team shared in its public report that Beam has supported over 350 civil society organisations across ten countries in confronting information threats. It has also produced over 90 non-public data briefings for partners, 28 public investigations, 150 media exposés of disinformation and 15 reports of credible threats to the authorities.
Carl Miller shares an example showing how data journalists and OSINTers may work with the same Beam output differently. In the spring of 2022, the team researched the origins and spread of hashtags #IStandWithRussia and #IStandWithPutin, which appeared on social media after the beginning of Russia’s full-scale invasion of Ukraine.
Beam provided data to journalists at The Economist and the BBC. The first was a data journalist, who did a piece on the regional clusters where these hashtags spread and on the influence the campaign had. The journalist not only researched the suspicious accounts spreading these hashtags but also looked at their followers and found another aspect of the story: “After suspicious accounts posted pro-Russian content, the share of their followers’ tweets favouring Russia also tended to rise. In contrast, the suspicious accounts’ activity did not change in response to posts by their followers or by users they followed”, The Economist wrote at the time.
In comparison, the journalists from the BBC took a more OSINT-style approach: they tracked down the people whose images were used by some of these accounts and contacted them. They discovered that many images had been stolen, and those people had nothing to do with the pro-Russian hashtags. But they also found some people who were actual participants in the campaign and held pro-Russian beliefs.