In the past two decades, Wikipedia has been a huge success story – the unlikely idea of a volunteer-built knowledge compendium materialised into a 320-language online encyclopedia that millions of people use every day.
The financial model behind the resource has also worked well so far. The Wikimedia Foundation nonprofit has been able to attract enough donations for Wikipedia and the broader ecosystem around the website to thrive.
Yet “one of the sturdiest survivors of the old web” faces some long-term challenges its community needs to tackle – from information warfare and generative AI to government regulation that targets tech giants but could inadvertently harm Wikipedia. Let’s look at four threats the Wikimedia community has to counter.
The Wikimedia Foundation’s most recent draft annual plan points to declining readership in the United States and other Global North countries as one of the adverse trends it has to grapple with. “In the United States, we observed a -7% decline in average monthly unique visitors between Jan-Dec 2019 and Jan-Dec 2021”, the document notes. This decline is already among the reasons the nonprofit has to cut expenses in the next fiscal year, since it relies predominantly on banner fundraising in wealthy Western countries to raise money.
The foundation is in solid financial shape, and this trend doesn’t pose an existential problem today. Yet, declining readership, coupled with “weak” brand health among young audiences in the West, is a long-term risk.
Part of the reason is the turn to video and audio among young consumers (more on this below). Another is the integration of Wikipedia content into services like voice assistants and “knowledge panels” in the Google search interface, where users can get information without visiting Wikipedia itself – and thus without seeing a fundraising banner asking them to donate.
The Wikimedia Foundation has plans in place to counter these risks, including a new paid service that lets big tech companies share some of the gains they derive from using Wikipedia’s content. But this comes with its own set of risks – Wikipedia shouldn’t become beholden to big tech – so online fundraising remains a core revenue source.
Half a year after ChatGPT’s public launch, it’s clear that generative AI will have a significant impact on the media business – from short-term effects like simplifying content production to long-term threats to the news media funding model, which relies heavily on search traffic.
The rise of generative AI might be a double-edged sword for Wikipedia. On the one hand, it can make the encyclopedia even more valuable to readers: as AI makes text creation considerably easier, Wikipedia’s system of safeguards might offer a safe haven from the flood of AI-generated text elsewhere on the web.
On the other hand, AI might further accelerate a trend that’s already underway – the turn to audio and video as more engaging and authentic media. Wikipedia, despite recent technical advancements, remains a fundamentally text-based resource where richer formats play a small role. While Wikipedia doesn’t have serious direct competitors in the “market” of online encyclopedias, YouTube and TikTok might eat into the website’s popularity and relevance.
In the shorter term, generative AI might also make it easier for bad actors to overwhelm the community of volunteers running Wikipedia, chipping away at the time and mental energy that could instead go towards the core task of building an encyclopedia.
A year ago, researcher Carl Miller posed a theoretical scenario in an interview with The Fix: “…if I wanted to, say, exhaust a particular community defending a particular page, I could start to build automated or semi automated processes to drag as many members of that community as I could into a whole series of bruising, nasty, exhausting, draining discussions in an automated way”.
Wikipedia has been largely able to protect itself from malicious actions of this kind, but the increasing availability of high-quality generative AI tools might make this more difficult.
However useful generative AI might become for malicious actors, tried-and-tested bot farms aren’t going anywhere either. Wikipedia is an important target for authoritarian regimes trying to manipulate the narrative, whether for foreign or domestic audiences.
As we wrote last year, “Wikipedia has largely been able to fend off large-scale manipulation, thanks to relying on an army of volunteer editors and administrators, as well as on machine learning tools that help spot malicious changes”. Yet, “with new world challenges like the rise of Chinese authoritarianism and Russia’s all-out invasion of Ukraine, Wikipedia might become an increasingly coveted target of information warfare by malicious actors”.
The risks are more pronounced not for the English-language section of Wikipedia, which is the biggest and the most well-resourced, but for smaller language editions that might lack the same robust community protection mechanisms. Although the danger has been mostly theoretical so far, there have been reports of successful pro-China infiltration on Chinese-language Wikipedia. (While Wikipedia is blocked in mainland China, the Chinese edition is widely read in Taiwan and other Chinese-speaking territories).
Last week the US Supreme Court left Section 230 intact – a crucial piece of legislation from the late 1990s that protects online platforms from liability for material posted by their users. That was a big win not just for Google and Twitter but also for Wikipedia, which relies on a large community of volunteers to create its content and doesn’t exercise direct editorial control.
Government actions in some other countries are more problematic. Of course, some autocratic regimes are outright banning Wikipedia – as China did long ago and Russia is apparently hoping to do one day. Yet, even in democratic countries government regulation aimed at tech giants like Google and Facebook might make Wikipedia an inadvertent victim.
One notable case is unfolding in the UK, where the Online Safety Bill, if passed in its current form, could in theory make Wikipedia inaccessible in the country. As The Guardian reported in April, citing Lucy Crompton-Reid, the chief executive of Wikimedia UK, “the popular site could be blocked because it will not carry out age verification if required to do so by the bill”.
Although this specific bill can still be amended, the broader trend means that the Wikimedia Foundation and the community will need to spend a lot of time working with policymakers to explain how Wikipedia works and why it’s fundamentally different from traditional social platforms.
Source of the cover photo: Helpameout, CC BY-SA 3.0, via Wikimedia Commons