Nearly Right

From spy agency to jailbreak tool: how a stolen American exploit kit exposed Apple's security blind spot

A US defence contractor's iPhone hacking tools were sold to a Russian broker, weaponised by Chinese criminals, and repurposed by teenage jailbreakers in under twelve months. Apple took years to patch the devices most at risk.

On 12 March 2026, a screenshot began circulating on jailbreak forums. It showed a website called 34306.lol running on an iPhone, a green dot beside the words "Success! Tweaks injected." The code powering this tool had been designed to spy on dissidents. It had been stolen by an insider and sold to a Russian broker. It had been deployed against Ukrainian civilians and repurposed to drain cryptocurrency wallets. Now, in its latest incarnation, hobbyist developers were using it to liberate old iPhones from Apple's software restrictions.

The exploit kit is called Coruna. Its twelve-month journey from a secure facility in Sydney to a teenager's jailbreak website reveals how digital weapons actually move through the world, and how little anyone can do to stop them once they start moving. A single betrayal set off a chain reaction that outran every institutional response, from the defence contractor that built the tools, to the diplomats trying to regulate the market, to Apple itself, which sat on fixes for years before Google's disclosure forced the company to act.

Built for governments, sold for cryptocurrency

Coruna did not begin life on the dark web. According to reporting by TechCrunch, corroborated by two former employees, the exploit kit was developed at least in part by Trenchant, a specialised hacking division of the American defence contractor L3Harris. Trenchant has Australian roots. It was formed after L3Harris acquired two exploit development firms, Azimuth Security and Linchpin Labs, in 2018. Both had earned reputations supplying sophisticated hacking tools to the US government and its Five Eyes intelligence partners.

The exploits Trenchant produced were restricted to a small circle of allied intelligence agencies. They targeted iPhones running iOS 13.0 through 17.2.1, a range of releases spanning September 2019 to December 2023. Five full exploit chains. Twenty-three individual vulnerabilities. An arsenal capable of taking a target from an innocent webpage visit to total device compromise, no interaction required beyond opening a link.

Then Peter Williams decided to sell them.

Williams, a 39-year-old Australian, had served in the Australian Signals Directorate, the country's equivalent of the NSA, before joining the company that became Trenchant. As general manager, he had access to the very tools his employer existed to protect. Between 2022 and 2025, he used a portable hard drive to transfer at least eight exploit components out of secure networks at Trenchant's offices in Sydney and Washington, DC. His buyer was Operation Zero, a Russian exploit broker that openly markets its services to the Russian government and non-NATO clients.

Williams received $1.3 million in cryptocurrency. He sold tools his employer valued at $35 million. On 24 February 2026, a federal judge in Washington sentenced him to 87 months in prison, and the US Treasury simultaneously sanctioned Operation Zero and its owner, Sergey Zelenyuk.

By then, Coruna had already travelled far beyond Russia.

Three actors in twelve months

Google's Threat Intelligence Group first encountered fragments of Coruna in February 2025, embedded in an exploit chain used by a customer of an unnamed surveillance vendor. The code arrived wrapped in a distinctive JavaScript framework that fingerprinted each visiting device, identified its iPhone model and iOS version, then delivered a tailored exploit. GTIG recovered a WebKit remote code execution vulnerability, CVE-2024-23222, which Apple had quietly patched on newer devices in January 2024 without crediting any external researcher.
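The fingerprint-then-match pattern GTIG describes is simple to sketch. Coruna's actual selection logic has not been published; the version windows and chain names below are hypothetical, and the point is only the shape of the mechanism: parse the visiting browser's identity, then serve an exploit matched to that exact model and OS version, or serve nothing and stay invisible.

```python
import re

# Hypothetical mapping of iOS version windows to exploit-chain names.
# The real Coruna selection table is not public; this illustrates the
# general fingerprint-then-match delivery pattern only.
CHAINS = [
    ((13, 0), (14, 8), "chain_a"),
    ((15, 0), (16, 7), "chain_b"),
    ((17, 0), (17, 2), "chain_c"),
]

def ios_version(user_agent: str):
    """Extract (major, minor) iOS version from a Safari user-agent string."""
    m = re.search(r"iPhone OS (\d+)_(\d+)", user_agent)
    return (int(m.group(1)), int(m.group(2))) if m else None

def select_chain(user_agent: str):
    """Return the first chain whose version window contains the device,
    or None for non-iPhone visitors (serve nothing, leave no trace)."""
    ver = ios_version(user_agent)
    if ver is None:
        return None
    for low, high, name in CHAINS:
        if low <= ver <= high:
            return name
    return None

ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X) "
      "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Mobile/15E148 Safari/604.1")
print(select_chain(ua))  # → chain_b
```

The same discrimination that makes the attack efficient also makes it hard to study: a researcher probing from a desktop browser sees an ordinary webpage.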

By summer, GTIG noticed the identical framework on cdn.uacounter.com, loaded as a hidden iframe on dozens of compromised Ukrainian websites selling industrial equipment, retail tools, and local services. The framework targeted only iPhone users in a specific geographic region. GTIG attributed the campaign to UNC6353, a suspected Russian espionage group, and worked with Ukraine's CERT-UA to clean up the compromised sites.

The third deployment was the most reckless. In late 2025, GTIG found the same JavaScript framework spread across a sprawling network of fake Chinese financial websites, many impersonating cryptocurrency exchanges. A counterfeit WEEX site displayed pop-ups urging visitors to switch to their iPhone for the best experience. The request served two purposes simultaneously: it confirmed the visitor's interest in cryptocurrency and ensured they were browsing on a vulnerable device. Unlike the earlier campaigns, this one applied no geolocation filtering. Any vulnerable iPhone, anywhere in the world, was fair game.

It was during this phase that the Chinese threat actor tracked as UNC6691 made a mistake that unravelled the entire operation. They deployed a debug build of the exploit kit, leaving every exploit in cleartext alongside its internal codename. GTIG could suddenly see the full architecture. They learned the kit was called Coruna. And they could trace its capabilities back through every previous deployment.

The mobile security firm iVerify independently reverse-engineered the same toolkit, which it dubbed CryptoWaters. iVerify called it the first observed mass exploitation of iOS devices by a financially motivated criminal group, and estimated at least 42,000 devices had been compromised. What had started as a precision instrument for state surveillance had become, in the space of a year, a blunt tool for petty theft at scale.

A spy tool retooled for robbery

The payload at the end of Coruna's exploit chain tells the story of that transformation. Tracked by GTIG as PLASMAGRID, it was not conventional surveillance software. It injected itself into powerd, a system daemon running with root privileges on iOS, while disguising itself as com.apple.assistd, a legitimate Apple service. Once embedded, it went to work harvesting financial data.

PLASMAGRID hooked functions in at least 18 cryptocurrency wallet applications, from MetaMask and Phantom to Exodus, Uniswap, and TrustWallet. It scanned the device's photo gallery for QR codes. It parsed Apple Notes for BIP39 seed phrases and keywords like "backup phrase" and "bank account." A user who had ever photographed a recovery card or jotted a password in Notes was exposed.
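The seed-phrase hunt is more mechanical than it sounds, because BIP39 mnemonics have a rigid format: 12, 18 or 24 words, every one drawn from a fixed 2048-word list. PLASMAGRID's actual detection logic has not been published; the sketch below, using a tiny placeholder subset of the wordlist, shows how crude such a scanner can be, and equally how a defensive tool could flag notes that match the same pattern.

```python
import re

# Tiny placeholder subset of the BIP39 English wordlist (the real list
# has 2048 words); enough to illustrate the matching logic.
BIP39_SAMPLE = {
    "abandon", "ability", "able", "about", "above", "absent",
    "absorb", "abstract", "absurd", "abuse", "access", "accident",
}

def looks_like_seed_phrase(text: str, wordlist=BIP39_SAMPLE) -> bool:
    """Heuristic: a BIP39 mnemonic is 12, 18 or 24 words, all of which
    appear in the fixed wordlist. Scan each line of a note for a run
    of words that fits that shape exactly."""
    for line in text.splitlines():
        words = re.findall(r"[a-z]+", line.lower())
        if len(words) in (12, 18, 24) and all(w in wordlist for w in words):
            return True
    return False

note = ("backup phrase\n"
        "abandon ability able about above absent "
        "absorb abstract absurd abuse access accident")
print(looks_like_seed_phrase(note))  # → True
```

A scanner this simple produces false positives, but against a photo gallery and a Notes database it only needs to be right occasionally to be profitable.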

The implant's code comments were written in Chinese, and GTIG noted that some log strings appeared to have been generated by a large language model, complete with emoji. It communicated with command-and-control servers over HTTPS, and if the primary servers went dark, a fallback algorithm using the string "lazarus" as its seed generated predictable .xyz domains for the implant to phone home to. iVerify found additional modules targeting WhatsApp and iMessage beyond those GTIG identified, evidence that the kit was still in active development even as researchers were pulling it apart.
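A seeded fallback of this kind is a classic domain generation algorithm. The specific algorithm Coruna used has not been published; the sketch below only illustrates the concept, with a hypothetical construction: because both implant and operator derive the same domain list from a shared seed and the date, they can rendezvous without ever exchanging an address, and defenders who recover the seed can compute the same list to pre-register or blocklist the domains.

```python
import hashlib

def fallback_domains(seed: str, day: str, count: int = 5):
    """Deterministically derive candidate C2 hostnames by hashing
    seed, date, and an index, keeping the first 12 hex characters.
    Illustrative only: the real Coruna algorithm is not public."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".xyz")
    return domains

# Implant and operator, given the same seed and date, compute the
# same list independently; a new day yields a fresh set of domains.
print(fallback_domains("lazarus", "2026-03-04"))
```

The determinism cuts both ways, which is why publishing the seed, as GTIG's disclosure effectively did, degrades the mechanism's value to the attacker.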

The sophistication of the delivery mechanism and the crudeness of the payload sit in jarring contrast. Coruna's exploit chains represent years of engineering at the highest level of the craft, the kind of work that demands deep knowledge of Apple's security architecture and millions of dollars of investment. The financial malware bolted onto the end reads like a rush job by a separate team with blunter priorities. The elegance stops where the money starts.

Apple's three-year gap

Apple released iOS 15.8.7 on 11 March 2026, patching the Coruna vulnerabilities on the iPhone 6S, iPhone 7, first-generation iPhone SE, and several older iPads. A companion update, iOS 16.7.15, covered the iPhone 8, iPhone 8 Plus, and iPhone X. The updates were described simply as containing "important security fixes."

Apple's own security documentation tells a more uncomfortable story. Each vulnerability entry includes a line specifying when the same fix was first shipped on newer hardware. For CVE-2023-41974, the fix arrived in iOS 17 on 18 September 2023. For CVE-2024-23222, it shipped in iOS 17.3 on 22 January 2024. For CVE-2023-43000, the patch landed in iOS 16.6 on 24 July 2023.

The oldest of these fixes is nearly three years old. Apple had the patches. It chose not to apply them to older devices for years, and only did so after Google published a detailed technical disclosure with some of the exploit infrastructure still live and accessible.

This matters because the affected phones were not ancient relics when the vulnerabilities were first fixed elsewhere. The iPhone 6S was still within Apple's stated support window. The iPhone 8 was receiving regular updates. These were devices Apple had committed to protecting, running software Apple knew was vulnerable, while Apple possessed fixes it had already deployed on newer hardware.

Backporting security patches to older architectures is genuinely difficult and expensive. Apple must make resource allocation decisions, and not every fix can ship simultaneously to every device ever manufactured. This is reasonable. What is harder to defend is the duration of the gap. Two to three years is not a triage delay. It is a policy choice, and one whose cost the Coruna case makes concrete: thousands of those unpatched devices were actively exploited during the interval.

The comparison with Android's update practices, often invoked to Apple's advantage, warrants more careful examination on these terms. Google's Nexus 6P, released the same year as the iPhone 6S, received its last security update in 2018. Three years of support against Apple's eleven. Measured by longevity, the comparison is no contest. But longevity of support means little if known vulnerabilities sit unpatched for years on the devices that need protection most, the older handsets whose owners are least likely to have upgraded and most likely to be targeted.

The arms bazaar nobody can shut down

Coruna's lifecycle maps onto a pattern that arms control scholars have studied for decades. A state develops a capability for controlled use. An insider breaks the chain of custody. The capability enters a secondary market. It proliferates to actors with progressively less restraint.

iVerify drew the comparison to EternalBlue, the NSA-developed Windows exploit stolen by the Shadow Brokers and dumped publicly in April 2017. Within weeks, EternalBlue powered the WannaCry ransomware attack that infected 230,000 computers across 150 countries, crippling Britain's National Health Service and causing billions of dollars in damage. Coruna's proliferation followed a different mechanism, not a public leak but a commercial transaction, from broker to buyers and onward through channels that remain opaque. The outcome was the same. Tools built for targeted, authorised use ended up serving anyone willing to pay.

Google has acknowledged the broader policy challenge, noting its participation in the Pall Mall Process, a diplomatic initiative launched in 2024 by 25 countries. Signed by Google, Apple, Microsoft, and Meta among others, the process aims to develop international norms limiting the misuse of commercial cyber intrusion capabilities. Its model, in spirit if not in mechanism, draws on the kind of multilateral frameworks that govern the proliferation of conventional and nuclear weapons.

The analogy illuminates the problem more than the solution. Nuclear non-proliferation works, to the extent it does, because fissile material is difficult to produce and straightforward to detect. Software exploits share neither property. They can be copied onto a thumb drive, transmitted over encrypted channels, and deployed from any internet connection on earth. Once an exploit exists in transferable form, the physics of the situation favour the proliferator. Everest Group analyst Gautam Goel put it plainly: regulating a single category of vendor does little when the ecosystem includes acquisition programmes, vulnerability brokers, and secondary markets that ensure capabilities will circulate regardless of what diplomats agree to.

Williams's case illustrates the human dimension of this structural problem. He was a trusted insider, a former signals intelligence officer, occupying one of the most sensitive positions in the Western exploit development pipeline. He sold eight components for $1.3 million, roughly four cents on the dollar relative to their estimated replacement cost. The gap between what he received and what the tools were worth suggests the market for stolen exploits is not yet efficient. The gap between what Williams sold and what those exploits eventually enabled suggests the market is already devastating.

The walled garden's unexamined trade-off

Coruna sharpens a critique that security researchers have pressed for years. Boris Larin, principal security researcher at Kaspersky's GReAT team, described what he sees as a tension at the heart of Apple's approach: the same closed ecosystem that makes iPhones secure by default makes it harder for the wider security industry to provide independent protection when Apple's defences are breached.

Two of Coruna's exploits target the same vulnerabilities used as zero-days in Operation Triangulation, the iOS espionage campaign Kaspersky uncovered in 2023. Larin confirmed the overlap but rejected claims that Coruna's authors were the same. Both CVEs have publicly available implementations, he noted, meaning any sufficiently resourced actor could have developed independent exploits. His concern ran deeper. When a toolkit as comprehensive as Coruna breaks through Apple's perimeter, nothing else stands between the attacker and the user's data. On every other major computing platform, endpoint security software can detect and respond to compromise independently. On iOS, Apple insists on being the sole defender.

Goel framed the enterprise consequences. Most mobile security programmes were built around device management rather than device integrity, he observed. If an attacker achieves kernel-level access, the device can misrepresent its own health status, and the entire management layer becomes a fiction. iVerify's Rocky Cole argued that Apple should open a security framework on iOS comparable to what already exists on macOS, where third-party tools can monitor system integrity and flag anomalies that Apple's own checks might miss.

Apple has resisted this for reasons that are not purely commercial. Every API that allows a security tool to inspect the system is an API an attacker can abuse. The walled garden's restrictions serve as a security architecture, not only a business model. Opening them up could create more attack surface than it eliminates.

Coruna does not settle this argument. But it puts a price on the trade-off. Apple's security team is among the best in the world. It is also finite, making daily decisions about which devices to patch and which to defer, while adversaries with state-level budgets probe every seam in the wall. When the sole defender's coverage has gaps, there is no backup.

The code's last life

A GitHub repository maintained by developer Khanh Duy Tran now hosts the partially deobfuscated and symbolicated Coruna exploit code, credited in part to the developer known as 34306 and others including Nick Chan. The repository's description is disarmingly terse: "The leaked exploit toolkit for various iOS versions."

The speed of the transformation is striking. Google published its blog post on 4 March. Several of the indicator-of-compromise URLs it referenced were still live, meaning the actual exploit code was accessible to anyone with the skill to retrieve it. Jailbreak developers, who spend their careers studying precisely the kind of iOS vulnerabilities Coruna chains together, began repurposing the code within hours. By the time Apple shipped its patches on 11 March, web-based jailbreak tools were already running.

There is something apt about where this code has ended up. Each actor in Coruna's chain of custody used the same capability for a different purpose. Surveillance operators installed monitoring implants. Russian operatives targeted Ukrainian web users. Chinese criminals injected hooks to siphon cryptocurrency. The jailbreakers are using it for something that resembles, in bare outline, the original capability stripped of its menace: running chosen software on hardware that Apple would prefer to keep locked down.

But the story does not close with Apple's belated patches or Williams's prison sentence or the jailbreakers' weekend project. Coruna's source code is public. Its techniques have been documented by GTIG, iVerify, and independent researchers in granular detail. The specific vulnerabilities it targeted are fixed, but the methods it pioneered for bypassing pointer authentication, escaping WebKit's sandbox, and escalating through the kernel are now reference material for anyone building the next generation of iOS attacks.

Google's researchers made the point explicitly. Multiple threat actors, they wrote, have acquired advanced exploitation techniques that can be reused and modified with newly identified vulnerabilities. What has proliferated is not just a set of bugs but a body of knowledge. Vulnerabilities can be patched. Techniques cannot be unlearned. And the distance between a published technique and a working exploit is, for the people who build these tools, shorter than anyone outside the field tends to appreciate.

#cybersecurity