Nearly Right

From walkouts to warfighters: how tech's ethical resistance to military AI collapsed in eight years

Anthropic's stand against the Pentagon may be remembered not as a turning point but as the final act of a retreat that began when Google dropped Project Maven

On 5 March 2026, Dario Amodei, the chief executive of Anthropic, published a carefully worded statement apologising for the "tone" of an internal memo that had been leaked to the press. The memo, written in the frantic hours after President Trump branded Anthropic a "radical left, woke company" and ordered every federal agency to cease using its technology, had contained some unguarded language. Amodei had reportedly accused the administration of demanding "dictator-style praise" and described rival OpenAI's hastily arranged Pentagon deal as "safety theatre." Now he was walking it back, calling the memo "out-of-date" and insisting that Anthropic had "much more in common with the Department of War than we have differences."

At the very moment Amodei was softening his language, Anthropic's Claude model was doing something rather less conciliatory. Embedded in Palantir's Maven Smart System on classified military networks, Claude had reportedly generated approximately a thousand prioritised targets on the first day of American strikes against Iran, synthesising satellite imagery, signals intelligence and surveillance feeds to produce GPS coordinates and weapons recommendations in near real time. The company that was apologising for sounding too combative was, through its technology, actively assisting in combat.

This is not hypocrisy in any simple sense. The situation is stranger and more instructive than that. It reveals how completely the relationship between Silicon Valley and the American military has been transformed, and how the language of ethics has become a lubricant for, rather than a brake on, the integration of frontier artificial intelligence into warfare.

A generation that could afford its principles

There is a German short film from 1969 called Nicht löschbares Feuer (Inextinguishable Fire), made by Harun Farocki as a protest against napalm production. In it, a factory worker explains that every day he takes a part home from the vacuum cleaner factory where he works, trying to assemble a vacuum cleaner of his own. But no matter how he arranges the parts, it always becomes a sub-machine gun. The film's closing line is striking for how directly it addresses the workers themselves: what we produce depends on us.

For a brief period, parts of the technology industry seemed to believe this. The generation of engineers who entered the field between roughly 1995 and 2010 absorbed, by cultural osmosis if nothing else, a set of assumptions about the relationship between technology and state violence. Some of these assumptions were inherited from the counterculture that had shaped Silicon Valley's self-image since the 1970s. Some grew from the specific context of the Iraq war, which was grinding through its bloodiest years just as a new cohort of computer science graduates was entering the workforce. Refusing to build tools for warfare was, in certain professional circles, an ordinary moral position, no more remarkable than refusing to work for a tobacco company.

What made this stance possible was not only conviction but circumstance. The technology sector was booming on consumer revenue. Military contracts were a rounding error against advertising dollars. Engineers could afford their principles because the market did not demand their abandonment. The vacuum cleaner factory was, for the moment, actually making vacuum cleaners.

That era now looks less like a permanent transformation and more like an interlude. Silicon Valley was born from military funding. The semiconductor industry grew in the shadow of defence procurement. DARPA funded the research that became the internet, and the military's appetite for computation shaped the region's economy for decades before the consumer web arrived. The generation that recoiled from military work was not breaking with tradition. It was enjoying a holiday from it, sustained by an advertising-driven business model that temporarily made the Pentagon's money unnecessary. When that model began to mature, when growth rates slowed and new revenue streams were needed, the holiday ended.

The high-water mark

The date that matters is not February 2026 but April 2018, when more than three thousand Google employees signed an open letter to Sundar Pichai objecting to the company's participation in Project Maven, a Pentagon programme that used machine learning to analyse drone surveillance footage. The letter was blunt in a way that now reads as almost quaint. "We believe that Google should not be in the business of war," it began. A dozen employees resigned. Within weeks, Google announced it would not renew the Maven contract and published a set of AI principles that included commitments against developing weapons or surveillance that violates internationally accepted norms.

The Google walkout succeeded for reasons specific to its moment. Google's consumer brand was vulnerable to moral pressure. The contract was worth a modest fifteen million dollars, trivial against the company's revenues. Employee activism in the valley still carried real force. Most importantly, the technology itself was limited. Project Maven analysed drone footage. It did not write operational plans, generate target lists, or synthesise intelligence across multiple classified sources.

Every one of those conditions has since reversed. But the walkout's most consequential legacy was not the contract it cancelled. It was the precedent it failed to set. Google dropped Maven, but Maven did not die. The project migrated to Microsoft and Amazon. Google itself later bid for the Joint Warfighter Cloud Capability, carefully assuring everyone that its AI principles would remain central. The ethical objection did not halt the work. It merely rerouted it through companies whose employees were less inclined, or less empowered, to object.

This is the dark irony of the most celebrated act of tech worker activism in recent memory. The Google walkout demonstrated that a sufficiently motivated workforce could prevent one company from doing one thing. It also demonstrated that preventing the thing itself required something else entirely, something that no company and no workforce was in a position to provide.

Safety as scaffolding

Anthropic was founded in 2021 by former OpenAI researchers who believed that AI safety required a dedicated institutional commitment. The company's origin story was explicitly moral. Dario Amodei and his sister Daniela left OpenAI because they believed it was not taking the risks of advanced AI seriously enough. Anthropic would be different. It would be the responsible lab, the company that put safety research at the centre of its mission, the outfit that published careful analyses of how language models might be made more honest and less harmful.

This reputation was not merely decorative. It was, for a time, genuinely earned. Anthropic invested heavily in interpretability research, developed its Constitutional AI approach to alignment, and maintained a public posture of caution about the pace of AI development that distinguished it sharply from the relentless optimism of OpenAI and the studied indifference of Meta. When Anthropic warned that advanced AI could pose risks comparable to nuclear weapons, many people in the field took the claim seriously.

What happened next followed a logic that psychologists studying moral behaviour would recognise immediately. In research on what is sometimes called moral licensing, people who perform a conspicuously virtuous act subsequently give themselves permission to behave less virtuously. The good deed functions not as a floor but as a credit, something to be drawn down against future compromises. Anthropic's safety brand may have operated in precisely this way, not only for the company itself but for the entire AI-military ecosystem. Because "the responsible lab" was willing to integrate with Palantir, deploy on classified networks, and support military operations including intelligence analysis, operational planning and cyber operations, the broader question of whether frontier language models belonged in the targeting chain at all was never seriously confronted.

Consider the sequence. By November 2024, Anthropic had partnered with Palantir and Amazon Web Services to place Claude on classified defence networks. By July 2025, the company had signed a two-hundred-million-dollar contract with the Department of Defence. Claude became, in the words of analysts at Piper Sandler, "heavily embedded in the military and the intelligence community." It was the first frontier AI model integrated into mission workflows on classified systems. None of this attracted anything approaching the fury that had greeted Google's comparatively modest Maven contract seven years earlier. The walkouts did not come. The safety brand had done its work. If the most cautious company in the industry considered military integration acceptable, who was anyone else to object?

What the red lines actually permit

It is worth examining what Anthropic was actually fighting for when negotiations with the Pentagon collapsed. The company maintained two restrictions on military use of Claude. It would not allow the model to be used for fully autonomous weapons, meaning systems that could identify, select and engage targets without human oversight. And it would not allow the model to be used for mass domestic surveillance of American citizens.

These are, on their face, reasonable positions. They are also extraordinarily narrow. They permit the use of Claude for target identification so long as a human approves the final strike. They permit intelligence synthesis, including the aggregation and analysis of signals intelligence from foreign populations. They permit operational planning, logistics optimisation, cyber operations, and the acceleration of the kill chain from weeks to hours. They permit, in short, virtually everything the American military actually does with AI today. The red lines exclude the two capabilities that even the Pentagon's own directives already restrict, albeit imperfectly.

This is not the posture of a company that objects to war. It is the posture of a company that objects to two specific categories of future capability while actively enabling present ones. Amodei himself confirmed as much, stating publicly that Anthropic had "never raised objections to particular military operations." The company expressed no objection to Claude's reported role in the January 2026 raid that captured Venezuelan president Nicolás Maduro, an operation in which Palantir's Maven system, powered by Claude, was reportedly used for data analysis and targeting. The red lines were never about whether AI should be used in military operations. They were about whether AI should be used in military operations without the fig leaf of human approval.

The Pentagon's objection was therefore not to the substance of Anthropic's restrictions but to the principle that any private company could impose contractual limitations on military use of a technology the government had purchased. As the Pentagon's statement put it, "the military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability." The dispute was about who holds power over the deployment of AI in national security, and whether "all lawful purposes" would become the only acceptable standard.

Oppenheimer in miniature

There is a historical parallel here that goes deeper than surface resemblance. J. Robert Oppenheimer did not oppose the development of the atomic bomb. He led it with what colleagues described as visible pride, pumping his fists at a Los Alamos assembly the evening Hiroshima was bombed. His moral crisis came afterwards, when he tried to prevent the development of the hydrogen bomb, a weapon he considered strategically unnecessary and morally repugnant. For this act of resistance, which came after years of enthusiastic collaboration, Oppenheimer had his security clearance revoked in 1954. The political fight, as Stanford scholar Scott Sagan has observed, was between a liberal scientist wanting to prevent an arms race and conservative political actors wanting to win one. Oppenheimer was not punished for building the weapon. He was punished for trying to constrain the next one.

Anthropic's trajectory compresses this pattern into a fraction of the time. The company did not refuse to work with the military. It pursued military contracts, integrated into classified systems, and supported active operations. Its moral crisis came not over the decision to arm the state but over the terms of continued collaboration. Anthropic was not punished for building the targeting system. It was punished for suggesting that the targeting system should have two specific contractual limitations that largely mirrored restrictions already embedded in existing law and policy.

The Oppenheimer parallel also illuminates the role of the rival who steps in where the dissenter steps back. Edward Teller, who championed the hydrogen bomb and testified against Oppenheimer at his security hearing, occupied much the same structural position as Sam Altman, who announced OpenAI's Pentagon deal within hours of Anthropic's blacklisting. Altman's subsequent admission that the deal was "opportunistic and sloppy" did nothing to change its strategic effect, which was to demonstrate that if one company held the line, another would immediately step over it. The pattern is not collusion between rivals. It is the ordinary operation of a market in which moral constraints function as competitive disadvantages, to be arbitraged away by the first willing participant.

Leo Szilard, the physicist who drafted a petition urging President Truman not to use the atomic bomb on Japanese cities, watched his effort fail in part because Oppenheimer himself declined to circulate it at Los Alamos. Decades later, many of the petition's signatories found themselves unable to obtain the security clearances needed to continue working in nuclear policy. Dissent was not forbidden. It was simply made professionally fatal. The Anthropic supply chain risk designation operates by the same mechanism, updated for the age of software procurement.

The solidarity illusion

After the Pentagon's designation, something remarkable happened. More than a million people signed up for Claude each day. The app surged past ChatGPT and Gemini to become the most downloaded AI application in more than twenty countries. Chalk graffiti praising Anthropic appeared on the pavement outside its offices. Thirty former military and intelligence officials, including former CIA director Michael Hayden, wrote to Congress calling the designation a "dangerous precedent." Republican Senator Thom Tillis called the public fight "sophomoric." Even some OpenAI employees signed open letters supporting Anthropic's position.

It was a genuine outpouring, and it was also largely beside the point. Consumer subscriptions do not reduce Claude's role in military operations. They increase Anthropic's revenue, which sustains the company's ability to develop more powerful models, which makes those models more attractive to military clients. The people downloading Claude to express solidarity with Anthropic's ethical stand were, in a structural sense, subsidising the development of the same technology that was selecting targets in Tehran.

The deeper problem is that the mechanisms of accountability available to ordinary citizens are profoundly mismatched to the systems they are trying to influence. You can switch your AI subscription. You cannot vote on whether Claude is embedded in Palantir's targeting system. You can sign a petition. You cannot read the classified contract that governs how your preferred chatbot processes signals intelligence. The architecture of modern AI deployment is designed to make consumer-facing ethics and military application exist in separate, mutually invisible domains. The person using Claude to draft a birthday message and the analyst using Claude to prioritise strike coordinates are both customers. They will never meet, and neither will fully comprehend the other's experience of the same product.

This bifurcation is not new. Defence contractors have always sold civilian products alongside military ones. But the gap between the two uses has never been so narrow. The same model, trained on the same data, running on the same infrastructure, simultaneously helps a student revise an essay and helps a military planner compress a targeting cycle from days to minutes. There is no military version of Claude and no civilian version. There is Claude, and the question of what it does depends entirely on who is asking.

The democratic argument and its limits

The strongest case against Anthropic's position was made with characteristic directness by Sam Altman himself. "I am terrified of a world where AI companies act like they have more power than the government," he wrote on X during his weekend defence of the OpenAI-Pentagon deal. In a democracy, elected officials and the military chain of command, not unelected technology executives, should determine how AI is used for national security. Private companies do not get to insert themselves into the chain of command.

This argument has genuine force, and it exposes a real tension in Anthropic's position. The company is, after all, a private entity with no democratic mandate attempting to impose restrictions on the elected government's use of a purchased product. The Pentagon's Emil Michael put the point with bruising simplicity: if you don't trust the democratic process, "what do you believe in?"

The argument collapses, however, when you examine the context in which it is being made. The "democratic process" that designated Anthropic a supply chain risk was not a congressional vote or a judicial ruling. It was a social media post by the defence secretary, followed by a presidential decree on Truth Social, executed against a company that had reportedly declined to donate to the president or offer him public praise. The "lawful purposes" that the Pentagon demanded are defined by laws and executive policies that the current administration can change unilaterally. As Jessica Tillipman, associate dean for government procurement law at George Washington University, noted, the published contract excerpts from OpenAI's deal do not give the company any independent right to prohibit otherwise-lawful government use. They simply state that the Pentagon cannot use the technology to break laws and policies that the administration itself can rewrite.

The appeal to democratic accountability rings hollow for a second reason. In a functioning democracy, the checks on military power operate through congressional oversight, judicial review, and public debate informed by transparency. Congress has not voted to authorise the war in which Claude is being used. The full text of the OpenAI-Pentagon contract has not been released. Brad Carson, a former congressman and general counsel of the Army, examined the contract language OpenAI published and concluded that the provision supposedly prohibiting domestic surveillance "doesn't really exist." Democratic accountability requires functioning democratic institutions. Invoking it while those institutions are dormant is not principle. It is convenience.

What comes after ethics

The migration of ethical resistance in Silicon Valley follows a clear pattern of retreat to ever-narrower ground. In 2018, employees walked out and a contract was cancelled. The question was whether tech companies should be in the business of war at all. In 2024, a company founded on safety principles integrated with Palantir and deployed on classified networks, but maintained contractual red lines. The question had shrunk to whether two specific military applications should be contractually prohibited. In 2026, those red lines triggered a government designation normally reserved for foreign adversaries, and the company responded by apologising for its tone, offering to continue service at nominal cost, and affirming how much it had in common with the Department of War. The question was now whether the designation was procedurally valid under statute.

Ethics became negotiation. Negotiation became litigation. The question "should we do this?" was replaced by "can they make us?" Each stage represents not a failure of individual courage but the operation of structural forces that no single company, no matter how principled, can resist alone. When Anthropic held the line, OpenAI crossed it within hours. When consumer support surged, it fed the same development pipeline that serves military clients. When legal challenges were mounted, they addressed procedure rather than substance. The system absorbs resistance through market mechanisms and institutional pressure until the resistance is either abandoned or confined to courtrooms where it cannot disrupt the procurement schedule.

The factory worker in that 1969 film kept bringing home parts, kept trying to build a vacuum cleaner, and kept producing a sub-machine gun. He never stopped trying, and the machine never changed. In 2026, the assembly is complete. The same model that helps you write a letter generates a thousand targeting coordinates in a day. No one tricked the engineers into building it. The purpose was not compartmentalised behind layers of euphemism, as it sometimes was for earlier generations of defence researchers. The parts were always going to fit together this way. The only question that remains, now that the machine is built and running, is whether anyone with the authority to do so will choose to regulate what it produces. The engineers tried. The company tried. Neither had the power. That power belongs to legislatures, which have so far preferred not to use it.

Somewhere on a classified network, Claude continues to process intelligence feeds. Somewhere on an app store, Claude continues to attract users who admire its maker's principles. These two facts are not in tension. They are the same fact, viewed from different angles, and the distance between them is the measure of how far we have travelled from the world where building tools for war was something a person could simply refuse to do.

#artificial intelligence