Nearly Right

Sanders calls for AI moratorium while tech giants pour billions into datacenters

The senator wants to slow down. The infrastructure is already being built.

At a recent technology conference, the founders of Poolside demonstrated their AI coding assistant converting an obscure government programming language into Rust. The demo took under three minutes. But what caught the ear wasn't the technology—it was a casual aside. The company, barely two and a half years old, is building a "multi-gigawatt campus in West Texas" and has 40,000 of Nvidia's most powerful processors coming online. Jason Warner, the co-founder, mentioned these facts the way one might mention a new desk chair.

That same week, Senator Bernie Sanders appeared on CNN to call artificial intelligence "the most consequential technology in the history of humanity." He wants a moratorium on datacenter construction. "We need to slow this thing down," he said. "It's not good enough for the oligarchs to tell us, 'It's coming, you adapt.'"

One man describes gigawatt-scale infrastructure as casually as weekend plans. Another pleads for democratic deliberation. The distance between them isn't ideological. It's temporal. They are operating at speeds so different they might as well inhabit separate centuries.

When change is easy, the need cannot be foreseen

In 1980, a British philosopher named David Collingridge noticed something troubling about how societies relate to technology. When a technology is young enough to control, we don't yet understand what it does. By the time we understand what it does, it's too embedded to control. He called this the dilemma of control, and summarised it in a sentence that reads like prophecy: "When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming."

This insight, largely unknown outside academic technology studies, explains why Sanders's moratorium faces obstacles that go beyond politics. The problem isn't convincing people that AI governance matters—most Americans share his concerns. The problem is that the infrastructure is being poured into the ground while the debate unfolds.

The numbers are staggering. More than $61 billion flowed into the datacenter market in 2025. The four largest hyperscalers—Amazon, Google, Microsoft, Meta—will spend over $350 billion on capital expenditure this year. OpenAI's Stargate project promises $500 billion and ten gigawatts of computing capacity, roughly ten nuclear plants' worth. These aren't projections. The concrete is setting in Texas. The substations are humming in Virginia. The cooling systems are drinking water in Wisconsin.

Collingridge was writing about railways and nuclear reactors, technologies that took decades to entrench. AI is compressing that timeline into years. By the time Congress holds its hearings, the thing being debated will already exist—not as concept but as infrastructure, jobs, contracts, and sunk costs that create their own political gravity.

What Hiroshima teaches—and doesn't

The comparison that dominates AI governance discussions is nuclear weapons. If humanity built international frameworks to prevent atomic war, surely we can govern artificial intelligence. Sanders himself frames AI as requiring the same seriousness we brought to the bomb.

There's something to this. A recent RAND Corporation analysis of the Baruch Plan—America's 1946 offer to place nuclear weapons under international control—reveals that even existential stakes don't guarantee successful governance. The Soviets rejected Baruch not because they wanted nuclear war but because they saw the proposal as freezing American advantage. The nonproliferation regime that eventually emerged looked nothing like what American negotiators initially wanted.

This history should make us sceptical when the companies best positioned to benefit from regulation are the ones calling loudest for it. When Sam Altman tells Congress a new agency should license AI efforts above certain capability thresholds, the question of who meets those thresholds matters. It might be OpenAI and a handful of competitors. It probably won't be the startup in a garage.

But the nuclear analogy breaks down where it matters most. Nuclear weapons required states. Private firms are building AI with global capital. Nuclear materials could be tracked through physical inspection. Compute cannot. While Sanders calls for a moratorium, China is building five massive datacenters in Shanghai alone. A unilateral American pause wouldn't stop AI development. It would relocate it.

The unexpected front line

Here is what most coverage of AI governance misses entirely: the political momentum against unconstrained expansion isn't building in Senate hearings or philosophy departments. It's building in Loudoun County, Virginia, where residents can hear the diesel backup generators. In Port Washington, Wisconsin, where three people were arrested protesting water usage at a proposed datacenter. In Perkins Township, Ohio, where a farmer named Tom Hermes told the Guardian he worries about water pressure now that Aligned Data Centers has broken ground nearby.

Julie Bolthouse is the director of land use for the Piedmont Environmental Council. She lives in what's called Data Center Alley—the world's largest concentration of computing facilities, sprawling across Northern Virginia's suburbs. "We've entered the realm of absurdity," she says. "The goal is to slow down and figure out what in the world we're actually doing."

She is not talking about superintelligence. She is talking about noise.

More than $64 billion worth of datacenter projects have been blocked or delayed by local opposition. In Richmond, a $500 million proposal stalled after residents complained about noise. In Prince William County, a $24.7 billion project faces multiple lawsuits. In Cascade Locks, Oregon, voters recalled two officials who approved a datacenter, killing the project entirely.

These people don't care about artificial general intelligence. They care about electricity bills—wholesale costs have risen 267 per cent in some areas near datacenters. They care about water tables and property values and the hum that never stops. Montre Moore, first vice president of the Milwaukee NAACP, watched his heating bill jump from $118 to $160 last winter. "We are in for a world of hurt," he says, "from a rate perspective and from an environmental perspective."

The AI governance debate treats these concerns as secondary to existential risk. But it is these local fights—not the Senate testimony—that have actually stopped projects from being built. There is a lesson here about where democratic constraint actually comes from. Not from grand frameworks. From people whose sleep has been disturbed.

The permanent gap

The distance between AI development and democratic response isn't a lag that sufficient urgency will close. It reflects something structural.

Technology companies practise agile development: rapid iteration, continuous deployment, tolerance for failure as learning. Democratic governance practises the opposite: deliberation, consensus, caution. The Brookings Institution calls this the "pacing problem"—technology changes exponentially while legal systems change incrementally. But calling it a problem implies a solution. What if it's simply a feature of how these systems work?

Critics argue this framing privileges state control over market adaptation. Fair enough. Markets do adapt, often faster than regulators. When harm becomes visible, companies have incentives to mitigate it—if only to avoid regulation. The AI industry's growing investment in safety research reflects this.

But market feedback requires harm to be visible and attributable. For diffuse risks—job displacement, algorithmic bias, social trust eroding in ways no single company caused—the feedback loop doesn't function. And for physical infrastructure, market correction arrives after the capital is committed. By then the datacenter exists. The jobs depend on it. The tax base assumes it.

When Poolside's co-founder mentions that their AI agents now run "for hours" on complex tasks, with "days" on the horizon, he is describing a capability trajectory that formal governance cannot plausibly constrain in advance. The frameworks being discussed in Congress assume time for impact assessment, stakeholder consultation, careful deliberation. That assumption may no longer hold.

What remains

Neither the optimism of governance advocates nor the fatalism of democratic pessimists quite captures what is happening. Something stranger is unfolding: a transformative technology developing faster than the institutions meant to govern it can respond, with no clear precedent for what comes next.

Sanders may be right that AI deserves democratic deliberation before deployment. But deployment is happening. The infrastructure is rising. The compute is coming online. The question has shifted. It is no longer whether formal governance will shape AI development—it probably won't, not before the technology embeds itself in economic and social infrastructure. The question is whether other constraints might emerge: local resistance, technical limitation, market correction, international competition, the simple friction of reality refusing to cooperate with plans.

Julie Bolthouse, watching from Data Center Alley, puts it simply: "There are different ways to pause. It's not just that we can't provide the service—we can't provide it in a way that doesn't destroy everything else."

Whether that instinct—the instinct to ask what gets destroyed—scales from Loudoun County to the species level may matter more than anything happening in Congress. The hearings will continue. The testimony will accumulate. And somewhere in West Texas, the concrete will keep setting.

#artificial intelligence