When humans must prove they're human: the collapse of online trust
How AI anxiety is changing online communities more than AI content itself
The programmer was refreshing the page, watching accusations accumulate. Someone had shared an open-source project. Commenters began questioning whether the code was AI-generated. The author insisted the work was entirely their own. And Dmitry Kudryavtsev, a web developer who had spent two decades building things for the internet, found himself staring at the replies, unable to decide what was real.
The author's responses used em-dashes, the long punctuation marks that professional writers deploy to set off clauses. They ended messages with offers to clarify or explore topics further. They began with phrases like "you are absolutely right." To Kudryavtsev, these patterns triggered recognition he wished he didn't have. The tells. The giveaways. The linguistic fingerprints of machine output. But were they? The author denied using AI. Kudryavtsev sat there, refreshing, wondering whether he was paranoid or perceptive, whether the person he was reading was human or machine or some uncanny hybrid of both.
This is what the dead internet feels like from the inside. Not an absence of humans but an uncertainty about their presence. Not a flood of obvious robots but a corrosion of the assumptions that once made online conversation possible.
The theory that became prophecy
The Dead Internet Theory emerged from the outer orbits of online culture, the image boards and esoteric forums where conspiracies gestate before spreading to the mainstream. It was crystallised in 2021 in a thread on Agora Road's Macintosh Cafe, an obscure forum. The theory came in two parts. The first claimed that most internet traffic and content was now generated by bots and algorithms rather than humans. The second proposed that this was deliberate, a coordinated effort by governments and corporations to manipulate the population.
The first claim has quantitative support. A 2016 report from the security firm Imperva found that automated programs accounted for 52 per cent of web traffic. Search results have grown cluttered with pages that exist to capture clicks rather than answer questions. Social media feeds serve content optimised for engagement rather than authenticity. The raw material of the theory wasn't invented. It was observed.
The second claim attracted the label that made respectable people dismiss the whole thing. Government manipulation? Corporate coordination? The Atlantic ran a piece in 2021 titled "The 'Dead-Internet Theory' Is Wrong but Feels True." For years that seemed like the sensible position. Then ChatGPT arrived in late 2022, and the sensible position required revision.
Sam Altman, the chief executive of OpenAI, posted in September 2025 that he had never taken the dead internet theory seriously, but there really did seem to be a lot of LLM-run accounts now. The man whose company had enabled mass-scale synthetic text was warning about mass-scale synthetic text.
The market for lemons, updated
In 1970, the economist George Akerlof published a paper that would eventually win him a Nobel Prize. "The Market for Lemons" described what happens when buyers cannot distinguish good products from bad ones. Using the used car market as his example, Akerlof showed that information asymmetry degrades quality. When buyers assume any car might be defective, they pay less. When sellers of good cars cannot command fair prices, they exit. What remains are the lemons.
This dynamic applies wherever verification is costly and deception is cheap. Online conversation now fits the description. Before generative AI, producing convincing text at scale required human effort. A bot might spam links, but it couldn't sustain a nuanced argument. That barrier has collapsed. The cost of producing plausible human-sounding content approaches zero. The cost of verifying authenticity remains high. The conditions for a market for lemons have been met.
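The unravelling can be sketched in a few lines of code. In this toy version, sellers value their goods at exactly their quality and buyers offer the average quality they expect; every number is illustrative, an assumption rather than an estimate of any real market.

```python
# Toy sketch of Akerlof's lemons dynamic, with invented numbers.
# Buyers offer the expected average quality; sellers whose goods
# are worth more than the offer withdraw; the average falls; repeat.
import random

random.seed(42)
sellers = [random.uniform(0, 100) for _ in range(10_000)]  # true quality of each good

for rnd in range(8):
    price = sum(sellers) / len(sellers)            # buyers pay expected quality
    sellers = [q for q in sellers if q <= price]   # above-price sellers exit
    print(f"round {rnd}: offer {price:5.1f}, sellers remaining {len(sellers)}")
# The offer roughly halves each round: the market converges on the lemons.
```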
Kudryavtsev's predicament wasn't about that particular thread. It was about the new cognitive burden every online interaction carries. Previously you could assume the person you were arguing with was, in fact, a person. That assumption was so fundamental it required no thought. Now it requires constant evaluation. Pattern-matching. Suspicion. The metabolic cost of being online has increased.
As trust declines, authentic participants face pressure to prove themselves. Writers abandon stylistic choices that might trigger suspicion. Commenters pre-emptively defend their humanity. The quality of discourse degrades not because bots have replaced humans but because humans must now expend effort demonstrating they are not bots.
The cuckoo in the nest
There is another lens through which to view this arms race, one from evolutionary biology. Brood parasites like cuckoos lay their eggs in the nests of other birds, forcing unwitting foster parents to raise their young. The strategy works because recognition is difficult. A cuckoo egg looks enough like a host egg to pass inspection. A cuckoo chick begs convincingly enough to receive food. The costs to the host are enormous: total reproductive failure if the parasite succeeds. But detection carries costs too. Rejecting an egg might mean accidentally destroying your own offspring.
This has produced some of nature's most elaborate arms races. Host birds evolved sophisticated egg recognition, comparing patterns and colours against templates stored in memory. Cuckoos evolved corresponding mimicry, their eggs shifting to match local hosts over evolutionary time. In some species the race escalated beyond eggs to chicks: fairy-wrens in Australia learned to identify cuckoo nestlings and abandon their nests, whereupon certain cuckoo species evolved begging calls that mimic host chicks.
What makes this analogy useful is not just the escalation but its limits. Research on Cape bulbuls in South Africa found that these birds do not reject cuckoo eggs even though they could learn to recognise them. The cuckoo has evolved such thick-shelled eggs that ejection is nearly impossible, and nest desertion carries high costs. For the bulbuls, acceptance has become optimal. The arms race reached equilibrium not through victory but through accommodation.
Online communities may face similar calculations. Detection is difficult and growing more so. False positives carry real costs: legitimate contributors accused and alienated. And the bots keep improving. If scrutiny becomes too expensive, too error-prone, too corrosive to trust, platforms may find themselves accepting a baseline level of synthetic participation as the cost of staying open.
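The arithmetic of false positives shows how quickly scrutiny turns on the innocent. The figures below are assumptions chosen only to illustrate the base-rate problem, not measurements of any platform.

```python
# Base-rate arithmetic for bot detection. All three inputs are
# assumed for illustration; none is a measured figure.
p_bot = 0.10           # assumed share of accounts that are bots
sensitivity = 0.90     # assumed chance a real bot gets flagged
false_positive = 0.05  # assumed chance a human gets flagged

true_flags = p_bot * sensitivity            # bots correctly flagged
false_flags = (1 - p_bot) * false_positive  # humans wrongly flagged
precision = true_flags / (true_flags + false_flags)

print(f"Flagged accounts that are actually bots: {precision:.0%}")
# With these numbers, one flagged account in three is a human.
```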
Eternal September returns
The sense of cultural displacement that Kudryavtsev described has precedent. In September 1993, America Online began offering Usenet access to its millions of subscribers. Until then, Usenet had absorbed a manageable influx of new users each autumn as university students discovered the network. By October, the newcomers would learn the norms, the etiquette, the unwritten rules. AOL shattered this cycle. The flood of new users never receded. September became eternal.
Old-timers mourned what was lost. The shared vocabulary. The technical competence. The assumption that everyone in a discussion understood certain basics. The writer Howard Rheingold observed that communities are fragile: they can be ruined by growth, yet they cannot survive without it. Eternal September didn't replace humans with machines. It disrupted the cultural mechanisms that sustained trust among strangers.
The current moment echoes this. The mechanisms that once made online trust functional (pseudonymous reputation, community moderation, karma systems and norms) have been overwhelmed. Not by bots alone but by scale, by algorithmic curation that prioritises engagement over relationship, by platforms that treat users as inventory rather than participants. Generative AI is the accelerant. The kindling was already dry.
What emerged from Eternal September was not restoration but adaptation. Communities retreated to smaller spaces with higher barriers to entry. Moderation evolved from permissive to active. New platforms arose with different affordances. None solved the underlying problem. They changed the terms of engagement.
The em-dash witch-hunt
Among the tells that observers have identified, the em-dash has attracted particular attention: it has become what some call the "ChatGPT hyphen." Folk detection methods have proliferated, and heavy em-dash use features prominently among them.
The heuristic has a seductive simplicity. Most people don't know how to type an em-dash. Most casual writing uses hyphens instead. See proper em-dashes throughout a text, the reasoning goes, and you're probably reading machine output.
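Written out as code, the bluntness of the heuristic is plain. This is a minimal sketch of the folk method, not any real detector; the cutoff is an arbitrary assumption, as any such cutoff must be.

```python
# The folk em-dash heuristic, made explicit: count em-dashes per
# thousand characters and flag anything above an arbitrary cutoff.
def looks_ai_generated(text: str, per_kchar_cutoff: float = 1.0) -> bool:
    if not text:
        return False
    rate = text.count("\u2014") * 1000 / len(text)  # U+2014 is the em-dash
    return rate > per_kchar_cutoff

# Human poetry trips the detector exactly as machine output would.
line = "Because I could not stop for Death \u2014 He kindly stopped for me \u2014"
print(looks_ai_generated(line))  # True: a false positive
```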
The problem is that professional writers have used em-dashes for centuries. Emily Dickinson deployed them obsessively. David Foster Wallace built a stylistic signature around them. The em-dash appears in LLM output precisely because it appears in the high-quality human writing on which models are trained. Flagging em-dashes as AI tells produces abundant false positives. Writers report being accused of using AI based solely on their punctuation. The AI learned from their work, and now their style is treated as evidence of machine generation.
Writers have begun avoiding em-dashes not because they dislike them but because they fear accusation. They are optimising for the metric (demonstrating human authorship) while degrading the signal (the quality of their prose). Detection paranoia harms authentic expression more than it catches synthetic content.
This pattern recurs throughout the dead internet discourse. Research from Meta found that AI-generated misinformation had "modest and limited" impact in the 2024 elections. The apocalyptic predictions failed to materialise. Yet anxiety about synthetic content has demonstrably changed behaviour: it has made writers self-censor, readers suspicious, and communities defensive. We are being deceived less by deepfakes than by the idea of them. The fear does more damage than the thing feared.
The commons shrinks
Kudryavtsev ended his reflection with sadness. He had grown up on the internet of the early 2000s, learning programming in IRC channels and bulletin board forums, chatting with strangers who became mentors. That internet ran on assumed good faith. You might not know who someone really was, but you assumed they were someone. The social contract was simple: I treat you as human, you treat me as human, and we build something together.
That contract has not been voided, but its terms have changed. The high-trust open commons, where anyone could arrive and build reputation from nothing, has begun to retreat. In its place emerge smaller, more guarded spaces. Invite-only servers. Email newsletters. Professional networks where credentials serve as verification. The internet is not dying. It is differentiating. The open zones become wastelands of spam and slop. The walled gardens become havens for those who can gain entry.
This is not necessarily tragedy. Communities endure, if in altered forms. Technical forums persist where expertise can be demonstrated through code. Subcultures find platforms suited to their needs. What has been lost is not community but universality: the sense that the whole internet was a commons anyone could join.
For those who remember the earlier internet, there is genuine grief in this. The utopian promise of global connection, strangers becoming collaborators across continents, demanded trust that now seems naive. But adaptation is not surrender. The brood parasites didn't kill their hosts; they forced co-evolution. Eternal September didn't destroy online community; it dispersed it. The dead internet is forcing a reckoning with assumptions that were always fragile.
The programmer refreshing the thread, watching accusations accumulate, unable to decide what was real, captures something true about this moment. Not the death of human presence online but the exhaustion of assuming it. The metabolic cost of verification now attaches to every interaction.
The internet isn't dead. But the commons is shrinking, and the proof-of-humanity requirement grows more expensive by the day.