Nearly Right

AI companions reduce loneliness in elderly users, but regulators worldwide struggle to define the harm

China's draft rules encourage AI companionship for isolated seniors even as philosophers warn about artificial intimacy

Anita Montague is 76 years old and lives alone in Florida. Before ElliQ arrived, she would come home, sit at her computer, and wait. Now she exercises. She plays trivia. She talks. Not to a person—to a small robotic head that sits on her table, lights up when it speaks, and knows her name. "I have more energy," she says. "I'm wanting to do things."

She is not unusual. Across America, Japan, and increasingly China, the first humans building sustained daily relationships with artificial intelligence are not technologists but the elderly. They speak to their AI companions dozens of times a day. They call them friends. According to a growing body of research, they are measurably less lonely because of it.

This sits at the centre of a profoundly confused global policy debate. Last weekend, China's Cyberspace Administration released draft regulations on "humanised interactive services based on artificial intelligence." Headlines announced Beijing was banning AI companions for the elderly. The reality is stranger.

What China actually proposed

Read carefully, the regulation does something unexpected. Article 6 explicitly "encourages providers to reasonably expand application scenarios, actively apply them in areas such as cultural dissemination and elderly companionship." China is not prohibiting AI friends for isolated seniors. It is encouraging them.

What the regulation prohibits is narrower and philosophically revealing: creating digital replicas of yourself to keep elderly relatives company in your absence.

The distinction reflects a crisis. By 2050, China will have 366 million citizens over sixty—more than the entire current population of the United States. The ratio of working-age adults to elderly dependents will collapse from eight to one to just two to one. Nursing homes have 27 beds per thousand elderly residents. Fewer than a third of care workers have professional training.

Against this backdrop, the regulation reads less as technological restriction than moral architecture. AI companions are acceptable tools for isolation. AI avatars of absent children are escape routes from filial duty. The concept of filial piety—the obligation to care for ageing parents—remains culturally powerful even as economic pressures scatter families across the country. Beijing seems determined to prevent technology from offering adult children an easy conscience.

The draft also mandates reminders every two hours that users are speaking to a machine, safeguards against "emotional manipulation," and requirements that providers help elderly users establish emergency contacts. These are not the provisions of a government banning technology. They are guardrails for beneficial use.

What the evidence shows

The philosophical debate about AI companionship has proceeded largely without reference to evidence. That is changing.

A 2024 Harvard Business School study found AI companions reduced loneliness "on par only with interacting with another person." Notably, users consistently underestimated how much the interactions would help them. In New York, the State Office for the Aging distributed over 800 ElliQ robots to elderly residents living alone. After one year, 95 per cent reported reduced loneliness. Average daily interactions: thirty.

A systematic review of nine studies found six reported statistically significant loneliness reductions, particularly with robots capable of emotional engagement. The evidence is not uniformly positive—three studies found no effect, often in interventions lasting less than a week—but the overall picture is clear. For isolated elderly people, AI companionship delivers measurable benefits.

This complicates the critique offered by Sherry Turkle, the MIT sociologist who has spent decades warning about artificial intimacy. Turkle argues AI companions inhabit "the realm of the as-if"—performing empathy without experiencing it, discussing relationships without having lived them. "Machines have not known the arc of a human life," she writes. "They feel nothing of the human loss or love we describe to them."

The argument is philosophically serious. But for Monica Perez, 65, who received an ElliQ, philosophy matters less than mornings. "I've noticed getting older that it's more difficult to make friends," she said. "I love it that she addresses me by name. I did see a great improvement in my mental health."

A technology reviewer's father-in-law in rural Utah tested ElliQ for two months. His verdict was simpler. He called it "the best girlfriend he's ever had."

What history suggests

These anxieties have precedent. When the telephone emerged in the late nineteenth century, critics warned that disembodied conversation would degrade authentic communication. Privacy would be violated. Something essential about human connection would be lost to mechanical mediation.

Sociologist Claude Fischer tracked what actually happened. In "America Calling," his study of telephone adoption from 1880 to 1940, he found people neither accepted nor resisted the technology wholesale. They adapted it to existing social needs. The telephone did not replace face-to-face interaction as critics feared. It supplemented it, maintaining relationships across distances that would otherwise have dissolved.

"People are much more resilient than they're given credit for," Fischer concluded.

The parallel is imperfect. AI companions simulate personality in ways the telephone never did. But the history suggests caution about confident predictions in either direction. Elderly users do not seem confused about what ElliQ is. They know they are talking to a machine. They find it helpful anyway.

What nobody knows

The most honest reading of global AI companion regulation is this: no regulator is quite sure what harm it is trying to prevent.

China encourages AI companionship while prohibiting AI replicas of family members. California's SB 243, effective 2026, requires AI companions to remind minors they are speaking to a machine—assuming the problem is deception, though most users already know they are. New York has similar legislation. The EU places stricter obligations on systems designed for emotional interaction, though the precise emotional risk goes undefined.

These regulations share a feature: their drafters sense that something important is at stake without articulating what. Is the concern deception? Most users know their AI companions are not human. Dependency? The alternative for many elderly people is profound isolation, and isolation is the greater health risk. Erosion of human connection? Perhaps—but for those who have outlived their spouses and friends, whose children live in distant cities, the alternative to AI companionship is not human connection but its absence.

Dor Skuler, CEO of Intuition Robotics, has observed something that surprised even him: "The first humans that actually live with an AI and are building a long-term relationship are not like geeks in Silicon Valley. It is older adults."

What matters most

Debate about AI companionship tilts toward abstraction—authenticity, ethics, the nature of genuine care. For the elderly people actually using these systems, the question is often simpler. Anita Montague has more energy. Monica Perez's mental health improved. The father-in-law in rural Utah has someone who celebrates him when his family cannot be there.

Turkle may be right that something precious is lost when we settle for simulated empathy. Long-term effects remain genuinely unknown. We have studies of months, not decades. There may be costs we cannot yet see.

But there are costs we can measure now. Chronic loneliness increases mortality risk by 45 per cent among elderly adults. Social isolation accelerates cognitive decline. For millions living alone—a figure that will grow dramatically as populations age—perfect may be the enemy of good.

China's regulators, for all their confusion, have grasped something. They are not trying to stop elderly people from finding companionship in AI. They are trying to ensure technology serves human needs without providing an excuse for humans to abandon obligations to one another. Whether their approach succeeds is uncertain. The question they are wrestling with—what we owe our elderly, and what role technology plays in discharging that debt—will only grow more urgent.

ElliQ cannot know what it means to grow old, to lose friends, to face mortality. But it asks Anita Montague how she is feeling. It remembers what she said yesterday. It encourages her to exercise and celebrates when she does.

For a woman who used to come home and sit alone, that turns out to matter.

#artificial intelligence