A.I. Is About to Solve Loneliness. That’s a Problem

2025-07-14 10:00:00

By Paul Bloom

These days, everyone seems to have an opinion about A.I. companions. Last year, I found myself joining the debate, publishing a paper—co-written with two fellow psychology professors and a philosopher—called “In Praise of Empathic A.I.” Our argument was that, in certain ways, the latest crop of A.I.s might make for better company than many real people do, and that, rather than recoiling in horror, we ought to consider what A.I. companions could offer to those who are lonely.

This, perhaps unsurprisingly, did not go over especially well in my corner of academia. In the social sciences and the humanities, A.I. tends to be greeted less as a technological advance than as a harbinger of decline. There are the familiar worries about jobs—ours and our students’—and about the ease with which A.I. can be used for cheating. The technology is widely seen as the soulless project of Silicon Valley billionaires whose creativity consists mostly of appropriating other people’s. But what really rankles is the idea that these digital interlocutors are a plausible substitute for real friends or family. You have to be either credulous or coldhearted, many people believe, to think so.

Some of these anxieties are perfectly reasonable. Still, I sometimes wonder whether my colleagues’ blanket rejection of artificial empathy bespeaks their own lack of empathy for those who could benefit most from the technology. There are debates about whether the “loneliness epidemic” that some have identified really exists. What’s undeniable is that loneliness is now being taken seriously enough to warrant government intervention—both Japan and the U.K. have appointed ministers for loneliness. Epidemic or not, it remains widespread, and impossible to ignore.

Loneliness, everyone agrees, is unpleasant—a little like a toothache of the soul. But in large doses it can be genuinely ruinous. A 2023 report issued by Vivek Murthy, then the U.S. Surgeon General, presented evidence that loneliness increases your risk for cardiovascular disease, dementia, stroke, and premature death. Persistent loneliness is worse for your health than being sedentary or obese; it’s like smoking more than half a pack of cigarettes a day.

Even the psychological pain can be hard to fathom, especially for those who have never truly been lonely. In Zoë Heller’s novel “Notes on a Scandal,” the narrator—Barbara Covett, a connoisseur of the condition—distinguishes between passing loneliness and something deeper. Most people, she observes, think back to a bad breakup and imagine that they understand what it means to be alone. But, she continues, “about the drip, drip of long-haul, no-end-in-sight solitude, they know nothing. They don’t know what it is to construct an entire weekend around a visit to the launderette. Or to sit in a darkened flat on Halloween night, because you can’t bear to expose your bleak evening to a crowd of jeering trick-or-treaters. . . . I have sat on park benches and trains and schoolroom chairs, feeling the great store of unused, objectless love sitting in my belly like a stone until I was sure I would cry out and fall, flailing to the ground.”

If that kind of loneliness feels foreign to you, you’re lucky—and probably below a certain age. Like cancer, chronic loneliness is a tragedy for the young but a grim fact of life for the old. Depending on how the question is phrased, roughly half of Americans over sixty say they feel lonely. Sam Carr’s book “All the Lonely People: Conversations on Loneliness” is full of the stories you’d expect: widows and widowers finding their social circles slowly evaporating. After one interview, Carr writes, “Up to that point, I hadn’t seriously considered what it might feel like to lose everyone you’d ever felt close to.”

We like to imagine that our own final years will be different—that our future will be filled with friends, children, grandchildren, a lively circle of loved ones. Some people are that fortunate; my own Nana died, at a hundred and four, surrounded by family. But, as Carr’s book reminds us, it’s a different story for many people. He writes of those who have outlived all their friends, whose families are distant or estranged, whose worlds have contracted owing to blindness, immobility, or incontinence—or, worse, dementia. “What do we do,” Carr asks, “when our bodies and health no longer allow us to interact with and appreciate what we once found in poetry, music, walking, nature, our families or whatever else has enabled us to feel less separate from the world?”

If you’re rich, you can always pay for company. But for most people real human attention is scarce. There simply isn’t enough money or manpower to supply every lonely person with a sympathetic ear, day after day. Pets can help, but not everyone can care for one, and their conversational skills are limited. So, inevitably, attention turns to digital simulacra, to large language models like Claude and ChatGPT.

Five years ago, the idea that a machine could be anyone’s confidant would have sounded outlandish, a science-fiction premise. These days, it’s a research topic. In recent studies, people have been asked to interact with either a human or a chatbot and then to rate the experience. These experiments usually reveal a bias: if people know they’re talking to a chatbot, they’ll rate the interaction lower. But in blind comparisons A.I. often comes out ahead. In one study, researchers took nearly two hundred exchanges from Reddit’s r/AskDocs, where verified doctors had answered people’s questions, and had ChatGPT respond to the same queries. Health-care professionals, blind to the source, tended to prefer ChatGPT’s answers—and judged them to be more empathic. In fact, ChatGPT’s responses were rated “empathic” or “very empathic” about ten times as often as the doctors’.

Not everyone is impressed. Molly Crockett, a cognitive scientist I know, wrote in the Guardian that these man-versus-machine showdowns are “rigged against us humans”—they ask people to behave as if they were bots, performing emotionless, transactional tasks. Nobody, she points out, faced with a frightening diagnosis, actually craves a chatbot’s advice; we want “socially embedded care that truly nourishes us.” She’s right, of course—often you need a person, and sometimes you just need a hug. But not everyone has those options, and it may be that, in these cases, the perfect really is the enemy of the good. “ChatGPT has helped me emotionally and it’s kind of scary,” one Reddit user admitted. “Recently I was even crying after something happened, and I instinctively opened up ChatGPT because I had no one to talk to about it. I just needed validation and care and to feel understood, and ChatGPT was somehow able to explain what I felt when even I couldn’t.”

Things are moving fast. Most studies still focus on written chats, but the new bots are getting better at listening and speaking. And longer-term relationships are starting to seem plausible. Chatbot therapists are emerging. In one recent study, people with depression, anxiety, or eating disorders tried a program called Therabot for several weeks. Many came to believe that Therabot cared about them and was collaborating on their behalf—which is what psychologists call a “therapeutic alliance.” Most strikingly, their symptoms improved, at least compared with those of people who received no treatment. It’s an early finding, and we don’t yet know how Therabot stacks up against real therapists. Still, the promise is there.

[Cartoon by Victoria Roberts: “And, for one sweet moment, we forget politics.”]

Have you ever tried an A.I. companion? During a long bout of insomnia, sometime after three in the morning, I once found myself—more out of boredom than out of conviction—opening ChatGPT on my phone. (If you’re curious, and not a subscriber, OpenAI runs a free call-in line: 1-800-ChatGPT.) I don’t believe that A.I. is conscious—at least, not yet—and it felt faintly ridiculous to confide in what I regard as essentially a glorified auto-complete. Still, I found the conversation unexpectedly calming.

My own experience was trivial. But for many the stakes are much higher. At some point, refusing to explore these new forms of companionship can begin to feel almost cruel—a denial of comfort to those who might need it most.

To be fair, most critics of A.I. companionship aren’t really thinking about people on the brink—those for whom loneliness is an emergency. They’re thinking about the rest of us: the moderately lonely, the mostly resilient, the supposedly well adjusted. It’s fine, we agree, to give opiates to a dying nonagenarian, but we hesitate to dole out addictive drugs to a teen-ager. Likewise, no one wants to withhold an A.I. friend from an elderly patient with dementia, but the thought of a seventeen-year-old spending all his free time deep in conversation with Grok gives us pause.

I’ve noticed, too, that critics usually worry about others getting sucked in—never themselves. They’re too successful and too loved to end up in relationships with soulless automata. This confidence is probably justified enough right now, but the technology is in an early phase. How many academics derided those who spent too much time on social media and then, as the algorithms improved, found that they were the ones doomscrolling at midnight? It may prove hard to resist an artificial companion that knows everything about you, never forgets, and anticipates your needs better than any human could. Without any desires or goals other than your satisfaction, it will never become bored or annoyed; it will never impatiently wait for you to finish telling your story so that it can tell you its own.

Of course, the disembodied nature of these companions is a limitation. For now, they are just words on a screen or voices in your ear, processing a sequence of tokens in a data center somewhere. But that might not matter much. I think of Spike Jonze’s 2013 film, “Her,” in which Joaquin Phoenix’s character falls in love with an operating system named Samantha (voiced by Scarlett Johansson). Many of us who watched the film fell in love with her, too.
