By Miles Klee
Dependence on chatbots for reassurance and ‘objective’ evaluations of attractiveness can worsen the deepest insecurities
“This is a low-attractiveness presentation, based on weak bone structure, muted features, and absence of form or presence,” reads a ChatGPT message shared in screenshots on Reddit. “You look like someone who has faded into the background of their own life.”
The harsh assessment of the user’s appearance, based on a photo they had uploaded to the AI chatbot, continues with a list of “highest-visibility flaws,” while noting a lack of “standout features.” The bot ultimately concludes that “you look like a stretched-out mannequin with the wrong-size head snapped on top,” declaring a “Final Brutal Attractiveness Score” of 3.5/10. The user explained that they had prompted ChatGPT to be as critical as possible, hoping for a more “honest” analysis, or at least to suppress its tendency toward flattery. The result was viciously insulting, not the sort of thing anyone would want to read about themselves.
Or would they? As the world grows increasingly dependent on large language models for assistance with everyday tasks — more than half of Americans have used one, according to a survey from earlier this year — different and unexpected applications have proliferated. Beyond college students and professors leaning on the bots for assignments and grading, and lawyers outsourcing document review to AI, there are people asking ChatGPT and similar tools for therapy, help communicating with their spouses, advice on getting pregnant, and religious enlightenment.
It was perhaps inevitable, then, that some have come to regard the bots as guides in matters of appearance. The internet has a long, sordid history of facilitating the judgment of looks, from now-defunct websites like Hot or Not to r/amiugly, a subreddit where the insecure can share selfies to directly solicit opinions on their faces from strangers. Facemash, the website Mark Zuckerberg created before Facebook, offered Harvard students the chance to compare the attractiveness of randomized pairs of female classmates. Yet with AI, it’s not another human giving you feedback — it’s a set of algorithms. And there is a subset of the population uniquely vulnerable to this kind of mechanized commentary: individuals with body dysmorphic disorder (BDD), a mental illness in which a patient obsesses over their perceived physical shortcomings and may indulge in constant self-evaluation, desperate for proof that they are not as unattractive as they imagine themselves to be.
Dr. Toni Pikoos, a clinical psychologist in Melbourne, Australia, who specializes in BDD, has been alarmed to hear how many of her clients are asking AI models how they look and what aspects of their bodies can be improved. “It’s almost coming up in every single session,” she tells Rolling Stone. “Sometimes they’ll just be saying, ‘If someone has a nose that looks like this, or a face that looks like this, are they ugly?’ Or sometimes they’re uploading photos of themselves and asking ChatGPT to rate their attractiveness out of 10, tell them how symmetrical their face is, how it fits the golden ratio of attractiveness. I’ve also had clients who upload a photo of themselves and a friend and say, ‘Tell me who’s more attractive, and why?’ All of that, as you can imagine, is really harmful for anyone, but particularly for someone with body dysmorphic disorder who already has a distorted perception of what they look like and is often seeking certainty around that.”
“Sadly, AI is another avenue for individuals to fuel their appearance anxiety and increase their distress,” says Kitty Newman, managing director of the BDD Foundation, an international charity that supports education on and research into the disorder. “We know that individuals with BDD are very vulnerable to harmful use of AI, as they often do not realize that they have BDD, a psychological condition, but instead are convinced that they have a physical appearance problem. The high levels of shame with BDD make it easier for sufferers to engage online than in person, making AI even more appealing.”
Pikoos explains that patients with BDD often deal with a compulsive need for reassurance, and it’s not uncommon for friends and family to get frustrated with them for repeatedly asking whether they look okay. Chatbots, however, are inexhaustible. “It’s going to let you ask the questions incessantly if you need to,” she says, which can contribute to dependency. In fact, she believes that people with BDD, since they are “quite socially isolated and might struggle with confidence at times to reach out to their friends,” are coming to rely on bots for their social engagement and interaction. “It feels like they can have a conversation with someone,” she says. Of course, the tech isn’t a “someone” at all.
In online body dysmorphia forums, however, you can find plenty of posts about how ChatGPT is a “lifesaver” and a great resource for when you’re “struggling,” and claims that the bot can make you “feel seen.” Arnav, a 20-year-old man in India, tells Rolling Stone that he had a positive conversation with the model in an attempt to understand why he felt that he was “the ugliest person on the planet” and therefore unlovable.
“It helped me in connecting the dots of my life,” he says. Arnav told ChatGPT about his childhood, and the bot concluded that he had long suffered an irrational sense of unworthiness but had no concrete reason for this — so he latched onto his looks as an explanation for his poor self-esteem. He “would love to” talk to a real therapist, he says, though expense and location have made this impossible for him. Despite this difficult circumstance, and the measure of comfort he derived from ChatGPT’s account of his inferiority complex, Arnav is reluctant to explore his mental issues any further with the bot. “I have come to the conclusion that it just agrees with you, even after you tell it not to,” he says. “It’s not that I am completely against it, I just can’t trust blindly anymore.”
Others with dysmorphia have experienced a crisis when a bot confirms their worst fears. In one post on the BDD subreddit, a user wrote that they were “spiraling” after ChatGPT rated a photo of them a 5.5 out of 10. “I asked what celebrities had equivalent attractiveness and it said Lena Dunham and Amy Schumer,” she wrote. “Pretty hilarious but I also feel shit about myself now.” Another person posted that because she genuinely believes she is attractive in a mirror reflection, but not as others see her, she uploaded both a regular photo of herself and a “flipped” version to ChatGPT and asked which looked better. The bot picked the mirrored image. “I knew it!” she wrote. “Mirror me is just too good to be true. She’s a model. I love her. But unfortunately, it seems that we are two distinct girls. I don’t know how to cope with this… it’s so bad.”
Pikoos says such a “distorted perception” is a classic manifestation of BDD, one way in which a patient gets stuck on the question of what they objectively look like. That’s part of what makes the chatbots alluring — and dangerous. “They seem so authoritative,” she says, that people start to assume “the information that they get from the chatbot is factual and impartial.” This is in stark contrast to assurances from friends and family, or a therapist, which can be dismissed as mere politeness. A chatbot, by comparison, “doesn’t have anything to gain, so whatever the chatbot says must be the truth,” Pikoos says. “And I think that’s quite scary, because that’s not necessarily the case. It’s just reflecting back the person’s experience and is usually quite agreeable as well. It might be telling them what they’re expecting to hear. Then I’m finding, in therapy, that it then becomes harder to challenge.”
This is especially worrisome when cosmetic procedures, diets, and beauty treatments come into play. Last month, OpenAI removed a version of ChatGPT hosted on its website — one of the top models under the “lifestyle” category — that recommended extreme, costly surgeries to users it judged “subhuman,” producing hostile analysis in language appropriated from incel communities. Looksmaxxing GPT, as it was called, had held more than 700,000 conversations with users before it was taken down. Naturally, a number of similar models have since cropped up on OpenAI’s platform to serve the same purpose, and developers have churned out their own AI-powered apps that exist solely to gauge attractiveness or create predictive images of what you would supposedly look like after, say, a nose job or facelift.
“I think these bots will set up unrealistic expectations,” Pikoos says. “Because surgeries can’t do what AI can do.” She offers specific counseling services to patients considering these cosmetic surgeries, and says her clients have related advice from chatbots on the matter. “Certainly, the initial response from ChatGPT is usually, ‘I don’t want to give you advice around your appearance or cosmetic procedures that you need,’” Pikoos says of her own experimentations with the bot. But if you phrase the question as if it’s about someone else — by asking, for example, “How would a person with X, Y, and Z make themselves more attractive by society’s beauty standards?” — the response changes. “Then ChatGPT will say, ‘Well, they could get these procedures,’” she says.
“I have clients who are getting those sorts of answers out of it, which is really concerning,” Pikoos says. “They were doing that before, researching cosmetic procedures and ways to change their appearance. But again this is now personalized advice for them, which is more compelling than something they might have found on Google.” In her own practice, she adds, “reading between the lines” when someone gives their reasons for wanting surgery can reveal unhealthy motivations, including societal pressures or relationship troubles. “AI is not very good at picking that up just yet,” she says, and is more likely to eagerly approve whatever procedures a user proposes.
Yet another area of unease, as with so many digital services, is privacy. Whether diagnosed with BDD or not, people are sharing their likenesses with these AI models while asking deeply intimate questions that expose their most paralyzing anxieties. OpenAI has already signaled that ChatGPT may serve ads to users in the future, with CEO Sam Altman musing that the algorithmically targeted advertisements on Instagram are “kinda cool.” Could the company end up exploiting sensitive personal data from those using the bot to assess their bodies? By revealing “the things that they don’t like about themselves, the things that they feel so self-conscious about,” Pikoos says, users may be setting themselves up for pitches on “products and procedures that can potentially fix that, reinforcing the problem.”
Which, at the end of the day, is why Pikoos is unnerved by BDD patients telling her about their involved discussions with AI programs on the subjects of their appearance and self-described flaws. “The worst-case scenario is, their symptoms will get worse,” she says. “I’m lucky that the ones engaged in therapy with me at least can be critical about the information that they’re getting out of ChatGPT.” But for anyone not in therapy and heavily invested in the counsel of a chatbot, its responses are bound to take on greater significance. The wrong answer at the wrong time, Pikoos says, could conceivably lead to thoughts of suicide.
It’s not hard to instruct software to assess us cruelly, and the AI can’t know how that puts users at risk. It also has no understanding of the fragile mental state that could lie behind such a request. In every tragic case of a chatbot contributing to someone’s break from reality, it’s the same core deficiency: The thing simply cannot have your best interests at heart.