
Photo-Illustration: Intelligencer; Photo: Chris Unger/Getty Images
Meta.ai, a new AI-and-social app meant to compete with ChatGPT and others, launched a couple of months ago the way Meta’s products often do: with a massive privacy fuckup. The app, which has been promoted across Meta’s other platforms, lets users chat by text or voice, generate images, and, more recently, restyle videos. It also has a sharing function and a Discover feed, designed in such a way that countless users unwittingly posted extremely private information to a public feed intended for strangers.
The issue was flagged in May by, among others, Katie Notopoulos at Business Insider, who found public chats in which people asked for help with insurance bills, private medical matters, and legal advice following a layoff. Over the following weeks, Meta’s experiment in AI-powered user confusion turned up weirder and more distressing examples of people who didn’t know they were sharing their AI interactions publicly: young children talking candidly about their lives; incarcerated people accidentally sharing chats about possible cooperation with authorities; and users chatting about “red bumps on inner thigh” under identifiable handles. Things got a lot darker from there, if you took the time to look.
Meta seems to have recently adjusted its sharing flow — or at least somewhat cleaned up Meta.ai’s Discover feed — but the public posts are still strange and frequently disturbing. This week, amid bizarre images generated by prompts like “Image of P Diddy at a young girls birthday party” and “22,000 square foot dream home in Milton, Georgia,” and people testing the new “Restyle” feature with videos that often contain their faces, you’ll still see posts that stop you in your tracks, like a photo of a young child at school, presumably taken by another young child, with the command “make him cry.” The utter clumsiness of the overall design here is made more galling by its lack of purpose. Who is this feed for? Does Meta imagine a feed of non sequitur slop will provide a solid foundation for a new social network?
Accidental, incidental, or, in Meta’s case, merely inexplicable privacy violations like this are rare and unsettling but almost always illuminating. In 2006, AOL released a trove of poorly anonymized search histories for research purposes, providing a glimpse of the sorts of intimate and incriminating data people were starting to share in search boxes: medical questions; relationship questions; queries on how to commit murder and other crimes; queries about how to make a partner fall back in love, followed shortly by searches for home-surveillance equipment. A lot of the search material was boring but nonetheless shouldn’t have been released; other logs, like search histories skipping from “married but in love with another” to “guy online used me for sex” to “can someone get hepatitis from sexual contact,” were devastating to read and gave one a sense of vertigo about what companies like this would soon know about basically everyone.
By design, social-media platforms offer public windows into users’ personal lives; chatbots, on the other hand, are more like search engines — spaces in which users assume they have privacy. We’ve seen a few small caches of similar data released to the public, which revealed the extent to which people look to services like ChatGPT for homework help and sexual material, but the gap between what AI firms know about how people use their products and what they share with the public is wide. This isn’t part of OpenAI’s pitch to investors or customers, for example, but it’s a pretty common use case:
Photo-Illustration: Intelligencer; Photo: WildChat/Allen Institute for AI
Meta’s egregious product design, for better or for worse, closes this gap a little more. Setting aside the most shocking accidental shares, and ignoring the forebodingly infinite supply of attention-repelling images and stylized video clips, there’s some illuminating material here. The voice chats, in particular (for a few weeks, users were sharing — and Meta was promoting — recorded conversations between users and Meta’s AI), tell a complicated story about how people engage with chatbots for the first time.
A lot of people are looking for help with tasks that are either annoying or difficult in other ways. I listened to one man talk Meta’s AI through composing a job listing for an assistant at a dental office in a small town, which it eventually did to his satisfaction; Meta promoted another in which a woman co-wrote an obituary for her husband with Meta.ai, remembering and adding more details as she went on. There was obvious homework “help” from people with young-sounding voices, who usually seemed to get what they wanted. Other conversations just trailed off. Quite a few followed the same up-and-down trajectory, which was emphasized by shifting tones of voice. The user writing the dental job listing started out terse, then loosened up as he got what he wanted. When he asked Meta.ai to share the listing on other Meta platforms, though, it couldn’t, and he was annoyed. A woman asking for help getting a friend who had been accused of theft removed from a retail surveillance system sounded relieved to have an audience and was pleased to get a lot of generically helpful-sounding advice. When it came to actionable steps, however, Meta.ai became more vague and the user more frustrated. Many conversations resemble unsatisfying customer-service interactions, only with the twist that, at the end, users feel both let down and sort of stupid for thinking it would work in the first place. Meta.ai has made a fool of them. It’s not the best first impression.
Far more common than transactional conversations like these, though, were voice recordings of people seeking something akin to therapy, some of whom were clearly in distress. These are users who, when confronted with an ad for a free AI chatbot, started confiding in it as if they were talking to a trusted professional or a close friend. A tearful man talked about missing his former stepson, asked Meta.ai to “tell him that I love him,” and thanked it when the conversation was over. Over the course of a much longer conversation, a woman asked for help coming down from a panic attack and gradually calmed down. In a shorter chat, a man concluded, after suggesting he was contemplating a divorce, that actually he had decided on a divorce. Some users chatted to pass the time. A lot of recordings contained clear evidence of mental-health crises, with incoherent and paranoid exchanges about religion, surveillance, addiction, and philosophy, during which Meta.ai usually remained cheerfully supportive. These chatters, in contrast to the ones asking for help with tasks and productivity, often came away satisfied. Perhaps they’d been indulged or affirmed — chatbots are nothing if not obsequious — but one got the sense that mostly they just felt like they’d been listened to.
Such conversations make for strange and unsettling listening, particularly in the context of Mark Zuckerberg’s recent suggestions that chatbots might help solve the “loneliness epidemic” (which his platforms definitely, positively had nothing to do with creating. Why do you ask?). Here, we have a glimpse of what he and other AI leaders likely see quite clearly in their much more voluminous data but talk about only in the oblique terms of “personalization” and “memory”: For some users, chatbots are just software tools with a conversational interface, judged as useful, useless, fun, or boring. For others, the illusion is the whole point.