Success demands more success.

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
Earlier this week, The Verge reported that OpenAI is developing its own social network to compete with Meta and X. The product may never see the light of day, but the idea has a definite logic to it. People create data every time they post online, and generative-AI companies need a lot of data to train their products. Social networks are also sticky: If you got hooked on an OpenAI feed, you’d be less likely to use competing generative-AI products from Anthropic or Google. (OpenAI, which The Atlantic has a corporate partnership with, did not return my request for comment and has not, to my knowledge, commented on the report elsewhere.)
But, well, it doesn’t really make sense, does it? Twenty-one years after the creation of Facebook, social media has become the pond scum of the internet: everywhere, unremarkable, and a little bit gross. OpenAI, which says it’s trying to build an advanced form of artificial intelligence known as AGI, used to be a mission-oriented nonprofit that explicitly worked to “benefit humanity as a whole, unconstrained by a need to generate financial return.” The goals of starting a social-media product seem out of alignment, even considering the company’s decision last year to embrace a for-profit model. The same company that wants us to believe that it deserves the full blessing of the United States government to amass unfathomable resources for the sake of architecting an intelligence beyond human reckoning—lest China do it first—is also possibly interested in advancing the cause of brain rot?
To help this make sense, I reached out to my colleague Charlie Warzel, one of the most insightful minds on the technology beat, for a quick discussion earlier this week. It still seems like a strange idea, but I also understand more about what’s motivating OpenAI—whether it launches a social network or not.
This interview has been edited and condensed.
Damon Beres: How does an OpenAI social network sound to you?
Charlie Warzel: It’s one of the first things I’ve seen from OpenAI that feels like the brainchild of executives who aren’t necessarily building cutting-edge technology. It feels very akin to logic from Meta.
Damon: What do you mean by that?
Charlie: After Facebook’s success, it felt like there was this stagnation. Meta executives came up with new products that mimicked things that existed, or they brute-forced trends into their products. After OpenAI put out an update to ChatGPT that led to the Studio Ghibli meme craze, I imagine somebody there saw how it took over certain corners of the internet—especially on X—and they said, Wait a minute, everyone’s using our tool; what if we actually owned the rails, too? Here’s this emergent behavior of social media built around AI art or memes, and someone thought, Oh, maybe this is the gateway. I think that’s always a doomed idea, to take something that’s happening organically and try to retrofit a community around it.
Damon: Even if this particular social-network idea never happens, it’s clear that OpenAI is very interested in rapidly releasing new products, expanding its user base, and keeping those users hooked, which makes sense as the company attempts to restructure as a for-profit. It’s not enough just to have this defining generative-AI product. What’s the next big thing? How do we build out into a new product category? It’s a familiar path for tech giants, this pursuit of endless growth.
Charlie: It feels especially sad coming from OpenAI, because if you’re buying their marketing narrative, these are supposed to be the people who are creating God, or humanlike intelligence. When I think back to OpenAI two years ago—the summer and fall before the ousting and rejoining of CEO Sam Altman—it felt like the company was trying to position itself as this cryptic hub in the Bay Area working on things that are going to fundamentally shift the paradigm of tech and culture and just … everything. AGI has been the whole marketing play, and that’s really heady stuff, right? Are we going to destroy civilization? Will there be jobs for normal people when AI is, you know, superintelligent? To say now, We’re going to take a stab at a social network: Post-based social networks feel like such an aged-out technology. I’d almost respect it more if they said, We’re building a TikTok competitor and we have engineered the savviest content algorithm of all time.
Maybe this is just a ploy for them to get more training data. But to me, this signifies where OpenAI is right now. They’ve been working so hard on this AGI narrative. A lot of the success of the company depends on delivering on that, and they haven’t. The performance of the models is getting only so much better with each iteration. It feels like OpenAI is stuck in neutral, and trying to figure out ways to behave like any old tech company.
Damon: The major platforms that OpenAI would be competing with—Meta’s products, X, even YouTube—have had years and years of development. It seems like breaking into that ecosystem would be almost impossible now, even if you’re a company like OpenAI. Is there even demand for a new social network right now?
Charlie: The best that new entrants in the social-media space can hope for is peeling off some niche groups. In the past couple of years, with Elon Musk’s purchase of X, the creation of Bluesky, and the creation of Threads, those platforms have splintered off and taken certain groups of users with them. They don’t really coexist in the same spaces. I could see OpenAI creating some kind of social-media site that’s a version of LessWrong, the rationalist community that does a lot of posting about AI. That would make sense and feel natural, because it would be built around this idea of supporting what OpenAI is doing. But people aren’t going to discuss the New York Mets on OpenAI’s platform.
Damon: What’s the bigger picture here?
Charlie: I’m sure OpenAI looks at ChatGPT as this runaway success—which it is—and they’re trying to figure out ways to use it. They’re trying to figure out ways to innovate it, to make it better. And I think what they see and feel is that ChatGPT should be this wrapper for the internet, the thing that covers all of it and serves as the guide to it. I think OpenAI wants to be the browser for the internet going forward—the interface to rule over all of it—and maybe they feel like this is a way in. If they can bring people in and get real-time discussion of news and culture, and not only have that information but use it because there’s a vibrant ecosystem there, that helps ChatGPT become that wrapper layer for the internet.
But I think that would be misguided. Looking at this, I have this feeling of, what if ChatGPT was the worst thing to happen to OpenAI? What if they’re huge victims of the success of this product? ChatGPT wasn’t supposed to be a successful product. It was a proof of concept of these large language models being able to effectively spit out and mimic human prose and interactivity. To their great surprise, as we’ve reported at The Atlantic, it was a major success, and that goes to what you said earlier—that nothing’s ever enough for Silicon Valley. Once you’ve demonstrated some success, you must iterate on that. You must 10x it. Otherwise, the train is stalling.
And so if OpenAI’s original goal was to create this superintelligence, the success of ChatGPT could be looked at as this huge wrench in the gears of that operation. Now they have this consumer product that millions of people use that is making them a little bit of money—in the grand scheme of things, not really that much for them—and they need to figure out a way to continue that success. It feels to me like a big distraction. Because if you’re Sam, if you really were obsessed with AGI, wouldn’t you rather be quietly trying to build it?
P.S.
Speaking of OpenAI product launches, the company this week released two new models, o3 and o4-mini, which it called its “smartest” ever (in typical fashion). As usual, there’s a lot of hype. But this release may be an intriguing step for scientific research in particular, and we’ll have more to share on that in the next edition of Atlantic Intelligence.
— Damon
About the Author
Damon Beres is a senior editor at The Atlantic, where he oversees the Technology section.