By Brittany Luse, Alexis Williams, and Neena Pathak | Sep 08, 2025 3:00 AM (NPR)
(SOUNDBITE OF MUSIC)
BRITTANY LUSE, HOST:
Hello, hello. I'm Brittany Luse, and you're listening to IT'S BEEN A MINUTE from NPR, a show about what's going on in culture and why it doesn't happen by accident.
(SOUNDBITE OF MUSIC)
LUSE: The other day, my friend sent me a song he wanted me to check out. It was a country song, pretty decent. Come to find out it was AI-generated.
(SOUNDBITE OF MUSIC)
LUSE: Then I open my phone, scroll through social media and see Luigi Mangione, who was indicted for killing the former CEO of UnitedHealthcare, Brian Thompson, on Shein. No, I'm not kidding. Luigi Mangione's likeness was used to model a shirt for fast-fashion retailer Shein. The photo has since been removed, but it has caused some to speculate that it may have been AI-generated.
According to a spokesperson from Shein, the image in question was provided by a third-party vendor. But it just felt like another indication that AI content has become a common part of our everyday lives.
DREW HARWELL: AI slop is kind of known for being uncanny in a way.
LUSE: That is Washington Post tech reporter Drew Harwell.
HARWELL: It can be jokey. It can be cringy. It can be weird. But the main attribute of it is that it's so prevalent that you basically can't escape it.
LUSE: In the past year or so, you've probably heard the term AI slop thrown around more and more. According to The Guardian, AI slop is usually low-quality, mass-produced, AI-generated content.
HARWELL: It's empty calories in a way because you start watching it because it gives you something that's kind of interesting but is kind of soulless in that way that we've come to know AI content is all about at its heart.
LUSE: AI slop has been compared to spam. It's everywhere and almost unavoidable. But it's also pretty profitable, so profitable that Drew wrote about it for the Post. His article "Making Cash Off AI Slop: The Surreal Video Business Taking Over The Web" dives deep into the gig economy AI slop has created.
HARWELL: You know, the people I talk to, these are people who are, you know, middle class, who have kind of, like, side jobs or who are college students and saw this as, like, a thing they could do at night to make a couple bucks. And some of them were surprised at how much money they were making from this.
LUSE: One creator Drew spoke to made close to $1,000 in four days from an AI video of an influencer eating glass fruit. But that's just one aspect of AI slop. According to freelance writer Emma Marris, it goes deeper than the uncanny valley.
EMMA MARRIS: I also would include AI-written product reviews, AI music, fake dating profiles that are AI or even people who use ChatGPT to write their flirtatious texts when they're texting back and forth within a dating app.
LUSE: Emma wrote "AI Slop Might Finally Cure Our Internet Addiction" for The Atlantic. And for her, AI slop has seeped into so many aspects of our digital lives that it's made logging off completely much more compelling.
MARRIS: Some of these slop videos are clearly, like, cartoonish, but a lot of them are meant to fool people or at least to be believable. 404 Media did a story about a lot of AI slop coming out of Washington, D.C., when the National Guard was there recently, or is still there, and people seeing these fake scenes of what was happening in D.C. all over the country and getting a very skewed idea of what was going on.
LUSE: A skewed idea in the sense that some videos, according to 404 Media, exaggerated the scale and disarray of the encampments in D.C., which could lead some viewers to believe that the immediate situation with the homeless encampments was worse than it actually was. We're already dealing with a lack of public trust in the media, and AI slop might be siloing us even more.
(SOUNDBITE OF MUSIC)
MARRIS: But at a sort of more serious level, it's made it just deeply untrustworthy. Like, you cannot trust that the news report you're seeing on TikTok is real footage. You cannot trust that the reviews you're reading of, like, the sneakers you want to buy are written by a human. The internet becomes less useful because you can't, like, get information from it in the same way that you used to be able to because of this trust problem.
LUSE: And according to NBC News, there are people being hired to make these AI slop videos even more realistic.
MARRIS: My worry is that we'll see a bifurcation of culture into sort of the very online and the increasingly offline. And then it'll be a new form of bubble that'll be difficult to communicate across.
(SOUNDBITE OF MUSIC)
LUSE: This is AI + U. As artificial intelligence becomes more prevalent in ways we can and can't see, for the next couple of weeks, I'm zeroing in on how AI shows up in our daily lives. Today on the show, Drew Harwell and Emma Marris join me to get into how AI slop complicates our ideas of creativity and challenges what the truth means to us on and offline.
(SOUNDBITE OF MUSIC)
MARRIS: What's interesting is that, like, a lot of this content isn't, like, necessarily feel-good. Some of it is. Like, the, you know, musicians rally around Phil Collins' bedside. Like, that's feel-good stuff. Like, and there's a lot of AI slop around celebrities rescuing people from floods or fires or whatever. So that's sort of feel-good.
But often it's really uneasy-making. Like, there's a lot about babies in danger. There's a lot about, like, kind of body horror, things changing. And I think it's basically just about tapping into our most primal emotions, and it just makes you hesitate, sometimes in a state of horror or shock, just long enough to monetize you.
Like, that's the whole goal here, is just to grab your attention long enough to make a buck out of it.
HARWELL: Yeah. I'll also say to that, I'm somebody who has added a bunch of, like, documentaries to my to-watch list, and then at night, I just end up watching the same reality TV garbage because...
LUSE: (Laughter) My husband does that, too (laughter).
HARWELL: You know, we have high aspirations for what we're going to consume at any moment, and then reality invades. And I'll say, like, with social media especially, this is, like - we don't watch the best thing we can watch in that moment. We watch what is presented to us. And if we go on TikTok, we're going to get one channel of content, and if it's good enough, we are going to watch it. And that is happening on a global scale.
And, you know, part of it is that these creators are very good at working the algorithms, churning out stuff en masse, so there's a huge volume of content that pushes everything out. So part of it is a supply thing, but the other is just that, you know, we go with it because it is good enough. And that's the worry, is, like, we're going to get a lot of this stuff that is good enough but doesn't sate us. And it's going to crowd everything out, it's going to take all the attention away, it's going to take all the money away.
LUSE: Well, and then you want more.
HARWELL: That's right.
LUSE: You know, there are some people - they might argue that they are, you know, using creativity to make this stuff. They're the ones who come up with the prompts, or they might nudge the prompts if they're not getting the exact results that they want. They're like, well, the AI just starts it, but I have to copy edit it. I have to, you know, blah, blah, blah. Like, is there any validity to the argument that there's, like, creativity in this, that there's any type of creation happening?
HARWELL: You know, I think there's a spectrum of creativity. I think there's a range, just like with any media, with any art form. And I definitely did talk to slop makers, slopsters, who basically just hit a button, and out it popped, and they posted it, and who cared? They moved on with their life. But there were others, you know - they might call themselves AI artists - for whom the AI was a tool, and it was one piece of it, and they got the output from the tool, but then they did all sorts of other things, and that's where the creativity came from.
And I can actually see it. Like, I could see the workflow in their mind. They were doing stuff that I couldn't do, that I could tell that there was, like, those creative choices were really what kind of distinguished what they were doing. And so, you know, not to throw the baby out with the bathwater, like, there's a lot of bad AI content, but also, there are some people who are using it just like people have used Photoshop and recording before, as just, like, one part of the process.
And I think, you know, it's possible that in the future, you know, we're going to move past this kind of, like, infancy phase, where there are a lot of people just making puzzle books by typing one prompt into ChatGPT, and it's going to potentially be more of just, like, one element. And what makes the really good stuff different from the really bad stuff is the human kind of, like, nudging it in different ways to make something that's actually unique.
MARRIS: Yeah. I think that's possible. But the other thing to keep in mind, too, is that these models are all trained on, you know, a bunch of stuff that came before. So they don't really have the ability to produce something genuinely new. Like, if you were trying to come up with, like, a new visual style or, like, a new literary form, it wouldn't be able...
LUSE: Right.
MARRIS: ...To do that on its own because it can - all it can do is sort of remix what it's already been fed. So, I mean, my concern is that in creative fields or even product design, if people lean too heavily on these tools, then we will sort of see culture spin its wheels and not really come up with really new stuff, just kind of remixing it. We don't have a new vision for the future. We're just going to talk about the past.
LUSE: You know, some of these creators, though, even though they're not making content in the traditional way that influencers or online creators make content, they're still dealing with some of the things that content creators have to deal with more broadly, specifically threats. Some of these creators that you spoke to, Drew, have received threats or are afraid of harassment from people online. What do you think it is about this content that elicits such an extreme reaction from the audience?
HARWELL: Yeah. That was something I was actually surprised to learn. I kind of expected they would be getting mean messages because this is such, like, an emotionally fraught territory. You know, there are a lot of artists...
LUSE: Sure.
HARWELL: ...Real-world creators who hate anything AI, sometimes rightfully so, because it kind of crowds them out. But for some of these creators - actually this Infinite Unreality guy, he was saying, like, I posted a video with an elephant, you know, just doing some goofy AI thing, and I got these messages from around the world of people saying, like, elephants are sacred to me. What have you done? You've, like, invoked something from beyond our world that I'm really uncomfortable by.
And, you know, when you talk to AI people, like, the programmer types, they see this as, like, just ones and zeros, fancy math, a technical program. But it is so, you know, for - it's that old quote, like, indistinguishable from magic. Like, people invest so much mental energy, and it becomes this bigger thing than that, that everybody has these giant reactions to it.
And so, I think, you know, when we go online and we see these AI-generated, you know, horror films or satires or memes or people who are - you know, are fake, but we're - they're telling us that they're real, it just, like, unnerves us and disturbs us, and some people just have a giant reaction to that.
LUSE: You both have brought up an interesting point, which is that, like, AI slop, like, exists. But that doesn't mean human slop doesn't exist, either, right?
MARRIS: Oh, so true. So true.
LUSE: And, you know, one thing that gets attention, whether it's made by humans or AI, is rage bait. And we've just spent a lot of time on this show talking about rage bait. In your article, you report, "The best-performing videos, Gadala-Maria said, have often relied on shock value, such as racist and sexist jokes depicting Black women as primates, as first reported by Wired, or joking about what young AI gals gone wild would do for cash."
Why do you think that content is so well performing?
HARWELL: Yeah. So rage bait, like the attention economy - if everybody is trying to do this and we have human influencers, and we have journalists, we have human makers of all kinds, they're all competing for the same finite amount of attention online and finite amount of money. So to stand out, they have to really piss somebody off, or they have to really be outrageous or - you know.
So when you add AI into the operation, if you want to say, I want to make a racist joke or I want to make a sexist joke, all you have to do is spend two seconds typing something into an AI tool, and out it'll pop a video that looks realistic, takes you zero time, takes you zero effort.
And for a lot of these creators I talk to and that I've seen, it's a numbers game. Not every video is going to do well. Some of them nobody will watch. But if you can create a hundred in a day, why not, you know? And maybe only one will do well, and that's all you really need.
LUSE: I wonder - we've seen what, like, human-generated, like, rage bait content has done to our society. What potential does AI slop rage bait have for, you know, the types of narratives, you know, it can fuel?
MARRIS: There's been some interesting studies on how AI can be used and is already being used for, like, serious propaganda efforts, you know, like state-sponsored stuff. And I've heard the term AI pasta thrown around. So there's this internet idea of copypasta, where you just sort of cut and paste some text, and you repost it so many times that eventually people - it just kind of oozes into people's consciousness. But with AI, you can rephrase it a whole bunch of different ways or make slightly different versions of the same image or the same video. And as people see it more and more times, they'll just sort of eventually come to accept it as part of reality. So it can be a really effective way to convince people of stuff that's not true for bad actors.
HARWELL: You use it because - I'm a bald white guy, so I can say this. If I want to make all bald white guys look stupid or look like they all believe the same thing or they all do the same thing that people find distasteful, I can just make the AI create a very realistic, lifelike video of them doing that and apply that to whatever group you want to, right? And so, you know, in the past and in current day, you'll watch videos that will upset you, and you're like, is this real? Is this a joke? Is this a prank? Is this, like, somebody just being an influencer? And now you extrapolate that to...
LUSE: Yeah.
HARWELL: ...AI where you don't even need a film crew. You don't even need an actor. You can just do that from your phone through AI in a way that nobody would be able to tell. So I think that's - yeah, the state propaganda point is a really good one because you can see how a bad actor with enough incentive, with enough time, with the initiative can make people believe en masse something they may otherwise not believe. And when you have AI, it's like a force multiplier to the points you want to make. And you can use the AI to sort of put out that - put out those messages and that propaganda you agree with.
MARRIS: I mean, it doesn't solve the problem of this, but you as an individual can always choose not to give your attention to this stuff. Like, that's the point of my story, is that when the internet gives you slop, you can decide to just spend less time on the internet, you know? And there's something really freeing about just taking back some of your attention and time and just walking around, like, unmonetized. Like, it feels good to not be a profit center all day, every day.
LUSE: Gosh, the self is a profit center. You shook me just now, Emma (laughter). Oh, my gosh. I'm sorry - just processing all of this right now. Like, what is the end goal here?
HARWELL: You know, so I will say, I went so deep into the darkness on this that I think I emerged out of the other side. I actually have, like, a kernel of copium that I'm, like, clinging to that I feel like is positive. And it's Emma's point in that I watched so much AI slop that it just became white noise to me, and I was craving human media. I was craving the imperfections of people on a screen saying things that weren't perfectly algorithmically optimized for virality.
And I kind of feel like this stuff could make us appreciate classic human creation. I'm a writer, so I'm biased. I - you know, so take that all with a grain of salt. But I do think, like, if this affects our society as much as I think it will, it might, like, yeah, reinspire us to appreciate the art that, you know, takes somebody's pain and time and effort and thought to really create as opposed to just tapping a couple buttons.
MARRIS: Yeah. I mean, I - that's my hope, too. I guess my worry is that there'll be a bifurcation, that some people's response to all of this will be like, well, I can't trust the internet, and I don't really like what I'm seeing or how it makes me feel. So I'm going to spend less time online, and I'm going to do more stuff in - you know, with my real friends in my...
LUSE: Real life, yeah.
MARRIS: ...Real life and read books on paper and listen to the radio and go to the real store to buy things and try them on. And then other people are just going to be like, oh, this is fine. I love this. Oh, I think, you know, the diaper cat stories are adorable. And we'll see a bifurcation of culture into sort of the online - the very online and the increasingly offline. And then it'll be a new form of bubble that'll be difficult to communicate across.
(SOUNDBITE OF MUSIC)
LUSE: Oh, my gosh. Drew, Emma, thank you both so much. I really appreciate this conversation.
HARWELL: Yeah, thank you.
MARRIS: Thank you.
LUSE: That was Washington Post tech reporter Drew Harwell and freelance writer Emma Marris. And I'm going to put on my influencer hat for a second and ask you to please subscribe to this show on Spotify, Apple or wherever you're listening. Click follow so you know the latest in culture while it's still hot.
This episode of IT'S BEEN A MINUTE was produced by...
ALEXIS WILLIAMS, BYLINE: Alexis Williams.
LUSE: This episode was edited by...
NEENA PATHAK, BYLINE: Neena Pathak.
LUSE: Our supervising producer is...
BARTON GIRDWOOD, BYLINE: Barton Girdwood.
LUSE: Our executive producer is...
VERALYN WILLIAMS, BYLINE: Veralyn Williams.
LUSE: Our VP of programming is...
YOLANDA SANGWENI, BYLINE: Yolanda Sangweni.
LUSE: All right. That's all for this episode of IT'S BEEN A MINUTE from NPR. I'm Brittany Luse. Talk soon.
(SOUNDBITE OF MUSIC)