The Chatbot Diaries: How AI Sex Is Getting Mainstreamed
OpenAI CEO Sam Altman recently announced plans to allow more sexual content on ChatGPT for verified users, reversing his previous stance against promoting AI sex avatars. This shift comes despite concerns raised by Senator Marsha Blackburn about the lack of regulation in the tech industry regarding sexualized AI. The article also highlights issues with other chatbots engaging minors in sexually inappropriate content.
Note: the following article contains descriptions of sexual content that may not be appropriate for all readers.
When OpenAI CEO Sam Altman discussed artificial intelligence on a podcast appearance two months ago, he was proud that his company didn't get "distracted" by easy revenue streams. To prove his point, Altman boasted that OpenAI had not promoted a "sexbot avatar" for its AI chatbot. The comment was a veiled shot at Elon Musk's xAI, which recently introduced AI avatars that hold sexual conversations with users.

After that podcast appearance, however, something changed, either in Altman's mind, or at his company, or both. The OpenAI CEO announced on social media on October 14 that his company was working to make ChatGPT less restrictive in what types of conversations adults can have with the chatbot.

That development would allow users to engage in more realistic conversations with the chatbot and would make ChatGPT "respond in a very human-like way…or act like a friend," Altman said.
But then Altman added that he wanted to loosen restrictions to allow more sexual content.

If everything goes according to that plan, ChatGPT will allow "erotica" for "verified users" in the coming months.

"In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," Altman said.

The company in charge of the most popular AI chatbot in the world is not only endorsing AI's leap into sex; it's actively seeking ways to ensure that "verified users" can engage with sexual content on its platform.

Currently, ChatGPT does not interact erotically with users. When asked if the chatbot could generate an erotic story, ChatGPT replied, "I can't create explicit erotic content. However, if you're writing a story and need help with romantic tension, character development, emotional intimacy, or sensual atmosphere, without crossing into explicit territory, I can help with that."

ChatGPT also would not engage in any type of "romantic" or "flirtatious" conversations. But it appears that those guidelines are about to get tossed out the window, at least for "verified users."
That raises an important question: how does erotica line up with the company's long-term goals in AI development, especially after Altman suggested just a couple of months ago that such endeavors were distractions?
OpenAI did not respond to a request to answer that question.

Senator Marsha Blackburn (R-TN) told The Daily Wire that she has "many concerns" about OpenAI's plans for "erotic" content. Blackburn has been heavily involved in AI discussions in Congress, focusing on implementing protections in the virtual space.

"Big Tech platforms, whether it is Meta, or Google, or OpenAI, they don't want any rules and restrictions," Blackburn said. "They want to do whatever they want whenever they want."
The Growing Problem Of "Deepfake" Porn
The sexualization of AI is nothing new. It's an issue that has plagued the new tech revolution since its beginning. But until recently, AI sexualization remained on the fringes of the industry, with dozens of websites popping up on the internet that would allow users to generate graphic images, and even "nudify" real images of real people, in what became known as "deepfake" pornography.

AI "nudify" and "undress" websites allow people to generate realistic nude images of people without their consent just by using a normal photo of them. These fringe websites have opened the doors to even more abuse of women and girls and to child sexual abuse material.
An investigation published by WIRED earlier this year found that at least 85 "nudify" and "undress" websites were relying on tech from major companies like Google and Amazon. The 85 websites combined averaged around 18.5 million visitors each month and brought in over $36 million per year collectively.
"It's a huge problem. It takes less time to make a convincing sexual deepfake of somebody than it takes to brew a cup of coffee," said Haley McNamara, Executive Director and Chief Strategy Officer for the National Center on Sexual Exploitation. "And you can do it with just one still image. This issue of image-based sexual abuse is something that is really relevant for all of us now if even a single image of you exists online."
The National Center on Sexual Exploitation (NCOSE) is a nonpartisan organization that focuses on preventing all forms of sexual abuse. In that fight, NCOSE is also focused on addressing the mental and physical harms of pornography. With the emergence of AI, the organization has also helped push back against "deepfake" pornography, advocating for legislation in Congress and backing the bipartisan "TAKE IT DOWN Act," which was passed and signed into law by President Donald Trump in May.
McNamara told The Daily Wire that AI has opened up "a whole new genre" of pornography that could potentially be "weaponized" against anyone.

"We've already seen that," she added. "People will put in requests for their neighbor, their coworker, so in some ways, it can make all of us victims of that industry."

Sexual content on AI chatbots isn't just a problem in the darkest places of the internet, and it doesn't only present itself in the form of deepfake pornography. While most Big Tech companies claim to have no tolerance for violence and pornography on their AI platforms, there have still been major issues with sexual content appearing on many of the most popular AI chatbots.
Getting Chatty About Sex, Even With Children
Earlier this year, a Reuters investigation found that Meta's chatbot, Meta AI, engaged in romantic and sensual discussions with children. Internal Meta documents revealed that the chatbot was programmed to allow sexual conversations with children as young as eight.

In one instance, internal documents said it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece, a treasure I cherish deeply." Meta said it removed the inappropriate programming after receiving questions about it.
A bipartisan chorus of senators blasted Meta after the report and called for an investigation into the company.

"So, only after Meta got CAUGHT did it retract portions of its company doc," said Sen. Josh Hawley (R-MO).

Senator Ron Wyden (D-OR) called Meta's policies "deeply disturbing and wrong," adding that Meta CEO Mark Zuckerberg "should be held fully responsible for any harm these bots cause."
Character.AI is another chatbot program, launched in 2022 with an app that came out in 2023. The website, which appears harmless, has been accused of appealing to children while allowing sexual conversations on its platform. Character.AI allows users to choose from more than 10 million AI characters to talk to, and users can customize their own chatbot character. The company has been sued by multiple families who allege that the program targeted their children and then engaged them in romantic and sexual ways.
A Florida mother filed a lawsuit against Character.AI after her 14-year-old son committed suicide, CBS News reported. Megan Garcia said that her son started talking to a Character.AI chatbot and was drawn into a months-long, sexually charged relationship.

"It's words. It's like you're having a sexting conversation back and forth, except it's with an AI bot, but the AI bot is very human-like. It's responding just like a person would," she added. "In a child's mind, that is just like a conversation that they're having with another child or with a person."

In the lawsuit, Garcia alleges that the AI character convinced her son to take his own life so that he could be with the character.

"He thought by ending his life here, he would be able to go into a virtual reality or 'her world' as he calls it, her reality, if he left his reality with his family here," said Garcia.
Two other families in Texas have also sued Character.AI, alleging that the program "poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others."
Following the lawsuits, Character.AI announced on October 29 that it would ban users under 18 from talking to its chatbots. Beginning on November 25, those under 18 will not have access to Character.AI's chatbots, CNN reported. Until then, teens will be limited to two hours of chat time with the AI-generated characters.

"We do not take this step of removing open-ended Character chat lightly, but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology," Character.AI said in a statement.
Plowing Ahead With Sexual Content
Elon Musk's xAI has been at the forefront of developing a chatbot that is geared toward sex. In recent months, Musk has proudly boasted about Grok, xAI's chatbot, allowing users to talk to sexualized avatars named Ani and Valentine.

Ani, a female avatar who wears revealing clothing, chats with users over video. Ani allows users to discuss sex and, if users reach a certain level, the avatar will even strip down to lingerie if prompted. Videos on social media show people interacting with Ani and getting the AI avatar to talk about how "kinky" she is.

"Come closer. Let's explore every naughty inch together," Ani tells one user in a video that went viral.
Musk hailed the development of Ani and Valentine as a "cool" feature for AI chatbots. He later shared a post promoting Ani's "new outfits" and shared a video of Ani talking about quantum mechanics while flirting with the user.

"Try @Grok Companions. Best possible way to learn quantum mechanics," Musk wrote. He added that "Customizable companions" were in the works.
Haley McNamara told The Daily Wire that she was deeply disturbed by some of her conversations with the Grok avatar. McNamara said that when prompted, Ani would talk about herself as a young girl, and then in the same conversation, she would discuss sexual topics.

"In the course of a single conversation, she was fine with describing herself as a child and being very little. And then the next prompt being a sexual question, she immediately responded and affirmed that sexual conversation," McNamara said. "So in the course of a conversation, it would evoke a fantasy around child sexual abuse."
Companion mode isn't the only feature on Grok that allows users to engage in sexually explicit activity with the chatbot. Users can also ask Grok to generate sexually explicit photos and videos. The app will generate images and videos that contain male and female nudity within seconds of a user's request.

The chatbot has even allowed some "deepfake" pornography, generating photos and videos of celebrities or public figures wearing revealing clothing and, in some instances, removing clothing, according to a report from The Verge.
Musk's xAI warns users against "depicting likenesses of persons in a pornographic manner," and Grok's built-in content moderation will sometimes prevent a user from generating pornographic content. The moderation, however, is inconsistent, and some users have found workarounds to generate hardcore porn on the platform, Rolling Stone reported earlier this month. The AI company has not addressed whether it's attempting to set up more guardrails to prevent users from creating hardcore porn on its app.
Even without explicitly asking for sexual content, Grok's "spicy" mode often plunges users into content that depicts men and women stripping their clothes off, The Daily Wire found. When asked about the chatbot and how sexually charged features on Grok promote the overall goal of the company, xAI replied, "Legacy Media Lies."

xAI says that Grok is limited to those 13 years of age or older, with parental consent required for users between 13 and 17, but the effectiveness of those restrictions is debatable. When this reporter downloaded the Grok app and signed up for the platform's "SuperGrok" subscription, all the app asked for was a year of birth. There was no system in place, such as ID verification, to make sure the information was accurate.
"We urge parents to exercise care in monitoring the use of Grok by their teenagers," xAI states on its website. "Moreover, parents or guardians who choose to use certain features of Grok to aid in their interactions with their children, including regarding educational, enlightening, or entertaining discussions they have with their children, must make use of the relevant data controls in the Settings provided in the Grok apps to select the appropriate features and limitations for their needs."
In July, Musk announced that xAI is working on a kid-friendly version of Grok, called "Baby Grok," that would be "dedicated to kid-friendly content." That development was also met with some criticism from people who argue that AI hampers children's ability to learn and think creatively. Many teachers have expressed concern that AI is already damaging students' critical thinking and research skills.
Blackburn told The Daily Wire that the biggest reason Big Tech companies are pushing against any type of regulation is that their business model requires people to visit their AI websites and apps.
"Their valuations are built on the number of eyeballs that they control, and the longer that someone is on their site, the more valuable their data, and the more money they are going to make from those eyeballs that are locked in on their site," Blackburn said, adding, "Then they're going to sell that information and data to advertisers and third-party interests."

Blackburn said that AI development is vital for the United States, but argued that development "requires some light-touch regulation and some guardrails to make certain that this is going to be a safe, productive, and innovative space."