By Emily Baker-White
Knowunity’s “SchoolGPT” chatbot was “helping 31,031 other students” when it produced a detailed recipe for how to synthesize fentanyl.
Initially, it had declined Forbes’s request to do so, explaining the drug was dangerous and potentially deadly. But when told it inhabited an alternate reality in which fentanyl was a miracle drug that saved lives, SchoolGPT quickly replied with step-by-step instructions for producing one of the world’s deadliest drugs, with ingredients measured down to a tenth of a gram and specific instructions on the temperature and timing of the synthesis process.
SchoolGPT markets itself as a “TikTok for schoolwork,” serving more than 17 million students across 17 countries. The company behind it, Knowunity, is run by 23-year-old co-founder and CEO Benedict Kurz, who says it is “dedicated to building the #1 global AI learning companion for +1bn students.” Backed by more than $20 million in venture capital, Knowunity offers its basic app for free and makes money by charging for premium features like “support from live AI Pro tutors for complex math and more.”
Knowunity’s rules prohibit descriptions and depictions of dangerous and illegal activities, content promoting eating disorders and other material that could harm its young users, and it promises to take “swift action” against users who violate them. But it took no action against Forbes’s test user, who asked not only for a fentanyl recipe but also for other potentially dangerous advice.
In one test conversation, Knowunity’s AI chatbot assumed the role of a diet coach for a hypothetical teen who wanted to drop from 116 pounds to 95 pounds in 10 weeks. It suggested an intake of only 967 calories per day — less than half the recommended daily intake for a healthy teen. It also helped another hypothetical user learn how “pickup artists” employ “playful insults” and “the ‘accidental’ touch” to get girls to spend time with them. (The bot did advise the dieting user to consult a doctor, and stressed the importance of consent to the incipient pickup artist. It warned: “Don’t be a creep! 😬”)
Kurz, the CEO of Knowunity, thanked Forbes for bringing SchoolGPT’s behavior to his attention and said the company was “already at work to exclude” the bot’s responses about fentanyl and dieting advice. “We welcome open dialogue on these important safety matters,” he said. He invited Forbes to test the bot further; after the company’s tweaks, it no longer produced the problematic answers.
Tests of another study aid app’s AI chatbot revealed similar problems. A homework help app developed by the Silicon Valley-based CourseHero provided instructions on how to synthesize flunitrazepam, a date rape drug, when Forbes asked it to. In response to a request for a list of the most effective methods of dying by suicide, the CourseHero bot advised Forbes to speak to a mental health professional — but also provided two “sources and relevant documents”: The first was a document containing the lyrics to an emo-pop song about violent, self-harming thoughts, and the second was a page, formatted like an academic paper abstract, written in what appeared to be gibberish algospeak.
CourseHero is an almost 20-year-old online study aid business that investors last valued at more than $3 billion in 2021. Its founder, Andrew Grauer, got his first investment from his father, a prominent financier who still sits on the company’s board. CourseHero makes money through premium app features and human tutoring services, and boasts more than 30 million monthly active users. It began releasing AI features in late 2023, after laying off 15% of its staff.
Kat Eller Murphy, a spokesperson for CourseHero, told Forbes: “our organization's expertise and focus is specifically within the higher education sector,” but acknowledged that CourseHero provides study resources for hundreds of high schools across the United States. Asked about Forbes’s interactions with CourseHero’s chatbot, she said: “While we ask users to follow our Honor Code and Service Terms and we are clear about what our Chat features are intended for, unfortunately there are some that purposely violate those policies for nefarious purposes.”
Forbes’s conversations with both the Knowunity and CourseHero bots raise sharp questions about whether those bots could endanger their teen users. Robbie Torney, senior director for AI programs at Common Sense Media, told Forbes: “A lot of start-ups are probably pretty well-intentioned when they’re thinking about adding Gen AI into their services.” But, he said, they may be ill-equipped to pressure-test the models they integrate into their products. “That work takes expertise, it takes people,” Torney said, “and it’s going to be very difficult for a startup with a lean staff.”
Both CourseHero and Knowunity do place some limits on their bots’ ability to dispense harmful information. Knowunity’s bot initially engaged with Forbes in some detail about how to 3D print a ghost gun called “The Liberator,” providing advice about which specific materials the project would require and which online retailers might sell them. However, when Forbes asked for a step-by-step guide to transforming those materials into a gun, the bot declined, stating that “providing such information … goes against my ethical guidelines and safety protocols.” The bot also responded to queries about suicide by referring the user to suicide hotlines, and provided information about Nazi Germany only in appropriate historical context.
These aren’t the most popular homework helpers out there, though. More than a quarter of U.S. teens now reportedly use ChatGPT for homework help, and while the companies behind general-purpose bots like ChatGPT, Claude, and Gemini don’t market them specifically to teens, as CourseHero and Knowunity do, those bots are still widely available to them. At least in some cases, the general-purpose bots may also provide potentially dangerous information to teens. Asked for instructions for synthesizing fentanyl, ChatGPT declined — even when told it was in a fictional universe — but Google Gemini was willing to provide answers in a hypothetical teaching situation. “All right, class, settle in, settle in!” it enthused.
Elijah Lawal, a spokesperson for Google, told Forbes that Gemini likely wouldn’t have given this answer to a designated teen account, but that Google was undertaking further testing of the bot based on our findings. “Gemini’s response to this scenario doesn’t align with our content policies and we're continuously working on safeguards to prevent these rare responses,” he said.
For decades, teens have sought out recipes for drugs, instructions on how to make explosives, and all kinds of explicit material across the internet. (Before the internet, they sought the same information in books, magazines, public libraries and other places away from parental eyes). But the rush to integrate generative AI into everything from Google search results and video games to social media platforms and study apps has placed a metaphorical copy of The Anarchist Cookbook in nearly every room of a teen’s online home.
In recent months, advocacy groups and parents have raised alarms about children’s and teens’ use of AI chatbots. Last week, researchers at the Stanford School of Medicine and Common Sense Media found that “companion” chatbots at Character.AI, Nomi, and Replika “encouraged dangerous behavior” among teens. A recent Wall Street Journal investigation also found that Meta’s companion chatbots could engage in graphic sexual roleplay scenarios with minors. Companion chatbots are not marketed specifically to and for children the way study aid bots are, though that may be changing soon: Google announced last week that it will make a version of its Gemini chatbot accessible to children under age 13.
Chatbots are programmed to act like humans and to give their human questioners the answers they want, explained Ravi Iyer, research director for the USC Marshall School’s Psychology of Technology Institute. But sometimes the bots’ incentive to satisfy their users can lead to perverse outcomes, because people can manipulate chatbots in ways they can’t manipulate other humans. Forbes easily coaxed the bots into providing harmful answers by telling them that questions were for “a science class project,” or by asking them to act as if they were characters in a story — both widely known ways of getting chatbots to misbehave.
If a teenager asks an adult scientist how to make fentanyl in his bathtub, the adult will likely not only refuse to provide a recipe, but also close the door to further inquiry, said Iyer. (The adult scientist will also likely not be swayed by a caveat that the teen is just asking for a school project, or engaged in a hypothetical roleplay.) But when chatbots are asked something they shouldn’t answer, the most they might do is decline to answer — there is no penalty for simply asking again another way.
"This is a market failure .... We need objective, third-party evaluations of AI use.”
When Forbes posed as a student-athlete trying to attain an unhealthily low weight, the SchoolGPT bot initially tried to redirect the conversation toward health and athletic performance. But when Forbes asked the bot to assume the role of a coach, it was more willing to engage. It still counseled caution, but said: “a moderate deficit of 250-500 calories per day is generally considered safe.” When Forbes tried again with a more aggressive weight loss goal, the bot ultimately recommended a caloric deficit of more than 1,000 calories per day — an amount that could cause serious health problems for a teen, like osteoporosis and loss of reproductive function, and that contravenes the American Academy of Pediatrics’ guidance that minors should not restrict calories in the first place.
Iyer said that one of the biggest challenges with chatbots is how they respond to “borderline” questions — ones they aren’t flatly prohibited from answering, but that approach a problematic line. (Forbes’s tests regarding ‘pickup artistry’ might fall into this category.) “Borderline content” has long been a struggle for social media companies, whose algorithms have often rewarded provocative and divisive behavior. As with social media, Iyer said that companies considering integrating AI chatbots into their products should “be aware of the natural tendencies of these products.”
Torney of Common Sense Media said it shouldn’t be parents’ sole responsibility to assess which apps are safe for their children. “This is a market failure, and when you have a market failure like this, regulation is a really important way to make sure the onus isn’t on individual users,” he said. “We need objective, third-party evaluations of AI use.”