For 20 years, a computational complexity theory researcher has posted updates about his work on his blog, Shtetl-Optimized. Last week, his Sept. 27 post about recent findings began circulating among students, researchers and colleagues, largely because it included a single word: “AI.”
Scott Aaronson, Schlumberger Centennial Chair of Computer Science, wrote in the post that he used GPT-5 thinking, one of OpenAI’s latest reasoning models, to assist with a “key technical step” in the proof of a paper written with Freek Witteveen, a researcher at Centrum Wiskunde & Informatica in the Netherlands. The paper, posted to arXiv, a platform for research that has not been peer reviewed, built upon quantum physics research that Aaronson started in 2008.
“It was something that I personally wanted to know the answer to, and it’s the first time in my career that I have seen AI be helpful to me in the actual research process,” Aaronson said.
GPT-5 thinking helped Aaronson with a complex math question that formed a single step in the research process. Aaronson said its first answer was wrong, but after some back and forth, it gave him a better answer.
“If you’re going to use ChatGPT for research, you need to know enough about the subject that you can tell when it’s wrong,” Aaronson said. “That is absolutely crucial.”
Aaronson said Phillip Harris, a postdoctoral researcher at the University of Bonn in Germany, commented on the blog post with a better answer than the one provided by GPT-5 thinking.
“Psychologically, when you see GPT do something, there’s definitely sort of an urge to nitpick it,” Harris said. “Everyone’s a little bit on edge, like they want to find fault with it somehow.”
Harris said AI has advanced to the point where it can help, and that every mathematician should be using it “at least a little bit.” His focus is pure math, the study of abstract mathematical concepts rather than their immediate applications.
“(Pure math) is very removed from real life,” Harris said. “That’s what a lot of people like about it, but then simultaneously, that’s sort of the reason AI is so good at it.”
Aaronson had tried using AI to answer similar problems last year, but the results were not as accurate as they are now, according to the blog post. Though he and his colleague could have eventually solved the problem themselves, he said using GPT-5 thinking saved time.
“Everyone basically knows that AI is useful, right?” said Andrew McAlinden, a computer science and mathematics senior.
McAlinden is a student in Aaronson’s Introduction to Quantum Information Science class, an honors undergraduate elective that is open to students across disciplines.
“If I get stuck, and I really don’t know what to do, my choices are: go to office hours, ask a friend or ask AI,” McAlinden said. “No matter what, I’m asking a third party. The AI just makes it faster.”
Aaronson said a committee in the computer science department will meet this semester to discuss how AI should change the department’s operations.
Aaronson said the department should offer courses where students are expected to use AI, as well as courses that allow students to learn the material for themselves.
“Let’s try to steer this toward better futures rather than worse ones,” Aaronson said. “We need to teach all of the foundational skills, even if AI can now do them.”