Following the recent launch of a new family of GPT-4.1 models, OpenAI released o3 and o4-mini on Wednesday, the latest additions to its line of reasoning models. The o3 model, previewed in December, is OpenAI's most advanced reasoning model to date, while o4-mini is a smaller, cheaper, and faster model.
Simply put, reasoning models are trained to "think before they speak": they take more time to process a prompt but return higher-quality responses. Like OpenAI's earlier reasoning models, o3 and o4-mini show strong performance on coding, math, and science tasks, and they improve on their predecessors' results. They also add an important new capability: visual understanding.
Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbot
o3 and o4-mini are OpenAI's first models to "think with images." OpenAI explains that the models don't just see an image; they can use the visual information directly in their reasoning process. Users can also upload images that are low quality or blurry, and the models will still be able to understand them.
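For developers, image inputs are also available outside ChatGPT. Here's a minimal sketch using OpenAI's Python SDK; the base64 data-URL format is the SDK's documented way to pass a local image, but the model name and its image support on this endpoint are assumptions about what your account can access:

```python
# Minimal sketch: sending a local image to a reasoning model via OpenAI's
# Python SDK. Assumes the openai package is installed and OPENAI_API_KEY is
# set; the model name "o3" is an assumption, not a detail confirmed here.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local (possibly low-quality) image as a data URL.
with open("poster.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumption: a vision-capable reasoning model on your account
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What conclusion does this research poster support?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```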
Another major first is that o3 and o4-mini can independently or agentically use all ChatGPT tools, including web browsing, Python, image understanding, and image generation, to better resolve complex, multi-step problems. OpenAI says this ability allows the new models to take "a step toward a more agentic ChatGPT that can independently execute tasks on your behalf."
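The article describes tool use inside ChatGPT, but the API exposes the same idea through function calling, where the model itself decides whether to invoke a tool you define. A minimal sketch, assuming a tool-capable model on your account; the search_web function is a hypothetical stand-in you would implement yourself:

```python
# Minimal sketch: letting a model decide on its own whether to call a tool,
# the same pattern the article describes inside ChatGPT. Only the
# tools/function-calling format is a documented part of the SDK; search_web
# is a hypothetical stand-in.
import json
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web and return a short summary of results.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o4-mini",  # assumption: a tool-capable model available to you
    messages=[{"role": "user", "content": "What did OpenAI release this week?"}],
    tools=tools,
)

# If the model chose to use the tool, the call and its arguments come back
# for your code to execute before returning the result to the model.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```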
Also: The top 20 AI tools of 2025 - and the #1 thing to remember when you use them
In the livestream launching the models, the team explained that, just as a person might use a calculator to produce a better answer, the new models can now employ all of OpenAI's advanced tools to deliver better results.
For example, in a demo, the researcher fed a scientific research poster to o3 and prompted it to analyze the image and draw a conclusion that wasn't included in the poster. To get the answer, o3 knew to browse the internet and zoom into different elements of the image, showcasing its ability both to use multiple tools on its own and to analyze images in detail.
According to OpenAI, the o3 and o4-mini models are better overall than the previous generations, with improved instruction following and more useful, verifiable responses. On benchmarks across the board, the new models outperformed their predecessors even without using the additional tools they have access to. You can get a quick glimpse below, or visit OpenAI's blog post for a closer look.
A recent report from The Information claimed the new models would synthesize information across different fields of expertise and then use that knowledge to suggest new, innovative experiments. According to the report, insider sources who have tested the models said these experiments would span complex topics such as nuclear fission and pathogen detection. OpenAI has yet to comment directly on this.
OpenAI o3 and o4-mini are available today to subscribers, including ChatGPT Plus, Pro, and Team users. The models appear in the model picker as o3, o4-mini, and o4-mini-high, replacing o1, o3-mini, and o3-mini-high (the suffix refers to the models' three reasoning-effort options, low, medium, and high, which trade response speed for quality).
Pro users will be able to access o3-pro in a few weeks, but until then, they will still have access to o1-pro. The models are also available for developers via the API.
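For API users, those low/medium/high options map onto a reasoning-effort setting. A minimal sketch with OpenAI's Python SDK; treat the model name as an assumption about what your account can access:

```python
# Minimal sketch: calling one of the new models from the API with an explicit
# reasoning-effort setting. reasoning_effort is how the SDK exposes the
# low/medium/high options mentioned above.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o4-mini",  # assumption: the model is enabled for your API account
    reasoning_effort="high",  # "low", "medium", or "high"
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)
print(response.choices[0].message.content)
```

Higher effort spends more reasoning tokens before answering, so it generally means slower, more expensive, and better responses.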
Also: ChatGPT just made it easy to find and edit all the AI images you've ever generated
To ease concerns about model safety, OpenAI shared that both new releases were stress-tested under its safety program and evaluated under its updated Preparedness Framework. For the details of those evaluations, you can read the complete system card.
OpenAI has also launched Codex CLI, an open-source coding agent that runs locally in users' terminals. It is meant to give users a simple, clear way to connect AI models, including o3 and o4-mini (with support for GPT-4.1 coming soon), to their own code and to tasks running on their computers. You can access it on GitHub now.
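Because Codex CLI runs as an ordinary terminal program, you can also drive it from a script. A minimal sketch in Python; the --model flag and one-shot prompt usage shown here are assumptions about the CLI's interface rather than details confirmed in this article:

```python
# Minimal sketch: invoking Codex CLI from a script. The "codex" binary comes
# from the open-source GitHub project; the --model flag and single-prompt
# invocation are assumptions about the CLI's interface.
import subprocess

result = subprocess.run(
    ["codex", "--model", "o4-mini", "explain what utils.py does"],
    capture_output=True,
    text=True,
)
print(result.stdout)
```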
Also: OpenAI to launch AI models that can think up their own experiments, says report
OpenAI also announced the launch of a $1 million initiative to support early projects, awarding grants in $25,000 increments of API credits. Proposals can be submitted via this form on the OpenAI site.