OpenAI Releases Advanced o3 and o4-mini Reasoning AI Models
April 2025

The launch introduces o3, which the company describes as its most advanced reasoning model to date.

OpenAI has released two new artificial intelligence models—o3 and o4-mini—marking a significant advancement in reasoning, visual understanding, and tool use. The models were made available on April 16, 2025, with o3 positioned as OpenAI’s most capable model to date and o4-mini serving as a smaller, faster, and more cost-effective alternative.

These models are part of the o-series, which prioritizes internal "thinking" or deliberation steps before responding. This design improves performance on complex tasks that require logical reasoning, multi-step problem-solving, and careful interpretation of the prompt. The o3 model, in particular, builds on the earlier o3-mini preview released in January 2025. While OpenAI initially suggested the o3 technology would debut within GPT-5, it changed course and released o3 and o4-mini directly, likely to respond to market competition and to accelerate feedback and adoption.

A major highlight of the models is their advanced multimodal capability. Both o3 and o4-mini can process and reason with images: interpreting charts, analyzing handwritten notes, reasoning over blurry or distorted photos, and even manipulating images (rotating, zooming, cropping) during problem-solving. This visual reasoning is woven into the same chain of thought as text-based reasoning, enabling a more holistic approach to tasks.
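For developers, sending an image to these models uses the same message format as text. The sketch below assumes the official openai Python SDK and the standard Chat Completions image-input format; the chart URL is a placeholder, not a real asset:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a chart image alongside a text question; the model reasons over both.
# The URL below is a placeholder for any publicly reachable image.
response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What trend does this chart show, and what might explain it?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/quarterly-revenue.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```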

Additionally, these are the first of OpenAI's reasoning models that can agentically use tools: they decide on their own when to invoke built-in capabilities such as web search, Python code execution, and image generation. For example, they can fetch online data, analyze it using code, and present the results in visual form, all without step-by-step user instructions. This development signals a move toward AI systems that act with more independence in complex workflows.
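In ChatGPT those tools are wired in on OpenAI's side; over the API, the closest analogue is standard function calling, where the model decides whether and when to call developer-supplied tools. The sketch below is a minimal example under that assumption: the web_search function is hypothetical, and the application would have to implement it and return its output to the model.

```python
import json
from openai import OpenAI

client = OpenAI()

# One developer-supplied tool; the model alone decides whether to call it.
# web_search is a hypothetical function: the application must implement it
# and return its output to the model in a follow-up "tool" message.
tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web and return short result snippets.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query."},
                },
                "required": ["query"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "Summarize this week's AI news."}],
    tools=tools,
)

# If the model chose to search, its requested call(s) appear here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full agent loop, the application would execute each requested call, append the result to the conversation, and re-invoke the model until it responds in plain text.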

In terms of performance, o3 sets new records on key AI benchmarks, such as Codeforces (competitive programming), SWE-bench Verified (software engineering), and MMMU (multimodal reasoning), and it shows strong results on scientific and logical reasoning tests. Meanwhile, o4-mini performs exceptionally well for its size, achieving near-perfect scores on the AIME 2025 math benchmark when given access to a Python interpreter, and it outperforms its predecessor o3-mini on both STEM and non-STEM tasks, including data science.

OpenAI's release strategy targets a wide range of users. The models are available to paying users on ChatGPT Plus, Pro, and Team plans, with access for Enterprise and Education customers following about a week later. Free users can try o4-mini by selecting the "Think" option before submitting a prompt. Developers can access both models through the Chat Completions and Responses APIs.
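A minimal API call looks like the sketch below, again assuming the official openai Python SDK and an OPENAI_API_KEY in the environment. The reasoning_effort parameter, which OpenAI exposes on its o-series models to trade response speed against deliberation depth, is included on the assumption that o4-mini accepts it:

```python
from openai import OpenAI

client = OpenAI()

# reasoning_effort trades response speed against deliberation depth on
# o-series models; that o4-mini accepts "high" is an assumption here.
response = client.chat.completions.create(
    model="o4-mini",
    reasoning_effort="high",
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 9:40 and arrives at 13:05. How long is the trip?",
        }
    ],
)

print(response.choices[0].message.content)
```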

To support developers further, OpenAI launched Codex CLI, a new open-source coding agent that runs in the terminal and connects models like o3 and o4-mini to local code repositories, aiming to boost developer adoption and streamline AI-assisted software development.

Overall, o3 and o4-mini represent a major step in OpenAI’s efforts to build more intelligent, versatile, and independent AI systems. They not only improve reasoning and visual capabilities but also introduce tool-enabled autonomy—paving the way for AI that can manage increasingly complex tasks with less human guidance.