GPT-5.4 mini and nano – faster, better, as always

OpenAI is launching two new models. GPT-5.4 mini and nano are said to be particularly efficient and fast.

The OpenAI logo on a glass facade

(Image: Novikov Aleksey/Shutterstock.com)


New week, new model: this time it's OpenAI's turn again. The company is launching two new models that, as is almost always the case, are said to be more efficient and more powerful than their predecessors. GPT-5.4 nano and mini are also supposed to be particularly fast.

According to OpenAI, the smaller models come close to the capabilities of the full-size GPT-5.4. Compared to GPT-5 mini, the new mini version is said to be better at coding, reasoning, multimodal understanding, and tool use. “It also achieves performance in several evaluations, including SWE-Bench Pro and OSWorld-Verified, that is nearly on par with the larger GPT-5.4 model,” writes OpenAI. SWE-Bench Pro measures coding capabilities; OSWorld tests agentic abilities. OpenAI lists further benchmark results in its blog post.

However, the focus is on the latency of the new models – that is, how quickly they respond to and complete tasks. GPT-5.4 nano is the smallest and fastest model, making it the most cost-effective as well. OpenAI recommends it for tasks such as classification, data extraction, and sub-agents.

GPT-5.4 mini is said to be “particularly effective in coding workflows that benefit from rapid iterations.” Here too, the rule applies: fast and cost-effective. It is also said to consistently perform better in benchmarks than GPT-5 mini.

OpenAI suggests that in the future, the large GPT-5.4 version could be used for planning or final evaluation. GPT-5.4 mini could take on the role of sub-agents that handle individual sub-tasks. Such sub-agents run via OpenAI's coding platform, Codex.
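The division of labor described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the pattern, not OpenAI's implementation: `call_model` is a stand-in for a real API call, and the model names merely mirror those mentioned in the article.

```python
# Hedged sketch of the planner/sub-agent split described in the article:
# a large model plans and does the final evaluation, a smaller, faster
# model executes the individual sub-tasks.
from typing import List


def call_model(model: str, prompt: str) -> str:
    # Placeholder (hypothetical): a real system would call the provider's
    # API here. For illustration we just tag the prompt with the model name.
    return f"[{model}] {prompt}"


def run_with_subagents(task: str,
                       planner: str = "gpt-5.4",
                       worker: str = "gpt-5.4-mini") -> str:
    # 1. Break the task into sub-tasks (naive split on ";" for the sketch;
    #    in practice the planner model would produce this plan).
    subtasks: List[str] = [s.strip() for s in task.split(";") if s.strip()]
    # 2. The small, fast model handles each sub-task independently.
    results = [call_model(worker, s) for s in subtasks]
    # 3. The large model merges the results and does the final evaluation.
    return call_model(planner, " | ".join(results))


print(run_with_subagents("write tests; fix lint errors"))
```

The point of the split is economic as much as architectural: the cheap, low-latency model does the bulk of the token-heavy work, while the expensive model is only invoked twice per task.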


GPT-5.4 mini is available immediately via the API, Codex, and ChatGPT. The model has a context window of 400,000 tokens; in Codex, usage costs $0.75 per million input tokens and $4.50 per million output tokens. In ChatGPT, GPT-5.4 mini is free to use via the “Thinking” option, though with usage limits.

GPT-5.4 nano is only available via the API and costs $0.20 per million input tokens and $1.25 per million output tokens.
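The per-token prices quoted above translate directly into a simple cost estimate. A short sketch, using only the figures from the article (the model name strings are for illustration):

```python
# Estimating API cost from the listed per-million-token prices.
# Prices in USD per million tokens, as quoted in the article.
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000


# Example: 100k input tokens and 10k output tokens with the mini model:
# 100_000 * 0.75 / 1e6 + 10_000 * 4.50 / 1e6 = 0.075 + 0.045 = $0.12
print(f"${estimate_cost('gpt-5.4-mini', 100_000, 10_000):.3f}")
```

At these rates, output tokens dominate the bill: for the mini model they cost six times as much per token as input tokens.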

(emw)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.