o3-mini

o3-mini is OpenAI's most recent small reasoning model, delivering high intelligence at the same cost and latency targets as o1-mini. It also supports key developer features such as Structured Outputs, function calling, and the Batch API. Like other models in the o-series, it is designed to excel at science, math, and coding tasks.

🧠 Reasoning 🔧 Tool calling ⚡ Caching

Specifications

Context window: 200K tokens
Max output: 100K tokens
API type: chat
Added: Jan 31, 2025
Model ID: openai/o3-mini
Data retention: Yes (30 days)
Used for training: No
Provider location: 🇺🇸 US

Benchmarks

Released 2025-01
SWE-Bench Verified (coding): 61.0%

Resolving real GitHub issues from 12 popular Python repositories.

GPQA Diamond (reasoning): 74.8%

Graduate-level physics, chemistry & biology questions designed to resist Googling.

MMLU Pro (knowledge): 80.2%

A harder variant of Massive Multitask Language Understanding, with ten answer options per question across a broad range of academic subjects.

Scores are sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and do not capture every aspect of model quality; always test on your own workload.

Pricing

Input / 1M: $1.10
Output / 1M: $4.40
Cache write / 1M: $1.10
Cache read / 1M: $0.55

Estimated cost:
100K input + 10K output: $0.15
1M input + 100K output: $1.54
10M input + 1M output: $15.40

Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
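
The estimates above follow directly from the per-token rates. A minimal sketch of the arithmetic in Python (the rates come from the pricing table; estimate_cost is an illustrative helper, not part of any SDK):

# Back-of-the-envelope cost check using the per-token rates above.
INPUT_PER_M = 1.10   # $ per 1M input tokens
OUTPUT_PER_M = 4.40  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

print(f"${estimate_cost(100_000, 10_000):.2f}")        # $0.15
print(f"${estimate_cost(1_000_000, 100_000):.2f}")     # $1.54
print(f"${estimate_cost(10_000_000, 1_000_000):.2f}")  # $15.40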

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to openai/o3-mini.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="openai/o3-mini",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
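
Beyond plain chat, the same endpoint accepts the standard OpenAI function-calling and reasoning parameters. A minimal sketch, assuming Requesty forwards the tools and reasoning_effort fields to OpenAI unchanged; the get_weather tool is a made-up example:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition for illustration; the model decides
# whether to call it based on the conversation.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="openai/o3-mini",
    reasoning_effort="medium",  # o3-mini accepts "low", "medium", or "high"
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model chose to call the tool, its name and JSON arguments
# arrive in tool_calls; you run the function and reply with the result.
print(response.choices[0].message.tool_calls)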


Frequently asked questions

How much does o3-mini cost?
o3-mini is priced at $1.10 per million input tokens and $4.40 per million output tokens when accessed via Requesty. Prompt caching is supported; at $0.55 per million cached input tokens, it cuts effective input cost by 50% on repeated context. Requesty charges exactly what the upstream provider charges; we don't add markup.
What is the context window of o3-mini?
o3-mini has a context window of 200K tokens, with a maximum output of 100K tokens per response. That's roughly 150,000 words of input you can fit in a single prompt.
How does o3-mini perform on benchmarks?
o3-mini scores 61.0% on SWE-Bench Verified, 74.8% on GPQA Diamond, and 80.2% on MMLU Pro. See the benchmark chart above for what each test measures.
What can o3-mini do?
o3-mini supports tool calling, extended reasoning, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url at Requesty.
How do I use o3-mini with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "openai/o3-mini". The Quickstart above shows a complete Python snippet.

Access o3-mini through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.