gpt-5-mini:priority
GPT-5 mini is a faster, more cost-efficient version of OpenAI's flagship GPT-5 model for coding, reasoning, and agentic tasks across domains; the :priority variant routes requests through a higher-priority processing tier.
Vision · Reasoning · Tool calling · Caching
Specifications
Context window: 400K tokens
Max output: 128K tokens
API type: chat
Added: Aug 7, 2025
Model ID: openai/gpt-5-mini:priority
Data retention: Yes (30 days)
Used for training: No
Provider location: 🇺🇸 US
Privacy policy: OpenAI Privacy Policy
Benchmarks
Benchmarks haven't been published yet for this exact variant.
Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the OpenAI Inc. models overview.
Pricing
Input / 1M: $0.45
Output / 1M: $3.60
Cache write / 1M: $3.60
Cache read / 1M: $0.04
Estimated cost
100K input + 10K output: $0.0810
1M input + 100K output: $0.81
10M input + 1M output: $8.10
Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
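To see where the estimated-cost rows above come from, here is a small sketch that reproduces the arithmetic from the listed per-million rates; the rates are taken from this page, and the estimate helper is purely illustrative.

# Reproduce the estimated-cost rows from the listed per-million rates.
INPUT_PER_M = 0.45       # $ per 1M input tokens
OUTPUT_PER_M = 3.60      # $ per 1M output tokens
CACHE_READ_PER_M = 0.04  # $ per 1M cached input tokens

def estimate(input_tokens, output_tokens, cached_tokens=0):
    """Estimated request cost in dollars (illustrative helper)."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

print(estimate(100_000, 10_000))        # 0.081
print(estimate(1_000_000, 100_000))     # 0.81
print(estimate(10_000_000, 1_000_000))  # 8.1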
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to openai/gpt-5-mini:priority.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="openai/gpt-5-mini:priority",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
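For long responses you may prefer tokens as they are generated. Here is a minimal streaming variant of the same call using the SDK's standard stream=True flag; it reuses the client configured above.

# Stream the completion token-by-token instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="openai/gpt-5-mini:priority",
    messages=[{"role": "user", "content": "Explain quantum computing in one paragraph."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)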
Frequently asked questions
How much does gpt-5-mini:priority cost?
gpt-5-mini:priority is priced at $0.45 per million input tokens and $3.60 per million output tokens when accessed via Requesty. Prompt caching is supported, which can cut effective input cost by up to 90% on repeated context. Requesty charges exactly what the upstream provider charges; we don't add markup.
What is the context window of gpt-5-mini:priority?
gpt-5-mini:priority has a context window of 400K tokens, with a maximum output of 128K tokens per response. That's roughly 300,000 words of input you can fit in a single prompt.
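If you want to check whether a prompt fits before sending it, a rough count with tiktoken works. This is a sketch: it assumes the o200k_base encoding approximates the model's actual tokenizer, and the input file name is illustrative.

import tiktoken

# Rough pre-flight token count; o200k_base is an approximation, since the
# exact GPT-5 tokenizer may differ.
enc = tiktoken.get_encoding("o200k_base")
with open("big_document.txt") as f:  # illustrative input file
    prompt = f.read()
n_tokens = len(enc.encode(prompt))
print(f"{n_tokens:,} tokens; fits in the 400K window: {n_tokens <= 400_000}")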
What can gpt-5-mini:priority do?
gpt-5-mini:priority supports vision input, tool calling, extended reasoning, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url at Requesty.
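As one example of the tool-calling support, here is a sketch using the standard Chat Completions tools parameter; the get_weather function and its schema are hypothetical, and the client is configured as in the Quickstart.

# Hypothetical weather tool; the name and schema are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="openai/gpt-5-mini:priority",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call the tool, the call and its arguments arrive here.
print(response.choices[0].message.tool_calls)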
How do I use gpt-5-mini:priority with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "openai/gpt-5-mini:priority". The Quickstart above shows a complete Python snippet.
Access gpt-5-mini:priority through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.

