claude-opus-4-20250514

Claude Opus 4 is Anthropic's most powerful model yet and the best coding model in the world, leading on SWE-bench (72.5%) and Terminal-bench (43.2%). It delivers sustained performance on long-running tasks that require focused effort and thousands of steps, with the ability to work continuously for several hours—dramatically outperforming all Sonnet models and significantly expanding what AI agents can accomplish.

👁 Vision · 🧠 Reasoning · 🔧 Tool calling · Caching · 🖥 Computer use

Specifications

Context window: 200K tokens
Max output: 32K tokens
API type: chat
Added: May 22, 2025
Model ID: coding/claude-opus-4-20250514
Data retention: Yes
Used for training: No
Provider location: 🌍 Global

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Coding API models overview.

Pricing

Input / 1M: $15.00
Output / 1M: $75.00
Cache write / 1M: $18.75
Cache read / 1M: $1.50

Estimated cost:
100K input + 10K output: $2.25
1M input + 100K output: $22.50
10M input + 1M output: $225.00

Requesty charges exactly what the upstream provider charges — no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
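As a sanity check, the estimated costs above follow directly from the per-million-token rates. A minimal sketch in Python (the rates are the ones listed on this page; estimate_cost is a hypothetical helper, and the 80% cache-hit split in the last call is an illustrative assumption, not a measured figure):

# Back-of-the-envelope cost check using the per-million-token rates above.
INPUT_PER_M = 15.00        # $ per 1M fresh input tokens
OUTPUT_PER_M = 75.00       # $ per 1M output tokens
CACHE_READ_PER_M = 1.50    # $ per 1M cached input tokens read

def estimate_cost(input_tokens, output_tokens, cached_input_tokens=0):
    """Return the estimated request cost in USD."""
    fresh = input_tokens - cached_input_tokens
    return (fresh * INPUT_PER_M
            + cached_input_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

print(estimate_cost(100_000, 10_000))     # 2.25, matches the table
print(estimate_cost(1_000_000, 100_000))  # 22.5
print(estimate_cost(10_000_000, 1_000_000, cached_input_tokens=8_000_000))

In the hypothetical last case, serving 80% of a 10M-token prompt from cache drops the bill from $225.00 to $117.00, a roughly 48% reduction, consistent with the 30-80% range quoted above.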

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to coding/claude-opus-4-20250514.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="coding/claude-opus-4-20250514",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
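Streaming goes through the same OpenAI-compatible interface; a minimal sketch, assuming the standard stream=True flag passes through Requesty's router unchanged:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# stream=True yields chunks as the model generates them (standard OpenAI
# SDK behavior; assumed here to be forwarded by the router as-is).
stream = client.chat.completions.create(
    model="coding/claude-opus-4-20250514",
    messages=[{"role": "user", "content": "Write a haiku about routers."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)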


Frequently asked questions

How much does claude-opus-4-20250514 cost?
claude-opus-4-20250514 is priced at $15.00 per million input tokens and $75.00 per million output tokens when accessed via Requesty. Prompt caching is supported, which can cut effective input cost by up to 90% on repeated context. Requesty charges exactly what the upstream provider charges — we don't add markup.
What is the context window of claude-opus-4-20250514?
claude-opus-4-20250514 has a context window of 200K tokens, with a maximum output of 32K tokens per response. That's roughly 150,000 words of input you can fit in a single prompt.
What can claude-opus-4-20250514 do?
claude-opus-4-20250514 supports vision input, tool calling, extended reasoning, prompt caching, and computer use. You can call it through any OpenAI-compatible client by pointing base_url to Requesty; a tool-calling sketch follows below.
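Tool calling, for instance, uses the standard OpenAI tools schema; a minimal sketch, assuming the schema is forwarded to the model unchanged (get_weather is a made-up tool for illustration, not a real API):

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# get_weather is a hypothetical tool, defined only to show the schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="coding/claude-opus-4-20250514",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives as structured JSON.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)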
How do I use claude-opus-4-20250514 with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "coding/claude-opus-4-20250514". The Quickstart above shows a complete Python example.

Access claude-opus-4-20250514 through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.