Requesty

deepseek-v4-pro

DeepSeek V4 Pro is a flagship open-source Mixture-of-Experts model designed for frontier reasoning, advanced coding, and long-context intelligence at scale (up to 1M tokens). It introduces a hybrid attention architecture that dramatically improves long-context efficiency while reducing KV-cache and compute overhead, along with stability and training enhancements for deep multi-step reasoning. It is a top-tier open-source system for complex agentic workflows, high-precision reasoning, and demanding production workloads.

🧠 Reasoning 🔧 Tool calling ⚡ Caching

Specifications

Context window: 1M tokens
Max output: 131K tokens
API type: chat
Added: Apr 29, 2026
Model ID: fireworks/deepseek-v4-pro
Data retention: No
Used for training: No
Provider location: 🇺🇸 US

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Fireworks AI models overview.

Pricing

Input / 1M: $1.74
Output / 1M: $3.48
Cache write: —
Cache read / 1M: $0.15

Estimated cost
100K input + 10K output: $0.21
1M input + 100K output: $2.09
10M input + 1M output: $20.88

Requesty charges exactly what the upstream provider charges β€” no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
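
The estimated costs above follow directly from the per-token rates. A minimal sketch in Python of that arithmetic, including the cache-read discount; the estimate_cost helper is illustrative, not a Requesty API:

INPUT_PER_M = 1.74        # USD per 1M fresh input tokens
CACHE_READ_PER_M = 0.15   # USD per 1M cached input tokens
OUTPUT_PER_M = 3.48       # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated request cost in USD; cached tokens bill at the cache-read rate."""
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

print(f"${estimate_cost(100_000, 10_000):.2f}")        # $0.21
print(f"${estimate_cost(1_000_000, 100_000):.2f}")     # $2.09
print(f"${estimate_cost(10_000_000, 1_000_000):.2f}")  # $20.88
# Same 1M-token prompt, but with 80% of it served from cache:
print(f"${estimate_cost(1_000_000, 100_000, cached_tokens=800_000):.2f}")  # $0.82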

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to fireworks/deepseek-v4-pro.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="fireworks/deepseek-v4-pro",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
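
Streaming works through the same endpoint. A minimal sketch using the standard OpenAI SDK streaming interface, assuming the router forwards stream=True to the provider as any OpenAI-compatible endpoint would:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Request a stream of chunks instead of a single response object.
stream = client.chat.completions.create(
    model="fireworks/deepseek-v4-pro",
    messages=[{"role": "user", "content": "Summarize the CAP theorem."}],
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()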


Frequently asked questions

How much does deepseek-v4-pro cost?
deepseek-v4-pro is priced at $1.74 per million input tokens and $3.48 per million output tokens when accessed via Requesty. Prompt caching is supported: cache reads cost $0.15 per million tokens, about 91% less than fresh input, so repeated context gets much cheaper. Requesty charges exactly what the upstream provider charges, with no added markup.
What is the context window of deepseek-v4-pro?
deepseek-v4-pro has a context window of 1M tokens, with a maximum output of 131K tokens per response. At roughly 0.75 words per token, that's about 750,000 words of input you can fit in a single prompt.
What can deepseek-v4-pro do?
deepseek-v4-pro supports tool calling, extended reasoning, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url at Requesty; a tool-calling sketch follows below.
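
As an illustration of tool calling, here is a minimal sketch using the OpenAI SDK tools format; the get_weather schema is hypothetical and stands in for whatever tools your application defines:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition; substitute your own JSON-schema tools.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="fireworks/deepseek-v4-pro",
    messages=[{"role": "user", "content": "What's the weather in Lisbon?"}],
    tools=tools,
)

# When the model chooses to call a tool, the call arrives as structured JSON.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)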
How do I use deepseek-v4-pro with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "fireworks/deepseek-v4-pro". The Quickstart above shows a complete Python snippet.

Access deepseek-v4-pro through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.