
DeepSeek

DeepSeek is a Chinese AI company developing advanced language models. Requesty routes to 4 DeepSeek models starting at $0.14 per 1M input tokens, with context windows up to 1M tokens. One API key, an OpenAI-compatible SDK, no markup.
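As a minimal sketch of what "OpenAI-compatible" means in practice, the snippet below builds a standard chat-completions request against a router endpoint using only the Python standard library. The base URL `https://router.requesty.ai/v1` is an assumption for illustration; confirm the real endpoint and your API key in the Requesty dashboard before sending traffic.

```python
import json
import urllib.request

# Hypothetical router endpoint -- check your Requesty dashboard
# for the actual base URL.
BASE_URL = "https://router.requesty.ai/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "deepseek-chat", "Hello!")
print(req.full_url)  # same endpoint and payload shape for any routed model
```

Because the request shape is the plain OpenAI schema, any OpenAI SDK pointed at the router's base URL works the same way; switching models is just a different `model` string.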

All DeepSeek models

| Model | Context | Max Output | Input $/1M | Output $/1M | Capabilities | SWE-Bench |
|-------|---------|------------|------------|-------------|--------------|-----------|
| deepseek-v4-flash | 1M | 384K | $0.14 | $0.28 | 🔧 | |
| deepseek-v4-pro | 1M | 384K | $1.74 | $3.48 | 🔧 | |
| deepseek-chat | 1M | 384K | $0.14 | $0.28 | 🔧 | |
| deepseek-reasoner | 1M | 384K | $0.14 | $0.28 | 🔧 | |

About DeepSeek on Requesty

How many DeepSeek models are available through Requesty?
Requesty routes to 4 DeepSeek models including regional variants, with pricing synced in real time to the upstream provider.
What is the cheapest DeepSeek model?
The cheapest DeepSeek model starts at $0.14 per million input tokens. See the pricing columns in the table above for full per-model rates.
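For a back-of-the-envelope feel for those rates, here is a tiny cost calculator at the entry-level prices from the table above (treat the rates as a snapshot, since pricing is synced in real time to the upstream provider):

```python
# Entry-level DeepSeek rates from the pricing table:
# $0.14 per 1M input tokens, $0.28 per 1M output tokens.
INPUT_RATE = 0.14 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.28 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the entry-level rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 100K-token prompt with a 4K-token completion:
print(f"${request_cost(100_000, 4_000):.4f}")  # → $0.0151
```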
Does Requesty add markup on DeepSeek pricing?
No. Requesty passes through exactly what DeepSeek charges. You pay the same per-token rates as going direct — plus you get smart routing, caching, analytics, and one unified API for 400+ models.
Is my data used to train DeepSeek models?
DeepSeek's training policy varies by product and tier. See their privacy policy for specifics, and contact Requesty for enterprise-grade data controls.
Where are DeepSeek models hosted?
DeepSeek models are hosted in 🇨🇳 China. Some models are available in additional regions through AWS Bedrock, Azure, or Google Vertex AI — filter by region on the DeepSeek rows in the models explorer.