
deepseek-ai/DeepSeek-R1-Distill-Llama-70B

DeepSeek-R1-Distill-Llama-70B is a 70 billion parameter dense language model distilled from DeepSeek-R1, leveraging reinforcement learning-enhanced reasoning data generated by DeepSeek's larger models. The distillation process transfers advanced reasoning, math, and code capabilities into a more efficient model architecture based on Llama-3.3-70B-Instruct. This model demonstrates strong performance across mathematical benchmarks (94.5% pass@1 on MATH-500), coding tasks (Codeforces rating 1633), and general reasoning (65.2% pass@1 on GPQA Diamond), achieving competitive accuracy relative to the full DeepSeek-R1 while keeping inference costs lower.

Specifications

Context window: 64K tokens
Max output: 8K tokens
API type: chat
Added: Jan 30, 2025
Model ID: deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
Data retention: No
Used for training: No
Provider location: 🇺🇸 US
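
The 8K output cap matters for a reasoning model, since chains of thought can run long; the limit can be requested explicitly through the standard max_tokens parameter of the chat completions API. A minimal sketch (the prompt and key are placeholders; full setup is covered in the Quickstart below):

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    max_tokens=8192,  # the 8K per-response output cap listed above
)
print(response.choices[0].message.content)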

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model, so check the base model page or the DeepInfra Inc. models overview.

Pricing

Input / 1M: $0.23
Output / 1M: $0.69
Cache write: N/A
Cache read: N/A

Estimated cost:
100K input + 10K output: $0.0299
1M input + 100K output: $0.30
10M input + 1M output: $2.99

Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
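
The estimates above are straight per-token arithmetic on the listed rates; here is a quick sketch for checking your own workloads (rates are hard-coded from this page, not fetched from anywhere):

# Rates from this page, in USD per 1M tokens.
INPUT_PER_M = 0.23
OUTPUT_PER_M = 0.69

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a request or workload, before caching discounts."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Reproduces the estimates in the table above.
print(f"${estimate_cost(100_000, 10_000):.4f}")       # $0.0299
print(f"${estimate_cost(1_000_000, 100_000):.2f}")    # $0.30
print(f"${estimate_cost(10_000_000, 1_000_000):.2f}") # $2.99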

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
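
One model-specific note: R1 distills are reasoning models, and depending on the serving stack the chain of thought may arrive wrapped in <think>...</think> tags ahead of the final answer. A small helper to separate the two, assuming that output format (verify against your actual responses, since some deployments strip the tags):

import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split an R1-style completion into (reasoning, answer).

    Assumes the chain of thought is wrapped in <think>...</think>;
    if the tags are absent, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match is None:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

# e.g. apply it to the completion from the snippet above:
# reasoning, answer = split_reasoning(response.choices[0].message.content)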


Frequently asked questions

How much does deepseek-ai/DeepSeek-R1-Distill-Llama-70B cost?
deepseek-ai/DeepSeek-R1-Distill-Llama-70B is priced at $0.23 per million input tokens and $0.69 per million output tokens when accessed via Requesty. Requesty charges exactly what the upstream provider charges; we don't add markup.
What is the context window of deepseek-ai/DeepSeek-R1-Distill-Llama-70B?
deepseek-ai/DeepSeek-R1-Distill-Llama-70B has a context window of 64K tokens, with a maximum output of 8K tokens per response. That's roughly 48,000 words of input you can fit in a single prompt, at about 0.75 English words per token.
What can deepseek-ai/DeepSeek-R1-Distill-Llama-70B do?
deepseek-ai/DeepSeek-R1-Distill-Llama-70B is a text-generation model you can call through any OpenAI-compatible client by pointing base_url to Requesty.
How do I use deepseek-ai/DeepSeek-R1-Distill-Llama-70B with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B". The Quickstart above shows a complete Python snippet.

Access deepseek-ai/DeepSeek-R1-Distill-Llama-70B through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.