Requesty

deepseek-r1-distill-qwen-14b

DeepSeek R1 Distill Qwen 14B is a distilled large language model based on Qwen 2.5 14B, fine-tuned on outputs from DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. Benchmark results include:

AIME 2024 pass@1: 69.7
MATH-500 pass@1: 93.9
CodeForces rating: 1481

Fine-tuning on DeepSeek R1's outputs gives the model performance comparable to larger frontier models.

🔧 Tool calling

Specifications

Context window: 128K tokens
Max output: —
API type: chat
Added: Jan 30, 2025
Model ID: novita/deepseek/deepseek-r1-distill-qwen-14b
Data retention: Yes
Used for training: Unknown
Provider location: 🇺🇸 US

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Novita AI models overview.

Pricing

Input / 1M: $0.15
Output / 1M: $0.15
Cache write: —
Cache read: —

Estimated cost
100K input + 10K output: $0.0165
1M input + 100K output: $0.16
10M input + 1M output: $1.65
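
The estimates above follow directly from the per-token prices. A quick sanity check in Python; estimate_cost is an illustrative helper, not part of any SDK:

# Cost at $0.15 per 1M tokens for both input and output
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    price_per_million = 0.15
    return (input_tokens + output_tokens) / 1_000_000 * price_per_million

print(estimate_cost(100_000, 10_000))        # 0.0165
print(estimate_cost(1_000_000, 100_000))     # 0.165, shown rounded as $0.16
print(estimate_cost(10_000_000, 1_000_000))  # 1.65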

Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to novita/deepseek/deepseek-r1-distill-qwen-14b.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="novita/deepseek/deepseek-r1-distill-qwen-14b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
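
The model supports tool calling through the standard OpenAI tools parameter. A minimal sketch; get_weather is a hypothetical tool added here for illustration, not part of the Requesty API:

import json

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition; any JSON-schema function spec works here
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="novita/deepseek/deepseek-r1-distill-qwen-14b",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call a tool
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))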

Other Novita AI models

Frequently asked questions

How much does deepseek-r1-distill-qwen-14b cost?
deepseek-r1-distill-qwen-14b is priced at $0.15 per million input tokens and $0.15 per million output tokens when accessed via Requesty. Requesty charges exactly what the upstream provider charges; we don't add markup.
What is the context window of deepseek-r1-distill-qwen-14b?
deepseek-r1-distill-qwen-14b has a context window of 128K tokens. That's roughly 96,000 words of input you can fit in a single prompt.
What can deepseek-r1-distill-qwen-14b do?
deepseek-r1-distill-qwen-14b supports tool calling. You can call it through any OpenAI-compatible client by pointing base_url to Requesty.
How do I use deepseek-r1-distill-qwen-14b with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "novita/deepseek/deepseek-r1-distill-qwen-14b". The Quickstart above shows a Python snippet.

Access deepseek-r1-distill-qwen-14b through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.