Requesty

meta-llama/Llama-3.3-70B-Instruct

A fast, instruction-tuned variant of Llama 3.3 70B, for use when quick response times are needed most.

🔧 Tool calling

Specifications

Context window: 131K tokens
Max output: —
API type: chat
Added: Feb 6, 2025
Model ID: deepinfra/meta-llama/Llama-3.3-70B-Instruct
Data retention: No
Used for training: No
Provider location: 🇺🇸 US

Benchmarks

Released 2024-12

SWE-Bench Verified (coding): 23.3%
Resolving real GitHub issues from 12 popular Python repositories.

GPQA Diamond (reasoning): 50.5%
Graduate-level physics, chemistry & biology questions designed to resist Googling.

MMLU Pro (knowledge): 68.9%
A harder variant of Massive Multitask Language Understanding, with ten answer choices per question across 14 academic disciplines.

Scores are sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and do not capture every aspect of model quality β€” always test on your own workload.

Pricing

Input / 1M: $0.23
Output / 1M: $0.40
Cache write: —
Cache read: —

Estimated cost
100K input + 10K output: $0.0270
1M input + 100K output: $0.27
10M input + 1M output: $2.70

Requesty charges exactly what the upstream provider charges β€” no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
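The estimated costs above follow directly from the per-token rates. A minimal sketch of the arithmetic, using the list prices only (prompt-caching and routing discounts are not modeled):

# List prices for this model, USD per 1M tokens (from the pricing table above).
INPUT_PER_M = 0.23
OUTPUT_PER_M = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Request cost at list price, ignoring cache and routing discounts."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Reproduces the table rows: $0.0270, $0.2700, $2.7000
for inp, out in [(100_000, 10_000), (1_000_000, 100_000), (10_000_000, 1_000_000)]:
    print(f"{inp:,} input + {out:,} output -> ${estimate_cost(inp, out):.4f}")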

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to deepinfra/meta-llama/Llama-3.3-70B-Instruct.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="deepinfra/meta-llama/Llama-3.3-70B-Instruct",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
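Tool calling works through the same endpoint. A minimal sketch using the standard OpenAI tools format; the get_weather function here is a hypothetical example, not a real API:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="deepinfra/meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call arrives in
# tool_calls instead of text content.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)

When the model returns a tool call, execute the function yourself and send the result back in a follow-up message with role "tool" so the model can produce its final answer.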


Frequently asked questions

How much does meta-llama/Llama-3.3-70B-Instruct cost?
meta-llama/Llama-3.3-70B-Instruct is priced at $0.23 per million input tokens and $0.40 per million output tokens when accessed via Requesty. Requesty charges exactly what the upstream provider charges β€” we don't add markup.
What is the context window of meta-llama/Llama-3.3-70B-Instruct?
meta-llama/Llama-3.3-70B-Instruct has a context window of 131K tokens. That's roughly 98,000 words of input you can fit in a single prompt (at about 0.75 words per token).
How does meta-llama/Llama-3.3-70B-Instruct perform on benchmarks?
meta-llama/Llama-3.3-70B-Instruct scores 88.4% on HumanEval, 77.0% on MATH, and 68.9% on MMLU Pro. See the benchmark chart above for its MMLU Pro, GPQA Diamond, and SWE-Bench Verified results.
What can meta-llama/Llama-3.3-70B-Instruct do?
meta-llama/Llama-3.3-70B-Instruct supports tool calling. You can call it through any OpenAI-compatible client by pointing base_url to Requesty.
How do I use meta-llama/Llama-3.3-70B-Instruct with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "deepinfra/meta-llama/Llama-3.3-70B-Instruct". The Quickstart above shows a complete Python snippet.

Access meta-llama/Llama-3.3-70B-Instruct through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.