meta-llama/llama-4-maverick-17b-128e-instruct-fp8
Meta's Llama 4 Maverick: a natively multimodal mixture-of-experts model with 17B active parameters and 128 experts, served in FP8 precision for use when quick response times matter most.
Specifications
Benchmarks
Benchmarks haven't been published yet for this exact variant.
Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Novita AI models overview.
Pricing
Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
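Prompt caching generally depends on sending a stable prefix. As a minimal sketch (assuming caching on Requesty's side is automatic and keyed on a repeated prompt prefix, which this page doesn't spell out), keeping the system prompt byte-identical across requests gives the cache something to hit:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical long, stable system prompt. Keeping this identical across
# requests is what would let a prefix-based cache take effect; only the
# user message below varies per call.
SYSTEM_PROMPT = "You are a support assistant for ExampleCo. ..."

for question in ["How do I reset my password?", "How do I delete my account?"]:
    response = client.chat.completions.create(
        model="novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # identical prefix each call
            {"role": "user", "content": question},         # only the suffix changes
        ],
    )
    print(response.choices[0].message.content)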
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
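For interactive use, the same endpoint should accept the OpenAI SDK's streaming mode. The sketch below continues from the client created in the quickstart above and assumes Requesty forwards stream=True to the upstream provider unchanged (not something this page states explicitly):

# Streaming variant: reuses `client` from the quickstart snippet above
# and prints tokens as they arrive instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="novita/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()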
Other Novita AI models
Frequently asked questions
How much does meta-llama/llama-4-maverick-17b-128e-instruct-fp8 cost?
What is the context window of meta-llama/llama-4-maverick-17b-128e-instruct-fp8?
What can meta-llama/llama-4-maverick-17b-128e-instruct-fp8 do?
How do I use meta-llama/llama-4-maverick-17b-128e-instruct-fp8 with the OpenAI SDK?
Access meta-llama/llama-4-maverick-17b-128e-instruct-fp8 through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.

