
glm-5

GLM-5 is Z.ai's SOTA model targeting complex systems engineering and long-horizon agentic tasks. It uses a mixture-of-experts architecture, activating only 40 billion of its 744 billion parameters per token. The model uses DeepSeek Sparse Attention to select only the most relevant tokens for attention, reducing the cost of long-context processing. GLM-5 builds on GLM-4.7's strengths in coding and agentic use cases, and it is also well suited to document generation for enterprise workloads.
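
To make the sparse-attention idea concrete, here is a minimal, illustrative sketch of top-k token selection in Python: every position is scored cheaply, and full attention is computed over only the k best-scoring tokens. The function, shapes, and scoring rule are assumptions for illustration, not Z.ai's actual implementation.

# Minimal sketch of top-k sparse attention: score every position cheaply,
# then attend over only the k most relevant tokens instead of all n.
# Illustrative only; not the DeepSeek Sparse Attention implementation.
import numpy as np

def sparse_attention(q, K, V, k=64):
    """Single-query attention restricted to the top-k keys by score."""
    scores = K @ q / np.sqrt(q.shape[-1])   # cheap scoring pass over all n positions
    top = np.argsort(scores)[-k:]           # keep only the k highest-scoring tokens
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                # softmax over the selected subset
    return weights @ V[top]                 # attend over k tokens instead of all n

# Toy usage: a 10,000-token context, but attention only touches 64 positions.
rng = np.random.default_rng(0)
n, d = 10_000, 128
out = sparse_attention(rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d)))
print(out.shape)  # (128,)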

🔧 Tool calling ⚡ Caching

Specifications

Context window: 203K tokens
Max output: 25K tokens
API type: chat
Added: Apr 2, 2026
Model ID: fireworks/glm-5
Data retention: No
Used for training: No
Provider location: 🇺🇸 US

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Fireworks AI models overview.

Pricing

Input / 1M: $1.00
Output / 1M: $3.20
Cache write: n/a
Cache read / 1M: $0.20

Estimated cost
100K input + 10K output: $0.13
1M input + 100K output: $1.32
10M input + 1M output: $13.20
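
The estimates above follow directly from the per-token rates. As a sketch of the arithmetic (the estimate_cost helper is illustrative, not part of any Requesty SDK), with cached input billed at the cache-read rate:

# Illustrative cost arithmetic for the rates above; not an official Requesty tool.
def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    INPUT_PER_M, OUTPUT_PER_M, CACHE_READ_PER_M = 1.00, 3.20, 0.20
    fresh = input_tokens - cached_tokens     # uncached input billed at the full rate
    return (fresh * INPUT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

print(estimate_cost(100_000, 10_000))        # 0.132 -> the $0.13 row above
print(estimate_cost(1_000_000, 100_000))     # 1.32
print(estimate_cost(10_000_000, 1_000_000))  # 13.2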

Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to fireworks/glm-5.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="fireworks/glm-5",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
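
Since the model is listed with tool-calling support, it can also be exercised through the standard tools parameter of the same OpenAI-compatible endpoint. A minimal sketch follows; the get_weather schema is a made-up example, not anything built into Requesty or GLM-5.

# Tool calling via the standard OpenAI-compatible `tools` parameter.
# The get_weather function schema below is a hypothetical example.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="fireworks/glm-5",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the arguments arrive as a JSON string.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)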


Frequently asked questions

How much does glm-5 cost?
glm-5 is priced at $1.00 per million input tokens and $3.20 per million output tokens when accessed via Requesty. Prompt caching is supported; cache reads cost $0.20 per million tokens, cutting effective input cost by 80% on repeated context. Requesty charges exactly what the upstream provider charges, with no added markup.
What is the context window of glm-5?
glm-5 has a context window of 203K tokens, with a maximum output of 25K tokens per response. At roughly 0.75 words per token, that's on the order of 150,000 words of input you can fit in a single prompt.
What can glm-5 do?
glm-5 supports tool calling and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url to Requesty.
How do I use glm-5 with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "fireworks/glm-5". The Quickstart above shows a working Python snippet.

Access glm-5 through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.