Requesty

GLM-4.6

GLM-4.6 is Z AI’s latest flagship model, designed to push agentic and coding performance further. It expands the context window from 128K to 200K tokens, improves reasoning and tool-use capabilities, and delivers stronger results in coding benchmarks and real-world development workflows. GLM-4.6 demonstrates refined writing quality, more capable agent behavior, and higher token efficiency (≈15% fewer tokens vs. GLM-4.5). Evaluations show clear gains over GLM-4.5 across reasoning, agents, and coding, reaching near parity with Claude Sonnet 4 in practical tasks while outperforming other open-source baselines. GLM-4.6 is available through the Z.ai API platform, OpenRouter, coding agents (Claude Code, Roo Code, Cline, Kilo Code), and soon as downloadable weights on HuggingFace and ModelScope.

🧠 Reasoning · 🔧 Tool calling

Specifications

Context window: 200K tokens
Max output: 128K tokens
API type: chat
Added: Oct 16, 2025
Model ID: zai/GLM-4.6
Data retention: No
Used for training: No
Provider location: 🇸🇬 Singapore
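One practical consequence of these limits: your prompt has to fit in the 200K-token window, and you can't request more than 128K output tokens. A minimal pre-flight sketch, assuming the common ~4 characters per token heuristic and that input and requested output share the window (typical, but not stated on this page; the helper name is ours):

from openai import OpenAI  # only needed if you wire this into a real request

CONTEXT_WINDOW = 200_000  # tokens (GLM-4.6 context window)
MAX_OUTPUT = 128_000      # tokens (GLM-4.6 max output)

def fits_in_context(prompt: str, max_tokens: int) -> bool:
    """Rough check that a prompt plus requested output fits the window.

    Uses the ~4 chars/token heuristic; for exact counts, use a real
    tokenizer for the model.
    """
    est_prompt_tokens = len(prompt) / 4
    return est_prompt_tokens + min(max_tokens, MAX_OUTPUT) <= CONTEXT_WINDOW

print(fits_in_context("Explain quantum computing in one paragraph.", 1_000))  # True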

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model — check the base model page or the Z AI models overview.

Pricing

Input / 1M: $0.60
Output / 1M: $2.20
Cache write
Cache read

Estimated cost
100K input + 10K output: $0.0820
1M input + 100K output: $0.82
10M input + 1M output: $8.20

Requesty charges exactly what the upstream provider charges — no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
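As a sanity check on the estimates above, here's a minimal sketch that reproduces them from the per-token rates on this page (the helper name is ours; caching and routing discounts are not modeled):

INPUT_PER_M = 0.60   # USD per 1M input tokens
OUTPUT_PER_M = 2.20  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request, before any caching discounts."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

print(estimate_cost(100_000, 10_000))        # ≈ 0.082
print(estimate_cost(1_000_000, 100_000))     # ≈ 0.82
print(estimate_cost(10_000_000, 1_000_000))  # ≈ 8.20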

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to zai/GLM-4.6.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="zai/GLM-4.6",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
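Since tool calling is listed among GLM-4.6's capabilities and the endpoint is OpenAI-compatible, tool calls should follow the standard OpenAI tools format. A minimal sketch (the get_weather tool is a made-up example; exact tool-call behavior depends on the upstream provider):

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="zai/GLM-4.6",
    messages=[{"role": "user", "content": "What's the weather in Singapore?"}],
    tools=tools,
)

# If the model chose to call the tool, the call shows up on the message.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)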


Frequently asked questions

How much does GLM-4.6 cost?
GLM-4.6 is priced at $0.60 per million input tokens and $2.20 per million output tokens when accessed via Requesty. Requesty charges exactly what the upstream provider charges — we don't add markup.
What is the context window of GLM-4.6?
GLM-4.6 has a context window of 200K tokens, with a maximum output of 128K tokens per response. That's roughly 150,000 words of input you can fit in a single prompt.
What can GLM-4.6 do?
GLM-4.6 supports tool calling and extended reasoning. You can call it through any OpenAI-compatible client by pointing base_url to Requesty.
How do I use GLM-4.6 with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "zai/GLM-4.6". The Quickstart above shows a Python snippet.

Access GLM-4.6 through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.