MiniMax-M2.7

MiniMax-M2.7 is a next-generation large language model designed for autonomous, real-world productivity and continuous improvement. Built to actively participate in its own evolution, M2.7 integrates advanced agentic capabilities through multi-agent collaboration, enabling it to plan, execute, and refine complex tasks across dynamic environments. Trained for production-grade performance, M2.7 handles workflows such as live debugging, root cause analysis, financial modeling, and full document generation across Word, Excel, and PowerPoint. It delivers strong results on benchmarks including 56.2% on SWE-Pro and 57.0% on Terminal Bench 2, while achieving a 1495 ELO on GDPval-AA, setting a new standard for multi-agent systems operating in real-world digital workflows.

πŸ‘Vision🧠ReasoningπŸ”§Tool calling⚑Caching

Specifications

Context window: 200K tokens
Max output: 128K tokens
API type: chat
Added: Mar 19, 2026
Model ID: minimaxi/MiniMax-M2.7
Data retention: Yes
Used for training: Unknown
Provider location: πŸ‡ΈπŸ‡¬ Singapore

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model β€” check the base model page or the MiniMax models overview.

Pricing

Input / 1M: $0.30
Output / 1M: $1.20
Cache write / 1M: $1.20
Cache read / 1M: $0.06

Estimated cost
100K input + 10K output: $0.0420
1M input + 100K output: $0.42
10M input + 1M output: $4.20

Requesty charges exactly what the upstream provider charges β€” no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
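As a sanity check on the estimates above, the cost is a straight per-token multiplication against the listed rates. The sketch below recomputes the sample figures; the estimate_cost helper and rate constants are illustrative only, not part of any Requesty or MiniMax SDK.

# Recompute the sample costs above from the listed per-million-token rates.
# Hypothetical helper for illustration; not a Requesty or MiniMax API.
INPUT_PER_M = 0.30       # USD per 1M input tokens
OUTPUT_PER_M = 1.20      # USD per 1M output tokens
CACHE_READ_PER_M = 0.06  # USD per 1M cached input tokens

def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    fresh_input = input_tokens - cached_tokens
    return (
        fresh_input * INPUT_PER_M
        + cached_tokens * CACHE_READ_PER_M
        + output_tokens * OUTPUT_PER_M
    ) / 1_000_000

print(f"${estimate_cost(100_000, 10_000):.4f}")        # $0.0420
print(f"${estimate_cost(1_000_000, 100_000):.2f}")     # $0.42
print(f"${estimate_cost(10_000_000, 1_000_000):.2f}")  # $4.20

Tokens served from cache bill at the cache-read rate instead of the input rate, which is where the 30-80% savings figure comes from on prompts that repeat a large shared prefix.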

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to minimaxi/MiniMax-M2.7.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="minimaxi/MiniMax-M2.7",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
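Because the endpoint is OpenAI-compatible, the SDK's standard streaming interface should also work. The sketch below assumes the stream parameter is passed through by the router; streaming support is not spelled out on this page.

# Streaming sketch: assumes the standard OpenAI-style stream parameter
# is passed through by the Requesty router (not confirmed on this page).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

stream = client.chat.completions.create(
    model="minimaxi/MiniMax-M2.7",
    messages=[{"role": "user", "content": "Explain quantum computing in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Print each token delta as it arrives.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)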

Other MiniMax models

Frequently asked questions

How much does MiniMax-M2.7 cost?
MiniMax-M2.7 is priced at $0.30 per million input tokens and $1.20 per million output tokens when accessed via Requesty. Prompt caching is supported: cache reads bill at $0.06 per million tokens versus $0.30 for fresh input, an 80% discount on repeated context. Requesty charges exactly what the upstream provider charges β€” we don't add markup.
What is the context window of MiniMax-M2.7?
MiniMax-M2.7 has a context window of 200K tokens, with a maximum output of 128K tokens per response. That's roughly 150,000 words of input you can fit in a single prompt.
What can MiniMax-M2.7 do?
MiniMax-M2.7 supports vision input, tool calling, extended reasoning, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url to Requesty; a tool-calling sketch follows the FAQ below.
How do I use MiniMax-M2.7 with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "minimaxi/MiniMax-M2.7". The Quickstart above shows a complete Python snippet.
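
Since tool calling is listed as a supported capability, it can presumably be exercised through the same OpenAI-compatible interface. The sketch below uses the standard tools and tool_calls fields from the OpenAI Chat Completions API; the get_weather function and its schema are made up for illustration and not part of any MiniMax or Requesty API.

# Tool-calling sketch using the standard OpenAI Chat Completions "tools" field.
# The get_weather tool and its schema are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="minimaxi/MiniMax-M2.7",
    messages=[{"role": "user", "content": "What's the weather in Singapore?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model asked to call a tool; inspect the requested arguments.
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)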

Access MiniMax-M2.7 through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.