MiniMax-M2
MiniMax-M2 is a compact, high-efficiency large language model optimized for end-to-end coding and agentic workflows. With 10 billion activated parameters (230 billion total), it delivers near-frontier intelligence across general reasoning, tool use, and multi-step task execution while maintaining low latency and deployment efficiency.
Specifications
Benchmarks
Released 2025-10
Strong real-world coding via agentic training.
- SWE-bench Verified: resolving real GitHub issues from 12 popular Python repositories.
- GPQA Diamond: graduate-level physics, chemistry, and biology questions designed to resist Googling.
- MMLU: Massive Multitask Language Understanding across 57 academic subjects.
Scores are sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and do not capture every aspect of model quality, so always test on your own workload.
Pricing
Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
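How caching reduces effective cost can be sketched with simple arithmetic: tokens served from the prompt cache are billed at a discounted rate, so the blended price falls as the cache hit rate rises. The price, hit rate, and discount below are hypothetical placeholders, not Requesty's actual rates; check the pricing table above for real numbers.

```python
def effective_cost(input_tokens, cached_fraction, price_per_mtok, cache_discount=0.9):
    """Estimate prompt cost when a fraction of input tokens hits the cache.

    cache_discount is the fractional price reduction applied to cached
    tokens (hypothetical value; use your provider's actual cached-token rate).
    """
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    # Fresh tokens pay full price; cached tokens pay the discounted price.
    return (fresh + cached * (1 - cache_discount)) * price_per_mtok / 1_000_000

# 1M input tokens at a hypothetical $0.30/MTok, 70% cache hit rate:
# cost drops from $0.30 to $0.111, a ~63% reduction.
print(round(effective_cost(1_000_000, 0.7, 0.30), 4))
```

With a 70% hit rate and a 90% discount on hits, the effective cost lands inside the 30-80% savings range quoted above.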
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to minimaxi/MiniMax-M2.
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="minimaxi/MiniMax-M2",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
```
Other MiniMax models
Frequently asked questions
How much does MiniMax-M2 cost?
What is the context window of MiniMax-M2?
How does MiniMax-M2 perform on benchmarks?
What can MiniMax-M2 do?
How do I use MiniMax-M2 with the OpenAI SDK?
Access MiniMax-M2 through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.
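Requesty handles failover server-side, but the idea is easy to sketch client-side: try a preferred model, and fall back to the next one in the list if the call raises. The `send` callable and the fallback model name below are hypothetical stand-ins for illustration.

```python
def call_with_fallback(models, send):
    """Try each model id in order; return the first successful response.

    `send` is a hypothetical callable (model_id -> response) that raises
    on failure, e.g. a wrapper around client.chat.completions.create.
    """
    last_err = None
    for model in models:
        try:
            return send(model)
        except Exception as err:
            last_err = err  # remember the failure, try the next model
    raise last_err

# Usage with a fake sender that fails for the first model:
def fake_send(model):
    if model == "minimaxi/MiniMax-M2":
        raise RuntimeError("upstream busy")
    return f"answered by {model}"

print(call_with_fallback(["minimaxi/MiniMax-M2", "some-fallback-model"], fake_send))
```

A router does this for you across providers; the sketch only shows why a single API surface over many models makes failover cheap to add.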
