gemini-2.5-flash
Google's first hybrid reasoning model, supporting a 1M-token context window and configurable thinking budgets. The most balanced Gemini model, optimized for low-latency use cases.
Specifications
Benchmarks
Released 2025-04
SWE-bench: Resolving real GitHub issues from 12 popular Python repositories.
GPQA: Graduate-level physics, chemistry & biology questions designed to resist Googling.
MMLU: Massive Multitask Language Understanding across 57 academic subjects.
Scores are sourced from official model cards, Artificial Analysis, and public leaderboards. Benchmarks measure specific skills and do not capture every aspect of model quality — always test on your own workload.
Pricing
Requesty charges exactly what the upstream provider charges — no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to google/gemini-2.5-flash.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="google/gemini-2.5-flash",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
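The model's thinking budget can also be tuned per request. Below is a minimal sketch, assuming the router forwards provider-specific fields through the OpenAI SDK's extra_body parameter; the google.thinking_config.thinking_budget shape follows Gemini's own OpenAI-compatibility API and is an assumption here, so check the Requesty docs for the exact supported form.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical: cap how many tokens the model may spend on internal reasoning.
# The extra_body pass-through and field names are assumptions based on
# Gemini's documented thinking_config; they may differ on Requesty.
response = client.chat.completions.create(
    model="google/gemini-2.5-flash",
    messages=[
        {"role": "user", "content": "Summarize the CAP theorem in two sentences."},
    ],
    extra_body={
        "google": {
            "thinking_config": {
                "thinking_budget": 1024,  # reasoning-token budget for this request
            }
        }
    },
)

print(response.choices[0].message.content)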
Other Google LLC (Gemini API) models
Frequently asked questions
How much does gemini-2.5-flash cost?
What is the context window of gemini-2.5-flash?
How does gemini-2.5-flash perform on benchmarks?
What can gemini-2.5-flash do?
How do I use gemini-2.5-flash with the OpenAI SDK?
Access gemini-2.5-flash through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.

