
gemini-3.1-flash-lite-preview

Gemini 3.1 Flash Lite Preview is the most cost-efficient model in the Gemini family, optimized for high-volume, low-latency tasks. It delivers fast responses with solid quality for everyday use cases including summarization, classification, and simple reasoning.

👁 Vision · 🔧 Tool calling · Caching

Specifications

Context window: 1.0M tokens
Max output: 66K tokens
API type: chat
Added: Mar 3, 2026
Model ID: google/gemini-3.1-flash-lite-preview
Data retention: Yes
Used for training: Unknown
Provider location: 🌍 Global
Privacy policy: Gemini API Terms

Benchmarks

Benchmarks haven't been published yet for this exact variant.

Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model — check the base model page or the Google LLC (Gemini API) models overview.

Pricing

Input / 1M: $0.25
Output / 1M: $1.50
Cache write / 1M: $0.08
Cache read / 1M: $0.02

Estimated cost:
100K input + 10K output: $0.0400
1M input + 100K output: $0.40
10M input + 1M output: $4.00

Requesty charges exactly what the upstream provider charges — no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
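For a rough sense of where the estimates above come from, here is a small Python sketch that reproduces them from the per-million rates in the pricing table. The cached variant uses the cache-read rate for illustration only; actual savings depend on your cache hit rate.

# Sanity check of the "Estimated cost" rows above, using the
# per-million rates from the pricing table.
INPUT_PER_M = 0.25       # $ per 1M input tokens
OUTPUT_PER_M = 1.50      # $ per 1M output tokens
CACHE_READ_PER_M = 0.02  # $ per 1M cached input tokens read

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated request cost in USD; cached tokens bill at the cache-read rate."""
    fresh = input_tokens - cached_tokens
    return (
        fresh / 1e6 * INPUT_PER_M
        + cached_tokens / 1e6 * CACHE_READ_PER_M
        + output_tokens / 1e6 * OUTPUT_PER_M
    )

print(estimate_cost(100_000, 10_000))        # 0.04
print(estimate_cost(1_000_000, 100_000))     # 0.40
print(estimate_cost(10_000_000, 1_000_000))  # 4.00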

Quickstart

Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to google/gemini-3.1-flash-lite-preview.

from openai import OpenAI

# Point the OpenAI SDK at Requesty's router with your Requesty API key.
client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="google/gemini-3.1-flash-lite-preview",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
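The model also supports tool calling (see the capability badges above). Here is a minimal sketch using the standard OpenAI tools format, which Requesty forwards upstream; the get_weather function and its schema are hypothetical placeholders.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="google/gemini-3.1-flash-lite-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
)

# If the model chose to call the tool, the call (name plus JSON arguments)
# arrives on the message; otherwise you get a normal text reply.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)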


Frequently asked questions

How much does gemini-3.1-flash-lite-preview cost?
gemini-3.1-flash-lite-preview is priced at $0.25 per million input tokens and $1.50 per million output tokens when accessed via Requesty. Prompt caching is supported, which can cut effective input cost by up to 90% on repeated context. Requesty charges exactly what the upstream provider charges — we don't add markup.
What is the context window of gemini-3.1-flash-lite-preview?
gemini-3.1-flash-lite-preview has a context window of 1.0M tokens, with a maximum output of 66K tokens per response. That's roughly 750,000 words of input you can fit in a single prompt.
What can gemini-3.1-flash-lite-preview do?
gemini-3.1-flash-lite-preview supports vision input, tool calling, and prompt caching. You can call it through any OpenAI-compatible client by pointing base_url to Requesty; a minimal vision sketch follows below.
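A sketch of a vision request, assuming Requesty passes through the standard OpenAI-style image_url content part; the image URL is a placeholder.

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Standard OpenAI-style multimodal message: a text part plus an image_url part.
# The URL below is a placeholder; substitute a publicly reachable image.
response = client.chat.completions.create(
    model="google/gemini-3.1-flash-lite-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)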
How do I use gemini-3.1-flash-lite-preview with the OpenAI SDK?
Install the OpenAI SDK, set base_url to "https://router.requesty.ai/v1", set your API key to your Requesty key, and set the model to "google/gemini-3.1-flash-lite-preview". The Quickstart above shows a Python snippet.

Access gemini-3.1-flash-lite-preview through Requesty

One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.