
Perplexity AI
AI-powered search and research assistant. Requesty routes to 3 Perplexity AI models starting at $1.00 per 1M input tokens with context windows up to 205K tokens. One API key, OpenAI-compatible SDK, no markup.
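Since Requesty exposes an OpenAI-compatible endpoint, a chat completion request is just a standard OpenAI-style POST body. A minimal sketch, assuming the model ID `sonar` and a bearer-token header; check the models explorer for exact identifiers and the router's base URL:

```python
import json

def build_chat_request(model: str, user_message: str, api_key: str) -> tuple[dict, dict]:
    """Return (headers, body) for a POST to the router's /chat/completions endpoint.

    The schema below is the standard OpenAI chat-completions shape; the
    model ID passed in is an assumption, not a confirmed identifier.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # one Requesty API key for all providers
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, body

headers, body = build_chat_request("sonar", "Summarize today's AI news.", "YOUR_REQUESTY_KEY")
print(json.dumps(body, indent=2))
```

Because the request shape is unchanged from OpenAI's, any OpenAI SDK works as-is once you point its base URL at the Requesty router.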
All Perplexity AI models
| Model | Context | Max Output | Input/1M | Output/1M | Capabilities | SWE-Bench |
|---|---|---|---|---|---|---|
| sonar-reasoning-pro | 131K | 8K | $2.00 | $8.00 | | |
| sonar | 131K | 8K | $1.00 | $1.00 | | |
| sonar-pro | 205K | 8K | $3.00 | $15.00 | | |
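The per-1M-token rates in the table make cost estimation simple arithmetic: cost = tokens / 1,000,000 × rate, summed over input and output. A minimal sketch using the rates above (the model IDs mirror the table rows):

```python
# (input $/1M tokens, output $/1M tokens), copied from the table above
RATES = {
    "sonar": (1.00, 1.00),
    "sonar-pro": (3.00, 15.00),
    "sonar-reasoning-pro": (2.00, 8.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one call: per-million rates scaled by token counts."""
    input_rate, output_rate = RATES[model]
    return input_tokens / 1_000_000 * input_rate + output_tokens / 1_000_000 * output_rate

# Example: sonar-pro with 10K input and 1K output tokens:
# 10_000/1e6 * $3.00 + 1_000/1e6 * $15.00 = $0.03 + $0.015 = $0.045
print(estimate_cost("sonar-pro", 10_000, 1_000))  # → 0.045
```

Actual billing follows the real-time synced rates, so treat this as an estimate rather than an invoice.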
About Perplexity AI on Requesty
How many Perplexity AI models are available through Requesty?
Requesty routes to 3 Perplexity AI models including regional variants, with pricing synced in real time to the upstream provider.
What is the cheapest Perplexity AI model?
The cheapest Perplexity AI model, sonar, starts at $1.00 per million input tokens. See the pricing columns in the table above for full per-model rates.
Does Requesty add markup on Perplexity AI pricing?
No. Requesty passes through exactly what Perplexity AI charges. You pay the same per-token rates as going direct, plus you get smart routing, caching, analytics, and one unified API for 400+ models.
Is my data used to train Perplexity AI models?
Perplexity AI's training policy varies by product and tier. See their privacy policy for specifics, and contact Requesty for enterprise-grade data controls.
Where are Perplexity AI models hosted?
Perplexity AI models are hosted in the US. Some models are available in additional regions through AWS Bedrock, Azure, or Google Vertex AI; filter by region on the Perplexity AI rows in the models explorer.
