phi-4
Phi-4-reasoning-plus is an enhanced 14B parameter model from Microsoft, fine-tuned from Phi-4 with additional reinforcement learning to boost accuracy on math, science, and code reasoning tasks. It uses the same dense decoder-only transformer architecture as Phi-4, but generates longer, more comprehensive outputs structured into a step-by-step reasoning trace and final answer. While it offers improved benchmark scores over Phi-4-reasoning across tasks like AIME, OmniMath, and HumanEvalPlus, its responses are typically ~50% longer, resulting in higher latency. Designed for English-only applications, it is well-suited for structured reasoning workflows where output quality takes priority over response speed.
Specifications
Benchmarks
Benchmarks haven't been published yet for this exact variant.
Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the DeepInfra Inc. models overview.
Pricing
Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to deepinfra/microsoft/phi-4.
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="deepinfra/microsoft/phi-4",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
```
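If you prefer not to pull in the SDK, the same call can be made over plain HTTP, since the endpoint is OpenAI-compatible. A minimal standard-library sketch; the `build_request` helper name is ours, not part of any Requesty client:

```python
import json
import urllib.request

BASE_URL = "https://router.requesty.ai/v1"
MODEL = "deepinfra/microsoft/phi-4"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions POST for the OpenAI-compatible endpoint."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Send with urllib.request.urlopen(req) once a real API key is set.
req = build_request("YOUR_REQUESTY_API_KEY", "Explain quantum computing in one paragraph.")
```

The response body follows the standard OpenAI chat-completions shape, so the answer lives at `choices[0].message.content` of the returned JSON.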
Other DeepInfra Inc. models
Frequently asked questions
How much does phi-4 cost?
What is the context window of phi-4?
What can phi-4 do?
How do I use phi-4 with the OpenAI SDK?
Access phi-4 through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.