gryphe/mythomax-l2-13b
The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)
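The exact merge recipe isn't published here, but the general idea of a layer-wise ("gradient") merge, where two parents' weights are blended with a per-layer ratio so early layers lean toward one parent and later layers toward the other, can be sketched roughly as follows. The linear blend schedule and the repository IDs are illustrative assumptions, not the actual MythoMax recipe.

from transformers import AutoModelForCausalLM
import torch

# Illustrative sketch of a layer-wise merge between two parent models.
# Repo IDs and the blend schedule are assumptions for illustration only;
# this is NOT the published MythoMax merge procedure.
base = AutoModelForCausalLM.from_pretrained(
    "Gryphe/MythoLogic-L2-13b", torch_dtype=torch.float16
)
other = AutoModelForCausalLM.from_pretrained(
    "The-Face-Of-Goonery/Huginn-13b-FP16", torch_dtype=torch.float16
)

num_layers = base.config.num_hidden_layers  # 40 for a 13B LLaMA-2 model

merged_state = base.state_dict()
other_state = other.state_dict()

for name, tensor in merged_state.items():
    if name.startswith("model.layers."):
        # Transformer block i lives under "model.layers.i."
        layer_idx = int(name.split(".")[2])
        # Hypothetical linear schedule: early layers keep MythoLogic-L2
        # (the "input" side), late layers lean toward Huginn (the "output" side).
        alpha = layer_idx / (num_layers - 1)
    else:
        # Embeddings, final norm, lm_head: plain average (assumption).
        alpha = 0.5
    merged_state[name] = (1 - alpha) * tensor + alpha * other_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("mythomax-sketch")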
Specifications
Benchmarks
Benchmarks haven't been published yet for this exact variant.
Some variants (region-specific deployments, high-speed tiers) share benchmarks with their base model; check the base model page or the Novita AI models overview.
Pricing
Requesty charges exactly what the upstream provider charges: no markup, no per-request fees. Prompt caching and smart routing can reduce effective cost by 30-80%.
Quickstart
Drop-in compatible with the OpenAI SDK. Change the base URL, swap in your Requesty API key, and set the model to novita/gryphe/mythomax-l2-13b.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

response = client.chat.completions.create(
    model="novita/gryphe/mythomax-l2-13b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in one paragraph."},
    ],
)

print(response.choices[0].message.content)
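Because the endpoint follows the OpenAI chat completions API, streaming should work with the standard SDK pattern as well. A brief sketch, assuming streaming is passed through for this model:

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_REQUESTY_API_KEY",
    base_url="https://router.requesty.ai/v1",
)

# Stream tokens as they arrive (assumes the provider supports streaming for this model).
stream = client.chat.completions.create(
    model="novita/gryphe/mythomax-l2-13b",
    messages=[{"role": "user", "content": "Write a two-sentence fantasy opening."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)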
Other Novita AI models
Frequently asked questions
How much does gryphe/mythomax-l2-13b cost?
What is the context window of gryphe/mythomax-l2-13b?
What can gryphe/mythomax-l2-13b do?
How do I use gryphe/mythomax-l2-13b with the OpenAI SDK?
Access gryphe/mythomax-l2-13b through Requesty
One API key, 400+ models, OpenAI-compatible. No markup on provider prices, automatic failover, and smart caching built-in.

