What we are building
We are building the LLM platform that will power tomorrow's AI-first companies.
A single platform that allows any LLM user to integrate, scale, secure and optimize their LLM inference across 200+ providers in a few clicks.
A highly scalable, reliable and secure infra that exposes simple, OpenAI-compatible APIs that every product or tool can use. The API gives you access to any of the LLM providers, handling API compatibility while adding telemetry, analytics, load balancing, MCP integrations, prompt management, data loss protection and much more out of the box.
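To make this concrete, here is a minimal sketch of what calling the platform could look like, assuming a hypothetical gateway URL, a hypothetical platform-issued key and an illustrative model identifier, using the standard openai Python client:

```python
# Minimal sketch: the gateway speaks the standard OpenAI API,
# so the stock client works unchanged.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # hypothetical platform endpoint
    api_key="YOUR_PLATFORM_KEY",                # hypothetical platform-issued key
)

# Any provider's model is requested through the same OpenAI-compatible call;
# routing, telemetry and load balancing happen behind this endpoint.
response = client.chat.completions.create(
    model="anthropic/claude-sonnet",  # illustrative model identifier
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```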
Why join us
You understand that the best development teams need the best tooling and processes.
You understand how hard it is to manage Kubernetes clusters and self-hosted deployments in different environments.
You understand how to build security practices into systems in a way that protects users but doesn't slow them down.
And you are excited to tackle these massive challenges and build an infra product that will be the backbone for the best AI products, powering the next AI coding assistants, agent orchestration systems and other AI-based solutions.
Work directly with the founders to influence the company's direction and make a tangible impact. If you're looking to build your own startup someday or want to thrive in the dynamic environment of a fast-growing company, this is the ideal place to accelerate your impact and growth.
A generous total compensation package with significant equity.
Our stack
We run a pragmatic micro-service architecture:
You should expect this list to evolve. And most probably, you will be the one redefining and building it.