> MODEL_REGISTRY

Hosted models, ready to route.

Robi exposes a focused lineup of hosted models. Pick the tier that matches your latency, quality and cost requirements, then call it through the same https://api.robiai.com/v1 surface.

Robi model lineup

Start with a sensible default, then move workloads up or down the stack as you learn more about quality vs. spend.

Every model is addressable by name and can be swapped without touching your product code — just update the mapping in Robi.

lexa-mml

Flagship

Most capable hosted model for complex reasoning, tools and analysis.

Ideal for

  • Agent-style workflows and orchestration
  • High-stakes answers and decision support
  • Multi-step tools and function calling
"model": "lexa-mml"
Input: n/a per 1M tokens
Output: n/a per 1M tokens

lexa-x1

Default

Fast, cost-effective model tuned for everyday workloads.

Ideal for

  • Support assistants and helpdesk flows
  • Internal tools and dashboards
  • Summaries, rewrites and light reasoning
"model": "lexa-x1"
Input: n/a per 1M tokens
Output: n/a per 1M tokens

lime

Multimodal

Large-context model built for long inputs and context-heavy prompts.

Ideal for

  • Long document understanding
  • Knowledge base and retrieval flows
  • Analytics-style prompts over larger spans
"model": "lime"
Input: n/a per 1M tokens
Output: n/a per 1M tokens

Swap models without redeploying

Keep your applications pointed at a stable alias and let Robi handle the mapping to underlying hosted models as they evolve.

  • Use named presets for chat, tools and batch jobs
  • Migrate to new versions behind the same alias
  • Track usage and cost per alias or workspace
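The alias pattern above can be sketched in a few lines of client code. This is an illustration only: the alias names and the mapping table are hypothetical application-side conventions, not a feature of Robi's API.

```python
# Hypothetical alias table: one place to remap workloads to hosted models.
# The alias names here are illustrative, not part of Robi's API.
ALIASES = {
    "chat-default": "lexa-x1",   # everyday workloads
    "agent-tools": "lexa-mml",   # complex reasoning and tool use
    "long-context": "lime",      # long documents and retrieval
}

def resolve_model(alias: str) -> str:
    """Return the hosted model behind an alias, failing loudly on typos."""
    try:
        return ALIASES[alias]
    except KeyError:
        raise ValueError(f"Unknown model alias: {alias!r}")
```

Product code only ever mentions the alias; moving a workload to a different tier is a one-line change to the mapping, with no change to the callers.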

Example request

curl -X POST https://api.robiai.com/v1/chat/completions \
  -H "Authorization: Bearer robi_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lexa-x1",
    "messages": [{ "role": "user", "content": "Which Robi model should I use?" }]
  }'
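The same call can be made from Python with the standard library. A minimal sketch, assuming the endpoint accepts the OpenAI-style chat payload shown in the curl example; the helper name and the key placeholder are illustrative.

```python
import json
import urllib.request

API_URL = "https://api.robiai.com/v1/chat/completions"

def build_chat_request(model: str, user_content: str,
                       api_key: str) -> urllib.request.Request:
    """Assemble the POST request mirroring the curl example above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("lexa-x1", "Which Robi model should I use?", "robi_sk_...")
# response = urllib.request.urlopen(req)  # sends the request; needs a real key
```

Because only the `"model"` field changes between tiers, switching from `lexa-x1` to `lexa-mml` is a one-string change in the payload.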