Models

ModelBridge routes your requests to leading AI models from Anthropic, OpenAI, and more. All models are accessible through a single OpenAI-compatible endpoint.

Available models

These models are currently available through ModelBridge. The set is read from the live routing table and can change without a docs update; fetch the current list with the models endpoint below.

Listing models programmatically

curl https://api.modelbridge.dev/v1/models \
  -H "Authorization: Bearer $MODELBRIDGE_API_KEY"
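The same call can be made from Python with only the standard library. The helper below assembles and sends the request; the `mb_live_...` key format matches the examples elsewhere on this page, and the response shape assumes the OpenAI-compatible list format (`{"object": "list", "data": [...]}`):

```python
import json
import urllib.request

BASE_URL = "https://api.modelbridge.dev/v1"

def build_models_request(api_key: str) -> urllib.request.Request:
    # Same request as the curl example: GET /v1/models with a bearer token.
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_model_ids(api_key: str) -> list[str]:
    # The endpoint is OpenAI-compatible, so model entries live under "data".
    with urllib.request.urlopen(build_models_request(api_key)) as resp:
        return [model["id"] for model in json.load(resp)["data"]]
```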

Model routing

When you send a request with a model field, ModelBridge:

  1. Looks up the model in its routing table
  2. Resolves the upstream backend (Anthropic, OpenAI, etc.)
  3. Translates the request format if needed (e.g., OpenAI format to Anthropic format)
  4. Forwards the request and streams the response back

You don't need to worry about provider-specific APIs or authentication. ModelBridge handles format translation transparently.
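As a rough illustration of step 3, here is a minimal sketch of what translating an OpenAI-format chat request into Anthropic's Messages format involves: the system prompt moves to a top-level field and `max_tokens` becomes explicit. This is illustrative only, not ModelBridge's actual implementation:

```python
def to_anthropic(openai_req: dict) -> dict:
    """Sketch: OpenAI chat-completions request -> Anthropic Messages request.

    Illustrative only; a real translation layer handles much more
    (tools, images, streaming deltas, stop sequences, ...).
    """
    messages = openai_req["messages"]
    # Anthropic takes the system prompt as a top-level field, not a message.
    system = " ".join(m["content"] for m in messages if m["role"] == "system")
    translated = {
        "model": openai_req["model"],
        # Anthropic's Messages API requires max_tokens; fall back to a
        # default if the caller did not set one.
        "max_tokens": openai_req.get("max_tokens", 1024),
        "messages": [m for m in messages if m["role"] != "system"],
    }
    if system:
        translated["system"] = system
    return translated
```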

Try it out

The Python example below sends a basic chat completion request through ModelBridge:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.modelbridge.dev/v1",
    api_key="mb_live_your_key_here",
)

response = client.chat.completions.create(
    model="your-model-id",  # any model ID returned by GET /v1/models
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! What can you help me with?"},
    ],
)

print(response.choices[0].message.content)

Pricing

Model pricing varies by provider and capability. Each request's cost is calculated based on input and output tokens, then deducted from your balance. See your billing dashboard for current rates and usage breakdown.

Token costs include a discount based on your plan tier. Check the Billing & Usage docs for details.
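As a sketch of the arithmetic, assuming per-million-token rates (the rates and discount below are made up; the real numbers live in your billing dashboard):

```python
def request_cost(
    input_tokens: int,
    output_tokens: int,
    input_rate: float,   # USD per million input tokens (hypothetical)
    output_rate: float,  # USD per million output tokens (hypothetical)
    tier_discount: float = 0.0,  # e.g. 0.10 for a 10% plan-tier discount
) -> float:
    """Cost deducted from your balance for one request, in USD."""
    gross = (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000
    return gross * (1 - tier_discount)

# Example: 1,000 input + 500 output tokens at $3 / $15 per million,
# with a 10% tier discount.
cost = request_cost(1000, 500, 3.0, 15.0, tier_discount=0.10)
```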
