
LiteLLM + CreativeAI

Add AI image generation to your LiteLLM proxy or Python app. CreativeAI is fully OpenAI-compatible — use the openai/ prefix in LiteLLM and access GPT Image 1, Seedream 3, and more. No custom provider needed.

Why CreativeAI + LiteLLM?

Zero Custom Code

CreativeAI uses the OpenAI format. LiteLLM routes to it with just config — no custom provider needed.

Multi-Model Access

GPT Image 1, Seedream 3, DALL-E 3. Access all models through one proxy with a single config file.

Double Fallback

LiteLLM retries across models + CreativeAI fails over automatically within each model. Two layers of reliability.

Team-Ready Proxy

LiteLLM Proxy gives you API key management, usage tracking, and rate limits for your whole team.

Pay-Per-Image

No subscription needed. GPT Image 1 starts at ~2 credits per image. 50 free credits on signup.

Content-Policy Fallback

If one model rejects a prompt, CreativeAI automatically retries with a more permissive model.

Already routing OpenAI image calls through LiteLLM?

Add api_base and swap api_key in your config. Your proxy callers don't change a single line.

litellm_config.yaml — two-line migration
# Before: LiteLLM → OpenAI directly
- model_name: gpt-image-1
  litellm_params:
    model: openai/gpt-image-1
    api_key: os.environ/OPENAI_API_KEY

# After: LiteLLM → CreativeAI (same format, more models)
- model_name: gpt-image-1
  litellm_params:
    model: openai/gpt-image-1
    api_base: https://api.creativeai.run/v1    # ← add this
    api_key: os.environ/CREATIVEAI_API_KEY     # ← swap key

Step-by-Step Integration

1

Install LiteLLM

If you don't already have LiteLLM installed, grab it from pip. If you're already running a LiteLLM proxy, skip to step 3.

terminal
pip install litellm
2

Get Your CreativeAI API Key

Sign up at creativeai.run, grab your API key from the dashboard, and set it as an environment variable. You get 50 free credits on signup — no credit card required.

.env
# .env
CREATIVEAI_API_KEY=your_api_key_here
3

Option A: Use the LiteLLM Python SDK

For quick scripts or single-service apps, call CreativeAI directly through the LiteLLM SDK. Since CreativeAI is OpenAI-compatible, use the openai/ prefix.

generate.py
import litellm

# Generate an image via CreativeAI — no proxy needed
response = litellm.image_generation(
    model="openai/gpt-image-1",
    prompt="A serene mountain lake at sunset, photorealistic",
    api_base="https://api.creativeai.run/v1",
    api_key="your_creativeai_api_key",
)

image_url = response.data[0].url
print(image_url)
4

Option B: Configure LiteLLM Proxy

For team use or microservice architectures, add CreativeAI models to your LiteLLM proxy config. Each model maps to a CreativeAI endpoint using the OpenAI-compatible format.

litellm_config.yaml
# litellm_config.yaml
model_list:
  # Image generation via CreativeAI
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  - model_name: seedream-3
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  - model_name: dall-e-3
    litellm_params:
      model: openai/dall-e-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
5

Start the Proxy & Generate

Start LiteLLM proxy with your config, then generate images through it using any OpenAI-compatible client. Your application code doesn't know or care that CreativeAI is behind the proxy.

terminal + client.py
litellm --config litellm_config.yaml --port 4000

import openai

# Point your OpenAI client at the LiteLLM proxy
client = openai.OpenAI(
    api_key="sk-anything",         # proxy handles auth
    base_url="http://localhost:4000"
)

response = client.images.generate(
    model="gpt-image-1",
    prompt="A cyberpunk cityscape at night, neon lights",
    size="1024x1024",
)

print(response.data[0].url)
6

Add Multi-Model Fallback

Map multiple CreativeAI models to the same model_name. LiteLLM automatically load-balances and retries across them — if GPT Image 1 is slow, it falls back to Seedream 3.

litellm_config.yaml + client.py
# litellm_config.yaml — with fallback routing
model_list:
  # Primary: GPT Image 1 via CreativeAI
  - model_name: image-gen
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  # Fallback: Seedream 3 via CreativeAI
  - model_name: image-gen
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

router_settings:
  routing_strategy: "simple-shuffle"  # Load-balance across models
  num_retries: 2
  retry_after: 5

# Your code stays the same — LiteLLM handles routing
response = client.images.generate(
    model="image-gen",  # Routes to best available model
    prompt="Product photo of a minimalist watch on marble",
    size="1024x1024",
)
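Conceptually, simple-shuffle routing with retries works like the sketch below. This is an illustration of the idea only, not LiteLLM's actual implementation, and the helper name call_with_retries is made up for the example:

```python
import random

def call_with_retries(deployments, call, num_retries=2):
    """Illustrative only: shuffle the deployments sharing a model_name,
    then walk the shuffled list, retrying on failure."""
    order = random.sample(deployments, k=len(deployments))
    last_err = None
    for attempt in range(num_retries + 1):
        deployment = order[attempt % len(order)]
        try:
            return call(deployment)
        except Exception as err:  # a real router retries only retryable errors
            last_err = err
    raise last_err
```

A real router also tracks cooldowns and distinguishes retryable errors (429s, timeouts) from permanent ones; the sketch skips that to show the core loop.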
7

Shortcut: Auto-Generate Config

CreativeAI exposes a /v1/provider-config/litellm endpoint that generates a ready-to-use config from the live model registry. One curl and you're running.

terminal
# Auto-generate a LiteLLM config from live models
curl -s https://api.creativeai.run/v1/provider-config/litellm > litellm_config.yaml

# Then start the proxy
export CREATIVEAI_API_KEY=your_key
litellm --config litellm_config.yaml --port 4000
8

Bonus: Video Generation

CreativeAI also exposes an OpenAI-compatible /v1/video/generations endpoint. Submit a job and poll for completion — works with Kling v3, Seedance 1.5, and more.

video_generate.py
import openai

client = openai.OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)

# Submit a video generation job
response = client.post(
    "/video/generations",
    body={
        "model": "kling-v3",
        "prompt": "A drone shot over a coastal city at golden hour",
        "duration": "5",
        "aspect_ratio": "16:9",
    },
    cast_to=object,
)

# The job runs asynchronously — keep the id to poll for completion
generation_id = response["id"]
print(f"Video generation started: {generation_id}")
print(f"Status: {response['status']}")
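The snippet above only submits the job; completion still has to be polled. A minimal polling helper is sketched below. The terminal status values ("succeeded", "failed") and the shape of the status response are assumptions to check against the CreativeAI docs:

```python
import time

def poll_until_done(fetch_status, interval=5.0, timeout=600.0,
                    terminal=("succeeded", "failed")):
    """Call fetch_status() repeatedly until the job reaches a
    terminal state, or raise TimeoutError at the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") in terminal:
            return job
        time.sleep(interval)
    raise TimeoutError("video generation did not finish in time")
```

You would call it with a closure that fetches the job, for example a GET against a status endpoint for the generation_id, if CreativeAI exposes one.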

Why Multi-Model Fallback Matters

Single-provider image APIs fail in predictable ways: 429 rate limits during peak traffic, content-policy false positives on legitimate prompts, and surprise deprecations that break your pipeline overnight.

LiteLLM + CreativeAI gives you two layers of resilience:

Layer 1: LiteLLM Routing

LiteLLM load-balances across multiple model entries and retries failed requests on alternate models. You control the strategy: round-robin, lowest-latency, or simple shuffle.

Layer 2: CreativeAI Failover

Within each model call, CreativeAI fails over automatically across upstream providers. If one provider returns 429 or a content-policy block, CreativeAI retries with an equivalent model — transparently.

FAQ

Do I need a custom LiteLLM provider for CreativeAI?

No. CreativeAI implements the OpenAI /v1/images/generations endpoint exactly. Use the openai/ prefix in LiteLLM and set api_base to https://api.creativeai.run/v1.

Does this work with litellm.image_generation()?

Yes. Both the LiteLLM Python SDK (litellm.image_generation()) and the LiteLLM Proxy (/images/generations endpoint) work with CreativeAI.

Can I mix CreativeAI models with other providers?

Absolutely. LiteLLM can route text completions to Anthropic/OpenAI while routing image generation to CreativeAI. Each model entry is independent.

How does pricing work?

Pay-per-generation with no monthly minimum. GPT Image 1 starts at ~2 credits per image. You get 50 free credits on signup — no credit card required.

What if CreativeAI is down?

CreativeAI has built-in multi-provider failover. On top of that, you can configure LiteLLM fallbacks to route to a different provider entirely if needed.

See the full integration overview

Visit the LiteLLM integration page for a product overview, available models, current limitations, and the auto-config endpoint.

Ready to Build?

Get your API key and add CreativeAI to your LiteLLM config. 50 free credits — no credit card required.