LiteLLM + CreativeAI
Add AI image generation to your LiteLLM proxy or Python app. CreativeAI is fully OpenAI-compatible: use the openai/ prefix in LiteLLM to access GPT Image 1, Seedream 3, and more. No custom provider needed.
Why CreativeAI + LiteLLM?
Zero Custom Code
CreativeAI uses the OpenAI format, so LiteLLM routes to it with config alone. No custom provider needed.
Multi-Model Access
GPT Image 1, Seedream 3, DALL-E 3. Access all models through one proxy with a single config file.
Double Fallback
LiteLLM retries across models, and CreativeAI automatically fails over within each model. Two layers of reliability.
Team-Ready Proxy
LiteLLM Proxy gives you API key management, usage tracking, and rate limits for your whole team.
Pay-Per-Image
No subscription needed. GPT Image 1 starts at ~2 credits per image. 50 free credits on signup.
Content-Policy Fallback
If one model rejects a prompt, CreativeAI automatically retries with a more permissive model.
Already routing OpenAI image calls through LiteLLM?
Add one line to your config. Change api_key and add api_base. Your proxy callers don't change a single line.
# Before: LiteLLM → OpenAI directly
- model_name: gpt-image-1
  litellm_params:
    model: openai/gpt-image-1
    api_key: os.environ/OPENAI_API_KEY

# After: LiteLLM → CreativeAI (same format, more models)
- model_name: gpt-image-1
  litellm_params:
    model: openai/gpt-image-1
    api_base: https://api.creativeai.run/v1  # ← add this
    api_key: os.environ/CREATIVEAI_API_KEY   # ← swap key
Step-by-Step Integration
Install LiteLLM
If you don't already have LiteLLM installed, grab it from pip; running the proxy server also needs the proxy extras. If you're already running a LiteLLM proxy, skip to step 3.
pip install litellm
pip install 'litellm[proxy]'  # only needed if you'll run the proxy server
Get Your CreativeAI API Key
Sign up at creativeai.run, grab your API key from the dashboard, and set it as an environment variable. You get 50 free credits on signup, no credit card required.
# .env
CREATIVEAI_API_KEY=your_api_key_here
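The os.environ/CREATIVEAI_API_KEY syntax used in the configs below resolves against the process environment, so the variable must be set before LiteLLM runs. A quick way to sanity-check it from Python (the setdefault placeholder here is purely illustrative; normally the key comes from your shell or .env):

```python
import os

# Normally exported in your shell or loaded from .env;
# setdefault only illustrates the variable name LiteLLM expects.
os.environ.setdefault("CREATIVEAI_API_KEY", "your_api_key_here")

# LiteLLM resolves "os.environ/CREATIVEAI_API_KEY" against this value at request time
api_key = os.environ["CREATIVEAI_API_KEY"]
print(f"Key loaded: {api_key[:4]}...")
```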
Option A: Use the LiteLLM Python SDK
For quick scripts or single-service apps, call CreativeAI directly through the LiteLLM SDK. Since CreativeAI is OpenAI-compatible, use the openai/ prefix.
import litellm

# Generate an image via CreativeAI – no proxy needed
response = litellm.image_generation(
    model="openai/gpt-image-1",
    prompt="A serene mountain lake at sunset, photorealistic",
    api_base="https://api.creativeai.run/v1",
    api_key="your_creativeai_api_key",
)

image_url = response.data[0].url
print(image_url)
Option B: Configure LiteLLM Proxy
For team use or microservice architectures, add CreativeAI models to your LiteLLM proxy config. Each model maps to a CreativeAI endpoint using the OpenAI-compatible format.
# litellm_config.yaml
model_list:
  # Image generation via CreativeAI
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  - model_name: seedream-3
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  - model_name: dall-e-3
    litellm_params:
      model: openai/dall-e-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
Start the Proxy & Generate
Start LiteLLM proxy with your config, then generate images through it using any OpenAI-compatible client. Your application code doesn't know or care that CreativeAI is behind the proxy.
litellm --config litellm_config.yaml --port 4000
import openai

# Point your OpenAI client at the LiteLLM proxy
client = openai.OpenAI(
    api_key="sk-anything",  # proxy handles auth
    base_url="http://localhost:4000"
)

response = client.images.generate(
    model="gpt-image-1",
    prompt="A cyberpunk cityscape at night, neon lights",
    size="1024x1024",
)
print(response.data[0].url)
Add Multi-Model Fallback
Map multiple CreativeAI models to the same model_name. LiteLLM automatically load-balances and retries across them: if GPT Image 1 is slow or failing, it falls back to Seedream 3.
# litellm_config.yaml – with fallback routing
model_list:
  # Primary: GPT Image 1 via CreativeAI
  - model_name: image-gen
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

  # Fallback: Seedream 3 via CreativeAI
  - model_name: image-gen
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

router_settings:
  routing_strategy: "simple-shuffle"  # Load-balance across models
  num_retries: 2
  retry_after: 5

# Your code stays the same – LiteLLM handles routing
response = client.images.generate(
    model="image-gen",  # Routes to best available model
    prompt="Product photo of a minimalist watch on marble",
    size="1024x1024",
)
Shortcut: Auto-Generate Config
CreativeAI exposes a /v1/provider-config/litellm endpoint that generates a ready-to-use config from the live model registry. One curl and you're running.
# Auto-generate a LiteLLM config from live models
curl -s https://api.creativeai.run/v1/provider-config/litellm > litellm_config.yaml

# Then start the proxy
export CREATIVEAI_API_KEY=your_key
litellm --config litellm_config.yaml --port 4000
Bonus: Video Generation
CreativeAI also exposes an OpenAI-compatible /v1/video/generations endpoint. Submit a job and poll for completion; works with Kling v3, Seedance 1.5, and more.
import openai

client = openai.OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)

# Submit a video generation job
response = client.post(
    "/video/generations",
    body={
        "model": "kling-v3",
        "prompt": "A drone shot over a coastal city at golden hour",
        "duration": "5",
        "aspect_ratio": "16:9",
    },
    cast_to=object,
)
# The job runs asynchronously – grab its ID, then poll until it finishes
# (status endpoint path assumed from the OpenAI-compatible pattern; check the CreativeAI docs)
import time

generation_id = response["id"]
print(f"Video generation started: {generation_id}")
while response["status"] not in ("completed", "failed"):
    time.sleep(5)
    response = client.get(f"/video/generations/{generation_id}", cast_to=object)
print(f"Status: {response['status']}")
Why Multi-Model Fallback Matters
Single-provider image APIs fail in predictable ways: 429 rate limits during peak traffic, content-policy false positives on legitimate prompts, and surprise deprecations that break your pipeline overnight.
LiteLLM + CreativeAI gives you two layers of resilience:
Layer 1: LiteLLM Routing
LiteLLM load-balances across multiple model entries and retries failed requests on alternate models. You control the strategy: round-robin, lowest-latency, or simple shuffle.
Layer 2: CreativeAI Failover
Within each model call, CreativeAI automatically fails over across upstream providers. If one provider returns a 429 or a content-policy block, CreativeAI transparently retries with an equivalent model.
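The interplay of the two layers can be sketched in plain Python. These helper functions are hypothetical illustrations of the routing logic, not LiteLLM's or CreativeAI's actual internals:

```python
import random

def call_with_provider_failover(model, prompt, providers):
    """Layer 2 sketch (CreativeAI-style): try equivalent upstream providers for one model."""
    for provider in providers:
        try:
            return provider(model, prompt)
        except RuntimeError:
            # e.g. a 429 or a content-policy block from this provider
            continue
    raise RuntimeError(f"all providers failed for {model}")

def route_with_fallback(prompt, entries):
    """Layer 1 sketch (LiteLLM-style): shuffle model entries and retry across them."""
    shuffled = list(entries)
    random.shuffle(shuffled)  # "simple-shuffle" load balancing
    for model, providers in shuffled:
        try:
            return call_with_provider_failover(model, prompt, providers)
        except RuntimeError:
            continue
    raise RuntimeError("all models failed")

def flaky(model, prompt):
    raise RuntimeError("429: rate limited")

def healthy(model, prompt):
    return f"image from {model}"

# gpt-image-1's first provider rate-limits; failover still produces an image
result = route_with_fallback(
    "a mountain lake",
    [("gpt-image-1", [flaky, healthy]), ("seedream-3", [healthy])],
)
print(result)
```

A request only fails outright when every provider behind every model entry fails, which is why the two layers compound rather than duplicate each other.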
Available Models
GPT Image 1 (Popular)
OpenAI's latest – photorealistic, text rendering, complex compositions
model: "openai/gpt-image-1"

Seedream 3 (Fast)
Fast, high-quality generation with excellent prompt adherence
model: "openai/seedream-3"

DALL-E 3 (Default)
Routes to the best available model – a great default choice
model: "openai/dall-e-3"

Seedance 1.5 (Video)
Video generation model – use with the video endpoint
model: "openai/seedance-1.5"

Kling v3 (Video)
Professional video generation with cinematic quality
model: "openai/kling-v3"

Veo 3.1 (Video)
Google's latest video model – high-fidelity output
model: "openai/veo-3.1"
FAQ
Do I need a custom LiteLLM provider for CreativeAI?
No. CreativeAI implements the OpenAI /v1/images/generations endpoint exactly. Use the openai/ prefix in LiteLLM and set api_base to https://api.creativeai.run/v1.
Does this work with litellm.image_generation()?
Yes. Both the LiteLLM Python SDK (litellm.image_generation()) and the LiteLLM Proxy (/images/generations endpoint) work with CreativeAI.
Can I mix CreativeAI models with other providers?
Absolutely. LiteLLM can route text completions to Anthropic/OpenAI while routing image generation to CreativeAI. Each model entry is independent.
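For example, a single config might route chat traffic to Anthropic while images go to CreativeAI. This fragment is a sketch; the Anthropic model ID shown is one of LiteLLM's documented identifiers, so adjust it to whatever you actually deploy:

```yaml
model_list:
  # Text completions → Anthropic
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY

  # Image generation → CreativeAI
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
```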
How does pricing work?
Pay-per-generation with no monthly minimum. GPT Image 1 starts at ~2 credits per image, and you get 50 free credits on signup (roughly 25 images) with no credit card required.
What if CreativeAI is down?
CreativeAI has built-in multi-provider failover. On top of that, you can configure LiteLLM fallbacks to route to a different provider entirely if needed.
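As a sketch, LiteLLM's config-level fallbacks can chain one model_name to another deployment entirely. The exact keys follow the LiteLLM docs; image-gen and dall-e-3 here refer to model_name entries you've already defined:

```yaml
router_settings:
  # If every "image-gen" entry fails, retry the request on "dall-e-3"
  fallbacks: [{"image-gen": ["dall-e-3"]}]
  num_retries: 2
```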
Ready to Build?
Get your API key and add CreativeAI to your LiteLLM config. 50 free credits, no credit card required.