LiteLLM + CreativeAI
Add AI image and video generation to your LiteLLM proxy. Auto-generate config from the live model registry with one curl. Zero custom code.
CreativeAI is fully OpenAI-compatible: use the openai/ prefix in LiteLLM and route GPT Image 1, Seedream 3, Kling v3, and more through your existing proxy.
Auto-config endpoint
Hit /v1/provider-config/litellm to get a ready-to-use YAML config generated from the live model registry. No manual editing. Always up to date with the latest models and endpoints.
- Generated from the live model registry, so it is always current
- Includes all image models with correct api_base
- Just set your API key env var and start the proxy
# Auto-generate a complete LiteLLM config
curl -s https://api.creativeai.run/v1/provider-config/litellm \
  > litellm_config.yaml

# Set your key and start the proxy
export CREATIVEAI_API_KEY=your_key
litellm --config litellm_config.yaml --port 4000
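For reference, the endpoint returns standard LiteLLM model_list YAML. A shortened, illustrative sketch of the shape (actual model IDs and entries come from the live registry, so yours will differ):

```yaml
model_list:
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
  - model_name: seedream-3
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
```

The os.environ/ prefix is LiteLLM's standard way to read a key from an environment variable at proxy startup.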
Why use CreativeAI with LiteLLM?
OpenAI compatibility means zero integration friction. Multi-model access means you ship faster.
One curl, full config
Auto-generate a ready-to-use litellm_config.yaml from the live model registry. No manual YAML editing.
Zero custom code
CreativeAI uses the OpenAI format natively. LiteLLM routes to it with config alone; no custom provider plugin needed.
Images + video in one proxy
Route GPT Image 1, Seedream 3, DALL-E 3, Kling v3, Seedance 1.5, and Veo 3.1 through a single LiteLLM Proxy.
Double fallback
LiteLLM retries across model entries; CreativeAI fails over automatically within each model. Two layers of reliability.
Team-ready
LiteLLM Proxy provides API key management, per-key budgets, usage tracking, and rate limits for your team.
Pay-per-generation
No subscription lock-in. GPT Image 1 from ~2 credits. 50 free credits on signup, no credit card required.
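The LiteLLM side of that double fallback is configured with a standard fallbacks entry. A sketch, with illustrative model names:

```yaml
model_list:
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
  - model_name: seedream-3
    litellm_params:
      model: openai/seedream-3
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY

litellm_settings:
  num_retries: 2
  fallbacks:
    - {"gpt-image-1": ["seedream-3"]}
```

If gpt-image-1 fails after retries, LiteLLM re-issues the request against seedream-3; CreativeAI's own failover handles transient errors beneath that.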
Get started in under 3 minutes
Two integration paths depending on your architecture.
LiteLLM Proxy
Teams & microservices
Auto-generate config, start the proxy, and call it with any OpenAI SDK. Your app code never changes.
import openai

# Point any OpenAI client at your LiteLLM proxy
client = openai.OpenAI(
    api_key="sk-anything",  # proxy handles auth
    base_url="http://localhost:4000",
)

# Generate an image: same code as OpenAI
response = client.images.generate(
    model="gpt-image-1",
    prompt="A serene mountain lake at sunset",
    size="1024x1024",
)
print(response.data[0].url)
LiteLLM Python SDK
Scripts & single-service apps
Call CreativeAI directly through the LiteLLM SDK. No proxy needed: just pip install and go.
import litellm

# No proxy needed: call CreativeAI directly
response = litellm.image_generation(
    model="openai/gpt-image-1",
    prompt="Product photo of a minimalist watch",
    api_base="https://api.creativeai.run/v1",
    api_key="your_creativeai_api_key",
)
print(response.data[0].url)
Video generation included
CreativeAI exposes an OpenAI-compatible /v1/video/generations endpoint. Submit a job and poll for completion; works with Kling v3, Seedance 1.5, Veo 3.1, and more.
Video generation is always async: submit, poll, download. Typical generation takes 30 seconds to 3 minutes depending on model and duration.
import openai

client = openai.OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1",
)

# Submit a video generation job
response = client.post(
    "/video/generations",
    body={
        "model": "kling-v3",
        "prompt": "Drone shot over a coastal city at golden hour",
        "duration": "5",
        "aspect_ratio": "16:9",
    },
    cast_to=object,
)
print(f"Video ID: {response['id']}, Status: {response['status']}")
Available models
All models accessible through one proxy with a single API key.
Current limitations
Video generation uses the /v1/video/generations endpoint (not the LiteLLM image_generation call). Submit and poll; it is async by design.
LiteLLM's built-in image_generation() routing works for image models. Video models require direct API calls or a custom LiteLLM plugin.
The auto-config endpoint returns image models only. Video models are documented separately and called via the OpenAI SDK directly.
CreativeAI does not serve LLM text completions. LiteLLM can route text to Anthropic/OpenAI while routing images to CreativeAI.
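The submit-and-poll flow described above can be sketched as a small helper. Note the assumptions: the GET /video/generations/{id} path and the exact status strings are illustrative, not confirmed by these docs; check the API reference before relying on them.

```python
import time

def poll_video(client, video_id, interval=5.0, timeout=300.0):
    """Poll until a video job finishes.

    Assumes a GET /video/generations/{id} endpoint returning a dict
    with a "status" field -- both are assumptions, verify against the docs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = client.get(f"/video/generations/{video_id}", cast_to=object)
        status = job.get("status")
        if status in ("completed", "succeeded"):
            return job  # finished job should carry the download URL
        if status in ("failed", "cancelled"):
            raise RuntimeError(f"video generation ended with status {status!r}")
        time.sleep(interval)
    raise TimeoutError(f"video {video_id} not ready after {timeout}s")
```

Pass the same openai.OpenAI client used for submission; client.get mirrors the low-level client.post call shown earlier.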
Frequently asked questions
Do I need a custom LiteLLM provider?
No. CreativeAI implements the OpenAI /v1/images/generations spec exactly. Use the openai/ prefix in LiteLLM and set api_base to https://api.creativeai.run/v1.
Can I mix CreativeAI with other providers in the same proxy?
Yes. LiteLLM can route text completions to Anthropic or OpenAI, image generation to CreativeAI, and any other provider, all in one config file.
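A mixed config might look like this sketch (the Anthropic model name is illustrative; only the CreativeAI entries need the api_base override):

```yaml
model_list:
  # Text completions -> Anthropic
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  # Image generation -> CreativeAI
  - model_name: gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_base: https://api.creativeai.run/v1
      api_key: os.environ/CREATIVEAI_API_KEY
```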
How does the auto-config endpoint work?
GET /v1/provider-config/litellm returns a YAML config generated from the live model registry. It includes all available image models with correct api_base and model IDs. Save it, set your API key env var, and start the proxy.
What about rate limits?
CreativeAI applies per-key rate limits. LiteLLM Proxy can layer its own rate limits on top. If a model returns 429, LiteLLM retries on the next model entry automatically.
Is there a free tier?
50 free credits on signup, no credit card required. After that, top up with credit packages or a subscription; pay-per-generation with no minimums.