Developer Quick Starts

Starter Examples

One page, every integration path. Find your starting point, copy working code, and ship in minutes, not hours.

OpenAI SDK compatible · 6 models, 1 API key · 50 free credits on signup

Image Generation

POST /v1/images/generations · OpenAI SDK compatible

python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.creativeai.run/v1"
)

response = client.images.generate(
    model="gpt-image-1",  # or seedream-3, dall-e-3
    prompt="A product photo on white background",
    size="1024x1024"
)

print(response.data[0].url)
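The sample above prints a hosted URL, but OpenAI-compatible image APIs can also return the image inline as base64 (the `b64_json` field), depending on the model and `response_format`. A small helper (illustrative, not part of the SDK) covers both delivery modes:

```python
import base64
import pathlib

def save_image(item, path: str) -> str:
    """Save one entry of response.data to disk.

    Handles both delivery modes: inline base64 (item.b64_json) or a
    hosted URL (item.url). Which one you get can depend on the model
    and on response_format, so check both fields.
    """
    b64 = getattr(item, "b64_json", None)
    if b64:
        pathlib.Path(path).write_bytes(base64.b64decode(b64))
    else:
        import httpx  # only needed for the URL case
        pathlib.Path(path).write_bytes(httpx.get(item.url).content)
    return path
```

Usage: `save_image(response.data[0], "product.png")`.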

Video Generation (async)

POST /v1/video/generations · submit, then poll or use webhooks

python
import httpx

API_KEY = "YOUR_API_KEY"
BASE = "https://api.creativeai.run"

# 1. Submit job
resp = httpx.post(
    f"{BASE}/v1/video/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "kling-v3",  # or seedance-1.5, veo-3.1
        "prompt": "Product reveal animation, slow dolly in",
        "duration": 5,
    },
)
job = resp.json()
task_id = job["id"]

# 2. Poll for result
import time

while True:
    status = httpx.get(
        f"{BASE}/v1/video/generations/{task_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    ).json()
    if status["status"] == "completed":
        print(status["video_url"])
        break
    if status["status"] == "failed":
        raise RuntimeError(status.get("error", "video generation failed"))
    time.sleep(5)
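For production use you may want the polling wrapped in a reusable helper with a hard timeout, so a stalled job cannot block forever. A sketch under stated assumptions: the `"failed"` status and `"error"` key are assumptions here, so check the API reference for the exact terminal states.

```python
import time

def poll_job(fetch_status, interval: float = 5.0, timeout: float = 300.0) -> str:
    """Poll fetch_status() until the job finishes, with a hard timeout.

    fetch_status is any zero-argument callable returning the job dict
    shown above (a "status" key, plus "video_url" once completed).
    Raises RuntimeError on failure and TimeoutError on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["status"] == "completed":
            return status["video_url"]
        if status["status"] == "failed":
            raise RuntimeError(status.get("error", "video generation failed"))
        time.sleep(interval)
    raise TimeoutError(f"job did not finish within {timeout:.0f}s")
```

Usage: `url = poll_job(lambda: httpx.get(f"{BASE}/v1/video/generations/{task_id}", headers={"Authorization": f"Bearer {API_KEY}"}).json())`.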

Framework integrations

Vercel AI SDK

typescript
import { generateImage } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const creativeai = createOpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://api.creativeai.run/v1',
});

const { image } = await generateImage({
  model: creativeai.image('gpt-image-1'),
  prompt: 'A minimalist tech startup logo',
});

LiteLLM Proxy

yaml
# litellm_config.yaml - add to your existing proxy
model_list:
  - model_name: creativeai/gpt-image-1
    litellm_params:
      model: openai/gpt-image-1
      api_key: YOUR_API_KEY
      api_base: https://api.creativeai.run/v1

# Then generate images via your proxy:
# litellm --config litellm_config.yaml
# curl http://localhost:4000/v1/images/generations \
#   -d '{"model": "creativeai/gpt-image-1", "prompt": "..."}'

Webhook (fire-and-forget)

Submit a video job and receive the result via an HMAC-signed callback.

python
import httpx

resp = httpx.post(
    "https://api.creativeai.run/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "kling-v3",
        "prompt": "Product reveal animation",
        "duration": 5,
        "webhook_url": "https://your-app.com/api/webhook",
        "webhook_secret": "whsec_your_secret",
    },
)
# CreativeAI will POST the result to your webhook_url
# with an X-Webhook-Signature HMAC-SHA256 header
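On the receiving end, verify the signature before trusting the payload. A minimal sketch, assuming the header carries the hex-encoded HMAC-SHA256 of the raw request body keyed by your `webhook_secret` (confirm the exact scheme, including encoding and any timestamp prefix, in the API reference):

```python
import hashlib
import hmac

def verify_signature(raw_body: bytes, signature: str, secret: str) -> bool:
    """Constant-time check of the X-Webhook-Signature header.

    Computes HMAC-SHA256 over the raw (undecoded) request body and
    compares it to the header value with hmac.compare_digest, which
    avoids timing side channels.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

In FastAPI, for example, pass `await request.body()` and `request.headers["X-Webhook-Signature"]`; always verify against the raw bytes, not a re-serialized JSON object.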

Available models

Model          Type    model parameter
GPT Image 1    Image   gpt-image-1
Seedream 3     Image   seedream-3
DALL-E 3       Image   dall-e-3
Kling v3       Video   kling-v3
Seedance 1.5   Video   seedance-1.5
Veo 3.1        Video   veo-3.1

Use model: "auto" for automatic provider selection with content-filter failover.

Ready to build?

Create your API key and start generating in under two minutes.