API Reference

CreativeAI API

Integrate powerful AI generation capabilities into your applications.

OpenAPI Spec (JSON)
Base URL: https://api.creativeai.run
v1
Builder / API Buyer Proof

Fastest public proof for pricing, reliability, and API shape

For buyers evaluating CreativeAI as a production media API, the public story is simple: fixed per-generation pricing, async jobs with a webhook fallback, and bounded retry behavior. Send this page when they ask for code, then pair it with the transparent pricing and reliability details below.

Predictable pricing
Video starts at $0.36–$0.50 and image generation at $0.21–$0.30, depending on credit tier.
/transparent-pricing
Reliability proof
Public overload handling, 429/503 guidance, and standard webhook retries at 0s, 5s, 30s.
/performance#reliability
Shortest API example
Need the quickest buyer-safe request/response proof for async video?
/api-docs#image-to-video-minimal
n8n buyer proof

Minimal n8n image → video workflow buyers can copy today

For automation buyers, the clean story is: n8n sends a single HTTP request, CreativeAI returns a generation ID immediately, and a webhook completes the workflow when the video is ready. No custom render queue, no polling loop required.

Request shape
One HTTP Request node to POST /v1/video/generations with image_url, prompt, and webhook_url.
Async handling
Store generation_id in Airtable, Sheets, Shopify, or your DB, then let the webhook finish the job.
Best proof links
Pair this with pricing, webhooks, and import-ready templates for faster close cycles.
n8n workflow skeleton
1. Webhook / schedule trigger in n8n
   ↓
2. HTTP Request → POST https://api.creativeai.run/v1/video/generations
   {
     "model": "auto",
     "prompt": "Turn this product image into a short vertical promo clip",
     "image_url": "{{$json.product_image_url}}",
     "duration": 5,
     "aspect_ratio": "9:16",
     "webhook_url": "https://your-n8n-host/webhook/creativeai-video"
   }
   ↓
3. Save generation_id + status=processing to Airtable / Sheets / your DB
   ↓
4. Webhook node receives completion → update record + post video URL to Shopify/Slack/CRM

Quick Start

# OpenAI-compatible - drop-in DALL-E 3 replacement
curl -X POST https://api.creativeai.run/v1/images/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "A serene Japanese garden at sunset",
    "size": "1024x1024",
    "n": 1,
    "character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
    "camera_angle": "isometric",
    "style_reference_url": "https://example.com/style.jpg"
  }'

Image Generation Parameters

model - dall-e-3 (recommended), gpt-image-1
prompt - Text description of the image
size - 1024x1024, 1536x1024, 1024x1536
background - auto (default) or transparent for PNG with alpha channel
n - Number of images (1-4)
quality - standard, hd
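The constraints above can be checked client-side before a request spends credits. A minimal sketch, assuming plain dict payloads (the validate_image_request helper is hypothetical, not part of any SDK):

```python
ALLOWED_SIZES = {"1024x1024", "1536x1024", "1024x1536"}
ALLOWED_QUALITY = {"standard", "hd"}

def validate_image_request(params: dict) -> dict:
    """Apply the documented defaults and ranges before sending the request."""
    if not params.get("prompt"):
        raise ValueError("prompt is required")
    size = params.get("size", "1024x1024")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"unsupported size: {size}")
    n = params.get("n", 1)
    if not 1 <= n <= 4:
        raise ValueError("n must be between 1 and 4")
    quality = params.get("quality", "standard")
    if quality not in ALLOWED_QUALITY:
        raise ValueError(f"unsupported quality: {quality}")
    return {"model": params.get("model", "dall-e-3"), "prompt": params["prompt"],
            "size": size, "n": n, "quality": quality,
            "background": params.get("background", "auto")}
```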

Transparent PNG Use Cases

🎮 Game sprites - Characters, items, tiles with clean alpha
🏷️ Stickers & logos - No background removal needed
📱 App icons - Ready for any background color
🛒 Product cutouts - E-commerce ready images
🎨 Design assets - Layer-ready for Figma, Photoshop

Transparent PNG Generation

Generate images with transparent backgrounds - no post-processing needed. Set "background": "transparent" and get a PNG with a proper alpha channel. Perfect for game sprites, stickers, logos, and design assets.

# Generate a transparent PNG - perfect for game sprites, stickers, logos
curl -X POST https://api.creativeai.run/v1/images/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "dall-e-3",
    "prompt": "Pixel art warrior character, 64x64 sprite, clean edges",
    "size": "1024x1024",
    "background": "transparent",
    "n": 1
  }'

💡 How It Works

When background is set to "transparent", the output format is automatically forced to PNG (since JPEG and WebP don't support alpha channels). The resulting image has a proper alpha channel: no white backgrounds, no post-processing, ready to composite directly into your app, game, or design tool.
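Two small client-side checks follow from this rule; a sketch under the stated assumptions (both helpers are hypothetical, and png_has_alpha reads the standard PNG IHDR layout, not anything CreativeAI-specific):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def resolve_output_format(background: str, requested: str = "jpeg") -> str:
    """Transparent backgrounds force PNG, since JPEG and WebP lack alpha."""
    return "png" if background == "transparent" else requested

def png_has_alpha(data: bytes) -> bool:
    """Inspect the IHDR color-type byte of a PNG: 4 (gray+alpha) or 6 (RGBA)."""
    if not data.startswith(PNG_SIGNATURE):
        return False
    # Layout: 8-byte signature, 4-byte length, 4-byte 'IHDR', 13 bytes of IHDR
    # data; the color type is the 10th data byte (offset 25 overall).
    return data[25] in (4, 6)
```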

Advanced Image Editing

Fix hands, swap backgrounds, expand your canvas, or transfer styles - without regenerating from scratch. Uses the OpenAI-compatible /v1/images/edits endpoint with multipart form upload. Now supports inpainting, outpainting, generative fill, and style transfer.

# Advanced Editing: Inpainting, Outpainting, Style Transfer
curl -X POST https://api.creativeai.run/v1/images/edits \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F model="seedream-v4.5-edit" \
  -F image=@product-shot.png \
  -F mask=@hand-mask.png \
  -F prompt="A natural human hand with five fingers holding the product in an expanded cinematic scene" \
  -F outpaint_direction="all" \
  -F style_reference_url="https://example.com/style.jpg" \
  -F size="1024x1024"

Mask Format

Masks should be PNG images matching the dimensions of the input image. Use white (255,255,255) to mark areas to edit and black (0,0,0) to mark areas to preserve. Alternatively, PNG images with an alpha channel are supported: transparent areas will be edited, opaque areas preserved.
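The two mask conventions can be expressed as a per-pixel rule; a minimal sketch (pixel_is_editable is an illustrative helper, not an API call):

```python
def pixel_is_editable(pixel, mode="grayscale"):
    """Apply the documented mask rule to one pixel.
    grayscale mode: white (255,255,255) marks areas to edit, black preserves.
    alpha mode: transparent pixels (alpha == 0) are edited, opaque preserved."""
    if mode == "alpha":
        r, g, b, a = pixel
        return a == 0
    r, g, b = pixel[:3]
    return (r, g, b) == (255, 255, 255)
```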

Utility Endpoints (Popular)

Background removal, upscaling, and image restoration - the utility operations that e-commerce and automation workflows depend on. All available through the same unified API with async polling support.

Background Removal

Remove backgrounds from product photos, portraits, or any image. Returns a transparent PNG ready for compositing.

POST /api/generate/background-removal
⚡ ~3 seconds · 2 credits

Super Resolution

Upscale images up to 4x with AI-enhanced detail recovery. Perfect for print, large displays, or restoring low-res assets.

POST /api/generate/upscale
⚡ ~5 seconds · 3 credits

Image Restoration

Fix damaged, noisy, or low-quality images. Restore old photos, clean up artifacts, and enhance clarity.

POST /api/generate/restore
⚡ ~4 seconds · 3 credits

💡 Why Utility Endpoints Matter

Competitive research shows background removal has 11+ dedicated models on Replicate, and upscaling has 33+ models. These aren't edge cases - they're core workflow operations for e-commerce, marketing, and automation platforms.

E-commerce Catalogs · Product Photography · Marketing Automation · Batch Processing

Video Generation

Generate AI videos from text prompts. Videos use an async job pattern: submit a job, then either poll for status or receive a signed webhook when the render finishes. Supports live Kling and Seedance model aliases via POST /v1/video/generations.
🔄 Migrating from Sora? Just change your base URL and model name - same API pattern.

# Video Generation - async with polling
# Step 1: Submit video generation job
curl -X POST https://api.creativeai.run/v1/video/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "prompt": "A drone shot over a coastal city at golden hour, cinematic",
    "duration": 5,
    "aspect_ratio": "16:9",
    "character_reference_url": "https://example.com/character.png",
    "character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
    "camera_angle": "isometric",
    "style_reference_url": "https://example.com/style.jpg",
    "webhook_url": "https://your-app.com/webhooks/creativeai/video"
  }'

# Response: { "id": "abc123", "status": "processing", ... }

# Step 2: Poll for completion
curl https://api.creativeai.run/v1/video/generations/abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Request Parameters

model - auto (default), kling-o3-pro, kling-v3, seedance-1.5-pro
prompt - Text description of the video
image_url - Optional source image. Switches the job to image-to-video.
duration - Length in seconds (3-15)
aspect_ratio - 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
resolution - 480p, 720p (default), 1080p
failover - Automatic failover to alternate providers on error (default: true)
webhook_url - Optional callback URL for completed/failed jobs

Auto Mode Today

1. model: "auto" is the safest starting point and the default request model.
2. If quality routing is not enabled for your environment, auto resolves to Kling O3 Pro and auto-switches between -t2v and -i2v based on your inputs.
3. When quality routing is enabled, CreativeAI can choose another live provider using quality, reliability, and speed scores.
4. Conservative guardrails keep the legacy default unless the preferred provider clears internal score and margin thresholds.
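The guardrail logic above can be sketched as a small routing function. This is a hypothetical illustration, not the actual routing engine; the provider names and the 10-point threshold are assumptions chosen to match the `routing_*` debug fields shown later in this page.

```python
def route_auto(scores, legacy="kling-o3-pro", preferred=None, margin_threshold=10.0):
    """Guardrailed routing sketch: keep the legacy default unless the
    preferred provider beats it by at least margin_threshold points."""
    if preferred is None or preferred not in scores:
        return {"provider": legacy, "fallback_applied": False, "margin": None}
    margin = scores[preferred] - scores.get(legacy, 0.0)
    if margin >= margin_threshold:
        return {"provider": preferred, "fallback_applied": False, "margin": margin}
    # Insufficient margin: fall back to the conservative legacy default.
    return {"provider": legacy, "fallback_applied": True,
            "fallback_reason": "insufficient_margin_over_legacy", "margin": margin}
```

A 5-point lead (as in the debug example below, routing_confidence_margin: 5.0) would not clear a 10-point threshold, so the job stays on the legacy provider.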

Status Flow

1. POST → 202 Accepted with job ID
2. GET → Poll with job ID every 10s, or wait for signed webhook delivery
3. Status: pending/processing → completed or failed
4. On success: output_url is available via polling and included in the completion payload
⏱ Typical generation: 2-5 minutes

Customer-safe batch workflow guidance

For bulk product-video or catalog runs, keep the public integration story simple: submit independent jobs with bounded concurrency and attach a webhook_url per request.
Persist generation_id per SKU or asset so one slow render never blocks the rest of the queue.
Need a concrete implementation? Shopify product-video webhook playbook ยท webhook delivery guide
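A minimal sketch of that bounded-concurrency pattern, assuming a submit_job callable that wraps your HTTP POST and returns the generation ID (the submit_catalog helper is illustrative, not part of any SDK):

```python
from concurrent.futures import ThreadPoolExecutor

def submit_catalog(items, submit_job, max_concurrency=4):
    """Submit independent jobs with bounded concurrency and record a
    generation_id per SKU, so one slow render never blocks the rest."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:
        futures = {sku: pool.submit(submit_job, sku, payload)
                   for sku, payload in items.items()}
        for sku, future in futures.items():
            results[sku] = {"generation_id": future.result(), "status": "processing"}
    return results
```

Each entry can then be updated independently when its webhook arrives.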

Image-to-Video

Animate any still image into a video. Pass image_url alongside your prompt - the model uses the image as the first frame and generates motion from it.

# Image-to-Video - animate a still image
curl -X POST https://api.creativeai.run/v1/video/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "prompt": "Camera slowly zooms in, the character turns and smiles",
    "image_url": "https://example.com/portrait.png",
    "duration": 5,
    "aspect_ratio": "9:16"
  }'
Minimal Request
// POST /v1/video/generations - minimal image-to-video request
{
  "model": "auto",
  "image_url": "https://example.com/source-image.png",
  "prompt": "Subtle camera push in, natural motion, cinematic lighting",
  "duration": 5,
  "aspect_ratio": "16:9"
}
202 Accepted
// POST /v1/video/generations - 202 Accepted
{
  "id": "vgen_i2v_abc123",
  "object": "video.generation",
  "model": "auto",
  "model_actual": "kling-o3-pro-i2v",
  "status": "processing",
  "prompt": "Subtle camera push in, natural motion, cinematic lighting",
  "image_url": "https://example.com/source-image.png",
  "credits": 30,
  "output_url": null,
  "created_at": "2026-04-10T00:15:00Z"
}
Completed Response
// GET /v1/video/generations/vgen_i2v_abc123 - 200 OK
{
  "id": "vgen_i2v_abc123",
  "object": "video.generation",
  "model": "auto",
  "model_actual": "kling-o3-pro-i2v",
  "status": "completed",
  "prompt": "Subtle camera push in, natural motion, cinematic lighting",
  "image_url": "https://example.com/source-image.png",
  "credits": 30,
  "output_url": "https://cdn.creativeai.run/output/i2v-abc123.mp4",
  "error": null,
  "completed_at": "2026-04-10T00:17:32Z"
}

Image Requirements

image_url - Publicly accessible HTTPS URL to a JPEG or PNG image.
Recommended resolution: at least 720p. The aspect ratio of the input image is used if aspect_ratio is omitted.
Combine with prompt to control the motion direction, camera movement, and scene evolution.
Need the shortest buyer-safe proof link? Use /api-docs#image-to-video-minimal.

Batch Video Generation (New)

Generate up to 20 videos in a single API call. Perfect for A/B testing, product video catalogs, marketing campaign variations, and bulk content production. Each video in the batch runs independently - one slow render never blocks the others.

# Batch Video Generation - generate up to 20 videos in one request
curl -X POST https://api.creativeai.run/v1/video/generations/batch \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompts": [
      {"prompt": "Product showcase video, rotating 360 degrees", "image_url": "https://example.com/product1.png"},
      {"prompt": "Social media clip, dynamic transitions"},
      {"prompt": "Aerial drone shot over mountains at sunset"}
    ],
    "model": "auto",
    "aspect_ratio": "16:9",
    "duration": 5,
    "webhook_url": "https://your-app.com/webhooks/creativeai/batch"
  }'

# Response: { "batch_id": "video_batch_abc123", "total_jobs": 3, ... }

# Check batch status
curl https://api.creativeai.run/v1/video/generations/batch/video_batch_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Request Parameters

prompts - Array of 1-20 prompt configs (required)
prompts[].prompt - Text description for this video
prompts[].image_url - Optional: source image for I2V
model - auto (default), kling-o3-pro, etc.
aspect_ratio - 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
duration - Length in seconds (3-15)
webhook_url - Callback URL for batch completion

Batch Status Tracking

1. POST → 202 Accepted with batch_id
2. GET /batch/{batch_id} → Check progress
3. Track counts: completed, failed, processing, pending
4. Each job returns its own output_url when done
💡 Jobs run in parallel - no blocking
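Tracking those counters client-side is straightforward; a minimal sketch that derives them from the jobs array of a batch status response (the summarize_batch helper is hypothetical):

```python
from collections import Counter

def summarize_batch(jobs):
    """Derive the documented progress counters from a batch's job list."""
    counts = Counter(job["status"] for job in jobs)
    done = counts.get("completed", 0) + counts.get("failed", 0)
    return {
        "total_jobs": len(jobs),
        "completed": counts.get("completed", 0),
        "failed": counts.get("failed", 0),
        "processing": counts.get("processing", 0),
        "pending": counts.get("pending", 0),
        "done": done == len(jobs),  # True once every job is terminal
    }
```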
Batch Submitted (202)
// POST /v1/video/generations/batch - 202 Accepted
{
  "batch_id": "video_batch_abc123",
  "total_jobs": 3,
  "jobs": [
    {"id": "vid_xyz1", "prompt": "Product showcase video...", "status": "pending", "credits": 30},
    {"id": "vid_xyz2", "prompt": "Social media clip...", "status": "pending", "credits": 30},
    {"id": "vid_xyz3", "prompt": "Aerial drone shot...", "status": "pending", "credits": 30}
  ],
  "credits_charged": 90
}
Batch Status (200)
// GET /v1/video/generations/batch/{batch_id} - 200 OK
{
  "batch_id": "video_batch_abc123",
  "total_jobs": 3,
  "completed": 2,
  "failed": 0,
  "pending": 0,
  "processing": 1,
  "jobs": [
    {
      "id": "vid_xyz1",
      "prompt": "Product showcase video...",
      "image_url": "https://example.com/product1.png",
      "status": "completed",
      "output_url": "https://cdn.creativeai.run/output/video-1.mp4",
      "error_message": null,
      "credits": 30
    },
    {
      "id": "vid_xyz2",
      "prompt": "Social media clip...",
      "image_url": null,
      "status": "completed",
      "output_url": "https://cdn.creativeai.run/output/video-2.mp4",
      "error_message": null,
      "credits": 30
    },
    {
      "id": "vid_xyz3",
      "prompt": "Aerial drone shot...",
      "image_url": null,
      "status": "processing",
      "output_url": null,
      "error_message": null,
      "credits": 30
    }
  ]
}

💡 Use Cases

A/B Testing: Generate multiple video variations to test which performs best with your audience.
Product Catalogs: Create 50+ product videos at once for e-commerce stores.
Marketing Campaigns: Scale social content with multiple variations for different platforms.
Agency Workflows: Serve multiple clients efficiently with batch processing.

Video Response Format

Job Submitted (202)
// POST /v1/video/generations - 202 Accepted
{
  "id": "vgen_abc123",
  "object": "video.generation",
  "model": "auto",
  "model_actual": "kling-o3-pro-t2v",
  "status": "processing",
  "prompt": "A drone shot over a coastal city at golden hour, cinematic",
  "credits": 60,
  "failover_used": false,
  "output_url": null,
  "webhook_url": "https://your-app.com/webhooks/creativeai/video",
  "created_at": "2026-03-09T16:10:00Z"
}
Completed (200)
// GET /v1/video/generations/vgen_abc123 - 200 OK (completed)
{
  "id": "vgen_abc123",
  "object": "video.generation",
  "model": "auto",
  "model_actual": "kling-o3-pro-t2v",
  "status": "completed",
  "prompt": "A drone shot over a coastal city at golden hour, cinematic",
  "credits": 60,
  "output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
  "error": null,
  "failover_used": false,
  "debug": {
    "provider_requested": "kling-o3-pro-t2v",
    "provider_actual": "kling-o3-pro-t2v",
    "submission_failover_used": false,
    "submission_failover_reason": null,
    "routing_safe_fallback_applied": true,
    "routing_fallback_reason": "insufficient_margin_over_legacy",
    "routing_preferred_provider": "seedance-t2v",
    "routing_legacy_provider": "kling-o3-pro-t2v",
    "routing_confidence_margin": 5.0,
    "async_capacity_failover_attempted": false,
    "async_capacity_failover_reason": null,
    "prediction_id_original": null,
    "prediction_id_current": "pred_auto_1"
  },
  "created_at": "2026-03-09T16:10:00Z",
  "completed_at": "2026-03-09T16:12:43Z"
}
Failed (200)
// GET /v1/video/generations/vgen_abc123 - 200 OK (failed)
{
  "id": "vgen_abc123",
  "object": "video.generation",
  "model": "auto",
  "model_actual": "seedance-t2v",
  "status": "failed",
  "prompt": "A drone shot over a coastal city at golden hour, cinematic",
  "credits": 30,
  "output_url": null,
  "error": {
    "message": "Video generation timed out after 10 minutes",
    "type": "server_error",
    "code": "generation_failed"
  },
  "failover_used": true,
  "created_at": "2026-03-09T16:10:00Z",
  "completed_at": "2026-03-09T16:20:02Z"
}

Auto Routing, Failover & Debug

Public video responses always tell you what you asked for and what actually ran. When failover is enabled (default), submissions automatically retry on an alternative provider if the first one fails.

model - The model you requested. If you send "auto", the status response keeps model: "auto".
model_actual - The concrete provider key that is currently serving or finished the job.
failover_used - true if submission failover moved the request to another provider.
debug - Optional status-only block. When enabled for your environment, it includes routing and failover metadata such as provider_requested, provider_actual, routing_safe_fallback_applied, routing_fallback_reason, routing_preferred_provider, routing_legacy_provider, and routing_confidence_margin.

Quality-based auto routing is a gated rollout. If it is not enabled for your environment, auto stays on the conservative Kling O3 Pro default path.

Kling V3 Multi-Shot Storyboarding (Early Access)

The upcoming Kling V3 and O3 integrations offer advanced multi-shot storyboarding and cinematic AI director capabilities directly via our unified API. This enables high-volume, multi-scene video generation without jumping between different prompt workflows.

Multi-shot Generation Pipeline

  • Sequential Prompting: Submit an array of prompts to generate connected scenes in a single API call.
  • Persistent Fidelity: Kling V3 maintains character identity and background consistency across multiple generated shots.
  • Automatic Transitions: Choose between hard cuts, crossfades, or continuous camera motion between prompt segments.
  • Reference Anchor (O3 Pro): Provide a static keyframe (image) as an anchor that acts as the starting point for shot 1 and reference for subsequent shots.

AI Director Mode Parameters

  • Camera Control: Explicit controls for pan, tilt, zoom, truck, pedestal, and roll for each shot.
  • Motion Intensity: Variable cfg_scale and motion weighting per shot (e.g., fast action in shot 1, slow motion in shot 2).
  • Audio & Lip-sync: sound: true generates native atmospheric audio that aligns with sequences.
  • Extended Durations: Generate seamless clips up to 10s (V3) and 15s (O3), which can be stitched for long-form ad creatives.

Multi-Shot Example Request (Python)

import requests

response = requests.post(
    "https://api.creativeai.run/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "kling-v3",
        "prompt": "Camera pans right over a dense neon-lit cyberpunk city. Flying cars zip past.",
        "multi_shot_prompts": [
            "Cut to a close-up of a cyborg detective lighting a cigarette.",
            "The camera pulls back quickly to reveal the detective is standing on the edge of a skyscraper."
        ],
        "ai_director_mode": True,
        "aspect_ratio": "16:9",
        "character_reference_url": "https://example.com/character.png",
        "character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
        "camera_angle": "isometric",
        "style_reference_url": "https://example.com/style.jpg",
        "duration": 10,
        "webhook_url": "https://your-server.com/webhooks/video"
    }
)

print(response.json())

One API for Image + Video + Voice

Unlike Replicate or fal.ai where you need separate integrations for each model type, CreativeAI gives you images, videos, and voice narration through a single unified API. No more stitching together multiple vendors.

Predictable Pricing
Fixed credits per generation - no per-second model costs or surprise bills
Simpler Integration
One SDK, one auth, one webhook pattern for all asset types
Multimodal Workflows
Generate image → animate → narrate → compose in one pipeline

Voice Generation

Generate natural speech from text in 10 languages with 6 voice presets. Uses the same async job pattern as video: submit via POST /api/generate/voice, then poll or receive a webhook.
🎧 8 credits per generation. Pair with Compose to add narration to any video.

# Voice Generation (Text-to-Speech) - async with polling
# Step 1: Submit voice generation job (8 credits)
curl -X POST https://api.creativeai.run/api/generate/voice \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Welcome to CreativeAI. Your product video is ready.",
    "voice": "alloy",
    "language": "en",
    "speed": 1.0,
    "webhook_url": "https://your-app.com/webhooks/creativeai/voice"
  }'

# Response: { "id": "voice_abc123", "status": "PENDING", ... }

# Step 2: Poll for completion
curl https://api.creativeai.run/api/generate/status/voice_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Request Parameters

text - Text to synthesize (1–5,000 characters)
voice - alloy (default), echo, fable, onyx, nova, shimmer
language - en (default), zh, ja, ko, es, fr, de, pt, ru, ar
speed - Playback speed 0.5–2.0 (default 1.0)
webhook_url - Optional callback URL for completed/failed jobs
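These ranges can be enforced client-side before a request spends credits; a minimal sketch (validate_voice_request is a hypothetical helper, not part of any SDK):

```python
VOICES = {"alloy", "echo", "fable", "onyx", "nova", "shimmer"}
LANGUAGES = {"en", "zh", "ja", "ko", "es", "fr", "de", "pt", "ru", "ar"}

def validate_voice_request(text, voice="alloy", language="en", speed=1.0):
    """Check the documented voice parameter ranges before submitting."""
    if not 1 <= len(text) <= 5000:
        raise ValueError("text must be 1-5,000 characters")
    if voice not in VOICES:
        raise ValueError(f"unknown voice: {voice}")
    if language not in LANGUAGES:
        raise ValueError(f"unsupported language: {language}")
    if not 0.5 <= speed <= 2.0:
        raise ValueError("speed must be between 0.5 and 2.0")
    return {"text": text, "voice": voice, "language": language, "speed": speed}
```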
Status Response
// GET /api/generate/status/{id} - 200 OK
{
  "id": "voice_abc123",
  "type": "VOICE",
  "model": "creative-voice",
  "status": "COMPLETED",
  "output_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
  "credits": 8,
  "parameters": {
    "voice": "alloy",
    "language": "en",
    "speed": 1.0
  },
  "created_at": "2026-03-27T18:30:00Z",
  "completed_at": "2026-03-27T18:30:12Z"
}

Status Flow

1. POST → 202 Accepted with job ID
2. GET → Poll /api/generate/status/{id} every 3s, or wait for signed webhook
3. Status: PENDING/PROCESSING → COMPLETED or FAILED
4. On failure, credits are refunded automatically
⏱ Typical generation: under 30 seconds

Video Webhooks

For production workloads, pass webhook_url in your create request and process callback events instead of running heavy polling loops. Webhooks are retried on a 0s/5s/30s backoff schedule and deduplicated per generation ID + terminal status.

Delivery Payload
# Outbound webhook from CreativeAI to your webhook_url
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.completed
X-CreativeAI-Delivery-Id: vgen_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>   # present when signed

{
  "id": "vgen_abc123",
  "object": "video.generation",
  "status": "completed",
  "model": "auto",
  "model_actual": "seedance-t2v",
  "prompt": "A drone shot over a coastal city at golden hour, cinematic",
  "output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
  "error_message": null,
  "credits": 30,
  "failover_used": true,
  "completed_at": "2026-03-09T16:12:43Z"
}
Node Signature Verification
// Node.js signature verification
import crypto from 'crypto';

function verifyCreativeAiWebhook(rawBody, signatureHeader, signingSecret) {
  if (!signatureHeader?.startsWith('sha256=')) return false;
  const expected = crypto
    .createHmac('sha256', signingSecret)
    .update(rawBody)
    .digest('hex');
  const provided = signatureHeader.slice('sha256='.length);
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(provided));
}
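For webhook consumers not running Node, the same check in Python, assuming the sha256=<hmac-hex> header format shown above:

```python
import hashlib
import hmac

def verify_creativeai_webhook(raw_body: bytes, signature_header, signing_secret: str) -> bool:
    """Constant-time HMAC-SHA256 check, mirroring the Node snippet above."""
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(signing_secret.encode(), raw_body, hashlib.sha256).hexdigest()
    provided = signature_header[len("sha256="):]
    # hmac.compare_digest avoids timing side channels, like timingSafeEqual in Node.
    return hmac.compare_digest(expected, provided)
```

Always verify against the raw request body bytes, before any JSON parsing or re-serialization.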

Make.com Webhook Integration

To eliminate polling in Make.com, use a Custom Webhook module and map its URL to the webhook_url parameter. Combine this with webhook_auto_retry to natively handle transient model failures.

1. Set Up the Trigger
  1. Add a Webhooks → Custom Webhook module
  2. Create a webhook and copy the unique URL
  3. Wait for the module to start listening for data
2. Call the API
  1. Add an HTTP → Make an API Key Auth request module
  2. Pass the webhook URL into "webhook_url"
  3. Pass "webhook_auto_retry": true to enable automatic retry if the generation job fails

Webhook Delivery Behavior

1. Terminal events are delivered on completed and failed states.
2. Retries use delays of 0s, 5s, and 30s.
3. Duplicate sends are skipped per generation_id + status.
4. X-CreativeAI-Signature is included when a signing secret is configured.
5. If delivery still fails after all attempts, the final result remains available via the status API.
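The delivery schedule above can be sketched as a sender loop. This is an illustration of the documented behavior, not CreativeAI's actual dispatcher; send() stands in for an HTTP POST that returns True on a 2xx response.

```python
RETRY_DELAYS = (0, 5, 30)  # seconds, per the documented schedule

def deliver_with_retries(send, delays=RETRY_DELAYS, sleep=lambda s: None):
    """Attempt delivery once per delay; stop on the first success."""
    for attempt, delay in enumerate(delays, start=1):
        sleep(delay)
        if send():
            return {"delivered": True, "attempts": attempt}
    # After all attempts fail, the final result is still available via the status API.
    return {"delivered": False, "attempts": len(delays)}
```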

Webhook Payload Examples (Integration Planning)

Complete webhook payload examples for all event types. Use these to plan your integration, build handlers, and test your webhook endpoints.

video.generation.completed - Video completed successfully
# Outbound webhook from CreativeAI to your webhook_url
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.completed
X-CreativeAI-Delivery-Id: vgen_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>   # present when signed

{
  "id": "vgen_abc123",
  "object": "video.generation",
  "status": "completed",
  "model": "auto",
  "model_actual": "seedance-t2v",
  "prompt": "A drone shot over a coastal city at golden hour, cinematic",
  "output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
  "error_message": null,
  "credits": 30,
  "failover_used": true,
  "completed_at": "2026-03-09T16:12:43Z"
}
video.generation.failed - Video generation failed with error
# Webhook delivery for failed video generation
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.failed
X-CreativeAI-Delivery-Id: vgen_xyz789
X-CreativeAI-Signature: sha256=<hmac-hex>

{
  "id": "vgen_xyz789",
  "object": "video.generation",
  "status": "failed",
  "model": "auto",
  "model_actual": "kling-o3-pro-t2v",
  "prompt": "A drone shot over mountains at sunset",
  "output_url": null,
  "error": {
    "message": "Video generation timed out after 10 minutes",
    "type": "server_error",
    "code": "generation_timeout"
  },
  "error_message": "Video generation timed out after 10 minutes",
  "credits": 0,
  "credits_refunded": 60,
  "failover_used": true,
  "completed_at": "2026-03-09T16:20:02Z"
}
video.batch.completed - Batch video generation completed
# Webhook delivery for batch video completion
POST /webhooks/creativeai/batch
Content-Type: application/json
X-CreativeAI-Event: video.batch.completed
X-CreativeAI-Delivery-Id: video_batch_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>

{
  "batch_id": "video_batch_abc123",
  "object": "video.batch",
  "status": "completed",
  "total_jobs": 3,
  "completed": 3,
  "failed": 0,
  "jobs": [
    {
      "id": "vid_xyz1",
      "prompt": "Product showcase video, rotating 360 degrees",
      "status": "completed",
      "output_url": "https://cdn.creativeai.run/output/video-1.mp4",
      "credits": 30
    },
    {
      "id": "vid_xyz2",
      "prompt": "Social media clip, dynamic transitions",
      "status": "completed",
      "output_url": "https://cdn.creativeai.run/output/video-2.mp4",
      "credits": 30
    },
    {
      "id": "vid_xyz3",
      "prompt": "Aerial drone shot over mountains at sunset",
      "status": "completed",
      "output_url": "https://cdn.creativeai.run/output/video-3.mp4",
      "credits": 30
    }
  ],
  "total_credits": 90,
  "completed_at": "2026-03-09T16:15:00Z"
}
image.generation.completed - Async image generation completed
# Webhook delivery for async image generation (native API)
POST /webhooks/creativeai/image
Content-Type: application/json
X-CreativeAI-Event: image.generation.completed
X-CreativeAI-Delivery-Id: img_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>

{
  "id": "img_abc123",
  "object": "image.generation",
  "status": "completed",
  "model": "seedream-3.0",
  "prompt": "A minimalist product photo on white background",
  "output_url": "https://cdn.creativeai.run/output/image-abc123.png",
  "revised_prompt": "A minimalist product photo on white background, professional lighting",
  "width": 1024,
  "height": 1024,
  "credits": 2,
  "completed_at": "2026-03-09T16:05:12Z"
}
voice.generation.completed - Voice/narration generation completed
# Webhook delivery for voice generation
POST /webhooks/creativeai/voice
Content-Type: application/json
X-CreativeAI-Event: voice.generation.completed
X-CreativeAI-Delivery-Id: voice_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>

{
  "id": "voice_abc123",
  "object": "voice.generation",
  "status": "completed",
  "text": "Welcome to CreativeAI. Your launch assets are rendering now.",
  "voice": "alloy",
  "language": "en",
  "output_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
  "duration_seconds": 4.2,
  "credits": 1,
  "completed_at": "2026-03-09T16:03:45Z"
}

💡 Integration Planning Tips

Event Routing: Use X-CreativeAI-Event header to route different event types to different handlers.
Deduplication: Use X-CreativeAI-Delivery-Id + status as your dedup key.
Verification: Always verify X-CreativeAI-Signature before processing. Use your API key as the HMAC secret.
Retry Safety: Implement idempotent handlers. Same delivery may be received multiple times during retries.
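The routing and dedup tips above can be combined into one handler shape. A minimal sketch (make_webhook_handler is hypothetical; a production version would persist seen keys in a database rather than memory):

```python
def make_webhook_handler(handlers):
    """Route by the X-CreativeAI-Event header and dedupe on
    (delivery id, status), as the tips above suggest."""
    seen = set()
    def handle(headers, payload):
        key = (headers.get("X-CreativeAI-Delivery-Id"), payload.get("status"))
        if key in seen:
            return "duplicate"  # safe to ack without reprocessing
        seen.add(key)
        event = headers.get("X-CreativeAI-Event", "")
        handler = handlers.get(event)
        return handler(payload) if handler else "ignored"
    return handle
```

Verifying the signature (see the verification snippets earlier on this page) should happen before this handler runs.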

Getting Started

1

Create an account

Sign up at creativeai.run - free tier includes starter credits.

2

Get your API key

Go to Dashboard → API Keys → Create New Key. Copy and store it securely.

3

Make your first request

Use any OpenAI SDK or plain HTTP. Point base_url to api.creativeai.run and start generating.

Authentication

All API requests require a Bearer token in the Authorization header.

Authorization: Bearer YOUR_API_KEY

Create API keys in your Dashboard → API Keys. Keys are scoped to your account and inherit your credit balance.

Commerce enhancement → motion → voice workflow

This is the smallest standard-endpoint proof for a commerce buyer who wants one product image turned into a cleaned-up, background-ready, narrated short-form asset without stitching together multiple vendors. Start with /v1/images/edits, then chain /v1/video/generations, /api/generate/voice, and /api/generate/compose.

Step 1 - Clean up the still

• Use /v1/images/edits to relight, sharpen, or place the item onto a clean studio or lifestyle background.
• Keep the subject fixed while changing the background with prompt text instead of manual retouch tooling.
• Best fit for buyer questions around enhancement, cleanup, and background replacement.

Step 2 - Generate motion + narration

• Submit the improved still to /v1/video/generations for a 5–10 second motion clip.
• Generate the voice track separately with /api/generate/voice.
• Both jobs can run in parallel and follow the same async pattern used elsewhere in the API.

Step 3 - Deliver the final asset

• Use /api/generate/compose to merge the hosted MP3 onto the hosted MP4.
• Final output is returned as a direct asset URL that your app can download, relay, or store in its own bucket.
• Webhook delivery uses 3 attempts with backoff (0s, 5s, 30s). If delivery still fails, the final result remains available via the status API.
# Commerce workflow: cleanup -> motion -> voice -> final MP4
# 1) Enhance the source product image for marketplace-safe lighting/background
curl -X POST https://api.creativeai.run/v1/images/edits \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F model="seedream-v4.5-edit" \
  -F image=@product.jpg \
  -F prompt="Clean up this product photo, sharpen details, improve lighting, and place the item on a clean white studio background with a soft natural shadow" \
  -F size="1024x1024"

# 2) Turn the cleaned-up still into a short motion clip
curl -X POST https://api.creativeai.run/v1/video/generations \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "image_url": "https://cdn.creativeai.run/output/clean-product-shot.png",
    "prompt": "Slow premium turntable shot of this product, subtle camera drift, soft studio reflections, ecommerce ad style",
    "duration": 5,
    "webhook_url": "https://your-app.com/webhooks/creativeai/video"
  }'

# 3) Generate the narration track
curl -X POST https://api.creativeai.run/api/generate/voice \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Meet the travel mug built for commutes, workouts, and long desk days. Leak-resistant, lightweight, and ready to go.",
    "voice": "alloy",
    "language": "en",
    "speed": 1.0,
    "webhook_url": "https://your-app.com/webhooks/creativeai/voice"
  }'

# 4) Merge the hosted voice track onto the hosted MP4
curl -X POST https://api.creativeai.run/api/generate/compose \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "video_url": "https://cdn.creativeai.run/output/product-motion.mp4",
    "audio_url": "https://cdn.creativeai.run/output/product-voice.mp3",
    "replace_audio": true,
    "webhook_url": "https://your-app.com/webhooks/creativeai/compose"
  }'
Final compose status example
// GET /api/generate/status/{id} — 200 OK
{
  "id": "comp_commerce123",
  "status": "COMPLETED",
  "output_url": "https://cdn.creativeai.run/output/travel-mug-story.mp4",
  "credits": 5,
  "created_at": "2026-03-27T21:40:00Z",
  "completed_at": "2026-03-27T21:40:22Z"
}

Buyer-safe implementation notes

• The examples intentionally use public standard endpoints, so Sales can share them without implying custom or fragile internal tooling.
• Image cleanup can cover enhancement, relighting, and background-ready product presentation in one step.
• Motion, voice, and compose stay modular, so teams can reuse only the parts they need inside an existing storefront, CMS, or media pipeline.

Voice + Video Composition

Use POST /api/generate/compose to merge a hosted audio file onto a hosted MP4. It is the simplest public workflow proof for demo, documentation, and API-platform buyers who want one vendor for visuals, narration, and final delivery.

Required Inputs

video_url — Hosted MP4 to update
audio_url — Hosted narration or soundtrack
replace_audio — true to swap the soundtrack, false to mix the audio over the original track
webhook_url — Optional callback when composition finishes

Typical Workflow

1. Generate narration with POST /api/generate/voice
2. Compose the returned audio URL onto your MP4
3. Poll /api/generate/status/{id} or wait for webhook delivery
4. Deliver the final composed video URL to your user or downstream automation

Strong Buyer Fit

• Product demos with narrated walkthroughs
• Documentation clips with multilingual voiceover
• Commerce explainers with generated stills, motion, and audio
• API platforms that need a clean overflow or backend partner path
# Merge generated narration onto an existing video
# Step 1: submit a composition job
curl -X POST https://api.creativeai.run/api/generate/compose \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "video_url": "https://cdn.creativeai.run/output/product-demo.mp4",
    "audio_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
    "replace_audio": true,
    "webhook_url": "https://your-app.com/webhooks/creativeai/compose"
  }'

# Step 2: poll the shared async status endpoint if you are not using webhooks
curl https://api.creativeai.run/api/generate/status/comp_xyz789 \
  -H "Authorization: Bearer YOUR_API_KEY"
Status Response
// GET /api/generate/status/{id} — 200 OK
{
  "id": "comp_xyz789",
  "status": "COMPLETED",
  "output_url": "https://cdn.creativeai.run/output/product-demo-with-voice.mp4",
  "credits": 5,
  "created_at": "2026-03-27T18:30:00Z",
  "completed_at": "2026-03-27T18:30:18Z"
}

Customer-safe implementation notes

• Composition follows the same async model as video: submit once, then poll or receive a webhook.
• The final asset is returned as a direct CDN URL, so teams can download or relay it without standing up a separate render worker.
• Webhook delivery uses the standard 3 attempts with staged backoff (0s, 5s, 30s). If delivery still fails, the final result remains available via the status API.
• This pairs cleanly with generated voice clips and existing still/video pipelines for one-vendor proof.

Chart-to-Video API

Use POST /api/generate/chart-to-video to turn static charts into animated videos with Ken Burns-style effects, transitions, and zooms. Perfect for financial research agents, investor presentations, and automated report generation.

Input Options

chart_image_url — Existing chart image to animate
chart_data — Data-driven config (line/bar/pie/area)
animation_style — ken_burns, build_up, reveal, morph
title — Overlay title for the video

Animation Styles

• Ken Burns — Smooth pan and zoom across the chart
• Build-up — Data points animate in sequence
• Reveal — Chart emerges from a mask
• Morph — Transitions between data states

Strong Buyer Fit

• Financial research automation (dexter-style agents)
• Automated investor presentations
• Quarterly report video generation
• Chart-heavy documentation workflows
# Chart-to-Video: Turn financial charts into animated investor videos
# Step 1: Submit chart-to-video job with image or data
curl -X POST https://api.creativeai.run/api/generate/chart-to-video \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chart_image_url": "https://example.com/charts/q4-revenue.png",
    "animation_style": "ken_burns",
    "title": "Q4 Revenue Growth",
    "duration": 10,
    "webhook_url": "https://your-app.com/webhooks/creativeai/chart-video"
  }'

# Alternative: Data-driven chart generation
curl -X POST https://api.creativeai.run/api/generate/chart-to-video \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "chart_data": {
      "type": "line",
      "labels": ["Q1", "Q2", "Q3", "Q4"],
      "datasets": [{"label": "Revenue", "data": [1.2, 1.8, 2.4, 3.1]}]
    },
    "animation_style": "build_up",
    "title": "Revenue Trajectory",
    "duration": 8
  }'

# Step 2: Poll the shared async status endpoint
curl https://api.creativeai.run/api/generate/status/chart_xyz123 \
  -H "Authorization: Bearer YOUR_API_KEY"
Status Response
// GET /api/generate/status/{id} — 200 OK
{
  "id": "chart_xyz123",
  "status": "COMPLETED",
  "output_url": "https://cdn.creativeai.run/output/animated-chart-q4-revenue.mp4",
  "credits": 15,
  "created_at": "2026-03-28T20:30:00Z",
  "completed_at": "2026-03-28T20:30:45Z"
}

Customer-safe implementation notes

• Chart-to-video follows the same async model as video generation: submit once, then poll or receive a webhook.
• Pricing: 15 credits per video (premium feature for financial research use cases).
• Webhook delivery uses the standard 3 attempts with staged backoff (0s, 5s, 30s). If delivery still fails, the final result remains available via the status API.
• LangChain integration examples are available for research agents that need automated chart visualization.

Available Models

Access 15+ AI models through a single API. Use GET /v1/models for the full live list.

Image Generation

dall-e-3 (OpenAI-compatible) — Drop-in DALL-E 3 replacement via /v1/images/generations
seedream-3.0 (ByteDance) — High-quality image generation with excellent prompt adherence
flux-pro (Black Forest Labs) — Fast, high-fidelity image synthesis
stable-diffusion-3 (Stability AI) — Industry-standard open model

Image Editing (Inpainting, Outpainting, Style Transfer)

seedream-v4.5-edit (ByteDance) — High-quality inpainting and selective image editing via /v1/images/edits
wan-2.6-image-edit (Alibaba) — Multi-reference image editing with mask support

Video Generation

auto (CreativeAI routing) — Default video alias. Routes to a live provider and keeps requested vs actual model visible in responses.
kling-o3-pro (Kuaishou) — Highest-quality explicit video model alias for text-to-video and image-to-video via /v1/video/generations
kling-v3 (Kuaishou) — Kling v3 Pro shorthand alias for OpenAI-compatible video generation
seedance-1.5-pro (ByteDance) — Seedance 1.5 alias for explicit text-to-video and image-to-video generation

Audio / Text-to-Speech

alloy (CreativeAI) — Balanced, versatile voice for general use
echo (CreativeAI) — Warm, natural-sounding male voice
fable (CreativeAI) — Expressive, storytelling-style voice
onyx (CreativeAI) — Deep, authoritative male voice
nova (CreativeAI) — Friendly, conversational female voice
shimmer (CreativeAI) — Clear, upbeat female voice

LLM Chat

claude-* (Anthropic) — Claude family models via OpenAI-compatible endpoint
gemini-* (Google) — Gemini family models
gpt-* (OpenAI) — GPT family models

Response Format

POST /v1/images/generations — 200 OK
{
  "created": 1709550000,
  "data": [
    {
      "url": "https://cdn.creativeai.run/output/abc123.png",
      "revised_prompt": "A serene Japanese garden at sunset..."
    }
  ]
}

Image Edit Response

POST /v1/images/edits — 200 OK
{
  "created": 1709550000,
  "data": [
    {
      "url": "https://cdn.creativeai.run/output/edited-abc123.png",
      "revised_prompt": "A natural human hand with five fingers..."
    }
  ]
}

Endpoints

POST /v1/images/generations

Generate images — OpenAI DALL-E 3 compatible (drop-in replacement)

POST /v1/images/edits

Edit images with advanced editing (inpainting, outpainting, style transfer) — OpenAI compatible

POST /api/v1/model/generateImage

Generate images — native API with async polling support

POST /v1/video/generations

Generate videos — async, returns 202 with job ID for polling

GET /v1/video/generations/{generation_id}

Poll video status — includes requested vs actual model plus optional debug metadata

POST /v1/video/generations/batch

Generate 1-20 videos in a single request — ideal for A/B testing, product catalogs, marketing campaigns

GET /v1/video/generations/batch/{batch_id}

Check batch video status — returns counts and individual job results

POST /api/generate/voice

Generate speech from text — supports 6 voices, 10 languages, adjustable speed

POST /api/generate/compose

Merge hosted audio onto a hosted video — ideal for narrated demos, explainers, and product clips

GET /api/generate/status/{generation_id}

Poll shared async status for voice, compose, background-removal, and other generation jobs

POST /api/v1/model/generateVideo

Generate videos — native API (legacy; use /v1/video/generations instead)

GET /api/v1/model/prediction/:id

Poll task status for async image/video generation (native API)

GET /v1/models

List available models and capabilities

POST /v1/chat/completions

LLM chat completions — OpenAI compatible

POST /v1/completions

Text completions — OpenAI compatible (legacy)

POST /api/v1/model/calculate

Calculate pricing for a generation request before running it

Video Polling Best Practices

Video generation is async — here's how to poll efficiently and avoid common pitfalls.

⏱ Recommended Polling Strategy

1. Wait 10 seconds before the first poll (jobs need setup time)
2. Poll every 10 seconds for the first 2 minutes
3. Back off to 30 seconds between polls after 2 minutes
4. Set a maximum timeout of 10 minutes — if not done, treat as failed

⚡ Most videos complete in 2–5 minutes. 5-second videos are typically faster.

🚫 Common Mistakes

❌ Polling every 1 second — wastes requests, no benefit
❌ No timeout — jobs can get stuck; always set a max wait
❌ Retrying on failed status — failed is terminal, submit a new job instead
❌ Not checking failover_used — you may be charged less if failover occurred

💡 Use webhook_url to receive completion/failure callbacks and reduce polling traffic.
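The schedule above is easy to get wrong in ad-hoc loops, so here is one way to encode it directly. This is a sketch: `fetch_status` stands in for your own GET call to the status endpoint, and the uppercase terminal statuses follow the response examples in this document.

```python
# Encode the recommended schedule: 10 s warm-up, 10 s polls for the first two
# minutes, 30 s polls afterwards, hard stop at 10 minutes.
import time

def poll_schedule(max_wait: int = 600):
    """Yield sleep intervals matching the strategy above."""
    elapsed = 10
    yield 10  # initial wait before the first poll
    while elapsed < max_wait:
        interval = 10 if elapsed < 120 else 30
        yield interval
        elapsed += interval

def wait_for_job(fetch_status, max_wait: int = 600, sleep=time.sleep):
    """fetch_status() -> dict with a 'status' key; returns the terminal result."""
    for interval in poll_schedule(max_wait):
        sleep(interval)
        result = fetch_status()
        if result["status"] in ("COMPLETED", "FAILED"):  # both are terminal
            return result
    raise TimeoutError("job did not finish within the maximum wait")
```

Injecting `sleep` keeps the loop testable; in production the default `time.sleep` applies.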

💡 Video Output Hosting

Completed videos are hosted on our CDN — you get a direct URL in data[0].url. No need to provision storage or pay for egress. URLs are valid for 7 days after generation. Download and store on your end if you need permanent access.

Error Handling

All errors follow a consistent JSON format. Content moderation errors include actionable detail.

Error Response Format
{
  "error": {
    "code": "content_policy_violation",
    "message": "Your request was rejected because it may generate content that violates our usage policy.",
    "type": "invalid_request_error",
    "param": "prompt"
  }
}
400
invalid_request_error

Missing or invalid parameters. Check param field for which one.

400
content_policy_violation

Prompt or image was flagged by content moderation. Rephrase and retry. Not billed.

401
authentication_error

Invalid or missing API key. Check the Authorization header.

402
insufficient_credits

Not enough credits. Top up at Dashboard → Billing. Request not billed.

429
rate_limit_exceeded

Too many requests. Back off and retry. Check Retry-After header for wait time in seconds.

503
provider_unavailable

Upstream provider is down. With failover: true (default), we auto-retry on another provider — you only see this error if ALL providers are down.

💡 Content Moderation Policy

Content moderation runs before generation — you are never billed for blocked requests. If your prompt is blocked, you'll get a 400 with content_policy_violation and a human-readable message explaining why. Rephrase your prompt and retry. For SaaS builders: handle this error gracefully in your UI so your end-users get clear feedback.
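The retry rules above (honor Retry-After on 429, back off briefly on 503, never retry 4xx client errors such as moderation blocks) can be condensed into one decision helper. This is a sketch; the 5-second 503 pause is an arbitrary choice for illustration, not a documented recommendation.

```python
# Map an error status to a retry decision per the error table above.
def retry_decision(status, retry_after=None):
    """Return (should_retry, wait_seconds) for a failed response."""
    if status == 429:
        return True, int(retry_after or 1)  # honor the Retry-After header when sent
    if status == 503:
        return True, 5                      # all providers down; 5 s pause is arbitrary
    return False, 0                         # 400/401/402: fix the request, don't retry
```

For example, `retry_decision(429, "30")` yields `(True, 30)`, while a content_policy_violation 400 yields `(False, 0)` so your UI can surface the moderation message instead of retrying.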

Gemini Migration Guide

Experiencing Gemini 503 errors, quota issues, or alias changes? CreativeAI's multi-model failover means your app keeps working even when Google doesn't.

โŒ The Gemini Problem

โ€ข 503 "No capacity available" on paid Vertex tier
โ€ข Failed requests counted toward daily quota
โ€ข Model alias changes breaking production code
โ€ข Status page out of date during outages
โ€ข Hours-long outages with no ETA

โœ… How CreativeAI Handles It

โ€ข Auto-failover: Gemini 503 โ†’ instant switch to Claude or GPT
โ€ข No quota burn: failed requests are never billed
โ€ข Transparency: model_actual tells you exactly what served your request
โ€ข Cost protection: if failover model costs less, adjusted_credits reflects the lower price
โ€ข Zero code changes: just set base_url and go
Migration: 2 lines of code
# Before (direct Gemini — breaks on 503)
client = OpenAI(api_key="GEMINI_KEY", base_url="https://generativelanguage.googleapis.com/v1beta/openai/")

# After (CreativeAI — automatic failover)
client = OpenAI(api_key="CREATIVEAI_KEY", base_url="https://api.creativeai.run/v1")

# Same code, same models — now with automatic failover
response = client.chat.completions.create(
    model="gemini-2.0-flash",  # works as-is
    messages=[{"role": "user", "content": "Hello!"}]
)

# Check if failover was used:
# response headers include x-model-actual, x-failover-used
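A small helper for reading those headers from any client that exposes response headers as a mapping (for example the OpenAI Python SDK's `with_raw_response` wrapper, or `response.headers` from requests/httpx). The header names come from the comment above; the `"true"` string encoding of the boolean is an assumption worth verifying against a live response.

```python
# Extract failover metadata from response headers, case-insensitively.
def failover_info(headers) -> dict:
    lower = {k.lower(): v for k, v in dict(headers).items()}
    return {
        "model_actual": lower.get("x-model-actual"),            # model that actually served
        "failover_used": lower.get("x-failover-used") == "true",  # assumed "true"/"false"
    }
```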

Use promo code GEMINI2026 for 500 free credits to test the migration.

Rate Limits

Transparent rate limits — no surprises, no hidden caps. All limits are per API key.

Free Tier — 10 requests per minute

Pro Tier — 100 requests per minute

Enterprise — 1,000+ requests per minute

Rate limit headers are included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset. When exceeded, you'll get a 429 with a Retry-After header. Need higher limits? Contact us.
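One way to consume those headers is a small pacing helper. This is a sketch: it assumes X-RateLimit-Reset is a Unix timestamp of the window reset, which should be verified against a live response before relying on it.

```python
# Decide how long to pause before the next request, given rate-limit headers.
def seconds_to_wait(headers, now: int) -> int:
    """0 if requests remain in the window; otherwise seconds until it resets."""
    lower = {k.lower(): v for k, v in dict(headers).items()}
    if int(lower.get("x-ratelimit-remaining", 1)) > 0:
        return 0
    # Assumption: X-RateLimit-Reset is a Unix timestamp of the window reset.
    return max(0, int(lower["x-ratelimit-reset"]) - now)
```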

Video-Specific Limits

Creation (POST): 5 requests/minute — video jobs are GPU-intensive, so creation is rate-limited separately from polling.
Polling (GET): 120 requests/minute — a generous limit so you can poll frequently without being throttled.
Concurrent jobs: Max 3 active video jobs per user at a time. Submit more after a current job completes or fails.
Daily cap: 100 video generations per rolling 24-hour window. Prevents runaway automation or scripts from exhausting shared GPU capacity.

Enterprise API keys can have custom rate limits and higher concurrency caps. Contact us to upgrade.

Technical Integration FAQ

Answers to common integration questions for our latest API capabilities.

How do I use the API as a Proxy Layer for n8n/Blotato?

To serve automation builders, we natively support /v1/video/generate as an exact alias of /v1/video/generations. You can plug it directly into your n8n or Blotato workflows as a lower-cost fallback video-generation endpoint.

Our unified API acts as a built-in proxy layer — automatically routing requests to the best available provider (Kling, Seedance, etc.) without requiring you to rewrite any code to manage costs or handle rate limits.

How do I use Kling V3 Early Access features?

Pass multi_shot_prompts (array of strings) and ai_director_mode: true in your /v1/video/generations request body.

How does Multi-Character Consistency work?

You can pass an array of up to 3 image URLs in the character_references field. Set camera_angle: "isometric" (or other angles) for consistent game asset generation. Available on both Image and Video generation endpoints.

Use character_references when you need multiple distinct characters to maintain visual consistency across generations. Each URL should point to a clear, single-character reference image. You can also combine it with a single character_reference_url for a primary character.

POST /v1/images/generations โ€” Multi-Character Example
{
  "model": "dall-e-3",
  "prompt": "Three fantasy heroes standing on a cliff at sunrise, epic wide shot",
  "size": "1792x1024",
  "n": 1,
  "character_references": [
    "https://your-cdn.com/assets/hero_warrior.png",
    "https://your-cdn.com/assets/hero_mage.png",
    "https://your-cdn.com/assets/hero_rogue.png"
  ],
  "camera_angle": "low-angle",
  "style_reference_url": "https://your-cdn.com/assets/fantasy_style.jpg"
}

Supported fields: character_references (array of up to 3 URLs) · character_reference_url (single URL for a primary character) · camera_angle (e.g. "isometric", "low-angle", "bird-eye") · style_reference_url (single URL for style transfer). Works on both /v1/images/generations and /v1/video/generations.

How to redeem the DALLE1000 Promo?

New API keys can submit the promo code DALLE1000 in the billing dashboard to receive $10 in free migration credits. Seamlessly switch from OpenAI by changing the base_url.

How does DALL-E 3 Fallback Insurance work?

When you use our unified API with model: "dall-e-3", our backend automatically provides fallback insurance. If the upstream provider experiences an outage, your requests are seamlessly routed to Stable Diffusion XL or DALL-E 2 with sub-second failover.

Zero code changes required: Simply replace your existing base_url with ours and keep your current OpenAI SDK implementation. Built-in per-key analytics provide complete transparency on model failover events and generation spend.