CreativeAI API
Integrate powerful AI generation capabilities into your applications.
OpenAPI Spec (JSON): https://api.creativeai.run
Fastest public proof for pricing, reliability, and API shape.
For buyers evaluating CreativeAI as a production media API: the public story is simple – fixed per-generation pricing, async jobs with webhook fallback, and bounded retry behavior. Send this page when they want code, then pair it with transparent pricing and reliability details below.
Minimal n8n image → video workflow buyers can copy today
For automation buyers, the clean story is: n8n sends a single HTTP request, CreativeAI returns a generation ID immediately, and a webhook completes the workflow when the video is ready. No custom render queue, no polling loop required.
Send POST /v1/video/generations with image_url, prompt, and webhook_url. Store the returned generation_id in Airtable, Sheets, Shopify, or your DB, then let the webhook finish the job.

1. Webhook / schedule trigger in n8n
↓
2. HTTP Request → POST https://api.creativeai.run/v1/video/generations
{
"model": "auto",
"prompt": "Turn this product image into a short vertical promo clip",
"image_url": "{{$json.product_image_url}}",
"duration": 5,
"aspect_ratio": "9:16",
"webhook_url": "https://your-n8n-host/webhook/creativeai-video"
}
↓
3. Save generation_id + status=processing to Airtable / Sheets / your DB
↓
4. Webhook node receives completion → update record + post video URL to Shopify/Slack/CRM

Quick Start
# OpenAI-compatible – drop-in DALL-E 3 replacement
curl -X POST https://api.creativeai.run/v1/images/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "dall-e-3",
"prompt": "A serene Japanese garden at sunset",
"size": "1024x1024",
"n": 1,
"character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
"camera_angle": "isometric",
"style_reference_url": "https://example.com/style.jpg"
}'

Image Generation Parameters
- model – dall-e-3 (recommended), gpt-image-1
- prompt – Text description of the image
- size – 1024x1024, 1536x1024, 1024x1536
- background – auto (default) or transparent for a PNG with an alpha channel
- n – Number of images (1-4)
- quality – standard, hd
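Because invalid parameters only surface after a round-trip, it can help to validate request bodies client-side before spending credits. A minimal sketch using the ranges listed above; build_image_request is a hypothetical helper, not part of any official SDK:

```python
# Hypothetical client-side validator for /v1/images/generations bodies,
# using the parameter ranges documented on this page.

ALLOWED_SIZES = {"1024x1024", "1536x1024", "1024x1536"}
ALLOWED_QUALITY = {"standard", "hd"}

def build_image_request(prompt, model="dall-e-3", size="1024x1024",
                        background="auto", n=1, quality="standard"):
    """Return a request body dict, raising ValueError on out-of-range values."""
    if not prompt:
        raise ValueError("prompt is required")
    if size not in ALLOWED_SIZES:
        raise ValueError(f"unsupported size: {size}")
    if background not in {"auto", "transparent"}:
        raise ValueError(f"unsupported background: {background}")
    if not 1 <= n <= 4:
        raise ValueError("n must be between 1 and 4")
    if quality not in ALLOWED_QUALITY:
        raise ValueError(f"unsupported quality: {quality}")
    return {"model": model, "prompt": prompt, "size": size,
            "background": background, "n": n, "quality": quality}
```

The returned dict can be passed directly as the JSON body of the curl request above.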
Transparent PNG Generation
Generate images with transparent backgrounds – no post-processing needed. Set "background": "transparent" and get a PNG with a proper alpha channel. Perfect for game sprites, stickers, logos, and design assets.
# Generate a transparent PNG – perfect for game sprites, stickers, logos
curl -X POST https://api.creativeai.run/v1/images/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "dall-e-3",
"prompt": "Pixel art warrior character, 64x64 sprite, clean edges",
"size": "1024x1024",
"background": "transparent",
"n": 1
}'

How It Works
When background is set to "transparent", the output format is automatically forced to PNG (since JPEG and WebP don't support alpha channels). The resulting image has a proper alpha channel – no white backgrounds, no post-processing, ready to composite directly into your app, game, or design tool.
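Since transparent output is always delivered as PNG, a client can sanity-check a downloaded file by reading its IHDR chunk, where color type 6 (truecolor + alpha) or 4 (grayscale + alpha) confirms an alpha channel is present. A stdlib-only sketch, not part of the CreativeAI API:

```python
# Check whether a PNG byte string declares an alpha channel in its IHDR chunk.
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_has_alpha(data):
    """True if the PNG's IHDR color type carries an alpha channel."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk: 4-byte length, b"IHDR", 13-byte body.
    if data[12:16] != b"IHDR":
        raise ValueError("malformed PNG: IHDR is not the first chunk")
    # IHDR body: width(4) height(4) bit-depth(1) color-type(1) ...
    width, height, bit_depth, color_type = struct.unpack(">IIBB", data[16:26])
    return color_type in (4, 6)  # 4 = gray+alpha, 6 = truecolor+alpha
```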
Advanced Image Editing
Fix hands, swap backgrounds, expand your canvas, or transfer styles – without regenerating from scratch. Uses the OpenAI-compatible /v1/images/edits endpoint with multipart form upload. Now supports inpainting, outpainting, generative fill, and style transfer.
# Advanced Editing: Inpainting, Outpainting, Style Transfer
curl -X POST https://api.creativeai.run/v1/images/edits \
-H "Authorization: Bearer YOUR_API_KEY" \
-F model="seedream-v4.5-edit" \
-F image=@product-shot.png \
-F mask=@hand-mask.png \
-F prompt="A natural human hand with five fingers holding the product in an expanded cinematic scene" \
-F outpaint_direction="all" \
-F style_reference_url="https://example.com/style.jpg" \
-F size="1024x1024"

Mask Format
Masks should be PNG images matching the dimensions of the input image. Use white (255,255,255) to mark areas to edit and black (0,0,0) to mark areas to preserve. Alternatively, PNG images with an alpha channel are supported – transparent areas will be edited, opaque areas preserved.
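The white-edit/black-preserve convention can be generated programmatically. A stdlib-only sketch that writes a valid 8-bit RGB PNG mask; make_mask_png and its edit_region callback are hypothetical helpers, not part of the CreativeAI API:

```python
# Build a white-on-black edit mask as a real PNG using only the stdlib.
import struct
import zlib

def _chunk(tag, body):
    # Each PNG chunk: 4-byte length, tag, body, CRC over tag+body.
    return (struct.pack(">I", len(body)) + tag + body
            + struct.pack(">I", zlib.crc32(tag + body)))

def make_mask_png(width, height, edit_region):
    """Write an RGB mask PNG: edit_region(x, y) -> True where pixels should be edited."""
    raw = bytearray()
    for y in range(height):
        raw.append(0)  # filter type None for this scanline
        for x in range(width):
            raw += b"\xff\xff\xff" if edit_region(x, y) else b"\x00\x00\x00"
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(bytes(raw)))
            + _chunk(b"IEND", b""))
```

Write the returned bytes to a file and pass it as the `mask` form field; the mask dimensions must match the input image.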
Utility Endpoints

Background removal, upscaling, and image restoration – the utility operations that e-commerce and automation workflows depend on. All available through the same unified API with async polling support.
Background Removal
Remove backgrounds from product photos, portraits, or any image. Returns a transparent PNG ready for compositing.
Super Resolution
Upscale images up to 4x with AI-enhanced detail recovery. Perfect for print, large displays, or restoring low-res assets.
Image Restoration
Fix damaged, noisy, or low-quality images. Restore old photos, clean up artifacts, and enhance clarity.
Why Utility Endpoints Matter

Competitive research shows background removal has 11+ dedicated models on Replicate, and upscaling has 33+ models. These aren't edge cases – they're core workflow operations for e-commerce, marketing, and automation platforms.
Video Generation
Generate AI videos from text prompts. Videos use an async job pattern: submit a job, then either poll for status or receive a signed webhook when the render finishes. Supports live Kling and Seedance model aliases via POST /v1/video/generations.
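The submit-then-poll pattern can be sketched as a small client helper. This is a sketch, not official SDK code: the status names and 10-second interval come from this page, and the HTTP call is injected as fetch_status so the loop stays testable.

```python
# Poll an async generation until it reaches a terminal status.
import time

TERMINAL_STATUSES = {"completed", "failed"}

def poll_generation(fetch_status, generation_id, interval=10.0,
                    timeout=900.0, sleep=time.sleep):
    """Poll fetch_status(generation_id) until a terminal status or timeout.

    fetch_status is any callable returning a status dict shaped like the
    GET /v1/video/generations/{id} responses shown in this section.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_status(generation_id)
        if job.get("status") in TERMINAL_STATUSES:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"generation {generation_id} still {job.get('status')!r} "
                f"after {timeout}s")
        sleep(interval)
```

In production, prefer webhook_url and keep polling as a fallback for missed deliveries.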
Migrating from Sora? Just change your base URL and model name – same API pattern.
# Video Generation – async with polling
# Step 1: Submit video generation job
curl -X POST https://api.creativeai.run/v1/video/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"duration": 5,
"aspect_ratio": "16:9",
"character_reference_url": "https://example.com/character.png",
"character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
"camera_angle": "isometric",
"style_reference_url": "https://example.com/style.jpg",
"webhook_url": "https://your-app.com/webhooks/creativeai/video"
}'
# Response: { "id": "abc123", "status": "processing", ... }
# Step 2: Poll for completion
curl https://api.creativeai.run/v1/video/generations/abc123 \
-H "Authorization: Bearer YOUR_API_KEY"

Request Parameters
- model – auto (default), kling-o3-pro, kling-v3, seedance-1.5-pro
- prompt – Text description of the video
- image_url – Optional source image; switches the job to image-to-video
- duration – Length in seconds (3-15)
- aspect_ratio – 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
- resolution – 480p, 720p (default), 1080p
- failover – Auto-failover to alternate providers on error (default: true)
- webhook_url – Optional callback URL for completed/failed jobs

Auto Mode Today
model: "auto" is the safest starting point and the default request model. auto resolves to Kling O3 Pro and auto-switches between -t2v and -i2v based on your inputs.

Status Flow
- POST → 202 Accepted with a job ID
- GET → Poll with the job ID every 10s, or wait for signed webhook delivery
- pending/processing → completed or failed
- output_url is available via polling and included in the completion payload

Customer-safe batch workflow guidance
Set a webhook_url per request. Track one generation_id per SKU or asset so one slow render never blocks the rest of the queue.

Image-to-Video
Animate any still image into a video. Pass image_url alongside your prompt – the model uses the image as the first frame and generates motion from it.
# Image-to-Video – animate a still image
curl -X POST https://api.creativeai.run/v1/video/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"prompt": "Camera slowly zooms in, the character turns and smiles",
"image_url": "https://example.com/portrait.png",
"duration": 5,
"aspect_ratio": "9:16"
}'

// POST /v1/video/generations – minimal image-to-video request
{
"model": "auto",
"image_url": "https://example.com/source-image.png",
"prompt": "Subtle camera push in, natural motion, cinematic lighting",
"duration": 5,
"aspect_ratio": "16:9"
}

// POST /v1/video/generations – 202 Accepted
{
"id": "vgen_i2v_abc123",
"object": "video.generation",
"model": "auto",
"model_actual": "kling-o3-pro-i2v",
"status": "processing",
"prompt": "Subtle camera push in, natural motion, cinematic lighting",
"image_url": "https://example.com/source-image.png",
"credits": 30,
"output_url": null,
"created_at": "2026-04-10T00:15:00Z"
}

// GET /v1/video/generations/vgen_i2v_abc123 – 200 OK
{
"id": "vgen_i2v_abc123",
"object": "video.generation",
"model": "auto",
"model_actual": "kling-o3-pro-i2v",
"status": "completed",
"prompt": "Subtle camera push in, natural motion, cinematic lighting",
"image_url": "https://example.com/source-image.png",
"credits": 30,
"output_url": "https://cdn.creativeai.run/output/i2v-abc123.mp4",
"error": null,
"completed_at": "2026-04-10T00:17:32Z"
}

Image Requirements
- image_url – Publicly accessible HTTPS URL to a JPEG or PNG image
- The source image's aspect ratio is used when aspect_ratio is omitted
- Use the prompt to control motion direction, camera movement, and scene evolution

Batch Video Generation
Generate up to 20 videos in a single API call. Perfect for A/B testing, product video catalogs, marketing campaign variations, and bulk content production. Each video in the batch runs independently – one slow render never blocks the others.
# Batch Video Generation – generate up to 20 videos in one request
curl -X POST https://api.creativeai.run/v1/video/generations/batch \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompts": [
{"prompt": "Product showcase video, rotating 360 degrees", "image_url": "https://example.com/product1.png"},
{"prompt": "Social media clip, dynamic transitions"},
{"prompt": "Aerial drone shot over mountains at sunset"}
],
"model": "auto",
"aspect_ratio": "16:9",
"duration": 5,
"webhook_url": "https://your-app.com/webhooks/creativeai/batch"
}'
# Response: { "batch_id": "video_batch_abc123", "total_jobs": 3, ... }
# Check batch status
curl https://api.creativeai.run/v1/video/generations/batch/video_batch_abc123 \
-H "Authorization: Bearer YOUR_API_KEY"

Request Parameters
- prompts – Array of 1-20 prompt configs (required)
- prompts[].prompt – Text description for this video
- prompts[].image_url – Optional: source image for I2V
- model – auto (default), kling-o3-pro, etc.
- aspect_ratio – 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
- duration – Length in seconds (3-15)
- webhook_url – Callback URL for batch completion

Batch Status Tracking
- POST → 202 Accepted with a batch_id
- GET /batch/{batch_id} → Check progress
- Job statuses: completed, failed, processing, pending
- Each job reports its output_url when done

// POST /v1/video/generations/batch – 202 Accepted
{
"batch_id": "video_batch_abc123",
"total_jobs": 3,
"jobs": [
{"id": "vid_xyz1", "prompt": "Product showcase video...", "status": "pending", "credits": 30},
{"id": "vid_xyz2", "prompt": "Social media clip...", "status": "pending", "credits": 30},
{"id": "vid_xyz3", "prompt": "Aerial drone shot...", "status": "pending", "credits": 30}
],
"credits_charged": 90
}

// GET /v1/video/generations/batch/{batch_id} – 200 OK
{
"batch_id": "video_batch_abc123",
"total_jobs": 3,
"completed": 2,
"failed": 0,
"pending": 0,
"processing": 1,
"jobs": [
{
"id": "vid_xyz1",
"prompt": "Product showcase video...",
"image_url": "https://example.com/product1.png",
"status": "completed",
"output_url": "https://cdn.creativeai.run/output/video-1.mp4",
"error_message": null,
"credits": 30
},
{
"id": "vid_xyz2",
"prompt": "Social media clip...",
"image_url": null,
"status": "completed",
"output_url": "https://cdn.creativeai.run/output/video-2.mp4",
"error_message": null,
"credits": 30
},
{
"id": "vid_xyz3",
"prompt": "Aerial drone shot...",
"image_url": null,
"status": "processing",
"output_url": null,
"error_message": null,
"credits": 30
}
]
}
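For dashboards or queue monitors, the batch status payload above reduces to a few progress numbers. A sketch, using field names taken from the example responses on this page:

```python
# Collapse a batch status payload into progress counts and output URLs.

def summarize_batch(batch):
    """Return total, per-status counts, done flag, and collected output URLs."""
    jobs = batch.get("jobs", [])
    counts = {"completed": 0, "failed": 0, "processing": 0, "pending": 0}
    outputs = []
    for job in jobs:
        status = job.get("status", "pending")
        counts[status] = counts.get(status, 0) + 1
        if job.get("output_url"):
            outputs.append(job["output_url"])
    # The batch is done once every job has reached a terminal status.
    done = counts["completed"] + counts["failed"] == len(jobs)
    return {"total": len(jobs), "done": done, "outputs": outputs, **counts}
```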
Video Response Format
// POST /v1/video/generations – 202 Accepted
{
"id": "vgen_abc123",
"object": "video.generation",
"model": "auto",
"model_actual": "kling-o3-pro-t2v",
"status": "processing",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"credits": 60,
"failover_used": false,
"output_url": null,
"webhook_url": "https://your-app.com/webhooks/creativeai/video",
"created_at": "2026-03-09T16:10:00Z"
}

// GET /v1/video/generations/vgen_abc123 – 200 OK (completed)
{
"id": "vgen_abc123",
"object": "video.generation",
"model": "auto",
"model_actual": "kling-o3-pro-t2v",
"status": "completed",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"credits": 60,
"output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
"error": null,
"failover_used": false,
"debug": {
"provider_requested": "kling-o3-pro-t2v",
"provider_actual": "kling-o3-pro-t2v",
"submission_failover_used": false,
"submission_failover_reason": null,
"routing_safe_fallback_applied": true,
"routing_fallback_reason": "insufficient_margin_over_legacy",
"routing_preferred_provider": "seedance-t2v",
"routing_legacy_provider": "kling-o3-pro-t2v",
"routing_confidence_margin": 5.0,
"async_capacity_failover_attempted": false,
"async_capacity_failover_reason": null,
"prediction_id_original": null,
"prediction_id_current": "pred_auto_1"
},
"created_at": "2026-03-09T16:10:00Z",
"completed_at": "2026-03-09T16:12:43Z"
}

// GET /v1/video/generations/vgen_abc123 – 200 OK (failed)
{
"id": "vgen_abc123",
"object": "video.generation",
"model": "auto",
"model_actual": "seedance-t2v",
"status": "failed",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"credits": 30,
"output_url": null,
"error": {
"message": "Video generation timed out after 10 minutes",
"type": "server_error",
"code": "generation_failed"
},
"failover_used": true,
"created_at": "2026-03-09T16:10:00Z",
"completed_at": "2026-03-09T16:20:02Z"
}

Auto Routing, Failover & Debug
Public video responses always tell you what you asked for and what actually ran. When failover is enabled (default), submissions automatically retry on an alternative provider if the first one fails.
- model – The model you requested. If you send "auto", the status response keeps model: "auto".
- model_actual – The concrete provider key that is currently serving or finished the job.
- failover_used – true if submission failover moved the request to another provider.
- debug – Optional status-only block. When enabled for your environment, it includes routing and failover metadata such as provider_requested, provider_actual, routing_safe_fallback_applied, routing_fallback_reason, routing_preferred_provider, routing_legacy_provider, and routing_confidence_margin.

Quality-based auto routing is a gated rollout. If it is not enabled for your environment, auto stays on the conservative Kling O3 Pro default path.
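A client can turn those fields into a one-line routing summary for logs. A sketch that assumes only the field names documented above; routing_summary is a hypothetical helper:

```python
# Summarize requested vs actual model, failover, and safe-fallback info
# from a video generation status response.

def routing_summary(status):
    """Return a one-line, log-friendly routing summary string."""
    parts = [f"requested={status.get('model')}",
             f"ran={status.get('model_actual')}"]
    if status.get("failover_used"):
        parts.append("submission failover engaged")
    debug = status.get("debug") or {}  # debug block is optional / gated
    if debug.get("routing_safe_fallback_applied"):
        parts.append(f"safe fallback: {debug.get('routing_fallback_reason')}")
    return ", ".join(parts)
```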
Kling V3 Multi-Shot Storyboarding (Early Access)
The upcoming Kling V3 and O3 integrations offer advanced multi-shot storyboarding and cinematic AI director capabilities directly via our unified API. This enables high-volume, multi-scene video generation without jumping between different prompt workflows.
Multi-shot Generation Pipeline
- Sequential Prompting: Submit an array of prompts to generate connected scenes in a single API call.
- Persistent Fidelity: Kling V3 maintains character identity and background consistency across multiple generated shots.
- Automatic Transitions: Choose between hard cuts, crossfades, or continuous camera motion between prompt segments.
- Reference Anchor (O3 Pro): Provide a static keyframe (image) as an anchor that acts as the starting point for shot 1 and reference for subsequent shots.
AI Director Mode Parameters
- Camera Control: Explicit controls for pan, tilt, zoom, truck, pedestal, and roll for each shot.
- Motion Intensity: Variable cfg_scale and motion weighting per shot (e.g., fast action in shot 1, slow motion in shot 2).
- Audio & Lip-sync: sound: true generates native atmospheric audio that aligns with sequences.
- Extended Durations: Generate seamless clips up to 10s (V3) and 15s (O3), which can be stitched for long-form ad creatives.
Multi-Shot Example Request (Python)
import requests
response = requests.post(
"https://api.creativeai.run/v1/video/generations",
headers={"Authorization": "Bearer YOUR_API_KEY"},
json={
"model": "kling-v3",
"prompt": "Camera pans right over a dense neon-lit cyberpunk city. Flying cars zip past.",
"multi_shot_prompts": [
"Cut to a close-up of a cyborg detective lighting a cigarette.",
"The camera pulls back quickly to reveal the detective is standing on the edge of a skyscraper."
],
"ai_director_mode": True,
"aspect_ratio": "16:9",
"character_reference_url": "https://example.com/character.png",
"character_references": ["https://example.com/char1.png", "https://example.com/char2.png"],
"camera_angle": "isometric",
"style_reference_url": "https://example.com/style.jpg",
"duration": 10,
"webhook_url": "https://your-server.com/webhooks/video"
}
)
print(response.json())

One API for Image + Video + Voice
Unlike Replicate or fal.ai where you need separate integrations for each model type, CreativeAI gives you images, videos, and voice narration through a single unified API. No more stitching together multiple vendors.
Voice Generation
Generate natural speech from text in 10 languages with 6 voice presets. Uses the same async job pattern as video: submit via POST /api/generate/voice, then poll or receive a webhook.
8 credits per generation. Pair with Compose to add narration to any video.
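Scripts longer than the documented 5,000-character text limit must be split across multiple voice jobs. A sketch that breaks at sentence boundaries; chunk_script is a hypothetical helper, and a single sentence longer than the limit is kept whole rather than cut mid-word:

```python
# Split a long narration script into chunks under the text-length limit,
# breaking at sentence boundaries so each chunk reads naturally.
import re

def chunk_script(text, limit=5000):
    """Return a list of chunks, each at most `limit` characters when possible."""
    # Greedily group sentence-sized pieces (ending in . ! or ?) into chunks.
    sentences = re.findall(r"[^.!?]+[.!?]?\s*", text)
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) > limit:
            chunks.append(current.strip())
            current = ""
        current += sentence
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Each chunk can then be submitted as its own voice job and the resulting MP3s concatenated or composed in order.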
# Voice Generation (Text-to-Speech) – async with polling
# Step 1: Submit voice generation job (8 credits)
curl -X POST https://api.creativeai.run/api/generate/voice \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"text": "Welcome to CreativeAI. Your product video is ready.",
"voice": "alloy",
"language": "en",
"speed": 1.0,
"webhook_url": "https://your-app.com/webhooks/creativeai/voice"
}'
# Response: { "id": "voice_abc123", "status": "PENDING", ... }
# Step 2: Poll for completion
curl https://api.creativeai.run/api/generate/status/voice_abc123 \
-H "Authorization: Bearer YOUR_API_KEY"

Request Parameters
- text – Text to synthesize (1-5,000 characters)
- voice – alloy (default), echo, fable, onyx, nova, shimmer
- language – en (default), zh, ja, ko, es, fr, de, pt, ru, ar
- speed – Playback speed 0.5-2.0 (default 1.0)
- webhook_url – Optional callback URL for completed/failed jobs

// GET /api/generate/status/{id} – 200 OK
{
"id": "voice_abc123",
"type": "VOICE",
"model": "creative-voice",
"status": "COMPLETED",
"output_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
"credits": 8,
"parameters": {
"voice": "alloy",
"language": "en",
"speed": 1.0
},
"created_at": "2026-03-27T18:30:00Z",
"completed_at": "2026-03-27T18:30:12Z"
}

Status Flow
- POST → 202 Accepted with a job ID
- GET → Poll /api/generate/status/{id} every 3s, or wait for a signed webhook
- PENDING/PROCESSING → COMPLETED or FAILED

Video Webhooks
For production workloads, pass webhook_url in your create request and process callback events instead of heavy polling loops. Webhooks are retried with exponential backoff and deduplicated per generation ID + terminal status.
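For teams not on Node, the same verification can be written in Python. A sketch assuming the sha256=&lt;hex&gt; header format from the examples in this section, plus a dedup key built from the generation ID and terminal status as the delivery-behavior notes recommend:

```python
# Verify a CreativeAI webhook signature and build a dedup key.
import hashlib
import hmac

def verify_signature(raw_body, signature_header, secret):
    """Constant-time check of the X-CreativeAI-Signature header."""
    if not signature_header or not signature_header.startswith("sha256="):
        return False
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header[len("sha256="):])

def dedup_key(payload):
    # Deliveries are deduplicated per generation ID + terminal status.
    return f"{payload['id']}:{payload['status']}"
```

Verify against the raw request bytes before JSON parsing, since re-serialized JSON may not match the signed body byte-for-byte.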
# Outbound webhook from CreativeAI to your webhook_url
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.completed
X-CreativeAI-Delivery-Id: vgen_abc123
X-CreativeAI-Signature: sha256=<hmac-hex> # present when signed
{
"id": "vgen_abc123",
"object": "video.generation",
"status": "completed",
"model": "auto",
"model_actual": "seedance-t2v",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
"error_message": null,
"credits": 30,
"failover_used": true,
"completed_at": "2026-03-09T16:12:43Z"
}

// Node.js signature verification
import crypto from 'crypto';
function verifyCreativeAiWebhook(rawBody, signatureHeader, signingSecret) {
if (!signatureHeader?.startsWith('sha256=')) return false;
const expected = crypto
.createHmac('sha256', signingSecret)
.update(rawBody)
.digest('hex');
const provided = signatureHeader.slice('sha256='.length);
return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(provided));
}

Make.com Webhook Integration
To eliminate polling in Make.com, use a Custom Webhook module and map its URL to the webhook_url parameter. Combine this with webhook_auto_retry to natively handle transient model failures.
- Add a Webhooks โ Custom Webhook module
- Create a webhook and copy the unique URL
- Wait for the module to start listening for data
- Add an HTTP โ Make an API Key Auth request module
- Pass the webhook URL into "webhook_url"
- Pass "webhook_auto_retry": true to enable automatic retry if the generation job fails
Webhook Delivery Behavior
- Webhooks fire for completed and failed states.
- Failed deliveries are retried at 0s, 5s, and 30s.
- Deliveries are deduplicated per generation_id + status.
- X-CreativeAI-Signature is included when a signing secret is configured.
- If delivery ultimately fails, fall back to the status API.

Webhook Payload Examples (Integration Planning)
Complete webhook payload examples for all event types. Use these to plan your integration, build handlers, and test your webhook endpoints.
# Outbound webhook from CreativeAI to your webhook_url
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.completed
X-CreativeAI-Delivery-Id: vgen_abc123
X-CreativeAI-Signature: sha256=<hmac-hex> # present when signed
{
"id": "vgen_abc123",
"object": "video.generation",
"status": "completed",
"model": "auto",
"model_actual": "seedance-t2v",
"prompt": "A drone shot over a coastal city at golden hour, cinematic",
"output_url": "https://cdn.creativeai.run/output/video-abc123.mp4",
"error_message": null,
"credits": 30,
"failover_used": true,
"completed_at": "2026-03-09T16:12:43Z"
}

# Webhook delivery for failed video generation
POST /webhooks/creativeai/video
Content-Type: application/json
X-CreativeAI-Event: video.generation.failed
X-CreativeAI-Delivery-Id: vgen_xyz789
X-CreativeAI-Signature: sha256=<hmac-hex>
{
"id": "vgen_xyz789",
"object": "video.generation",
"status": "failed",
"model": "auto",
"model_actual": "kling-o3-pro-t2v",
"prompt": "A drone shot over mountains at sunset",
"output_url": null,
"error": {
"message": "Video generation timed out after 10 minutes",
"type": "server_error",
"code": "generation_timeout"
},
"error_message": "Video generation timed out after 10 minutes",
"credits": 0,
"credits_refunded": 60,
"failover_used": true,
"completed_at": "2026-03-09T16:20:02Z"
}

# Webhook delivery for batch video completion
POST /webhooks/creativeai/batch
Content-Type: application/json
X-CreativeAI-Event: video.batch.completed
X-CreativeAI-Delivery-Id: video_batch_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>
{
"batch_id": "video_batch_abc123",
"object": "video.batch",
"status": "completed",
"total_jobs": 3,
"completed": 3,
"failed": 0,
"jobs": [
{
"id": "vid_xyz1",
"prompt": "Product showcase video, rotating 360 degrees",
"status": "completed",
"output_url": "https://cdn.creativeai.run/output/video-1.mp4",
"credits": 30
},
{
"id": "vid_xyz2",
"prompt": "Social media clip, dynamic transitions",
"status": "completed",
"output_url": "https://cdn.creativeai.run/output/video-2.mp4",
"credits": 30
},
{
"id": "vid_xyz3",
"prompt": "Aerial drone shot over mountains at sunset",
"status": "completed",
"output_url": "https://cdn.creativeai.run/output/video-3.mp4",
"credits": 30
}
],
"total_credits": 90,
"completed_at": "2026-03-09T16:15:00Z"
}

# Webhook delivery for async image generation (native API)
POST /webhooks/creativeai/image
Content-Type: application/json
X-CreativeAI-Event: image.generation.completed
X-CreativeAI-Delivery-Id: img_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>
{
"id": "img_abc123",
"object": "image.generation",
"status": "completed",
"model": "seedream-3.0",
"prompt": "A minimalist product photo on white background",
"output_url": "https://cdn.creativeai.run/output/image-abc123.png",
"revised_prompt": "A minimalist product photo on white background, professional lighting",
"width": 1024,
"height": 1024,
"credits": 2,
"completed_at": "2026-03-09T16:05:12Z"
}

# Webhook delivery for voice generation
POST /webhooks/creativeai/voice
Content-Type: application/json
X-CreativeAI-Event: voice.generation.completed
X-CreativeAI-Delivery-Id: voice_abc123
X-CreativeAI-Signature: sha256=<hmac-hex>
{
"id": "voice_abc123",
"object": "voice.generation",
"status": "completed",
"text": "Welcome to CreativeAI. Your launch assets are rendering now.",
"voice": "alloy",
"language": "en",
"output_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
"duration_seconds": 4.2,
"credits": 1,
"completed_at": "2026-03-09T16:03:45Z"
}

Integration Planning Tips
- Use the X-CreativeAI-Event header to route different event types to different handlers.
- Use X-CreativeAI-Delivery-Id + status as your dedup key.
- Verify X-CreativeAI-Signature before processing. Use your API key as the HMAC secret.

Getting Started
Create an account
Sign up at creativeai.run – free tier includes starter credits.
Get your API key
Go to Dashboard → API Keys → Create New Key. Copy and store it securely.
Make your first request
Use any OpenAI SDK or plain HTTP. Point base_url to api.creativeai.run and start generating.
Authentication
All API requests require a Bearer token in the Authorization header.
Create API keys in your Dashboard → API Keys. Keys are scoped to your account and inherit your credit balance.
Commerce enhancement → motion → voice workflow
This is the smallest standard-endpoint proof for a commerce buyer who wants one product image turned into a cleaned-up, background-ready, narrated short-form asset without stitching together multiple vendors. Start with /v1/images/edits, then chain /v1/video/generations, /api/generate/voice, and /api/generate/compose.
Step 1 – Clean up the still

Use /v1/images/edits to relight, sharpen, or place the item onto a clean studio or lifestyle background.

Step 2 – Generate motion + narration

Use /v1/video/generations for a 5-10 second motion clip, then generate the narration track with /api/generate/voice.

Step 3 – Deliver the final asset

Use /api/generate/compose to merge the hosted MP3 onto the hosted MP4.
# Commerce workflow: cleanup -> motion -> voice -> final MP4
# 1) Enhance the source product image for marketplace-safe lighting/background
curl -X POST https://api.creativeai.run/v1/images/edits -H "Authorization: Bearer YOUR_API_KEY" -F model="seedream-v4.5-edit" -F image=@product.jpg -F prompt="Clean up this product photo, sharpen details, improve lighting, and place the item on a clean white studio background with a soft natural shadow" -F size="1024x1024"
# 2) Turn the cleaned-up still into a short motion clip
curl -X POST https://api.creativeai.run/v1/video/generations -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"model": "auto",
"image_url": "https://cdn.creativeai.run/output/clean-product-shot.png",
"prompt": "Slow premium turntable shot of this product, subtle camera drift, soft studio reflections, ecommerce ad style",
"duration": 5,
"webhook_url": "https://your-app.com/webhooks/creativeai/video"
}'
# 3) Generate the narration track
curl -X POST https://api.creativeai.run/api/generate/voice -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"text": "Meet the travel mug built for commutes, workouts, and long desk days. Leak-resistant, lightweight, and ready to go.",
"voice": "alloy",
"language": "en",
"speed": 1.0,
"webhook_url": "https://your-app.com/webhooks/creativeai/voice"
}'
# 4) Merge the hosted voice track onto the hosted MP4
curl -X POST https://api.creativeai.run/api/generate/compose -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"video_url": "https://cdn.creativeai.run/output/product-motion.mp4",
"audio_url": "https://cdn.creativeai.run/output/product-voice.mp3",
"replace_audio": true,
"webhook_url": "https://your-app.com/webhooks/creativeai/compose"
}'

// GET /api/generate/status/{id} – 200 OK
{
"id": "comp_commerce123",
"status": "COMPLETED",
"output_url": "https://cdn.creativeai.run/output/travel-mug-story.mp4",
"credits": 5,
"created_at": "2026-03-27T21:40:00Z",
"completed_at": "2026-03-27T21:40:22Z"
}
Voice + Video Composition
Use POST /api/generate/compose to merge a hosted audio file onto a hosted MP4. It is the simplest public workflow proof for demo, documentation, and API-platform buyers who want one vendor for visuals, narration, and final delivery.
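A typical client composes only after the narration job has finished. A sketch that builds the compose body from a completed voice status payload; build_compose_request is a hypothetical helper using the input names listed below:

```python
# Build a /api/generate/compose body from a finished voice job's status dict.

def build_compose_request(video_url, voice_job, replace_audio=True,
                          webhook_url=None):
    """Guard that the narration completed before composing it onto a video."""
    if voice_job.get("status") != "COMPLETED" or not voice_job.get("output_url"):
        raise ValueError("voice job has no finished output to compose")
    body = {"video_url": video_url,
            "audio_url": voice_job["output_url"],
            "replace_audio": replace_audio}
    if webhook_url:
        body["webhook_url"] = webhook_url
    return body
```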
Required Inputs
- video_url – Hosted MP4 to update
- audio_url – Hosted narration or soundtrack
- replace_audio – true to swap the soundtrack, false to mix audio over the original track
- webhook_url – Optional callback when composition finishes

Typical Workflow
1. Generate the narration with POST /api/generate/voice
2. Submit the compose job with the hosted audio and video URLs
3. Poll /api/generate/status/{id} or wait for webhook delivery
# Merge generated narration onto an existing video
# Step 1: submit a composition job
curl -X POST https://api.creativeai.run/api/generate/compose -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"video_url": "https://cdn.creativeai.run/output/product-demo.mp4",
"audio_url": "https://cdn.creativeai.run/output/voice-abc123.mp3",
"replace_audio": true,
"webhook_url": "https://your-app.com/webhooks/creativeai/compose"
}'
# Step 2: poll the shared async status endpoint if you are not using webhooks
curl https://api.creativeai.run/api/generate/status/comp_xyz789 -H "Authorization: Bearer YOUR_API_KEY"

// GET /api/generate/status/{id} – 200 OK
{
"id": "comp_xyz789",
"status": "COMPLETED",
"output_url": "https://cdn.creativeai.run/output/product-demo-with-voice.mp4",
"credits": 5,
"created_at": "2026-03-27T18:30:00Z",
"completed_at": "2026-03-27T18:30:18Z"
}
Chart-to-Video API
Use POST /api/generate/chart-to-video to turn static charts into animated videos with Ken Burns-style effects, transitions, and zooms. Perfect for financial research agents, investor presentations, and automated report generation.
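Since a request supplies either an existing chart image or a data-driven config, a client can enforce that choice locally before submitting. A sketch, assuming the endpoint expects exactly one of the two inputs (the two examples below use them separately); build_chart_request is a hypothetical helper:

```python
# Build a /api/generate/chart-to-video body, requiring exactly one chart input.

ANIMATION_STYLES = {"ken_burns", "build_up", "reveal", "morph"}

def build_chart_request(animation_style, chart_image_url=None,
                        chart_data=None, title=None, duration=10):
    """Return a request body dict, validating inputs before submission."""
    if bool(chart_image_url) == bool(chart_data):
        raise ValueError("provide exactly one of chart_image_url or chart_data")
    if animation_style not in ANIMATION_STYLES:
        raise ValueError(f"unknown animation_style: {animation_style}")
    body = {"animation_style": animation_style, "duration": duration}
    if chart_image_url:
        body["chart_image_url"] = chart_image_url
    else:
        body["chart_data"] = chart_data
    if title:
        body["title"] = title
    return body
```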
Input Options
- chart_image_url – Existing chart image to animate
- chart_data – Data-driven config (line/bar/pie/area)
- animation_style – ken_burns, build_up, reveal, morph
- title – Overlay title for the video
# Chart-to-Video: Turn financial charts into animated investor videos
# Step 1: Submit chart-to-video job with image or data
curl -X POST https://api.creativeai.run/api/generate/chart-to-video -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"chart_image_url": "https://example.com/charts/q4-revenue.png",
"animation_style": "ken_burns",
"title": "Q4 Revenue Growth",
"duration": 10,
"webhook_url": "https://your-app.com/webhooks/creativeai/chart-video"
}'
# Alternative: Data-driven chart generation
curl -X POST https://api.creativeai.run/api/generate/chart-to-video -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{
"chart_data": {
"type": "line",
"labels": ["Q1", "Q2", "Q3", "Q4"],
"datasets": [{"label": "Revenue", "data": [1.2, 1.8, 2.4, 3.1]}]
},
"animation_style": "build_up",
"title": "Revenue Trajectory",
"duration": 8
}'
# Step 2: Poll the shared async status endpoint
curl https://api.creativeai.run/api/generate/status/chart_xyz123 -H "Authorization: Bearer YOUR_API_KEY"

// GET /api/generate/status/{id} – 200 OK
{
"id": "chart_xyz123",
"status": "COMPLETED",
"output_url": "https://cdn.creativeai.run/output/animated-chart-q4-revenue.mp4",
"credits": 15,
"created_at": "2026-03-28T20:30:00Z",
"completed_at": "2026-03-28T20:30:45Z"
}
Available Models
Access 15+ AI models through a single API. Use GET /v1/models for the full live list.
Image Generation
- dall-e-3 (OpenAI-compatible) – Drop-in DALL-E 3 replacement via /v1/images/generations
- seedream-3.0 (ByteDance) – High-quality image generation with excellent prompt adherence
- flux-pro (Black Forest Labs) – Fast, high-fidelity image synthesis
- stable-diffusion-3 (Stability AI) – Industry-standard open model

Image Editing (Inpainting, Outpainting, Style Transfer)
- seedream-v4.5-edit (ByteDance) – High-quality inpainting and selective image editing via /v1/images/edits
- wan-2.6-image-edit (Alibaba) – Multi-reference image editing with mask support

Video Generation
auto (CreativeAI routing) — Default video alias. Routes to a live provider and keeps requested vs actual model visible in responses.
kling-o3-pro (Kuaishou) — Highest-quality explicit video model alias for text-to-video and image-to-video via /v1/video/generations
kling-v3 (Kuaishou) — Kling v3 Pro shorthand alias for OpenAI-compatible video generation
seedance-1.5-pro (ByteDance) — Seedance 1.5 alias for explicit text-to-video and image-to-video generation
Audio / Text-to-Speech
alloy (CreativeAI) — Balanced, versatile voice for general use
echo (CreativeAI) — Warm, natural-sounding male voice
fable (CreativeAI) — Expressive, storytelling-style voice
onyx (CreativeAI) — Deep, authoritative male voice
nova (CreativeAI) — Friendly, conversational female voice
shimmer (CreativeAI) — Clear, upbeat female voice
LLM Chat
claude-* (Anthropic) — Claude family models via OpenAI-compatible endpoint
gemini-* (Google) — Gemini family models
gpt-* (OpenAI) — GPT family models
Response Format
{
"created": 1709550000,
"data": [
{
"url": "https://cdn.creativeai.run/output/abc123.png",
"revised_prompt": "A serene Japanese garden at sunset..."
}
]
}

Image Edit Response
{
"created": 1709550000,
"data": [
{
"url": "https://cdn.creativeai.run/output/edited-abc123.png",
"revised_prompt": "A natural human hand with five fingers..."
}
]
}

Endpoints
/v1/images/generations — Generate images — OpenAI DALL-E 3 compatible (drop-in replacement)
/v1/images/edits — Edit images with advanced editing (inpainting, outpainting, style transfer) — OpenAI compatible
/api/v1/model/generateImage — Generate images — native API with async polling support
/v1/video/generations — Generate videos — async, returns 202 with job ID for polling
/v1/video/generations/{generation_id} — Poll video status — includes requested vs actual model plus optional debug metadata
/v1/video/generations/batch — Generate 1-20 videos in a single request — ideal for A/B testing, product catalogs, marketing campaigns
/v1/video/generations/batch/{batch_id} — Check batch video status — returns counts and individual job results
/api/generate/voice — Generate speech from text — supports 6 voices, 10 languages, adjustable speed
/api/generate/compose — Merge hosted audio onto a hosted video — ideal for narrated demos, explainers, and product clips
/api/generate/status/{generation_id} — Poll shared async status for voice, compose, background-removal, and other generation jobs
/api/v1/model/generateVideo — Generate videos — native API (legacy; use /v1/video/generations instead)
/api/v1/model/prediction/:id — Poll task status for async image/video generation (native API)
/v1/models — List available models and capabilities
/v1/chat/completions — LLM chat completions — OpenAI compatible
/v1/completions — Text completions — OpenAI compatible (legacy)
/api/v1/model/calculate — Calculate pricing for a generation request before running it
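For the batch video endpoint, a hedged sketch of building the request body from a product catalog. The per-job fields mirror the single-job n8n example earlier on this page; the top-level "requests" key is an assumption — confirm the exact batch shape in the OpenAPI spec.

```python
def build_video_batch(products, webhook_url, max_jobs=20):
    """Build a body for /v1/video/generations/batch from a product list.

    NOTE: the top-level "requests" key is a hypothetical field name; the
    per-job fields come from the image-to-video example on this page.
    """
    if not 1 <= len(products) <= max_jobs:
        raise ValueError("the batch endpoint accepts 1-20 videos per request")
    return {
        "requests": [
            {
                "model": "auto",
                "prompt": f"Turn this product image into a short vertical promo clip: {p['name']}",
                "image_url": p["image_url"],
                "duration": 5,
                "aspect_ratio": "9:16",
                "webhook_url": webhook_url,
            }
            for p in products
        ]
    }
```

Validating the 1-20 bound client-side avoids a round trip that would fail with invalid_request_error.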
Video Polling Best Practices
Video generation is async — here's how to poll efficiently and avoid common pitfalls.
⏱ Recommended Polling Strategy
🚫 Common Mistakes
Don't keep polling a job in failed status — failed is terminal, submit a new job instead.
Check failover_used — you may be charged less if failover occurred.
Provide a webhook_url to receive completion/failure callbacks and reduce polling traffic.
💡 Video Output Hosting
Completed videos are hosted on our CDN โ you get a direct URL in data[0].url. No need to provision storage or pay for egress. URLs are valid for 7 days after generation. Download and store on your end if you need permanent access.
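Since the CDN URLs expire after 7 days, a small standard-library sketch for archiving a finished video to local (or mounted) storage. The filename derivation simply reuses the URL's last path segment — an assumption, not an API guarantee.

```python
import urllib.request
from pathlib import Path

def filename_from_url(url):
    """Derive a local filename from the CDN URL (its last path segment)."""
    return url.rsplit("/", 1)[-1] or "output.mp4"

def archive_video(url, dest_dir="videos"):
    """Copy a completed video to durable storage before the 7-day URL expiry."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / filename_from_url(url)
    urllib.request.urlretrieve(url, target)  # stream the file to disk
    return target
```

In production you would likely push to object storage (S3, GCS) instead of local disk; the expiry logic is the same either way.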
Error Handling
All errors follow a consistent JSON format. Content moderation errors include actionable detail.
{
"error": {
"code": "content_policy_violation",
"message": "Your request was rejected because it may generate content that violates our usage policy.",
"type": "invalid_request_error",
"param": "prompt"
}
}

invalid_request_error — Missing or invalid parameters. Check the param field for which one.
content_policy_violation — Prompt or image was flagged by content moderation. Rephrase and retry. Not billed.
authentication_error — Invalid or missing API key. Check the Authorization header.
insufficient_credits — Not enough credits. Top up at Dashboard → Billing. Request not billed.
rate_limit_exceeded — Too many requests. Back off and retry. Check the Retry-After header for the wait time in seconds.
provider_unavailable — Upstream provider is down. With failover: true (default), we auto-retry on another provider — you only see this if all providers are down.
💡 Content Moderation Policy
Content moderation runs before generation โ you are never billed for blocked requests. If your prompt is blocked, you'll get a 400 with content_policy_violation and a human-readable message explaining why. Rephrase your prompt and retry. For SaaS builders: handle this error gracefully in your UI so your end-users get clear feedback.
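One way to handle the error table above gracefully in a SaaS backend is a small dispatcher over the error envelope. The strategy names here are illustrative; only the error codes come from this page.

```python
RETRYABLE = {"rate_limit_exceeded", "provider_unavailable"}

def classify_error(payload):
    """Map the documented error envelope to a handling strategy.

    Strategy names are illustrative; the codes come from the table above.
    """
    code = payload.get("error", {}).get("code")
    if code in RETRYABLE:
        return "retry"            # back off, then resend the same request
    if code == "content_policy_violation":
        return "show_message"     # surface error.message to the end user; not billed
    if code == "insufficient_credits":
        return "top_up"           # prompt for billing; request not billed
    if code == "authentication_error":
        return "fix_api_key"      # check the Authorization header
    return "fix_request"          # invalid_request_error and anything unrecognized
```

For content_policy_violation in particular, showing the human-readable message (rather than a generic failure) gives end users actionable feedback, as the note above recommends.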
Gemini Migration Guide
Experiencing Gemini 503 errors, quota issues, or alias changes? CreativeAI's multi-model failover means your app keeps working even when Google doesn't.
โ The Gemini Problem
โ How CreativeAI Handles It
model_actual tells you exactly what served your request
adjusted_credits reflects the lower price
Change the base_url and go
# Before (direct Gemini — breaks on 503)
from openai import OpenAI

client = OpenAI(api_key="GEMINI_KEY", base_url="https://generativelanguage.googleapis.com/v1beta/openai/")
# After (CreativeAI — auto-failover, never goes down)
client = OpenAI(api_key="CREATIVEAI_KEY", base_url="https://api.creativeai.run/v1")
# Same code, same models โ now with automatic failover
response = client.chat.completions.create(
model="gemini-2.0-flash", # works as-is
messages=[{"role": "user", "content": "Hello!"}]
)
# Check if failover was used:
# response headers include x-model-actual, x-failover-used

Use promo code GEMINI2026 for 500 free credits to test the migration.
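To act on those headers programmatically, a small helper that reads any dict-like header map works regardless of HTTP client. The header names are taken from this guide; with the OpenAI Python SDK (v1+), headers are reachable through the raw-response wrapper.

```python
def failover_info(headers):
    """Read the failover headers listed above from any dict-like header map.

    Header names (x-model-actual, x-failover-used) are taken from this guide.
    With the OpenAI Python SDK (v1+):

        raw = client.chat.completions.with_raw_response.create(
            model="gemini-2.0-flash",
            messages=[{"role": "user", "content": "Hello!"}],
        )
        info = failover_info(raw.headers)
        completion = raw.parse()
    """
    return {
        "model_actual": headers.get("x-model-actual"),
        "failover_used": str(headers.get("x-failover-used", "")).lower() == "true",
    }
```

Logging `model_actual` alongside each request is an easy way to audit how often failover fires and what it costs you.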
Rate Limits
Transparent rate limits โ no surprises, no hidden caps. All limits are per API key.
Free Tier — 10 requests per minute
Pro Tier — 100 requests per minute
Enterprise — 1,000+ requests per minute
Rate limit headers are included in every response: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset. When exceeded, you'll get a 429 with a Retry-After header. Need higher limits? Contact us.
Video-Specific Limits
Video creation (POST): 5 requests/minute — video jobs are GPU-intensive, so creation is rate-limited separately from polling.
Status polling (GET): 120 requests/minute — a generous limit so you can poll frequently without hitting rate limits.
Enterprise API keys can have custom rate limits and higher concurrency caps. Contact us to upgrade.
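The 429 behavior described above can be handled with a generic retry wrapper that honors Retry-After. This is a sketch over a caller-supplied request function, so it works with any HTTP client; only the header semantics come from this page.

```python
import time

def retry_after_seconds(headers, default=5):
    """Parse the Retry-After header (seconds) from a 429 response."""
    try:
        return max(0, int(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default  # malformed header: fall back to a fixed wait

def call_with_retry(do_request, max_attempts=4):
    """Retry a callable returning (status_code, headers, body) while it yields 429s."""
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(retry_after_seconds(headers))  # honor the server's wait hint
    raise RuntimeError("still rate-limited after max_attempts")
```

Because video creation is capped at 5 POSTs/minute, queueing submissions behind a wrapper like this is usually cleaner than letting workers race into 429s.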
Technical Integration FAQ
Answers to common integration questions for our latest API capabilities.
How do I use the API as a Proxy Layer for n8n/Blotato?
To serve automation builders, we natively support /v1/video/generate as an exact alias to /v1/video/generations. You can plug this directly into your n8n or Blotato workflows as a lower-cost video fallback generation endpoint.
Our unified API acts as a built-in proxy layer — automatically routing requests to the best available provider (Kling, Seedance, etc.) without requiring you to rewrite any code to manage costs or handle rate limits.
How do I use Kling V3 Early Access features?
Pass multi_shot_prompts (array of strings) and ai_director_mode: true in your /v1/video/generations request body.
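A minimal sketch of assembling that request body. Only multi_shot_prompts and ai_director_mode come from this FAQ; the other fields mirror the image-to-video example earlier on the page and are assumptions for Kling V3.

```python
def kling_early_access_body(shots, image_url=None):
    """Build a /v1/video/generations body with the early-access fields above.

    multi_shot_prompts and ai_director_mode are from this FAQ; duration and
    aspect_ratio are assumed from the earlier image-to-video example.
    """
    shots = list(shots)
    if not shots:
        raise ValueError("multi_shot_prompts needs at least one shot")
    body = {
        "model": "kling-v3",
        "multi_shot_prompts": shots,
        "ai_director_mode": True,
        "duration": 5,
        "aspect_ratio": "9:16",
    }
    if image_url:
        body["image_url"] = image_url  # optional image-to-video starting frame
    return body
```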
How does Multi-Character Consistency work?
You can pass an array of up to 3 image URLs in the character_references field. Set camera_angle: "isometric" (or other angles) for consistent game asset generation. Available on both Image and Video generation endpoints.
Use character_references when you need multiple distinct characters to maintain visual consistency across generations. Each URL should point to a clear, single-character reference image. You can also combine it with a single character_reference_url for a primary character.
{
"model": "dall-e-3",
"prompt": "Three fantasy heroes standing on a cliff at sunrise, epic wide shot",
"size": "1792x1024",
"n": 1,
"character_references": [
"https://your-cdn.com/assets/hero_warrior.png",
"https://your-cdn.com/assets/hero_mage.png",
"https://your-cdn.com/assets/hero_rogue.png"
],
"camera_angle": "low-angle",
"style_reference_url": "https://your-cdn.com/assets/fantasy_style.jpg"
}

Supported fields: character_references (array of up to 3 URLs) · character_reference_url (single URL for a primary character) · camera_angle (e.g. "isometric", "low-angle", "bird-eye") · style_reference_url (single URL for style transfer). Works on both /v1/images/generations and /v1/video/generations.
How to redeem the DALLE1000 Promo?
New API keys can submit the promo code DALLE1000 in the billing dashboard to receive $10 in free migration credits. Seamlessly switch from OpenAI by changing the base_url.
How does DALL-E 3 Fallback Insurance work?
When you use our unified API with model: "dall-e-3", our backend automatically provides fallback insurance. If the upstream provider experiences an outage, your requests are seamlessly routed to Stable Diffusion XL or DALL-E 2 with sub-second failover.
Zero code changes required: Simply replace your existing base_url with ours and keep your current OpenAI SDK implementation. Built-in per-key analytics provide complete transparency on model failover events and generation spend.
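What "replace your existing base_url" means in practice, as a hedged sketch. The SDK usage is shown in comments (it requires the openai package and a live key); the helper below covers hand-built requests, where only the host changes.

```python
# With the OpenAI SDK installed, the migration is just the client constructor —
# model, prompt, and the rest of your code stay the same:
#
#     from openai import OpenAI
#     client = OpenAI(api_key="CREATIVEAI_KEY", base_url="https://api.creativeai.run/v1")
#     image = client.images.generate(
#         model="dall-e-3",
#         prompt="A serene Japanese garden at sunset",
#         size="1024x1024",
#     )
#
# If you issue HTTP requests by hand, only the host changes:
def migrate_endpoint(openai_url):
    """Rewrite an api.openai.com URL to the CreativeAI host, keeping the path."""
    return openai_url.replace("https://api.openai.com", "https://api.creativeai.run", 1)
```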