Replace base_url and api_key — your OpenAI SDK code works as-is. Same client.images.generate(), same response shape, plus video generation on the same key.
Use code SORASWITCH for 50 free video credits.

```python
# Before (Sora / OpenAI)
client = OpenAI(
    api_key="sk-...",
    # base_url default → api.openai.com
)

# After (CreativeAI)
client = OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)
```
Same SDK, same methods — plus video.

Call /v1/images/generations with your prompt. Pick from GPT Image 1, Seedream 3, Gemini, or any supported model.
Pass the image URL to /v1/video/generations with a motion prompt. Kling, Seedance, or Veo handle the rest.
Poll the task or use webhook_url for async notification. Download or stream the result. Done.
Two API calls. One key. Image generated and animated into a video. See Batch I2V for catalog-scale workflows and A/B Variations for multi-treatment creative testing.
```python
from openai import OpenAI
import requests, time

client = OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)

# Step 1: Generate an image
image = client.images.generate(
    model="gpt-image-1",
    prompt="A golden retriever sitting in a sunlit meadow",
    size="1024x1024"
)
image_url = image.data[0].url
print(f"Image: {image_url}")

# Step 2: Turn the image into a video
video = requests.post(
    "https://api.creativeai.run/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"},
    json={
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": "The dog turns its head and wags its tail gently",
        "duration": 5,
        "aspect_ratio": "1:1"
    }
).json()

# Step 3: Poll for completion (or use webhook_url)
task_id = video["data"]["task_id"]
while True:
    status = requests.get(
        f"https://api.creativeai.run/v1/video/generations/{task_id}",
        headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"}
    ).json()
    if status["data"]["status"] == "completed":
        print(f"Video: {status['data']['video_url']}")
        break
    time.sleep(5)
```

GPT Image 1, Seedream, Gemini for images. Kling, Seedance, Veo for video. One API key covers everything.
If your image model hits a 429 or goes down, we route to the next best option. Your pipeline never stalls.
Skip polling. Pass webhook_url and get notified when your video is ready. Perfect for async pipelines and n8n workflows.
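A minimal sketch of the async pattern: the payload matches the polling example above plus a `webhook_url` field. The receiver URL is hypothetical, and the network call itself is shown commented out:

```python
API_BASE = "https://api.creativeai.run/v1"

def build_async_job(image_url: str, motion_prompt: str, webhook_url: str) -> dict:
    """Same payload as the polling example, plus webhook_url so
    CreativeAI POSTs the finished video to your endpoint instead of
    making you poll the task."""
    return {
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": motion_prompt,
        "duration": 5,
        "webhook_url": webhook_url,
    }

payload = build_async_job(
    "https://cdn.example.com/hero.png",
    "Slow push-in on the product",
    "https://your-app.example.com/hooks/video-done",  # hypothetical receiver
)
# import requests
# resp = requests.post(f"{API_BASE}/video/generations",
#                      headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"},
#                      json=payload)
# task_id = resp.json()["data"]["task_id"]  # keep for reconciliation
```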
No subscriptions. Images from ~$0.21 (3 credits), videos from ~$0.36 (5 credits). Pay only for what you generate.
Use the official OpenAI Python/JS SDK. Change two lines — base_url and api_key — and you're connected.
Generate up to 4 images per request with n=4. Create variations, then pick the best one to animate.
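A sketch of the variation flow. The live SDK call needs a real key, so it is shown as a comment; the helper below just extracts candidate URLs from a response-shaped dict matching the images API:

```python
def extract_urls(response: dict) -> list[str]:
    """Pull every candidate URL from an images.generate-style response,
    i.e. {"data": [{"url": ...}, ...]}."""
    return [item["url"] for item in response["data"]]

# Live call with the official SDK (sketch, needs a real key):
# from openai import OpenAI
# client = OpenAI(api_key="YOUR_CREATIVEAI_KEY",
#                 base_url="https://api.creativeai.run/v1")
# result = client.images.generate(model="gpt-image-1",
#                                 prompt="A ceramic mug on marble, soft light",
#                                 size="1024x1024", n=4)
# urls = [img.url for img in result.data]

# Offline stand-in with the same shape:
fake = {"data": [{"url": f"https://cdn.example.com/img{i}.png"} for i in range(4)]}
urls = extract_urls(fake)  # pick the best of the four, then animate it
```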
Choose the best image model and the best video model for each job. Swap freely — your pipeline code stays the same.
- `gpt-image-1`: OpenAI's latest. Excellent prompt adherence.
- `seedream-3`: ByteDance. Great for commercial and product imagery.
- `gemini-2.0-flash-exp`: Google. Strong with text-in-image and complex scenes.
- `dall-e-3`: Auto-routes to GPT Image 1. Zero-change migration.
- `kling-v3`: Next-gen quality. Best temporal consistency. 5–10s.
- `seedance-1.5`: ByteDance video gen. Quick turnaround.
- `veo-pro`: Google DeepMind. Cinematic quality at higher cost.
Generate branded images, animate into short clips, and post to LinkedIn/Instagram/TikTok — all automated via n8n or Zapier.
Create product photos on clean backgrounds, then animate with subtle motion for listing videos and ads.
Blog illustrations → animated headers. Educational diagrams → explainer clips. One pipeline, infinite content.
Submit one hero product image with 4 different motion prompts — rotation, zoom, lifestyle, dramatic. Pick the CTR winner, then scale it across the catalog.
Generate concept art and turn it into animated mockups for investor pitches and client presentations.
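The A/B treatment pattern above can be sketched as one hero image fanned out across several motion prompts. The payload fields follow the `/v1/video/generations` example earlier on this page; the treatment labels and prompt texts are illustrative, and the actual POST is left as a comment:

```python
# Hypothetical motion treatments for one hero image.
MOTION_PROMPTS = {
    "rotation": "Slow 360-degree rotation on a clean background",
    "zoom": "Gradual push-in toward the product",
    "lifestyle": "Handheld pan across the product in a sunlit room",
    "dramatic": "Low-key lighting sweep with a slow dolly move",
}

def build_ab_jobs(image_url: str) -> list[tuple[str, dict]]:
    """One (label, payload) pair per treatment, all from the same image."""
    return [
        (label, {"model": "kling-v3", "image_url": image_url,
                 "prompt": prompt, "duration": 5, "aspect_ratio": "1:1"})
        for label, prompt in MOTION_PROMPTS.items()
    ]

jobs = build_ab_jobs("https://cdn.example.com/hero.png")
# POST each payload to /v1/video/generations, track CTR per label,
# then scale the winning treatment across the catalog.
```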
Building a product-video app, ad-creative tool, or catalog video pipeline? Here's what changes when you use CreativeAI as your video generation backend instead of integrating each provider directly.
| | Direct Integration | CreativeAI Backend |
|---|---|---|
| API keys to manage | 1 per provider (Kling, Runway, Pika, Veo...) | 1 key — all models |
| Failover on 429 / outage | You build it | Automatic — routes to next model |
| Webhook delivery | Varies by provider | Unified — HMAC-signed, 3-retry backoff |
| Billing | Separate invoice per provider | One credit balance, one invoice |
| New model support | New integration per model | Day-one access — same endpoint |
| Catalog-scale batches | Build queue + concurrency yourself | Bounded concurrency + webhook per SKU |
| White-label ready | Your branding on each provider | Zero CreativeAI branding — per-client keys |
| OpenAI SDK compatible | No | Yes — change base_url and api_key |
| Sora 1 shut down (Mar 13) | Scramble for a new provider | Already covered — 5+ video models on one key |
| DALL-E 3 shutdown (May 12) | Rebuild image pipeline separately | Same key — images + videos, DALL-E 3 alias auto-routes |
If you're embedding image-to-video generation into a SaaS product, catalog tool, or ad-creative platform, here's the proof your team needs to evaluate us as a backend.
Per-key spend controls, pay-per-use economics, failover, and integration examples for product teams.
Batch product-image generation, async video delivery per SKU, and white-label pricing proof.
HMAC-signed callbacks, 3-retry backoff, failover scenarios, and render-status visibility guarantees.
Per-client API keys, zero branding, volume pricing, and reseller rewards for video app backends.
Building a product-video SaaS, daily auto-posting tool, or ad-creative platform? These are the proof pages your engineering and product team will want before committing to a backend.
HMAC-signed webhooks, 3-retry backoff, 7-day dedup, mid-render capacity failover, and render-status phase visibility — the proof your ops team needs for daily high-volume video pipelines.
Read the reliability guarantees →

Reference-anchored generation preserves product details, logos, and brand identity across image and video outputs — critical for catalog-scale product video apps and ad-creative platforms.

See the fidelity proof →

Exact per-model costs, per-key monthly spend caps, and volume discount tiers — so you can price your video product profitably without subscription math or surprise invoices.

View exact costs →

DALL-E 2 is already deprecated. DALL-E 3 shuts down May 12, 2026. If your pipeline generates both images and videos, CreativeAI replaces both DALL-E and Sora with a single API key — same OpenAI SDK, 2-line change, no lock-in.
- dall-e-3 alias auto-routes to GPT Image 1 — zero code changes
- Use code DALLE1000 for 3,000 free image credits on signup
- Dual migration — one change: dall-e-3 → GPT Image 1

Two API calls: first, generate an image via POST /v1/images/generations with your prompt and model. Then pass the returned image URL to POST /v1/video/generations with a motion prompt. One API key handles both steps. Poll the task or use a webhook_url for async notification.
Kling V3 (Standard and Pro), Kling O3 (Standard and Pro), and Seedance 1.5 all support image-to-video. Pass your image URL in the image_url field. Veo 3.1 uses reference images via a separate reference-to-video workflow. Use "model": "auto" to let CreativeAI pick the best available model.
Images cost ~3 credits (~$0.21) and videos cost 5–50 credits depending on the model. A full image-to-video pipeline with Kling V3 Standard costs ~8 credits total (~$0.57). No subscription, no minimum spend. You get 50 free credits on signup with no credit card.
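The credit math above, spelled out using the figures quoted on this page ($100 = 1,400 credits, images at 3 credits, Kling V3 Standard at 5):

```python
CREDIT_PRICE = 100 / 1400          # $100 buys 1,400 credits, ~$0.071 each
image_credits = 3                  # e.g. GPT Image 1
video_credits = 5                  # e.g. Kling V3 Standard
pipeline_credits = image_credits + video_credits
pipeline_cost = pipeline_credits * CREDIT_PRICE   # full image-to-video run
```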
Yes. Generate hero product images, then animate each into a short video. Use bounded concurrency with ThreadPoolExecutor or async workers, and pass webhook_url per SKU so your system receives each finished video automatically. See the Batch I2V and A/B Variations code examples above.
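A bounded-concurrency sketch of that batch pattern. The webhook receiver URL is hypothetical, and the worker here only builds each payload; in production it would also POST to /v1/video/generations:

```python
from concurrent.futures import ThreadPoolExecutor

WEBHOOK_BASE = "https://your-app.example.com/hooks"  # hypothetical receiver

def build_sku_job(sku: str, image_url: str) -> dict:
    """One video payload per SKU; the per-SKU webhook_url lets your
    system match each finished render back to its product."""
    return {
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": "Slow 360-degree rotation on a clean background",
        "duration": 5,
        "webhook_url": f"{WEBHOOK_BASE}/video-done?sku={sku}",
    }

def submit_catalog(catalog: dict, max_workers: int = 4) -> list:
    """Bounded concurrency: at most max_workers submissions in flight."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda kv: build_sku_job(*kv), catalog.items()))

jobs = submit_catalog({"SKU-001": "https://cdn.example.com/1.png",
                       "SKU-002": "https://cdn.example.com/2.png"})
```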
No. CreativeAI is pay-per-generation with no subscription, no monthly minimum, and no seat fees. Top up credits when you need them. $100 = 1,400 credits (~$0.071/credit). 50 free credits on signup, no credit card required.
Yes. Sora 1 was shut down on March 13, 2026. CreativeAI uses an OpenAI-compatible API — change base_url and api_key in your existing OpenAI SDK code, and you get image generation plus video generation on the same key. Use code SORASWITCH for 50 free video credits.
Add a webhook_url field to your video generation request. When the render completes, CreativeAI sends a POST to your URL with the video result. Webhooks are HMAC-signed with your API key, retried 3 times with exponential backoff, and deduplicated with a 7-day window. Polling is always available as a fallback.
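Since the callbacks are HMAC-signed with your API key, your receiver should verify each delivery. A minimal sketch: the SHA-256 hex-digest scheme and the header the signature arrives in are assumptions, so check the webhook docs for the exact algorithm and header name:

```python
import hmac, hashlib

def verify_webhook(raw_body: bytes, signature_header: str, api_key: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in
    constant time to the signature header value."""
    expected = hmac.new(api_key.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Simulated delivery: sign a body the way the server (presumably) would.
body = b'{"data": {"task_id": "tsk_123", "status": "completed"}}'
sig = hmac.new(b"YOUR_CREATIVEAI_KEY", body, hashlib.sha256).hexdigest()
ok = verify_webhook(body, sig, "YOUR_CREATIVEAI_KEY")
```

Verifying against the raw bytes (not a re-serialized JSON object) matters: any whitespace or key-order change would break the digest.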
CreativeAI automatically fails over to the next-best available model. This works both at submission and mid-render. The response includes model_actual so you know which model served the request, and adjusted_credits if the fallback costs less. Your pipeline never stalls.
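A small bookkeeping sketch for that failover behavior. The field names model_actual and adjusted_credits come from the paragraph above; the exact response nesting is an assumption:

```python
def reconcile_job(response: dict, requested_model: str, quoted_credits: int) -> dict:
    """Inspect a generation response for failover bookkeeping:
    which model actually served the request, and what it cost."""
    data = response.get("data", {})
    served = data.get("model_actual", requested_model)
    credits = data.get("adjusted_credits", quoted_credits)
    return {
        "failed_over": served != requested_model,
        "model_served": served,
        "credits_charged": credits,
    }

# A hypothetical fallback response where kling-v3 was unavailable:
resp = {"data": {"task_id": "tsk_9", "model_actual": "seedance-1.5",
                 "adjusted_credits": 5}}
info = reconcile_job(resp, "kling-v3", 8)
```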
See how teams in your vertical use image-to-video generation today.
Listing-photo → virtual staging video, twilight transforms, batch photo-set pipelines.
MLS photo → walkthrough video with HMAC-signed webhook delivery, CSV batch manifests, agent-friendly status copy.
Product-image → product video at SKU scale, Shopify webhook delivery, batch manifests.
Social clips from thumbnails, short-form promo videos, multi-platform format variants.
n8n / Make / Zapier video nodes, scheduled social video, webhook-driven pipelines.
Product hero → ad video pipeline, bulk variation generation, multi-platform ad formats.
Bounded async batches, signed webhooks, SKU-level JSONL manifests for PDP video.
HMAC signatures, retry safety, 7-day dedup, render-status visibility guarantees.
Exact per-model costs, volume discount tiers, no subscriptions or seat fees.
Preserve product details, logos, and brand identity across video outputs.