Replace base_url and api_key — your OpenAI SDK code works as-is. Same client.images.generate(), same response shape, plus video generation on the same key.
Use code SORASWITCH for 50 free video credits.

# Before (Sora / OpenAI)
client = OpenAI(
    api_key="sk-...",
    # base_url default → api.openai.com
)

# After (CreativeAI)
client = OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)
# Same SDK, same methods — plus video

Generate an image from text, then animate it into a video — all through one API key. No stitching providers. No juggling credentials. One pipeline, multiple models, automatic failover. Especially strong for product-media workflows: hero image → PDP clip, catalog refreshes, and ad-creative variations with async webhook delivery.
Call /v1/images/generations with your prompt. Pick from GPT Image 1, Seedream 3, Gemini, or any supported model.
Pass the image URL to /v1/video/generations with a motion prompt. Kling, Seedance, or Veo handle the rest.
Poll the task or use webhook_url for async notification. Download or stream the result. Done.
Two API calls. One key. Image generated and animated into a video. See Batch I2V for catalog-scale workflows and A/B Variations for multi-treatment creative testing.
from openai import OpenAI
import requests, time
client = OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)
# Step 1: Generate an image
image = client.images.generate(
    model="gpt-image-1",
    prompt="A golden retriever sitting in a sunlit meadow",
    size="1024x1024"
)
image_url = image.data[0].url
print(f"Image: {image_url}")
# Step 2: Turn the image into a video
video = requests.post(
    "https://api.creativeai.run/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"},
    json={
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": "The dog turns its head and wags its tail gently",
        "duration": "5s",
        "aspect_ratio": "1:1",
        "webhook_url": "https://your-app.com/webhooks/creativeai"
    },
    timeout=60,
)
video.raise_for_status()
job = video.json()
print(job["id"], job["status"])
# Step 3: Poll for completion if you don't want webhooks
while True:
    status = requests.get(
        f"https://api.creativeai.run/v1/video/generations/{job['id']}",
        headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"},
        timeout=30,
    ).json()
    if status["status"] == "completed":
        print(f"Video: {status['output_url']}")
        break
    if status["status"] == "failed":
        raise RuntimeError(status.get("error", {}).get("message", "Video generation failed"))
    time.sleep(10)

GPT Image 1, Seedream, Gemini for images. Kling, Seedance, Veo for video. One API key covers everything.
If your image model hits a 429 or goes down, we route to the next best option. Your pipeline never stalls.
Skip polling. Pass webhook_url and get notified when your video is ready. Perfect for async pipelines and n8n workflows.
No subscriptions. Images from ~$0.21 (3 credits), videos from ~$0.36 (5 credits). Pay only for what you generate.
Use the official OpenAI Python/JS SDK. Change two lines — base_url and api_key — and you're connected.
Generate up to 4 images per request with n=4. Create variations, then pick the best one to animate.
Choose the best image model and the best video model for each job. Swap freely — your pipeline code stays the same.
gpt-image-1: OpenAI's latest. Excellent prompt adherence.
seedream-3: ByteDance. Great for commercial and product imagery.
gemini-2.0-flash-exp: Google. Strong with text-in-image and complex scenes.
dall-e-3: Auto-routes to GPT Image 1. Zero-change migration.
kling-v3: Next-gen quality. Best temporal consistency. 5–10s.
seedance-1.5: ByteDance video gen. Quick turnaround.
veo-pro: Google DeepMind. Cinematic quality at higher cost.
Generate branded images, animate into short clips, and post to LinkedIn/Instagram/TikTok — all automated via n8n or Zapier.
Create product photos on clean backgrounds, then animate with subtle motion for listing videos and ads.
Blog illustrations → animated headers. Educational diagrams → explainer clips. One pipeline, infinite content.
Submit one hero product image with 4 different motion prompts — rotation, zoom, lifestyle, dramatic. Pick the CTR winner, then scale it across the catalog.
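The four-treatment flow above can be sketched as payload construction. This is a minimal sketch: the `metadata` pass-through field is an assumption for matching webhook callbacks back to treatments (check the request schema for the real tagging mechanism), and the hero-image URL is a placeholder; the other fields follow the video-generation request shown earlier on this page.

```python
# Build one video-generation payload per motion treatment for a single hero image.
HERO_IMAGE_URL = "https://cdn.example.com/hero.png"  # placeholder

MOTION_TREATMENTS = {
    "rotation": "Slow 360-degree turntable rotation of the product",
    "zoom": "Smooth push-in zoom toward the product label",
    "lifestyle": "Gentle handheld pan as if filmed in a home setting",
    "dramatic": "Low-key lighting sweep with slow dolly movement",
}

def build_variation_payloads(image_url: str) -> list:
    """Return one request body per treatment, tagged so each webhook
    callback can be matched back to the treatment that produced it."""
    return [
        {
            "model": "kling-v3",
            "image_url": image_url,
            "prompt": prompt,
            "duration": "5s",
            "aspect_ratio": "1:1",
            "metadata": {"treatment": name},  # assumed pass-through field
        }
        for name, prompt in MOTION_TREATMENTS.items()
    ]

payloads = build_variation_payloads(HERO_IMAGE_URL)
for p in payloads:
    print(p["metadata"]["treatment"], "->", p["prompt"][:40])
```

Submit each payload to /v1/video/generations, record the returned IDs, and compare CTR per treatment before scaling the winner across the catalog.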
Generate concept art and turn it into animated mockups for investor pitches and client presentations.
If the buyer asks “can you turn approved product images into storefront-ready video without a second vendor?”, this is the short proof story: one hero image, one async video job, signed webhook delivery, and predictable per-SKU cost.
Take one approved product image and submit it to /v1/video/generations for a 5-second storefront clip. Same API key, no extra vendor, no manual handoff.
Run bounded async batches per SKU, persist generation IDs, and let each finished clip arrive independently instead of waiting on one giant batch job.
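The bounded-batch pattern above can be sketched with a thread pool. `submit_video_job` here is a stand-in for the real `requests.post` call to /v1/video/generations, so the sketch runs without network access; in production it would send the request and return the parsed job JSON.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_CONCURRENCY = 4  # bound in-flight jobs so the batch never floods the API

def submit_video_job(sku: str, image_url: str) -> dict:
    """Stand-in for POST /v1/video/generations: returns a fake queued job."""
    return {"id": f"gen_{sku}", "sku": sku, "status": "queued"}

def run_batch(catalog: dict) -> dict:
    """Submit one job per SKU with bounded concurrency and persist each
    generation ID, so finished clips can arrive independently via webhook."""
    job_ids = {}
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
        futures = {
            pool.submit(submit_video_job, sku, url): sku
            for sku, url in catalog.items()
        }
        for future in as_completed(futures):
            job = future.result()
            job_ids[job["sku"]] = job["id"]  # persist for webhook matching
    return job_ids

ids = run_batch({
    "SKU-001": "https://cdn.example.com/1.png",
    "SKU-002": "https://cdn.example.com/2.png",
})
print(ids)
```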
Webhook delivery uses 3 attempts with exponential backoff (0s, 5s, 30s). If delivery still fails, the final result remains available via the status API.
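Since webhooks are HMAC-signed with your API key, a receiver can verify each delivery before trusting it. A minimal sketch: SHA-256, a hex-encoded signature, and signing the raw request body are assumptions here; confirm the exact scheme and header name in the webhook documentation.

```python
import hashlib
import hmac

def verify_webhook(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in
    constant time. SHA-256 over the raw body is an assumption."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Simulated delivery: sign a payload the way the sender would, then verify.
api_key = "YOUR_CREATIVEAI_KEY"
payload = b'{"id": "gen_123", "status": "completed"}'
sig = hmac.new(api_key.encode(), payload, hashlib.sha256).hexdigest()

print(verify_webhook(api_key, payload, sig))      # → True  (body intact)
print(verify_webhook(api_key, b"tampered", sig))  # → False (body modified)
```

Reject any delivery that fails verification and rely on the retry schedule, or fall back to the status API for the final result.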
Paste-ready positioning
“CreativeAI lets us take an approved product image and turn it into a storefront-ready video on the same API. We can submit one async job per SKU, receive signed webhook callbacks as each clip finishes, and keep polling as a fallback. That means product-media automation without adding a second generation vendor.”
Building a product-video app, ad-creative tool, or catalog video pipeline? Here's what changes when you use CreativeAI as your video generation backend instead of integrating each provider directly.
| | Direct Integration | CreativeAI Backend |
|---|---|---|
| API keys to manage | 1 per provider (Kling, Runway, Pika, Veo...) | 1 key — all models |
| Failover on 429 / outage | You build it | Automatic — routes to next model |
| Webhook delivery | Varies by provider | Unified — HMAC-signed, 3-retry backoff |
| Billing | Separate invoice per provider | One credit balance, one invoice |
| New model support | New integration per model | Day-one access — same endpoint |
| Catalog-scale batches | Build queue + concurrency yourself | Bounded concurrency + webhook per SKU |
| White-label ready | Your branding on each provider | Zero CreativeAI branding — per-client keys |
| OpenAI SDK compatible | No | Yes — change base_url and api_key |
| Sora 1 shut down (Mar 13) | Scramble for a new provider | Already covered — 5+ video models on one key |
| DALL-E 3 shutdown (May 12) | Rebuild image pipeline separately | Same key — images + videos, DALL-E 3 alias auto-routes |
If you're embedding image-to-video generation into a SaaS product, catalog tool, or ad-creative platform, here's the proof your team needs to evaluate us as a backend.
Per-key spend controls, pay-per-use economics, failover, and integration examples for product teams.
Batch product-image generation, async video delivery per SKU, and white-label pricing proof.
HMAC-signed callbacks, 3-retry backoff, failover scenarios, and render-status visibility guarantees.
Per-client API keys, zero branding, volume pricing, and reseller rewards for video app backends.
Building a product-video SaaS, daily auto-posting tool, or ad-creative platform? These are the proof pages your engineering and product team will want before committing to a backend.
HMAC-signed webhooks, 3-retry backoff, 7-day dedup, mid-render capacity failover, and render-status phase visibility — the proof your ops team needs for daily high-volume video pipelines.
Read the reliability guarantees →

Reference-anchored generation preserves product details, logos, and brand identity across image and video outputs — critical for catalog-scale product video apps and ad-creative platforms.

See the fidelity proof →

Exact per-model costs, per-key monthly spend caps, and volume discount tiers — so you can price your video product profitably without subscription math or surprise invoices.

View exact costs →

DALL-E 2 is already deprecated. DALL-E 3 shuts down May 12, 2026. If your pipeline generates both images and videos, CreativeAI replaces both DALL-E and Sora with a single API key — same OpenAI SDK, 2-line change, no lock-in.

The dall-e-3 alias auto-routes to GPT Image 1 — zero code changes. Use code DALLE1000 for 3,000 free image credits on signup. Dual migration, one change: dall-e-3 → GPT Image 1.

Two API calls: first, generate an image via POST /v1/images/generations with your prompt and model. Then pass the returned image URL to POST /v1/video/generations with a motion prompt. One API key handles both steps. Poll the task or use a webhook_url for async notification.
Kling V3 (Standard and Pro), Kling O3 (Standard and Pro), and Seedance 1.5 all support image-to-video. Pass your image URL in the image_url field. Veo 3.1 uses reference images via a separate reference-to-video workflow. Use "model": "auto" to let CreativeAI pick the best available model.
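As a sketch of the "model": "auto" routing described above, a small request-body builder; the field names follow the earlier Kling example on this page.

```python
def i2v_request(image_url: str, prompt: str, model: str = "auto") -> dict:
    """Request body for POST /v1/video/generations. "auto" defers model
    choice to the router; pass an explicit model id to pin one."""
    return {
        "model": model,
        "image_url": image_url,
        "prompt": prompt,
        "duration": "5s",
    }

body = i2v_request("https://cdn.example.com/hero.png", "Slow push-in zoom")
print(body["model"])  # → auto
```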
Images cost ~3 credits (~$0.21) and videos cost 5–50 credits depending on the model. A full image-to-video pipeline with Kling V3 Standard costs ~8 credits total (~$0.57). No subscription, no minimum spend. You get 50 free credits on signup with no credit card.
Yes. Generate hero product images, then animate each into a short video. Use bounded concurrency with ThreadPoolExecutor or async workers, and pass webhook_url per SKU so your system receives each finished video automatically. See the Batch I2V and A/B Variations code examples above.
No. CreativeAI is pay-per-generation with no subscription, no monthly minimum, and no seat fees. Top up credits when you need them. $100 = 1,400 credits (~$0.071/credit). 50 free credits on signup, no credit card required.
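The credit arithmetic above can be checked in a few lines, using only the numbers quoted on this page ($100 = 1,400 credits; images 3 credits; Kling V3 Standard video 5 credits).

```python
CREDITS_PER_100_USD = 1_400  # from the pricing above: $100 = 1,400 credits

def credits_to_usd(credits: int) -> float:
    """Convert a credit count to dollars at the quoted top-up rate."""
    return round(credits * 100 / CREDITS_PER_100_USD, 2)

pipeline_credits = 3 + 5  # one image + one Kling V3 Standard clip
print(credits_to_usd(3))                 # → 0.21 (image)
print(credits_to_usd(5))                 # → 0.36 (video)
print(credits_to_usd(pipeline_credits))  # → 0.57 (full image-to-video pipeline)
```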
Yes. Sora 1 was shut down on March 13, 2026. CreativeAI uses an OpenAI-compatible API — change base_url and api_key in your existing OpenAI SDK code, and you get image generation plus video generation on the same key. Use code SORASWITCH for 50 free video credits.
Add a webhook_url field to your video generation request. When the render completes, CreativeAI sends a POST to your URL with the video result. Webhooks are HMAC-signed with your API key, delivered with 3 delivery attempts (0s, 5s, 30s), and deduplicated with a 7-day window. If delivery still fails, the final result remains available via the status API.
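Because retried deliveries can arrive more than once, a receiver-side idempotency check matching the 7-day window keeps processing exactly-once. A minimal in-memory sketch; the event-id field name is an assumption (use whatever unique id the webhook payload carries), and a production system would persist this map rather than hold it in memory.

```python
import time

DEDUP_WINDOW_SECONDS = 7 * 24 * 3600  # 7-day window, per the docs above

seen = {}  # event id -> first-seen timestamp

def should_process(event_id: str, now=None) -> bool:
    """Return True the first time an event id appears within the window,
    False for retried duplicates."""
    now = time.time() if now is None else now
    # Drop entries older than the window so the map stays bounded.
    for key in [k for k, t in seen.items() if now - t > DEDUP_WINDOW_SECONDS]:
        del seen[key]
    if event_id in seen:
        return False
    seen[event_id] = now
    return True

print(should_process("evt_1", now=0.0))  # → True  (first delivery)
print(should_process("evt_1", now=5.0))  # → False (retry, deduped)
print(should_process("evt_1", now=DEDUP_WINDOW_SECONDS + 10.0))  # → True (expired)
```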
CreativeAI automatically fails over to the next-best available model. This works both at submission and mid-render. The response includes model_actual so you know which model served the request, and adjusted_credits if the fallback costs less. Your pipeline never stalls.
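Handling the model_actual and adjusted_credits fields described above can look like this; the sample response is illustrative, not captured from the live API.

```python
def report_fallback(requested_model: str, response: dict) -> str:
    """Summarize which model served a generation and, when a cheaper
    fallback was used, how many credits were actually billed."""
    served = response.get("model_actual", requested_model)
    note = f"served by {served}"
    if served != requested_model:
        note += f" (requested {requested_model})"
    if "adjusted_credits" in response:
        note += f", billed {response['adjusted_credits']} credits"
    return note

sample = {"id": "gen_123", "status": "completed",
          "model_actual": "seedance-1.5", "adjusted_credits": 5}
print(report_fallback("kling-v3", sample))
# → served by seedance-1.5 (requested kling-v3), billed 5 credits
```

Logging this per job gives you an audit trail of which fallbacks fired and what each render actually cost.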
See how teams in your vertical use image-to-video generation today.
Listing-photo → virtual staging video, twilight transforms, batch photo-set pipelines.
MLS photo → walkthrough video with HMAC-signed webhook delivery, CSV batch manifests, agent-friendly status copy.
Product-image → product video at SKU scale, Shopify webhook delivery, batch manifests.
Social clips from thumbnails, short-form promo videos, multi-platform format variants.
n8n / Make / Zapier video nodes, scheduled social video, webhook-driven pipelines.
Product hero → ad video pipeline, bulk variation generation, multi-platform ad formats.
Bounded async batches, signed webhooks, SKU-level JSONL manifests for PDP video.
HMAC signatures, retry safety, 7-day dedup, render-status visibility guarantees.
Exact per-model costs, volume discount tiers, no subscriptions or seat fees.
Preserve product details, logos, and brand identity across video outputs.