Sora 1 shut down 2 days ago. Switch your video pipeline now — use code SORASWITCH for 50 free video credits.

Coming from Sora? Two-line switch.

Replace base_url and api_key — your OpenAI SDK code works as-is. Same client.images.generate(), same response shape, plus video generation on the same key.

  • 5+ video models (Kling v3, Seedance, Veo) — no Sora 2 $200/mo lock-in
  • Async video + HMAC-signed webhooks — no polling required
  • Use code SORASWITCH for 50 free video credits
Before → After
# Before (Sora / OpenAI)
from openai import OpenAI

client = OpenAI(
  api_key="sk-...",
  # base_url default → api.openai.com
)

# After (CreativeAI)
client = OpenAI(
  api_key="YOUR_CREATIVEAI_KEY",
  base_url="https://api.creativeai.run/v1"
)
# Same SDK, same methods — plus video
Complete Pipeline — Image + Video

Image → Video in Two API Calls

Generate an image from text, then animate it into a video — all through one API key. No stitching providers. No juggling credentials. One pipeline, multiple models, automatic failover.

How It Works

Step 1

Generate Image

Call /v1/images/generations with your prompt. Pick from GPT Image 1, Seedream 3, Gemini, or any supported model.

Step 2

Animate to Video

Pass the image URL to /v1/video/generations with a motion prompt. Kling, Seedance, or Veo handle the rest.

Step 3

Get Your Video

Poll the task or use webhook_url for async notification. Download or stream the result. Done.

Complete Pipeline in Code

Two API calls. One key. Image generated and animated into a video. See Batch I2V for catalog-scale workflows and A/B Variations for multi-treatment creative testing.

Python
from openai import OpenAI
import requests, time

client = OpenAI(
    api_key="YOUR_CREATIVEAI_KEY",
    base_url="https://api.creativeai.run/v1"
)

# Step 1: Generate an image
image = client.images.generate(
    model="gpt-image-1",
    prompt="A golden retriever sitting in a sunlit meadow",
    size="1024x1024"
)
image_url = image.data[0].url
print(f"Image: {image_url}")

# Step 2: Turn the image into a video
video = requests.post(
    "https://api.creativeai.run/v1/video/generations",
    headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"},
    json={
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": "The dog turns its head and wags its tail gently",
        "duration": 5,
        "aspect_ratio": "1:1"
    }
).json()

# Step 3: Poll for completion (or use webhook_url)
task_id = video["data"]["task_id"]
while True:
    status = requests.get(
        f"https://api.creativeai.run/v1/video/generations/{task_id}",
        headers={"Authorization": "Bearer YOUR_CREATIVEAI_KEY"}
    ).json()
    state = status["data"]["status"]
    if state == "completed":
        print(f"Video: {status['data']['video_url']}")
        break
    if state == "failed":  # surface render errors instead of polling forever
        raise RuntimeError(f"Generation failed: {status['data']}")
    time.sleep(5)

Why Build Your Pipeline on CreativeAI?

One Key, Every Model

GPT Image 1, Seedream, Gemini for images. Kling, Seedance, Veo for video. One API key covers everything.

Automatic Failover

If your image model hits a 429 or goes down, we route to the next best option. Your pipeline never stalls.

Webhook Support

Skip polling. Pass webhook_url and get notified when your video is ready. Perfect for async pipelines and n8n workflows.
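As a minimal sketch of the async flow, the request below adds `webhook_url` to the same video-generation body used in the pipeline example above, so no polling loop is needed. The field names follow this page's pipeline example; the receiving endpoint is yours.

```python
def build_video_request(image_url, prompt, webhook_url):
    """Build the JSON body for an async image-to-video job.

    Setting webhook_url tells CreativeAI to POST the result to your
    endpoint when the render completes, instead of you polling.
    """
    return {
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": prompt,
        "duration": 5,
        "webhook_url": webhook_url,
    }

def submit(payload, api_key):
    """Fire-and-forget submission; the webhook delivers the finished video."""
    import requests  # third-party: pip install requests
    return requests.post(
        "https://api.creativeai.run/v1/video/generations",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
    ).json()
```

With a webhook in place, `submit()` returns immediately with a task ID and your server receives the completed video payload later.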

Pay Per Generation

No subscriptions. Images from ~$0.21 (3 credits), videos from ~$0.36 (5 credits). Pay only for what you generate.

OpenAI SDK Compatible

Use the official OpenAI Python/JS SDK. Change two lines — base_url and api_key — and you're connected.

Batch Generation

Generate up to 4 images per request with n=4. Create variations, then pick the best one to animate.
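A short sketch of the variations flow described above, assuming the OpenAI SDK's standard `n` parameter and the `data[]` response shape: generate four candidates in one request, then pick one URL to send to the video step.

```python
def variation_urls(response):
    """Collect the URLs of all generated variations (response.data has n entries)."""
    return [item.url for item in response.data]

def generate_variations(prompt, n=4):
    """Request n image variations in a single call, then pick the best to animate."""
    from openai import OpenAI  # third-party SDK
    client = OpenAI(
        api_key="YOUR_CREATIVEAI_KEY",
        base_url="https://api.creativeai.run/v1",
    )
    resp = client.images.generate(
        model="gpt-image-1", prompt=prompt, n=n, size="1024x1024"
    )
    return variation_urls(resp)
```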

Mix and Match Models

Choose the best image model and the best video model for each job. Swap freely — your pipeline code stays the same.

Image Models

GPT Image 1 (Popular)
gpt-image-1

OpenAI's latest. Excellent prompt adherence.

Seedream 3 (Fast)
seedream-3

ByteDance. Great for commercial and product imagery.

Gemini (Versatile)
gemini-2.0-flash-exp

Google. Strong with text-in-image and complex scenes.

DALL-E 3 (Alias)
dall-e-3

Auto-routes to GPT Image 1. Zero-change migration.

Video Models

Kling V3 (Recommended)
kling-v3

Next-gen quality. Best temporal consistency. 5–10s.

Seedance 1.5 (Fast)
seedance-1.5

ByteDance video gen. Quick turnaround.

Veo Pro (Premium)
veo-pro

Google DeepMind. Cinematic quality at higher cost.

Built for Real Pipelines

Social Media Automation

Generate branded images, animate into short clips, and post to LinkedIn/Instagram/TikTok — all automated via n8n or Zapier.

n8n · Automation · Social

E-Commerce Product Videos

Create product photos on clean backgrounds, then animate with subtle motion for listing videos and ads.

E-Commerce · Product · Ads

Content Creation at Scale

Blog illustrations → animated headers. Educational diagrams → explainer clips. One pipeline, infinite content.

Content · Scale · Batch

Ad Creative A/B Testing

Submit one hero product image with 4 different motion prompts — rotation, zoom, lifestyle, dramatic. Pick the CTR winner, then scale it across the catalog.

Ads · A/B Testing · Variations

Prototype & Pitch Decks

Generate concept art and turn it into animated mockups for investor pitches and client presentations.

Startup · Prototype · Design

Direct Provider APIs vs. CreativeAI Backend

Building a product-video app, ad-creative tool, or catalog video pipeline? Here's what changes when you use CreativeAI as your video generation backend instead of integrating each provider directly.

| | Direct Integration | CreativeAI Backend |
|---|---|---|
| API keys to manage | 1 per provider (Kling, Runway, Pika, Veo...) | 1 key — all models |
| Failover on 429 / outage | You build it | Automatic — routes to next model |
| Webhook delivery | Varies by provider | Unified — HMAC-signed, 3-retry backoff |
| Billing | Separate invoice per provider | One credit balance, one invoice |
| New model support | New integration per model | Day-one access — same endpoint |
| Catalog-scale batches | Build queue + concurrency yourself | Bounded concurrency + webhook per SKU |
| White-label ready | Your branding on each provider | Zero CreativeAI branding — per-client keys |
| OpenAI SDK compatible | No | Yes — change base_url and api_key |
| Sora 1 shut down (Mar 13) | Scramble for a new provider | Already covered — 5+ video models on one key |
| DALL-E 3 shutdown (May 12) | Rebuild image pipeline separately | Same key — images + videos, DALL-E 3 alias auto-routes |

Transparent Pricing

No subscriptions. No minimums. Pay only for what you generate.

Images

~$0.21/image

3 credits per standard image. ~$0.071/credit at $100 top-up.

Videos

~$0.36/video

5 credits for a 5s clip. ~$0.36–$0.50 depending on credit pack.

DALL-E 3 shuts down May 12, 2026

Also migrating from DALL-E?

DALL-E 2 is already deprecated. DALL-E 3 shuts down May 12, 2026. If your pipeline generates both images and videos, CreativeAI replaces both DALL-E and Sora with a single API key — same OpenAI SDK, 2-line change, no lock-in.

  • dall-e-3 alias auto-routes to GPT Image 1 — zero code changes
  • 6+ image models (gpt-image-1, Seedream 3, and more) on the same endpoint
  • Use code DALLE1000 for 3,000 free image credits on signup
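A minimal sketch of the zero-change migration: an existing DALL-E 3 call keeps its model string, and the alias routing happens server-side.

```python
def request_image(client, prompt):
    """Existing DALL-E 3 call, unchanged: the dall-e-3 alias auto-routes
    to GPT Image 1 on CreativeAI's side, so only base_url and api_key
    change in your client setup."""
    return client.images.generate(
        model="dall-e-3", prompt=prompt, size="1024x1024"
    )
```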

Dual migration — one change

Images: dall-e-3 → GPT Image 1
Videos: Sora → Kling v3 / Seedance
Same API key for both
Same OpenAI SDK

Frequently Asked Questions

How does the image-to-video pipeline work?

Two API calls: first, generate an image via POST /v1/images/generations with your prompt and model. Then pass the returned image URL to POST /v1/video/generations with a motion prompt. One API key handles both steps. Poll the task or use a webhook_url for async notification.

Which video models support image-to-video?

Kling V3 (Standard and Pro), Kling O3 (Standard and Pro), and Seedance 1.5 all support image-to-video. Pass your image URL in the image_url field. Veo 3.1 uses reference images via a separate reference-to-video workflow. Use "model": "auto" to let CreativeAI pick the best available model.

How much does image-to-video cost?

Images cost ~3 credits (~$0.21) and videos cost 5–50 credits depending on the model. A full image-to-video pipeline with Kling V3 Standard costs ~8 credits total (~$0.57). No subscription, no minimum spend. You get 50 free credits on signup with no credit card.

Can I use this for product and catalog video automation?

Yes. Generate hero product images, then animate each into a short video. Use bounded concurrency with ThreadPoolExecutor or async workers, and pass webhook_url per SKU so your system receives each finished video automatically. See the Batch I2V and A/B Variations code examples above.
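The catalog pattern above can be sketched with a `ThreadPoolExecutor` and one `webhook_url` per SKU. The webhook base URL is a hypothetical receiver on your side, and the HTTP `post` function is injected so the batching logic stays testable without network access.

```python
from concurrent.futures import ThreadPoolExecutor

WEBHOOK_BASE = "https://yourapp.example.com/hooks"  # hypothetical receiver

def sku_payload(sku, image_url):
    """Per-SKU video job; each job gets its own webhook_url so the finished
    video routes back to the right product."""
    return {
        "model": "kling-v3",
        "image_url": image_url,
        "prompt": "Subtle 360-degree product rotation",
        "duration": 5,
        "webhook_url": f"{WEBHOOK_BASE}/{sku}",
    }

def submit_catalog(items, post, max_workers=4):
    """Submit one job per (sku, image_url) pair with bounded concurrency.

    `post` is injected (e.g. a thin wrapper around requests.post) so the
    same code can be exercised with a stub in tests.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda it: post(sku_payload(*it)), items))
```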

Do I need a subscription or monthly fee?

No. CreativeAI is pay-per-generation with no subscription, no monthly minimum, and no seat fees. Top up credits when you need them. $100 = 1,400 credits (~$0.071/credit). 50 free credits on signup, no credit card required.

Can I migrate from Sora to this pipeline?

Yes. Sora 1 was shut down on March 13, 2026. CreativeAI uses an OpenAI-compatible API — change base_url and api_key in your existing OpenAI SDK code, and you get image generation plus video generation on the same key. Use code SORASWITCH for 50 free video credits.

How do webhooks work for video generation?

Add a webhook_url field to your video generation request. When the render completes, CreativeAI sends a POST to your URL with the video result. Webhooks are HMAC-signed with your API key, retried 3 times with exponential backoff, and deduplicated with a 7-day window. Polling is always available as a fallback.
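A receiver-side sketch of the signature check: the page states webhooks are HMAC-signed with your API key, but the exact header name and digest scheme are assumptions here (HMAC-SHA256 hex is shown) and should be confirmed against CreativeAI's webhook docs.

```python
import hashlib
import hmac

def verify_webhook(api_key, raw_body, signature):
    """Check a webhook's HMAC signature against the raw request body.

    Assumes an HMAC-SHA256 hex digest keyed on your API key; verify the
    header name and scheme against the provider's webhook documentation.
    """
    expected = hmac.new(api_key.encode(), raw_body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during comparison
    return hmac.compare_digest(expected, signature)
```

Reject any delivery where `verify_webhook()` returns False, and keep polling available as the documented fallback.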

What if a video model is at capacity or fails?

CreativeAI automatically fails over to the next-best available model. This works both at submission and mid-render. The response includes model_actual so you know which model served the request, and adjusted_credits if the fallback costs less. Your pipeline never stalls.
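A small sketch of consuming the failover fields this answer names (`model_actual`, `adjusted_credits`); the surrounding response shape is assumed to match the `data` envelope used in the pipeline example above.

```python
def describe_fallback(resp):
    """Summarize which model actually served the request and any credit
    adjustment, using the failover fields described in this FAQ."""
    data = resp.get("data", {})
    served = data.get("model_actual", "unknown")
    note = f"served by {served}"
    if "adjusted_credits" in data:
        note += f", billed {data['adjusted_credits']} credits"
    return note
```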

Stop Stitching Providers Together

One API key. One pipeline. Image to video in two calls.
50 free credits to start — no credit card required.