Zero Custom Adapters Required

Vercel AI SDK + CreativeAI

Add AI image generation to your Next.js app in 5 minutes. Uses the official @ai-sdk/openai provider — no proprietary adapter to install, audit, or maintain.

Access GPT Image 1, Seedream 3, DALL-E 3, and more through generateImage(). 2-line migration from OpenAI. Full OpenAI API compatibility.

3 steps

Setup in 3 steps

1. npm install ai @ai-sdk/openai
2. Set CREATIVEAI_API_KEY in .env.local
3. Create provider with baseURL and call generateImage()
lib/creativeai.ts
// lib/creativeai.ts
import { createOpenAI } from "@ai-sdk/openai";

export const creativeai = createOpenAI({
  apiKey: process.env.CREATIVEAI_API_KEY,
  baseURL: "https://api.creativeai.run/v1",
});
app/api/generate/route.ts
// app/api/generate/route.ts
import { creativeai } from "@/lib/creativeai";
import { generateImage } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { image } = await generateImage({
    model: creativeai.image("gpt-image-1"),
    prompt,
    size: "1024x1024",
  });

  return Response.json({ image: image.base64 });
}

Why CreativeAI for the Vercel AI SDK?

Full OpenAI compatibility means zero friction. No adapter lock-in. Multi-model access means you ship faster.

Zero custom adapters

Uses the official @ai-sdk/openai provider from Vercel. No proprietary packages to install, audit, or keep updated.

2-line migration

Already using OpenAI with the AI SDK? Change apiKey and add baseURL. Your entire codebase stays the same.

Multi-model access

GPT Image 1, Seedream 3, DALL-E 3 — switch models with a single parameter. No new imports or config.

Automatic failover

If a model is rate-limited or down, CreativeAI routes to an equivalent automatically. Zero-downtime generation.
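Failover happens server-side, so the calling code does not change. A request can still fail for ordinary network reasons, though, and a thin retry wrapper is cheap insurance. This helper is a sketch, not part of the AI SDK or the CreativeAI API:

```typescript
// Hypothetical helper — not part of the AI SDK or CreativeAI.
// Retries a flaky async call with exponential backoff.
export async function withRetry<T>(
  fn: () => Promise<T>,
  { attempts = 3, baseDelayMs = 250 }: { attempts?: number; baseDelayMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 250ms, 500ms, 1000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage: const { image } = await withRetry(() => generateImage({ model, prompt }));
```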

Edge-ready

Works in Vercel Edge Functions, Serverless Functions, and traditional Node.js. Standard HTTP — no runtime constraints.

Pay-per-generation

No subscription lock-in. GPT Image 1 from ~2 credits per image. 50 free credits on signup — no credit card.

2-line change

Migrate from OpenAI in 2 lines

Already using @ai-sdk/openai with OpenAI? Change your API key and add a baseURL. Every generateImage() call, every parameter, every response format stays exactly the same.

  • Same generateImage() function
  • Same response format (base64 / URL)
  • Same size parameters (1024x1024, 1536x1024, etc.)
  • Same model names (gpt-image-1, dall-e-3)
2-line migration
// Before: OpenAI directly
import { createOpenAI } from "@ai-sdk/openai";
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After: CreativeAI (2-line change)
const creativeai = createOpenAI({
  apiKey: process.env.CREATIVEAI_API_KEY,     // change key
  baseURL: "https://api.creativeai.run/v1",   // add baseURL
});

// Everything else stays the same
const { image } = await generateImage({
  model: creativeai.image("gpt-image-1"),
  prompt: "A cyberpunk cityscape at night",
});

Official provider vs. custom adapters

CreativeAI works with the official Vercel AI SDK provider. No proprietary packages in your dependency tree.

| Feature | CreativeAI | Custom Adapter Providers |
| --- | --- | --- |
| Package required | Official @ai-sdk/openai | Proprietary adapter npm package |
| Adapter maintenance | Maintained by Vercel | Third-party maintained |
| Supply chain risk | Zero — no proprietary code | Must audit + trust third-party pkg |
| generateImage() support | Yes | Varies |
| Image editing endpoint | Yes — /v1/images/edits | Limited |
| Video generation | Yes — Kling v3, Seedance 1.5, Veo 3.1 | No |
| Automatic failover | Built-in model routing | No |
| Models available | 6+ image & video | 1-2 models |
| Model quality tier | GPT Image 1, Seedream 3, DALL-E 3 | Stable Diffusion variants |
| OpenAI SDK compatible | Full parity | Partial |
| Edge Function support | Yes | Varies |
| Pricing model | Pay-per-generation, no subscription — ~$0.07–$0.21/image | $0.002/img (SD variants only) |
| Quality tier for price | GPT Image 1 / Seedream 3 — frontier quality | Stable Diffusion — commodity quality |

“But another provider is $0.002/image”

Here’s what that price actually buys — and why it matters for your product.

$0.002/image adapters
  • Stable Diffusion variants only — no GPT Image 1, no Seedream 3
  • Proprietary npm adapter in your supply chain
  • No video generation — images only
  • No automatic failover across models

CreativeAI — ~$0.07–$0.21/image
  • GPT Image 1 + Seedream 3 — frontier-quality output
  • Official @ai-sdk/openai — zero third-party code
  • Image + video (Kling v3, Seedance 1.5, Veo 3.1) — one key
  • Automatic failover + 50 free credits to start

If your users see the output, model quality is your product quality. Frontier models cost more per call — but they ship features that SD variants cannot. See full pricing breakdown →

Supply Chain Security

Why “no custom adapter” matters

New proprietary AI SDK adapters appear every week. Here’s why CreativeAI’s approach is fundamentally different.

Zero supply-chain risk

Custom adapters inject third-party code between your app and the AI provider. CreativeAI uses the official @ai-sdk/openai package maintained by Vercel — the same one you'd use with OpenAI directly.

Never blocked by adapter lag

When Vercel ships an AI SDK update, CreativeAI works immediately — because there's no adapter layer to update. Proprietary adapters can take weeks to catch up.

Image + video in one key

Most adapter-based providers only support image generation. CreativeAI gives you GPT Image 1, Seedream 3, DALL-E 3 for images and Kling v3, Seedance 1.5, Veo 3.1 for video — same API key.

Frontier models, not just SD variants

Adapter providers typically wrap Stable Diffusion variants. CreativeAI routes to GPT Image 1 and other frontier models with automatic failover across providers.

Bottom line: Proprietary adapters lock you into a third-party package you must audit, maintain, and trust. CreativeAI is OpenAI-compatible at the API level, so the official Vercel provider just works — today and after every future AI SDK release.
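Because compatibility lives at the API level, you do not even need the SDK: plain fetch against the endpoint works too. A sketch assuming the standard OpenAI /v1/images/generations path and response shape (data[0].b64_json):

```typescript
// Sketch: calling the OpenAI-compatible REST API directly, no SDK.
// Assumes the standard OpenAI /v1/images/generations path and request shape.

// Pure helper so the request body is easy to inspect (and test).
export function buildImageRequest(prompt: string, size = "1024x1024") {
  return { model: "gpt-image-1", prompt, size };
}

export async function generateRaw(prompt: string): Promise<string> {
  const res = await fetch("https://api.creativeai.run/v1/images/generations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CREATIVEAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildImageRequest(prompt)),
  });
  const json = await res.json();
  return json.data[0].b64_json; // OpenAI-style response: data[0].b64_json
}
```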

Copy-paste patterns for Next.js

From App Router handlers to Server Actions and Edge runtime, these are the first Vercel AI SDK patterns most teams ship.

Multi-model switching

One parameter change

Access GPT Image 1, Seedream 3, DALL-E 3, and more through the same generateImage() call. Switch models without changing code structure.

multi-model.ts
// Switch models with a single parameter — no code changes
const { image } = await generateImage({
  model: creativeai.image("gpt-image-1"),  // OpenAI GPT Image 1
  // model: creativeai.image("dall-e-3"),   // DALL-E 3
  // model: creativeai.image("seedream-3"), // Seedream 3 — fast + high quality
  prompt: "Product photo of a minimalist watch on marble",
  size: "1536x1024",  // Landscape
});

Server Actions

Call from client components

Use generateImage() directly in Server Actions. Your client components call the action — no API route needed.

app/actions.ts
// app/actions.ts
"use server";
import { creativeai } from "@/lib/creativeai";
import { generateImage } from "ai";

export async function generateProductImage(prompt: string) {
  const { image } = await generateImage({
    model: creativeai.image("gpt-image-1"),
    prompt,
    size: "1024x1024",
  });

  return image.base64;
}

Edge Route Handler

Same code, edge runtime

Deploy the same generateImage() pattern on Vercel Edge by exporting runtime = "edge" and keeping the rest of your provider setup unchanged.

app/api/generate-edge/route.ts
// app/api/generate-edge/route.ts
import { creativeai } from "@/lib/creativeai";
import { generateImage } from "ai";

export const runtime = "edge";
export const maxDuration = 30;

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { image } = await generateImage({
    model: creativeai.image("seedream-3"),
    prompt,
    size: "1024x1024",
  });

  return Response.json({ image: image.base64 });
}

Image editing endpoint

Inpainting, variations, style transfer

CreativeAI supports the full /v1/images/edits endpoint. Build generate-then-edit workflows in your Next.js app with the same API key and auth.

app/api/edit/route.ts
// app/api/edit/route.ts — Image editing via OpenAI-compatible endpoint
export async function POST(req: Request) {
  const formData = await req.formData();

  const response = await fetch("https://api.creativeai.run/v1/images/edits", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CREATIVEAI_API_KEY}`,
    },
    body: formData,
  });

  return Response.json(await response.json());
}
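On the client side, the route above expects multipart form data. Here is a small sketch of building that body; the field names (image, prompt, size) follow the OpenAI images/edits spec:

```typescript
// Sketch: build the multipart body for the image-edit route above.
// Field names ("image", "prompt", "size") follow the OpenAI images/edits spec.
export function buildEditForm(image: Blob, prompt: string, size = "1024x1024"): FormData {
  const form = new FormData();
  form.append("image", image, "input.png");
  form.append("prompt", prompt);
  form.append("size", size);
  return form;
}

// Usage from a client component:
// await fetch("/api/edit", { method: "POST", body: buildEditForm(file, "remove background") });
```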

Image + Video — Same API Key

Video generation from Next.js

Most adapter-based providers only support images. CreativeAI gives you Kling v3, Seedance 1.5, and Veo 3.1 through the same API key — submit a job, receive a webhook when rendering completes.

Submit video job

Text-to-video or image-to-video

POST to /v1/video/generations from any Next.js API route. Pass image_url for image-to-video, or omit it for text-to-video. Include a webhook URL for async delivery.

app/api/video/route.ts
// app/api/video/route.ts — Video generation from Next.js
export async function POST(req: Request) {
  const { prompt, image_url } = await req.json();

  const response = await fetch("https://api.creativeai.run/v1/video/generations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CREATIVEAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "kling-v3",           // or seedance-1.5, veo-3.1
      prompt,
      ...(image_url && { image_url }),  // image-to-video if provided
      duration: "5",
      aspect_ratio: "16:9",
      webhook_url: `${process.env.NEXT_PUBLIC_APP_URL}/api/webhooks/video`,
    }),
  });

  const job = await response.json();
  return Response.json({ id: job.id, status: job.status });
}
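The conditional spread in the route above is what switches between text-to-video and image-to-video: image_url is included only when the caller provides it. The same logic factored into a standalone helper (the helper itself is illustrative, not part of any API):

```typescript
// Hypothetical helper mirroring the route above: builds the job payload,
// including image_url only when provided (image-to-video vs text-to-video).
export function buildVideoJob(prompt: string, imageUrl?: string) {
  return {
    model: "kling-v3",
    prompt,
    ...(imageUrl && { image_url: imageUrl }),
    duration: "5",
    aspect_ratio: "16:9",
  };
}
```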

Webhook receiver

HMAC-signed, retried 3x

CreativeAI POSTs the result when rendering completes — HMAC-signed with your API key, retried 3x with backoff. No polling loops needed. ~5 credits per video (~$0.36–$0.50).

app/api/webhooks/video/route.ts
// app/api/webhooks/video/route.ts — Receive render completion
import { createHmac } from "crypto";

export async function POST(req: Request) {
  const body = await req.text();
  const signature = req.headers.get("x-webhook-signature") ?? "";

  // Verify HMAC-SHA256 signature
  const expected = createHmac("sha256", process.env.CREATIVEAI_API_KEY!)
    .update(body)
    .digest("hex");

  if (signature !== expected) {
    return new Response("Unauthorized", { status: 401 });
  }

  const event = JSON.parse(body);
  // event.status === "completed" → event.video_url has the result
  // Store result, notify user, etc.

  return Response.json({ received: true });
}

This is what $0.002/image adapters cannot do. Video generation with webhook delivery, multi-model failover, and image-to-video support — same API key you already use for images. Full webhook docs →

Frequently asked questions

Do I need a custom npm package or adapter?

No. CreativeAI is fully OpenAI-compatible, so the official @ai-sdk/openai provider works out of the box. Install "ai" and "@ai-sdk/openai", set baseURL to https://api.creativeai.run/v1, and you're done.

How is this different from providers that ship custom adapters?

Custom adapters add a dependency you have to audit, update, and trust. CreativeAI implements the OpenAI API spec natively, so you use Vercel's own official provider. No third-party code in your supply chain.

Does generateImage() work?

Yes. The generateImage() function from the "ai" package works with CreativeAI. Pass creativeai.image("gpt-image-1") as the model and it returns base64 images, exactly like OpenAI.

Can I use this in Edge Functions?

Yes. The API is standard HTTP — works in Vercel Edge Functions, Cloudflare Workers, traditional Node.js, and any runtime that supports fetch.

Does it work with Server Actions?

Yes. You can call generateImage() from a Server Action the same way you would from an API route. See the Server Action example above.

What about video generation?

Video models (Kling v3, Seedance 1.5, Veo 3.1) are available through the /v1/video/generations endpoint. Submit a job from any Next.js API route and receive the result via HMAC-signed webhook — or poll for completion. Supports text-to-video and image-to-video. See the video code examples above.
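Since this page does not document the polling endpoint itself, a polling loop is best written against an injected status fetcher; wire it to whatever job-status route your setup exposes. A sketch:

```typescript
// Sketch: poll until a video job completes. The status-fetching function is
// injected because the polling endpoint is not specified on this page —
// adapt getStatus to your own job-status route.
type JobStatus = { status: string; video_url?: string };

export async function pollUntilDone(
  getStatus: () => Promise<JobStatus>,
  { intervalMs = 2000, maxAttempts = 60 } = {},
): Promise<JobStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const job = await getStatus();
    // Stop on any terminal state; caller inspects job.status / job.video_url.
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("Timed out waiting for video job");
}
```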

How does pricing compare to OpenAI and $0.002/image providers?

Pay-per-generation with no monthly minimum. GPT Image 1 starts at ~2 credits per image (~$0.07–$0.21 depending on your credit package). Providers advertising $0.002/image are running Stable Diffusion variants — not frontier models. If your product needs GPT Image 1 or Seedream 3 quality, CreativeAI is significantly cheaper than OpenAI direct while giving you multi-model access and video generation in the same API. 50 free credits on signup, no credit card required. See /transparent-pricing for the full breakdown.

Another provider just launched a Vercel AI SDK adapter. How is CreativeAI different?

Most providers ship a proprietary npm adapter that you must install, audit, and trust in your supply chain. CreativeAI uses the official @ai-sdk/openai provider maintained by Vercel — zero third-party code. You also get frontier models (GPT Image 1, Seedream 3) instead of just Stable Diffusion variants, plus video generation (Kling v3, Seedance 1.5, Veo 3.1) through the same API key. And because there's no adapter layer, you're never blocked waiting for a third-party package to catch up after an AI SDK update.

Ready to integrate?

Get your API key, install the AI SDK, and generate your first image in under 5 minutes. No custom adapters. No lock-in.