Vercel AI SDK + CreativeAI
Add AI image generation to your Next.js app in 5 minutes. Uses the official @ai-sdk/openai provider, so no custom adapter is needed. Access GPT Image 1, Seedream 3, and more through a single generateImage() call.
Why CreativeAI + Vercel AI SDK?
Zero Custom Packages
Uses the official @ai-sdk/openai provider. No custom adapter to maintain or update.
Multi-Model Access
GPT Image 1, Seedream 3, DALL-E 3, and more. Switch models with one parameter.
2-Line Migration
Already using OpenAI with Vercel AI SDK? Change apiKey + add baseURL. Done.
Automatic Failover
If a model is down, CreativeAI routes to an equivalent automatically. Zero downtime.
Edge Compatible
Works in Vercel Edge Functions, serverless, and traditional Node.js runtimes.
Image Editing Too
Full /v1/images/edits support: inpainting, variations, and style transfer.
Already using OpenAI with Vercel AI SDK?
Migration takes 2 lines. Change your API key and add a baseURL. Your entire codebase stays the same: same generateImage() calls, same parameters, same response format.
// Before: Using OpenAI directly
import { createOpenAI } from "@ai-sdk/openai";
import { experimental_generateImage as generateImage } from "ai";

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After: Using CreativeAI (2-line change)
const creativeai = createOpenAI({
  apiKey: process.env.CREATIVEAI_API_KEY, // <- change key
  baseURL: "https://api.creativeai.run/v1", // <- add baseURL
});

// Everything else stays the same!
const { image } = await generateImage({
  model: creativeai.image("gpt-image-1"),
  prompt: "A cyberpunk cityscape at night",
});
Step-by-Step Integration
Install Dependencies
Install the Vercel AI SDK and the OpenAI provider. Since CreativeAI is fully OpenAI-compatible, you use the official @ai-sdk/openai package; no custom adapter needed.
npm install ai @ai-sdk/openai
Configure Your API Key
Sign up at creativeai.run, grab your API key from the dashboard, and add it to your environment variables. You get 50 free credits to start; no credit card required.
# .env.local
CREATIVEAI_API_KEY=your_api_key_here
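A missing environment variable otherwise surfaces later as an opaque authentication error. A small guard makes the failure explicit at startup; this is a minimal sketch, and requireEnv is our own hypothetical helper, not part of the AI SDK or CreativeAI:

```typescript
// lib/env.ts - fail fast when a required variable is missing.
// requireEnv is an illustration helper, not an SDK API.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("CREATIVEAI_API_KEY");
```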
Create the Provider
Create a shared provider instance that points to CreativeAI's API. This is the only configuration needed; every AI SDK primitive works automatically.
// lib/creativeai.ts
import { createOpenAI } from "@ai-sdk/openai";

export const creativeai = createOpenAI({
  apiKey: process.env.CREATIVEAI_API_KEY,
  baseURL: "https://api.creativeai.run/v1",
});
Generate Images with generateImage()
Create an API route that generates images using the Vercel AI SDK's generateImage() function. Works exactly like the OpenAI provider: same API, more models.
// app/api/generate/route.ts
import { creativeai } from "@/lib/creativeai";
import { experimental_generateImage as generateImage } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { image } = await generateImage({
    model: creativeai.image("gpt-image-1"),
    prompt,
    size: "1024x1024",
  });

  return Response.json({
    image: image.base64,
  });
}
Switch Models Instantly
Access GPT Image 1, Seedream 3, DALL-E 3, and more, all through the same interface. Switch models with a single parameter change. No code rewrites, no new SDKs.
// Switch models with a single parameter change
const { image } = await generateImage({
  model: creativeai.image("gpt-image-1"), // OpenAI GPT Image 1
  // model: creativeai.image("dall-e-3"), // Routes to best available
  // model: creativeai.image("seedream-3"), // Seedream 3
  prompt: "A serene mountain lake at sunset, photorealistic",
  size: "1536x1024", // Landscape
});
Build the Frontend
Wire up a complete image generator page. This is a fully working Next.js component you can drop into your app right now.
// app/page.tsx - complete working example
"use client";

import { useState } from "react";

export default function ImageGenerator() {
  const [prompt, setPrompt] = useState("");
  const [image, setImage] = useState<string | null>(null);
  const [loading, setLoading] = useState(false);

  async function generate() {
    setLoading(true);
    try {
      const res = await fetch("/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      const data = await res.json();
      setImage(`data:image/png;base64,${data.image}`);
    } finally {
      setLoading(false); // reset the button even if the request fails
    }
  }

  return (
    <main className="max-w-2xl mx-auto p-8">
      <h1 className="text-3xl font-bold mb-6">AI Image Generator</h1>
      <div className="flex gap-2 mb-6">
        <input
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
          placeholder="Describe an image..."
          className="flex-1 border rounded-lg px-4 py-2"
        />
        <button
          onClick={generate}
          disabled={loading || !prompt}
          className="bg-blue-600 text-white px-6 py-2 rounded-lg disabled:opacity-50"
        >
          {loading ? "Generating..." : "Generate"}
        </button>
      </div>
      {image && (
        <img src={image} alt={prompt} className="rounded-lg shadow-lg w-full" />
      )}
    </main>
  );
}
Bonus: Image Editing Endpoint
CreativeAI also supports the /v1/images/edits endpoint for inpainting, style transfer, and iterative refinement. Use it alongside generateImage() for a complete generate-then-edit workflow.
// app/api/edit/route.ts
export async function POST(req: Request) {
  const formData = await req.formData();
  const image = formData.get("image") as File;
  const prompt = formData.get("prompt") as string;

  // Forward to the OpenAI-compatible images/edits endpoint
  const fd = new FormData();
  fd.append("image", image);
  fd.append("prompt", prompt);
  fd.append("model", "gpt-image-1");

  const response = await fetch("https://api.creativeai.run/v1/images/edits", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CREATIVEAI_API_KEY}`,
    },
    body: fd,
  });

  return Response.json(await response.json());
}
Bonus: Run the same route on Vercel Edge
If you want lower-latency execution on Vercel, add export const runtime = "edge" and keep the rest of your AI SDK code exactly the same. No adapter swap, no new client, no separate provider setup.
// app/api/generate-edge/route.ts
import { creativeai } from "@/lib/creativeai";
import { experimental_generateImage as generateImage } from "ai";

export const runtime = "edge";
export const maxDuration = 30;

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { image } = await generateImage({
    model: creativeai.image("seedream-3"),
    prompt,
    size: "1024x1024",
  });

  return Response.json({ image: image.base64 });
}
Available Models
GPT Image 1 (Popular)
OpenAI's latest: photorealistic, text rendering, complex compositions
model: "gpt-image-1"

Seedream 3 (Fast)
Fast, high-quality generation with excellent prompt adherence
model: "seedream-3"

DALL-E 3 (Default)
Routes to best available model; a great default choice
model: "dall-e-3"

Seedance 1.5 (Video)
Video generation model; use with the video endpoint
model: "seedance-1.5"

Kling v3 (Video)
Professional video generation with cinematic quality
model: "kling-v3"

Veo 3.1 (Video)
Google's latest video model with high-fidelity output
model: "veo-3.1"

FAQ
Do I need a custom npm package?
No. CreativeAI is fully OpenAI-compatible, so the official @ai-sdk/openai provider works out of the box. Just set the baseURL.
Does it work with Edge Functions?
Yes. The API is standard HTTP β works in Vercel Edge Functions, Cloudflare Workers, traditional Node.js, and any runtime that supports fetch.
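Because the API is plain HTTP, any fetch-capable runtime can also call it without the SDK. The sketch below builds the raw request; buildImageRequest is our own illustration helper, and the /v1/images/generations path assumes CreativeAI mirrors OpenAI's images API (the doc above confirms the edits endpoint explicitly, so this is a reasonable but unverified assumption for generation):

```typescript
// Build the raw HTTP request that an image-generation call boils down to.
// buildImageRequest is a hypothetical helper, not an SDK or CreativeAI API.
export function buildImageRequest(apiKey: string, prompt: string): Request {
  return new Request("https://api.creativeai.run/v1/images/generations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-image-1", prompt, size: "1024x1024" }),
  });
}
```

You would pass the result straight to fetch(), which exists in Node 18+, Edge Functions, and Workers alike.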
What about streaming?
Image generation returns a complete response (not streamed). For text models, streaming works via the standard AI SDK streaming APIs.
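For text models, streamText() from the ai package returns a result whose textStream is an async iterable of string chunks. The sketch below shows just the consumption loop against a mock stream so it runs without an API key; collectStream and mockTextStream are our own stand-ins, not SDK APIs:

```typescript
// Collect chunks from an async-iterable text stream - the same loop you
// would run over result.textStream from the AI SDK's streamText().
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // in a UI, append each chunk as it arrives
  }
  return text;
}

// Mock stand-in for result.textStream; replace with a real streamText() call.
async function* mockTextStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}
```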
How does pricing work?
Pay-per-generation with no monthly minimum. GPT Image 1 starts at ~2 credits per image. You get 50 free credits on signup.
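As a back-of-envelope budget check, assuming the approximate 2-credit rate above applies to every image, the signup grant covers about 25 generations:

```typescript
// Rough free-tier coverage at the quoted ~2 credits per image.
const freeCredits = 50; // signup grant
const creditsPerImage = 2; // approximate GPT Image 1 rate from the pricing note
const freeImages = Math.floor(freeCredits / creditsPerImage);
console.log(freeImages); // 25
```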
Can I use this in production?
Absolutely. CreativeAI handles rate limiting, failover, and CDN delivery. We serve production traffic for apps with thousands of users.
Ready to Build?
Get your API key and start generating images in your Next.js app. 50 free credits, no credit card required.