Latest Release: Version 3.x supports AI SDK v6. Version 2.x is the previous AI SDK v6 line. For AI SDK v5 support, use the `ai-sdk-v5` tag (0.x.x).
A community provider for the Vercel AI SDK that enables using AI models through OpenCode and the @opencode-ai/sdk/v2 APIs. OpenCode is a terminal-based AI coding assistant that supports multiple providers (Anthropic, OpenAI, Google, and more).
This provider enables you to use OpenCode's AI capabilities through the familiar Vercel AI SDK interface, supporting generateText(), streamText(), streamObject(), native JSON-schema structured output with practical fallback patterns, tool approval flows, and file/source streaming parts.
| Provider Version | AI SDK Version | NPM Tag | Status | Branch |
|---|---|---|---|---|
| 3.x.x | v6 | latest | Stable | main |
| 2.x.x | v6 | N/A | Legacy | historical |
| 1.x.x | v6 | N/A | Legacy | historical |
| 0.x.x | v5 | ai-sdk-v5 | Maintenance | ai-sdk-v5 |
This release upgrades the provider internals to OpenCode SDK v2 and includes behavior changes that can affect existing integrations:

- OpenCode request/response routing now uses v2 parameter shapes (`sessionID`, top-level args).
- New settings are available: `permission`, `variant`, `directory`, `outputFormatRetryCount`. `cwd` and `tools` remain supported but are now legacy/deprecated pathways.
- Structured output uses OpenCode's native `json_schema` mode. Depending on the model/backend route, strict object generation can still be inconsistent.
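As an illustration, the new settings might be combined per model like this. This is a sketch only: the field names come from the settings interface documented below, while the concrete values and permission rules are made up for demonstration.

```typescript
// Illustrative per-model settings object; values are examples, not recommendations.
const settings = {
  agent: "build",
  directory: "/tmp/workspace", // per-request working directory (replaces legacy cwd)
  permission: [
    { permission: "read", pattern: "*", action: "allow" as const },
    { permission: "bash", pattern: "*", action: "ask" as const },
  ],
  outputFormatRetryCount: 2, // retries for JSON-schema structured output
};

// Would be passed per model, e.g. opencode("openai/gpt-5.1", settings)
console.log(settings.permission.map((p) => p.action).join(",")); // → "allow,ask"
```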
For production object extraction, use a two-step pattern: try `Output.object(...)` first, then fall back to strict JSON prompting plus parse/validate.
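This two-step pattern can be orchestrated generically. A minimal sketch, assuming you wrap the native `Output.object(...)` call and the strict-JSON fallback in two caller-supplied functions (the names here are ours, not SDK exports):

```typescript
// tryNative stands in for the native structured-output call; tryFallback for
// strict JSON prompting + parse/validate. Both are supplied by the caller.
async function generateObjectWithFallback<T>(
  tryNative: () => Promise<T>,
  tryFallback: () => Promise<T>,
  retries = 2,
): Promise<T> {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await tryNative(); // preferred: native json_schema mode
    } catch {
      // output failed to parse/validate; retry, then fall back
    }
  }
  return tryFallback(); // last resort: strict JSON prompting + validation
}
```

In practice, `tryNative` would call `generateText` with `Output.object(...)` and `tryFallback` would re-prompt for raw JSON and validate it with Zod.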
For AI SDK v6 (recommended):

```bash
npm install ai-sdk-provider-opencode-sdk ai@^6.0.0
```

For AI SDK v5:

```bash
npm install ai-sdk-provider-opencode-sdk@ai-sdk-v5 ai@^5.0.0
```

This package is compatible with Zod 3 and Zod 4 (aligned with `ai`):

```bash
# With Zod 3
npm install ai-sdk-provider-opencode-sdk ai zod@^3.25.76

# With Zod 4
npm install ai-sdk-provider-opencode-sdk ai zod@^4.1.8
```

- Node.js >= 18
- OpenCode CLI installed (`npm install -g opencode`)
- Valid API keys configured in OpenCode for your preferred providers
```typescript
import { generateText } from "ai";
import { opencode } from "ai-sdk-provider-opencode-sdk";

const result = await generateText({
  model: opencode("openai/gpt-5.3-codex-spark"),
  prompt: "What is the capital of France?",
});

console.log(result.text);
```

```typescript
import { createOpencode } from "ai-sdk-provider-opencode-sdk";

// Default provider (auto-starts server)
const opencode = createOpencode();

// With custom settings
const customOpencode = createOpencode({
  hostname: "127.0.0.1",
  port: 4096,
  autoStartServer: true,
  serverTimeout: 10000,
  defaultSettings: {
    agent: "build",
    sessionTitle: "My Session",
  },
});
```

Models are specified in providerID/modelID format:
```typescript
// Anthropic models (Claude 4.5 series)
opencode("anthropic/claude-sonnet-4-5-20250929");
opencode("anthropic/claude-haiku-4-5-20251001");
opencode("anthropic/claude-opus-4-5-20251101");

// OpenAI models (GPT-5.3 / GPT-5.1 series)
opencode("openai/gpt-5.3-codex-spark");
opencode("openai/gpt-5.1");
opencode("openai/gpt-5.1-codex");

// Google Gemini models
opencode("google/gemini-3-pro-preview");
opencode("google/gemini-2.5-flash");
opencode("google/gemini-2.5-pro");
opencode("google/gemini-2.0-flash");
```

```typescript
import { streamText } from "ai";

const result = streamText({
  model: opencode("openai/gpt-5.3-codex-spark"),
  prompt: "Write a haiku about coding.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

```typescript
import { generateText, type ModelMessage } from "ai";

const messages: ModelMessage[] = [
  { role: "user", content: "My name is Alice." },
  { role: "assistant", content: "Hello Alice! How can I help you today?" },
  { role: "user", content: "What is my name?" },
];

const result = await generateText({
  model: opencode("openai/gpt-5.3-codex-spark"),
  messages,
});
```

OpenCode supports different agents for different tasks:
```typescript
const model = opencode("openai/gpt-5.3-codex-spark", {
  agent: "build", // or 'plan', 'general', 'explore'
});
```

Sessions maintain conversation context:
```typescript
const model = opencode("openai/gpt-5.3-codex-spark", {
  sessionTitle: "Code Review Session",
});

// First call creates a session
const result1 = await generateText({ model, prompt: "Review this code..." });

// Subsequent calls reuse the same session
const result2 = await generateText({ model, prompt: "What did you find?" });

// Get session ID from metadata
const sessionId = result1.providerMetadata?.opencode?.sessionId;

// Resume a specific session
const resumeModel = opencode("openai/gpt-5.3-codex-spark", {
  sessionId: sessionId,
});
```

OpenCode executes tools server-side. You can observe tool execution but cannot provide custom implementations:
```typescript
import { streamText } from "ai";

const result = streamText({
  model: opencode("openai/gpt-5.3-codex-spark"),
  prompt: "List files in the current directory.",
});

for await (const part of result.fullStream) {
  switch (part.type) {
    case "tool-call":
      console.log(`tool-call: ${part.toolName}`);
      break;
    case "tool-result":
      console.log(`tool-result: ${part.toolName}`);
      break;
    case "tool-approval-request":
      console.log(`approval-request: ${part.approvalId}`);
      break;
    case "file":
      console.log(`file: ${part.file.mediaType}`);
      break;
    case "source":
      console.log(`source: ${part.sourceType}`);
      break;
    case "text-delta":
      process.stdout.write(part.text ?? "");
      break;
    case "finish":
      console.log(`finish: ${part.finishReason}`);
      break;
    case "error":
      console.error(part.error);
      break;
  }
}
```

| Feature | Support | Notes |
|---|---|---|
| Text generation | ✅ Full | generateText(), streamText() |
| Streaming | ✅ Full | Real-time SSE streaming |
| Multi-turn conversations | ✅ Full | Session-based context |
| Tool observation | ✅ Full | See tool execution |
| Reasoning/thinking | ✅ Full | ReasoningPart support |
| Model selection | ✅ Full | Per-request model |
| Agent selection | ✅ Full | build, plan, general, explore |
| Abort/cancellation | ✅ Full | AbortSignal support |
| Image input (base64) | ⚠️ Partial | Data URLs only |
| Image input (URL) | ❌ None | Not supported |
| Structured output (JSON) | ⚠️ Partial | Native json_schema; use prompt+validation fallback for strict reliability |
| Custom tools | ❌ None | Server-side only |
| Tool approvals | ✅ Full | tool-approval-request / tool-approval-response |
| File/source streaming | ✅ Full | Emits file and source stream parts |
| temperature/topP/topK | ❌ None | Provider defaults |
| maxTokens | ❌ None | Agent config |
- `examples/basic-usage.ts` - Minimal text generation.
- `examples/streaming.ts` - Streaming text chunks and final usage.
- `examples/conversation-history.ts` - Multi-turn prompts with session continuity.
- `examples/generate-object.ts` - Native object mode with robust JSON fallback.
- `examples/stream-object.ts` - Streaming structured output with fallback parsing.
- `examples/tool-observation.ts` - Observe tool calls, results, approvals, files, and sources.
- `examples/abort-signal.ts` - Cancellation patterns for generate and stream calls.
- `examples/image-input.ts` - File/image input using base64 or data URLs.
- `examples/custom-config.ts` - Provider/model configuration and reliability controls.
- `examples/client-options.ts` - `clientOptions` passthrough and preconfigured `client` patterns.
- `examples/limitations.ts` - Practical limitations and expected behaviors.
- `examples/long-running-tasks.ts` - Patterns for longer tasks and retries.
```typescript
interface OpencodeProviderSettings {
  hostname?: string; // Default: '127.0.0.1'
  port?: number; // Default: 4096
  baseUrl?: string; // Override full URL
  autoStartServer?: boolean; // Default: true
  serverTimeout?: number; // Default: 10000
  clientOptions?: OpencodeClientOptions; // Pass-through to createOpencodeClient()
  client?: OpencodeClient; // Preconfigured SDK client (bypasses server management)
  defaultSettings?: OpencodeSettings;
}
```

`clientOptions` forwards OpenCode SDK client configuration such as:
- `headers` (custom HTTP headers)
- `fetch` (custom fetch implementation)
- `auth` (token or auth function)
- `bodySerializer` / `querySerializer`
- `requestValidator` / `responseValidator` / `responseTransformer`
- `throwOnError`
- standard `RequestInit` fields (`credentials`, `mode`, `cache`, `signal`, etc.)
Notes:

- `baseUrl` and `directory` remain provider/model managed (`baseUrl` at the provider level, `directory` via `defaultSettings` or per-model settings).
- If both `client` and `clientOptions` are provided, `client` takes precedence.
- If `client` is provided, its lifecycle remains caller-managed; `dispose()` only cleans up provider-managed server processes.
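The precedence rule can be pictured as a one-liner. A sketch with stand-in types; `resolveClient`, `Client`, and `make` are illustrative names, not SDK exports:

```typescript
// Stand-in for the real SDK client type, for demonstration only.
type Client = { from: string };

// An explicit client wins over clientOptions; otherwise one is built from the options.
function resolveClient(
  init: { client?: Client; clientOptions?: Record<string, unknown> },
  create: (opts?: Record<string, unknown>) => Client,
): Client {
  return init.client ?? create(init.clientOptions);
}

const make = (): Client => ({ from: "options" });
console.log(resolveClient({ clientOptions: { throwOnError: true } }, make).from); // → "options"
console.log(resolveClient({ client: { from: "explicit" } }, make).from); // → "explicit"
```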
Example:
```typescript
const opencode = createOpencode({
  baseUrl: "http://127.0.0.1:4096",
  clientOptions: {
    headers: {
      "x-api-key": process.env.OPENCODE_API_KEY ?? "",
    },
    credentials: "include",
    throwOnError: true,
  },
});
```

```typescript
interface OpencodeSettings {
  sessionId?: string; // Resume session
  createNewSession?: boolean; // Force new session
  sessionTitle?: string; // Title for new sessions
  agent?: string; // Agent name
  systemPrompt?: string; // Override system prompt
  tools?: Record<string, boolean>; // Enable/disable tools (deprecated in favor of permissions)
  permission?: Array<{
    permission: string;
    pattern: string;
    action: "allow" | "deny" | "ask";
  }>; // Session ruleset
  variant?: string; // OpenCode variant
  directory?: string; // Per-request directory
  cwd?: string; // Legacy working directory alias
  outputFormatRetryCount?: number; // JSON schema retry count
  logger?: Logger | false; // Logging
  verbose?: boolean; // Debug logging
}
```

The package also exports lower-level APIs for advanced integrations:
- Runtime classes: `OpencodeLanguageModel`, `OpencodeClientManager`
- Validation/config helpers: `validateSettings`, `validateProviderSettings`, `validateModelId`, `mergeSettings`
- Logging helpers: `getLogger`, `defaultLogger`, `silentLogger`, `createContextLogger`
- Event/message utilities: `convertToOpencodeMessages`, `convertEventToStreamParts`, `createStreamState`, `createFinishParts`
These are intended for power users and tooling integrations. Most applications should use createOpencode() / opencode() directly.
The provider converts OpenCode errors to AI SDK error types:
```typescript
import {
  isAuthenticationError,
  isTimeoutError,
} from "ai-sdk-provider-opencode-sdk";

try {
  const result = await generateText({ model, prompt: "..." });
} catch (error) {
  if (isAuthenticationError(error)) {
    console.error("Check your API keys in OpenCode");
  } else if (isTimeoutError(error)) {
    console.error("Request timed out");
  }
}
```

When using `Output.object(...)`, the provider sends OpenCode's native format: `{ type: "json_schema", schema }`. This is the preferred path and works in many cases.
Some model/backend routes can still return output that does not parse into a strict object every time. The examples `examples/generate-object.ts` and `examples/stream-object.ts` intentionally demonstrate a robust fallback strategy:
- Try native structured output.
- Retry a small number of times.
- Fall back to strict JSON prompting and validate with Zod.
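For the final step, a common trick is to extract the first JSON object from the reply text before validating it, since models sometimes wrap the JSON in prose or a code fence. A minimal sketch (`extractJsonObject` is our illustrative helper, not an SDK export):

```typescript
// Pull a JSON object out of a model reply that may wrap it in prose or a
// fenced code block, then parse it; validate the result (e.g. with Zod) next.
function extractJsonObject(reply: string): unknown {
  // Prefer the contents of a fenced block if one is present.
  const fenced = reply.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/);
  const candidate = fenced ? fenced[1] : reply;
  // Otherwise take the outermost braces.
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model reply");
  }
  return JSON.parse(candidate.slice(start, end + 1));
}

const fence = "`".repeat(3);
const reply = `Here you go:\n${fence}json\n{ "city": "Paris", "ok": true }\n${fence}`;
console.log(extractJsonObject(reply)); // → { city: 'Paris', ok: true }
```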
Always dispose of the provider when done to stop the managed server:
```typescript
const opencode = createOpencode();

// ... use the provider ...

// Clean up
await opencode.dispose?.();
```

MIT