# Releases: ben-vargas/ai-sdk-provider-opencode-sdk
## v3.0.1

### Added

- User message ID passthrough - Added support for `providerOptions.opencode.messageID` to control the user message ID sent to OpenCode. Must start with `"msg_"`. (PR #14 by @abhijit-hota)
- Exported `OpencodeProviderOptions` type - New type in `src/types.ts` documenting the per-request provider options surface.
### Fixed

- Session creation error messages - `Failed to create session` now includes the server error payload for easier debugging.
## v3.0.0

### Breaking change

Model IDs containing multiple `/` separators are now parsed to match OpenCode upstream.

Previous behavior:

- `litellm/anthropic/claude-sonnet-4-6` was parsed as `providerID = "litellm/anthropic"` and `modelID = "claude-sonnet-4-6"`

New behavior in 3.0.0:

- `litellm/anthropic/claude-sonnet-4-6` is parsed as `providerID = "litellm"` and `modelID = "anthropic/claude-sonnet-4-6"`

This aligns the provider with OpenCode itself and fixes provider lookup failures for integrations where the model portion may contain slashes, including LiteLLM-style routes.
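In other words, the new rule splits on the first `/` only, so everything after it belongs to the model ID. A minimal sketch of that rule (a stand-in, not the provider's actual parsing code):

```typescript
// Sketch of the 3.0.0 parsing rule: providerID is everything before the FIRST "/",
// modelID is everything after it (so modelID may itself contain slashes).
function parseModelId(id: string): { providerID: string; modelID: string } {
  const slash = id.indexOf('/');
  if (slash === -1) throw new Error(`expected "provider/model", got: ${id}`);
  return { providerID: id.slice(0, slash), modelID: id.slice(slash + 1) };
}

const parsed = parseModelId('litellm/anthropic/claude-sonnet-4-6');
// parsed.providerID === 'litellm'
// parsed.modelID === 'anthropic/claude-sonnet-4-6'
```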
This is published as a major release so consumers pinned to `^2.x` do not receive this behavioral change automatically.
### Included in this release

- Align multi-slash model ID parsing with OpenCode upstream
- Add regression coverage for multi-slash model IDs, including a LiteLLM-style case
- Update version compatibility documentation for the new 3.x line
## v2.1.2

### Fixed

- Streaming delta handling — Added support for `message.part.delta` events to enable true incremental text and reasoning streaming instead of batch delivery via `message.part.updated` only. (PR #9 by @abhijit-hota)
- User-message filtering for deltas — Applied the user-role guard to `handlePartDelta` to prevent user prompt text from leaking into assistant stream output, matching the existing filtering in `handlePartUpdated`.

### Changed

- Dependencies — Bumped `@opencode-ai/sdk` from `^1.1.65` to `^1.2.15`.

Full Changelog: v2.1.1...v2.1.2
## v2.1.1

### Added

- Isolated client manager instances - Added `OpencodeClientManager.createInstance()` for creating standalone (non-singleton) client managers, enabling concurrent sessions pointing at different servers.
- Client manager injection - Added a `clientManager` option on `OpencodeProviderSettings` to use a custom client manager instead of the shared singleton.
- Validation for conflicting options - Added a warning when both `clientManager` and `client` are provided.
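The singleton-vs-standalone distinction above can be sketched with a stand-in class. This is not the package's real `OpencodeClientManager` (its constructor options differ; `baseUrl` here is an assumption), just an illustration of the pattern:

```typescript
// Stand-in illustrating shared-singleton vs. standalone managers; not the
// package's actual OpencodeClientManager implementation.
class ClientManager {
  private static shared: ClientManager | undefined;
  private constructor(readonly baseUrl: string) {}

  // Shared singleton: every caller gets the same manager (first caller's options win)
  static getInstance(baseUrl: string): ClientManager {
    ClientManager.shared ??= new ClientManager(baseUrl);
    return ClientManager.shared;
  }

  // createInstance(): a fresh, isolated manager per call, so concurrent
  // sessions can point at different servers
  static createInstance(baseUrl: string): ClientManager {
    return new ClientManager(baseUrl);
  }
}

const a = ClientManager.createInstance('http://localhost:4096');
const b = ClientManager.createInstance('http://localhost:5096');
// a and b are distinct objects, while getInstance() would return the
// same object on every call
```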
### Changed

- `OpencodeClient` type - Aligned the `OpencodeClient` type alias to the SDK-exported `OpencodeClient` type directly instead of inferring it from the `createOpencodeClient` return type.
## v2.1.0

### What's New

### Added

- SDK client passthrough options — `clientOptions` on `OpencodeProviderSettings` to forward `createOpencodeClient()` configuration (headers, auth, fetch, serializers, validators, transformers, `throwOnError`, and `RequestInit`-compatible options)
- Preconfigured client support — `client` on `OpencodeProviderSettings` to use a prebuilt OpenCode SDK client directly
- New runnable example — `examples/client-options.ts` demonstrating both patterns
### Changed

- Client initialization behavior — `clientOptions` are now applied consistently across the external-`baseUrl`, existing-server, and auto-started-server client creation paths
- Conflict handling — Reserved `baseUrl` and `directory` values in `clientOptions` are ignored with warnings; `client` takes precedence over `clientOptions`
- Singleton safety — `OpencodeClientManager` now warns when options are dropped after initialization, instead of dropping them silently
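The precedence and reserved-key rules described above can be sketched as follows. The resolution function and the `Settings` shape are stand-ins for illustration, not the package's actual implementation; only the rules themselves (`client` wins, `baseUrl`/`directory` reserved) come from the notes:

```typescript
type Settings = { client?: object; clientOptions?: Record<string, unknown> };

// Stand-in resolution logic: a prebuilt `client` wins over `clientOptions`,
// and reserved keys inside clientOptions are ignored with a warning.
function resolveClientConfig(settings: Settings): { source: string; warnings: string[] } {
  const warnings: string[] = [];
  if (settings.client) {
    if (settings.clientOptions) warnings.push('client takes precedence over clientOptions');
    return { source: 'client', warnings };
  }
  if (settings.clientOptions) {
    for (const key of ['baseUrl', 'directory']) {
      if (key in settings.clientOptions) {
        warnings.push(`reserved option "${key}" in clientOptions is ignored`);
      }
    }
    return { source: 'clientOptions', warnings };
  }
  return { source: 'default', warnings };
}

const result = resolveClientConfig({ clientOptions: { baseUrl: 'http://x', headers: {} } });
// result.source === 'clientOptions'; result.warnings flags the reserved "baseUrl" key
```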
### Fixed

- Duplicate provider warnings — Validation warnings are no longer logged twice during provider creation

Full Changelog: v2.0.0...v2.1.0
## v2.0.0 — OpenCode SDK v2 Migration

### What's Changed

Migrates the provider from OpenCode SDK v1 to v2 (`@opencode-ai/sdk/v2`), adding support for new v2 capabilities and hardening the implementation.

### Core Migration

- SDK v2 API style: All SDK calls migrated from the nested `{ path, body }` style to the flat v2 parameter style (`{ sessionID, directory, ... }`)
- Import path: `@opencode-ai/sdk` → `@opencode-ai/sdk/v2`
- Dependency bumps: `@ai-sdk/provider` `^3.0.8`, `@ai-sdk/provider-utils` `^4.0.15`, `ai` `^6.0.85`
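The call-shape change above can be illustrated with stand-in types. The `{ path, body }` and flat `{ sessionID, directory, ... }` shapes come from the notes; any field names beyond `sessionID` and `directory` are assumptions for the example:

```typescript
// v1 nested shape: parameters split across { path, body }
type V1Params = { path: { sessionID: string }; body: { text: string } };
// v2 flat shape: one flat parameter object
type V2Params = { sessionID: string; directory?: string; text: string };

// Flattening a v1-style call into the v2 style
function toV2(params: V1Params, directory?: string): V2Params {
  return { sessionID: params.path.sessionID, directory, text: params.body.text };
}

const v2 = toV2({ path: { sessionID: 'ses_123' }, body: { text: 'hello' } });
// v2.sessionID === 'ses_123', v2.text === 'hello'
```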
### New Features

- Permission/tool approval flow: Emits `tool-approval-request` stream parts from `permission.asked` events; handles `tool-approval-response` parts via the `permission.reply()` API
- Native structured output: JSON mode uses OpenCode's native `json_schema` format with `retryCount` instead of prompt-based instructions
- File/source content streaming: `handleFilePart()` fully implemented — handles data URLs, HTTP URLs, and source metadata
- New settings: `permission` (ruleset), `variant`, `directory` (per-request routing), `outputFormatRetryCount`
- New error types: `ContextOverflowError` → `"length"`, `StructuredOutputError` → `"error"`
### Hardening

- Question events: `question.asked` emits a stream error part with a warning; related events are known no-ops
- Known v2 events/parts: All new v2 event and part types are explicitly handled (no "Unknown" debug spam)
- Safe tool input serialization: `safeStringifyToolInput()` with a try/catch fallback
- Typed approval client: `ApprovalClient` interface replaces loose typing
- File part diagnostics: Debug logs when file parts are skipped
- Non-prefix tool input delta handling: Prevents data loss on non-incremental input changes
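A try/catch-guarded serializer in that spirit might look like the following sketch. It is a stand-in, not the package's actual `safeStringifyToolInput()`:

```typescript
// Serialize tool input defensively: JSON.stringify throws on circular
// structures and BigInt, so fall back to String() rather than crashing the stream.
function safeStringifyToolInput(input: unknown): string {
  try {
    return JSON.stringify(input) ?? String(input);
  } catch {
    return String(input);
  }
}

const circular: { self?: unknown } = {};
circular.self = circular;
// safeStringifyToolInput(circular) returns a best-effort string instead of throwing
```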
### Deprecations

- `tools` setting → use the `permission` ruleset
- `cwd` setting → use `directory`
### Tests

- 283 tests passing (up from 278)

Full Changelog: v1.0.0...v2.0.0
## AI SDK v6 Compatibility (v1.0.0)

### Overview

This is the first AI SDK v6-compatible release of ai-sdk-provider-opencode-sdk.

### What's new

- Provider updated to AI SDK v6 interfaces (`LanguageModelV3`/`ProviderV3`).
- Examples updated to v6 APIs and `Output.object`.
- README now documents v5/v6 compatibility and npm tags.

### Versioning & tags

- `latest` → 1.x (AI SDK v6)
- `ai-sdk-v5` → 0.x (AI SDK v5)
## v0.0.2 - Dependency Updates

### Changes

Updated dependencies:

- `@ai-sdk/provider-utils`: 3.0.9 → 3.0.18
- `@opencode-ai/sdk`: ^1.0.141 → ^1.0.137 (aligned with stable release)

### Fixed

- OpenAI model names - Documentation now correctly references the GPT-5.1 series models: `openai/gpt-5.1`, `openai/gpt-5.1-codex`, `openai/gpt-5.1-codex-mini`, `openai/gpt-5.1-codex-max`

### Installation

```sh
npm install ai-sdk-provider-opencode-sdk@0.0.2
```

See CHANGELOG.md for full details.
## v0.0.1 - Initial Release

This is the initial release of the AI SDK v5 provider for OpenCode, built on `@opencode-ai/sdk`.

### Features

#### Core Functionality

- Text generation - `generateText()` and `streamText()` with real-time SSE streaming
- Object generation - `generateObject()` and `streamObject()` with Zod schema validation
- Multi-turn conversations - Session-based context management
- Tool observation - Watch server-side tool execution (Read, Write, Bash, etc.)

#### Multi-Provider Support

- Anthropic - Claude 4.5 series (opus, sonnet, haiku)
- OpenAI - GPT-5.1 series (gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini, gpt-5.1-codex-max)
- Google - Gemini 2.0/2.5/3.0 series

#### Provider Features

- Auto-start server management
- Agent selection (build, plan, general, explore)
- Custom system prompts
- Session resume/management
- AbortController support for cancellation
- Base64 image input for vision models
- Custom logging with verbose mode
### Installation

```sh
npm install ai-sdk-provider-opencode-sdk ai @opencode-ai/sdk
```

### Quick Start

```ts
import { generateText } from 'ai';
import { opencode } from 'ai-sdk-provider-opencode-sdk';

const result = await generateText({
  model: opencode('anthropic/claude-opus-4-5-20251101'),
  prompt: 'What is the capital of France?',
});

console.log(result.text);
```

### Examples

The package includes comprehensive examples:

- `basic-usage.ts` - Simple text generation
- `streaming.ts` - Real-time streaming
- `generate-object.ts` / `stream-object.ts` - Structured output
- `tool-observation.ts` - Watching tool execution
- `image-input.ts` - Vision model usage
- `abort-signal.ts` - Request cancellation
- `custom-config.ts` - Provider configuration
- `limitations.ts` - What's NOT supported
### Known Limitations

The following AI SDK parameters are ignored (OpenCode uses provider defaults):

- `temperature`, `topP`, `topK`
- `maxOutputTokens`
- `presencePenalty`, `frequencyPenalty`
- `stopSequences`, `seed`

Custom tool definitions are also ignored - OpenCode executes tools server-side.

See CHANGELOG.md for full details.