Environment
- @openrouter/ai-sdk-provider: 2.2.3
- ai (Vercel AI SDK): 6.0.79
- Models tested: google/gemini-3-pro-preview, openai/gpt-5.2
Description
When using generateText with both output: Output.object({ schema }) and tools, the model returns tool call arguments as plain text in message.content with finish_reason: "tool_calls", but without an actual tool_calls array in the response.
This happens because Output.object() adds response_format: { type: "json_schema", ... } to every step of the tool loop (including steps where the model should be making tool calls). The model receives conflicting instructions — "respond strictly in this JSON schema" AND "here are tools you can call" — and resolves the conflict by dumping tool arguments into the content field as text while still signaling finish_reason: "tool_calls".
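To make the conflict concrete, the request body sent on a tool step looks roughly like this (a sketch: field names follow the OpenAI-compatible chat completions format, and the schema and tool payloads are abbreviated placeholders, not the exact bytes the provider emits):

```typescript
// Sketch of the request body when Output.object() and tools are both set.
const requestBody = {
  model: 'google/gemini-3-pro-preview',
  messages: [{ role: 'user', content: 'Look up the email abraeng@gmail.com' }],
  // Added by the provider because of Output.object():
  response_format: {
    type: 'json_schema',
    json_schema: { strict: true, schema: { type: 'object' } },
  },
  // Added on the same step because tools are configured:
  tools: [
    {
      type: 'function',
      function: { name: 'lookupEmail', parameters: { type: 'object' } },
    },
  ],
};

// Both instructions reach the model in one request — that is the conflict.
console.log('response_format' in requestBody && 'tools' in requestBody); // true
```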
Observed behavior
```typescript
response.text = '{"email":"abraeng@gmail.com"}' // tool call args leaked as text
response.finishReason = 'tool-calls'
response.rawFinishReason = 'tool_calls'
```
No tools are actually executed. The do...while loop in generateText exits immediately because clientToolCalls.length === 0 (no type: "tool-call" parts in the provider response content). The stopWhen callback is never even evaluated.
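The early exit can be modeled in a few lines (a simplified sketch of the filtering logic, not the actual AI SDK source; `ContentPart` is a stand-in for the SDK's content part types):

```typescript
// Simplified stand-in for the AI SDK's provider content part types.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string; args: unknown };

// What the provider actually returns in this bug: only a text part
// holding the serialized tool arguments, no tool-call part.
const content: ContentPart[] = [
  { type: 'text', text: '{"email":"abraeng@gmail.com"}' },
];

// The loop condition filters for tool-call parts; with none present,
// the do...while exits on its first check.
const clientToolCalls = content.filter((p) => p.type === 'tool-call');
console.log(clientToolCalls.length); // 0
```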
Expected behavior
The model should either:
- Return a proper `tool_calls` array so the AI SDK can parse and execute the tools, OR
- The provider should not send `response_format: { type: "json_schema" }` on steps where `tools` are present (or at least document this incompatibility)
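The second option could be sketched as a small provider-side guard (a hypothetical helper, not the actual provider code): drop `response_format` from any request that also carries tools.

```typescript
type RequestBody = {
  tools?: unknown[];
  response_format?: unknown;
  [key: string]: unknown;
};

// Omit response_format whenever the same request also sends tools, so the
// model never receives both sets of instructions at once.
function stripConflictingResponseFormat(body: RequestBody): RequestBody {
  if (body.tools?.length && body.response_format) {
    const { response_format: _omit, ...rest } = body;
    return rest;
  }
  return body;
}

const toolStep = stripConflictingResponseFormat({
  tools: [{ type: 'function' }],
  response_format: { type: 'json_schema' },
});
console.log('response_format' in toolStep); // false: stripped on tool steps
```

A guard like this would keep structured output on the final (tool-free) step while letting intermediate steps make clean tool calls.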
Root cause (traced through provider source)
- `Output.object()` sets `responseFormat: { type: 'json', schema }` on every `doGenerate` call
- The provider maps this to `response_format: { type: "json_schema", json_schema: { schema, strict: true } }` (line ~3077 in dist)
- Both `response_format` and `tools` are sent in the same API request body
- The upstream model returns tool call args in `message.content` (text) + `finish_reason: "tool_calls"`, but no `message.tool_calls`
- The provider creates a `{ type: "text" }` content part from `message.content` and maps the finish reason to `"tool-calls"`, but no `{ type: "tool-call" }` parts exist
- The AI SDK finds zero tool-call parts → loop exits → tools never execute
Reproduction
```typescript
import { generateText, Output } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { z } from 'zod';

const openrouter = createOpenRouter({ apiKey: '...' });

const response = await generateText({
  model: openrouter.chat('google/gemini-3-pro-preview'),
  system: 'You are a helpful assistant. Use tools when needed.',
  messages: [{ role: 'user', content: 'Look up the email abraeng@gmail.com' }],
  tools: {
    lookupEmail: {
      description: 'Look up information about an email address',
      parameters: z.object({ email: z.string() }),
      execute: async ({ email }) => ({ found: true, name: 'Test User' }),
    },
  },
  output: Output.object({
    schema: z.object({
      answer: z.string(),
    }),
  }),
  stopWhen: ({ steps }) => {
    const lastStep = steps.at(-1);
    return Boolean(lastStep?.text && lastStep?.finishReason !== 'tool-calls') || steps.length > 10;
  },
});

// Expected: tools execute, response.text contains final JSON answer
// Actual: response.text = '{"email":"abraeng@gmail.com"}', response.finishReason = 'tool-calls'
console.log(response.text, response.finishReason, response.rawFinishReason);
```