Support NativeOutput and PromptedOutput modes in addition to ToolOutput #1628

Merged
61 commits merged on Jun 24, 2025

Commits
e290951
WIP: Output modes
DouweM Jun 3, 2025
2056539
WIP: More output modes
DouweM Jun 3, 2025
bceba19
Merge remote-tracking branch 'origin/main' into output-modes
DouweM Jun 3, 2025
0cb25c4
Fix tests
DouweM Jun 3, 2025
933b74e
Remove syntax invalid before Python 3.12
DouweM Jun 3, 2025
7974df0
Fix tests
DouweM Jun 3, 2025
9cc19e2
Add TextOutput marker
DouweM Jun 9, 2025
bc6bb65
Merge remote-tracking branch 'origin/main' into output-modes
DouweM Jun 9, 2025
0e356a3
Add VCR recording of new test
DouweM Jun 9, 2025
81312dc
Implement additional output modes in GeminiModel and GoogleModel
DouweM Jun 10, 2025
52ef4d5
Fix prompted_json on OpenAIResponses
DouweM Jun 10, 2025
fe05956
Test output modes on Gemini and Anthropic
DouweM Jun 10, 2025
94421f3
Add VCR recordings of Gemini output mode tests
DouweM Jun 10, 2025
1902d00
Remove some old TODO comments
DouweM Jun 10, 2025
1f53c9b
Add missing VCR recording of Gemini output mode test
DouweM Jun 10, 2025
a4c2877
Add more missing VCR recordings
DouweM Jun 10, 2025
56e58f9
Fix OpenAI tools
DouweM Jun 10, 2025
a5234e1
Improve test coverage
DouweM Jun 10, 2025
40def08
Update unsupported output mode error message
DouweM Jun 10, 2025
837d305
Improve test coverage
DouweM Jun 10, 2025
3598bef
Merge branch 'main' into output-modes
DouweM Jun 10, 2025
5f71ba8
Test streaming with structured text output
DouweM Jun 10, 2025
cfc2749
Make TextOutputFunction Python 3.9 compatible
DouweM Jun 10, 2025
a137641
Properly merge JSON schemas accounting for defs
DouweM Jun 11, 2025
f495d46
Refactor output schemas and modes: more 'isinstance(output_schema, ..…
DouweM Jun 12, 2025
449ed0d
Merge branch 'main' into output-modes
DouweM Jun 12, 2025
e70d249
Clean up some variable names
DouweM Jun 12, 2025
4592b0b
Improve test coverage
DouweM Jun 12, 2025
db1c628
Merge branch 'main' into output-modes
DouweM Jun 13, 2025
f57d078
Combine JsonSchemaOutput and PromptedJsonOutput into StructuredTextOu…
DouweM Jun 13, 2025
5112455
Add missing cassettes
DouweM Jun 13, 2025
416cc7d
Can't use dataclass kw_only on 3.9
DouweM Jun 13, 2025
4b0e5cf
Improve test coverage
DouweM Jun 13, 2025
094920f
Improve test coverage
DouweM Jun 13, 2025
9f61706
Improve test coverage
DouweM Jun 13, 2025
9f51387
Remove unnecessary coverage ignores
DouweM Jun 13, 2025
9a1e628
Remove unnecessary coverage ignore
DouweM Jun 13, 2025
2b5fa81
Add docs
DouweM Jun 13, 2025
6c4662b
Fix docs refs
DouweM Jun 13, 2025
3ed3431
Fix nested list in docs
DouweM Jun 13, 2025
3d77818
Merge branch 'main' into output-modes
DouweM Jun 17, 2025
a86d7d4
Split StructuredTextOutput into ModelStructuredOutput and PromptedStr…
DouweM Jun 17, 2025
ce985a0
Merge branch 'main' into output-modes
DouweM Jun 17, 2025
71d1655
Fix WrapperModel.profile
DouweM Jun 17, 2025
8c04144
Update output modes docs
DouweM Jun 17, 2025
d78b5f7
Add examples to output mode marker docstrings
DouweM Jun 17, 2025
70d1197
Fix mypy type inference
DouweM Jun 17, 2025
2eb7fd1
Improve test coverage
DouweM Jun 17, 2025
25ccb54
Merge branch 'main' into output-modes
DouweM Jun 17, 2025
9e00c32
Import cast and RunContext in _function_schema
DouweM Jun 17, 2025
7de3c0d
Move RunContext and AgentDepsT into their own module to solve circula…
DouweM Jun 17, 2025
4029fac
Make _run_context module private, RunContext can be accessed through …
DouweM Jun 17, 2025
98bccf2
Merge branch 'main' into output-modes
DouweM Jun 19, 2025
8041cf3
Fix thinking part related tests
DouweM Jun 19, 2025
6fb7e4f
Address feedback
DouweM Jun 19, 2025
e48f10d
Merge branch 'main' into output-modes
DouweM Jun 20, 2025
1f858ae
Fix docs examples requiring code from other examples
DouweM Jun 20, 2025
e463216
Reduce public classes, update docstrings
DouweM Jun 23, 2025
312acdd
Merge branch 'main' into output-modes
DouweM Jun 23, 2025
abfbb77
Merge branch 'main' into output-modes
DouweM Jun 23, 2025
fc009f6
Rename ModelStructuredOutput to NativeOutput and PromptedStructuredOu…
DouweM Jun 24, 2025
11 changes: 11 additions & 0 deletions docs/api/output.md
@@ -0,0 +1,11 @@
# `pydantic_ai.output`

::: pydantic_ai.output
options:
inherited_members: true
members:
- OutputDataT
- ToolOutput
- NativeOutput
- PromptedOutput
- TextOutput
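
The new `pydantic_ai.output` API page documents the output markers introduced by this PR. As a rough usage sketch (the exact constructor signatures here are assumptions based on the PR title and commit messages, not taken from the diff):

```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.output import NativeOutput, PromptedOutput, ToolOutput


class CityInfo(BaseModel):
    city: str
    country: str


# ToolOutput: the pre-existing default, where the model returns structured data via a tool call.
tool_agent = Agent('openai:gpt-4o', output_type=ToolOutput(CityInfo))

# NativeOutput: relies on the provider's native structured-output (JSON schema) support.
native_agent = Agent('openai:gpt-4o', output_type=NativeOutput(CityInfo))

# PromptedOutput: the JSON schema is injected into the prompt and the text response is parsed.
prompted_agent = Agent('openai:gpt-4o', output_type=PromptedOutput(CityInfo))

result = native_agent.run_sync('What is the largest city in the Netherlands?')
print(repr(result.output))  # e.g. CityInfo(city='Amsterdam', country='Netherlands')
```

The `__init__.py` change further down also re-exports these markers from the package root, so `from pydantic_ai import NativeOutput` works as well.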
1 change: 0 additions & 1 deletion docs/api/result.md
@@ -4,5 +4,4 @@
options:
inherited_members: true
members:
- OutputDataT
- StreamedRunResult
183 changes: 168 additions & 15 deletions docs/output.md

Large diffs are not rendered by default.
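
The rewritten `docs/output.md` (too large to render here) also covers the `TextOutput` marker added earlier in the branch, which wraps a plain function that post-processes the model's text response. A minimal sketch, with the exact call convention being an assumption:

```python
from pydantic_ai import Agent, TextOutput


def split_into_words(text: str) -> list[str]:
    # Receives the model's plain-text output and turns it into the final result.
    return text.split()


agent = Agent('openai:gpt-4o', output_type=TextOutput(split_into_words))

result = agent.run_sync('Name three Dutch cities, separated by spaces.')
print(result.output)  # e.g. ['Amsterdam', 'Rotterdam', 'Utrecht']
```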

1 change: 1 addition & 0 deletions mkdocs.yml
@@ -64,6 +64,7 @@ nav:
- api/agent.md
- api/tools.md
- api/common_tools.md
- api/output.md
- api/result.md
- api/messages.md
- api/exceptions.md
7 changes: 5 additions & 2 deletions pydantic_ai_slim/pydantic_ai/__init__.py
@@ -12,7 +12,7 @@
)
from .format_prompt import format_as_xml
from .messages import AudioUrl, BinaryContent, DocumentUrl, ImageUrl, VideoUrl
from .result import ToolOutput
from .output import NativeOutput, PromptedOutput, TextOutput, ToolOutput
from .tools import RunContext, Tool

__all__ = (
@@ -41,8 +41,11 @@
# tools
'Tool',
'RunContext',
# result
# output
'ToolOutput',
'NativeOutput',
'PromptedOutput',
'TextOutput',
# format_prompt
'format_as_xml',
)
42 changes: 29 additions & 13 deletions pydantic_ai_slim/pydantic_ai/_agent_graph.py
@@ -18,7 +18,7 @@
from pydantic_graph.nodes import End, NodeRunEndT

from . import _output, _system_prompt, exceptions, messages as _messages, models, result, usage as _usage
from .result import OutputDataT
from .output import OutputDataT, OutputSpec
from .settings import ModelSettings, merge_model_settings
from .tools import RunContext, Tool, ToolDefinition, ToolsPrepareFunc

@@ -102,7 +102,7 @@ class GraphAgentDeps(Generic[DepsT, OutputDataT]):
end_strategy: EndStrategy
get_instructions: Callable[[RunContext[DepsT]], Awaitable[str | None]]

output_schema: _output.OutputSchema[OutputDataT] | None
output_schema: _output.OutputSchema[OutputDataT]
output_validators: list[_output.OutputValidator[DepsT, OutputDataT]]

history_processors: Sequence[HistoryProcessor[DepsT]]
@@ -286,10 +286,23 @@ async def add_mcp_server_tools(server: MCPServer) -> None:
function_tool_defs = await ctx.deps.prepare_tools(run_context, function_tool_defs) or []

output_schema = ctx.deps.output_schema

output_tools = []
output_object = None
if isinstance(output_schema, _output.ToolOutputSchema):
output_tools = output_schema.tool_defs()
elif isinstance(output_schema, _output.NativeOutputSchema):
output_object = output_schema.object_def

# ToolOrTextOutputSchema, NativeOutputSchema, and PromptedOutputSchema all inherit from TextOutputSchema
allow_text_output = isinstance(output_schema, _output.TextOutputSchema)

return models.ModelRequestParameters(
function_tools=function_tool_defs,
allow_text_output=_output.allow_text_output(output_schema),
output_tools=output_schema.tool_defs() if output_schema is not None else [],
output_mode=output_schema.mode,
output_tools=output_tools,
output_object=output_object,
allow_text_output=allow_text_output,
)


@@ -484,7 +497,7 @@ async def _run_stream() -> AsyncIterator[_messages.HandleResponseEvent]:
# when the model has already returned text along side tool calls
# in this scenario, if text responses are allowed, we return text from the most recent model
# response, if any
if _output.allow_text_output(ctx.deps.output_schema):
if isinstance(ctx.deps.output_schema, _output.TextOutputSchema):
for message in reversed(ctx.state.message_history):
if isinstance(message, _messages.ModelResponse):
last_texts = [p.content for p in message.parts if isinstance(p, _messages.TextPart)]
@@ -507,10 +520,11 @@ async def _handle_tool_calls(
output_schema = ctx.deps.output_schema
run_context = build_run_context(ctx)

# first, look for the output tool call
final_result: result.FinalResult[NodeRunEndT] | None = None
parts: list[_messages.ModelRequestPart] = []
if output_schema is not None:

# first, look for the output tool call
if isinstance(output_schema, _output.ToolOutputSchema):
for call, output_tool in output_schema.find_tool(tool_calls):
try:
result_data = await output_tool.process(call, run_context)
@@ -568,9 +582,9 @@ async def _handle_text_response(

text = '\n\n'.join(texts)
try:
if _output.allow_text_output(output_schema):
# The following cast is safe because we know `str` is an allowed result type
result_data = cast(NodeRunEndT, text)
if isinstance(output_schema, _output.TextOutputSchema):
run_context = build_run_context(ctx)
result_data = await output_schema.process(text, run_context)
else:
m = _messages.RetryPromptPart(
content='Plain text responses are not permitted, please include your response in a tool call',
@@ -669,7 +683,7 @@ async def process_function_tools( # noqa C901
yield event
call_index_to_event_id[len(calls_to_run)] = event.call_id
calls_to_run.append((mcp_tool, call))
elif output_schema is not None and call.tool_name in output_schema.tools:
elif call.tool_name in output_schema.tools:
# if tool_name is in output_schema, it means we found a output tool but an error occurred in
# validation, we don't add another part here
if output_tool_name is not None:
@@ -809,7 +823,7 @@ def _unknown_tool(
) -> _messages.RetryPromptPart:
ctx.state.increment_retries(ctx.deps.max_result_retries)
tool_names = list(ctx.deps.function_tools.keys())
if output_schema := ctx.deps.output_schema:

output_schema = ctx.deps.output_schema
if isinstance(output_schema, _output.ToolOutputSchema):
tool_names.extend(output_schema.tool_names())

if tool_names:
@@ -886,7 +902,7 @@ def get_captured_run_messages() -> _RunMessages:
def build_agent_graph(
name: str | None,
deps_type: type[DepsT],
output_type: _output.OutputType[OutputT],
output_type: OutputSpec[OutputT],
) -> Graph[GraphAgentState, GraphAgentDeps[DepsT, result.FinalResult[OutputT]], result.FinalResult[OutputT]]:
"""Build the execution [Graph][pydantic_graph.Graph] for a given agent."""
nodes = (
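
The `_agent_graph.py` changes above replace the old `output_schema is not None` / `allow_text_output(...)` checks with `isinstance` dispatch on the output schema classes, and thread `output_mode` and `output_object` through to `ModelRequestParameters`. A toy, self-contained sketch of that dispatch pattern (the real classes live in the private `_output` module and have richer interfaces):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class OutputSchema:
    mode: str


@dataclass
class ToolOutputSchema(OutputSchema):
    tool_definitions: list[str] = field(default_factory=lambda: ['final_result'])

    def tool_defs(self) -> list[str]:
        return self.tool_definitions


@dataclass
class TextOutputSchema(OutputSchema):
    """Base for schemas that accept plain-text model responses."""


@dataclass
class NativeOutputSchema(TextOutputSchema):
    object_def: dict[str, Any] = field(default_factory=dict)


@dataclass
class PromptedOutputSchema(TextOutputSchema):
    pass


def build_request_parameters(output_schema: OutputSchema) -> dict[str, Any]:
    output_tools: list[str] = []
    output_object: dict[str, Any] | None = None

    if isinstance(output_schema, ToolOutputSchema):
        output_tools = output_schema.tool_defs()
    elif isinstance(output_schema, NativeOutputSchema):
        output_object = output_schema.object_def

    return {
        'output_mode': output_schema.mode,
        'output_tools': output_tools,
        'output_object': output_object,
        # Native and prompted schemas subclass TextOutputSchema, so text responses are allowed.
        'allow_text_output': isinstance(output_schema, TextOutputSchema),
    }


print(build_request_parameters(NativeOutputSchema(mode='native', object_def={'type': 'object'})))
```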
5 changes: 2 additions & 3 deletions pydantic_ai_slim/pydantic_ai/_cli.py
@@ -14,14 +14,13 @@

from typing_inspection.introspection import get_literal_values

from pydantic_ai.result import OutputDataT
from pydantic_ai.tools import AgentDepsT

from . import __version__
from ._run_context import AgentDepsT
from .agent import Agent
from .exceptions import UserError
from .messages import ModelMessage
from .models import KnownModelName, infer_model
from .output import OutputDataT

try:
import argcomplete
5 changes: 1 addition & 4 deletions pydantic_ai_slim/pydantic_ai/_function_schema.py
@@ -19,9 +19,8 @@
from pydantic_core import SchemaValidator, core_schema
from typing_extensions import Concatenate, ParamSpec, TypeIs, TypeVar, get_origin

from pydantic_ai.tools import RunContext

from ._griffe import doc_descriptions
from ._run_context import RunContext
from ._utils import check_object_json_schema, is_async_callable, is_model_like, run_in_executor

if TYPE_CHECKING:
@@ -281,6 +280,4 @@ def _build_schema(

def _is_call_ctx(annotation: Any) -> bool:
"""Return whether the annotation is the `RunContext` class, parameterized or not."""
from .tools import RunContext

return annotation is RunContext or get_origin(annotation) is RunContext
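
The `_is_call_ctx` helper shown above treats both the bare `RunContext` class and parameterized forms like `RunContext[MyDeps]` as context parameters, because `typing.get_origin` recovers the class from a parameterized alias. A standalone illustration with a stand-in generic class:

```python
from typing import Any, Generic, TypeVar, get_origin

AgentDepsT = TypeVar('AgentDepsT')


class RunContext(Generic[AgentDepsT]):
    """Stand-in for pydantic_ai's RunContext, just to demonstrate the check."""


def is_call_ctx(annotation: Any) -> bool:
    return annotation is RunContext or get_origin(annotation) is RunContext


print(is_call_ctx(RunContext))       # True: bare class
print(is_call_ctx(RunContext[int]))  # True: get_origin() returns RunContext for the alias
print(is_call_ctx(int))              # False
```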