
Reasoning response support #907


Open
samuelcolvin opened this issue Feb 12, 2025 · 8 comments · May be fixed by #1142
Comments
@samuelcolvin
Member

See:

No idea yet how this should look, but we should try to support it.

@samuelcolvin samuelcolvin added the Feature request New feature request label Feb 12, 2025
@Wh1isper
Contributor

Wh1isper commented Mar 5, 2025

I've used some pretty tricky ways to implement reasoning in the pydantic-ai-bedrock package: ai-zerolab/pydantic-ai-bedrock#15

Based on this experience, I think we need to add a ReasoningPart, and each Model implementation would then handle the object transformation between that part and its provider's format. Beyond that, we need to decide how the Agent returns reasoning; perhaps wrapping it in a `<Thinking>` tag would be better?
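To make the suggestion concrete, here is a minimal sketch of what a dedicated reasoning part could look like next to an ordinary text part, with reasoning wrapped in `<Thinking>` tags when rendered. All names here (`ReasoningPart`, `TextPart`, `render`) are illustrative assumptions, not pydantic-ai's actual API:

```python
from dataclasses import dataclass
from typing import Literal, Union

# Hypothetical message parts; names are illustrative, not pydantic-ai's real types.

@dataclass
class TextPart:
    content: str
    part_kind: Literal['text'] = 'text'

@dataclass
class ReasoningPart:
    content: str  # the model's reasoning / chain-of-thought text
    part_kind: Literal['reasoning'] = 'reasoning'

def render(parts: list) -> str:
    """Render parts to a single string, wrapping reasoning in <Thinking> tags."""
    out = []
    for part in parts:
        if isinstance(part, ReasoningPart):
            out.append(f'<Thinking>{part.content}</Thinking>')
        else:
            out.append(part.content)
    return '\n'.join(out)

print(render([ReasoningPart(content='step 1: ...'), TextPart(content='final answer')]))
```

Each Model implementation would construct `ReasoningPart` from its provider's native reasoning payload (e.g. Bedrock's reasoning blocks), keeping the Agent-facing representation uniform.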

@Kludex Kludex marked this as a duplicate of #1080 Mar 16, 2025
@Kludex Kludex marked this as a duplicate of #731 Mar 17, 2025
@soichisumi

I would like to request support for Claude's Extended thinking.
While it may be difficult to keep compatible with other models, this feature provides robust, flexible reasoning and integrates with tool use.
https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking?q=thinking#how-extended-thinking-works

@arty-hlr

arty-hlr commented Apr 3, 2025

Maybe OpenRouter's reasoning tokens should also be added (https://openrouter.ai/docs/use-cases/reasoning-tokens), for example for people who use Claude 3.7 through it. At the moment no reasoning is shown.

@arty-hlr

arty-hlr commented Apr 7, 2025

Link to relevant slack conversation:
https://pydanticlogfire.slack.com/archives/C083V7PMHHA/p1743405703872439

@pedroallenrevez
Contributor

Any news on this?

@brycedrennan

I want to use pydantic-ai at work but not being able to use reasoning models is one of the things holding me back.

@Kludex Kludex linked a pull request Apr 19, 2025 that will close this issue
3 tasks
@aiizloli-ecs

Any update on this?

@seangal2

Hi,

I’d like to express interest in supporting extended reasoning models, and contribute a few observations.

Looking at PR #1142, I noticed that the current implementation returns the summary field from OpenAI as the reasoning part. However, OpenAI’s responses API also supports an optional encrypted_content field (reference, docs), which is useful for including full reasoning traces in follow-up interactions. This is especially recommended when using tool calling (docs).

To support this, I suggest either:

  • Extending ThinkingPart to include the encrypted_content field, or
  • Introducing a new kind of part specifically for encrypted or provider-specific reasoning items.

It’s worth noting that encrypted_content is provider-dependent and may not be interoperable; for example, it can’t be mapped to Anthropic’s redacted_thinking. Still, a unified abstraction might help represent these different formats consistently.
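As a rough illustration of the first option, here is a sketch of a thinking part carrying an optional opaque provider payload, and a mapper that only replays the payload back to the provider that produced it. The class, field, and function names are assumptions for discussion, not the actual API in PR #1142:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Illustrative sketch only: a thinking part with an optional provider-specific
# opaque blob (e.g. OpenAI's encrypted_content). Names are hypothetical.

@dataclass
class ThinkingPart:
    content: str                              # human-readable reasoning summary
    encrypted_content: Optional[str] = None   # opaque blob, replayed verbatim
    provider: Optional[str] = None            # provider that produced the blob
    part_kind: Literal['thinking'] = 'thinking'

def to_openai_reasoning_item(part: ThinkingPart) -> dict:
    """Map a ThinkingPart back to an OpenAI Responses-style reasoning item
    for follow-up turns. The opaque blob is only included when it originated
    from OpenAI; blobs from other providers are dropped, since they can't be
    mapped (e.g. Anthropic's redacted_thinking is not interchangeable)."""
    item = {
        'type': 'reasoning',
        'summary': [{'type': 'summary_text', 'text': part.content}],
    }
    if part.provider == 'openai' and part.encrypted_content is not None:
        item['encrypted_content'] = part.encrypted_content
    return item

part = ThinkingPart(content='summary', encrypted_content='opaque-blob', provider='openai')
print(to_openai_reasoning_item(part))
```

Keeping the blob on the same part (rather than a separate part kind) keeps message history simple, at the cost of a field that most providers will leave as None.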

Happy to help further if there’s interest!

9 participants