Inferred strict=true may cause compatibility issues with OpenAI-compatible servers #1561
Comments
@chizukicn Thanks for raising this. This is really part of the bigger issue where "OpenAI-compatible" APIs are really only compatible with specific (older) versions of the OpenAI API, and even then they may behave differently on the details. It may be time to abstract this. @dmontagu, what do you think?
@chizukicn I'm going to look into fixing this -- just for completeness, which OpenAI-compatible API were you using here? I can see the model is Qwen but not the provider.
Thanks for looking into this! I'm using the OpenAI-compatible API provided by a Chinese cloud service platform: ppinfra.com (API endpoint: https://api.ppinfra.com/v3/openai).
@DouweM What I find a bit hard to understand is that, based on packet capture, when there is a required parameter in the MCP tool, it includes the argument
@chizukicn See here: https://github.com/pydantic/pydantic-ai/blob/main/pydantic_ai_slim/pydantic_ai/models/openai.py#L1014. OpenAI only supports strict mode when there are no optional fields. |
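The condition behind that linked line can be sketched roughly as follows. This is an illustrative simplification, not pydantic-ai's actual implementation: OpenAI's strict mode requires that every property in the tool's JSON schema be listed as required and that `additionalProperties` be `false`, so a schema with any optional field does not qualify.

```python
def is_strict_compatible(schema: dict) -> bool:
    """Illustrative check (hypothetical, simplified): a JSON schema qualifies
    for OpenAI strict mode only if additionalProperties is false and every
    declared property is also listed in "required"."""
    if schema.get("additionalProperties", True) is not False:
        return False
    properties = set(schema.get("properties", {}))
    required = set(schema.get("required", []))
    return properties == required

# Schema with an optional "unit" field: not strict-compatible.
optional_schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}, "unit": {"type": "string"}},
    "required": ["city"],
    "additionalProperties": False,
}

# Schema where every field is required: strict-compatible.
all_required_schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
    "additionalProperties": False,
}
```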
@chizukicn With the changes in #1835, you should be able to stop PydanticAI from sending `strict` by setting a model profile:

```python
model = OpenAIModel(
    model_name=...,
    provider=...,
    profile=OpenAIModelProfile(openai_supports_strict_tool_definition=False),
)
```

If you have a chance, could you verify that that works as expected? We could also include a new
Nice! This problem is resolved.
Initial Checks
Description
🐛 Bug Description
When tools with required parameters are defined via the MCP server, pydantic-ai automatically infers and inserts `"strict": true` into the request, even if the original tool definition explicitly sets `strict` to null.
This causes compatibility issues with some OpenAI-compatible servers that do not support the `strict` field.
🔍 Tool Registration JSON Example
Here is the actual JSON representation of a registered tool:
❌ Actual Request Payload
Despite the tool definition having strict as null, the request payload includes the inferred strict: true:
This request will cause compatibility issues with OpenAI-compatible servers that do not support the strict field.
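Since the captured JSON above was not preserved here, the mismatch can be illustrated with a hypothetical minimal example (tool name and fields are invented, not the reporter's actual capture): the tool is registered with `strict` unset, but the outgoing payload carries `strict: true`.

```python
# Hypothetical tool definition as registered via the MCP server:
# "strict" is explicitly left unset (null).
registered_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # invented tool name for illustration
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": None,
    },
}

# What the request payload ends up containing after inference:
# identical tool, but with strict flipped to true.
sent_tool = {
    **registered_tool,
    "function": {**registered_tool["function"], "strict": True},
}
```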
Example Code
Python, Pydantic AI & LLM client version
Note: This issue was drafted and refined with the help of ChatGPT.