I guess an alternative is an option on the Agent constructor / Tool constructor / tool decorator to use batch tool calling, which would automatically create this tool under the hood.
I.e.:

```python
agent = Agent(..., batch_tools=True)  # all tools now merged into a single tool called `batch`

# or

@agent.tool(..., batched=True)  # if at least 1 tool has batched=True, then the batch tool is defined
def foo(...): ...
```
I like this as something PydanticAI could do automatically for models it knows are bad at parallel tool calling, similar to how in #1628 we're going to be supporting different output modes (e.g. format=json_schema in addition to the current final_result output tool), and pick the best one based on the specific model chosen.
There (and here) the end-user would still have the ability to steer this, but it'd make most simple agent/tool definitions Just Work no matter what model is chosen, with PydanticAI doing the heavy lifting to get the most out of each model.
Description
Due to some limitations(?) with Claude 3.7 Sonnet, it is much less likely than earlier models to make parallel tool calls.
(See here in the Anthropic Cookbook)
It would be helpful to have a built-in tool that makes this easy to work around.
Something along the lines of:
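(The original snippet didn't survive extraction; below is a minimal framework-free sketch of the idea. The `batch` dispatcher, the `tool_name`/`args` field names, and the `get_weather`/`get_time` tools are illustrative assumptions, not existing PydanticAI API.)

```python
from typing import Any, Callable

# Hypothetical registry standing in for the agent's individual tools.
TOOLS: dict[str, Callable[..., Any]] = {
    "get_weather": lambda city: f"sunny in {city}",
    "get_time": lambda tz: f"12:00 {tz}",
}

def batch(invocations: list[dict[str, Any]]) -> list[Any]:
    """Single merged tool: the model passes a list of invocations,
    each naming one of the wrapped tools plus its arguments, and the
    results come back in order — one model turn instead of N."""
    results: list[Any] = []
    for inv in invocations:
        tool = TOOLS[inv["tool_name"]]
        results.append(tool(**inv["args"]))
    return results
```

A model that is reluctant to emit parallel tool calls can still request several operations in one call, e.g. `batch([{"tool_name": "get_weather", "args": {"city": "Paris"}}, {"tool_name": "get_time", "args": {"tz": "UTC"}}])`.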
This implicitly creates a single tool whose input schema would look something like:
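(The schema from the original post is missing; this is a hedged reconstruction. The shape — an `invocations` array whose items are a union over each wrapped tool's parameter schema, discriminated by `tool_name` — is an assumption, and `get_weather` is just an example tool:)

```python
# Hypothetical JSON schema for the merged `batch` tool's input.
BATCH_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "invocations": {
            "type": "array",
            "items": {
                "anyOf": [
                    {
                        "type": "object",
                        "properties": {
                            "tool_name": {"const": "get_weather"},
                            "args": {
                                "type": "object",
                                "properties": {"city": {"type": "string"}},
                                "required": ["city"],
                            },
                        },
                        "required": ["tool_name", "args"],
                    },
                    # ...one `anyOf` branch per wrapped tool...
                ]
            },
        }
    },
    "required": ["invocations"],
}
```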