New Common Tool: Batch #1769


Open
chasewalden opened this issue May 19, 2025 · 2 comments
Labels
Feature request New feature request

Comments

@chasewalden
Contributor

Description

Due to some apparent limitations of Claude 3.7 Sonnet, it is much less likely than other models to make parallel tool calls.
(See the relevant example in the Anthropic Cookbook.)

It would be helpful for there to be a built-in tool that would make this easy to implement.

Something along the lines of:

batch = BatchTool()

agent = Agent(..., tools=[batch])

@batch.tool
async def frobnicate(ctx: RunContext[...], foo: str): ...

@batch.tool_plain
async def encabulate(bar: int): ...

This implicitly creates a single tool with an input schema similar to:

@dataclass
class Invocation[Name: LiteralString, Args]:
    name: Name
    args: Args

@dataclass
class FooArgs:
    foo: str

@dataclass
class BarArgs:
    bar: int

@agent.tool
def batch(ctx: RunContext[...], invocations: list[Annotated[
    Invocation[Literal['foo'], FooArgs] | Invocation[Literal['bar'], BarArgs],
    Discriminator('name'),
]]) -> list[...]:
    ...
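The discriminated-union schema sketched above can be expressed with plain Pydantic today. This is an illustrative sketch only — the model and tool names (`FooInvocation`, `frobnicate`, etc.) are hypothetical, not part of any existing API:

```python
# Sketch of the input schema the proposed batch tool would generate,
# using Pydantic's discriminated unions to validate each invocation.
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field, TypeAdapter

class FooArgs(BaseModel):
    foo: str

class BarArgs(BaseModel):
    bar: int

class FooInvocation(BaseModel):
    name: Literal['frobnicate']
    args: FooArgs

class BarInvocation(BaseModel):
    name: Literal['encabulate']
    args: BarArgs

# The `name` field discriminates which invocation model each entry parses into.
Invocation = Annotated[
    Union[FooInvocation, BarInvocation],
    Field(discriminator='name'),
]

invocations = TypeAdapter(list[Invocation]).validate_python([
    {'name': 'frobnicate', 'args': {'foo': 'hello'}},
    {'name': 'encabulate', 'args': {'bar': 42}},
])
```

Each list entry is validated against the matching invocation model, so a single tool call can carry several well-typed sub-calls.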

References

No response

@chasewalden
Contributor Author

I guess an alternative is an option on the Agent constructor / Tool constructor / tool decorator to use batch tool calling, which would automatically create this tool under the hood.

I.e.

agent = Agent(..., batch_tools=True)  # all tools now merged into a single tool called batch

# or

@agent.tool(..., batched=True)  # if at least one tool has batched=True, the batch tool is defined
def foo(...): ...
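Under the hood, a flag like this could merge registered tools into one dispatcher. A minimal, framework-free sketch of that merging, with hypothetical names throughout:

```python
# Hypothetical sketch: collect functions marked for batching into a registry,
# then expose a single `batch` entry point that dispatches each invocation.
registry = {}

def batched_tool(fn):
    """Register a function under its name so `batch` can dispatch to it."""
    registry[fn.__name__] = fn
    return fn

@batched_tool
def frobnicate(foo: str) -> str:
    return foo.upper()

@batched_tool
def encabulate(bar: int) -> int:
    return bar * 2

def batch(invocations: list[dict]) -> list:
    # Run every requested call in a single tool invocation, in order.
    return [registry[inv['name']](**inv['args']) for inv in invocations]

results = batch([
    {'name': 'frobnicate', 'args': {'foo': 'hi'}},
    {'name': 'encabulate', 'args': {'bar': 21}},
])
# results == ['HI', 42]
```

A real implementation would also generate the discriminated-union input schema from each function's signature, but the dispatch itself stays this simple.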

@DouweM added the Feature request label May 19, 2025
@DouweM
Contributor

DouweM commented May 19, 2025

I like this as something PydanticAI could do automatically for models it knows are bad at parallel tool calling, similar to how in #1628 we're going to support different output modes (e.g. format=json_schema in addition to the current final_result output tool) and pick the best one based on the specific model chosen.

There (and here) the end-user would still have the ability to steer this, but it'd make most simple agent/tool definitions Just Work no matter what model is chosen, with PydanticAI doing the heavy lifting to get the most out of each model.
