
running into request rate limiting error frequently for openAI models #782

Open
saipavankumar-muppalaneni opened this issue Jan 26, 2025 · 7 comments
Labels
Feature request New feature request

Comments

@saipavankumar-muppalaneni

saipavankumar-muppalaneni commented Jan 26, 2025

How can I set request rate limits in PydanticAI?

INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 25.677000 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 25.201000 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 26.583000 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 26.409000 seconds
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
INFO:openai._base_client:Retrying request to /chat/completions in 27.608000 seconds
@samuelcolvin
Member

Seems like a reasonable request, PR welcome.

@samuelcolvin added the Feature request label Jan 27, 2025
@iamaseem

I'd like to take this. Can you specify the reproduction steps, @samuelcolvin @saipavankumar-muppalaneni?
I also checked the Agent class, where retries are present, but they are not in Graph.

PS: The CONTRIBUTE.md is missing.

@saipavankumar-muppalaneni
Author

It is just an agentic workflow where the agent calls a couple of tools to get back search results for a query. The agent then decides whether the search results are satisfactory; if not, it makes another search request using the tool, and this loop continues until the agent is satisfied. This back and forth happens rapidly (at nearly 1 request per second), and that seems to be what triggers the rate-limiting error with OpenAI, I believe.

I tried to reproduce the error, and now my agent is hitting all kinds of rate limits: RPM, TPM, and content length. It would be a good idea to have control over these to avoid agent crashes in production.
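One client-side stopgap for the loop described above is to enforce a minimum interval between successive agent requests. This is only a sketch, not a pydantic-ai feature; the `MinIntervalThrottle` helper name is hypothetical:

```python
import asyncio
import time


class MinIntervalThrottle:
    """Hypothetical helper: enforce a minimum delay between requests.

    Call `await throttle.wait()` before each agent/tool request so that
    successive calls are spaced at least `min_interval` seconds apart.
    """

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0  # monotonic timestamp of the previous request
        self._lock = asyncio.Lock()  # serialize concurrent callers

    async def wait(self) -> None:
        async with self._lock:
            now = time.monotonic()
            delay = self._last + self.min_interval - now
            if delay > 0:
                await asyncio.sleep(delay)
            self._last = time.monotonic()
```

With the workflow above, one would call `await throttle.wait()` immediately before each `agent.run(...)` (or inside each tool) to cap the request rate at roughly `1 / min_interval` requests per second.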

@snake-speak

I use UsageLimits to work around this when it happens while searching my database with tools:

from pydantic_ai.usage import UsageLimits

result = await agent.run(
    user_input,
    deps=deps,
    usage_limits=UsageLimits(request_limit=3),
    message_history=result.all_messages() if result else None,
)

@mkrueger12

+1 this. I may be able to open the PR.

@grll

grll commented May 15, 2025

I just opened a PR for this #1734

@tekumara

The rate-limiting algorithm I like to use is GCRA, because it doesn't require a background process to leak the bucket. So would it make sense to add an API that allows custom rate limiters? LangChain has this.
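For reference, GCRA (Generic Cell Rate Algorithm) can be sketched in a few lines: instead of a bucket drained by a background task, it tracks a single "theoretical arrival time" (TAT) for the next conforming request. This is an illustrative sketch of a custom limiter one might plug into such an API, not anything pydantic-ai ships:

```python
import time


class GCRARateLimiter:
    """Client-side GCRA limiter sketch (no background drain task needed)."""

    def __init__(self, rate_per_sec: float, burst: int = 1):
        self.emission_interval = 1.0 / rate_per_sec  # ideal spacing between requests
        self.burst_tolerance = self.emission_interval * (burst - 1)  # slack for bursts
        self.tat = 0.0  # theoretical arrival time of the next conforming request

    def acquire(self) -> float:
        """Return 0.0 if the request conforms now, else seconds to wait."""
        now = time.monotonic()
        tat = max(self.tat, now)
        allow_at = tat - self.burst_tolerance
        if now < allow_at:
            return allow_at - now  # non-conforming: caller should sleep this long
        self.tat = tat + self.emission_interval  # consume one slot
        return 0.0
```

A caller would do `wait = limiter.acquire()` and `time.sleep(wait)` (or `await asyncio.sleep(wait)`) before each model request; `burst` controls how many requests may fire back-to-back before spacing kicks in.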
