Why do we receive this error even we set the token limit in the agent zero settings ? #1094
Unanswered
dulara905-bit asked this question in Q&A
Replies: 2 comments
I have instructed agent0 to change behaviour to 16384 tokens per send to the LLM, but this happens again.
This is not working any more.
litellm.exceptions.ContextWindowExceededError: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - This model's maximum context length is 16384 tokens. However, your request has 17823 input tokens. Please reduce the length of the input messages. (parameter=input_tokens, value=17823)

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 991, in async_streaming
    headers, response = await self.make_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 454, in make_openai_chat_completion_request
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 436, in make_openai_chat_completion_request
    await openai_aclient.chat.completions.with_raw_response.create(
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
    return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2589, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/openai/_base_client.py", line 1594, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16384 tokens. However, your request has 17823 input tokens. Please reduce the length of the input messages. (parameter=input_tokens, value=17823)", 'type': 'BadRequestError', 'param': 'input_tokens', 'code': 400}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/main.py", line 598, in acompletion
    response = await init_response
               ^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 1041, in async_streaming
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16384 tokens. However, your request has 17823 input tokens. Please reduce the length of the input messages. (parameter=input_tokens, value=17823)", 'type': 'BadRequestError', 'param': 'input_tokens', 'code': 400}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/a0/agent.py", line 454, in monologue
    agent_response, _reasoning = await self.call_chat_model(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/agent.py", line 811, in call_chat_model
    response, reasoning = await model.unified_call(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/a0/models.py", line 512, in unified_call
    _completion = await acompletion(
                  ^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/utils.py", line 1638, in wrapper_async
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/utils.py", line 1484, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/main.py", line 617, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2323, in exception_type
    raise e
  File "/opt/venv-a0/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 358, in exception_type
    raise ContextWindowExceededError(
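For context on the error above: the request itself (17823 input tokens) exceeds the model's 16384-token context window, so a per-send token cap does not help once the accumulated chat history is already too large. Below is a minimal sketch of pre-flight trimming, dropping the oldest non-system messages until the estimated prompt size fits. All names here (`trim_history`, `estimate_tokens`, the 4-characters-per-token heuristic, the reserved response budget) are illustrative assumptions, not agent-zero's or litellm's actual API; a real implementation would count tokens with the model's tokenizer.

```python
# Hypothetical sketch: trim oldest chat messages until the estimated
# prompt fits the model's context window, leaving room for the reply.
MAX_CONTEXT_TOKENS = 16384   # from the error message above
RESPONSE_BUDGET = 1024       # tokens reserved for the model's reply (assumed)


def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def trim_history(messages: list[dict],
                 limit: int = MAX_CONTEXT_TOKENS - RESPONSE_BUDGET) -> list[dict]:
    """Drop the oldest non-system messages until the prompt fits `limit`."""
    trimmed = list(messages)

    def total(msgs: list[dict]) -> int:
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while total(trimmed) > limit and len(trimmed) > 1:
        # Preserve the system prompt at index 0; drop the next-oldest message.
        drop_at = 1 if trimmed[0].get("role") == "system" else 0
        del trimmed[drop_at]
    return trimmed


# Example: a system prompt plus eight oversized user messages.
history = [{"role": "system", "content": "You are agent zero."}] + [
    {"role": "user", "content": "x" * 20000} for _ in range(8)
]
fitted = trim_history(history)
print(len(fitted), sum(estimate_tokens(m["content"]) for m in fitted))
# → 4 15004  (system prompt kept, oldest user messages dropped)
```

Trimming is lossy; agent frameworks typically prefer summarizing dropped history instead, but the size check before each call is the same idea.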