Open
Labels: azure (Issues related to the Azure-hosted OpenAI models), bug (Something isn't working)
Description
What version of Codex is running?
codex-cli 0.113.0
What subscription do you have?
BYOM
Which model were you using?
llm-gateway/gpt-5.4 (via LiteLLM proxy routing to Azure AI Foundry)
What platform is your computer?
macOS (darwin 25.3.0)
What terminal emulator and version are you using (if applicable)?
zsh
What issue are you seeing?
After a few tool-call round-trips, the call_id field sent in the request payload exceeds the 64-character maximum enforced by Azure AI Foundry, causing a 400 error:
```
litellm.BadRequestError: AzureException BadRequestError - {
  "error": {
    "message": "Invalid 'input[4].call_id': string too long. Expected a string with maximum length 64, but got a string with length 1324 instead.",
    "type": "invalid_request_error",
    "param": "input[4].call_id",
    "code": "string_above_max_length"
  }
}
```
Once this error occurs, the session is unrecoverable — every subsequent request in the same session fails with the same error. The only workaround is starting a new session.
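Until this is fixed upstream, one conceivable mitigation (a hypothetical sketch, not part of Codex or LiteLLM) is a proxy-side rewrite that replaces any over-long `call_id` in the Responses-style `input` list with a stable hash-derived id under 64 characters. Hashing the same original id deterministically keeps a `function_call` paired with its `function_call_output`:

```python
import hashlib

MAX_CALL_ID_LEN = 64  # limit enforced by Azure AI Foundry


def shorten_call_ids(items):
    """Rewrite over-long call_id values in a Responses-style `input` list.

    The same original id always maps to the same shortened id, so a
    function_call item and its function_call_output stay paired.
    Returns the original-to-shortened mapping for debugging.
    """
    mapping = {}
    for item in items:
        call_id = item.get("call_id")
        if call_id is None or len(call_id) <= MAX_CALL_ID_LEN:
            continue
        if call_id not in mapping:
            digest = hashlib.sha256(call_id.encode()).hexdigest()  # 64 hex chars
            # "call_" prefix plus a truncated digest fits in 64 characters
            mapping[call_id] = "call_" + digest[: MAX_CALL_ID_LEN - 5]
        item["call_id"] = mapping[call_id]
    return mapping
```

This is purely illustrative of where the fix could live; the item shape and field names follow the error above, not any confirmed Codex internals.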
Related but distinct from:
- invalid_request_error: Invalid 'input[15].arguments': string too long #13270 (`arguments` too long: different field, different limit)
- "Invalid 'input[213].encrypted_content': string too long." #10506 (`encrypted_content` too long: different field)
- Empty function name cause Azure to reject the request #7094 (empty function name on Azure; fixed)
What steps can reproduce the bug?
- Configure Codex with a LiteLLM proxy routing to Azure AI Foundry:

  ```toml
  [model_providers.llm-gateway]
  name = "llm-gateway"
  base_url = "<litellm-proxy-url>"
  env_key = "LLM_GATEWAY_KEY"
  wire_api = "responses"
  ```
- Start a session with a model routed through Azure AI Foundry (e.g. `llm-gateway/gpt-5.4`)
- Interact until tool calls are generated (e.g. ask it to read a file or run a command)
- After several tool-call round-trips the request is rejected with the error above
What is the expected behavior?
call_id values in the request payload should stay within the 64-character limit accepted by OpenAI-compatible backends.
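The invariant can be stated as a small guard (a hypothetical check, shown only to make the expected behavior concrete; the 64-character limit comes from the error message above):

```python
def check_call_id_lengths(items, limit=64):
    """Return indices of input items whose call_id exceeds the limit.

    An empty result means the payload satisfies the length constraint
    that Azure AI Foundry enforces on call_id.
    """
    return [
        i
        for i, item in enumerate(items)
        if len(item.get("call_id", "")) > limit
    ]
```

A compliant payload would yield an empty list; the failing request above would flag index 4.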