Status: Open
Labels: AI Projects, Service Attention, customer-reported, needs-team-attention, question
## Description

- **Package Name:** azure-ai-projects
- **Package Version:** 2.0.0b2
- **Operating System:**
- **Python Version:**
### Describe the bug

When using the Responses API through the OpenAI client obtained from azure-ai-projects (`AIProjectClient.get_openai_client()`), the same input payload that works correctly with the official OpenAI Python SDK fails with a `400 invalid_payload` error.

This points to an input-schema inconsistency between:

- the official OpenAI Python SDK (`openai.AsyncOpenAI`)
- the OpenAI-compatible client provided by `azure-ai-projects`

Structured input arrays (message-based format with `role` and `content`) are accepted by the OpenAI SDK but rejected by the Azure client.
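For context, the OpenAI SDK accepts `input` to `responses.create()` either as a plain string or as a structured list of message items; a minimal sketch of the two shapes (values illustrative only):

```python
# Shape 1: a plain string prompt. Per the first validation error below,
# this appears to be the only shape the Azure-side validator accepts.
simple_input = "What is an eBike?"

# Shape 2: a structured list of role/content messages. The OpenAI SDK
# accepts this, but the azure-ai-projects client rejects it with a 400.
structured_input = [
    {
        "role": "user",
        "content": [{"type": "input_text", "text": "What is an eBike?"}],
    }
]
```
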
### To Reproduce

Steps to reproduce the behavior:

- Define the input payload:

```python
input_item_list = [
    {
        "role": "user",
        "content": [
            {
                "type": "input_text",
                "text": "What is an eBike?",
            }
        ],
    },
    {
        "role": "assistant",
        "content": [
            {
                "type": "output_text",
                "text": "An eBike is a bicycle with an electric motor that helps with pedaling.",
            }
        ],
    },
]
```

- Call the Responses API using the azure-ai-projects OpenAI client:
```python
import os

from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential
from dotenv import load_dotenv

load_dotenv()

credential = DefaultAzureCredential()
project_client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], credential=credential
)

openai_client = project_client.get_openai_client()

response = await openai_client.responses.create(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    input=input_item_list,
)
print(response.output_text)
```

- Error occurs:
```
---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[15], line 15
      9 project_client = AIProjectClient(
     10     endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"], credential=credential
     11 )
     13 openai_client = project_client.get_openai_client()
---> 15 response = await openai_client.responses.create(
     16     model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
     17     input=input_item_list,
     18 )
     19 print(f"Response output: {response.output_text}")

File ~/Code/ai_project_test/.venv/lib/python3.13/site-packages/openai/resources/responses/responses.py:2480, in AsyncResponses.create(self, background, conversation, include, input, instructions, max_output_tokens, max_tool_calls, metadata, model, parallel_tool_calls, previous_response_id, prompt, prompt_cache_key, prompt_cache_retention, reasoning, safety_identifier, service_tier, store, stream, stream_options, temperature, text, tool_choice, tools, top_logprobs, top_p, truncation, user, extra_headers, extra_query, extra_body, timeout)
   2442 async def create(
   2443     self,
   2444     *,
   (...)   2478     timeout: float | httpx.Timeout | None | NotGiven = not_given,
   2479 ) -> Response | AsyncStream[ResponseStreamEvent]:
-> 2480     return await self._post(
   2481         "/responses",
   2482         body=await async_maybe_transform(
   2483             {
   2484                 "background": background,
   2485                 "conversation": conversation,
   2486                 "include": include,
   2487                 "input": input,
   2488                 "instructions": instructions,
   2489                 "max_output_tokens": max_output_tokens,
   2490                 "max_tool_calls": max_tool_calls,
   2491                 "metadata": metadata,
   2492                 "model": model,
   2493                 "parallel_tool_calls": parallel_tool_calls,
   2494                 "previous_response_id": previous_response_id,
   2495                 "prompt": prompt,
   2496                 "prompt_cache_key": prompt_cache_key,
   2497                 "prompt_cache_retention": prompt_cache_retention,
   2498                 "reasoning": reasoning,
   2499                 "safety_identifier": safety_identifier,
   2500                 "service_tier": service_tier,
   2501                 "store": store,
   2502                 "stream": stream,
   2503                 "stream_options": stream_options,
   2504                 "temperature": temperature,
   2505                 "text": text,
   2506                 "tool_choice": tool_choice,
   2507                 "tools": tools,
   2508                 "top_logprobs": top_logprobs,
   2509                 "top_p": top_p,
   2510                 "truncation": truncation,
   2511                 "user": user,
   2512             },
   2513             response_create_params.ResponseCreateParamsStreaming
   2514             if stream
   2515             else response_create_params.ResponseCreateParamsNonStreaming,
   2516         ),
   2517         options=make_request_options(
   2518             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
   2519         ),
   2520         cast_to=Response,
   2521         stream=stream or False,
   2522         stream_cls=AsyncStream[ResponseStreamEvent],
   2523     )

File ~/Code/ai_project_test/.venv/lib/python3.13/site-packages/openai/_base_client.py:1797, in AsyncAPIClient.post(self, path, cast_to, body, files, options, stream, stream_cls)
   1783 async def post(
   1784     self,
   1785     path: str,
   (...)   1792     stream_cls: type[_AsyncStreamT] | None = None,
   1793 ) -> ResponseT | _AsyncStreamT:
   1794     opts = FinalRequestOptions.construct(
   1795         method="post", url=path, json_data=body, files=await async_to_httpx_files(files), **options
   1796     )
-> 1797     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)

File ~/Code/ai_project_test/.venv/lib/python3.13/site-packages/openai/_base_client.py:1597, in AsyncAPIClient.request(self, cast_to, options, stream, stream_cls)
   1594         await err.response.aread()
   1596     log.debug("Re-raising status error")
-> 1597     raise self._make_status_error_from_response(err.response) from None
   1599 break
   1601 assert response is not None, "could not resolve response (should never happen)"

BadRequestError: Error code: 400 - {'error': {'code': 'invalid_payload', 'message': 'Invalid payload', 'param': None, 'type': 'invalid_request_error', 'details': [{'code': 'ValidationError', 'message': 'type: Value is "array" but should be "string"', 'param': '/input', 'type': 'error'}, {'code': 'ValidationError', 'message': 'required: Required properties ["type"] are not present', 'param': '/input/1', 'type': 'error'}, {'code': 'ValidationError', 'message': 'type: Value is "array" but should be "string"', 'param': '/input/1/content', 'type': 'error'}, {'code': 'ValidationError', 'message': 'required: Required properties ["annotations"] are not present', 'param': '/input/1/content/0', 'type': 'error'}], 'additionalInfo': {'request_id': '0869495a74280b82327bd15e8e3bab2a'}}}
```
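The validation details in the error suggest the service wants an explicit `"type"` on each input item and an `"annotations"` array on `output_text` content parts. A hypothetical client-side transform along those lines (untested against the service; `add_missing_fields` is an illustrative name, not part of either SDK):

```python
def add_missing_fields(items):
    """Return a copy of the input item list with the fields the validator asks for.

    Adds "type": "message" to each item and an empty "annotations" list to
    output_text content parts; the originals are left unmodified.
    """
    fixed = []
    for item in items:
        new_item = {"type": "message", **item}
        new_content = []
        for part in new_item.get("content", []):
            part = dict(part)  # shallow copy so the caller's dicts are untouched
            if part.get("type") == "output_text":
                part.setdefault("annotations", [])
            new_content.append(part)
        new_item["content"] = new_content
        fixed.append(new_item)
    return fixed
```

This could then be tried as `input=add_missing_fields(input_item_list)`; whether it actually satisfies the Azure-side validator is the open question.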
### Expected behavior

The same input payload should work identically to the official OpenAI Python SDK:
```python
from dotenv import load_dotenv
from openai import AsyncOpenAI

load_dotenv()

client = AsyncOpenAI()

response = await client.responses.create(
    model="gpt-4o-mini",
    input=input_item_list,
)
print(response.output_text)

# Expected output
# An eBike, or electric bicycle, is a bicycle that incorporates an electric motor to assist with pedaling. This assistance can make riding easier and more enjoyable, especially on challenging terrains or for longer distances. eBikes typically come with features such as:
# 1. **Electric Motor**: Provides pedal assistance, making it easier to climb hills or ride against the wind.
# 2. **Battery**: Powers the motor and typically has a range of 20 to 50 miles, depending on the type of terrain and level of assistance.
# 3. **Pedal Assist**: Most eBikes use a system where the motor only engages when the rider pedals, rather than providing full throttle like a scooter.
# 4. **Throttle**: Some eBikes allow the rider to control the speed with a throttle, similar to a motorcycle.
# 5. **Regenerative Braking**: This feature helps recharge the battery when braking.
# eBikes come in various styles, including mountain bikes, city bikes, and cruisers, catering to different riding preferences and needs. They are increasingly popular for commuting, recreational riding, and reducing reliance on cars.
```
### Additional context

Is the Azure OpenAI client expected to fully support the OpenAI Responses API input schema?
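Until that is clarified, one possible fallback, given that the first validation error says `/input` should be a string, is to flatten the message list into a single prompt string. A minimal sketch (`flatten_to_prompt` is an illustrative helper, not part of either SDK, and it discards the typed-message structure):

```python
def flatten_to_prompt(items):
    """Collapse a structured item list into one plain prompt string,
    one "role: text" line per message."""
    lines = []
    for item in items:
        text = " ".join(part.get("text", "") for part in item.get("content", []))
        lines.append(f"{item['role']}: {text}")
    return "\n".join(lines)
```

This loses the distinction between `input_text` and `output_text` parts, so it is a stopgap rather than a fix for the schema mismatch.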