Conversation

@TheAli711 (Contributor)

The Issue

The livekit-google-plugin uses the v1beta1 routes by default, and we pass an `id` in the `function_call` and `function_response` structures, which the v1beta1 route accepts.
But when we switch to the v1 routes, they don't accept it and we get the following error:

```
2025-11-12 15:09:59,888 - INFO livekit.agents - LLM metrics {"model_name": "gemini-2.5-flash", "model_provider": "Vertex AI", "ttft": 1.6, "prompt_tokens": 39, "prompt_cached_tokens": 0, "completion_tokens": 5, "tokens_per_second": 3.0, "pid": 339759, "job_id": "AJ_KdqHszSN4KCf"}
2025-11-12 15:09:59,889 - DEBUG livekit.agents - executing tool {"function": "get_current_time", "arguments": "{}", "speech_id": "speech_a605a9406d6e", "pid": 339759, "job_id": "AJ_KdqHszSN4KCf"}
2025-11-12 15:09:59,890 - DEBUG livekit.agents - tools execution completed {"speech_id": "speech_a605a9406d6e", "pid": 339759, "job_id": "AJ_KdqHszSN4KCf"}
2025-11-12 15:09:59,892 - INFO google_genai.models - AFC is enabled with max remote calls: 10. {"pid": 339759, "job_id": "AJ_KdqHszSN4KCf"}
2025-11-12 15:10:00,845 - ERROR livekit.agents - Error in _llm_inference_task
Traceback (most recent call last):
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/plugins/google/llm.py", line 353, in _run
    async for response in stream:
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/models.py", line 7569, in async_generator
    response = await self._generate_content_stream(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/models.py", line 6476, in _generate_content_stream
    response_stream = await self._api_client.async_request_streamed(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 1116, in async_request_streamed
    response = await self._async_request(http_request=http_request, stream=True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 1046, in _async_request
    return await self._async_retry(  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 111, in __call__
    do = await self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 153, in iter
    result = await action(retry_state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/_utils.py", line 99, in inner
    return call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/__init__.py", line 418, in exc_check
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/__init__.py", line 185, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 114, in __call__
    result = await fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 996, in _async_request_once
    await errors.APIError.raise_for_async_response(response)
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/google/genai/errors.py", line 155, in raise_for_async_response
    raise ClientError(status_code, response_json, response)
google.genai.errors.ClientError: 400 Bad Request. {'message': '{\n  "error": {\n    "code": 400,\n    "message": "Invalid JSON payload received. Unknown name \\"id\\" at \'contents[1].parts[0].function_call\': Cannot find field.\\nInvalid JSON payload received. Unknown name \\"id\\" at \'contents[2].parts[0].function_response\': Cannot find field.",\n    "status": "INVALID_ARGUMENT",\n    "details": [\n      {\n        "@type": "type.googleapis.com/google.rpc.BadRequest",\n        "fieldViolations": [\n          {\n            "field": "contents[1].parts[0].function_call",\n            "description": "Invalid JSON payload received. Unknown name \\"id\\" at \'contents[1].parts[0].function_call\': Cannot find field."\n          },\n          {\n            "field": "contents[2].parts[0].function_response",\n            "description": "Invalid JSON payload received. Unknown name \\"id\\" at \'contents[2].parts[0].function_response\': Cannot find field."\n          }\n        ]\n      }\n    ]\n  }\n}\n', 'status': 'Bad Request'}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/agents/utils/log.py", line 16, in async_fn_logs
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/opentelemetry/util/_decorator.py", line 71, in async_wrapper
    return await func(*args, **kwargs)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/agents/voice/generation.py", line 125, in _llm_inference_task
    async for chunk in llm_node:
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/agents/voice/agent.py", line 402, in llm_node
    async for chunk in stream:
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/agents/llm/llm.py", line 344, in __anext__
    raise exc  # noqa: B904
    ^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/opentelemetry/util/_decorator.py", line 71, in async_wrapper
    return await func(*args, **kwargs)  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/agents/llm/llm.py", line 190, in _main_task
    return await self._run()
           ^^^^^^^^^^^^^^^^^
  File "/home/ali/dev/livekit/venv/lib/python3.12/site-packages/livekit/plugins/google/llm.py", line 395, in _run
    raise APIStatusError(
livekit.agents._exceptions.APIStatusError: gemini llm: client error (status_code=400, request_id=a18dad7d36dd, body={
  "error": {
    "code": 400,
    "message": "Invalid JSON payload received. Unknown name \"id\" at 'contents[1].parts[0].function_call': Cannot find field.\nInvalid JSON payload received. Unknown name \"id\" at 'contents[2].parts[0].function_response': Cannot find field.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.BadRequest",
        "fieldViolations": [
          {
            "field": "contents[1].parts[0].function_call",
            "description": "Invalid JSON payload received. Unknown name \"id\" at 'contents[1].parts[0].function_call': Cannot find field."
          },
          {
            "field": "contents[2].parts[0].function_response",
            "description": "Invalid JSON payload received. Unknown name \"id\" at 'contents[2].parts[0].function_response': Cannot find field."
          }
        ]
      }
    ]
  }
}
 Bad Request, retryable=False) {"pid": 339759, "job_id": "AJ_KdqHszSN4KCf"}
```
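For reference, here is the shape of the rejected request body, reconstructed from the `fieldViolations` above (the prompt text and the `call_1` id are illustrative, not taken from a real request): the nested `id` keys are the unknown fields on the v1 route.

```python
# Illustrative reconstruction of the request body the v1 route rejects.
# Only the structure matters; it matches contents[1].parts[0].function_call
# and contents[2].parts[0].function_response from the error above.
contents = [
    {"role": "user", "parts": [{"text": "What time is it?"}]},
    {
        "role": "model",
        "parts": [
            {
                "function_call": {
                    "id": "call_1",  # accepted by v1beta1, unknown field on v1
                    "name": "get_current_time",
                    "args": {},
                }
            }
        ],
    },
    {
        "role": "user",
        "parts": [
            {
                "function_response": {
                    "id": "call_1",  # same rejection here
                    "name": "get_current_time",
                    "response": {"output": "..."},
                }
            }
        ],
    },
]
```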

If we remove `id` from the payload, it works for both the v1 and v1beta1 routes:

```
2025-11-12 15:33:03,194 - DEBUG livekit.agents - executing tool {"function": "get_current_time", "arguments": "{}", "speech_id": "speech_129d7c5ea983", "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:03,195 - DEBUG livekit.agents - tools execution completed {"speech_id": "speech_129d7c5ea983", "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:03,197 - INFO google_genai.models - AFC is enabled with max remote calls: 10. {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:04,513 - INFO google_genai.models - AFC remote call 1 is done. {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
get_current_time called
2025-11-12 15:33:04,518 - INFO livekit.agents - LLM metrics {"model_name": "gemini-2.5-flash", "model_provider": "Vertex AI", "ttft": 1.32, "prompt_tokens": 70, "prompt_cached_tokens": 0, "completion_tokens": 5, "tokens_per_second": 3.78, "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:04,519 - DEBUG livekit.agents - executing tool {"function": "get_current_time", "arguments": "{}", "speech_id": "speech_129d7c5ea983", "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:04,519 - DEBUG livekit.agents - tools execution completed {"speech_id": "speech_129d7c5ea983", "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:04,521 - INFO google_genai.models - AFC is enabled with max remote calls: 10. {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:05,906 - INFO google_genai.models - AFC remote call 1 is done. {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:06,012 - DEBUG livekit.agents - http_session(): creating a new httpclient ctx {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:06,106 - INFO livekit.agents - LLM metrics {"model_name": "gemini-2.5-flash", "model_provider": "Vertex AI", "ttft": 1.39, "prompt_tokens": 101, "prompt_cached_tokens": 0, "completion_tokens": 41, "tokens_per_second": 25.86, "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:06,106 - DEBUG livekit.agents - generated assistant message id=I'm doing well, thank you for asking! It's currently 10:33 AM on November 12, 2025. How can I help you today? {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:08,400 - INFO livekit.agents - TTS metrics {"model_name": "mistv2", "model_provider": "Rime", "ttfb": 2.3548044549825136, "audio_duration": 1.8, "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:09,157 - DEBUG livekit.agents - flush audio emitter due to slow audio generation {"pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:09,916 - INFO livekit.agents - TTS metrics {"model_name": "mistv2", "model_provider": "Rime", "ttfb": 0.5759212620032486, "audio_duration": 3.58, "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
2025-11-12 15:33:10,482 - INFO livekit.agents - TTS metrics {"model_name": "mistv2", "model_provider": "Rime", "ttfb": 0.5426903050101828, "audio_duration": 1.23, "pid": 344547, "job_id": "AJ_Jruwgtd7py23"}
```

Removed the `id` field from `function_call` and `function_response`.
v1beta1 was accepting it, but v1 is not.
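A minimal sketch of the change, assuming the `google-genai` `types` API (the helper names below are illustrative, not the plugin's actual functions): build the parts without `id` so the serialized payload validates on both routes.

```python
# Minimal sketch (illustrative helper names, not the plugin's actual code):
# omit `id` when building function call / function response parts.
from google.genai import types


def make_function_call_part(name: str, args: dict) -> types.Part:
    # Before: types.FunctionCall(id=call_id, name=name, args=args)
    # The v1 route rejects `id` ("Unknown name \"id\" ... Cannot find field"),
    # so the field is simply not set on the wire.
    return types.Part(function_call=types.FunctionCall(name=name, args=args))


def make_function_response_part(name: str, output: dict) -> types.Part:
    return types.Part(
        function_response=types.FunctionResponse(name=name, response=output)
    )
```

Dropping `id` from the wire format should be safe here, since the agent framework can keep tracking tool-call ids in its own chat context and the model matches function responses by `name` and position.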
@Is44m (Contributor) commented Nov 12, 2025

Bumping up!
