Add `id` and `finish_reason` to Gemini #1800
base: main
Conversation
@davide-andreoli Thanks! Can you please have a look at the failing tests, and also implement this in the new …
@DouweM The test failures are due to the vendor_details field. I see two potential ways to address this:
Please let me know which approach you’d prefer. And sure, no problem with adding the fields to …
@davide-andreoli I had a chat about this with @dmontagu and we decided to make it so that dataclass fields with default values are excluded from reprs: #1812. So once that merges (should be soon), you can merge/rebase, drop …
Force-pushed from 28dec6a to 273e783
Hello @DouweM, work is done and tests are passing.
```python
for message in result.all_messages():
    if isinstance(message, ModelResponse):
        assert message.vendor_details is None
```
How come we're not getting a finish reason here, even though we don't fake a response like in the Gemini test, and the other agent runs in this file do get one? 🤔
We're not getting a finish reason here because there is no finish reason in the cassette file for this test, which is test_googe_no_finish_reason.yaml.
@davide-andreoli Just to make sure I understand correctly, did that response come without a finish reason, or was it manually edited out of the cassette? I worry about at some point regenerating the cassette and breaking the test. If it came without a finish reason, why this and not all of the other ones in the file?
@DouweM you're right, it does not make sense.
The test coverage is flagging the check for a None finish reason, since the code verifies whether it’s set. However, in practice, it’s difficult to obtain an API response that omits this field, as it’s currently included by default.
I added the check to make the code more resilient, but given the circumstances, I see two options: either remove the check from the implementation or mock the client response in the test to simulate the missing field.
Let me know which approach you prefer.
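The second option (mocking the client response to simulate the missing field) could look roughly like the sketch below, using `unittest.mock`. All names here are illustrative assumptions, not pydantic-ai's actual test fixtures or client API:

```python
from unittest.mock import MagicMock

# Build a fake API response whose first candidate carries no finish reason.
fake_response = MagicMock()
fake_response.response_id = 'resp-123'
candidate = MagicMock()
candidate.finish_reason = None  # simulate the field being absent from the response
fake_response.candidates = [candidate]

mock_client = MagicMock()
mock_client.generate_content.return_value = fake_response

# The implementation's guard would then leave vendor_details unset:
response = mock_client.generate_content('prompt')
finish_reason = response.candidates[0].finish_reason
vendor_details = {'finish_reason': finish_reason} if finish_reason is not None else None
assert vendor_details is None
```

Mocking this way exercises the `None` branch without hand-editing the cassette, so regenerating the cassette later would not silently break the test.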
Add `id` and `finish_reason` to Gemini

Description
Following PR #1761, this update adds `vendor_id` and `finish_reason` to Gemini answers. Below are the details:
- `vendor_id`: maps to `responseId`
- `finish_reason`: maps to `finishReason` for the first candidate of the output. The finish reason is added in `vendor_details`, following the attribute specification.

The handling of finish reasons aligns with the current approach for other response parts, where only the first candidate's reason is considered.

Testing
The changes were validated using my API key, and the behavior matches expectations.

Impact
This update impacts only Gemini responses, and it is not a breaking change.