fix(core): on_llm_new_token
correct casting for .content to string
#33498
base: master
Conversation
CodSpeed Performance Report: Merging #33498 will not alter performance.
run_manager.on_llm_new_token(
    json.dumps(chunk.message.content), chunk=chunk
)
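The patched call can be sketched in isolation. The dict shapes below are illustrative of Anthropic-style content blocks, not the actual langchain chunk classes:

```python
import json

# Illustrative content list an Anthropic-style model can stream: .content is
# a list of typed blocks rather than a plain string (shapes are assumptions,
# not the actual langchain types).
content = [
    {"type": "text", "text": "Hello"},
    {"type": "tool_use", "name": "search", "input": {"q": "weather"}},
]

# The patched call serializes the whole structure, so the callback always
# receives a str even when .content is a list of dicts.
token = json.dumps(content)

assert isinstance(token, str)
assert json.loads(token) == content  # round-trips losslessly
```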
Do we want to json.dumps the content, or just call chunk.message.text as the original issue proposed?

My bias is chunk.message.text, which will represent tokens of text content. It will miss tokens from reasoning, tool calls, and other content, but I don't think on_llm_new_token was designed to support those well, so my thought is to go with well-defined behavior for now.
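The trade-off described above can be sketched as follows. Here `text_only` is a hypothetical stand-in for the `chunk.message.text` property, and the block shapes are assumptions rather than the actual langchain content types:

```python
import json

# Illustrative content mixing a text block with a tool-call block (shapes
# are assumptions, not the actual langchain content types).
content = [
    {"type": "text", "text": "The answer is 42."},
    {"type": "tool_use", "name": "calculator", "input": {"expr": "6*7"}},
]

def text_only(blocks: list) -> str:
    # Stand-in for a .text-style property: concatenate only text blocks,
    # silently dropping reasoning, tool calls, and other block types.
    return "".join(b["text"] for b in blocks if b.get("type") == "text")

dumped = json.dumps(content)   # preserves every block, but as raw JSON text
text = text_only(content)      # well-defined text tokens; tool call is lost

assert text == "The answer is 42."
assert "tool_use" in dumped and "tool_use" not in text
```

So json.dumps keeps all information at the cost of emitting raw JSON into a text-oriented callback, while a text-only property yields clean text tokens but drops non-text blocks.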
This requires very careful design review. It's possible that the type is wrong and the callback handler actually needs to support all content, not just text. We need to check how things are wired.
@eyurtsev right now it only supports text. Do you mean we need to update on_llm_new_token?

The on_llm_new_token function should only accept values of type string. However, the .content attribute can sometimes be a list, so it's unclear why it wasn't being cast to a string. This PR addresses that by adding the necessary type casting.

Fixes: on_llm_new_token gets list of dicts instead of string for Anthropic models #30703
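The linked issue boils down to downstream handlers assuming on_llm_new_token receives a str. A minimal sketch of that failure mode, using a hypothetical handler (not actual langchain code):

```python
class PrintBuffer:
    """Hypothetical callback handler that, like many real ones, assumes
    on_llm_new_token receives a str and concatenates it."""

    def __init__(self) -> None:
        self.buffer = ""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.buffer += token  # str + list raises TypeError

handler = PrintBuffer()
handler.on_llm_new_token("Hello, ")  # a plain-string token works fine

try:
    # A list-of-dicts token, as reported for Anthropic models in #30703.
    handler.on_llm_new_token([{"type": "text", "text": "world"}])
    failed = False
except TypeError:
    failed = True

assert failed and handler.buffer == "Hello, "
```

Casting .content to a string before invoking the callback keeps such handlers working regardless of the provider's content shape.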