bug: Usage input/output 0 after upgrading from v2.56.2 to v2.57.1 #4974
Comments
Hey @jerry-reevo, I'm here to help you with the bug you've encountered. I can assist with bug fixes, answer questions, and guide you toward becoming a contributor.

The issue you're experiencing, input/output usage reported as 0/0 after upgrading to v2.57.1 of the Langfuse Python SDK, appears to be specific to LangChain traces with Bedrock LLM calls. It does not affect your other LLM calls made through LiteLLM to OpenAI, Anthropic, and Gemini, which are tracked correctly.

I couldn't find specific changes in the Langfuse Python SDK v2.57.1 that might have caused this issue. It might be beneficial to check the release notes or changelog for v2.57.1 for any updates related to Bedrock LLM calls or input/output usage tracking, as these could provide insight into the root cause. In the meantime, reverting to v2.56.2, as you've done, seems to be a practical workaround until the issue is resolved in a future update. If you're interested in contributing a fix, examining the differences in the
Which version of Langfuse Server do you use, or are you on Langfuse Cloud?

Edit: thanks for confirming that you use Langfuse Cloud. We will look into this.
Thanks for reporting this @jerry-reevo! We have a fix for this in the linked PR. Please try that out once released and let us know whether this is fixed!

@hassiebp Thanks for the update! I'll follow and look out for the release.

@jerry-reevo Thanks for your patience, the fix went out in 2.57.8 - could you please retry with that version?
@hassiebp I just tested with v2.57.8 and the issue appears to be fixed. Just for my own understanding, could you point out the code change that fixed it?
@jerry-reevo Thanks for letting us know! We have server-side support for parsing the OpenAI usage details schema. In our LangChain integration, we had to remove any SDK-side parsing into OpenAI schemas that would otherwise break the server-side parsing. See here: https://github.com/langfuse/langfuse-python/blob/40cb0e168175c7651d225a02302407ec3b902ffd/langfuse/callback/langchain.py#L1073
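Conceptually, the failure mode can be sketched like this (a hypothetical simplification for illustration only, not the actual SDK code; the real change is in the linked langchain.py). Bedrock/Anthropic report token usage under keys like input_tokens and output_tokens, so an SDK-side remap that expects OpenAI-style keys (prompt_tokens, completion_tokens) silently falls back to 0, while passing the raw payload through lets the server-side parser handle it:

```python
def remap_to_openai_schema(usage: dict) -> dict:
    """Old (problematic) behavior, simplified: coerce usage into an
    OpenAI-shaped dict before sending to the server. Bedrock uses
    input_tokens/output_tokens, so these lookups miss and default to 0,
    reproducing the reported 0/0 symptom (total still comes through)."""
    return {
        "input": usage.get("prompt_tokens", 0),
        "output": usage.get("completion_tokens", 0),
        "total": usage.get("total_tokens", 0),
    }


def pass_through_usage(usage: dict) -> dict:
    """Fixed behavior, conceptually: forward the provider's raw usage
    payload unchanged so the server-side parser can interpret it."""
    return dict(usage)


# Usage keys as a Bedrock Anthropic model reports them:
bedrock_usage = {"input_tokens": 120, "output_tokens": 45, "total_tokens": 165}
```

With the remap, bedrock_usage yields input 0 / output 0 / total 165, matching the symptom in this issue; the pass-through preserves all three counts.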
Describe the bug
Specifically for LangChain traces that make LLM calls to Bedrock, we've noticed that input/output usage is reported as 0/0 after upgrading to v2.57.1 of the Python SDK. The total is still present, but because input and output are 0, there is no more cost tracking.
Simply reverting to v2.56.2 causes input/output usage and cost to populate again.

All our other LLM calls going through LiteLLM to OpenAI, Anthropic, and Gemini appear to have usage tracked as expected.
To reproduce
Simply use version v2.57.1 or newer of the Python SDK and create a LangChain chain with a ChatBedrock LLM. In our case, we were using anthropic.claude-3-5-sonnet-20240620-v1:0. We also use the Langfuse CallbackHandler attached as a callback to the .invoke(..., config={...}) call.

SDK and container versions
v2.57.1+
Python 3.12
Additional information
No response
Are you interested to contribute a fix for this bug?
Yes