Thanks for reaching out @eddieahn. I did a quick check in the OTel docs, and the spec does list `gen_ai.request.model` as an attribute of the invoke-agent span: https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-agent-spans/#invoke-agent-span. So in this case I think it's up to Langfuse to respect that configuration and not infer token usage when you set it. When I tested against a local Langfuse instance it did seem to work correctly, though.
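For reference, here is a rough sketch (plain Python, not Langfuse- or Agent-Framework-specific) of the attribute shape the OTel gen-ai semantic conventions describe for an invoke-agent span; the attribute keys are from the spec linked above, while the values are hypothetical:

```python
# Attributes the gen-ai semantic conventions allow on an invoke_agent
# span. Per the linked spec, gen_ai.request.model is a valid attribute
# on this span type, not only on LLM-call spans.
invoke_agent_attributes = {
    "gen_ai.operation.name": "invoke_agent",
    "gen_ai.agent.name": "example-agent",  # hypothetical value
    "gen_ai.request.model": "gpt-4o",      # hypothetical value
}

def has_request_model(attrs: dict) -> bool:
    """True if an invoke_agent span carries gen_ai.request.model."""
    return (
        attrs.get("gen_ai.operation.name") == "invoke_agent"
        and "gen_ai.request.model" in attrs
    )
```

So a backend receiving such a span cannot assume the presence of `gen_ai.request.model` implies an LLM call.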
When sending data from Microsoft Agent Framework to Langfuse, should the invoke_agent span have gen_ai.request.model? My assumption is that it's an orchestration span, not an LLM call, so in Langfuse the following issue occurs: even though Microsoft Agent Framework sets capture_usage=False, Langfuse still attempts to infer token usage because the model attribute is present on the span.
Is this an issue that needs to be fixed on the Langfuse side, or can we remove that specific field from being passed?
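If the field really must not reach Langfuse, one possible workaround is to strip it from span attributes before export. The helper below is only a sketch over a plain attribute mapping; wiring it into an actual OpenTelemetry span processor or exporter is omitted, since mutating finished spans depends on SDK internals:

```python
# Hypothetical workaround: drop gen_ai.request.model from a span's
# attributes before they are exported to the backend.
def strip_request_model(attrs: dict) -> dict:
    """Return a copy of the attributes without gen_ai.request.model."""
    return {k: v for k, v in attrs.items() if k != "gen_ai.request.model"}

span_attrs = {
    "gen_ai.operation.name": "invoke_agent",
    "gen_ai.request.model": "gpt-4o",  # hypothetical value
}
filtered = strip_request_model(span_attrs)
```

That said, since the semconv explicitly permits the attribute on invoke-agent spans, fixing the inference behavior on the Langfuse side seems like the cleaner resolution.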