
[bug] cannot get current span for langgraph with async stream of events #1304

Open
axiomofjoy opened this issue Feb 21, 2025 · 2 comments
Labels: bug (Something isn't working) · language: python (Related to Python integration) · triage (Issues that require triage)

axiomofjoy (Contributor) commented Feb 21, 2025

Our get_current_span and get_ancestor_spans helper methods appear not to work when using astream_events.

import asyncio
from langgraph.graph import StateGraph


from phoenix.otel import register
from openinference.instrumentation.langchain import (
    LangChainInstrumentor,
    get_ancestor_spans,
    get_current_span,
)

tracer_provider = register()
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)


async def chatbot_node(state):
    assert get_current_span() is None  # bug: expected the active LangChain span
    assert len(get_ancestor_spans()) == 0  # bug: expected non-empty ancestors
    print(f"{get_ancestor_spans()=}")
    return {"response": f"Echo: {state['input']}"}


graph = StateGraph(dict)
graph.add_node("chatbot", chatbot_node)
graph.set_entry_point("chatbot")
graph = graph.compile()


async def run_graph():
    async for event in graph.astream_events({"input": "Hello"}, version="v2"):
        print("Event:", event)


asyncio.run(run_graph())

Context: https://arize-ai.slack.com/archives/C04R3GXC8HK/p1740059242858999

axiomofjoy added the bug and triage labels on Feb 21, 2025
dosubot (bot) commented Feb 21, 2025

The issue you're encountering with get_current_span and get_ancestor_spans returning None or empty lists in the context of asynchronous event streaming with astream_events might be related to how context propagation is handled in asynchronous operations.

In the OpenInference framework, these methods are designed to retrieve the current and ancestor spans during the execution of a function. However, in asynchronous contexts, especially when using event streams, the context might not be properly propagated, leading to the observed behavior.

Here are a few things you might consider checking or trying:

  1. Ensure Proper Context Propagation: Make sure that the context is correctly propagated across asynchronous boundaries. This might involve using context managers or ensuring that the OpenTelemetry context is properly maintained across async calls.

  2. Instrumentation Setup: Verify that the LangChainInstrumentor is correctly set up and that the tracer provider is properly configured. Ensure that the instrumentation is applied before any spans are created or retrieved.

  3. Test with Synchronous Code: As a diagnostic step, try running a similar setup in a synchronous context to see if the spans are correctly retrieved. This can help isolate whether the issue is specific to asynchronous execution.

  4. Review Test Cases: The OpenInference repository contains test cases that demonstrate the use of these methods in asynchronous contexts. Reviewing these tests might provide insights into any additional setup or configuration required [1].

If these steps do not resolve the issue, it might be necessary to delve deeper into the specifics of how context is managed in your particular setup or consult additional documentation on context propagation in OpenTelemetry.
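The context-propagation point above can be illustrated with a stdlib-only sketch (no OpenTelemetry or LangChain required; all names here are illustrative, not from either library). OpenTelemetry stores the active span in a contextvars.ContextVar, and each asyncio task takes a snapshot of the context at creation time, so a task created before the span was set never sees it:

```python
import asyncio
import contextvars

# Stand-in for OpenTelemetry's context slot holding the active span.
current_span = contextvars.ContextVar("current_span", default=None)


async def node():
    # Mirrors what a get_current_span()-style helper would read
    # inside a graph node.
    return current_span.get()


async def main():
    # Task created BEFORE the span is set: its context snapshot
    # still holds the default, so it sees None.
    early_task = asyncio.ensure_future(node())
    current_span.set("span-123")
    # Task created AFTER the span is set: its snapshot carries the value.
    late_task = asyncio.ensure_future(node())
    return await early_task, await late_task


early, late = asyncio.run(main())
print(early, late)  # None span-123
```

If astream_events schedules node callbacks on tasks whose context snapshot predates span creation, the helpers would see no active span, which matches the reported behavior.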


dosubot (bot) added the language: python label on Feb 21, 2025
BenMcH commented Feb 22, 2025

Adding this here in addition to the linked Slack thread. I believe a dependency issue is at play.

I was able to follow the official guides to instrument LangGraph (the LangChain instrumentation handles this) without trouble in an async context. Note: those examples use ainvoke instead of astream_events. Both methods work, but ainvoke returns the final state directly, which makes it easy to return as an API response; that state also appears as output in Phoenix. For testing, this can be converted to astream_events by changing the FastAPI handler to this:

@app.get("/")
async def root():
    async for e in graph.astream_events({"span": "unknown-phoenix"}, version="v2"):
        ...

    return {}

The span ID will then need to be looked up in the locally running Phoenix instance.
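To make the ainvoke-versus-astream_events trade-off concrete, here is a hypothetical, self-contained sketch with a stub standing in for the compiled graph (StubGraph and its event shapes are assumptions for illustration, not taken from the linked repo): ainvoke resolves to the final state directly, while astream_events requires draining the stream and extracting the state from the last event.

```python
import asyncio


class StubGraph:
    """Stand-in for a compiled LangGraph graph (hypothetical)."""

    async def ainvoke(self, state):
        # ainvoke resolves directly to the final state.
        return {**state, "response": f"Echo: {state['input']}"}

    async def astream_events(self, state, version="v2"):
        # astream_events yields intermediate events; the caller must
        # pick the final state out of the stream.
        yield {"event": "on_chain_start", "data": dict(state)}
        yield {
            "event": "on_chain_end",
            "data": {**state, "response": f"Echo: {state['input']}"},
        }


async def main():
    graph = StubGraph()
    final = await graph.ainvoke({"input": "Hello"})
    last = None
    async for event in graph.astream_events({"input": "Hello"}, version="v2"):
        last = event
    return final, last["data"]


final_state, streamed_state = asyncio.run(main())
print(final_state == streamed_state)  # True
```

This is why ainvoke is the more convenient shape for an API response, even though both code paths exercise the same graph.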


Repro repo: https://github.com/BenMcH/phoenix-tracing-fastapi-async
