
Add LangChain + LangSmith tracing example#3

Open
saif-at-scalekit wants to merge 2 commits into main from feat/langsmith-tracing-example

Conversation

@saif-at-scalekit (Contributor) commented May 12, 2026

Adds a tested example showing Scalekit AgentKit tools traced in LangSmith via the native LangChain adapter.

What this adds

  • python/frameworks/langchain/langsmith_tracing.py — working example that:
    • Connects a user to Gmail via Scalekit
    • Loads native LangChain StructuredTool objects
    • Runs a tool-calling agent loop via LiteLLM
    • Sends traces to LangSmith automatically
  • Updated .env.example with LANGCHAIN_* and LITELLM_* vars
  • Updated requirements.txt with langsmith
  • Updated README.md framework table
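
The tool-calling loop the example runs (invoke the model, execute any requested tool calls, feed results back, repeat until the model stops asking for tools) can be sketched without any framework dependencies. The stub model and tool below are hypothetical stand-ins for the LiteLLM-backed ChatOpenAI and the Scalekit-provided gmail_fetch_mails tool:

```python
def stub_model(messages):
    # Hypothetical model: requests one Gmail fetch, then answers once
    # a tool result is present in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"content": "", "tool_calls": [
            {"id": "call_1", "name": "gmail_fetch_mails", "args": {"max_results": 2}},
        ]}
    return {"content": "Found 2 emails.", "tool_calls": []}

# Stand-in for the tool_map of native LangChain StructuredTool objects.
tool_map = {"gmail_fetch_mails": lambda max_results=5: [f"mail-{i}" for i in range(max_results)]}

messages = [{"role": "user", "content": "Check my inbox"}]
tool_call_count = 0
while True:
    response = stub_model(messages)
    messages.append({"role": "assistant", "content": response["content"]})
    if not response["tool_calls"]:
        break  # no more tool requests: the model has produced its final answer
    for tc in response["tool_calls"]:
        tool_call_count += 1
        # In the real example this is tool_map[tc["name"]].invoke(tc["args"])
        result = tool_map[tc["name"]](**tc["args"])
        messages.append({"role": "tool", "tool_call_id": tc["id"], "content": str(result)})

print(f"{tool_call_count} tool call(s); final answer: {messages[-1]['content']}")
```

With LangSmith tracing enabled, each model invocation and tool execution in the real loop shows up as a run in the configured LangSmith project.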

Testing

Verified end-to-end: Gmail tools load (8 tools), agent invokes gmail_fetch_mails, returns real email data, traces appear in LangSmith project.

Context

This example will be referenced from a Scalekit integration page on the LangSmith docs site (langchain-ai/docs).

Summary by CodeRabbit

  • New Features

    • LangSmith tracing and LiteLLM proxy integration capabilities now available for enhanced observability and model management
  • Documentation

    • Added new framework integration example demonstrating agent execution with tracing
  • Chores

    • Updated dependencies and environment configuration templates with required variables

Review Change Stack

Adds a tested example showing Scalekit tools traced in LangSmith.
Verified end-to-end: Gmail tools load, agent runs, traces appear in
LangSmith project.

- python/frameworks/langchain/langsmith_tracing.py: working example
- .env.example: add LANGCHAIN_* and LITELLM_* vars
- python/requirements.txt: add langsmith
- README.md: add example to framework table
coderabbitai Bot commented May 12, 2026

Warning

Rate limit exceeded

@saif-at-scalekit has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 52 minutes and 39 seconds before requesting another review.

You’ve run out of usage credits. Purchase more in the billing tab.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2af72eb8-c954-4d90-997c-e090e172c1d8

📥 Commits

Reviewing files that changed from the base of the PR and between 8b69922 and 7f00f08.

⛔ Files ignored due to path filters (1)
  • python/frameworks/langchain/images/langsmith-trace.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • README.md
📝 Walkthrough

Walkthrough

This PR introduces a complete LangChain agent example with LangSmith tracing that uses Scalekit's native Gmail tools. It adds required environment variables, a Python dependency, documentation reference, and a runnable example script demonstrating the full agentic workflow from client initialization through tool invocation and completion.

Changes

LangChain + LangSmith Example

| Layer | File(s) | Summary |
|---|---|---|
| Environment and dependency configuration | .env.example, python/requirements.txt | Adds LANGCHAIN_TRACING_V2, LANGCHAIN_API_KEY, LANGCHAIN_PROJECT, and LiteLLM proxy variables to .env.example; adds langsmith dependency to requirements.txt. |
| Framework documentation reference | README.md | Adds a new row to the Python frameworks table linking to the LangChain + LangSmith example. |
| LangChain agent with Scalekit tools and LangSmith tracing | python/frameworks/langchain/langsmith_tracing.py | New script demonstrates loading LangSmith tracing config, initializing a Scalekit client, authorizing a Gmail connected account, fetching native LangChain tools, configuring a LiteLLM-backed model with bound tools, iteratively invoking the model and executing tool calls, and printing completion output with tracing metadata. |
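
For reference, a hypothetical .env fragment covering the variables named above (all values are placeholders; the .env.example in the branch is authoritative):

```shell
# LangSmith tracing (variable names from the PR; values are placeholders)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2_...
LANGCHAIN_PROJECT=scalekit-langsmith-example
# LiteLLM proxy
LITELLM_BASE_URL=http://localhost:4000
LITELLM_API_KEY=sk-...
# Scalekit client
SCALEKIT_CLIENT_ID=...
SCALEKIT_CLIENT_SECRET=...
SCALEKIT_ENVIRONMENT_URL=https://your-env.scalekit.com
```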

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • scalekit-developers/agent-auth-examples#2: Adds a foundational LangChain agent example and related environment configuration, while this PR builds on that with LangSmith tracing integration and extended tooling.

Poem

🐇 A chain of thought, traced with care,
LangSmith sees what happens where,
Gmail tools dance, agentic and free,
Scalekit connects them—pure harmony! 🎉

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title 'Add LangChain + LangSmith tracing example' directly and accurately describes the main change: adding a new example demonstrating LangChain with LangSmith tracing integration. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/langsmith-tracing-example

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@python/frameworks/langchain/langsmith_tracing.py`:
- Around line 99-103: The loop over response.tool_calls calls
tool_map[tc["name"]].invoke(tc["args"]) without protection; wrap that invoke in
a try/except (catch broad exceptions like Exception) inside the for tc in
response.tool_calls loop so a failing tool does not crash the whole run, log or
record the exception via your logger/print with context including tc["name"] and
tc["id"], and append a ToolMessage indicating the failure (e.g.,
ToolMessage(content=str(error) or a structured error marker,
tool_call_id=tc["id"])) so downstream code sees a failure result rather than
raising; ensure you still increment tool_call_count and continue to the next
tool call.
- Around line 45-49: Before constructing scalekit_client with ScalekitClient,
validate that environment variables SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET,
and SCALEKIT_ENVIRONMENT_URL are present; check os.getenv(...) for each (or use
os.environ) and raise a clear exception or log an error if any are missing so
the code does not call ScalekitClient with None values. Update the instantiation
site where scalekit_client = scalekit.client.ScalekitClient(...) to perform this
pre-check and include the variable names in the error message to aid debugging.
- Around line 82-86: The ChatOpenAI instantiation uses incorrect parameter names
(openai_api_base, openai_api_key); update the constructor call for llm (the
ChatOpenAI(...) expression that is then .bind_tools(tools)) to pass
base_url=os.getenv("LITELLM_BASE_URL") and api_key=os.getenv("LITELLM_API_KEY")
while keeping model and the .bind_tools(tools) chain intact so the correct
parameters expected by ChatOpenAI are used.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c51848ce-56e0-4361-8579-06f89c438305

📥 Commits

Reviewing files that changed from the base of the PR and between f1723f3 and 8b69922.

📒 Files selected for processing (4)
  • .env.example
  • README.md
  • python/frameworks/langchain/langsmith_tracing.py
  • python/requirements.txt

Comment on lines +45 to +49
    scalekit_client = scalekit.client.ScalekitClient(
        client_id=os.getenv("SCALEKIT_CLIENT_ID"),
        client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
        env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
    )

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Validate required environment variables before use.

The script doesn't check that SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET, and SCALEKIT_ENVIRONMENT_URL are set before passing them to the SDK. If any are missing, the error from the SDK might be less clear than a proactive check.

✅ Suggested validation
 # ── Initialize Scalekit client ──────────────────────────────────────────────
 
 import scalekit.client
 
+required_vars = ["SCALEKIT_CLIENT_ID", "SCALEKIT_CLIENT_SECRET", "SCALEKIT_ENVIRONMENT_URL"]
+missing = [v for v in required_vars if not os.getenv(v)]
+if missing:
+    print(f"❌ Missing required environment variables: {', '.join(missing)}")
+    print("   Set these in your .env file. See .env.example for reference.")
+    exit(1)
+
 scalekit_client = scalekit.client.ScalekitClient(
     client_id=os.getenv("SCALEKIT_CLIENT_ID"),
     client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
     env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
 )
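
The check suggested above can be exercised on its own; a minimal, dependency-free version (variable names from the PR, demo values hypothetical):

```python
import os

# Variable names from the review suggestion above.
required_vars = ["SCALEKIT_CLIENT_ID", "SCALEKIT_CLIENT_SECRET", "SCALEKIT_ENVIRONMENT_URL"]

def missing_env_vars(names):
    """Return the subset of `names` that is unset or empty in the environment."""
    return [v for v in names if not os.getenv(v)]

# Simulate a partially configured environment for the demo.
os.environ["SCALEKIT_CLIENT_ID"] = "demo-id"
os.environ.pop("SCALEKIT_CLIENT_SECRET", None)
os.environ.pop("SCALEKIT_ENVIRONMENT_URL", None)

missing = missing_env_vars(required_vars)
print(missing)  # → ['SCALEKIT_CLIENT_SECRET', 'SCALEKIT_ENVIRONMENT_URL']
```

Failing fast with the exact variable names gives a clearer error than letting the SDK receive None values.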
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 45 - 49,
Before constructing scalekit_client with ScalekitClient, validate that
environment variables SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET, and
SCALEKIT_ENVIRONMENT_URL are present; check os.getenv(...) for each (or use
os.environ) and raise a clear exception or log an error if any are missing so
the code does not call ScalekitClient with None values. Update the instantiation
site where scalekit_client = scalekit.client.ScalekitClient(...) to perform this
pre-check and include the variable names in the error message to aid debugging.

Comment on lines +82 to +86
    llm = ChatOpenAI(
        model=model,
        openai_api_base=os.getenv("LITELLM_BASE_URL"),
        openai_api_key=os.getenv("LITELLM_API_KEY"),
    ).bind_tools(tools)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

What are the correct parameter names for ChatOpenAI from langchain-openai to specify a custom base URL and API key?

💡 Result:

For langchain-openai’s ChatOpenAI, the correct constructor parameter names are:

  • api_key: pass your API key directly with api_key="..." (or omit to read OPENAI_API_KEY from the environment) [1].
  • base_url: pass your custom base URL with base_url="..." [1].

Example:

    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        model="gpt-5-nano",
        base_url="https://your-endpoint.example.com/v1",
        api_key="your-api-key",
    )

These parameters are documented in the ChatOpenAI reference table (api_key and base_url) and in the integration docs (including use of base_url for Azure OpenAI by appending /openai/v1/). [1][2]


Use correct parameter names for ChatOpenAI: base_url and api_key instead of openai_api_base and openai_api_key.

According to langchain-openai's official documentation, the constructor parameters are base_url and api_key. Update lines 82-86:

Replace with:
llm = ChatOpenAI(
    model=model,
    base_url=os.getenv("LITELLM_BASE_URL"),
    api_key=os.getenv("LITELLM_API_KEY"),
).bind_tools(tools)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 82 - 86, The
ChatOpenAI instantiation uses incorrect parameter names (openai_api_base,
openai_api_key); update the constructor call for llm (the ChatOpenAI(...)
expression that is then .bind_tools(tools)) to pass
base_url=os.getenv("LITELLM_BASE_URL") and api_key=os.getenv("LITELLM_API_KEY")
while keeping model and the .bind_tools(tools) chain intact so the correct
parameters expected by ChatOpenAI are used.

Comment on lines +99 to +103
    for tc in response.tool_calls:
        tool_call_count += 1
        print(f"  🔧 Tool call #{tool_call_count}: {tc['name']}")
        result = tool_map[tc["name"]].invoke(tc["args"])
        messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add error handling for tool execution.

Tool invocation at line 102 has no error handling. If a tool fails due to network issues, API errors, rate limits, or invalid arguments, the script will crash mid-execution rather than gracefully handling the error.

🛡️ Suggested error handling
     for tc in response.tool_calls:
         tool_call_count += 1
         print(f"  🔧 Tool call #{tool_call_count}: {tc['name']}")
-        result = tool_map[tc["name"]].invoke(tc["args"])
-        messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+        try:
+            result = tool_map[tc["name"]].invoke(tc["args"])
+            messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+        except Exception as e:
+            error_msg = f"Error executing {tc['name']}: {str(e)}"
+            print(f"  ⚠️  {error_msg}")
+            messages.append(ToolMessage(content=error_msg, tool_call_id=tc["id"]))
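
The same pattern, isolated as a dependency-free sketch (plain callables stand in for StructuredTool.invoke; tool names hypothetical):

```python
def run_tool_calls(tool_calls, tool_map):
    """Dispatch each requested tool call; a failing tool yields an error
    result instead of aborting the whole loop (per the suggestion above)."""
    results = []
    for tc in tool_calls:
        try:
            output = tool_map[tc["name"]](**tc["args"])
            results.append({"tool_call_id": tc["id"], "content": str(output)})
        except Exception as e:  # broad catch: one bad tool must not crash the run
            results.append({"tool_call_id": tc["id"],
                            "content": f"Error executing {tc['name']}: {e}"})
    return results

tools = {"gmail_fetch_mails": lambda max_results=5: [f"mail-{i}" for i in range(max_results)]}
calls = [
    {"id": "call_1", "name": "gmail_fetch_mails", "args": {"max_results": 2}},
    {"id": "call_2", "name": "not_a_tool", "args": {}},  # triggers the except branch
]
results = run_tool_calls(calls, tools)
print(results[1]["content"])
```

Returning the error as a ToolMessage-style result lets the model see the failure on the next turn and recover, rather than the script dying mid-run.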
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 99 - 103, The
loop over response.tool_calls calls tool_map[tc["name"]].invoke(tc["args"])
without protection; wrap that invoke in a try/except (catch broad exceptions
like Exception) inside the for tc in response.tool_calls loop so a failing tool
does not crash the whole run, log or record the exception via your logger/print
with context including tc["name"] and tc["id"], and append a ToolMessage
indicating the failure (e.g., ToolMessage(content=str(error) or a structured
error marker, tool_call_id=tc["id"])) so downstream code sees a failure result
rather than raising; ensure you still increment tool_call_count and continue to
the next tool call.
