Add LangChain + LangSmith tracing example #3

Adds a tested example showing Scalekit tools traced in LangSmith. Verified end-to-end: Gmail tools load, the agent runs, and traces appear in the LangSmith project.

- `python/frameworks/langchain/langsmith_tracing.py`: working example
- `.env.example`: add `LANGCHAIN_*` and `LITELLM_*` vars
- `python/requirements.txt`: add `langsmith`
- `README.md`: add the example to the framework table
📝 Walkthrough

This PR introduces a complete LangChain agent example with LangSmith tracing that uses Scalekit's native Gmail tools. It adds the required environment variables, a Python dependency, a documentation reference, and a runnable example script demonstrating the full agentic workflow from client initialization through tool invocation and completion.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@python/frameworks/langchain/langsmith_tracing.py`:
- Around line 99-103: The loop over response.tool_calls calls
tool_map[tc["name"]].invoke(tc["args"]) without protection; wrap that invoke in
a try/except (catch broad exceptions like Exception) inside the for tc in
response.tool_calls loop so a failing tool does not crash the whole run, log or
record the exception via your logger/print with context including tc["name"] and
tc["id"], and append a ToolMessage indicating the failure (e.g.,
ToolMessage(content=str(error) or a structured error marker,
tool_call_id=tc["id"])) so downstream code sees a failure result rather than
raising; ensure you still increment tool_call_count and continue to the next
tool call.
- Around line 45-49: Before constructing scalekit_client with ScalekitClient,
validate that environment variables SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET,
and SCALEKIT_ENVIRONMENT_URL are present; check os.getenv(...) for each (or use
os.environ) and raise a clear exception or log an error if any are missing so
the code does not call ScalekitClient with None values. Update the instantiation
site where scalekit_client = scalekit.client.ScalekitClient(...) to perform this
pre-check and include the variable names in the error message to aid debugging.
- Around line 82-86: The ChatOpenAI instantiation uses incorrect parameter names
(openai_api_base, openai_api_key); update the constructor call for llm (the
ChatOpenAI(...) expression that is then .bind_tools(tools)) to pass
base_url=os.getenv("LITELLM_BASE_URL") and api_key=os.getenv("LITELLM_API_KEY")
while keeping model and the .bind_tools(tools) chain intact so the correct
parameters expected by ChatOpenAI are used.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c51848ce-56e0-4361-8579-06f89c438305
📒 Files selected for processing (4)

- .env.example
- README.md
- python/frameworks/langchain/langsmith_tracing.py
- python/requirements.txt
```python
scalekit_client = scalekit.client.ScalekitClient(
    client_id=os.getenv("SCALEKIT_CLIENT_ID"),
    client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
    env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
)
```
Validate required environment variables before use.
The script doesn't check that SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET, and SCALEKIT_ENVIRONMENT_URL are set before passing them to the SDK. If any are missing, the error from the SDK might be less clear than a proactive check.
✅ Suggested validation

```diff
 # ── Initialize Scalekit client ──────────────────────────────────────────────
 import scalekit.client
+required_vars = ["SCALEKIT_CLIENT_ID", "SCALEKIT_CLIENT_SECRET", "SCALEKIT_ENVIRONMENT_URL"]
+missing = [v for v in required_vars if not os.getenv(v)]
+if missing:
+    print(f"❌ Missing required environment variables: {', '.join(missing)}")
+    print("   Set these in your .env file. See .env.example for reference.")
+    exit(1)
+
 scalekit_client = scalekit.client.ScalekitClient(
     client_id=os.getenv("SCALEKIT_CLIENT_ID"),
     client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
     env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
 )
```
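The same fail-fast check can also be exercised standalone. A minimal sketch (the `require_env` helper and the stand-in values are invented for illustration; only the variable names come from this PR):

```python
import os

def require_env(names):
    """Return the requested env vars, failing loudly if any are missing."""
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise EnvironmentError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return {n: os.environ[n] for n in names}

# Stand-in values so the sketch runs without a real .env file.
os.environ.setdefault("SCALEKIT_CLIENT_ID", "demo-id")
os.environ.setdefault("SCALEKIT_CLIENT_SECRET", "demo-secret")
os.environ.setdefault("SCALEKIT_ENVIRONMENT_URL", "https://example.scalekit.dev")

cfg = require_env([
    "SCALEKIT_CLIENT_ID",
    "SCALEKIT_CLIENT_SECRET",
    "SCALEKIT_ENVIRONMENT_URL",
])
```

Raising (or exiting) before the client is constructed keeps the error message tied to the variable names rather than to whatever the SDK does with `None` values.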
```python
llm = ChatOpenAI(
    model=model,
    openai_api_base=os.getenv("LITELLM_BASE_URL"),
    openai_api_key=os.getenv("LITELLM_API_KEY"),
).bind_tools(tools)
```
🧩 Analysis chain

🌐 Web query: What are the correct parameter names for ChatOpenAI from langchain-openai to specify a custom base URL and API key?

💡 Result: For langchain-openai's ChatOpenAI, the correct constructor parameter names are:

- `api_key`: pass your API key directly with `api_key="..."` (or omit it to read `OPENAI_API_KEY` from the environment) [1].
- `base_url`: pass your custom base URL with `base_url="..."` [1].

Example:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5-nano",
    base_url="https://your-endpoint.example.com/v1",
    api_key="your-api-key",
)
```

These parameters are documented in the ChatOpenAI reference table (`api_key` and `base_url`) and in the integration docs (including use of `base_url` for Azure OpenAI by appending `/openai/v1/`). [1][2]

Citations:
- 1: https://reference.langchain.com/python/langchain-openai/chat_models/base/ChatOpenAI
- 2: https://docs.langchain.com/oss/python/integrations/chat/openai
Use the correct parameter names for ChatOpenAI: `base_url` and `api_key` instead of `openai_api_base` and `openai_api_key`.

According to langchain-openai's official documentation, the constructor parameters are `base_url` and `api_key`. Update lines 82-86. Replace with:

```python
llm = ChatOpenAI(
    model=model,
    base_url=os.getenv("LITELLM_BASE_URL"),
    api_key=os.getenv("LITELLM_API_KEY"),
).bind_tools(tools)
```
```python
for tc in response.tool_calls:
    tool_call_count += 1
    print(f"  🔧 Tool call #{tool_call_count}: {tc['name']}")
    result = tool_map[tc["name"]].invoke(tc["args"])
    messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
```
Add error handling for tool execution.
Tool invocation at line 102 has no error handling. If a tool fails due to network issues, API errors, rate limits, or invalid arguments, the script will crash mid-execution rather than gracefully handling the error.
🛡️ Suggested error handling
for tc in response.tool_calls:
tool_call_count += 1
print(f" 🔧 Tool call #{tool_call_count}: {tc['name']}")
- result = tool_map[tc["name"]].invoke(tc["args"])
- messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+ try:
+ result = tool_map[tc["name"]].invoke(tc["args"])
+ messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+ except Exception as e:
+ error_msg = f"Error executing {tc['name']}: {str(e)}"
+ print(f" ⚠️ {error_msg}")
+ messages.append(ToolMessage(content=error_msg, tool_call_id=tc["id"]))🤖 Prompt for AI Agents
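As a standalone illustration of the same pattern, here is a sketch with plain callables standing in for LangChain tools (the `echo`/`flaky` tools and the `run_tool_calls` helper are invented for this sketch; only the `tool_map`/tool-call shape mirrors the example script):

```python
def run_tool_calls(tool_calls, tool_map):
    """Dispatch each tool call; record failures instead of raising."""
    results = []
    for tc in tool_calls:
        try:
            out = tool_map[tc["name"]](**tc["args"])
            content = str(out)
        except Exception as e:
            # A failing tool becomes an error result, not a crashed run.
            content = f"Error executing {tc['name']}: {e}"
        results.append({"tool_call_id": tc["id"], "content": content})
    return results

def flaky_tool():
    raise RuntimeError("simulated API failure")

tool_map = {
    "echo": lambda text: text.upper(),
    "flaky": flaky_tool,
}
calls = [
    {"id": "1", "name": "echo", "args": {"text": "hi"}},
    {"id": "2", "name": "flaky", "args": {}},
]
results = run_tool_calls(calls, tool_map)
# Both calls produce a result; the second carries the error message.
```

Returning an error payload per tool call also means the model sees the failure on the next turn and can retry or route around it.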
Adds a tested example showing Scalekit AgentKit tools traced in LangSmith via the native LangChain adapter.

What this adds

- `python/frameworks/langchain/langsmith_tracing.py` — working example that loads the Scalekit tools as `StructuredTool` objects
- `.env.example` with `LANGCHAIN_*` and `LITELLM_*` vars
- `requirements.txt` with `langsmith`
- `README.md` framework table

Testing

Verified end-to-end: Gmail tools load (8 tools), the agent invokes `gmail_fetch_mails`, returns real email data, and traces appear in the LangSmith project.

Context

This example will be referenced from a Scalekit integration page on the LangSmith docs site (langchain-ai/docs).
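For reference, the environment variables above might look like this in a `.env` file. The values and the project name are placeholders; the `LITELLM_*` names come from this PR, and the `LANGCHAIN_*` names follow LangSmith's standard tracing variables:

```shell
# LangSmith tracing
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-langsmith-api-key
LANGCHAIN_PROJECT=scalekit-langchain-example

# OpenAI-compatible endpoint (LiteLLM proxy)
LITELLM_BASE_URL=http://localhost:4000
LITELLM_API_KEY=your-litellm-key
```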