11 changes: 11 additions & 0 deletions .env.example
@@ -6,6 +6,17 @@ SCALEKIT_CLIENT_ID=skc_your_client_id_here
SCALEKIT_CLIENT_SECRET=skcs_your_client_secret_here
SCALEKIT_ENVIRONMENT_URL=https://your-subdomain.scalekit.dev

# LangSmith tracing (for langsmith_tracing.py example).
# Get your API key from: https://smith.langchain.com/settings
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=lsv2_your_langsmith_api_key_here
LANGCHAIN_PROJECT=scalekit-langsmith-test

# LiteLLM proxy (used by LangChain and other framework examples).
LITELLM_BASE_URL=
LITELLM_API_KEY=
LITELLM_MODEL=claude-sonnet-4-6

# Optional model provider settings for framework examples.
OPENAI_BASE_URL=
OPENAI_API_KEY=
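The framework examples read these variables at startup. As a minimal sketch (the helper name `litellm_config` is hypothetical, not part of the repo), only `LITELLM_MODEL` carries a default, mirroring the tracing script below:

```python
import os

def litellm_config():
    # Hypothetical helper: gathers the LiteLLM settings from .env.example.
    # Only LITELLM_MODEL has a fallback; the base URL and API key must be set.
    return {
        "base_url": os.getenv("LITELLM_BASE_URL", ""),
        "api_key": os.getenv("LITELLM_API_KEY", ""),
        "model": os.getenv("LITELLM_MODEL", "claude-sonnet-4-6"),
    }
```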
1 change: 1 addition & 0 deletions README.md
@@ -26,6 +26,7 @@ These examples are the framework-oriented AgentKit samples migrated from the doc
|----------|------|
| Quickstart | [python/frameworks/quickstart/main.py](python/frameworks/quickstart/main.py) |
| LangChain | [python/frameworks/langchain/agent.py](python/frameworks/langchain/agent.py) |
| LangChain + LangSmith | [python/frameworks/langchain/langsmith_tracing.py](python/frameworks/langchain/langsmith_tracing.py) |
| Google ADK | [python/frameworks/google-adk/agent.py](python/frameworks/google-adk/agent.py) |
| Anthropic | [python/frameworks/anthropic/agent.py](python/frameworks/anthropic/agent.py) |
| OpenAI-compatible | [python/frameworks/openai/agent.py](python/frameworks/openai/agent.py) |
110 changes: 110 additions & 0 deletions python/frameworks/langchain/langsmith_tracing.py
@@ -0,0 +1,110 @@
"""
LangChain agent with Scalekit tools + LangSmith tracing.

Demonstrates that Scalekit's native LangChain StructuredTool objects trace
automatically in LangSmith when LANGCHAIN_TRACING_V2=true.

Run: python python/frameworks/langchain/langsmith_tracing.py

Required env vars (.env at repo root):
SCALEKIT_ENVIRONMENT_URL SCALEKIT_CLIENT_ID SCALEKIT_CLIENT_SECRET
LITELLM_BASE_URL LITELLM_API_KEY
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY (this is the LangSmith API key)

Optional:
LITELLM_MODEL (default: "claude-sonnet-4-6")
LANGCHAIN_PROJECT (default: "scalekit-langsmith-test")
"""

import os

from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())

# ── Verify LangSmith tracing is configured ──────────────────────────────────

tracing_enabled = os.getenv("LANGCHAIN_TRACING_V2", "").lower() == "true"
langsmith_key = os.getenv("LANGCHAIN_API_KEY", "")
project = os.getenv("LANGCHAIN_PROJECT", "scalekit-langsmith-test")

if not tracing_enabled:
    print("⚠️ LANGCHAIN_TRACING_V2 is not set to 'true'. Traces will NOT be sent to LangSmith.")
    print("   Set LANGCHAIN_TRACING_V2=true in your .env to enable tracing.")
if not langsmith_key:
    print("⚠️ LANGCHAIN_API_KEY is not set. Traces will NOT be sent to LangSmith.")
    print("   Get your API key from https://smith.langchain.com/settings")
else:
    print(f"✅ LangSmith tracing enabled — project: {project}")

# ── Initialize Scalekit client ──────────────────────────────────────────────

import scalekit.client

scalekit_client = scalekit.client.ScalekitClient(
    client_id=os.getenv("SCALEKIT_CLIENT_ID"),
    client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
    env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
)
Comment on lines +45 to +49

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Validate required environment variables before use.

The script doesn't check that SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET, and SCALEKIT_ENVIRONMENT_URL are set before passing them to the SDK. If any are missing, the error from the SDK might be less clear than a proactive check.

✅ Suggested validation
 # ── Initialize Scalekit client ──────────────────────────────────────────────
 
 import scalekit.client
 
+required_vars = ["SCALEKIT_CLIENT_ID", "SCALEKIT_CLIENT_SECRET", "SCALEKIT_ENVIRONMENT_URL"]
+missing = [v for v in required_vars if not os.getenv(v)]
+if missing:
+    print(f"❌ Missing required environment variables: {', '.join(missing)}")
+    print("   Set these in your .env file. See .env.example for reference.")
+    exit(1)
+
 scalekit_client = scalekit.client.ScalekitClient(
     client_id=os.getenv("SCALEKIT_CLIENT_ID"),
     client_secret=os.getenv("SCALEKIT_CLIENT_SECRET"),
     env_url=os.getenv("SCALEKIT_ENVIRONMENT_URL"),
 )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 45 - 49,
Before constructing scalekit_client with ScalekitClient, validate that
environment variables SCALEKIT_CLIENT_ID, SCALEKIT_CLIENT_SECRET, and
SCALEKIT_ENVIRONMENT_URL are present; check os.getenv(...) for each (or use
os.environ) and raise a clear exception or log an error if any are missing so
the code does not call ScalekitClient with None values. Update the instantiation
site where scalekit_client = scalekit.client.ScalekitClient(...) to perform this
pre-check and include the variable names in the error message to aid debugging.
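The validation pattern suggested above can also be written as a standalone helper (a sketch of the same idea, not the reviewer's exact code; the name `require_env` is hypothetical):

```python
import os

def require_env(names):
    # Return the value of each variable, failing loudly if any are unset.
    # Naming the missing variables in the error makes misconfiguration obvious.
    missing = [n for n in names if not os.getenv(n)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return [os.getenv(n) for n in names]
```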

actions = scalekit_client.actions

# ── Connect user to Gmail ───────────────────────────────────────────────────

IDENTIFIER = "user_123"

response = actions.get_or_create_connected_account(
    connection_name="gmail",
    identifier=IDENTIFIER,
)
if response.connected_account.status != "ACTIVE":
    link = actions.get_authorization_link(connection_name="gmail", identifier=IDENTIFIER)
    print("Authorize Gmail:", link.link)
    input("Press Enter after authorizing...")
else:
    print(f"✅ Gmail connected for {IDENTIFIER}")

# ── Get native LangChain tools ──────────────────────────────────────────────

tools = actions.langchain.get_tools(
    identifier=IDENTIFIER,
    connection_names=["gmail"],
)
tool_map = {t.name: t for t in tools}
print(f"✅ Loaded {len(tools)} LangChain tools: {[t.name for t in tools[:5]]}")

# ── Run agent with LangSmith tracing ────────────────────────────────────────

from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI

model = os.getenv("LITELLM_MODEL", "claude-sonnet-4-6")
llm = ChatOpenAI(
    model=model,
    openai_api_base=os.getenv("LITELLM_BASE_URL"),
    openai_api_key=os.getenv("LITELLM_API_KEY"),
).bind_tools(tools)
Comment on lines +82 to +86

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

What are the correct parameter names for ChatOpenAI from langchain-openai to specify a custom base URL and API key?

💡 Result:

For langchain-openai’s ChatOpenAI, the correct constructor parameter names are: - api_key: pass your API key directly with api_key="..." (or omit to read OPENAI_API_KEY from the environment) [1]. - base_url: pass your custom base URL with base_url="..." [1]. Example: from langchain_openai import ChatOpenAI llm = ChatOpenAI( model="gpt-5-nano", base_url="https://your-endpoint.example.com/v1", api_key="your-api-key",) These parameters are documented in the ChatOpenAI reference table (api_key and base_url) and in the integration docs (including use of base_url for Azure OpenAI by appending /openai/v1/). [1][2]


Use correct parameter names for ChatOpenAI: base_url and api_key instead of openai_api_base and openai_api_key.

According to langchain-openai's official documentation, the constructor parameters are base_url and api_key. Update lines 82-86:

Replace with:
llm = ChatOpenAI(
    model=model,
    base_url=os.getenv("LITELLM_BASE_URL"),
    api_key=os.getenv("LITELLM_API_KEY"),
).bind_tools(tools)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 82 - 86, The
ChatOpenAI instantiation uses incorrect parameter names (openai_api_base,
openai_api_key); update the constructor call for llm (the ChatOpenAI(...)
expression that is then .bind_tools(tools)) to pass
base_url=os.getenv("LITELLM_BASE_URL") and api_key=os.getenv("LITELLM_API_KEY")
while keeping model and the .bind_tools(tools) chain intact so the correct
parameters expected by ChatOpenAI are used.

print(f"✅ Using model: {model} via LiteLLM")
messages = [HumanMessage("Fetch my last 3 unread emails and summarize them")]

print("\n--- Running agent (traces sent to LangSmith) ---\n")

tool_call_count = 0
while True:
    response = llm.invoke(messages)
    messages.append(response)
    if not response.tool_calls:
        print(response.content)
        break
    for tc in response.tool_calls:
        tool_call_count += 1
        print(f"  🔧 Tool call #{tool_call_count}: {tc['name']}")
        result = tool_map[tc["name"]].invoke(tc["args"])
        messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
Comment on lines +99 to +103

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add error handling for tool execution.

Tool invocation at line 102 has no error handling. If a tool fails due to network issues, API errors, rate limits, or invalid arguments, the script will crash mid-execution rather than gracefully handling the error.

🛡️ Suggested error handling
     for tc in response.tool_calls:
         tool_call_count += 1
         print(f"  🔧 Tool call #{tool_call_count}: {tc['name']}")
-        result = tool_map[tc["name"]].invoke(tc["args"])
-        messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+        try:
+            result = tool_map[tc["name"]].invoke(tc["args"])
+            messages.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
+        except Exception as e:
+            error_msg = f"Error executing {tc['name']}: {str(e)}"
+            print(f"  ⚠️  {error_msg}")
+            messages.append(ToolMessage(content=error_msg, tool_call_id=tc["id"]))
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@python/frameworks/langchain/langsmith_tracing.py` around lines 99 - 103, The
loop over response.tool_calls calls tool_map[tc["name"]].invoke(tc["args"])
without protection; wrap that invoke in a try/except (catch broad exceptions
like Exception) inside the for tc in response.tool_calls loop so a failing tool
does not crash the whole run, log or record the exception via your logger/print
with context including tc["name"] and tc["id"], and append a ToolMessage
indicating the failure (e.g., ToolMessage(content=str(error) or a structured
error marker, tool_call_id=tc["id"])) so downstream code sees a failure result
rather than raising; ensure you still increment tool_call_count and continue to
the next tool call.


# ── Summary ─────────────────────────────────────────────────────────────────

print(f"\n✅ Agent completed — {tool_call_count} tool call(s)")
if tracing_enabled and langsmith_key:
    print(f"✅ Check traces at: https://smith.langchain.com/o/default/projects/p/{project}")
    print("   (Open LangSmith → select your project → view the latest trace)")
1 change: 1 addition & 0 deletions python/requirements.txt
@@ -4,6 +4,7 @@ requests
anthropic
langchain
langchain-openai
langsmith
google-adk
litellm
openai