A simple ReAct agent template built with LangGraph that demonstrates how to create conversational AI agents with tool usage capabilities.
This template requires a language model to function. You have several options:

**Option 1: Ollama (local model)**

1. Install Ollama:

   ```bash
   # macOS
   brew install ollama

   # Linux
   curl -fsSL https://ollama.ai/install.sh | sh

   # Windows
   # Download from https://ollama.ai/download
   ```

2. Download a model (choose one based on your hardware):

   ```bash
   ollama pull llama3.2:3b   # Lightweight, good for testing
   ollama pull llama3.2:1b   # Very lightweight
   ollama pull qwen2.5:7b    # Good balance of performance/size
   ollama pull mistral:7b    # High quality
   ```

3. Update the model in `utils/model.py`:

   ```python
   from langchain_ollama import ChatOllama

   model = ChatOllama(
       model="gpt-oss:20b",  # Use a model that supports tool calling
       temperature=0,
   )
   ```
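Before wiring the agent to Ollama, it can help to confirm the local server is actually reachable (Ollama listens on `http://localhost:11434` by default). A minimal stdlib sketch, assuming only that default port:

```python
import urllib.request
import urllib.error


def ollama_is_running(host: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers on the given host."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            # Ollama's root endpoint replies "Ollama is running" with HTTP 200
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("Ollama reachable:", ollama_is_running())
```

If this prints `False`, start the server with `ollama serve` before running the agent.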
**Option 2: OpenAI**

1. Get an OpenAI API key from the [OpenAI Platform](https://platform.openai.com/).

2. Set the environment variable:

   ```bash
   export OPENAI_API_KEY="your-api-key-here"
   ```

3. Update `utils/model.py`:

   ```python
   from langchain_openai import ChatOpenAI

   model = ChatOpenAI(
       model="gpt-3.5-turbo",
       temperature=0,
   )
   ```
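After exporting the key, you can sanity-check that it is visible to Python before starting the agent (`OPENAI_API_KEY` is the variable `langchain_openai` reads by default); a small stdlib helper:

```python
import os


def check_api_key(name: str = "OPENAI_API_KEY") -> bool:
    """Return True if the environment variable is set and non-empty."""
    value = os.environ.get(name, "")
    if not value:
        print(f"{name} is not set; the OpenAI client will fail to authenticate.")
        return False
    # Avoid printing the secret itself; show only a masked prefix.
    print(f"{name} is set ({value[:7]}...)")
    return True


if __name__ == "__main__":
    check_api_key()
```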
For other providers (Anthropic, Google, etc.), update utils/model.py with the appropriate LangChain integration and set the required API keys.
LangGraph provides a development server that automatically reloads your graph when you make changes:

1. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

2. Start the development server:

   ```bash
   langgraph dev
   ```

3. Access LangGraph Studio:

   - Open your browser to `http://localhost:8123`
   - This provides a visual interface to test and debug your agent
   - You can see the graph execution flow, inspect state, and test different inputs
- **Hot Reload**: Automatically reloads when you change your code
- **Visual Debugging**: See the execution flow of your agent
- **State Inspection**: View the state at each step
- **Interactive Testing**: Test your agent with different inputs
- **Graph Visualization**: Visual representation of your agent's workflow
The `langgraph.json` file configures your graph:

```json
{
  "dependencies": ["./agent.py"],
  "graphs": {
    "react_agent_template": "./agent.py:workflow"
  },
  "env": "./.env"
}
```

LangServe is the official deployment solution for LangChain applications:
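Each entry under `graphs` follows the format `"name": "path/to/file.py:variable"`, where the variable holds the compiled graph. A quick stdlib sketch that parses the config above and splits a graph target into its file path and exported object:

```python
import json

# The langgraph.json shown above, inlined for illustration.
config_text = """
{
  "dependencies": ["./agent.py"],
  "graphs": {
    "react_agent_template": "./agent.py:workflow"
  },
  "env": "./.env"
}
"""

config = json.loads(config_text)

# Each graph entry is "<python file>:<variable holding the compiled graph>"
for name, target in config["graphs"].items():
    path, _, variable = target.rpartition(":")
    print(f"graph {name!r}: file={path!r}, object={variable!r}")
```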
1. Install LangServe:

   ```bash
   pip install langserve
   ```

2. Create a deployment script (`deploy.py`):

   ```python
   from fastapi import FastAPI
   from langserve import add_routes

   from agent import workflow

   app = FastAPI()

   # Add the LangGraph workflow as a route
   add_routes(app, workflow, path="/agent")

   if __name__ == "__main__":
       import uvicorn

       uvicorn.run(app, host="0.0.0.0", port=8000)
   ```
3. Run the deployment:

   ```bash
   python deploy.py
   ```

4. Access your deployed agent:

   - API endpoint: `http://localhost:8000/agent`
   - Interactive docs: `http://localhost:8000/docs`
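LangServe-served runnables typically expose an `/invoke` route that takes a JSON body with the runnable's input under an `input` key, so with `path="/agent"` above the endpoint would be `/agent/invoke`. A hedged stdlib sketch of building such a request (the exact input schema depends on your graph's state; a messages list is assumed here):

```python
import json
import urllib.request

# Payload shape assumed from LangServe's standard /invoke route:
# the runnable's input goes under an "input" key.
payload = {
    "input": {
        "messages": [
            {"role": "user", "content": "What is 2 + 3?"}
        ]
    }
}

request = urllib.request.Request(
    "http://localhost:8000/agent/invoke",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Only send the request if the server from deploy.py is running:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["output"])
```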
1. Create a `Dockerfile`:

   ```dockerfile
   FROM python:3.11-slim

   WORKDIR /app

   COPY requirements.txt .
   RUN pip install -r requirements.txt

   COPY . .

   CMD ["langgraph", "dev", "--host", "0.0.0.0", "--port", "8000"]
   ```

2. Build and run:

   ```bash
   docker build -t langgraph-agent .
   docker run -p 8000:8000 langgraph-agent
   ```
```bash
# Start the development server
langgraph dev
# Access LangGraph Studio at http://localhost:8123
```

```bash
# Using LangServe
python deploy.py

# Or using Docker
docker run -p 8000:8000 langgraph-agent
```

Project structure:

```
├── agent.py            # Main agent definition
├── langgraph.json      # LangGraph configuration
├── main.py             # Entry point for standalone usage
├── requirements.txt    # Python dependencies
├── utils/
│   ├── model.py        # Model configuration
│   ├── state.py        # State schema definition
│   └── tools.py        # Available tools for the agent
└── README.md           # This file
```
1. Define your tool in `utils/tools.py`:

   ```python
   from langchain_core.tools import tool


   @tool
   def your_custom_tool(input: str) -> str:
       """Description of what your tool does."""
       # Your tool logic here
       return result
   ```

2. Add it to the agent in `agent.py`:

   ```python
   from utils.tools import addition, your_custom_tool

   workflow = create_react_agent(
       # ... other parameters
       tools=[addition, your_custom_tool],
   )
   ```
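For context on what `@tool` does: the decorator wraps your function with a name and a description (taken from the docstring) so the model can decide when to call it. Without LangChain installed, the idea can be sketched in plain Python (`SimpleTool` here is a hypothetical stand-in, not the actual LangChain class):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class SimpleTool:
    """Minimal stand-in for a LangChain tool: a callable plus metadata."""
    name: str
    description: str
    func: Callable[[str], str]

    def invoke(self, input: str) -> str:
        return self.func(input)


def tool(func: Callable[[str], str]) -> SimpleTool:
    """Toy version of the @tool decorator: the docstring becomes the description."""
    return SimpleTool(name=func.__name__, description=func.__doc__ or "", func=func)


@tool
def shout(input: str) -> str:
    """Upper-case the input string."""
    return input.upper()


print(shout.name)          # shout
print(shout.invoke("hi"))  # HI
```

This is why the docstring matters in practice: it is the only thing the model sees when deciding whether your tool fits the task.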
Edit the `system_prompt` in `agent.py` to customize your agent's behavior:

```python
system_prompt = """Your custom system prompt here.

Define how your agent should behave, what it can do, and how it should respond.
"""
```

Common issues:

- **Model not found**: Ensure your model is downloaded with Ollama, or that your API key is set correctly
- **Import errors**: Make sure all dependencies are installed with `pip install -r requirements.txt`
- **Port conflicts**: Change the port in your configuration if 8000 or 8123 are already in use
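For the port-conflict case, a quick stdlib check tells you whether a port is already taken before you go hunting through the config:

```python
import socket


def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds, i.e. the port is taken
        return sock.connect_ex((host, port)) == 0


for port in (8000, 8123):
    status = "in use" if port_in_use(port) else "free"
    print(f"port {port}: {status}")
```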