TerboucheHacene/deep-agent

LangGraph-powered AI assistant with research sub-agents, file tools, and Open WebUI integration
Deep Agent

License: MIT · Python 3.11+ · LangGraph · FastAPI · Docker

Overview

Deep Agent is an autonomous AI assistant built with LangGraph. It excels at multi-step research tasks, breaking down complex questions into manageable sub-tasks and delegating them to specialized sub-agents. While it uses Claude Sonnet by default, it can be configured to work with any LLM supported by LangChain.

Key capabilities:

  • Autonomous task decomposition and execution
  • Research delegation with web search integration
  • Todo management for tracking progress on complex tasks
  • File system operations for reading and writing content

[Screenshot: Open WebUI Interface]

Features & Architecture

Features

  • 🔍 Research Sub-Agent — Delegates research tasks to a specialized agent powered by Tavily search
  • 📝 Todo Management — Creates and tracks task lists for multi-step operations
  • 📁 File System Tools — Read, write, and list files in the workspace
  • 📊 Langfuse Observability — Full tracing and monitoring of agent execution
  • 🤖 Think Tool — Structured reasoning for complex problem-solving

Architecture

┌─────────────────────────────────────────────────────────┐
│                    Main Agent (LLM)                     │
│                                                         │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────┐  │
│  │ Todo Tools  │  │ File Tools  │  │ Research Agent  │  │
│  │             │  │             │  │                 │  │
│  │ • read      │  │ • ls        │  │ ┌─────────────┐ │  │
│  │ • write     │  │ • read_file │  │ │tavily_search│ │  │
│  │             │  │ • write_file│  │ │ think_tool  │ │  │
│  └─────────────┘  └─────────────┘  │ └─────────────┘ │  │
│                                    └─────────────────┘  │
└─────────────────────────────────────────────────────────┘
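The delegation pattern in the diagram can be sketched as a plain-Python tool dispatch. This is an illustrative sketch only, independent of LangGraph (which manages this loop for real); tool names mirror the diagram, and all function bodies are stand-ins:

```python
# Hypothetical sketch of the main-agent / sub-agent tool dispatch above.

def tavily_search(query: str) -> str:
    # Stand-in for the Tavily web-search tool.
    return f"search results for: {query}"

def think_tool(reflection: str) -> str:
    # Stand-in for structured reasoning between search calls.
    return f"noted: {reflection}"

# The research sub-agent owns its own tool set.
RESEARCH_TOOLS = {"tavily_search": tavily_search, "think_tool": think_tool}

def research_agent(task: str) -> str:
    # The sub-agent runs its own loop against RESEARCH_TOOLS.
    evidence = RESEARCH_TOOLS["tavily_search"](task)
    RESEARCH_TOOLS["think_tool"]("assess coverage of " + task)
    return evidence

# The main agent sees file tools, todo tools, and the sub-agent as one tool.
MAIN_TOOLS = {
    "ls": lambda: ["notes.md"],
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda path, text: f"wrote {len(text)} chars to {path}",
    "research": research_agent,
}

def main_agent(question: str) -> str:
    # Delegates research instead of searching directly.
    return MAIN_TOOLS["research"](question)

print(main_agent("history of agent frameworks"))
```

The point of the shape is that the main agent never calls `tavily_search` itself; it hands the whole research task to the sub-agent, which runs its own tool loop.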

Tech Stack

Component — Purpose

  • LangGraph — Agent orchestration framework; manages state, tool calls, and sub-agent delegation
  • Claude Sonnet — Default LLM; can be replaced with any LangChain-supported model (OpenAI, Gemini, etc.)
  • FastAPI — API server; exposes the LangGraph agent via REST endpoints with streaming support
  • Open WebUI — Chat interface; connects to the agent via a custom Pipe for a native UI experience
  • Langfuse — Observability platform; traces agent execution, tool calls, and sub-agent activity
  • Tavily — Search API; provides web search capabilities for the research sub-agent

Prerequisites & Installation

Prerequisites

  • Python 3.11 or higher
  • API keys:
    • Anthropic — For Claude model access
    • Tavily — For web search capabilities

Installation

  1. Clone the repository

    git clone https://github.com/TerboucheHacene/deep-agent.git
    cd deep-agent
  2. Create and activate a virtual environment

    uv venv
    source .venv/bin/activate
  3. Install dependencies

    uv sync
  4. Configure environment variables

    Create a .env file in the project root:

    ANTHROPIC_API_KEY=your_anthropic_key
    TAVILY_API_KEY=your_tavily_key
    
    # Langfuse (self-hosted via Docker Compose)
    LANGFUSE_SECRET_KEY=your_langfuse_secret_key
    LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
    LANGFUSE_BASE_URL=http://localhost:3000

Usage

LangGraph CLI (Development)

Run the agent with LangGraph Studio for interactive development:

langgraph dev

This opens LangGraph Studio where you can interact with the agent and visualize execution.

Docker Compose (Full Stack)

For a complete setup including Langfuse observability:

docker compose up -d

This starts:

  • The Deep Agent API
  • Langfuse (observability dashboard)
  • PostgreSQL, Redis, ClickHouse, and MinIO (Langfuse dependencies)

Access Langfuse at http://localhost:3000 to monitor agent traces.

[Screenshot: Langfuse Observability Dashboard]

Open WebUI Integration

To connect Deep Agent to Open WebUI, install the custom Pipe bundled with the repository:

  1. Open the Admin Panel in Open WebUI
  2. Navigate to Functions → Create new function
  3. Copy the contents of openwebui/deep_agent_pipe.py into the function editor
  4. Save and enable the function
  5. Configure the Valves (settings):
    • DEEP_AGENT_URL: URL of your Deep Agent API (default: http://deep-agent-api:8000)
    • SHOW_TOOL_CITATIONS: Show tool results as clickable citations
    • SHOW_SUBAGENT_STATUS: Show sub-agent activity in the status bar
    • REQUEST_TIMEOUT: Timeout for long-running tasks (default: 300s)
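The Valves above correspond roughly to a settings object with the documented defaults. A dataclass sketch (the real pipe in openwebui/deep_agent_pipe.py uses Open WebUI's own Valves mechanism, and the boolean defaults here are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Valves:
    # Defaults mirror the settings documented above; booleans are assumed.
    DEEP_AGENT_URL: str = "http://deep-agent-api:8000"
    SHOW_TOOL_CITATIONS: bool = True
    SHOW_SUBAGENT_STATUS: bool = True
    REQUEST_TIMEOUT: int = 300  # seconds, for long-running tasks

valves = Valves()
print(valves.DEEP_AGENT_URL, valves.REQUEST_TIMEOUT)
```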

The pipe routes typed events from the Deep Agent backend to Open WebUI's native UI components:

  • Status events → Status bar (thinking indicator)
  • Tool events → Citations (clickable tool results)
  • Token events → Chat bubble (streamed text)
  • Agent events → Sub-agent activity indicators
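The event-to-UI mapping above is essentially a dispatch on event type. A minimal sketch (the event dict shape and `route_event` helper are illustrative assumptions, not the backend's actual schema):

```python
def route_event(event: dict) -> dict:
    # Map a typed backend event to the Open WebUI surface that renders it.
    targets = {
        "status": "status_bar",          # thinking indicator
        "tool": "citation",              # clickable tool results
        "token": "chat_bubble",          # streamed text
        "agent": "subagent_indicator",   # sub-agent activity
    }
    kind = event.get("type")
    if kind not in targets:
        raise ValueError(f"unknown event type: {kind}")
    return {"target": targets[kind], "payload": event.get("data")}

print(route_event({"type": "token", "data": "Hel"}))
```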

License

This project is licensed under the MIT License — see the LICENSE file for details.
