πŸ¦‰ OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation



πŸš€ Introducing Eigent: The World's First Multi-Agent Workforce Desktop Application πŸš€

Eigent empowers you to build, manage, and deploy a custom AI workforce that can turn your most complex workflows into automated tasks.

✨ 100% Open Source β€’ πŸ”§ Fully Customizable β€’ πŸ”’ Privacy-First β€’ ⚑ Parallel Execution

Built on CAMEL-AI's acclaimed open-source project, Eigent introduces a Multi-Agent Workforce that boosts productivity through parallel execution, customization, and privacy protection.


πŸ† OWL achieves 69.09 average score on GAIA benchmark and ranks πŸ…οΈ #1 among open-source frameworks! πŸ†

πŸ¦‰ OWL is a cutting-edge framework for multi-agent collaboration that pushes the boundaries of task automation, built on top of the CAMEL-AI Framework.

Our vision is to revolutionize how AI agents collaborate to solve real-world tasks. By leveraging dynamic agent interactions, OWL enables more natural, efficient, and robust task automation across diverse domains.



πŸš€ Eigent: Multi-Agent Workforce Desktop Application

Eigent is revolutionizing the way we work with AI agents. As the world's first Multi-Agent Workforce desktop application, Eigent transforms complex workflows into automated, intelligent processes.

Why Eigent?

  • πŸ€– Multi-Agent Collaboration: Deploy multiple specialized AI agents that work together seamlessly
  • πŸš€ Parallel Execution: Boost productivity with agents that can work on multiple tasks simultaneously
  • 🎨 Full Customization: Build and configure your AI workforce to match your specific needs
  • πŸ”’ Privacy-First Design: Your data stays on your machine - no cloud dependencies required
  • πŸ’― 100% Open Source: Complete transparency and community-driven development

Key Capabilities

  • Build Custom Workflows: Design complex multi-step processes that agents can execute autonomously
  • Manage AI Teams: Orchestrate multiple agents with different specializations working in concert
  • Deploy Instantly: From idea to execution in minutes, not hours
  • Monitor Progress: Real-time visibility into agent activities and task completion

Use Cases

  • πŸ“Š Data Analysis: Automate complex data processing and analysis workflows
  • πŸ” Research: Deploy agents to gather, synthesize, and report on information
  • πŸ’» Development: Accelerate coding tasks with AI-powered development teams
  • πŸ“ Content Creation: Generate, edit, and optimize content at scale
  • 🀝 Business Automation: Transform repetitive business processes into automated workflows

Get Started with Eigent

Eigent is built on top of the OWL framework, leveraging CAMEL-AI's powerful multi-agent capabilities.

πŸ”— Visit the Eigent Repository to explore the codebase, contribute, or learn more about building your own AI workforce.

Follow our installation guide to start building your own AI workforce today!

πŸ”₯ News

🧩 NEW: COMMUNITY AGENT CHALLENGES! 🧩

Showcase your creativity by designing unique challenges for AI agents!
Join our community and see your innovative ideas tackled by cutting-edge AI.

View & Submit Challenges

πŸŽ‰ Latest Major Update - March 15, 2025

Significant Improvements:

  • Restructured web-based UI architecture for enhanced stability πŸ—οΈ
  • Optimized OWL Agent execution mechanisms for better performance πŸš€
Try it now and experience the improved performance in your automation tasks!

  • [2025.07.21]: We open-sourced the training dataset and model checkpoints of OWL project. Training code coming soon. huggingface link.
  • [2025.05.27]: We released the technical report of OWL, including more details on the workforce (framework) and optimized workforce learning (training methodology). paper.
  • [2025.05.18]: We open-sourced an initial version for replicating workforce experiment on GAIA here.
  • [2025.04.18]: We uploaded OWL's new GAIA benchmark score of 69.09%, ranking #1 among open-source frameworks. Check the technical report here.
  • [2025.03.27]: Integrated SearxNGToolkit for performing web searches using the SearxNG search engine.
  • [2025.03.26]: Enhanced Browser Toolkit with multi-browser support for "chrome", "msedge", and "chromium" channels.
  • [2025.03.25]: Added support for Gemini 2.5 Pro, with example run code.
  • [2025.03.21]: Integrated the OpenRouter model platform and fixed a bug with Gemini tool calling.
  • [2025.03.20]: Added Accept header support in the MCP Toolkit and automatic Playwright installation.
  • [2025.03.16]: Added support for Bing and Baidu search.
  • [2025.03.12]: Added Bocha search in SearchToolkit, integrated Volcano Engine model platform, and enhanced Azure and OpenAI Compatible models with structured output and tool calling.
  • [2025.03.11]: We added MCPToolkit, FileWriteToolkit, and TerminalToolkit to enhance OWL agents with MCP tool calling, file writing capabilities, and terminal command execution.
  • [2025.03.09]: We added a web-based user interface that makes it easier to interact with the system.
  • [2025.03.07]: We open-sourced the codebase of the πŸ¦‰ OWL project.
  • [2025.03.03]: OWL achieved the #1 position among open-source frameworks on the GAIA benchmark with a score of 58.18.

🎬 Demo Video


This video demonstrates how to install OWL locally and showcases its capabilities as a cutting-edge framework for multi-agent collaboration: https://www.youtube.com/watch?v=8XlqVyAZOr8

✨️ Core Features

  • Online Search: Support for multiple search engines (including Wikipedia, Google, DuckDuckGo, Baidu, Bocha, etc.) for real-time information retrieval and knowledge acquisition.
  • Multimodal Processing: Support for handling internet or local videos, images, and audio data.
  • Browser Automation: Utilize the Playwright framework for simulating browser interactions, including scrolling, clicking, input handling, downloading, navigation, and more.
  • Document Parsing: Extract content from Word, Excel, PDF, and PowerPoint files, converting them into text or Markdown format.
  • Code Execution: Write and execute Python code using an interpreter.
  • Built-in Toolkits: Access to a comprehensive set of built-in toolkits including:
    • Model Context Protocol (MCP): A universal protocol layer that standardizes AI model interactions with various tools and data sources
    • Core Toolkits: ArxivToolkit, AudioAnalysisToolkit, CodeExecutionToolkit, DalleToolkit, DataCommonsToolkit, ExcelToolkit, GitHubToolkit, GoogleMapsToolkit, GoogleScholarToolkit, ImageAnalysisToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, OpenAPIToolkit, RedditToolkit, SearchToolkit, SemanticScholarToolkit, SymPyToolkit, VideoAnalysisToolkit, WeatherToolkit, BrowserToolkit, and many more for specialized tasks

πŸ› οΈ Installation

Prerequisites

Install Python

Before installing OWL, ensure you have Python installed (version 3.10, 3.11, or 3.12 is supported):

Note for GAIA Benchmark Users: When running the GAIA benchmark evaluation, please use the gaia58.18 branch which includes a customized version of the CAMEL framework in the owl/camel directory. This version contains enhanced toolkits with improved stability specifically optimized for the GAIA benchmark compared to the standard CAMEL installation.

# Check if Python is installed
python --version

# If not installed, download and install from https://www.python.org/downloads/
# For macOS users with Homebrew:
brew install [email protected]

# For Ubuntu/Debian:
sudo apt update
sudo apt install python3.10 python3.10-venv python3-pip

Installation Options

OWL supports multiple installation methods to fit your workflow preferences.

Option 1: Using uv (Recommended)

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Install uv if you don't have it already
pip install uv

# Create a virtual environment and install dependencies
uv venv .venv --python=3.10

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# Install OWL and all of its dependencies
uv pip install -e .

Option 2: Using venv and pip

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Create a virtual environment
# For Python 3.10 (also works with 3.11, 3.12)
python3.10 -m venv .venv

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# Install from requirements.txt
pip install -r requirements.txt --use-pep517

Option 3: Using conda

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Create a conda environment
conda create -n owl python=3.10

# Activate the conda environment
conda activate owl

# Option 1: Install as a package (recommended)
pip install -e .

# Option 2: Install from requirements.txt
pip install -r requirements.txt --use-pep517

Option 4: Using Docker

Using Pre-built Image (Recommended)

# This option downloads a ready-to-use image from Docker Hub
# Fastest and recommended for most users
docker compose up -d

# Run OWL inside the container
docker compose exec owl bash
cd .. && source .venv/bin/activate
playwright install-deps
xvfb-python examples/run.py

Building Image Locally

# For users who need to customize the Docker image or cannot access Docker Hub:
# 1. Open docker-compose.yml
# 2. Comment out the "image: mugglejinx/owl:latest" line
# 3. Uncomment the "build:" section and its nested properties
# 4. Then run:
docker compose up -d --build

# Run OWL inside the container
docker compose exec owl bash
cd .. && source .venv/bin/activate
playwright install-deps
xvfb-python examples/run.py

Using Convenience Scripts

# Navigate to container directory
cd .container

# Make the script executable and build the Docker image
chmod +x build_docker.sh
./build_docker.sh

# Run OWL with your question
./run_in_docker.sh "your question"

Setup Environment Variables

OWL requires various API keys to interact with different services.

Setting Environment Variables Directly

You can set environment variables directly in your terminal:

  • macOS/Linux (Bash/Zsh):

    export OPENAI_API_KEY="your-openai-api-key-here"
    # Add other required API keys as needed
  • Windows (Command Prompt):

    set OPENAI_API_KEY=your-openai-api-key-here
  • Windows (PowerShell):

    $env:OPENAI_API_KEY = "your-openai-api-key-here"

Note: Environment variables set directly in the terminal will only persist for the current session.
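You can also scope a key to a single Python process by setting it before OWL runs. This is a minimal sketch of the same idea, not a substitute for proper key management:

import os

# Applies to this Python process only; nothing is persisted to disk.
os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here"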

Alternative: Using a .env File

If you prefer using a .env file instead, you can:

  1. Copy and Rename the Template:

    # For macOS/Linux
    cd owl
    cp .env_template .env
    
    # For Windows
    cd owl
    copy .env_template .env

    Alternatively, you can manually create a new file named .env in the owl directory and copy the contents from .env_template.

  2. Configure Your API Keys: Open the .env file in your preferred text editor and insert your API keys in the corresponding fields.

Note: For the minimal example (examples/run_mini.py), you only need to configure the LLM API key (e.g., OPENAI_API_KEY).
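The example scripts typically load this file for you. If you are writing a standalone script, a minimal sketch (assuming the python-dotenv package is installed, which is not an OWL requirement) looks like this:

from dotenv import load_dotenv  # pip install python-dotenv

# Read key=value pairs from .env into this process's environment
load_dotenv(".env")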

MCP Desktop Commander Setup

If using MCP Desktop Commander within Docker, run:

npx -y @wonderwhy-er/desktop-commander setup --force-file-protocol

For more detailed Docker usage instructions, including cross-platform support, optimized configurations, and troubleshooting, please refer to DOCKER_README.md.

πŸš€ Quick Start

Basic Usage

After installation and setting up your environment variables, you can start using OWL right away:

python examples/run.py

Running with Different Models

Model Requirements

  • Tool Calling: OWL requires models with robust tool calling capabilities to interact with various toolkits. Models must be able to understand tool descriptions, generate appropriate tool calls, and process tool outputs.

  • Multimodal Understanding: For tasks involving web interaction, image analysis, or video processing, models with multimodal capabilities are required to interpret visual content and context.

Supported Models

For information on configuring AI models, please refer to our CAMEL models documentation.

Note: For optimal performance, we strongly recommend using OpenAI models (GPT-4 or later versions). Our experiments show that other models may result in significantly lower performance on complex tasks and benchmarks, especially those requiring advanced multi-modal understanding and tool use.

OWL supports various LLM backends, though capabilities may vary depending on the model's tool calling and multimodal abilities. You can use the following scripts to run with different models:

# Run with Claude model
python examples/run_claude.py

# Run with Qwen model
python examples/run_qwen_zh.py

# Run with Deepseek model
python examples/run_deepseek_zh.py

# Run with other OpenAI-compatible models
python examples/run_openai_compatible_model.py

# Run with Gemini model
python examples/run_gemini.py

# Run with Azure OpenAI
python examples/run_azure_openai.py

# Run with Ollama
python examples/run_ollama.py

For a simpler version that only requires an LLM API key, you can try our minimal example:

python examples/run_mini.py

You can run the OWL agent on your own task by modifying the examples/run.py script:

# Define your own task
task = "Task description here."

society = construct_society(task)
answer, chat_history, token_count = run_society(society)

print(f"\033[94mAnswer: {answer}\033[0m")

For uploading files, simply provide the file path along with your question:

# Task with a local file (e.g., file path: `tmp/example.docx`)
task = "What is in the given DOCX file? Here is the file path: tmp/example.docx"

society = construct_society(task)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")

OWL will then automatically invoke document-related tools to process the file and extract the answer.

Example Tasks

Here are some tasks you can try with OWL:

  • "Find the latest stock price for Apple Inc."
  • "Analyze the sentiment of recent tweets about climate change"
  • "Help me debug this Python code: [your code here]"
  • "Summarize the main points from this research paper: [paper URL]"
  • "Create a data visualization for this dataset: [dataset path]"

🧰 Toolkits and Capabilities

Model Context Protocol (MCP)

OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:

Before using MCP, you need to install Node.js.

Install Node.js

Windows

Download the official installer: Node.js.

Check "Add to PATH" option during installation.

Linux

sudo apt update
sudo apt install nodejs npm -y

Mac

brew install node

Install Playwright MCP Service

npm install -g @executeautomation/playwright-mcp-server
npx playwright install-deps

Try our comprehensive MCP examples:

  • examples/run_mcp.py - Basic MCP functionality demonstration (local calls; requires local dependencies)
  • examples/run_mcp_sse.py - Example using the SSE protocol (uses remote services; no local dependencies)
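For orientation, here is a condensed sketch of the local-call pattern. The config path below is a hypothetical placeholder, and examples/run_mcp.py remains the maintained reference:

import asyncio

from camel.toolkits import MCPToolkit

async def main():
    # Hypothetical config file listing MCP servers (e.g., the Playwright server above)
    mcp_toolkit = MCPToolkit(config_path="examples/mcp_servers_config.json")
    await mcp_toolkit.connect()      # start the configured MCP servers
    tools = mcp_toolkit.get_tools()  # expose MCP tools to your OWL agents
    # ... build and run your society with these tools ...
    await mcp_toolkit.disconnect()

asyncio.run(main())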

Available Toolkits

Important: Effective use of toolkits requires models with strong tool calling capabilities. For multimodal toolkits (Web, Image, Video), models must also have multimodal understanding abilities.

OWL supports various toolkits that can be customized by modifying the tools list in your script:

# Configure toolkits
tools = [
    *BrowserToolkit(headless=False).get_tools(),  # Browser automation
    *VideoAnalysisToolkit(model=models["video"]).get_tools(),
    *AudioAnalysisToolkit().get_tools(),  # Requires OpenAI Key
    *CodeExecutionToolkit(sandbox="subprocess").get_tools(),
    *ImageAnalysisToolkit(model=models["image"]).get_tools(),
    SearchToolkit().search_duckduckgo,
    SearchToolkit().search_google,  # Comment out if unavailable
    SearchToolkit().search_wiki,
    SearchToolkit().search_bocha,
    SearchToolkit().search_baidu,
    *ExcelToolkit().get_tools(),
    *DocumentProcessingToolkit(model=models["document"]).get_tools(),
    *FileWriteToolkit(output_dir="./").get_tools(),
]


Key toolkits include:

Multimodal Toolkits (Require multimodal model capabilities)

  • BrowserToolkit: Browser automation for web interaction and navigation
  • VideoAnalysisToolkit: Video processing and content analysis
  • ImageAnalysisToolkit: Image analysis and interpretation

Text-Based Toolkits

  • AudioAnalysisToolkit: Audio processing (requires OpenAI API)
  • CodeExecutionToolkit: Python code execution and evaluation
  • SearchToolkit: Web searches (Google, DuckDuckGo, Wikipedia)
  • DocumentProcessingToolkit: Document parsing (PDF, DOCX, etc.)

Additional specialized toolkits: ArxivToolkit, GitHubToolkit, GoogleMapsToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, RedditToolkit, WeatherToolkit, and more. For a complete list, see the CAMEL toolkits documentation.

Customizing Your Configuration

To customize available tools:

# 1. Import toolkits
from camel.toolkits import BrowserToolkit, SearchToolkit, CodeExecutionToolkit

# 2. Configure tools list
tools = [
    *BrowserToolkit(headless=True).get_tools(),
    SearchToolkit().search_wiki,
    *CodeExecutionToolkit(sandbox="subprocess").get_tools(),
]

# 3. Pass to assistant agent
assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}

Selecting only necessary toolkits optimizes performance and reduces resource usage.

🌐 Web Interface

πŸš€ Enhanced Web Interface Now Available!

Experience improved system stability and optimized performance with our latest update. Start exploring the power of OWL through our user-friendly interface!

Starting the Web UI

# Start the Chinese version
python owl/webapp_zh.py

# Start the English version
python owl/webapp.py

# Start the Japanese version
python owl/webapp_jp.py

Features

  • Easy Model Selection: Choose between different models (OpenAI, Qwen, DeepSeek, etc.)
  • Environment Variable Management: Configure your API keys and other settings directly from the UI
  • Interactive Chat Interface: Communicate with OWL agents through a user-friendly interface
  • Task History: View the history and results of your interactions

The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.

πŸ§ͺ Experiments

To reproduce OWL's GAIA benchmark score, use our gaia69 branch, which includes a customized version of the CAMEL framework in the owl/camel directory. This version contains enhanced toolkits with improved stability on the GAIA benchmark compared to the standard CAMEL installation.

When running the benchmark evaluation:

  1. Switch to the gaia69 branch:

    git checkout gaia69
  2. Run the evaluation script:

    python run_gaia_workforce_claude.py

This will execute the same configuration that achieved our top-ranking performance on the GAIA benchmark.

⏱️ Future Plans

We're continuously working to improve OWL. Here's what's on our roadmap:

  • Write a technical blog post detailing our exploration and insights in multi-agent collaboration in real-world tasks
  • Enhance the toolkit ecosystem with more specialized tools for domain-specific tasks
  • Develop more sophisticated agent interaction patterns and communication protocols
  • Improve performance on complex multi-step reasoning tasks

πŸ“„ License

The source code is licensed under Apache 2.0.

🀝 Contributing

We welcome contributions from the community! Here's how you can help:

  1. Read our Contribution Guidelines
  2. Check open issues or create new ones
  3. Submit pull requests with your improvements

Current Issues Open for Contribution:

To take on an issue, simply leave a comment stating your interest.

πŸ”₯ Community

Join us (Discord or WeChat) as we push the boundaries of finding the scaling laws of agents.

Join us for further discussions!

❓ FAQ

General Questions

Q: Why don't I see Chrome running locally after starting the example script?

A: If OWL determines that a task can be completed using non-browser tools (such as search or code execution), the browser will not be launched. The browser window will only appear when OWL determines that browser-based interaction is necessary.

Q: Which Python version should I use?

A: OWL supports Python 3.10, 3.11, and 3.12.

Q: How can I contribute to the project?

A: See our Contributing section for details on how to get involved. We welcome contributions of all kinds, from code improvements to documentation updates.

Experiment Questions

Q: Which CAMEL version should I use to replicate the role-playing results?

A: We provide a modified version of CAMEL (owl/camel) in the gaia58.18 branch. Please make sure you use this CAMEL version for your experiments.

Q: Why are my experiment results lower than the reported numbers?

A: Since the GAIA benchmark evaluates LLM agents in a realistic environment, it introduces a significant amount of randomness. Based on user feedback, one of the most common replication issues is agents being blocked on certain webpages due to network restrictions. We have uploaded a keyword-matching script to help quickly filter out these errors here. You can also check this technical report for more details on evaluating LLM agents in realistic open-world environments.

πŸ“š Exploring CAMEL Dependency

OWL is built on top of the CAMEL Framework. Here's how you can explore the CAMEL source code and understand how it works with OWL:

Accessing CAMEL Source Code

# Clone the CAMEL repository
git clone https://github.com/camel-ai/camel.git
cd camel

πŸ–ŠοΈ Cite

If you find this repo useful, please cite:

@misc{hu2025owl,
      title={OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation}, 
      author={Mengkang Hu and Yuhang Zhou and Wendong Fan and Yuzhou Nie and Bowei Xia and Tao Sun and Ziyu Ye and Zhaoxuan Jin and Yingru Li and Qiguang Chen and Zeyu Zhang and Yifeng Wang and Qianshuo Ye and Bernard Ghanem and Ping Luo and Guohao Li},
      year={2025},
      eprint={2505.23885},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.23885}, 
}

⭐ Star History

Star History Chart


Graphiti

Build Real-Time Knowledge Graphs for AI Agents


⭐ Help us reach more developers and grow the Graphiti community. Star this repo!


Tip

Check out the new MCP server for Graphiti! Give Claude, Cursor, and other MCP clients powerful Knowledge Graph-based memory.

Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.

Use Graphiti to:

  • Integrate and maintain dynamic user interactions and business data.
  • Facilitate state-based reasoning and task automation for agents.
  • Query complex, evolving data with semantic, keyword, and graph-based search methods.

[Demo: Graphiti temporal walkthrough]


A knowledge graph is a network of interconnected facts, such as "Kendra loves Adidas shoes." Each fact is a "triplet" represented by two entities, or nodes ("Kendra", "Adidas shoes"), and their relationship, or edge ("loves"). Knowledge Graphs have been explored extensively for information retrieval. What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
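As a mental model only (hypothetical types, not Graphiti's storage format), a temporally-aware triplet can be pictured like this:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Fact:
    source: str                  # node, e.g. "Kendra"
    relation: str                # edge, e.g. "loves"
    target: str                  # node, e.g. "Adidas shoes"
    valid_at: datetime           # when the fact became true
    invalid_at: datetime | None  # set if a later episode contradicts it

fact = Fact("Kendra", "loves", "Adidas shoes", datetime(2024, 8, 1), None)

Keeping validity intervals on each edge is what lets the graph answer both "what is true now?" and "what was true then?".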

Graphiti and Zep's Context Engineering Platform

Graphiti powers the core of Zep, a turn-key context engineering platform for AI Agents. Zep offers agent memory, Graph RAG for dynamic data, and context retrieval and assembly.

Using Graphiti, we've demonstrated Zep is the State of the Art in Agent Memory.

Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.

We're excited to open-source Graphiti, believing its potential reaches far beyond AI memory applications.


Why Graphiti?

Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges by providing:

  • Real-Time Incremental Updates: Immediate integration of new data episodes without batch recomputation.
  • Bi-Temporal Data Model: Explicit tracking of event occurrence and ingestion times, allowing accurate point-in-time queries.
  • Efficient Hybrid Retrieval: Combines semantic embeddings, keyword (BM25), and graph traversal to achieve low-latency queries without reliance on LLM summarization.
  • Custom Entity Definitions: Flexible ontology creation and support for developer-defined entities through straightforward Pydantic models.
  • Scalability: Efficiently manages large datasets with parallel processing, suitable for enterprise environments.

[Demo: Graphiti structured + unstructured data]

Graphiti vs. GraphRAG

| Aspect | GraphRAG | Graphiti |
|---|---|---|
| Primary Use | Static document summarization | Dynamic data management |
| Data Handling | Batch-oriented processing | Continuous, incremental updates |
| Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities |
| Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search |
| Adaptability | Low | High |
| Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking |
| Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation |
| Query Latency | Seconds to tens of seconds | Typically sub-second latency |
| Custom Entity Types | No | Yes, customizable |
| Scalability | Moderate | High, optimized for large datasets |

Graphiti is specifically designed to address the challenges of dynamic and frequently updated datasets, making it particularly suitable for applications requiring real-time interaction and precise historical queries.

Installation

Requirements:

  • Python 3.10 or higher
  • Neo4j 5.26 / FalkorDB 1.1.2 or higher (serves as the embeddings storage backend)
  • OpenAI API key (Graphiti defaults to OpenAI for LLM inference and embedding)

Important

Graphiti works best with LLM services that support Structured Output (such as OpenAI and Gemini). Using other services may result in incorrect output schemas and ingestion failures. This is particularly problematic when using smaller models.

Optional:

  • Google Gemini, Anthropic, or Groq API key (for alternative LLM providers)

Tip

The simplest way to install Neo4j is via Neo4j Desktop. It provides a user-friendly interface to manage Neo4j instances and databases. Alternatively, you can use FalkorDB on-premises via Docker and instantly start with the quickstart example:

docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:latest

Install Graphiti:

pip install graphiti-core

or

uv add graphiti-core

Installing with FalkorDB Support

If you plan to use FalkorDB as your graph database backend, install with the FalkorDB extra:

pip install graphiti-core[falkordb]

# or with uv
uv add graphiti-core[falkordb]

You can also install optional LLM providers as extras:

# Install with Anthropic support
pip install graphiti-core[anthropic]

# Install with Groq support
pip install graphiti-core[groq]

# Install with Google Gemini support
pip install graphiti-core[google-genai]

# Install with multiple providers
pip install graphiti-core[anthropic,groq,google-genai]

# Install with FalkorDB and LLM providers
pip install graphiti-core[falkordb,anthropic,google-genai]

Default to Low Concurrency; LLM Provider 429 Rate Limit Errors

Graphiti's ingestion pipelines are designed for high concurrency, but concurrency defaults to a low setting to avoid 429 rate limit errors from your LLM provider. If you find ingestion slow, increase it as described below.

Concurrency is controlled by the SEMAPHORE_LIMIT environment variable, which defaults to 10 concurrent operations. If you encounter 429 errors, try lowering this value; if your LLM provider allows higher throughput, increase it to boost episode ingestion performance.
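For example, to raise the limit for a single run (the value shown is illustrative), set the variable before initializing Graphiti:

import os

# Default is 10; raise only if your LLM provider's rate limits allow it.
os.environ["SEMAPHORE_LIMIT"] = "20"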

Quick Start

Important

Graphiti defaults to using OpenAI for LLM inference and embedding, so ensure that an OPENAI_API_KEY is set in your environment. Anthropic and Groq are also supported for LLM inference, and other providers may work via OpenAI-compatible APIs.

For a complete working example, see the Quickstart Example in the examples directory. The quickstart demonstrates:

  1. Connecting to a Neo4j or FalkorDB database
  2. Initializing Graphiti indices and constraints
  3. Adding episodes to the graph (both text and structured JSON)
  4. Searching for relationships (edges) using hybrid search
  5. Reranking search results using graph distance
  6. Searching for nodes using predefined search recipes

The example is fully documented with clear explanations of each functionality and includes a comprehensive README with setup instructions and next steps.
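As a condensed sketch of that flow (the connection details and episode content below are placeholders; the maintained quickstart is the authoritative version):

import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    # Assumes a local Neo4j instance and OPENAI_API_KEY in the environment
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        await graphiti.build_indices_and_constraints()  # one-time index/constraint setup
        await graphiti.add_episode(
            name="intro",
            episode_body="Kendra loves Adidas shoes.",
            source=EpisodeType.text,
            source_description="quickstart sketch",
            reference_time=datetime.now(timezone.utc),
        )
        # Hybrid (semantic + BM25) search over relationships (edges)
        results = await graphiti.search("What does Kendra love?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()

asyncio.run(main())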

MCP Server

The mcp_server directory contains a Model Context Protocol (MCP) server implementation for Graphiti. This server allows AI assistants to interact with Graphiti's knowledge graph capabilities through the MCP protocol.

Key features of the MCP server include:

  • Episode management (add, retrieve, delete)
  • Entity management and relationship handling
  • Semantic and hybrid search capabilities
  • Group management for organizing related data
  • Graph maintenance operations

The MCP server can be deployed using Docker with Neo4j, making it easy to integrate Graphiti into your AI assistant workflows.

For detailed setup instructions and usage examples, see the MCP server README.

REST Service

The server directory contains an API service for interacting with the Graphiti API. It is built using FastAPI.

Please see the server README for more information.

Optional Environment Variables

In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a few optional environment variables. If you are using one of our supported models, such as Anthropic or Voyage models, the necessary environment variables must be set.

Database Configuration

Database names are configured directly in the driver constructors:

  • Neo4j: Database name defaults to neo4j (hardcoded in Neo4jDriver)
  • FalkorDB: Database name defaults to default_db (hardcoded in FalkorDriver)

As of v0.17.0, if you need to customize your database configuration, you can instantiate a database driver and pass it to the Graphiti constructor using the graph_driver parameter.

Neo4j with Custom Database Name

from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver

# Create a Neo4j driver with custom database name
driver = Neo4jDriver(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="password",
    database="my_custom_database"  # Custom database name
)

# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)

FalkorDB with Custom Database Name

from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver

# Create a FalkorDB driver with custom database name
driver = FalkorDriver(
    host="localhost",
    port=6379,
    username="falkor_user",  # Optional
    password="falkor_password",  # Optional
    database="my_custom_graph"  # Custom database name
)

# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)

Performance Configuration

USE_PARALLEL_RUNTIME is an optional boolean variable that can be set to true if you wish to enable Neo4j's parallel runtime feature for several of our search queries. Note that this feature is not supported for Neo4j Community Edition or for smaller AuraDB instances, so it is off by default.
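If your deployment supports it, enable it like any other environment setting before initializing Graphiti:

import os

# Requires Neo4j Enterprise or a sufficiently large AuraDB instance
os.environ["USE_PARALLEL_RUNTIME"] = "true"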

Using Graphiti with Azure OpenAI

Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different endpoints for LLM and embedding services, and separate deployments for default and small models.

from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client import LLMConfig, OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

# Azure OpenAI configuration - use separate endpoints for different services
api_key = "<your-api-key>"
api_version = "<your-api-version>"
llm_endpoint = "<your-llm-endpoint>"  # e.g., "https://your-llm-resource.openai.azure.com/"
embedding_endpoint = "<your-embedding-endpoint>"  # e.g., "https://your-embedding-resource.openai.azure.com/"

# Create separate Azure OpenAI clients for different services
llm_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=llm_endpoint
)

embedding_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=embedding_endpoint
)

# Create LLM Config with your Azure deployment names
azure_llm_config = LLMConfig(
    small_model="gpt-4.1-nano",
    model="gpt-4.1-mini",
)

# Initialize Graphiti with Azure OpenAI clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=OpenAIClient(
        config=azure_llm_config,
        client=llm_client_azure
    ),
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            embedding_model="text-embedding-3-small-deployment"  # Your Azure embedding deployment name
        ),
        client=embedding_client_azure
    ),
    cross_encoder=OpenAIRerankerClient(
        config=LLMConfig(
            model=azure_llm_config.small_model  # Use small model for reranking
        ),
        client=llm_client_azure
    )
)

# Now you can use Graphiti with Azure OpenAI

Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names that match your Azure OpenAI service configuration.

Using Graphiti with Google Gemini

Graphiti supports Google's Gemini models for LLM inference, embeddings, and cross-encoding/reranking. To use Gemini, you'll need to configure the LLM client, embedder, and the cross-encoder with your Google API key.

Install Graphiti:

uv add "graphiti-core[google-genai]"

# or

pip install "graphiti-core[google-genai]"
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig
from graphiti_core.cross_encoder.gemini_reranker_client import GeminiRerankerClient

# Google API key configuration
api_key = "<your-google-api-key>"

# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash"
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001"
        )
    ),
    cross_encoder=GeminiRerankerClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.5-flash-lite-preview-06-17"
        )
    )
)

# Now you can use Graphiti with Google Gemini for all components

The Gemini reranker uses the gemini-2.5-flash-lite-preview-06-17 model by default, which is optimized for cost-effective and low-latency classification tasks. It uses the same boolean classification approach as the OpenAI reranker, leveraging Gemini's log probabilities feature to rank passage relevance.

Using Graphiti with Ollama (Local LLM)

Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal for privacy-focused applications or when you want to avoid API costs.

Install the models:

ollama pull deepseek-r1:7b    # LLM
ollama pull nomic-embed-text  # embeddings

from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_client import OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

# Configure Ollama LLM client
llm_config = LLMConfig(
    api_key="abc",  # Ollama doesn't require a real API key
    model="deepseek-r1:7b",
    small_model="deepseek-r1:7b",
    base_url="http://localhost:11434/v1", # Ollama provides this port
)

llm_client = OpenAIClient(config=llm_config)

# Initialize Graphiti with Ollama clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=llm_client,
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            api_key="abc",
            embedding_model="nomic-embed-text",
            embedding_dim=768,
            base_url="http://localhost:11434/v1",
        )
    ),
    cross_encoder=OpenAIRerankerClient(client=llm_client, config=llm_config),
)

# Now you can use Graphiti with local Ollama models

Ensure Ollama is running (ollama serve) and that you have pulled the models you want to use.


Telemetry

Graphiti collects anonymous usage statistics to help us understand how the framework is being used and improve it for everyone. We believe transparency is important, so here's exactly what we collect and why.

What We Collect

When you initialize a Graphiti instance, we collect:

  • Anonymous identifier: A randomly generated UUID stored locally in ~/.cache/graphiti/telemetry_anon_id
  • System information: Operating system, Python version, and system architecture
  • Graphiti version: The version you're using
  • Configuration choices:
    • LLM provider type (OpenAI, Azure, Anthropic, etc.)
    • Database backend (Neo4j, FalkorDB)
    • Embedder provider (OpenAI, Azure, Voyage, etc.)

What We Don't Collect

We are committed to protecting your privacy. We never collect:

  • Personal information or identifiers
  • API keys or credentials
  • Your actual data, queries, or graph content
  • IP addresses or hostnames
  • File paths or system-specific information
  • Any content from your episodes, nodes, or edges

Why We Collect This Data

This information helps us:

  • Understand which configurations are most popular to prioritize support and testing
  • Identify which LLM and database providers to focus development efforts on
  • Track adoption patterns to guide our roadmap
  • Ensure compatibility across different Python versions and operating systems

By sharing this anonymous information, you help us make Graphiti better for everyone in the community.

View the Telemetry Code

The Telemetry code may be found here.

How to Disable Telemetry

Telemetry is opt-out and can be disabled at any time. To disable telemetry collection:

Option 1: Environment Variable

export GRAPHITI_TELEMETRY_ENABLED=false

Option 2: Set in your shell profile

# For bash users (~/.bashrc or ~/.bash_profile)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.bashrc

# For zsh users (~/.zshrc)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.zshrc

Option 3: Set for a specific Python session

import os
os.environ['GRAPHITI_TELEMETRY_ENABLED'] = 'false'

# Then initialize Graphiti as usual
from graphiti_core import Graphiti
graphiti = Graphiti(...)

Telemetry is automatically disabled during test runs (when pytest is detected).

Technical Details

  • Telemetry uses PostHog for anonymous analytics collection
  • All telemetry operations are designed to fail silently - they will never interrupt your application or affect Graphiti functionality
  • The anonymous ID is stored locally and is not tied to any personal information

Status and Roadmap

Graphiti is under active development. We aim to maintain API stability while working on:

  • Supporting custom graph schemas:
    • Allow developers to provide their own defined node and edge classes when ingesting episodes
    • Enable more flexible knowledge representation tailored to specific use cases
  • Enhancing retrieval capabilities with more robust and configurable options
  • Graphiti MCP Server
  • Expanding test coverage to ensure reliability and catch edge cases

Contributing

We encourage and appreciate all forms of contributions, whether it's code, documentation, addressing GitHub Issues, or answering questions in the Graphiti Discord channel. For detailed guidelines on code contributions, please refer to CONTRIBUTING.

Support

Join the Zep Discord server and make your way to the #Graphiti channel!
