English | 中文
AI-CHATKIT is a full-stack AI agent chat tool built with langGraph, FastAPI, NextJS, and Chroma. This project serves as a template to help you quickly build AI agent chat applications on the langGraph framework, and it supports RAG (Retrieval-Augmented Generation) to enhance agents' knowledge base Q&A capabilities.
Features:
- AI agent chat application built on the langGraph framework, with custom orchestration of each agent's behavior logic.
- Supports custom knowledge base Q&A capabilities for agents, using ChromaDB for knowledge base storage and querying.
- Supports custom tool invocation for agents.
- Python backend API implemented with FastAPI, with fully asynchronous calls throughout.
- Supports custom frontend applications for agents, implemented using NextJS.
- Supports streaming chat output, with SSE (Server-Sent Events) streaming on the frontend (see the sketch after this list).
- Supports multiple custom agents.
- Supports multi-agent collaboration.
- Chat history is saved in the local browser cache.
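To make the streaming feature concrete, here is a minimal sketch of an SSE chat endpoint in FastAPI. The route path, request shape, and token source are illustrative assumptions, not this project's actual API:

```python
# Minimal SSE chat endpoint sketch (assumed route and payload, not the
# project's actual API). FastAPI streams "data: ..." frames that the
# NextJS frontend can consume with EventSource or fetch.
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

async def generate_tokens(message: str):
    # Placeholder for the real agent call; yields chunks as they arrive.
    for chunk in ["Hello", ", ", "world"]:
        yield f"data: {chunk}\n\n"  # SSE frame format
    yield "data: [DONE]\n\n"

@app.post("/chat/stream")
async def chat_stream(req: ChatRequest):
    return StreamingResponse(
        generate_tokens(req.message),
        media_type="text/event-stream",
    )
```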
Project structure:
- backend: Backend service code
- frontend: Frontend service code
Backend .env file configuration: rename .env.example to .env:
# Environment variable configuration
# Database configuration
# SQLite URL
DATABASE_URL=sqlite+aiosqlite:///resource/database.db
# MySQL
# DATABASE_URL=mysql+aiomysql://root:root@localhost/ai-chatkit
# Application configuration
DEBUG=True
APP_NAME=AI ChatKit
# OpenAI
OPENAI_BASE_URL=
OPENAI_API_KEY=
DEFAULT_MODEL=gpt-4o-mini
# DashScope
# DASHSCOPE_API_KEY=
# DEFAULT_MODEL=qwen-plus
# DeepSeek
# DEEPSEEK_API_KEY=
# DEFAULT_MODEL=deepseek-chat
# Use bge-m3 as the embedding model, supporting both Chinese and English; requires local deployment of the bge-m3 model via Ollama
EMBEDDING_MODEL=bge-m3
# Relative storage path for ChromaDB
CHROMA_PATH=resource/chroma_db
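For orientation, here is a sketch of how such variables could be loaded on the backend with pydantic-settings; the class and field names are illustrative assumptions, not necessarily the project's actual config module:

```python
# Illustrative settings loader (assumed, not the project's actual module).
# pydantic-settings reads the .env file and exposes typed fields.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    database_url: str = "sqlite+aiosqlite:///resource/database.db"
    debug: bool = True
    app_name: str = "AI ChatKit"
    openai_base_url: str = ""
    openai_api_key: str = ""
    default_model: str = "gpt-4o-mini"
    embedding_model: str = "bge-m3"
    chroma_path: str = "resource/chroma_db"

settings = Settings()  # e.g. settings.default_model -> "gpt-4o-mini"
```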
Run the backend server:
# Use the uv tool to manage Python dependencies
pip install uv
# Replace ${workdir} with your own working directory
cd ${workdir}/backend
uv sync --frozen
# Activate the Python virtual environment
source .venv/bin/activate
# On Windows, activate with:
# .venv\Scripts\activate
# Run the server
python app/run_server.py
By default, this project accesses a locally deployed bge-m3 model via Ollama, so using the knowledge base locally requires a local Ollama deployment. For deploying bge-m3 with Ollama, see: https://ollama.com/library/bge-m3
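As a sketch of how the embedding and vector-store pieces fit together (assuming the langchain-ollama and langchain-chroma integration packages; the collection name and sample texts are made up):

```python
# Sketch: wiring Ollama's bge-m3 embeddings to a persistent ChromaDB store.
# Assumes the langchain-ollama and langchain-chroma packages; the collection
# name is hypothetical, not necessarily what this project uses.
from langchain_ollama import OllamaEmbeddings
from langchain_chroma import Chroma

embeddings = OllamaEmbeddings(model="bge-m3")  # served by local Ollama

vector_store = Chroma(
    collection_name="employee_handbook",  # hypothetical collection
    embedding_function=embeddings,
    persist_directory="resource/chroma_db",  # matches CHROMA_PATH
)

# Index a document, then run a similarity search against it.
vector_store.add_texts(["Annual leave is 15 days per year."])
results = vector_store.similarity_search("How many vacation days?", k=1)
```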
# Replace ${workdir} with your own working directory
cd ${workdir}/frontend
# Use pnpm to manage dependencies
pnpm install
# Start the frontend application
pnpm dev
After a successful startup, the application is available at: http://localhost:3000/
You can use langGraph in this project to create and orchestrate multiple agents, each with its own behavior logic. Agent orchestration code goes in the backend/app/ai/agent directory, and you can switch between agents for conversation in the frontend.
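For a sense of what an agent module might look like, here is a minimal sketch using langGraph's prebuilt ReAct agent; the tool, model wiring, and prompt are illustrative assumptions rather than the project's actual code:

```python
# Minimal langGraph agent sketch (illustrative; see the project's own
# modules in backend/app/ai/agent for the real orchestration logic).
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

@tool
def get_employee_info(name: str) -> str:
    """Look up basic information for an employee (stub for demonstration)."""
    return f"{name}: engineering department, joined 2021"

model = ChatOpenAI(model="gpt-4o-mini")  # DEFAULT_MODEL from .env
agent = create_react_agent(model, tools=[get_employee_info])

# Invoke once; langGraph threads messages through the ReAct loop.
result = agent.invoke(
    {"messages": [("user", "Which department is Alice in?")]}
)
print(result["messages"][-1].content)
```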
This project comes with the following agents:
- OA-ASSISTANT: Demonstrates an OA assistant agent, supporting employee information queries and employee handbook knowledge base retrieval. For details, see backend/app/ai/agent/oa_assistant.py.
- MULTI_AGENT: Demonstrates multi-agent collaboration between several agents. The multi_agent graph includes three agents:
  - math_agent: Math agent, mainly used for mathematical calculations
  - code_agent: Code agent, mainly used for code generation
  - general_agent: General agent, mainly used for handling general questions

  These three agents are coordinated by a supervisor, as sketched below. For details, see backend/app/ai/agent/multi_agent.py.
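Below is a rough sketch of the supervisor pattern described above, assuming the langgraph-supervisor helper package; the prompts are illustrative, and the project's actual wiring lives in backend/app/ai/agent/multi_agent.py:

```python
# Supervisor pattern sketch (assumes the langgraph-supervisor package;
# agent names mirror the ones described above, prompts are illustrative).
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from langgraph_supervisor import create_supervisor

model = ChatOpenAI(model="gpt-4o-mini")

math_agent = create_react_agent(
    model, tools=[], prompt="You solve math problems.", name="math_agent"
)
code_agent = create_react_agent(
    model, tools=[], prompt="You write code.", name="code_agent"
)
general_agent = create_react_agent(
    model, tools=[], prompt="You answer general questions.", name="general_agent"
)

# The supervisor routes each user turn to the best-suited agent.
workflow = create_supervisor(
    [math_agent, code_agent, general_agent],
    model=model,
    prompt="Route math to math_agent, coding to code_agent, else general_agent.",
)
app = workflow.compile()
result = app.invoke({"messages": [("user", "What is 12 * 7?")]})
```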

