A comprehensive, production-ready AI chat application featuring a Next.js frontend, FastAPI backend, and advanced Model Context Protocol (MCP) integration. This project leverages Google Gemini LLMs and LangGraph agents to provide a powerful, context-aware chat experience with capabilities extending to GitHub, Supabase, and academic research tools.
This project implements a full-stack AI chat solution designed for scalability and extensibility. It goes beyond simple text generation by integrating MCP (Model Context Protocol), allowing the AI agent to interact with external systems like GitHub repositories, databases, and search engines in real-time.
The system is composed of a modern React-based frontend, a high-performance Python backend, and a robust DevOps infrastructure including monitoring and CI/CD pipelines.
The application follows a microservices-inspired architecture:
- Frontend (Next.js 16): Handles user interaction, authentication, and real-time chat rendering.
- Backend (FastAPI): Orchestrates AI agents, manages MCP connections, and handles business logic.
- Database (PostgreSQL): Stores user data, conversation history, and preferences.
- Async Workers (Celery + Redis): Handles background tasks like email notifications.
- MCP Servers: External tools (GitHub, Supabase, etc.) that the AI agent can control.
- Infrastructure: Dockerized services, Kubernetes (Helm) charts, and Prometheus/Grafana monitoring.
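To make the MCP piece of this architecture concrete, the sketch below shows the shape of an MCP `tools/call` exchange (MCP is built on JSON-RPC 2.0). It is a hypothetical, minimal dispatcher: a real MCP server would use an MCP SDK over a stdio or HTTP transport, and the `search_repos` tool name is illustrative only.

```python
# Hypothetical, minimal sketch of an MCP-style "tools/call" dispatch.
# Real servers use an MCP SDK and a transport (stdio/HTTP); the tool
# registry below stands in for actual GitHub/Supabase integrations.
TOOLS = {
    "search_repos": lambda args: f"results for {args['query']}",
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 tools/call request to a registered tool."""
    if request["method"] != "tools/call":
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    params = request["params"]
    text = TOOLS[params["name"]](params["arguments"])
    # MCP tool results carry a list of typed content blocks.
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": text}]}}

req = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
       "params": {"name": "search_repos", "arguments": {"query": "langgraph"}}}
print(handle(req)["result"]["content"][0]["text"])
```

The agent in the backend plays the client side of this exchange: it lists the tools each MCP server exposes, then issues `tools/call` requests as part of its reasoning loop.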
- 🤖 Advanced AI Agents: Powered by LangGraph and Google Gemini, capable of reasoning and tool use.
- 🔌 MCP Integration: Seamlessly connect to GitHub, Supabase, Paper Search, and Exa via Model Context Protocol.
- 💬 Rich Chat Interface: Markdown support, code syntax highlighting, and real-time streaming.
- 🔐 Secure Authentication: Multi-provider support (Google, GitHub, Email) via NextAuth.js.
- 🧠 Context Persistence: Long-term memory for conversations using MemorySaver.
- 📊 Comprehensive Monitoring: Real-time metrics with Prometheus and Grafana.
- 🚀 DevOps Ready: Includes Terraform scripts, Jenkins pipelines, and Helm charts.
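The context-persistence feature above can be pictured with a small in-memory checkpointer that keys conversation state by a thread id. In the real backend this role is played by LangGraph's `MemorySaver`; the class below is a hypothetical stand-in to show the pattern, not the actual API.

```python
# Hypothetical stand-in for MemorySaver-style context persistence:
# conversation history is scoped by a thread id, so each chat session
# resumes with its own prior messages.
from collections import defaultdict

class InMemoryCheckpointer:
    def __init__(self):
        self._store = defaultdict(list)

    def save(self, thread_id: str, message: dict) -> None:
        self._store[thread_id].append(message)

    def load(self, thread_id: str) -> list:
        return list(self._store[thread_id])

cp = InMemoryCheckpointer()
cp.save("thread-1", {"role": "user", "content": "hi"})
cp.save("thread-1", {"role": "assistant", "content": "hello"})
print(len(cp.load("thread-1")))  # 2
```

Keying on a thread id is what lets the same agent serve many concurrent conversations without mixing their memories.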
To run the full stack, make sure the following prerequisites are installed, then start the backend, frontend, and monitoring services.
- Docker & Docker Compose
- Python 3.12+
- Node.js 18+
- PostgreSQL
Navigate to the backend directory to set up the server and agents.
```bash
cd backend
# See backend/README.md for detailed env setup
pip install -r requirements.txt
uvicorn main:app --reload
```

Navigate to the frontend directory to launch the user interface.
```bash
cd frontend
# See frontend/README.md for detailed env setup
npm install
npm run dev
```

Launch the observability stack.
```bash
cd monitoring
docker-compose up -d
```

🖥️ Frontend
Built with Next.js 16, TypeScript, and Tailwind CSS.
- Features: Real-time chat, Model selection, User profiles, Dark mode.
- Tech: NextAuth.js, Prisma, Radix UI, Sonner.
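The real-time chat feature relies on the backend emitting tokens incrementally for the frontend to render as they arrive. The generator below is a hypothetical sketch of that producer side; in the actual app this would be an SSE or WebSocket stream from FastAPI, and the chunking here is illustrative.

```python
# Hypothetical sketch of server-side token streaming: the backend yields
# small chunks that the chat UI appends incrementally. A production
# endpoint would wrap this in an SSE/WebSocket response.
def stream_tokens(text: str, chunk_size: int = 4):
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

chunks = list(stream_tokens("Hello from the agent!"))
print("".join(chunks))
```

Rendering chunk by chunk is what makes long model responses feel responsive instead of arriving as one delayed block.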
⚙️ Backend
| Component | Technology |
|---|---|
| Frontend | Next.js 16, TypeScript, Tailwind CSS, Prisma, NextAuth.js |
| Backend | FastAPI, LangGraph, LangChain, Celery, SQLAlchemy, Flower |
| AI / LLM | Google Gemini, Model Context Protocol (MCP) |
| Database | PostgreSQL, Redis |
| DevOps | Docker, Kubernetes (Helm), Terraform, Jenkins |
| Monitoring | Prometheus, Grafana, Alertmanager, kube-state-metrics |