CogniGraph is an AI-powered research assistant designed to help you learn new concepts. It provides a conversational interface to explore topics, performs real-time web searches for up-to-date information, and summarizes key points at the end of your session.
A key feature of CogniGraph is its integration with Obsidian. It automatically saves summarized notes to your vault, wrapping related concepts in [[double brackets]] to leverage Obsidian's powerful graph view and create a connected knowledge base.
This project is built with LangGraph and Streamlit, and can be configured to use different Large Language Models (LLMs), such as local models via Ollama or proprietary models from OpenAI.
The following diagram illustrates the flow of information within the CogniGraph agent:
```mermaid
graph TD
    subgraph "User Interaction (Streamlit or Agent UI)"
        A[User Input] --> B{LangGraph Agent};
        B --> C[Display Response];
    end
    subgraph "LangGraph Core Logic"
        B --> D[extract_preference];
        D --> E[assistant node];
        E --> F{post-assistant route};
        F -- tool calls present --> G[Tavily ToolNode];
        G --> E;
        F -- summarize intent detected --> H[summarize node];
        H --> I{human approval interrupt};
        I -- yes --> J[save_summary node];
        J --> B;
        I -- no --> B;
        F -- no tool / no summarize --> B;
    end
    subgraph "Data & Persistence"
        D --> P[(SQLite DB<br>User Preferences)];
        K[End Session Button] --> L[/summarize via graph/];
        J --> M([Obsidian Vault<br>Save as .md]);
    end
    style A fill:#cde4ff
    style C fill:#cde4ff
    style K fill:#ffcdd2
    style M fill:#d4edda
    style J fill:#fff2cc
    style F fill:#e0cffc
```
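The post-assistant route in the diagram can be sketched as a plain routing function. This is a minimal illustration only, not the project's actual implementation; the state shape, key names, and return labels are assumptions:

```python
def route_after_assistant(state: dict) -> str:
    """Decide the next node after the assistant runs.

    Routing rules (mirroring the diagram):
    - assistant requested tool calls -> run the Tavily ToolNode
    - summarize intent detected      -> run the summarize node
    - otherwise                      -> return to the agent / end the turn
    """
    last = state["messages"][-1]
    if last.get("tool_calls"):            # assistant wants to call a tool
        return "tools"
    if state.get("summarize_requested"):  # e.g. the user typed /summarize
        return "summarize"
    return "end"

# One example state per branch:
print(route_after_assistant(
    {"messages": [{"tool_calls": [{"name": "tavily_search"}]}]}))        # tools
print(route_after_assistant(
    {"messages": [{"content": "Done."}], "summarize_requested": True}))  # summarize
print(route_after_assistant({"messages": [{"content": "Hi!"}]}))         # end
```

In the real graph this function would be registered as a conditional edge after the assistant node, mapping each returned label to the corresponding node.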
- Conversational AI: Engage in a natural conversation to ask questions and learn.
- LLM Agnostic: Easily switch between a locally hosted Ollama model (e.g., Gemma, Llama) and OpenAI's models (e.g., GPT-4o) via a simple configuration change.
- Web Search: Integrates with Tavily Search API to provide current information on any topic.
- Native Tool Calling: Uses LangChain tool binding and a LangGraph `ToolNode` for web search.
- Post-Assistant Summary Routing: Summarization intent is checked after assistant execution and can branch to a dedicated summarize node.
- In-Graph Summarization: Summarization is part of the same graph and works from both Streamlit and Agent UI.
- Human-In-The-Loop Save Approval: After summary generation, the graph interrupts and asks for explicit user approval before saving to Obsidian.
- Obsidian Integration: Automatically saves summaries as Markdown files in a specified Obsidian vault, creating links between concepts for graph visualization.
- Persistent Memory: Stable user preferences are extracted and stored as key-value pairs in a local SQLite database.
- Dual UI Support: Works in Streamlit and in Agent UI / LangGraph Studio.
- Logging: Detailed logs are written to the `logs/` directory for easy debugging and monitoring.
```
.
├── src/
│   └── cognigraph/
│       ├── ui.py              # Streamlit UI
│       ├── graph.py           # Unified LangGraph workflow (chat + tools + summarization)
│       ├── llm.py             # LLM provider factory
│       ├── server_graphs.py   # LangGraph API entrypoint
│       ├── db.py              # SQLite persistence layer
│       ├── config.py          # Environment config loader
│       └── logging_setup.py   # Logging configuration
├── app.py                     # Thin Streamlit entrypoint
├── langgraph.json             # LangGraph server config
├── pyproject.toml             # uv project configuration
├── .env / .env.example        # Environment variables
├── logs/                      # Log files
└── README.md                  # This file
```
1. Prerequisites:
   - Python 3.12+
   - An active internet connection
   - (Optional) Ollama installed and running for local LLM usage
2. Clone the Repository.
3. Install uv: Follow the official instructions: https://docs.astral.sh/uv/getting-started/installation/
4. Create the Environment and Install Dependencies:
   ```
   uv sync
   ```
5. Configure Environment Variables: Create a file named `.env` in the root of the project directory and populate it with your configuration. A template is provided below. Copy it into your `.env` file and replace the placeholder values with your actual information.
```
# --- LLM Configuration ---
# Set the provider: "ollama", "openai", etc.
LLM_PROVIDER="ollama"
# Set the model name for the selected provider (e.g., "gemma", "gpt-4o")
LLM_MODEL="gemma"
# Set the base URL for the LLM API (required for local models like Ollama)
LLM_BASE_URL="http://localhost:11434"

# --- API Keys and Paths ---
# Required if using LLM_PROVIDER="openai"
OPENAI_API_KEY="your-openai-api-key"
# Required for web search functionality
TAVILY_API_KEY="your-tavily-api-key"
# Absolute path to your Obsidian vault's root directory
OBSIDIAN_VAULT_PATH="C:/Users/YourUser/Documents/ObsidianVault"
```

Important:
- You can get a free Tavily API key from the Tavily website.
- Ensure `OBSIDIAN_VAULT_PATH` is an absolute path to your vault's root directory.
1. Install dependencies (if not already done):
   ```
   uv sync
   ```
2. (Optional) Start Ollama if `LLM_PROVIDER="ollama"`:
   ```
   ollama run gemma
   ```
3. Start Streamlit:
   ```
   uv run streamlit run app.py
   ```
4. Open the Streamlit URL (usually http://localhost:8501) and chat normally.
5. Summarize the conversation:
   - Type `/summarize` in chat, or
   - Click End Session & Save Notes (this triggers summarization through the same graph).
   - Summaries are based on the current chat history (ongoing conversation context).
   - After the summary appears, reply `yes` or `no` when prompted to confirm whether it should be saved to Obsidian.
6. If approved and the Obsidian path is configured, the summary is saved under `AINotes/` in your vault.
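Saving a summary as a linked Obsidian note can be sketched as follows. This is illustrative only; the real `save_summary` node's filename scheme and concept-linking logic are assumptions:

```python
import re
import tempfile
from datetime import datetime
from pathlib import Path


def save_summary_to_vault(summary: str, concepts: list[str], vault_path: str) -> Path:
    """Write a summary as Markdown under AINotes/, wrapping concepts in [[links]]."""
    # Wrap each known concept in Obsidian's [[wiki-link]] syntax so the
    # note participates in the vault's graph view.
    for concept in concepts:
        summary = re.sub(
            rf"\b{re.escape(concept)}\b", f"[[{concept}]]", summary, count=1
        )
    notes_dir = Path(vault_path) / "AINotes"
    notes_dir.mkdir(parents=True, exist_ok=True)
    note = notes_dir / f"session-{datetime.now():%Y-%m-%d-%H%M%S}.md"
    note.write_text(summary, encoding="utf-8")
    return note


# Example, using a temporary directory as a stand-in vault:
with tempfile.TemporaryDirectory() as vault:
    path = save_summary_to_vault(
        "We covered LangGraph routing and Streamlit basics.",
        ["LangGraph", "Streamlit"],
        vault,
    )
    print(path.read_text(encoding="utf-8"))
    # We covered [[LangGraph]] routing and [[Streamlit]] basics.
```

Because the note is plain Markdown, Obsidian picks it up automatically the next time the vault is indexed, and the `[[links]]` appear as edges in the graph view.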
Run the unified graph as an API and connect from Agent UI or Studio.
1. Start the local LangGraph API server. Standard command:
   ```
   uv run langgraph dev
   ```
   If your environment blocks `langgraph` directly due to Application Control, use:
   ```
   uv run python -m langgraph_api.cli --config langgraph.json
   ```
   The API endpoint is typically http://127.0.0.1:2024.
2. Open Agent UI / Studio:
   - `uv run langgraph dev` prints a Studio URL automatically.
   - Or open manually: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
3. (Optional) Run the standalone Agent Chat UI app:
   ```
   npx create-agent-chat-app --project-name cognigraph-chat-ui
   cd cognigraph-chat-ui
   pnpm install
   pnpm dev
   ```
4. Connect the UI to your local graph:
   - Graph ID: `cognigraph`
   - Deployment URL: http://127.0.0.1:2024
   - LangSmith key: optional for local usage
5. Trigger summarization in Agent UI:
   - Send `/summarize` in chat.
   - The same unified graph handles chat, tools, and summarization.
   - Summarization is intended for the current ongoing conversation context.
   - When prompted by the interrupt, reply `yes` to save to Obsidian or `no` to skip saving.
This setup is configured via `langgraph.json`.
All application events, including API calls, node executions, and errors, are logged to `logs/app.log`. This is the first place to check if you encounter any issues.
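A logging setup along these lines would produce that file. It is a sketch only; the actual `logging_setup.py` may configure handlers, levels, and formats differently:

```python
import logging
from logging.handlers import RotatingFileHandler
from pathlib import Path


def setup_logging(log_dir: str = "logs") -> logging.Logger:
    """Configure a rotating file logger that writes to <log_dir>/app.log."""
    Path(log_dir).mkdir(exist_ok=True)
    handler = RotatingFileHandler(
        Path(log_dir) / "app.log", maxBytes=1_000_000, backupCount=3
    )
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger = logging.getLogger("cognigraph")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
    return logger


logger = setup_logging()
logger.info("assistant node executed")
```

A rotating handler keeps `logs/` from growing without bound during long research sessions.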