Build a source-grounded personal wiki with Obsidian, Claude, Codex, and LLMWiki.
Capture raw material. Let AI structure it. Query it later with citations.
Most "second brains" quietly become graveyards: highlights, transcripts, PDFs, chat logs, meeting notes, and random markdown files that never turn into usable knowledge.
This project gives you a working pattern for an AI-maintained personal wiki:
| Layer | Purpose | Owner |
|---|---|---|
| `raw/` | Immutable source material | You |
| `wiki/` | Structured pages, citations, syntheses | AI assistant |
| `CLAUDE.md` / `AGENTS.md` | Operating rules for Claude and Codex | You + AI |
The goal is not another note-taking ritual. The goal is a system where every important idea can be traced back to a source, every contradiction is visible, and your knowledge compounds over time.
| Included | What it does |
|---|---|
| Full playbook | Step-by-step setup for Obsidian, Claude, Codex, MCP, and GitHub |
| Starter templates | CLAUDE.md, AGENTS.md, wiki/index.md, wiki/log.md, and Claude commands |
| Codex skill | One-command agent workflow for scaffolding a new LLMWiki vault |
| Example vault | Tiny fictional example showing raw/source/concept/index/log pages |
| Safety rules | Keeps private raw notes out of public repos and forces source-grounded answers |
Use the bundled skill when you want the agent to do the setup:
Use the LLMWiki Second Brain skill to create my vault at ~/Documents/LLMWiki.
This creates the folders, schema files, command templates, starter index, and starter log.
Read the setup guide below if you want to understand every piece and assemble it manually.
Either way, you end up with the same system:
capture -> ingest -> query -> synthesize -> lint
LLMWiki fixes the second-brain graveyard problem by separating your knowledge system into three layers:
LLMWiki/
|-- raw/ # immutable source material
|-- wiki/ # assistant-written knowledge pages
`-- schema files # rules that tell Claude/Codex how to write
The human owns judgment and curation. The AI owns organization, linking, summarization, contradiction detection, and bookkeeping.
Your second brain should behave like a tiny private Wikipedia for your life, business, research, and projects.
Instead of asking:
Where did I save that note?
You ask:
What do I currently believe about this topic, and what sources support it?
The assistant then reads your wiki, cites the relevant pages, and tells you where the knowledge is strong, weak, outdated, or contradictory.
- Obsidian for the local markdown vault.
- Claude Desktop or another Claude client with local filesystem access.
- Codex or another coding agent that can read and edit a local folder.
- Git and GitHub if you want to version-control or publish your starter kit.
- Obsidian Web Clipper for saving web articles into `raw/`.
- The official MCP filesystem server for Claude Desktop filesystem access.
- A backup/sync layer such as Obsidian Sync, iCloud Drive, Dropbox, or Git.
If you want an agent to do the setup instead of manually following the steps, use the bundled skill.
```sh
git clone https://github.com/YOUR_USERNAME/YOUR_REPO.git
cd YOUR_REPO
```

Copy the skill folder into your Codex skills directory:

```sh
mkdir -p ~/.codex/skills
cp -R skills/llmwiki-second-brain ~/.codex/skills/
```

Restart Codex if needed so it discovers the new skill.
In Codex, say:
Use the LLMWiki Second Brain skill to create my vault at ~/Documents/LLMWiki.
Codex should use the skill and run:
```sh
python3 ~/.codex/skills/llmwiki-second-brain/scripts/create_llmwiki_vault.py ~/Documents/LLMWiki
```

To include the fictional example pages:
Use the LLMWiki Second Brain skill to create my vault at ~/Documents/LLMWiki with examples.
The skill creates the vault structure, schema files, starter index/log, and Claude command templates. You still need to open the folder in Obsidian and connect Claude/Codex to it.
You can also run the script directly from the cloned repo:
```sh
python3 skills/llmwiki-second-brain/scripts/create_llmwiki_vault.py ~/Documents/LLMWiki
```

Useful flags:

- `--include-examples`
- `--dry-run`
- `--force`

Use `--dry-run` first if you are pointing at an existing folder.
LLMWiki has four important file types.
Folder: `raw/`
This is where you store original material:
- articles;
- copied transcripts;
- PDFs;
- exported conversations;
- meeting notes;
- research papers;
- podcast notes;
- decision logs;
- screenshots or assets.
Raw files are the source of truth. Once saved, do not rewrite them during ingest. If you need a corrected version, save a new raw file.
Folder: `wiki/sources/`
Every important raw file gets a source summary page. This page contains:
- citation;
- summary;
- key claims;
- notable quotes;
- questions raised;
- pages touched.
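As a sketch, a hypothetical source page built from that checklist might look like the following. The title, frontmatter fields, and section names here are illustrative; your `CLAUDE.md` schema is the real authority.

```markdown
---
type: source
status: draft
created: 2024-05-01
---

# Attention Is All You Need

**Citation:** Vaswani et al., 2017, "Attention Is All You Need."

## Summary
One-paragraph summary of what the source argues.

## Key claims
- Self-attention can replace recurrence entirely.

## Notable quotes
> A short quote worth preserving verbatim.

## Questions raised
- How does this scale to very long inputs?

## Pages touched
[[attention-mechanism]], [[transformer-architecture]]
```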
Folders: `wiki/entities/` and `wiki/concepts/`
Entities are things:
- people;
- companies;
- tools;
- products;
- books;
- projects.
Concepts are ideas:
- frameworks;
- mental models;
- technical ideas;
- strategies;
- lessons;
- repeated patterns.
The assistant updates these pages every time a new source changes or reinforces your understanding.
Folder: `wiki/syntheses/`
Syntheses are filed answers to useful questions:
- comparisons;
- timelines;
- strategic memos;
- research summaries;
- decision briefs;
- "what do I currently believe?" pages.
These are the pages that make the second brain feel alive.
Create a folder named LLMWiki anywhere on your machine:
LLMWiki/
|-- CLAUDE.md
|-- AGENTS.md
|-- raw/
| `-- assets/
|-- wiki/
| |-- index.md
| |-- log.md
| |-- entities/
| |-- concepts/
| |-- sources/
| `-- syntheses/
`-- .claude/
`-- commands/
|-- ingest.md
|-- query.md
`-- lint.md
This repo includes copy-paste templates for those files in templates/.
- Download Obsidian from obsidian.md/download.
- Open Obsidian.
- Click Create new vault.
- Name it `LLMWiki`.
- Choose a local folder you control, for example: `~/Documents/LLMWiki`.
Do not start with a complicated folder system. The point of this setup is that the AI maintains structure for you.
Inside your LLMWiki vault, create:
raw/
raw/assets/
wiki/
wiki/entities/
wiki/concepts/
wiki/sources/
wiki/syntheses/
.claude/
.claude/commands/
Then create:
wiki/index.md
wiki/log.md
CLAUDE.md
AGENTS.md
You can either create them manually in Obsidian or copy the templates from this repo.
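If you prefer the terminal, the same skeleton can be created with two portable commands (a sketch; run it from the directory where you want the vault to live):

```shell
# Create the LLMWiki folder skeleton.
mkdir -p LLMWiki/raw/assets \
         LLMWiki/wiki/entities LLMWiki/wiki/concepts \
         LLMWiki/wiki/sources LLMWiki/wiki/syntheses \
         LLMWiki/.claude/commands

# Create the starter files so Obsidian and the assistants can find them.
touch LLMWiki/CLAUDE.md LLMWiki/AGENTS.md \
      LLMWiki/wiki/index.md LLMWiki/wiki/log.md
```

The starter files are empty at this point; the next steps fill them from `templates/`.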
Copy templates/CLAUDE.md into your vault root:
LLMWiki/CLAUDE.md
This is the constitution of your second brain. It tells the assistant:
- what folders mean;
- how pages should be named;
- what frontmatter is required;
- how to cite sources;
- how to ingest new material;
- how to answer questions;
- how to flag contradictions.
The most important rule:
Every claim in `wiki/` should cite a source page using `[[source-slug]]`.
That rule is what prevents the vault from becoming a hallucination machine.
Copy templates/AGENTS.md into your vault root:
LLMWiki/AGENTS.md
Codex uses AGENTS.md as local operating instructions. This lets Codex behave like Claude when working inside the vault.
Codex should:
- read `CLAUDE.md` at the start;
- read `wiki/index.md`;
- answer from the wiki instead of memory;
- cite pages inline as `[[slug]]`;
- ask before ingesting;
- update source/entity/concept pages, `index.md`, and `log.md`.
Copy these files:
templates/.claude/commands/ingest.md
templates/.claude/commands/query.md
templates/.claude/commands/lint.md
Into:
LLMWiki/.claude/commands/
These commands define the three core workflows:
- `/ingest` turns raw material into wiki pages.
- `/query` answers from the wiki.
- `/lint` audits the wiki for problems.
Create `wiki/index.md`:

```markdown
# Index

Content-oriented catalog of the wiki. Updated on every `/ingest` and whenever a synthesis is filed.

Format: `- [[slug]] - one-line summary. *(N sources, updated YYYY-MM-DD)*`

---

## Entities

_No entries yet._

## Concepts

_No entries yet._

## Sources

_No entries yet._

## Syntheses

_No entries yet._
```

Create `wiki/log.md`:

```markdown
# Log

Chronological, append-only. Every entry starts with:

`## [YYYY-MM-DD] <op> | <title>`

---
```

The index is the map. The log is the audit trail.
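If you ever script against the vault, the log format is easy to generate mechanically. A hypothetical Python helper (the function names and signatures are illustrative, not part of the starter kit):

```python
from datetime import date


def log_entry(op: str, title: str, body: str = "") -> str:
    """Format one log entry matching the schema: ## [YYYY-MM-DD] <op> | <title>"""
    header = f"## [{date.today().isoformat()}] {op} | {title}"
    return header + (("\n" + body) if body else "") + "\n"


def append_log(log_path: str, op: str, title: str, body: str = "") -> None:
    # Append-only by construction: mode "a" never rewrites earlier entries.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write("\n" + log_entry(op, title, body))
```

This mirrors the append-only rule: earlier entries are never touched, only new ones added.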
Claude needs filesystem access to your vault.
The easiest current path is Claude Desktop with a filesystem MCP server or a desktop extension that grants access to a specific folder.
The important security principle:
Only grant Claude access to the folder it needs.
For this setup, that folder is:
~/Documents/LLMWiki
Do not give broad access to your entire home directory unless you understand the risk.
In Claude Desktop:
- Open Settings.
- Open Extensions.
- Install a filesystem/local files extension if one is available in your Claude Desktop build.
- Configure it to access only your `LLMWiki` folder.
- Restart Claude Desktop if tools do not appear.
This is the easiest path when available.
If you use the official filesystem MCP server through npx, install Node.js first if your machine does not already have it.
Then add this server to your Claude Desktop MCP config:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/YOUR_USERNAME/Documents/LLMWiki"
      ]
    }
  }
}
```

On Windows, use:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "cmd",
      "args": [
        "/c",
        "npx",
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\YOUR_USERNAME\\Documents\\LLMWiki"
      ]
    }
  }
}
```

After editing the config, restart Claude Desktop and ask:
List the allowed directories you can access.
Claude should only show your LLMWiki vault path.
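If the tools never appear, invalid JSON in the config file is a common cause. A hypothetical Python helper to sanity-check the config before restarting (the config file's location varies by OS and Claude Desktop version, so the path is up to you):

```python
import json
from pathlib import Path


def check_mcp_config(path: str) -> list[str]:
    """Parse a Claude Desktop MCP config and return the configured server
    names. Raises ValueError on invalid JSON, which is a frequent reason
    MCP tools silently fail to show up."""
    try:
        cfg = json.loads(Path(path).read_text(encoding="utf-8"))
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid JSON in {path}: {e}") from e
    return sorted(cfg.get("mcpServers", {}).keys())
```

For the config above, this returns `["filesystem"]`; a stray trailing comma raises a clear error instead.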
After connecting the folder, start a Claude conversation and say:
You have filesystem access to my LLMWiki vault.
At the start of this conversation, read CLAUDE.md and wiki/index.md.
Then follow the schema exactly for all wiki writes.
If your CLAUDE.md is in the vault root, Claude should now know how to operate the system.
Open the LLMWiki folder in Codex as a workspace.
Codex will read AGENTS.md and understand the same operating rules:
- use the schema;
- read the index first;
- cite pages;
- ask before ingesting;
- preserve the raw/source/wiki separation.
Codex is especially useful for:
- bulk cleanup;
- linting;
- creating templates;
- maintaining the repo;
- editing long markdown files;
- turning the vault into a public GitHub starter kit.
If you use Codex from a terminal or app, the key requirement is simple: open the vault folder itself, not just a parent folder where the instructions might be missed.
Claude is often better for conversational ingest. Codex is often better for structural edits and repo work. Together, they make the system feel much more durable.
Pick one thing worth preserving:
- a great article;
- a YouTube transcript;
- a podcast summary;
- a chat you had with an AI;
- a decision you made;
- a project planning conversation;
- a PDF paper.
Save it into raw/ with a lowercase, hyphen-separated filename:
raw/attention-is-all-you-need.md
raw/startup-pricing-decision.md
raw/customer-interview-001.md
If you are using Obsidian Web Clipper, configure it to save pages into raw/.
In Claude, run:
/ingest raw/attention-is-all-you-need.md
Or ask naturally:
Please ingest raw/attention-is-all-you-need.md into the wiki.
The assistant should:
- read the raw file;
- summarize the key takeaways;
- ask what to emphasize;
- plan which pages to create or update;
- create a source page;
- create or update entity/concept pages;
- update `wiki/index.md`;
- append to `wiki/log.md`;
- report pages touched and contradictions.
This is where the second brain starts compounding.
Once you have a few sources, ask:
What do I currently believe about AI engineering careers?
The assistant should:
- read `wiki/index.md`;
- identify relevant pages;
- read those pages;
- answer with citations like `[[ai-engineering-careers]]`;
- say when the wiki has gaps;
- offer to file the answer as a synthesis.
The citation style matters. It makes every answer traceable.
When an answer is useful, say:
File this as a synthesis page.
The assistant should create something like:
wiki/syntheses/ai-engineering-career-thesis.md
This is the move that turns conversations into reusable knowledge.
Over time, your best syntheses become:
- essays;
- strategy docs;
- scripts;
- product decisions;
- research memos;
- personal operating principles.
Every week or after a large ingest batch, run:
/lint
The assistant should check for:
- contradictions;
- orphan pages;
- missing pages;
- stale claims;
- thin pages;
- single-source claims;
- schema drift;
- synthesis opportunities.
Do not skip this. The difference between a folder of notes and a knowledge system is maintenance.
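Two of these checks are mechanical enough to sketch in code. A minimal, hypothetical Python pass that flags pages with no citations and orphan pages nothing links to (a real `/lint` is the assistant reading pages against `CLAUDE.md`, not a script):

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([a-z0-9-]+)\]\]")


def lint(vault: Path) -> dict:
    """Flag wiki pages with no [[citations]] and pages no other page links to."""
    pages = {p.stem: p.read_text(encoding="utf-8")
             for p in (vault / "wiki").rglob("*.md")}
    # Every slug that appears inside any wikilink anywhere in the wiki.
    linked = {slug for text in pages.values() for slug in WIKILINK.findall(text)}
    return {
        "uncited": [s for s, t in pages.items() if not WIKILINK.findall(t)],
        "orphans": [s for s in pages
                    if s not in linked and s not in ("index", "log")],
    }
```

Even this crude version surfaces the two most common failure modes: claims with no sources, and pages the rest of the wiki has forgotten about.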
Use this rhythm:
Monday-Friday:
- Capture sources into raw/
- Ingest only the important ones
- Ask questions from the wiki
End of week:
- Run /lint
- Promote important stubs to draft/solid
- File 1-3 synthesis pages
- Review contradictions and gaps
You do not need to ingest everything. A second brain gets stronger from selective, high-quality inputs.
You save:
raw/customer-interview-acme.md
Then ask:
Ingest raw/customer-interview-acme.md. Focus on pain points, objections, buying triggers, and product language.
The assistant creates:
wiki/sources/customer-interview-acme.md
wiki/entities/acme-corp.md
wiki/concepts/customer-onboarding-friction.md
wiki/concepts/pricing-objection-patterns.md
It updates:
wiki/index.md
wiki/log.md
Later you ask:
What objections keep repeating across customer interviews?
The assistant reads relevant concept/source pages and answers with citations.
You ask:
What are the strongest arguments for our current pricing model?
A good answer should look like:
The strongest argument is that customers repeatedly describe the product as a revenue tool rather than a productivity tool, which supports value-based pricing [[customer-interview-acme]] [[pricing-objection-patterns]].
The wiki has a gap: there are only two customer interviews captured so far, so this claim is directionally useful but not yet strong enough to treat as settled.
Notice the behavior:
- it cites the wiki;
- it identifies confidence;
- it flags gaps;
- it avoids pretending the evidence is stronger than it is.
Use lowercase, hyphen-separated slugs:
good: attention-mechanism.md
bad: Attention Mechanism.md
bad: attention_mechanism.md
bad: attention mechanism.md
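The slug convention is easy to enforce in code. A minimal Python sketch (hypothetical helper, not part of the starter kit):

```python
import re
import unicodedata


def slugify(title: str) -> str:
    """Convert a page title to a lowercase, hyphen-separated slug."""
    # Strip accents, lowercase, then collapse any run of
    # non-alphanumeric characters into a single hyphen.
    text = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    text = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return text
```

For example, `slugify("Attention Mechanism")` yields `attention-mechanism`, matching the good filename above.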
Use wikilinks for internal references:

```markdown
[[attention-mechanism]]
[[openai]]
[[customer-onboarding-friction]]
```

Do not use markdown links for internal wiki references:

```markdown
[attention mechanism](wiki/concepts/attention-mechanism.md)
```

Wikilinks keep the vault Obsidian-native.
Use `status:` in frontmatter:

```yaml
status: stub
status: draft
status: solid
status: contested
```

Suggested meaning:

- `stub`: created from one source, incomplete.
- `draft`: useful but still thin.
- `solid`: supported by multiple sources or carefully developed.
- `contested`: contains unresolved contradictions.
This lets you see where the wiki is strong and where it is still fragile.
Good inputs:
- personal decisions;
- strategy calls;
- customer interviews;
- long-form articles;
- research papers;
- high-signal conversations;
- project postmortems;
- meeting notes;
- scripts and transcripts;
- books or book notes.
Bad inputs:
- random tweets with no lasting value;
- duplicate articles;
- low-quality summaries;
- anything you will not care about in 30 days.
Your raw layer should be curated, not hoarded.
AI with filesystem access is powerful. Keep the system narrow and explicit.
Recommended rules:
- Give assistants access only to the vault folder.
- Keep secrets out of the vault unless you understand the risk.
- Make raw sources append-only or immutable by convention.
- Ask before large rewrites.
- Use Git or backups before bulk operations.
- Review changes after major ingests.
- Never let the assistant silently resolve contradictions.
The most important principle:
The assistant can organize your knowledge, but it should not invent your knowledge.
If you want to publish your own version:
```sh
git init
git add .
git commit -m "Initial LLMWiki second brain playbook"
```

Then create a GitHub repo and push:

```sh
git branch -M main
git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPO.git
git push -u origin main
```

If your repo includes your personal vault, be careful. You probably want to publish only the starter kit, not your private `raw/` and `wiki/` content.
For a public template repo, include:
README.md
templates/
examples/
.gitignore
Do not include:
raw/private-notes.md
wiki/personal/
customer data
API keys
private transcripts
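A starter `.gitignore` along these lines keeps personal vault content out of a public repo. The patterns are illustrative; the leading slashes anchor them to the repo root so `templates/` and `examples/` stay tracked:

```gitignore
# Personal vault content (never publish)
/raw/
/wiki/

# Obsidian workspace state
.obsidian/

# OS noise
.DS_Store
```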
This repo uses:
.
|-- README.md
|-- templates/
| |-- CLAUDE.md
| |-- AGENTS.md
| |-- wiki/
| | |-- index.md
| | `-- log.md
| `-- .claude/
| `-- commands/
| |-- ingest.md
| |-- query.md
| `-- lint.md
|-- examples/
| |-- raw/
| | `-- example-source.md
| `-- wiki/
| |-- index.md
| |-- log.md
| |-- sources/
| | `-- example-source.md
| `-- concepts/
| `-- example-concept.md
|-- skills/
| `-- llmwiki-second-brain/
| |-- SKILL.md
| |-- scripts/
| | `-- create_llmwiki_vault.py
| `-- assets/
| |-- starter-vault/
| `-- examples/
`-- .gitignore
People can clone this repo, copy templates/ into a fresh Obsidian vault, and start ingesting.
Use this at the start of a new assistant conversation:
You have filesystem access to my LLMWiki vault.
At the start of this conversation:
1. Read CLAUDE.md.
2. Read wiki/index.md.
When I ask questions:
1. Find relevant pages through the index.
2. Read those pages before answering.
3. Cite wiki pages inline as [[slug]].
4. If the wiki has a gap, say so.
When I share something worth preserving:
1. Ask: "Ingest this?"
2. On yes, save the raw source if needed.
3. Update wiki/sources, wiki/entities, wiki/concepts, wiki/index.md, and wiki/log.md according to CLAUDE.md.
4. Flag contradictions. Do not silently overwrite them.
This prompt is intentionally boring. Boring rules make reliable systems.
Do not turn the wiki into a dumping ground. Capture broadly, ingest selectively.
If the answer should come from the wiki, force the assistant to read the wiki.
Every factual claim should point to a source page. If there is no citation, the claim is weak.
Without wiki/log.md, you lose the audit trail of how the wiki evolved.
Start with four folders and three commands. Add complexity only when the workflow proves it needs it.
Once the base system works, you can add:
- domain-specific page types;
- daily or weekly review commands;
- import scripts for transcripts;
- a private GitHub backup;
- a public/private split;
- local search tooling;
- scheduled lint reviews;
- synthesis templates for essays, strategy docs, or research reports.
But do not start there. The base loop is the product:
capture -> ingest -> query -> synthesize -> lint
- Obsidian download
- Obsidian Web Clipper
- Claude Desktop install guide
- MCP filesystem server
- OpenAI Codex
- GitHub repository docs
MIT. See LICENSE.
Obsidian is the file system.
Claude is the conversational librarian.
Codex is the structural maintainer.
GitHub is the distribution layer.
LLMWiki is the operating system that tells all of them how to work together.