A Claude Code harness that gives every coding session two helpers: a fast librarian that picks the rules that fit the current task, and a process keeper that blocks risky writes until you have approved a plan and tests.
On the live 276-rule production corpus (post Phase 1-5 public-rulebook expansion), the librarian returns ranked results in 0.590 ms at the 95th percentile. At the 10,000-rule synthetic scale it still holds 0.557 ms while cutting context tokens 726-fold versus loading the whole rulebook every turn.
See CHANGELOG.md for the v1.0.0 release notes and the full set of capabilities shipped.
Writ is published as a single-plugin marketplace in this repo.
Prerequisites
- Python 3.11 or newer
- Docker (Neo4j runs in a container)
- jq, curl, envsubst
Install
```bash
claude plugin marketplace add infinri/Writ
claude plugin install writ@writ
```

One-time bootstrap. Creates the venv at `${CLAUDE_PLUGIN_DATA:-$HOME/.cache/writ}/.venv`, brings up Neo4j, ingests the rule bible, and starts the FastAPI daemon:

```bash
bash $(claude plugin path writ)/scripts/bootstrap-plugin.sh
```

Restart Claude Code. Verify with:
```bash
curl http://localhost:8765/health
# {"status":"healthy"}
```

The plugin's hooks degrade gracefully until bootstrap completes. The SessionStart hook prints clear setup instructions on every fresh session where any prerequisite is missing, but the session itself is never blocked.
The standalone install path at ~/.claude/skills/writ/ remains supported; see "Quick start" below if you prefer that mode.
Three things break when you give a coding agent a large rulebook the obvious way (paste it all into the prompt):
- Token cost grows with the rulebook, not the work. At 80 rules: about 13,876 tokens of rule text every turn. At 10,000 rules: 1,174,142 tokens. Cache hit rates collapse, latency climbs, the bill scales with the rulebook.
- Relevance degrades. A model handed every rule treats none of them as load bearing. Specific rules drown in generic ones.
- Workflow discipline has nowhere to live. Static skill files can describe a process; they cannot enforce it. Telling the model to write tests first does not stop it from writing the implementation first.
Two layers, sharing a Neo4j-backed knowledge graph:
The knowledge layer (the librarian). A FastAPI service on localhost:8765 that runs a five-stage hybrid retrieval pipeline:
```
Query text
  │
  ▼
Stage 1: Domain filter           < 1 ms   domain-scoped corpus
Stage 2: BM25 keyword (Tantivy)  < 2 ms   top 50 candidates
Stage 3: ANN vector (hnswlib)    < 3 ms   top 10 candidates
Stage 4: Graph traversal         < 3 ms   adjacency cache for DEPENDS_ON,
                                          SUPPLEMENTS, CONFLICTS_WITH, ...
Stage 5: Two-pass ranking        < 1 ms   reciprocal rank + context budget

Total p95 budget: 10 ms
```
Each retriever covers a blind spot the others have. BM25 catches exact keyword matches. Vectors catch paraphrase ("SQL" versus "database query"). Graph traversal catches rules that share neither but are causally related. The two-pass ranker fuses everything with severity, confidence, and graph-proximity weights.
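The fusion step can be sketched with plain reciprocal-rank fusion (RRF). This is an illustrative minimum, not Writ's actual ranker: the rule IDs and the `k` constant below are made up, and the real two-pass ranker also folds in severity, confidence, and graph-proximity weights.

```python
# Illustrative reciprocal-rank fusion: each retriever's list contributes
# 1/(k + rank) per rule; a rule ranked well by several retrievers wins.
# Rule IDs and k=60 are made up for the example.
from collections import defaultdict

def rrf(ranked_lists, k=60):
    """Fuse several ranked lists of rule IDs into one ordering."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, rule_id in enumerate(ranking, start=1):
            scores[rule_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25   = ["SEC-INJ-SQL-001", "API-REST-002", "PERF-N1-003"]  # keyword hits
vector = ["SEC-INJ-SQL-001", "PERF-N1-003"]                  # paraphrase hits
graph  = ["API-REST-002", "SEC-INJ-SQL-001"]                 # edge-related hits
print(rrf([bm25, vector, graph]))
# -> ['SEC-INJ-SQL-001', 'API-REST-002', 'PERF-N1-003']
```

The rule every retriever agrees on surfaces first even though no single retriever is authoritative, which is the point of fusing complementary blind spots.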
The enforcement layer (the process keeper). 30 hook scripts under .claude/hooks/, all wired into Claude Code via templates/settings.json, a session state machine in bin/lib/writ-session.py, slash commands, and 6 sub-agent role files. The state machine owns mode, phase, and gate state; hooks are thin clients that delegate to it.
Mandatory rules (the architectural invariant). Rules with mandatory: true (30 in the live corpus, spanning ENF-* enforcement rules and SEC-/PERF-/SCALE-* invariants from the public rulebook) are excluded from the retrieval pipeline at index build time. They are loaded out of band by hooks with their own 5,000 token budget cap. No change to ranking weights, embedding model, BM25 tuning, or graph traversal can cause an enforcement rule to disappear from agent context.
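The invariant amounts to a split performed before any index exists. The field names below are assumptions about the rule schema, not Writ's actual data model:

```python
# Sketch of the index-build split described above: mandatory rules never
# enter the retrieval indexes, so no ranking or tuning change can drop them.
# The dict shape ("id", "mandatory", "text") is an illustrative assumption.
rules = [
    {"id": "ENF-GATE-PLAN", "mandatory": True, "text": "..."},
    {"id": "SEC-INJ-SQL-001", "mandatory": False, "text": "..."},
]

mandatory   = [r for r in rules if r["mandatory"]]      # loaded out of band by hooks
retrievable = [r for r in rules if not r["mandatory"]]  # fed to BM25/ANN/graph indexes
```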
```bash
git clone <writ-repo> ~/.claude/skills/writ
cd ~/.claude/skills/writ
bash scripts/bootstrap.sh
```

The bootstrap script handles everything: prerequisite checks (Python 3.11+, Docker, git, envsubst), virtualenv, dependency install, harness config rendered into `~/.claude/`, rule and agent symlinks, Neo4j container via Docker Compose, rule corpus ingestion, and Writ daemon startup. Idempotent.
Verify:
```bash
writ status
# {"status":"healthy","rule_count":276,"mandatory_count":30,"index_state":"warm",...}

writ query "controller contains SQL query"
# Mode: full | Candidates: 14 | Latency: 0.3ms
# 1. [0.984] SEC-INJ-SQL-001 Parameterized queries only...
```

Open Claude Code in any project. Type a prompt. You should see a `[Writ: ...]` status line and a `--- WRIT RULES ---` block with the rules that apply to what you are doing.
When Claude is doing read-only work (asking questions, debugging, reviewing), Writ injects relevant rules and stays out of the way. When Claude is in Work mode and tries to write code before you have approved a plan, the write is denied with a clear reason like:
```
[ENF-GATE-PLAN] Write blocked. Approve plan.md first.
```
You write the plan, you say "approved," the gate opens. The next gate (test skeletons) blocks code that has no tests pointing at it. Same pattern: write the tests, approve, gate opens. After both gates clear, Claude writes the implementation freely.
Approval cannot be self-served. The slash command /writ-approve requires a one-time token written to /tmp/writ-gate-token-${SESSION_ID} only when the user actually types an approval phrase. If Claude tries to advance the gate via raw bash, the token is missing and the call is denied (logged as agent_self_approval_blocked).
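The token mechanics can be sketched as follows. The path and the `agent_self_approval_blocked` log string come from the description above; everything else (function name, consume-on-use behavior) is an assumption:

```python
# Hypothetical sketch of the one-time-token gate check behind /writ-approve.
# The token file only exists if the user actually typed an approval phrase,
# so an agent advancing the gate via raw bash finds nothing and is denied.
import os

def approve_gate(session_id: str) -> bool:
    token_path = f"/tmp/writ-gate-token-{session_id}"
    if not os.path.exists(token_path):
        # No user-minted token: deny and log the self-approval attempt.
        print("agent_self_approval_blocked")
        return False
    os.remove(token_path)  # one-time use: consume the token on success
    return True
```

The consume-on-use step matters: a stale token from an earlier approval cannot be replayed to open a later gate.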
| Mode | Purpose | Code generation |
|---|---|---|
| Conversation | Discussion, brainstorming, questions | No |
| Debug | Investigating a specific problem (read only) | No |
| Review | Evaluating code against rules (read only) | No |
| Work | Building or modifying code | Yes (with gates) |
In Work mode, two gates apply:
- `phase-a` validates `plan.md` against four required sections (`## Files`, `## Analysis`, `## Rules Applied`, `## Capabilities`) and verifies every cited rule ID against rules actually loaded in the session.
- `test-skeletons` requires at least one test file with real assertions before production code is written.
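A minimal check in the spirit of the phase-a gate might look like this. The section list comes from the gate description; the real validator is richer (it also cross-checks cited rule IDs against loaded rules):

```python
# Toy plan.md section check: report which required headers are missing.
# An empty result means the structural part of the gate would pass.
REQUIRED = ["## Files", "## Analysis", "## Rules Applied", "## Capabilities"]

def validate_plan(text: str) -> list[str]:
    """Return the list of missing section headers (empty means valid)."""
    return [header for header in REQUIRED if header not in text]

plan = "## Files\n## Analysis\n## Rules Applied\n"
print(validate_plan(plan))  # -> ['## Capabilities']
```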
Live system measurement (2026-05-10, 276 rule production corpus post Phase 1-5 public-rulebook expansion, ONNX runtime, warm indexes; 500 samples per stage on 10 representative queries):
| Stage | Median | p95 | Budget | Headroom at p95 |
|---|---|---|---|---|
| End to end | 0.338 ms | 0.590 ms | 10.0 ms | 17x |
Cold start (median of 3 runs): 1.72 s (pre-expansion baseline; rebuild cost scales with corpus size).
Synthetic scale curve (2026-04-13, from SCALE_BENCHMARK_RESULTS.md):
| Corpus | E2E p95 | Tokens stuffed | Tokens retrieved | Reduction |
|---|---|---|---|---|
| 80 rules | 0.278 ms | 13,876 | 3,155 | 4.4x |
| 500 rules | 0.359 ms | 63,003 | 1,600 | 39.4x |
| 1,000 rules | 0.399 ms | 121,473 | 1,602 | 75.8x |
| 10,000 rules | 0.557 ms | 1,174,142 | 1,617 | 726.1x |
Quality (against the Phase 6 ground-truth corpus, 165 queries: the original 83 + 82 new queries covering the public-rulebook expansion):
| Metric | Threshold | Actual |
|---|---|---|
| MRR at 5 (ambiguous queries, n=19) | >= 0.45 | 0.4886 |
| Hit rate (all 165 queries) | >= 0.75 | 0.7636 |
| Methodology MRR at 5 (n=40, signed off corpus) | >= 0.78 | 0.8583 |
| Methodology hit rate | >= 0.90 | 1.0000 |
| ONNX vs PyTorch ranking stability | identical | 0/83 differ |
The ambiguous-set MRR and hit-rate floors were retuned downward during the Phase 1-5 expansion: the corpus grew 3.8x (72 to 276 rules) while the ambiguous-set query count remained constant at 19. Methodology retrieval is unaffected (a separate, signed-off corpus).
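For readers unfamiliar with the metric, MRR at 5 is the mean reciprocal rank of the first relevant result within the top five. A toy computation, with made-up query data:

```python
# Illustrative MRR@5: for each query, score 1/rank of the first relevant
# rule in the top five (0 if absent), then average across queries.
def mrr_at_5(results, relevant):
    """results: ranked ID lists per query; relevant: gold ID per query."""
    total = 0.0
    for ranking, gold in zip(results, relevant):
        for rank, rule_id in enumerate(ranking[:5], start=1):
            if rule_id == gold:
                total += 1.0 / rank
                break
    return total / len(results)

# Two queries, gold rule ranked second both times: (1/2 + 1/2) / 2 = 0.5
print(mrr_at_5([["A", "B"], ["C", "B"]], ["B", "B"]))  # -> 0.5
```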
Full numbers in SCALE_BENCHMARK_RESULTS.md. Architectural detail in HANDBOOK.md.
| Command | What it does |
|---|---|
| `writ serve [--host --port]` | Start the FastAPI service via uvicorn (default localhost:8765). |
| `writ status` | Health check via the HTTP service. |
| `writ query <text> [--domain --budget]` | Run a retrieval query. |
| `writ import-markdown [path]` | Ingest rules from `bible/` Markdown into Neo4j. Auto-exports back on success. |
| `writ export [output]` | Regenerate Markdown from the graph (overwrites the output dir). |
| `writ add` | Interactive add-a-new-rule wizard. Schema validates, redundancy checks, suggests edges, writes. |
| `writ edit <rule_id>` | Edit an existing rule with current values as defaults. |
| `writ validate [--review-confidence --benchmark]` | Run integrity checks (conflicts, orphans, stale, redundant, frequency). |
| `writ compress` | Cluster rules (HDBSCAN + k-means) into Abstraction nodes for summary mode. |
| `writ propose ...` | Submit an AI-authored rule through the structural gate. |
| `writ review [rule_id] [--promote --reject --downweight --stats]` | Triage AI provisional rules. |
| `writ feedback <rule_id> <positive\|negative>` | Record a feedback signal. |
| `writ migrate` | Run `scripts/migrate.py` (initial bootstrap). |
| `writ analyze-friction [flags]` | Analyze `workflow-friction.log`: rule effectiveness, skill usage, playbook compliance, graduation candidates, trim candidates, quality-judge false positives. |
| `writ audit-session <session_id>` | Per-session timeline and summary. |
| `writ role-prompt <name>` | Print the canonical SubagentRole prompt template from the graph. |
Common issues and fixes:
- Docker daemon not reachable: start Docker Desktop, or `sudo systemctl start docker` on Linux, then re-run `bootstrap.sh`.
- python3 version is 3.9; need >= 3.11: install a newer Python. `pyenv` is a clean way to manage versions without touching system Python.
- Port 7687 already in use: another Neo4j instance is running. Either stop it (`docker stop <container>`) or change the `ports:` mapping in `docker-compose.yml`.
- Neo4j did not become reachable within 60 s: check the logs (`docker compose logs neo4j`). Common cause: insufficient memory allocated to Docker Desktop (Neo4j needs ~1 GB).
- Daemon did not become healthy within 10 s: check `/tmp/writ-server.log`. Usually an import error; re-run `pip install -e .` from the skill directory with the venv activated.
- Default Neo4j credentials (`neo4j/writdevpass`): a development default. For any non-local use, change `NEO4J_AUTH` in `docker-compose.yml` and the matching `[neo4j]` section in `writ.toml`.
All endpoints under http://localhost:8765. JSON bodies; no auth (binds localhost only). Total: 36 endpoints (11 top-level plus 25 under /session/{id}/).
| Method | Path | Purpose |
|---|---|---|
| POST | `/query` | Run the 5-stage pipeline. Body: `{query, domain?, budget_tokens?, exclude_rule_ids?, prefer_rule_ids?, node_types?}`. Returns `{rules, mode, total_candidates, latency_ms}`. |
| POST | `/analyze` | Run the analyzer (pattern plus optional LLM escalation) on a code snippet. |
| GET | `/rule/{rule_id}` | Fetch a rule, optionally with `?include_graph=true` for one-hop neighbors. |
| POST | `/propose` | Submit an AI-authored rule through the 5-check structural gate. |
| POST | `/feedback` | Record a positive or negative signal for a rule. |
| POST | `/conflicts` | Check CONFLICTS_WITH edges among a list of rule IDs. |
| GET | `/health` | Real Neo4j round-trip; reports rule count, mandatory count, index state, startup time. |
| GET | `/always-on` | Returns mandatory rules plus ForbiddenResponse nodes plus always-on Skills/Playbooks. Mode-scoped. |
| GET | `/subagent-role/{name}` | Resolve `writ-explorer` to its role record. |
| POST | `/pre-write-check` | Consolidated gate plus final-gate plus RAG check. |
| GET | `/dashboard` | Server-rendered HTML view of friction-log analytics. |
| GET/POST | `/session/{sid}/...` | 25 routes covering mode, phase, gates, coverage, escalation, quality judgment, playbook progress, compaction, verification evidence. |
Errors come back as HTTP 200 with {"error": "..."} for logical failures, 422 for Pydantic validation, 5xx for unhandled DB exceptions. Clients should check the error key, not just status codes.
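A minimal client following that convention might look like this. Only the `/query` body fields come from the table above; the rest is an illustrative stdlib sketch, not Writ's official client:

```python
# Sketch of a /query client that honors the error convention: logical
# failures arrive as HTTP 200 with an "error" key, so inspect the body.
import json
import urllib.request

def unwrap(body: dict):
    """Raise on a logical failure; otherwise return the ranked rules."""
    if "error" in body:  # logical failure despite HTTP 200
        raise RuntimeError(body["error"])
    return body["rules"]

def query_writ(text: str, host: str = "localhost", port: int = 8765):
    req = urllib.request.Request(
        f"http://{host}:{port}/query",
        data=json.dumps({"query": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return unwrap(json.loads(resp.read()))
```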
| File | Purpose |
|---|---|
| `writ.toml` | Service configuration: Neo4j credentials, ranking weights, embedding model, context budgets, gate thresholds. |
| `pyproject.toml` | Package metadata. Production deps: fastapi, uvicorn, neo4j, tantivy, sentence-transformers, hnswlib, pydantic, typer, rich, httpx. Entry point: `writ = "writ.cli:app"`. |
| `.claude-plugin/plugin.json` | Plugin manifest. `defaultEnabled: true`. Lifecycle Init invokes `scripts/ensure-server.sh`; Shutdown invokes `scripts/stop-server.sh`. |
| `docker-compose.yml` | Single `neo4j:5` service on ports 7474 and 7687, health-checked via cypher-shell. |
| `templates/settings.json` | Canonical hook wiring (30 hooks). |
| `bin/lib/checklists.json` | Phase exit criteria. |
| `bin/lib/gate-categories.json` | File classification glob patterns plus framework detection. |
| `writ/shared/budget.json` | Single source of truth for budget constants (default 8000, summary cost 40, standard 120, full 200, always_on_cap 5000). |
Environment variables read by hooks: WRIT_HOST (default localhost), WRIT_PORT (default 8765), WRIT_CACHE_DIR (default tempfile.gettempdir()), WRIT_FRICTION_LOG, WRIT_HOOK_LOG, WRIT_DEBUG_LOG. Neo4j credentials are read from writ.toml only; there is no WRIT_NEO4J_* override.
90 test files, 1,192 test functions. The end-of-suite hook in tests/conftest.py shells out to scripts/migrate.py --methodology-dir bible/methodology to restore the production graph after tests run.
```bash
make test   # pytest tests/ -x -q
make bench  # pytest benchmarks/bench_targets.py -x -q
make check  # both
```

Pre-commit: `make bench` runs at pre-push. No formatting or lint hooks are configured.
The benchmark suite has four files:
- `benchmarks/bench_targets.py` (12 contractual targets, all pass/fail).
- `benchmarks/run_benchmarks.py` (Neo4j traversal scale at 1K and 10K nodes; wipes the graph).
- `benchmarks/scale_benchmark.py` (the synthetic 80/500/1K/10K scale-curve generator; restores only Rule nodes).
- `benchmarks/methodology_bench.py` (methodology retrieval against the curated 40-query Phase 0 corpus; read-only, no Neo4j changes).
Released as v1.0.0 on 2026-05-10. All five retrieval stages with budget headroom at every level. Mode and gate enforcement. AI rule proposal with the 5-check structural gate. Frequency-driven graduation logic. Sub-agent isolation (is_subagent) and orchestrator suppression (is_orchestrator). ONNX-optimized embedding inference verified identical to PyTorch on every test query. HNSW persistence with corpus-hash invalidation. 1,441 tests. Friction log analytics with a dashboard.
Public out-of-the-box rulebook seeded. 198 new universal rules across Security, Clean Code, DRY, SOLID, Architecture, Testing, Error Handling, Performance, Scaling, API Design, Process, and Documentation, plus 19 new mandatory rules each backed by a cross-language regex analyzer in bin/run-analysis.sh. See out-of-the-box-rules.md for the canonical rule list.
- HANDBOOK.md for the full architecture, including the detailed graph schema, gate mechanics, and the discrepancy catalog.
- SCALE_BENCHMARK_RESULTS.md for the full live measurement plus the synthetic scale curve.
- CONTRIBUTING.md for the rule-authoring workflow, monthly review cadence, and AI proposal triage.
- PROMOTIONAL-BRIEF.md for the pitch-oriented version of this document.
The standalone install at ~/.claude/skills/writ/ will keep working; the plugin path is purely additive. If you'd rather move to the plugin path:
- Stop the existing daemon: `bash ~/.claude/skills/writ/scripts/stop-server.sh`
- Remove the symlinks the standalone bootstrap created: `rm -f ~/.claude/rules/writ-*.md ~/.claude/agents/writ-*.md`
- Remove the rendered hook block from `~/.claude/settings.json` (the `permissions.allow` and `hooks` sections that reference `$HOME/.claude/skills/writ/.claude/hooks/`). Back up the file first.
- Install the plugin as described in "Install as a Claude Code plugin" above. The Neo4j Docker volume (`writ-neo4j-data`) is shared between modes, so the rule corpus survives the switch.
The standalone-skill checkout itself can stay on disk; nothing in the plugin install path looks at it.
License: MIT. Authored by Lucio Saldivar.