Generate compact, semantically rich code context optimized for LLM consumption, with 99%+ faster incremental updates.
A tool that transforms raw source code into LLM-optimized context, enabling AI assistants like Claude, ChatGPT, and Copilot to understand your codebase with 80-95% fewer tokens.
Key Features:
- 🔍 Function call graphs with side effect detection
- 📊 Multi-level summaries (System → Domain → Module)
- ⚡ Incremental updates (99%+ faster than full re-analysis)
- 🔐 Hash-based change tracking (only re-analyze what changed)
- 🎯 Query interface for instant lookups
- 🤖 Claude Code skill included
```bash
# Install globally
npm install -g llm-context

# Or use directly without installing
npx llm-context analyze

# Analyze your codebase
cd ~/my-project
llm-context analyze

# Query results
llm-context stats
llm-context entry-points
llm-context side-effects

# Use with LLMs
# Share .llm-context/ directory with AI assistants
```

Without LLM-Context:

```
LLM: "Help me debug this codebase"
[Reads 10 files × 1,000 tokens = 10,000 tokens]
Missing: Call graphs, side effects, architecture
```

With LLM-Context:

```
LLM: "Help me debug this codebase"
[Reads L0 + L1 + Graph = 500-2,000 tokens]
Includes: Complete call graph, side effects, entry points
Result: 80-95% token savings + better understanding
```
Only re-analyzes files that changed:
```bash
# Initial analysis (500 files)
llm-context analyze
# Time: ~30 seconds

# Edit 3 files and re-analyze
llm-context analyze
# Time: ~150ms (99.5% faster!)
```

Performance at scale:
- 100 files: 2-5s → 50-200ms (96% faster)
- 1,000 files: 30-60s → 200-500ms (99% faster)
- 10,000 files: 5-15min → 500ms-2s (99.7% faster)
Track changes at the function level, not just file level:
```bash
# Edit 1 function in a file with 50 functions
# File-level:     re-analyze all 50 (~500ms)
# Function-level: re-analyze 1 (~10ms) - 98% faster!
```
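As an illustration of why this is cheap, a minimal sketch (assuming `@babel/parser` and Node's `crypto`; the helper names are hypothetical, not llm-context's internals) can hash each function's source span so an edit invalidates only that one entry:

```js
// Minimal sketch of per-function hashing (hypothetical helpers,
// not llm-context's actual internals).
const crypto = require('crypto');
const { parse } = require('@babel/parser');

function hashFunctions(sourceCode) {
  const ast = parse(sourceCode, { sourceType: 'module' });
  const hashes = {};
  for (const node of ast.program.body) {
    // Top-level function declarations only, for brevity.
    if (node.type === 'FunctionDeclaration' && node.id) {
      const text = sourceCode.slice(node.start, node.end);
      hashes[node.id.name] = crypto.createHash('md5').update(text).digest('hex');
    }
  }
  return hashes;
}

// Diff current hashes against cached ones to find what must be re-analyzed.
function changedFunctions(current, cached) {
  return Object.keys(current).filter((name) => current[name] !== cached[name]);
}
```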
Configure in `llm-context.config.json`:

```jsonc
{
  "granularity": "function",
  "incremental": {
    "storeSource": true,          // enables rename detection
    "detectRenames": true,
    "similarityThreshold": 0.85
  },
  "analysis": {
    "trackDependencies": true     // enables impact analysis
  }
}
```

Advanced Features:
- Rename Detection: Detects function renames via similarity matching (see the sketch after this list)
- Impact Analysis: Shows which functions are affected by changes
- Dependency Graphs: Entry points, leaf functions, cycle detection
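For illustration, here is a minimal sketch of similarity-based rename detection, assuming a token-set (Jaccard) comparison and the `similarityThreshold` from the config above; the tool's actual algorithm may differ:

```js
// Illustrative rename detection via token-set (Jaccard) similarity.
// A removed function whose source closely matches a newly added one
// is treated as a rename rather than a delete plus an add.
function tokenize(source) {
  return new Set(source.match(/\w+/g) || []);
}

function similarity(a, b) {
  const ta = tokenize(a);
  const tb = tokenize(b);
  let shared = 0;
  for (const token of ta) if (tb.has(token)) shared++;
  return shared / (ta.size + tb.size - shared); // Jaccard index
}

// 0.85 mirrors the similarityThreshold default shown above.
function isRename(removedSource, addedSource, threshold = 0.85) {
  return similarity(removedSource, addedSource) >= threshold;
}
```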
Results:
- Large files (50+ functions): 98% faster when editing 1 function
- Medium files (20-30 functions): 93% faster
- Perfect for utility files, generated code, and focused edits
See FUNCTION_LEVEL_GRANULARITY.md for complete details.
Read only what you need (see the sketch after this list):
- L0 (200 tokens) - System overview
- L1 (50-100 tokens/domain) - Domain boundaries
- L2 (20-50 tokens/module) - Module details
- Graph (variable) - Function specifics
- Source (as needed) - Targeted file reading
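As a sketch of that layered reading order, an assistant (or script) can load the output files described later in this README one level at a time; this is illustrative, not the tool's API:

```js
// Illustrative progressive context loading: start with the cheapest
// summary and descend only when the question needs more detail.
const fs = require('fs');
const BASE = '.llm-context';

// Step 1: system overview (~200 tokens) is often enough to orient.
const overview = fs.readFileSync(`${BASE}/summaries/L0-system.md`, 'utf8');

// Step 2: domain summaries, only if the question names a subsystem.
const domains = JSON.parse(fs.readFileSync(`${BASE}/summaries/L1-domains.json`, 'utf8'));

// Step 3: module details, then the function graph, only as needed.
const modules = JSON.parse(fs.readFileSync(`${BASE}/summaries/L2-modules.json`, 'utf8'));
const graph = fs.readFileSync(`${BASE}/graph.jsonl`, 'utf8')
  .split('\n').filter(Boolean).map(JSON.parse);
```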
Automatically identifies the following side-effect categories (see the sketch below):
- `file_io` - File operations
- `network` - HTTP requests
- `database` - DB queries
- `logging` - Console output
- `dom` - Browser DOM manipulation
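For illustration, a call site can be mapped to these categories by pattern-matching the callee name; the patterns below are assumptions for the sketch, not the tool's exact rules:

```js
// Illustrative mapping from callee names to side-effect categories.
const EFFECT_PATTERNS = [
  { effect: 'file_io',  test: (name) => /^fs\./.test(name) },
  { effect: 'network',  test: (name) => name === 'fetch' || /^(axios|http)\./.test(name) },
  { effect: 'database', test: (name) => /^(db|pool)\.(query|execute)$/.test(name) },
  { effect: 'logging',  test: (name) => /^console\./.test(name) },
  { effect: 'dom',      test: (name) => /^document\./.test(name) },
];

function classifyCall(calleeName) {
  const match = EFFECT_PATTERNS.find((p) => p.test(calleeName));
  return match ? match.effect : null; // null = pure / unknown
}
```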
```bash
# Find function
llm-context query find-function authenticateUser

# Who calls this?
llm-context query calls-to login

# Trace call path
llm-context query trace processPayment

# Functions with side effects
llm-context side-effects | grep network
```

Option 1: Global install
```bash
npm install -g llm-context
cd ~/any-project
llm-context analyze
```

Option 2: Local install
```bash
cd ~/my-project
npm install --save-dev llm-context
npx llm-context analyze
```

Option 3: Project init
```bash
cd ~/my-project
llm-context init
# Installs dependencies and runs first analysis
```

Commands:
```bash
llm-context analyze             # Auto-detect full/incremental
llm-context analyze:full        # Force full re-analysis
llm-context check-changes      # Preview changes without analyzing

llm-context stats               # Show statistics
llm-context entry-points      # Find entry points
llm-context side-effects      # Functions with side effects
llm-context query <cmd> [args] # Custom queries

llm-context init                # Initialize in project
llm-context version             # Show version
llm-context help                # Show help
```

Output structure:
```
.llm-context/
├── graph.jsonl          # Function call graph (JSONL format)
├── manifest.json        # Change tracking (MD5 hashes)
└── summaries/
    ├── L0-system.md     # System overview (~200 tokens)
    ├── L1-domains.json  # Domain summaries
    └── L2-modules.json  # Module summaries
```
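For illustration, `manifest.json` pairs each file with its content hash; the exact schema shown here is an assumption, based on the `files` and `hash` fields referenced in the change-detection snippet later in this README:

```json
{
  "files": {
    "src/auth.js": { "hash": "9b74c9897bac770ffc029102a200c5de" },
    "src/api/users.js": { "hash": "3f2b9bd52bb02a58326c3ab1ffa952f5" }
  }
}
```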
Each line in graph.jsonl:
```json
{
  "id": "authenticateUser",
  "file": "src/auth.js",
  "line": 45,
  "sig": "(credentials)",
  "async": true,
  "calls": ["validateCredentials", "createSession"],
  "effects": ["database", "logging"]
}
```
This package includes a Claude Code skill that teaches Claude how to use the tool effectively.
Location: .claude/skills/analyzing-codebases/
How it works:
- Claude automatically detects the skill when analyzing codebases
- Knows to read L0 → L1 → L2 → Graph → Source (in order)
- Uses queries instead of grepping files
- Achieves 80-95% token savings
```bash
llm-context analyze
cat .llm-context/summaries/L0-system.md
llm-context stats
llm-context entry-points
```

With Claude:

```
You: "Analyze this codebase"

Claude: [Runs llm-context analyze, reads L0]
        "This is a web application with 156 functions across 47 files.
         Key domains: auth (12 functions), users (23), api (34)
         Entry points: main(), handleRequest()
         Would you like me to explain a specific module?"
```
Daily workflow:

```bash
# Morning
git pull origin main
llm-context check-changes
llm-context analyze

# Edit code
vim src/feature.js

# Quick re-analysis (only feature.js)
llm-context analyze  # ~30ms
```

Debugging:

```bash
llm-context query find-function buggyFunc
llm-context query calls-to buggyFunc
llm-context query trace buggyFunc
llm-context side-effects | grep buggy
```

Reviewing a branch:

```bash
git checkout feature/new-auth
llm-context analyze
llm-context stats
llm-context side-effects | grep auth
```

Pre-commit hook:

```bash
#!/bin/bash
# .git/hooks/pre-commit
llm-context analyze
git add .llm-context/
```

Use the included GitHub Action for automatic analysis in CI/CD:
```yaml
# .github/workflows/llm-context.yml
name: Update LLM Context

on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Analyze codebase
        uses: ./.github/actions/llm-context-action
        with:
          upload-artifact: true
```

Features:
- ✅ Automatic incremental updates (99% faster on subsequent runs)
- ✅ Upload artifacts for download
- ✅ Auto-commit option to keep context in repo
- ✅ PR comments with stats
Options:

```yaml
- uses: ./.github/actions/llm-context-action
  with:
    config: 'llm-context.config.json'                 # Config file path
    commit-changes: true                              # Commit .llm-context/ back
    commit-message: 'chore: update context [skip ci]'
    upload-artifact: true                             # Upload as artifact
    artifact-name: 'llm-context'                      # Artifact name
```

See the GitHub Action README for complete documentation and examples.
Or add the steps to a workflow manually:

```yaml
- name: Install llm-context
  run: npm install -g llm-context
- name: Analyze codebase
  run: llm-context analyze
- name: Upload artifacts
  uses: actions/upload-artifact@v4
  with:
    name: llm-context
    path: .llm-context/
```

Watch mode:

```bash
llm-context watch  # Auto-analyze on file changes
```

| Approach | Tokens | Includes |
|---|---|---|
| Read 10 raw files | 10,000 | Syntax only |
| LLM-Context | 500-2,000 | Call graph + semantics |
| Savings | 80-95% | Better understanding |
| Codebase Size | Files Changed | Full Analysis | Incremental | Savings |
|---|---|---|---|---|
| 500 files | 5 | 14s | 140ms | 99.0% |
| 5,000 files | 10 | 2.3min | 280ms | 99.8% |
| 50,000 files | 20 | 23min | 560ms | 99.96% |
```bash
scip-typescript index --infer-tsconfig
```

- Uses the TypeScript compiler for symbol extraction
- Captures references and types
- Falls back to Babel for JavaScript
```js
// Parse with Babel
const { parse } = require('@babel/parser');
const traverse = require('@babel/traverse').default;

const ast = parse(sourceCode, { sourceType: 'module' });

// Extract functions, calls, side effects
traverse(ast, {
  FunctionDeclaration(path) {
    // Analyze each function declaration
  }
});
```

```js
// Combine SCIP + custom analysis into one graph node
const node = {
  id: func.name,
  calls: extractCalls(func),
  effects: detectSideEffects(func)
};

// Write as JSONL (one function per line)
fs.appendFileSync('.llm-context/graph.jsonl', JSON.stringify(node) + '\n');
```

```js
// Hash-based change detection
const crypto = require('crypto');

const currentHash = crypto.createHash('md5').update(fileContent).digest('hex');
const cachedHash = manifest.files[file].hash;

if (currentHash !== cachedHash) {
  // Re-analyze only this file and patch its entries in the graph
  analyze(file);
  updateGraph(file, results);
}
```

- Incremental updates with hash-based invalidation
- CLI packaging (`npm install -g`)
- Claude Code skill
- Multi-level summaries
- Side effect detection
- Query interface
- Function-level granularity (98% faster for large files)
- Advanced features: Rename detection, dependency analysis, source storage
- GitHub Action (CI/CD integration)
- Watch mode (auto-analyze on file changes)
- Multi-language support (Python, Go, Rust, Java)
- VS Code extension
- Installation Guide - Detailed setup instructions
- Incremental Updates - How incremental updates work
- Function-Level Granularity - Track changes at function level
- GitHub Action - CI/CD integration (NEW!)
- Demo - Live demonstrations
- Performance - Benchmarks and projections
- Proof of Concept - Original research
- Node.js ≥ 16.0.0
- JavaScript or TypeScript project
- Works on Linux, macOS, Windows
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT - See LICENSE file
- GitHub: https://github.com/devame/llm-context-tools
- Issues: https://github.com/devame/llm-context-tools/issues
- npm: https://www.npmjs.com/package/llm-context
- SCIP - Code Intelligence Protocol
- Babel - JavaScript parser
- Claude Code - AI pair programming
Built with:
- `@babel/parser` - JavaScript parsing
- `@babel/traverse` - AST traversal
- `@sourcegraph/scip-typescript` - SCIP indexing
- `protobufjs` - Protocol buffer parsing
If you use this tool in research, please cite:
LLM Context Tools - Code Analysis for AI Assistants
https://github.com/devame/llm-context-tools
Made with ❤️ for AI-assisted development