LLM Context - Code Analysis for AI Assistants

Generate compact, semantically-rich code context optimized for LLM consumption with 99%+ faster incremental updates.

What Is This?

A tool that transforms raw source code into LLM-optimized context, enabling AI assistants like Claude, ChatGPT, and Copilot to understand your codebase with 80-95% fewer tokens.

Key Features:

  • 🔍 Function call graphs with side effect detection
  • 📊 Multi-level summaries (System → Domain → Module)
  • ⚡ Incremental updates (99%+ faster than full re-analysis)
  • 🔐 Hash-based change tracking (only re-analyze what changed)
  • 🎯 Query interface for instant lookups
  • 🤖 Claude Code skill included

Quick Start

Installation

# Install globally
npm install -g llm-context

# Or use directly without installing
npx llm-context analyze

Usage

# Analyze your codebase
cd ~/my-project
llm-context analyze

# Query results
llm-context stats
llm-context entry-points
llm-context side-effects

# Use with LLMs
# Share .llm-context/ directory with AI assistants

Why Use This?

Traditional Approach: Read Raw Files

LLM: "Help me debug this codebase"
[Reads 10 files × 1,000 tokens = 10,000 tokens]
Missing: Call graphs, side effects, architecture

LLM-Context Approach: Optimized Summaries

LLM: "Help me debug this codebase"
[Reads L0 + L1 + Graph = 500-2,000 tokens]
Includes: Complete call graph, side effects, entry points

Result: 80-95% token savings + better understanding

Features

1. Incremental Updates (99%+ Faster)

Only re-analyzes files that changed:

# Initial analysis (500 files)
llm-context analyze
# Time: ~30 seconds

# Edit 3 files and re-analyze
llm-context analyze
# Time: ~150ms (99.5% faster!)

Performance at scale:

  • 100 files: 2-5s → 50-200ms (96% faster)
  • 1,000 files: 30-60s → 200-500ms (99% faster)
  • 10,000 files: 5-15min → 500ms-2s (99.7% faster)

2. Function-Level Granularity (NEW!)

Track changes at the function level, not just file level:

# Edit 1 function in a file with 50 functions
# File-level: Re-analyze all 50 (500ms)
# Function-level: Re-analyze 1 (10ms) - 98% faster!

# Configure in llm-context.config.json
{
  "granularity": "function",
  "incremental": {
    "storeSource": true,        // Enable rename detection
    "detectRenames": true,
    "similarityThreshold": 0.85
  },
  "analysis": {
    "trackDependencies": true   // Enable impact analysis
  }
}

Advanced Features:

  • Rename Detection: Detects function renames via similarity matching (sketched below)
  • Impact Analysis: Shows which functions are affected by changes
  • Dependency Graphs: Entry points, leaf functions, cycle detection
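
A minimal sketch of how similarity-based rename detection can work. This is an illustration only, not the package's actual algorithm; tokenize and the Jaccard measure are assumptions:

// Hypothetical sketch: compare a removed function's stored source
// (hence "storeSource": true above) against a newly added function's
// source; treat it as a rename when similarity clears the threshold.
function tokenize(source) {
  return new Set(source.split(/\W+/).filter(Boolean));
}

function similarity(oldSource, newSource) {
  const a = tokenize(oldSource);
  const b = tokenize(newSource);
  const shared = [...a].filter(t => b.has(t)).length;
  return shared / new Set([...a, ...b]).size; // Jaccard index in [0, 1]
}

function isRename(oldSource, newSource, threshold = 0.85) {
  return similarity(oldSource, newSource) >= threshold;
}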

Results:

  • Large files (50+ functions): 98% faster when editing 1 function
  • Medium files (20-30 functions): 93% faster
  • Perfect for utility files, generated code, and focused edits

See FUNCTION_LEVEL_GRANULARITY.md for complete details.

3. Progressive Disclosure

Read only what you need:

  1. L0 (200 tokens) - System overview
  2. L1 (50-100 tokens/domain) - Domain boundaries
  3. L2 (20-50 tokens/module) - Module details
  4. Graph (variable) - Function specifics
  5. Source (as needed) - Targeted file reading
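
In code, this reading order looks roughly like the sketch below, assuming the file layout shown under "Generated Files":

const fs = require('fs');

// L0: always start with the ~200-token system overview
const l0 = fs.readFileSync('.llm-context/summaries/L0-system.md', 'utf8');

// L1: load domain summaries only if the question needs them
const domains = JSON.parse(
  fs.readFileSync('.llm-context/summaries/L1-domains.json', 'utf8')
);

// L2, the graph, and source files are loaded on demand, per module/function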

4. Side Effect Detection

Automatically identifies:

  • file_io - File operations
  • network - HTTP requests
  • database - DB queries
  • logging - Console output
  • dom - Browser DOM manipulation
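
One plausible way to classify side effects is to match callee names against known patterns. The sketch below is illustrative; the pattern table is an assumption, not the package's actual rule set:

// Hypothetical classifier: map a call expression's callee name
// to one of the side-effect categories listed above.
const EFFECT_PATTERNS = [
  { effect: 'file_io',  pattern: /^fs\./ },
  { effect: 'network',  pattern: /^(fetch|axios)\b/ },
  { effect: 'database', pattern: /\.(query|execute)$/ },
  { effect: 'logging',  pattern: /^console\./ },
  { effect: 'dom',      pattern: /^document\./ }
];

function classifyCall(calleeName) {
  const match = EFFECT_PATTERNS.find(p => p.pattern.test(calleeName));
  return match ? match.effect : null;
}

classifyCall('fs.readFile'); // 'file_io'
classifyCall('console.log'); // 'logging'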

5. Query Interface

# Find function
llm-context query find-function authenticateUser

# Who calls this?
llm-context query calls-to login

# Trace call path
llm-context query trace processPayment

# Functions with side effects
llm-context side-effects | grep network

Installation & Setup

Global Installation (Recommended)

npm install -g llm-context
cd ~/any-project
llm-context analyze

Project-Specific Installation

cd ~/my-project
npm install --save-dev llm-context
npx llm-context analyze

Initialize New Project

cd ~/my-project
llm-context init
# Installs dependencies and runs first analysis

CLI Commands

Analysis

llm-context analyze              # Auto-detect full/incremental
llm-context analyze:full         # Force full re-analysis
llm-context check-changes        # Preview changes without analyzing

Queries

llm-context stats                # Show statistics
llm-context entry-points         # Find entry points
llm-context side-effects         # Functions with side effects
llm-context query <cmd> [args]   # Custom queries

Utilities

llm-context init                 # Initialize in project
llm-context version              # Show version
llm-context help                 # Show help

Generated Files

.llm-context/
├── graph.jsonl           # Function call graph (JSONL format)
├── manifest.json         # Change tracking (MD5 hashes)
└── summaries/
    ├── L0-system.md      # System overview (~200 tokens)
    ├── L1-domains.json   # Domain summaries
    └── L2-modules.json   # Module summaries
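
manifest.json maps each analyzed file to a content hash. The exact schema is not documented here; the shape below is inferred from the incremental-update code under "How It Works" and may differ from what the tool actually writes:

{
  "files": {
    "src/auth.js": { "hash": "3f2a9c0d8b1e4f6a7c5d2e8f0a1b3c4d" }
  }
}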

Graph Format

Each line in graph.jsonl is a single JSON object describing one function (pretty-printed here for readability):

{
  "id": "authenticateUser",
  "file": "src/auth.js",
  "line": 45,
  "sig": "(credentials)",
  "async": true,
  "calls": ["validateCredentials", "createSession"],
  "effects": ["database", "logging"]
}
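
Because the graph is plain JSONL, it is easy to consume directly. A minimal sketch that loads the graph and answers "who calls X?" (the same question as llm-context query calls-to):

const fs = require('fs');

// One JSON object (function record) per line
const nodes = fs.readFileSync('.llm-context/graph.jsonl', 'utf8')
  .split('\n')
  .filter(Boolean)
  .map(line => JSON.parse(line));

// Build a reverse index: callee -> [callers]
const callers = {};
for (const node of nodes) {
  for (const callee of node.calls || []) {
    (callers[callee] ||= []).push(node.id);
  }
}

console.log(callers['authenticateUser']); // every function that calls it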

Claude Code Skill

This package includes a Claude Code skill that teaches Claude how to use the tool effectively.

Location: .claude/skills/analyzing-codebases/

How it works:

  1. Claude automatically detects the skill when analyzing codebases
  2. Knows to read L0 → L1 → L2 → Graph → Source (in order)
  3. Uses queries instead of grepping files
  4. Achieves 80-95% token savings

Usage Examples

Understanding a New Codebase

llm-context analyze
cat .llm-context/summaries/L0-system.md
llm-context stats
llm-context entry-points

With Claude:

You: "Analyze this codebase"

Claude: [Runs llm-context analyze, reads L0]
"This is a web application with 156 functions across 47 files.
 Key domains: auth (12 functions), users (23), api (34)
 Entry points: main(), handleRequest()

 Would you like me to explain a specific module?"

Daily Development

# Morning
git pull origin main
llm-context check-changes
llm-context analyze

# Edit code
vim src/feature.js

# Quick re-analysis (only feature.js)
llm-context analyze  # ~30ms

Debugging

llm-context query find-function buggyFunc
llm-context query calls-to buggyFunc
llm-context query trace buggyFunc
llm-context side-effects | grep buggy

Code Review

git checkout feature/new-auth
llm-context analyze
llm-context stats
llm-context side-effects | grep auth

Integration

Pre-commit Hook

# .git/hooks/pre-commit
#!/bin/bash
llm-context analyze
git add .llm-context/

GitHub Action (Recommended)

Use the included GitHub Action for automatic analysis in CI/CD:

# .github/workflows/llm-context.yml
name: Update LLM Context

on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Analyze codebase
        uses: ./.github/actions/llm-context-action
        with:
          upload-artifact: true

Features:

  • ✅ Automatic incremental updates (99% faster on subsequent runs)
  • ✅ Upload artifacts for download
  • ✅ Auto-commit option to keep context in repo
  • ✅ PR comments with stats

Options:

- uses: ./.github/actions/llm-context-action
  with:
    config: 'llm-context.config.json'  # Config file path
    commit-changes: true                # Commit .llm-context/ back
    commit-message: 'chore: update context [skip ci]'
    upload-artifact: true               # Upload as artifact
    artifact-name: 'llm-context'        # Artifact name

See GitHub Action README for complete documentation and examples.

Manual CI/CD

- name: Install llm-context
  run: npm install -g llm-context

- name: Analyze codebase
  run: llm-context analyze

- name: Upload artifacts
  uses: actions/upload-artifact@v4
  with:
    name: llm-context
    path: .llm-context/

Watch Mode (Coming Soon)

llm-context watch  # Auto-analyze on file changes

Performance Benchmarks

Token Efficiency

Approach             Tokens       Includes
Read 10 raw files    10,000       Syntax only
LLM-Context          500-2,000    Call graph + semantics
Savings              80-95%       Better understanding

Incremental Updates

Codebase Size    Files Changed    Full Analysis    Incremental    Savings
500 files        5                14s              140ms          99.0%
5,000 files      10               2.3min           280ms          99.8%
50,000 files     20               23min            560ms          99.96%
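
Savings here is 1 − incremental time / full-analysis time; for the 50,000-file row, 1 − 0.56s / 1,380s ≈ 99.96%.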

How It Works

1. SCIP Indexing (Optional)

scip-typescript index --infer-tsconfig

  • Uses TypeScript compiler for symbol extraction
  • Captures references and types
  • Falls back to Babel for JavaScript

2. Custom Analysis

// Parse with Babel
const { parse } = require('@babel/parser');
const traverse = require('@babel/traverse').default;

const ast = parse(sourceCode, { sourceType: 'unambiguous' });

// Extract functions, calls, side effects
traverse(ast, {
  FunctionDeclaration(path) {
    // Record the function's name, callees, and side effects here
  }
});

3. Graph Generation

// Combine SCIP + custom analysis
const fs = require('fs');

const node = {
  id: func.name,
  calls: extractCalls(func),
  effects: detectSideEffects(func)
};

// Write as JSONL (one function per line)
fs.appendFileSync('.llm-context/graph.jsonl', JSON.stringify(node) + '\n');

4. Incremental Updates

// Hash-based change detection
const crypto = require('crypto');

const currentHash = crypto.createHash('md5').update(fileContent).digest('hex');
const cachedHash = manifest.files[file]?.hash;

if (currentHash !== cachedHash) {
  // Re-analyze only this file and patch its entries in the graph
  analyze(file);
  updateGraph(file, results);
}

Roadmap

✅ Completed

  • Incremental updates with hash-based invalidation
  • CLI packaging (npm install -g)
  • Claude Code skill
  • Multi-level summaries
  • Side effect detection
  • Query interface
  • Function-level granularity (98% faster for large files)
  • Advanced features: Rename detection, dependency analysis, source storage
  • GitHub Action (CI/CD integration)

🚧 In Progress

  • Watch mode (auto-analyze on file changes)
  • Multi-language support (Python, Go, Rust, Java)

📋 Planned

  • VS Code extension

Requirements

  • Node.js ≥ 16.0.0
  • JavaScript or TypeScript project
  • Works on Linux, macOS, Windows

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

MIT - See LICENSE file

Acknowledgments

Built with:

  • @babel/parser - JavaScript parsing
  • @babel/traverse - AST traversal
  • @sourcegraph/scip-typescript - SCIP indexing
  • protobufjs - Protocol buffer parsing

Citation

If you use this tool in research, please cite:

LLM Context Tools - Code Analysis for AI Assistants
https://github.com/devame/llm-context-tools

Made with ❤️ for AI-assisted development
