A tool for searching and querying Obsidian notes using local language models.
Veruca is a command-line tool that enables you to:
- Search your Obsidian notes using natural language queries
- Filter results based on metadata and tags
- Process and index your notes locally
- Maintain privacy by running entirely on your machine
First, you need to install Ollama, which runs the language models locally:
- Visit ollama.com/download
- Download and install Ollama for your system
- After installation, run:
```bash
ollama pull nomic-embed-text   # For embeddings
ollama pull llama2             # For query responses
```
```bash
# Clone the repository
git clone https://github.com/funkatron/veruca.git
cd veruca

# Create a virtual environment (like a clean workspace)
python -m venv venv

# Activate the virtual environment
# On Mac/Linux:
source venv/bin/activate
# On Windows:
venv\Scripts\activate

# Install Veruca
pip install -e .
```
Veruca provides a simple command-line interface with three main actions:
Search your Obsidian notes using natural language:
```bash
veruca query "What are my active projects?" --filter status=active
```
Options:
- `query`: Your question (required)
- `--vault-path`: Path to your Obsidian vault (default: `~/Obsidian`)
- `--filter`: Filter by metadata (e.g., `tags=python,status=active`)
- `--model`: Language model to use (default: `llama2`)
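The `--filter` syntax is comma-separated `key=value` pairs. As a rough illustration of how such a spec maps onto metadata filters (the helper name is made up for this sketch, not Veruca's actual code):

```python
def parse_filter(spec: str) -> dict:
    """Parse a --filter spec like 'tags=python,status=active' into a dict.

    This mirrors the documented syntax; Veruca's real parser may differ.
    """
    filters = {}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        filters[key.strip()] = value.strip()
    return filters

print(parse_filter("tags=python,status=active"))
# {'tags': 'python', 'status': 'active'}
```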
Create or update the search index for your vault:
```bash
veruca index --vault-path ~/my-vault
```
Options:
- `--vault-path`: Path to your Obsidian vault (default: `~/Obsidian`)
- `--model`: Embedding model to use (default: `nomic-embed-text`)
Control the Ollama server that runs the language models:
```bash
# Check server status
veruca ollama status

# Start the server
veruca ollama start

# Stop the server
veruca ollama stop
```
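A status check like this typically amounts to seeing whether the Ollama HTTP server answers on its default port (11434). A minimal sketch, assuming the default port and a made-up helper name:

```python
import urllib.request
import urllib.error


def ollama_running(host="127.0.0.1", port=11434, timeout=2.0) -> bool:
    """Return True if an Ollama server answers on its default port.

    Illustrative helper only; `veruca ollama status` may check differently.
    """
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return resp.getcode() == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: no server listening
        return False
```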
Veruca processes your notes in several steps:
1. **Document Processing**
   - Reads your Obsidian markdown files
   - Extracts metadata, tags, and links
   - Processes Obsidian-specific features
2. **Indexing**
   - Splits documents into manageable chunks
   - Generates embeddings using `nomic-embed-text`
   - Stores vectors in a local database
3. **Querying**
   - Converts your question into embeddings
   - Finds similar content in the vector store
   - Filters results based on metadata
   - Generates responses using `llama2`
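The chunk-embed-search steps above can be sketched in a few lines. This toy version uses a bag-of-words count as a stand-in for real `nomic-embed-text` embeddings and a list as a stand-in for the vector database, but the ranking-by-cosine-similarity idea is the same:

```python
import math
import re
from collections import Counter


def chunk(text: str, size: int = 200) -> list[str]:
    """Split text into roughly size-character chunks at word boundaries."""
    words, chunks, cur, n = text.split(), [], [], 0
    for w in words:
        cur.append(w)
        n += len(w) + 1
        if n >= size:
            chunks.append(" ".join(cur))
            cur, n = [], 0
    if cur:
        chunks.append(" ".join(cur))
    return chunks


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (Veruca uses nomic-embed-text via Ollama)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Index: embed every chunk of every note (contents are made up)
notes = {
    "projects.md": "Active projects: veruca search tool and a garden journal.",
    "recipes.md": "Tomato soup recipe with basil and garlic.",
}
index = [(name, c, embed(c)) for name, text in notes.items() for c in chunk(text)]

# Query: embed the question and rank chunks by cosine similarity
question = embed("what projects are active?")
best = max(index, key=lambda item: cosine(question, item[2]))
print(best[0])  # projects.md
```

In Veruca itself, the top-ranked chunks are then handed to `llama2` to generate the answer.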
- Works with Obsidian features:
  - Internal links (`[[filename]]` and `[[filename|display text]]`)
  - Frontmatter (YAML metadata)
  - Tags (`#tag` and nested tags like `#tag/subtag`)
  - Callouts (admonitions)
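To make the feature list above concrete, here is a rough sketch of extracting those pieces from a note with regular expressions. The note text and helper names are invented for illustration, and the frontmatter reader handles only simple `key: value` lines rather than full YAML:

```python
import re

NOTE = """---
status: active
tags: [python, search]
---
See [[Veruca Design|design notes]] and [[Roadmap]].
Ideas tagged #project and #project/search.
"""


def parse_frontmatter(text: str) -> dict:
    """Read simple 'key: value' pairs from a YAML frontmatter block."""
    m = re.match(r"---\n(.*?)\n---\n", text, re.S)
    meta = {}
    if m:
        for line in m.group(1).splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def parse_links(text: str) -> list[str]:
    """Find [[target]] and [[target|display text]] links, returning targets."""
    return [m.group(1) for m in re.finditer(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]", text)]


def parse_tags(text: str) -> list[str]:
    """Find #tag and nested #tag/subtag tags."""
    return re.findall(r"#([\w/-]+)", text)

print(parse_frontmatter(NOTE)["status"])  # active
print(parse_links(NOTE))                  # ['Veruca Design', 'Roadmap']
print(parse_tags(NOTE))                   # ['project', 'project/search']
```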
- Everything runs locally on your computer
- No data is sent to the cloud
- Command-line interface
If you run into any issues:
- Check if Ollama is running (`veruca ollama status`)
- Make sure your Obsidian vault path is correct
- Try a simple query first to test
- Ensure you have the required models pulled (`nomic-embed-text` and `llama2`)
Want to help improve Veruca? Great! Here's how:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.