The Quran, Hadith and Names of Allah, all in one app and API.
The Reminder is an API and app for the Quran, Hadith (Bukhari) and the Names of Allah. It provides search and question answering with flexible embedding backends (OpenAI for speed, Ollama for privacy) and a local-first LLM (Ollama by default, with optional Fanar or OpenAI). Retrieval Augmented Generation (RAG) ensures answers are grounded in authentic Islamic texts. The goal is to consolidate these texts into a single API and app, and to leverage LLMs as a tool for searching. We do not offload reasoning to LLMs; rather, they serve as a new and useful form of indexing for search.
- Quran in English & Arabic
- Names of Allah & Meaning
- Hadith (Bukhari) in English
- Index & Search with RAG using embeddings
- Optional Fanar or OpenAI integration
- API to query LLM or Quran
- Daily web push notifications
Find the latest release
Or Go install
go install github.com/asim/reminder@latest
Quick Start (requires OpenAI API key for fast embeddings):
export OPENAI_API_KEY=xxx
reminder --serve

Alternative: Fully Local (slower, requires Ollama):
# Install and start Ollama (https://ollama.ai/)
ollama pull nomic-embed-text # For embeddings
ollama pull llama3.2 # For LLM responses
# Don't set OPENAI_API_KEY - will use Ollama for both
reminder --serve

LLM Configuration (optional - defaults to local Ollama):
The app uses local Ollama by default for LLM responses. You can override this with:
# Use Fanar API (takes priority over Ollama/OpenAI)
export FANAR_API_KEY=xxx
# Use a specific Ollama model (default: llama3.2)
export OLLAMA_LLM_MODEL=llama3.1
# Use OpenAI as fallback (only if no Fanar key and OLLAMA_LLM_MODEL not set)
export OPENAI_API_KEY=xxx

Embedding Configuration:
For best performance, use OpenAI embeddings (fast & cheap - $0.02/1M tokens):
export OPENAI_API_KEY=xxx # Uses text-embedding-3-small (fast, 1536 dims)

For local/offline use (slower, but no API costs):
# Install Ollama first: https://ollama.ai/
ollama pull nomic-embed-text
# Optional: use different model or instance
export OLLAMA_EMBED_MODEL=nomic-embed-text # default
export OLLAMA_BASE_URL=http://localhost:11434/api # default

Migration Note: If switching embedding providers, delete the old index cache:
rm -rf ~/.reminder/data/reminder.idx.gob.gz

Run the server
reminder --serve
Go to localhost:8080
All queries are returned as JSON
- /api/quran - to get the entire Quran
- /api/names - to get the list of names
- /api/hadith - to get the entire hadith
- /api/search - to get a summarised answer
  - q param for the query
  - POST using content-type as application/json

curl -d '{"q": "what is islam"}' http://localhost:8080/api/search
See /api for more details
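Beyond curl, the search endpoint can be called from any HTTP client. Here is a minimal Go sketch; the request shape mirrors the curl example above, and the base URL assumes a local server on the default port:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// searchRequest builds a POST request for the /api/search endpoint.
// baseURL (e.g. "http://localhost:8080") is an assumption for a
// locally running server; adjust it for your deployment.
func searchRequest(baseURL, q string) (*http.Request, error) {
	payload, err := json.Marshal(map[string]string{"q": q})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/api/search", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := searchRequest("http://localhost:8080", "what is islam")
	if err != nil {
		panic(err)
	}
	// Send with http.DefaultClient.Do(req) once the server is running;
	// here we only print the request that would be sent.
	fmt.Println(req.Method, req.URL)
}
```

Send the request with `http.DefaultClient.Do(req)` and decode the JSON body; the exact response schema is documented at /api.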
The reminder bakes in a "lite" app by default. This can be replaced by a more featureful React app.
To build the React app:
# requires pnpm
make setup
Build the app
make build
Pass the additional --web flag, which replaces the lite app with the React app:
reminder --web --serve
The Quran says in 6:90
Say,
“I ask no reward of you for this (Quran) —
it is a reminder to the whole world.”
We have been asked to verify the sources of data.
By default, all LLM operations (embeddings and text generation) run locally via Ollama. Optional cloud LLM providers (Fanar/OpenAI) can be configured. All sources of truth are authentic.