InfiniGPT can use local models via Ollama. This page covers installing Ollama, pulling models, and configuring InfiniGPT to use them.
Install Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
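If the install script succeeds, the Ollama server should already be running on its default port. A quick sanity check (assuming the default `localhost:11434` address):

```bash
# The server answers plain HTTP requests with "Ollama is running"
curl http://localhost:11434
```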
Pull at least one model (example):

```bash
ollama pull qwen3
```
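To confirm what is available locally before wiring up the bot:

```bash
# List models the Ollama server has pulled; the names here should match
# the entries you put in config.json below
ollama list
```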
Update config.json:

```json
{
  "llm": {
    "models": {
      "ollama": ["qwen3", "llama3.2"]
    },
    "default_model": "qwen3",
    "ollama_url": "localhost:11434"
  }
}
```
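Before starting the bot, it can help to verify that the configured model actually responds. A minimal check against Ollama's native chat endpoint (assuming the server is at `localhost:11434` and `qwen3` has been pulled):

```bash
# Send one non-streaming chat request directly to Ollama
curl http://localhost:11434/api/chat -d '{
  "model": "qwen3",
  "messages": [{"role": "user", "content": "Say hello."}],
  "stream": false
}'
```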
Notes:

- `llm.models.ollama` is a list of model IDs available on your Ollama server.
- `llm.default_model` can be any configured model across providers, including Ollama models.
- Set `--ollama-url` at launch to point to a remote Ollama host if not local (see the launch sketch after this list).
- Start the bot and check logs for the selected model.
- Test with `.ai` prompts; switch models with `.model <name>`.
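For a remote Ollama host, the override mentioned above can be passed at launch. A sketch, assuming the bot is started with `python infinigpt.py` (adjust to your actual entry point; only the `--ollama-url` flag itself is documented here, and the address is illustrative):

```bash
# Point the bot at an Ollama server on another machine
python infinigpt.py --ollama-url 192.168.1.50:11434
```

Once the bot is up, `.model llama3.2` would switch to the second configured model, and `.ai <prompt>` sends a prompt to whichever model is active.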