Vane is an AI-powered answering engine.
Updated Mar 26, 2026 - TypeScript
The engine for on-device audio AI. Run private transcription, TTS, and voice cloning on your own hardware via an OpenAI-compatible API. No cloud, no keys, no latency.
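Several projects on this page expose an OpenAI-compatible API, which means existing clients only need a different base URL to talk to a local backend. A minimal sketch in Python of building such a request (the endpoint URL and model name are placeholder assumptions, not taken from any specific project):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local backend."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local endpoint; any OpenAI-compatible server would accept this shape.
req = build_chat_request("http://localhost:8080", "local-model", "Hello")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

Because the request shape is the standard Chat Completions format, the same client code works against local and remote backends alike.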
High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model discovery across local and remote inference backends.
SmarterRouter: An intelligent LLM gateway and VRAM-aware router for Ollama, llama.cpp, and OpenAI. Features semantic caching, model profiling, and automatic failover for local AI labs.
Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed.
Free, open-source alternative to Weavy AI, Krea Nodes, Freepik Spaces & FloraFauna AI — node-based AI workflow builder for generative image & video pipelines
We gave AI agents a brain. Memory, planning, continuity, and self-repair — the missing cognitive architecture layer. Runs on your Mac.
Open Source Computer Command Framework
Recallium is a local, self-hosted universal AI memory system providing a persistent knowledge layer for developer tools (Copilot, Cursor, Claude Desktop). It eliminates "AI amnesia" by automatically capturing, clustering, and surfacing decisions and patterns across all projects. It uses the Model Context Protocol (MCP) for universal compatibility and ensures privacy by keeping all data local.
Emotional AI companions for personal relationships.
20 megabytes. AI everywhere. Local AI backend powered by Rust: 114 API routes, native desktop app, plugins in any language
Run IBM Granite 4.0 locally on a Raspberry Pi 5 with Ollama. This is privacy-first AI: your data never leaves your device because everything runs 100% locally, with no cloud uploads and no third-party tracking.
A private, local RAG (Retrieval-Augmented Generation) system using Flowise, Ollama, and open-source LLMs to chat with your documents securely and offline.
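The retrieval step at the heart of a document-chat RAG pipeline like this can be sketched as scoring stored chunks against the question and prepending the best matches to the LLM prompt. A toy keyword-overlap version in Python (real stacks such as Flowise with Ollama use embedding similarity instead; the function names here are illustrative):

```python
def score(question: str, chunk: str) -> int:
    """Count shared lowercase words between the question and a chunk."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Prepend retrieved context so the LLM answers from local documents only."""
    context = "\n".join(retrieve(question, chunks))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = ["ollama runs models locally", "flowise builds chat flows", "the sky is blue"]
print(retrieve("how does ollama run models", docs, k=1))
# ['ollama runs models locally']
```

Swapping the `score` function for cosine similarity over embeddings gives the usual vector-store retrieval without changing the rest of the pipeline.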
AI SMS Auto-Responder for Android. Turn your Android device into an autonomous AI communication hub. A Python-based SMS auto-responder running natively on Termux, powered by LLMs (OpenRouter/Ollama) with a sleek Web & Terminal UI.
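The basic loop of an SMS auto-responder like this is: read an incoming message, decide whether it warrants an automatic reply, and if so send the LLM's answer back. A minimal sketch of the gating step in Python (the filtering rules and function names are assumptions for illustration, not this project's actual logic):

```python
def should_auto_reply(message: str, sender: str, blocked_senders: set[str]) -> bool:
    """Gate automatic replies: skip blocked senders, empty texts, and OTP-like messages."""
    if sender in blocked_senders or not message.strip():
        return False
    # Never auto-reply to verification codes (a common safety rule for SMS bots).
    return "code" not in message.lower()

print(should_auto_reply("Are you free tomorrow?", "+15550100", {"BANK"}))  # True
print(should_auto_reply("Your code is 123456", "+15550100", set()))        # False
```

Only messages that pass this gate would be forwarded to the LLM backend (OpenRouter or Ollama in this project) for a generated reply.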
Custom Llama Swap Container Image
Hardened Raspberry Pi 5 deployment of OpenClaw (self-hosted AI agent gateway) with Ollama inference, a minimal Python proxy, and a two-tier Claude Code agent team architecture.
Production-ready guide for connecting OpenClaw to a Telegram Bot. Build a self-hosted Telegram AI Agent using OpenClaw Gateway, pairing, and streaming responses.
Self-hosted AI chat interface with RAG, long-term memory, and admin controls. Works with TabbyAPI, Ollama, vLLM, and any OpenAI-compatible API.
🚀 7 Ways to Run Any LLM Locally - Simple Methods