Open Source Computer Command Framework
Updated Mar 27, 2026 - Python
Cross-platform desktop tool for chaining local AI models and plugins into powerful, agentic workflows. It supports prompt-driven orchestration, visual DAG editing, and full offline execution.
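The prompt-driven DAG orchestration described above can be sketched generically: stages are plain functions wired by a dependency graph and run in topological order. This is a minimal illustration using Python's standard-library `graphlib`, not the framework's actual plugin API; all stage names here are hypothetical stand-ins.

```python
from graphlib import TopologicalSorter

# Illustrative stage functions standing in for local models/plugins.
# Each stage reads from and writes to a shared context dict.
def fetch_prompt(ctx):
    ctx["prompt"] = "a robot learns to paint"

def draft_script(ctx):
    ctx["script"] = f"Script based on: {ctx['prompt']}"

def render_audio(ctx):
    ctx["audio"] = f"Audio for: {ctx['script']}"

def assemble(ctx):
    ctx["result"] = (ctx["script"], ctx["audio"])

# DAG: each key depends on the stages in its value set.
graph = {
    "draft_script": {"fetch_prompt"},
    "render_audio": {"draft_script"},
    "assemble": {"draft_script", "render_audio"},
}
stages = {
    "fetch_prompt": fetch_prompt,
    "draft_script": draft_script,
    "render_audio": render_audio,
    "assemble": assemble,
}

def run_workflow(graph, stages):
    """Run every stage once, in an order that respects the DAG."""
    ctx = {}
    for name in TopologicalSorter(graph).static_order():
        stages[name](ctx)
    return ctx

ctx = run_workflow(graph, stages)
print(ctx["result"][0])  # Script based on: a robot learns to paint
```

The same shape scales to real model calls: swap each stub for an inference step and the topological order guarantees every stage sees its inputs.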
🎬 Nano Cinema: An all-in-one local AI video production studio. Automatically orchestrates Llama-3 (script), SDXL-Turbo (visuals), EdgeTTS (audio), and LTX-Video (motion) into a seamless Python workflow. Create cinematic short films with no API fees, full privacy, and professional-grade editing logic included! 🚀
An intelligent local AI agent powered by open-source LLMs, featuring free web search, hybrid memory, and context-aware query rewriting for real-time, grounded answers.
**LocalEcho** is a fully local, open-source text-to-speech engine powered by **Qwen3 TTS** models.
A lightweight, self-contained Python project for running a local large language model (LLM) with minimal dependencies. This system uses TinyLlama-1.1B-Chat-v1.0 and llama-cpp-python for inference, and Rich for a user-friendly console chat interface.
Lightweight Ruby gem for interacting with locally running Ollama LLMs with streaming, chat, and full offline privacy.
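Streaming clients like the gem above typically consume Ollama's documented response format: newline-delimited JSON, one object per line, with incremental content and a final `"done": true` marker. A minimal Python sketch of parsing such a stream (the sample lines below are illustrative, not captured output):

```python
import json

def collect_stream(lines):
    """Concatenate incremental tokens from an Ollama-style
    newline-delimited JSON chat stream (one JSON object per line)."""
    text = []
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        # Chat responses carry the token under message.content.
        text.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(text)

# Illustrative sample of two stream lines.
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo!"}, "done": true}',
]
print(collect_stream(sample))  # Hello!
```

In a real client the lines would come from an HTTP response to Ollama's local API; parsing per line is what lets the UI render tokens as they arrive.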
Local-first desktop AI daemon that runs fully offline. Tracks active desktop context, exposes a CLI, streams responses from local LLMs via Ollama, and runs as a systemd user service. Built for systems-level learning: IPC, daemons, streaming inference, OS integration.
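The daemon-plus-CLI pattern above (a long-running process streaming replies to a client over IPC) can be sketched in miniature with a standard-library socket pair; this is a generic illustration of the IPC shape, not this project's actual protocol.

```python
import socket
import threading

def serve(conn):
    """Toy daemon side: read one request, stream the reply in small
    chunks (standing in for streamed LLM tokens), then close."""
    request = conn.recv(1024).decode()
    for word in f"echo: {request}".split():
        conn.sendall((word + " ").encode())
    conn.close()

def ask(conn, prompt):
    """Toy CLI side: send the prompt, then read chunks until the
    daemon closes the connection."""
    conn.sendall(prompt.encode())
    parts = []
    while chunk := conn.recv(64):
        parts.append(chunk.decode())
    conn.close()
    return "".join(parts).strip()

# socketpair() gives two connected endpoints, like a Unix socket
# between a daemon and its CLI, without needing a filesystem path.
daemon_end, cli_end = socket.socketpair()
t = threading.Thread(target=serve, args=(daemon_end,))
t.start()
reply = ask(cli_end, "hello")
t.join()
print(reply)  # echo: hello
```

A real daemon would listen on a named Unix socket and stream model output chunk by chunk, but the read-until-close loop on the client side is the same.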
Setup guide for an AI mini PC: hosting local LLMs via LM Studio in an RDP/headless-GUI setup. This example uses a Minisforum AI X1 Pro (AMD Ryzen AI 9 HX 370, 64 GB RAM).
An automated AI-driven report-generation tool designed to help complete assignments in minutes. With local or cloud-based AI integration, users can produce detailed reports effortlessly.
A fully local desktop AI assistant built in C++ with wxWidgets, powered by llama.cpp and running offline.