Open-source local AI — agents, chat, and inference. Private by default.
A local-first operator agent for llama.cpp. Standalone SEA binary, tuned for small local models. Data, traces, browser profile, memory, and model traffic stay on your machine.
curl -fsSL https://api.atomicbot.ai/agent-install | sh
atomic-agent
- System browser via ARIA snapshots, shell, filesystem, documents (PDF/DOCX/XLSX), git, clipboard, HTTP, notifications
- GBNF grammar-constrained tool calls, parallel tool batches, cache-hot prompt prefix, externalized state in SQLite
- Local Markdown skills loaded on demand, FTS5 note recall, durable cron and webhook-triggered tasks
- TUI, CLI, OpenAI-compatible HTTP server, and a Tauri sidecar speaking newline-delimited JSON
- Policy-gated dangerous actions, append-only NDJSON traces with prompt-drift replay
An open-source desktop AI app. Run local LLMs from Hugging Face or connect cloud models (OpenAI, Anthropic, Mistral, Groq, MiniMax, others). Available on macOS, Windows, and iOS.
curl http://localhost:1337/v1/chat/completions -d '{
"model": "llama-3.2-3b-instruct",
"messages": [{ "role": "user", "content": "Why is the sky blue?" }]
}'
- Run LLMs (Llama, Gemma, Qwen, others) from Hugging Face — fully offline
- Connect cloud providers: OpenAI, Anthropic, Mistral, Groq, MiniMax
- Custom assistants for specialized tasks
- MCP integration for agentic capabilities
- Native iOS app, not a wrapper
A native autonomous AI agent for desktop. Built on the Hermes Agent core by Nous Research, with computer use, time-travel file history, and offline operation.
- Computer use with native OCR (Apple Vision / Windows.Media.Ocr) — pixel-accurate click coordinates, no guessing
- Time-travel file history: every file the agent touches is snapshotted before and after, one-click diff or restore
- Self-improving skills and memory: the agent writes its own procedures and decides what to remember across sessions
- Bundled inference engine, or 20+ cloud providers (OpenRouter, Anthropic, OpenAI, Gemini, DeepSeek, others)
- One agent across 16+ messengers — Telegram, Discord, Slack, WhatsApp, Signal, iMessage, Email, Matrix, Teams
- 40+ tools, MCP-native: file ops, web search, code execution, subagents, cron, browser automation, agentskills.io Skills Hub
- Approval modals for dangerous shell commands and writes
A native desktop app that turns OpenClaw (330k+ stars) into a personal AI assistant. No terminal, no config, no Docker.
- Drafts emails, schedules meetings, summarizes docs, automates the browser
- 13,000+ skills from ClawHub
- Multi-model: Claude, GPT, Gemini — switch on the fly with your own API keys
- One AI across Telegram, Slack, Discord, WhatsApp
- Built-in Whisper transcription, local or cloud
- Persistent memory across sessions and tasks
- Auto-updates to the latest OpenClaw release
A llama.cpp fork with TurboQuant KV cache compression and Gemma 4 MTP speculative decoding. ~30-50% throughput gains on the same hardware, drop-in compatible with upstream tools and GGUF.
WHT-rotated 2/3/4-bit KV cache with backend-native kernels (Metal TurboFlash, CUDA, Vulkan, HIP). turbo3 is the default — 3-bit, ~4.3× compression vs F16.
llama-server -m model.gguf -c 32768 -ngl 99 -fa on \
  -ctk turbo3 -ctv turbo3
Pair any gemma4 target with the official gemma4_assistant head — loaded into the target context, no second tokenizer or KV cache. +30-50% short-prompt throughput on Gemma 4 26B-A4B / 31B at 85-88% accept rate. Pre-built assistant heads on Hugging Face.
llama-server -m gemma-4-target.gguf -c 16384 -ngl 99 -ngld 99 -fa on \
--mtp-head gemma-4-assistant-Q4_K_M.gguf \
  --spec-type mtp --draft-block-size 3
- TQ3_1S/TQ4_1S weight quantization via llama-quantize — 25-35% smaller than Q8_0, single-digit % PPL delta
- Regularly synced with ggml-org/llama.cpp
- Powers local inference in Atomic Chat, Atomic Hermes, and Atomic Agent
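The TQ3_1S/TQ4_1S weight types plug into the standard llama-quantize workflow. A minimal sketch, assuming the fork's llama-quantize keeps upstream's input/output/type argument order; file names are placeholders:

```shell
# Placeholder file names; TQ3_1S is the fork's quant type. CLI shape matches
# upstream llama-quantize: <input.gguf> <output.gguf> <type>.
llama-quantize model-f16.gguf model-tq3_1s.gguf TQ3_1S

# Serve the result like any other GGUF.
llama-server -m model-tq3_1s.gguf -c 32768 -ngl 99 -fa on
```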
Framework-agnostic desktop automation for AI agents. Screenshot, click, type, scroll, drag, OCR — works with any tool-calling LLM, MCP server, or custom pipeline.
npm install @atomicbotai/computer-use
import { screenshot, click, type } from "@atomicbotai/computer-use";
const { image, anchors } = await screenshot();
const send = anchors.find(a => a.text === "Send");
await click(send.x, send.y);
await type("hello");
- @atomicbotai/computer-use — TypeScript library: OCR, actions, overlay, session lock
- @atomicbotai/computer-use-mcp — MCP server for Claude Desktop, Cursor, Windsurf, or any MCP client
- Zero-dependency native OCR (Apple Vision on macOS, Windows.Media.Ocr on Windows) — no Tesseract, no cloud, no API keys
- Pixel-accurate UI anchors: "Send" at (1450, 890) instead of guessing from a downscaled screenshot
- Full action set: click / double / triple, type, press, scroll, drag, hold key, clipboard, app switch, list displays
- Native overlay (Swift on macOS, PowerShell on Windows) shows when the agent is driving the mouse and keyboard
- File-based session lock prevents two agents from fighting over the desktop
- Guardrails against misclicks in dock/launcher and submit zones
- Per-action debug artifacts: screenshots, OCR results, tool outputs
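For the MCP route, a minimal Claude Desktop wiring sketch. The package name comes from this page; the npx bin entry, the server name, and the config path are assumptions:

```shell
# Hypothetical wiring: register the MCP server with Claude Desktop (macOS path).
# WARNING: '>' overwrites any existing config -- merge by hand if you have one.
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
cat > "$CONFIG" <<'EOF'
{
  "mcpServers": {
    "computer-use": {
      "command": "npx",
      "args": ["-y", "@atomicbotai/computer-use-mcp"]
    }
  }
}
EOF
```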
A complete REST API for ClawHub. The official ClawHub API exposes a subset of the data; this layer aggregates everything into clean endpoints, no Convex knowledge required.
- Full coverage: skills, metadata, and everything the official API doesn't expose
- Auto-syncing: periodically pulls and caches the latest data from Convex
- Standard REST, no upstream dependency at query time
- Drop-in for apps, bots, and workflows
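A quick sense of the shape, assuming a local deployment; the routes below are hypothetical, since the actual endpoint paths are not listed here:

```shell
# Hypothetical routes against a local deployment -- adjust host/port to yours.
curl http://localhost:3000/skills            # full skill list with metadata
curl http://localhost:3000/skills/<slug>     # a single skill (<slug> is a placeholder)
```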
© 2026 Atomic · atomicbot.ai




