A modern, privacy-first AI chat playground for any OpenAI-compatible API. Built with Next.js 16, React 19, TypeScript, and Tailwind CSS 4.
Aether is a single-page chat client that talks to any OpenAI-compatible
/chat/completions endpoint — Xiaomi MiMo, OpenAI, DeepSeek, Groq, OpenRouter,
local Ollama, you name it. Your API key never leaves the browser; conversations
are stored in localStorage only. There is no backend.
- 💬 Streaming chat — token-by-token responses over SSE with a stop button, abort signal, and per-message regenerate.
- 🧠 Reasoning mode — collapsible chain-of-thought viewer that surfaces `reasoning_content` from reasoning models (Xiaomi MiMo, DeepSeek-R1, etc.).
- 🎛 Provider presets — one click to switch between Xiaomi MiMo, OpenAI, DeepSeek, Groq, Ollama, or any custom OpenAI-compatible endpoint.
- 📝 Rich Markdown — GFM tables, task lists, syntax-highlighted code blocks (`highlight.js`) with a copy button.
- 💾 Privacy-first persistence — settings, conversations, and theme all live in `localStorage`. No tracking, no telemetry, no backend.
- 🌗 Light / dark / system theme — flicker-free hydration.
- ⚡ Fast and tiny — production build is a single static page (no API routes, no server runtime needed). Deploys for free on Vercel, Netlify, Cloudflare Pages, GitHub Pages, etc.
- 🧩 Drop-in extensible — every provider is just a row in `presets.ts`.
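As a rough sketch, a preset row could look like the following. The interface name and field names here are illustrative, not the repo's actual types; see `presets.ts` for the real shape. (The Groq base URL and model name are real examples of an OpenAI-compatible endpoint.)

```typescript
// Hypothetical shape of a provider preset row — the actual presets.ts may differ.
interface ProviderPreset {
  id: string;           // stable key, referenced from settings in localStorage
  label: string;        // display name in the Settings dropdown
  baseURL: string;      // OpenAI-compatible API root (no trailing slash)
  defaultModel: string; // sensible default, user-overridable
}

// Adding a provider is just appending one more row like this one.
const groq: ProviderPreset = {
  id: "groq",
  label: "Groq",
  baseURL: "https://api.groq.com/openai/v1",
  defaultModel: "llama-3.3-70b-versatile",
};
```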
| Empty state (dark) | Active chat with reasoning |
|---|---|
| ![]() | ![]() |

| Settings dialog | Light mode |
|---|---|
| ![]() | ![]() |
```bash
git clone https://github.com/<your-username>/aether.git
cd aether
npm install
npm run dev
```

Open http://localhost:3000, click the gear icon in the bottom-left of the sidebar, choose a provider preset, paste your API key, and start chatting.
Xiaomi MiMo is a fully OpenAI-compatible platform that hosts the open MiMo V2.5 series — flagship reasoning, vision, and TTS models. Aether ships with MiMo as the default preset:
| Field | Value |
|---|---|
| Base URL | `https://api.xiaomimimo.com/v1` |
| Model | `mimo-v2.5-reasoning` (or any `mimo-v2.5-*`) |
| API key | Generate one at https://platform.xiaomimimo.com |
When you select a reasoning-capable model, Aether automatically renders the `reasoning_content` stream in a collapsible Reasoning panel above the final answer.
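Concretely, a reasoning model interleaves two kinds of deltas in the stream: `reasoning_content` first, then the final answer under `content`. The chunk below is a sketch in the OpenAI-style streaming format with made-up values, showing how the two fields can be told apart:

```typescript
// Illustrative streamed chunk from a reasoning model (field values made up).
const chunk = JSON.parse(`{
  "choices": [{
    "index": 0,
    "delta": { "reasoning_content": "First, compare the two options..." }
  }]
}`);

const delta = chunk.choices[0].delta;

// Reasoning tokens arrive under delta.reasoning_content; the answer arrives
// later under delta.content. The UI routes each to its own panel.
const target = delta.reasoning_content !== undefined ? "reasoning" : "content";
```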
Pick the preset in Settings → Provider preset. The Base URL and a sensible default model are filled in for you. You can override either.
```bash
ollama serve
ollama pull llama3.2
```

Then in Aether: Settings → Provider preset → Ollama (local). No API key required (use any placeholder string).
```bash
npm run build
npm run start
```

Aether prerenders to a fully static page; you can serve `out/` from any CDN after a static export (`output: "export"` in the Next.js config, which replaced the old `next export` command), or just deploy the repo to Vercel as-is.
Or push to any of:
- Vercel — zero config, just import the repo.
- Netlify — set the build command to `npm run build` and the publish dir to `.next`.
- Cloudflare Pages — use the Next.js preset.
- GitHub Pages — run `next build` with `output: "export"` enabled, then push `out/`.
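For the GitHub Pages route, static export is enabled in the Next.js config. A minimal sketch (the repo's actual `next.config.ts` may differ or already include this):

```typescript
// next.config.ts — minimal static-export config (assumed, not verified against the repo)
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export", // emit a fully static site to out/ at build time
};

export default nextConfig;
```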
`src/lib/streaming.ts` POSTs to `<baseURL>/chat/completions` with `stream: true` and `stream_options.include_usage: true`, then parses the Server-Sent Events response chunk by chunk. Both `delta.content` and `delta.reasoning_content` are forwarded to the UI:
```ts
await streamChatCompletion(settings, messages, signal, {
  onContent: (delta) => { /* append to bubble */ },
  onReasoning: (delta) => { /* append to reasoning panel */ },
  onUsage: (u) => { /* token / cost stats */ },
  onDone: () => { /* finalize */ },
  onError: (err) => { /* surface upstream message */ },
});
```

The same code path works for non-streaming responses by setting Settings → Stream responses → off.
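An SSE body is a sequence of `data: {...}` lines terminated by a `data: [DONE]` sentinel. As a sketch of the parsing step (not the actual `streaming.ts` code — the helper name and return shape here are invented for illustration):

```typescript
// Sketch of SSE delta parsing for one buffered chunk of the response body.
// Not the actual src/lib/streaming.ts implementation.
function parseSSELines(buffer: string): { content: string; reasoning: string } {
  let content = "";
  let reasoning = "";
  for (const line of buffer.split("\n")) {
    if (!line.startsWith("data: ")) continue;   // skip blank lines and comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break;            // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
    content += delta.content ?? "";
    reasoning += delta.reasoning_content ?? "";
  }
  return { content, reasoning };
}
```

In the real client, the accumulated `content` and `reasoning` strings would be fed to `onContent` and `onReasoning` as they arrive rather than returned in one batch.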
| Command | Description |
|---|---|
| `npm run dev` | Start the dev server on port 3000 |
| `npm run build` | Build the production bundle |
| `npm run start` | Serve the production build |
| `npm run lint` | Run ESLint (Next.js + React Hooks) |
| `npm run typecheck` | Run `tsc --noEmit` |
Pull requests are welcome! See CONTRIBUTING.md for the project layout, coding style, and guidelines for adding new provider presets.
MIT — see LICENSE.
- Next.js and React
- Tailwind CSS
- `react-markdown` and `rehype-highlight`
- `lucide-react` for the icons
- The Xiaomi MiMo, OpenAI, DeepSeek, Groq, and Ollama teams for their OpenAI-compatible endpoints



