
Aether

A modern, privacy-first AI chat playground for any OpenAI-compatible API. Built with Next.js 16, React 19, TypeScript, and Tailwind CSS 4.


Aether is a single-page chat client that talks to any OpenAI-compatible /chat/completions endpoint — Xiaomi MiMo, OpenAI, DeepSeek, Groq, OpenRouter, local Ollama, you name it. Your API key never leaves the browser; conversations are stored in localStorage only. There is no backend.

(Screenshot: Aether in dark mode)

✨ Features

  • 💬 Streaming chat — token-by-token responses over SSE with a stop button, abort signal, and per-message regenerate.
  • 🧠 Reasoning mode — collapsible chain-of-thought viewer that surfaces reasoning_content from reasoning models (Xiaomi MiMo, DeepSeek-R1, etc.).
  • 🎛 Provider presets — one click to switch between Xiaomi MiMo, OpenAI, DeepSeek, Groq, Ollama, or any custom OpenAI-compatible endpoint.
  • 📝 Rich Markdown — GFM tables, task lists, syntax-highlighted code blocks (highlight.js) with a copy button.
  • 💾 Privacy-first persistence — settings, conversations, and theme all live in localStorage. No tracking, no telemetry, no backend.
  • 🌗 Light / dark / system theme — flicker-free hydration.
  • ⚡ Fast and tiny — production build is a single static page (no API routes, no server runtime needed). Deploys for free on Vercel, Netlify, Cloudflare Pages, GitHub Pages, etc.
  • 🧩 Drop-in extensible — every provider is just a row in presets.ts.
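Because each provider is just a row in presets.ts, adding one is a small diff. A hedged sketch of what such a row might look like — the actual field names in src/lib/presets.ts may differ:

```typescript
// Hypothetical shape of a provider preset; the real interface in
// src/lib/presets.ts may use different field names.
interface ProviderPreset {
  id: string;           // stable key stored in Settings
  label: string;        // shown in the preset dropdown
  baseURL: string;      // OpenAI-compatible /v1 root
  defaultModel: string; // sensible default, user-overridable
  requiresKey: boolean; // local servers like Ollama can set this to false
}

const presets: ProviderPreset[] = [
  {
    id: "mimo",
    label: "Xiaomi MiMo",
    baseURL: "https://api.xiaomimimo.com/v1",
    defaultModel: "mimo-v2.5-reasoning",
    requiresKey: true,
  },
  {
    id: "ollama",
    label: "Ollama (local)",
    baseURL: "http://localhost:11434/v1",
    defaultModel: "llama3.2",
    requiresKey: false,
  },
];
```

A new provider is then a third object literal in this array; no other code changes should be needed.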

📸 Screenshots

(Screenshots: empty state in dark mode · active chat with reasoning panel · settings dialog · light mode)

🚀 Quick start

```bash
git clone https://github.com/<your-username>/aether.git
cd aether
npm install
npm run dev
```

Open http://localhost:3000, click the gear icon in the bottom-left of the sidebar, choose a provider preset, paste your API key, and start chatting.

Use with Xiaomi MiMo

Xiaomi MiMo is a fully OpenAI-compatible platform that hosts the open MiMo V2.5 series — flagship reasoning, vision, and TTS models. Aether ships with MiMo as the default preset:

| Field | Value |
| --- | --- |
| Base URL | `https://api.xiaomimimo.com/v1` |
| Model | `mimo-v2.5-reasoning` (or any `mimo-v2.5-*`) |
| API key | Generate one at https://platform.xiaomimimo.com |

When you select a reasoning-capable model, Aether automatically renders the reasoning_content stream in a collapsible Reasoning panel above the final answer.
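Concretely, the request Aether sends with this preset is a standard OpenAI-style chat-completions body. A sketch (the schema follows the OpenAI API; the prompt is just an example):

```typescript
// Standard OpenAI-compatible chat-completions request body,
// pointed at the MiMo reasoning model.
const body = {
  model: "mimo-v2.5-reasoning",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: true,
  stream_options: { include_usage: true }, // final chunk carries token usage
};

// Sent as: POST https://api.xiaomimimo.com/v1/chat/completions
// with header: Authorization: Bearer <your API key>
const request = JSON.stringify(body);
```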

Use with OpenAI / DeepSeek / Groq

Pick the preset in Settings → Provider preset. The Base URL and a sensible default model are filled in for you. You can override either.

Use with Ollama (local)

```bash
ollama serve
ollama pull llama3.2
```

Then in Aether: Settings → Provider preset → Ollama (local). No API key required (use any placeholder string).

🏗 Build for production

```bash
npm run build
npm run start
```

Aether prerenders to a fully static page; with `output: "export"` set in the Next.js config, `next build` writes the site to out/, which you can serve from any CDN — or just deploy the repo to Vercel as-is.

📦 Deploy

Deploy with Vercel

Or push to any of:

  • Vercel — zero config, just import the repo.
  • Netlify — set the build command to npm run build and publish dir to .next.
  • Cloudflare Pages — use the Next.js preset.
  • GitHub Pages — set `output: "export"` in the Next.js config, run next build, then push out/.
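For the static-export targets above, note that recent Next.js versions (14+) removed the standalone `next export` command in favor of a config flag. A minimal sketch of the config, assuming the standard Next.js static-export setup:

```typescript
// next.config.ts — enable static HTML export. `next build` then
// emits a fully static site into out/ (no server runtime needed).
const nextConfig = {
  output: "export",
} as const;

export default nextConfig;
```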

🧠 How streaming works

src/lib/streaming.ts POSTs to <baseURL>/chat/completions with stream: true and stream_options.include_usage: true, then parses the Server-Sent Events response chunk by chunk. Both delta.content and delta.reasoning_content are forwarded to the UI:

```typescript
await streamChatCompletion(settings, messages, signal, {
  onContent: (delta) => { /* append to bubble */ },
  onReasoning: (delta) => { /* append to reasoning panel */ },
  onUsage: (u) => { /* token / cost stats */ },
  onDone: () => { /* finalize */ },
  onError: (err) => { /* surface upstream message */ },
});
```

The same code path works for non-streaming responses by setting Settings → Stream responses → off.
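The SSE parsing itself boils down to handling `data:` lines one at a time. A simplified sketch of that loop — the function name here is illustrative, not the one in src/lib/streaming.ts, but `delta.content` / `delta.reasoning_content` follow the actual streaming schema:

```typescript
// Parse one SSE "data:" line from a /chat/completions stream and
// return the content / reasoning deltas it carries, if any.
function parseSSELine(
  line: string
): { content?: string; reasoning?: string } | "done" | null {
  if (!line.startsWith("data:")) return null; // skip comments and blank lines
  const payload = line.slice(5).trim();
  if (payload === "[DONE]") return "done"; // stream terminator sentinel
  const delta = JSON.parse(payload).choices?.[0]?.delta ?? {};
  return { content: delta.content, reasoning: delta.reasoning_content };
}
```

For example, `parseSSELine('data: {"choices":[{"delta":{"content":"Hi"}}]}')` yields `{ content: "Hi" }`, while a `data: [DONE]` line signals the end of the stream.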

🛠 Project scripts

| Command | Description |
| --- | --- |
| `npm run dev` | Start the dev server on port 3000 |
| `npm run build` | Build the production bundle |
| `npm run start` | Serve the production build |
| `npm run lint` | Run ESLint (Next.js + React Hooks) |
| `npm run typecheck` | Run `tsc --noEmit` |

🤝 Contributing

Pull requests are welcome! See CONTRIBUTING.md for the project layout, coding style, and guidelines for adding new provider presets.

📜 License

MIT — see LICENSE.
