🎵 Music bot for my private Discord server, powered by discord-player
- Music playback from SoundCloud, with Spotify search and metadata bridging (YouTube opt-in via `ENABLE_YOUTUBE`)
- On-disk Opus file cache with fuzzy matching for instant replays
- Redis-backed query cache for fast repeated searches
- Audio filters (bassboost, nightcore, 8D, and more) and adjustable tempo
- Spotify-like volume normalization
- AI-powered natural language queue control (`/prompt`)
- Slash commands with autocomplete and interactive buttons throughout
- Playlist management via a dedicated Discord channel
- Queue up to 5 playlists at once, with head/tail slicing
- Queue tools: deduplicate, sort, shuffle, move, and more
- Queue recovery across graceful restarts and stream errors
- Redis-backed playback statistics (top tracks, requesters, playlists)
- Lockdown mode for restricting command access temporarily
- Maintenance mode via Kubernetes API
- Tic-tac-toe
- Integration with Sentry for error tracking
- Structured logging with Pino
- Easy deployment with a Helm chart
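To illustrate the fuzzy matching behind the Opus file cache, here is a minimal sketch. The normalization rules and function names are illustrative assumptions, not the bot's actual implementation:

```typescript
// Sketch: fuzzy cache lookup by normalizing queries so near-identical
// searches resolve to the same cached file. Rules below are assumptions.

/** Normalize a track query: lowercase, drop bracketed suffixes, strip punctuation. */
function normalizeQuery(query: string): string {
  return query
    .toLowerCase()
    .replace(/\(.*?\)|\[.*?\]/g, "") // drop "(Official Video)"-style suffixes
    .replace(/[^\p{L}\p{N} ]/gu, "") // strip punctuation, keep letters/digits
    .replace(/\s+/g, " ")
    .trim();
}

/** Return the cached key whose normalized form matches the query, if any. */
function findCached(query: string, cacheKeys: string[]): string | undefined {
  const wanted = normalizeQuery(query);
  return cacheKeys.find((key) => normalizeQuery(key) === wanted);
}
```

With this scheme, a replay of "daft punk one more time!" would hit a cache entry created for "Daft Punk - One More Time (Official Video)".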
You have two options:
- Use the Helm chart (recommended)
- Use the Docker image from either Docker Hub or GitHub Container Registry
Note: the following environment variables are required:

- `REDIS_URL` – Redis instance connection string
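A startup check for `REDIS_URL` might look like the sketch below. This is an assumption about how such validation could be done, not the bot's actual code:

```typescript
// Sketch: fail fast at startup when REDIS_URL is missing or malformed.
// The function name and checks are illustrative assumptions.

function requireRedisUrl(env: Record<string, string | undefined>): URL {
  const raw = env.REDIS_URL;
  if (!raw) {
    throw new Error("REDIS_URL is required, e.g. redis://localhost:6379");
  }
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== "redis:" && url.protocol !== "rediss:") {
    throw new Error(`REDIS_URL must use redis:// or rediss://, got ${url.protocol}`);
  }
  return url;
}
```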
First, install the Node.js version defined in the .nvmrc file and pnpm (ideally through Corepack).
# Install dependencies
$ pnpm install
# Start the bot in development mode
$ pnpm start
# Build the bot for production
$ pnpm build
# Lint, type check, and test
$ pnpm lint && pnpm tsc && pnpm test
# Test with coverage report
$ pnpm test:coverage

If you are using VS Code or a compatible editor, you can start the bot with a debugger attached using the debug task.
The /prompt command is powered by LLM tool calling. To find the best model for this use case, we run a benchmark suite of 19 real-world prompts against multiple models, measuring accuracy (correct tool calls) and latency (TTFT + total time).
# Requires OPENAI_API_KEY and MISTRAL_API_KEY in .env
$ pnpm prompt-benchmark

Disqualified models
| Model | Provider | Reason |
|---|---|---|
| gpt-5-mini (medium) | OpenAI | Slower and less accurate than gpt-5-mini (low) – more reasoning effort hurts on tool-calling tasks |
| gpt-5-nano (medium) | OpenAI | 2x slower than gpt-5-nano (low) with no accuracy benefit |
| gpt-4.1-nano | OpenAI | Consistently worst accuracy (66-71%) across all runs |
| ministral-14b | Mistral | No clear niche – slower than ministral-8b, less accurate than mistral-small-4, inconsistent latency |
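The two metrics the benchmark reports can be sketched as follows. The `PromptResult` shape and field names are assumptions for illustration, not the suite's real types:

```typescript
// Sketch: aggregating benchmark runs into accuracy and latency figures.
// Types and names here are illustrative assumptions.

interface PromptResult {
  expectedTool: string; // tool call the prompt should produce
  calledTool: string;   // tool call the model actually made
  ttftMs: number;       // time to first token
  totalMs: number;      // total completion time
}

/** Fraction of prompts where the model made the correct tool call. */
function accuracy(results: PromptResult[]): number {
  const correct = results.filter((r) => r.calledTool === r.expectedTool).length;
  return correct / results.length;
}

/** Mean TTFT and mean total time across all prompts. */
function meanLatency(results: PromptResult[]): { ttftMs: number; totalMs: number } {
  const n = results.length;
  return {
    ttftMs: results.reduce((sum, r) => sum + r.ttftMs, 0) / n,
    totalMs: results.reduce((sum, r) => sum + r.totalMs, 0) / n,
  };
}
```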
The bot was designed to be used on a single Discord server, at a small scale – typically one voice channel at a time. Therefore, it doesn't scale well horizontally.
- the Docker images are tagged with commit SHAs
- the Helm chart uses a `MAJOR.MINOR.PATCH` format where:
  - the `PATCH` increases with each Docker image update
  - the `MINOR` increases when other changes are made to the chart itself, e.g., when a Redis version is updated
The project should be considered to be in a pre-alpha stage:
- frequent nightly builds are released
- breaking changes occur often
- existing features may evolve or be removed without notice
No timeline for an alpha/beta release is planned.
Please read the Legal Disclaimer before using this software.
This project contains code generated by Large Language Models (LLMs), under human supervision and proofreading.
MIT
