xxczaki/discord-bot


🎵 Music bot for my private Discord server, powered by discord-player

Preview of the /play command

Features

  • Music playback from SoundCloud, with Spotify search and metadata bridging (YouTube opt-in via ENABLE_YOUTUBE)
    • On-disk Opus file cache with fuzzy matching for instant replays
    • Redis-backed query cache for fast repeated searches
  • Audio filters (bassboost, nightcore, 8D, and more) and adjustable tempo
  • Spotify-like volume normalization
  • AI-powered natural language queue control (/prompt)
  • Slash commands with autocomplete and interactive buttons throughout
  • Playlist management via a dedicated Discord channel
    • Queue up to 5 playlists at once, with head/tail slicing
  • Queue tools: deduplicate, sort, shuffle, move, and more
  • Queue recovery across graceful restarts and stream errors
  • Redis-backed playback statistics (top tracks, requesters, playlists)
  • Lockdown mode for restricting command access temporarily
  • Maintenance mode via Kubernetes API
  • Tic-tac-toe
  • Integration with Sentry for error tracking
  • Structured logging with Pino
  • Easy deployment with a Helm chart
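Several of the queue tools above (deduplicate, sort, shuffle) are simple transforms over the track list. As a minimal sketch, not the bot's actual implementation, and with the `Track` shape assumed for illustration, a dedupe pass could look like:

```typescript
// Minimal stand-in for a player track (shape is an assumption for illustration).
interface Track {
	title: string;
	url: string;
}

// Remove duplicate tracks, keeping the first occurrence of each URL
// and preserving the original queue order.
function dedupe(queue: Track[]): Track[] {
	const seen = new Set<string>();
	return queue.filter((track) => {
		if (seen.has(track.url)) return false;
		seen.add(track.url);
		return true;
	});
}

const queue: Track[] = [
	{ title: 'A', url: 'https://soundcloud.com/a' },
	{ title: 'B', url: 'https://soundcloud.com/b' },
	{ title: 'A (again)', url: 'https://soundcloud.com/a' },
];

console.log(dedupe(queue).length); // 2
```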

Deployment

You have two options:

  1. Use the Helm chart (recommended)
  2. Use the Docker image from either Docker Hub or GitHub Container Registry

Note

Required environment variables:

  • REDIS_URL – Redis instance connection string
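For the Docker route, a minimal invocation might look like the following. The image name and tag are assumptions; check Docker Hub or GHCR for the actual coordinates, and supply your Discord bot token and any other secrets your setup requires:

```shell
# Run the bot container with the required Redis connection string.
# (Image name/tag are assumptions; adjust to the published image.)
docker run -d \
  -e REDIS_URL="redis://redis:6379" \
  xxczaki/discord-bot:latest
```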


Development

First, install the Node.js version defined in the .nvmrc file, then install pnpm (ideally through Corepack).

# Install dependencies
$ pnpm install

# Start the bot in development mode
$ pnpm start

# Build the bot for production
$ pnpm build

# Lint, type check, and test
$ pnpm lint && pnpm tsc && pnpm test

# Test with coverage report
$ pnpm test:coverage

If you are using VS Code or a compatible editor, you can start the bot with a debugger attached using the debug task.

Prompt Benchmark

The /prompt command is powered by LLM tool calling. To find the best model for this use case, we run a benchmark suite of 19 real-world prompts against multiple models, measuring accuracy (correct tool calls) and latency (time to first token, TTFT, plus total completion time).

# Requires OPENAI_API_KEY and MISTRAL_API_KEY in .env
$ pnpm prompt-benchmark
Disqualified models:

  • gpt-5-mini (medium) (OpenAI) – slower and less accurate than gpt-5-mini (low); more reasoning effort hurts on tool-calling tasks
  • gpt-5-nano (medium) (OpenAI) – 2x slower than gpt-5-nano (low) with no accuracy benefit
  • gpt-4.1-nano (OpenAI) – consistently worst accuracy (66–71%) across all runs
  • ministral-14b (Mistral) – no clear niche: slower than ministral-8b, less accurate than mistral-small-4, inconsistent latency
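The per-model accuracy and latency figures above can be aggregated from per-prompt results. As a sketch only, with the result shape and field names assumed rather than taken from the benchmark's actual types:

```typescript
// Hypothetical per-prompt benchmark result (field names are assumptions).
interface PromptResult {
	correctToolCall: boolean;
	ttftMs: number; // time to first token
	totalMs: number; // total completion time
}

interface ModelSummary {
	accuracy: number; // fraction of prompts with the correct tool call
	meanTtftMs: number;
	meanTotalMs: number;
}

// Aggregate a model's per-prompt results into accuracy and mean latencies.
function summarize(results: PromptResult[]): ModelSummary {
	const correct = results.filter((r) => r.correctToolCall).length;
	const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
	return {
		accuracy: correct / results.length,
		meanTtftMs: mean(results.map((r) => r.ttftMs)),
		meanTotalMs: mean(results.map((r) => r.totalMs)),
	};
}

const summary = summarize([
	{ correctToolCall: true, ttftMs: 120, totalMs: 800 },
	{ correctToolCall: true, ttftMs: 140, totalMs: 900 },
	{ correctToolCall: false, ttftMs: 100, totalMs: 700 },
]);

console.log(summary.accuracy.toFixed(2)); // "0.67"
```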

Limitations

The bot was designed for a single Discord server at a small scale, typically serving one voice channel at a time. As a result, it doesn't scale well horizontally.

Versioning policy

  • the Docker images are tagged with commit SHAs
  • the Helm chart uses a MAJOR.MINOR.PATCH format where:
    • the PATCH increases with each Docker image update
    • the MINOR increases when other changes are made to the chart itself, e.g., when a Redis version is updated

Stability

The project should be considered pre-alpha:

  • frequent nightly builds are released
  • breaking changes occur often
  • existing features may evolve or be removed without notice

There is currently no timeline for an alpha or beta release.

Legal

Please read the Legal Disclaimer before using this software.

AI disclosure

This project contains code generated by Large Language Models (LLMs), under human supervision and proofreading.

License

MIT
