Releases: Basekick-Labs/arc
Arc 26.02.1
Arc 26.02.1 Release Summary
Quick Start
Docker
docker run -d \
-p 8000:8000 \
-v arc-data:/app/data \
ghcr.io/basekick-labs/arc:26.02.1
Debian/Ubuntu (amd64, arm64)
# Download and install
wget https://github.com/basekick-labs/arc/releases/download/v26.02.1/arc_26.02.1_amd64.deb
sudo dpkg -i arc_26.02.1_amd64.deb
# Start and enable
sudo systemctl enable arc
sudo systemctl start arc
# Check status
curl http://localhost:8000/health
RHEL/Fedora/Rocky (x86_64, aarch64)
# Download and install
wget https://github.com/basekick-labs/arc/releases/download/v26.02.1/arc-26.02.1-1.x86_64.rpm
sudo rpm -i arc-26.02.1-1.x86_64.rpm
# Start and enable
sudo systemctl enable arc
sudo systemctl start arc
Kubernetes (Helm)
helm install arc https://github.com/basekick-labs/arc/releases/download/v26.02.1/arc-26.02.1.tgz
New Features
InfluxDB Client Compatibility
Arc's Line Protocol endpoints now use the same paths as InfluxDB, enabling drop-in compatibility with all official InfluxDB client libraries (Go, Python, JavaScript, Java, C#, PHP, Ruby, Telegraf, Node-RED). Point your existing InfluxDB clients at Arc - no code changes required.
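As a minimal sketch of this drop-in compatibility, the stock influxdb-client Python package can simply be pointed at Arc; the URL, token, org, bucket, and measurement values below are placeholders, and the exact auth/bucket semantics should be confirmed against the Arc docs.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Point the standard InfluxDB v2 client at Arc (placeholder URL/token/bucket values)
client = InfluxDBClient(url="http://localhost:8000", token="YOUR_ARC_TOKEN", org="-")
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(bucket="mydb", record=Point("cpu").tag("host", "server01").field("usage", 42.5))
client.close()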
MQTT Ingestion Support
Native MQTT subscription for IoT and edge data ingestion. Subscribe to MQTT topics with wildcard support, dynamic subscription management via REST API, TLS/SSL connections, auto-reconnect, and per-subscription monitoring. Passwords encrypted at rest, subscriptions auto-start on server restart.
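A minimal arc.toml sketch for enabling the feature, assuming the [mqtt] section implied by the mqtt.enabled setting in the Upgrade Notes; the broker and topic keys are commented out because their names are illustrative guesses, not confirmed options.
[mqtt]
enabled = true   # feature is disabled by default (see Upgrade Notes)
# The keys below are illustrative placeholders only; consult the MQTT
# ingestion documentation for the exact option names.
# broker = "tls://broker.example.com:8883"
# topics = ["sensors/#"]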
S3 File Caching (Optional)
In-memory caching of S3 Parquet files via DuckDB's cache_httpfs extension. Improves query performance 5-10x for workloads with repeated file access (CTEs, subqueries, Grafana dashboards). Opt-in feature, disabled by default.
Contributed by @khalid244
Relative Time Expression Support
Queries using NOW() - INTERVAL now benefit from partition pruning. Previously only literal timestamps worked. Now expressions like time > NOW() - INTERVAL '20 days' properly prune to relevant partitions, dramatically reducing query times.
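For example, a query of the following shape (table and column names are illustrative) now prunes to the partitions covering the last 20 days instead of scanning every partition:
SELECT host, avg(usage)
FROM cpu
WHERE time > NOW() - INTERVAL '20 days'
GROUP BY host;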
Bug Fixes
- Control characters in measurement names - Fixed S3 failures caused by invalid characters in measurement names
- Missing S3 partitions - Queries no longer fail when time range includes non-existent partitions (day-level file verification contributed by @khalid244)
- Server timeout config ignored - Now respects configured read/write timeout values
- Large payload rejection - Fixed 413 errors on payloads >4MB
- Timestamp timezone inconsistency - All timestamps now normalized to UTC
- Azure SSL errors on Linux - Fixed certificate validation issues (contributed by @schotime)
- Compaction filename timezones - Files now use UTC consistently (contributed by @schotime)
- S3 subprocess config - Fixed compaction failures on S3-compatible storage (Hetzner, MinIO)
- Non-UTF8 data - Invalid UTF-8 automatically sanitized during ingestion
- Nanosecond timestamps - MessagePack now correctly handles nanosecond precision
- Multi-line query parsing - WHERE clause extraction now works across newlines (contributed by @khalid244)
- String literals with SQL keywords - Partition pruning no longer breaks on embedded keywords
- Buffer flush timing - Age-based flushes now fire closer to configured intervals under high load
- Arrow writer panic - Fixed crash during high-concurrency writes with schema evolution
- Empty directories - Cleaned up after daily compaction
- Compactor OOM/segfaults - Streaming I/O, memory limit passthrough, file batching, adaptive splitting
- Orphaned temp directories - Cleaned up on startup and after subprocess completion
- Compaction data duplication - Manifest-based tracking prevents re-compaction after crashes (contributed by @khalid244)
- WAL S3 recovery - Startup and periodic recovery from transient S3 failures (contributed by @khalid244)
- Tiered storage routing - X-Arc-Database header now queries cold tier data
- Retention policies - Now work with S3/Azure storage backends
- Query timeout - Prevents indefinite hangs when S3 disconnects mid-query (contributed by @khalid244)
Improvements
- Configurable server timeouts - Idle and shutdown timeouts now configurable
- Automatic time function optimization - time_bucket() and date_trunc() rewritten to epoch arithmetic (2-2.5x faster GROUP BY; illustrative SQL sketch after this list)
- Parallel partition scanning - Multi-partition queries execute concurrently (2-4x speedup)
- Two-stage distributed aggregation - Cross-shard aggregations use scatter-gather (5-20x speedup, Enterprise only)
- DuckDB query optimizations - Metadata caching, prefetching, insertion order preservation (18-24% faster aggregations) (SET GLOBAL fix contributed by @khalid244)
- Regex-to-string optimization - URL domain extraction rewritten to native functions (2x+ faster)
- Database header optimization - x-arc-database header skips regex parsing (4-17% faster queries)
- MQTT auto-generated client ID - Prevents collisions when running multiple instances
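The time function rewrite above can be pictured with an illustrative pair of queries; the table, column, and bucket size are placeholders, and this is not Arc's literal rewrite output.
-- Original form
SELECT date_trunc('hour', time) AS bucket, avg(usage)
FROM cpu
GROUP BY bucket;

-- Rewritten to epoch arithmetic along these lines
SELECT to_timestamp(floor(epoch(time) / 3600) * 3600) AS bucket, avg(usage)
FROM cpu
GROUP BY bucket;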
Security
Token hashing uses bcrypt (cost 10) with SHA256-based prefixes for O(1) lookups. Legacy SHA256 tokens continue to work for backward compatibility.
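As an illustrative sketch of that lookup scheme (not Arc's actual Go implementation; the prefix length and storage shape are assumptions), a SHA256-derived prefix narrows the candidate set to a single row before the bcrypt comparison:
import hashlib
import bcrypt  # pip install bcrypt

def token_prefix(token: str, length: int = 16) -> str:
    # Deterministic, indexable prefix derived from the token (length is an assumption)
    return hashlib.sha256(token.encode()).hexdigest()[:length]

def verify(token: str, hashes_by_prefix: dict[str, bytes]) -> bool:
    stored = hashes_by_prefix.get(token_prefix(token))  # O(1) index lookup by prefix
    return stored is not None and bcrypt.checkpw(token.encode(), stored)  # full bcrypt check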
Breaking Changes
Line Protocol endpoint paths renamed to match InfluxDB API:
- /api/v1/write → /write
- /api/v1/write/influxdb → /api/v2/write
Update client configurations accordingly. InfluxDB client libraries work unchanged with the new paths.
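A sketch of a Line Protocol write against the renamed endpoint; the query parameters follow the standard InfluxDB v2 write API and the token and database names are placeholders, so verify the exact parameters against the Arc docs.
curl -X POST "http://localhost:8000/api/v2/write?bucket=mydb&precision=ns" \
  -H "Authorization: Bearer $TOKEN" \
  --data-binary 'cpu,host=server01 usage=42.5'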
Upgrade Notes
- MQTT feature disabled by default. Enable with mqtt.enabled = true
- Empty directory cleanup is automatic for new compaction runs only
- Existing empty directories from previous runs are not automatically cleaned
What's Changed
- Feature/mqtt ingestion by @xe-nvdk in #91
- fix: Clean up empty directories after daily compaction by @xe-nvdk in #95
- fix: Support relative time expressions in partition pruning by @xe-nvdk in #96
- Feature/time bucket optimization by @xe-nvdk in #97
- feat: Add date_trunc() to epoch optimization for 2.5x faster GROUP BY by @xe-nvdk in #98
- perf: Add fast-path checks to time function rewrites by @xe-nvdk in #99
- fix: Add AzureTransportOptionType so that curl can be used when querying due to CA certificates error by @schotime in #92
- feat(enterprise): Add license-gated CQ and retention schedulers by @xe-nvdk in #100
- feat(enterprise): Add RBAC with security hardening by @xe-nvdk in #101
- fix(compaction): Resolve OOM and segfaults with large datasets by @xe-nvdk in #103
- fix: Buffer flush bug and Arrow endpoint SQL cache by @xe-nvdk in #104
- chore: Upgrade DuckDB to 1.4.3 and fix RBAC tests by @xe-nvdk in #105
- Fix/compaction batch race condition by @xe-nvdk in #106
- Perf/query optimization by @xe-nvdk in #107
- perf: Add x-arc-database header support for query optimization by @xe-nvdk in #108
- perf: Optimize header-based query parsing with fast paths by @xe-nvdk in #109
- feat(cluster): Add Phase 2 enterprise clustering foundation by @xe-nvdk in #110
- feat(cluster): Add Phase 3 cluster routing and WAL replication by @xe-nvdk in #111
- feat(cluster): Add Phase 4 multi-writer sharding foundation by @xe-nvdk in #112
- fix(wal): Prevent integer overflow in payload allocation by @xe-nvdk in #114
- feat(api): InfluxDB-compatible endpoints for drop-in client migration by @xe-nvdk in #115
- fix(auth): Add CodeQL suppression comments for SHA256 false positives by @xe-nvdk in #116
- docs: Fix MQTT configuration examples in release notes by @xe-nvdk in #117
- fix(api): Apply MaxPayloadSize config to Fiber BodyLimit by @xe-nvdk in #118
- feat(query): Parallel partition scanning and two-stage distributed aggregation by @xe-nvdk in #119
- feat(query): Add regex-to-string function rewriter for 2.2x speedup by @xe-nvdk in #120
- feat(api): Add REGEXP_EXTRACT to string function rewriter by @xe-nvdk in #121
- fix(api): Validate measurement names to prevent S3 XML parsing errors by @xe-nvdk in #124
- fix(pruning): Filter non-existent S3/Azure partitions before query execution by @xe-nvdk in #127
- fix(config): Use configured server read/write timeout values by @xe-nvdk in #128
- feat(config): Add server idle_timeout and shutdown_timeout config options by @xe-nvdk in #129
- fix: Ensure UTC dates for compaction filenames by @schotime in #132
- Fix/s3 subprocess credentials by @xe-nvdk in #135
- fix(ingest): prevent panic during high-concurrency writ...
Arc v26.01.2
Arc v26.01.2
Bugfix release addressing Azure Blob Storage backend issues and authentication configuration.
Bug Fixes
Azure Blob Storage Backend
- Fix queries failing with Azure backend - Queries were incorrectly using local filesystem paths (./data/...) instead of Azure blob paths (azure://...) when using Azure Blob Storage as the storage backend.
- Fix compaction subprocess Azure authentication - Compaction subprocess was failing with "DefaultAzureCredential: failed to acquire token" because credentials weren't being passed to the subprocess. Now passes AZURE_STORAGE_KEY via an environment variable.
Configuration
- Authentication enabled by default - auth.enabled is now true by default in arc.toml for improved security out of the box.
Files Changed
- internal/api/query.go - Add Azure case to getStoragePath()
- internal/database/duckdb.go - Add configureAzureAccess() for DuckDB azure extension
- internal/compaction/manager.go - Pass Azure credentials to subprocess via env var
- internal/compaction/subprocess.go - Read Azure credentials from env var
- internal/storage/azure.go - Add GetAccountKey() method
- arc.toml - Set auth.enabled = true by default
Upgrade Notes
- If you were relying on authentication being disabled by default, you'll need to explicitly set auth.enabled = false in your arc.toml.
Arc 26.01.1
Arc 26.01.1 Release Notes
New Features
Official Python SDK
The official Python SDK for Arc is now available on PyPI as arc-tsdb-client.
Installation:
pip install arc-tsdb-client
# With DataFrame support
pip install arc-tsdb-client[pandas] # pandas
pip install arc-tsdb-client[polars] # polars
pip install arc-tsdb-client[all]    # all optional dependencies
Key features:
- High-performance MessagePack columnar ingestion
- Query support with JSON, Arrow IPC, pandas, polars, and PyArrow responses
- Full async API with httpx
- Buffered writes with automatic batching (size and time thresholds)
- Complete management API (retention policies, continuous queries, delete operations, authentication)
- DataFrame integration for pandas, polars, and PyArrow
Documentation: https://docs.basekick.net/arc/sdks/python
Azure Blob Storage Backend
Arc now supports Azure Blob Storage as a storage backend, enabling deployment on Microsoft Azure infrastructure.
Configuration options:
- storage_backend = "azure" or "azblob"
- Connection string authentication
- Account key authentication
- SAS token authentication
- Managed Identity support (recommended for Azure deployments)
Example configuration:
[storage]
backend = "azure"
azure_container = "arc-data"
azure_account_name = "mystorageaccount"
# Use one of: connection_string, account_key, sas_token, or managed identity
azure_use_managed_identity = true
Native TLS/SSL Support
Arc now supports native HTTPS/TLS without requiring a reverse proxy, ideal for users running Arc from native packages (deb/rpm) on bare metal or VMs.
Configuration options:
- server.tls_enabled - Enable/disable native TLS
- server.tls_cert_file - Path to certificate PEM file
- server.tls_key_file - Path to private key PEM file
Environment variables:
- ARC_SERVER_TLS_ENABLED
- ARC_SERVER_TLS_CERT_FILE
- ARC_SERVER_TLS_KEY_FILE
Example configuration:
[server]
port = 443
tls_enabled = true
tls_cert_file = "/etc/letsencrypt/live/example.com/fullchain.pem"
tls_key_file = "/etc/letsencrypt/live/example.com/privkey.pem"
Key features:
- Uses Fiber's built-in ListenTLS() for direct HTTPS support
- Automatic HSTS header (Strict-Transport-Security) when TLS is enabled
- Certificate and key file validation on startup
- Backward compatible - TLS disabled by default
Configurable Ingestion Concurrency
Ingestion concurrency settings are now configurable to support high-concurrency deployments with many simultaneous clients.
Configuration options:
- ingest.flush_workers - Async flush worker pool size (default: 2x CPU cores, min 8, max 64)
- ingest.flush_queue_size - Pending flush queue capacity (default: 4x workers, min 100)
- ingest.shard_count - Buffer shards for lock distribution (default: 32)
Environment variables:
- ARC_INGEST_FLUSH_WORKERS
- ARC_INGEST_FLUSH_QUEUE_SIZE
- ARC_INGEST_SHARD_COUNT
Example configuration for high concurrency:
[ingest]
flush_workers = 32 # More workers for parallel I/O
flush_queue_size = 200 # Larger queue for burst handling
shard_count = 64        # More shards to reduce lock contention
Key features:
- Defaults scale dynamically with CPU cores (similar to QuestDB and InfluxDB)
- Previously hardcoded values now tunable for specific workloads
- Helps prevent flush queue overflow under high concurrent load
Data-Time Partitioning
Parquet files are now organized by the data's timestamp instead of ingestion time, enabling proper backfill of historical data.
Key features:
- Historical data lands in correct time-based partitions (e.g., December 2024 data goes to 2024/12/ folders, not today's folder)
- Batches spanning multiple hours are automatically split into separate files per hour
- Data is sorted by timestamp within each Parquet file for optimal query performance
- Enables accurate partition pruning for time-range queries
How it works:
- Single-hour batches: sorted and written to one file
- Multi-hour batches: split by hour boundary, each hour sorted independently
Example: Backfilling data from December 1st, 2024:
# Before: All data went to ingestion date
data/mydb/cpu/2025/01/04/... (wrong - today's partition)
# After: Data goes to correct historical partition
data/mydb/cpu/2024/12/01/14/... (correct - data's timestamp)
data/mydb/cpu/2024/12/01/15/...
Contributed by @schotime
Compaction API Triggers
Hourly and daily compaction now have separate schedules and can be triggered manually via API.
API Endpoints:
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/compaction/hourly | Trigger hourly compaction |
| POST | /api/v1/compaction/daily | Trigger daily compaction |
Configuration:
[compaction]
hourly_schedule = "0 * * * *" # Every hour
daily_schedule = "0 2 * * *"   # Daily at 2 AM
Contributed by @schotime
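Manual triggering follows the same Bearer-token pattern as the other API examples in these notes (the token is a placeholder):
# Trigger a daily compaction run on demand
curl -X POST -H "Authorization: Bearer $TOKEN" \
  http://localhost:8000/api/v1/compaction/daily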
Configurable Max Payload Size
The maximum request payload size for write endpoints is now configurable, with the default increased from 100MB to 1GB.
Configuration options:
- server.max_payload_size - Maximum payload size (e.g., "1GB", "500MB")
- Environment variable: ARC_SERVER_MAX_PAYLOAD_SIZE
Example configuration:
[server]
max_payload_size = "2GB"
Key features:
- Applies to both compressed and decompressed payloads
- Supports human-readable units: B, KB, MB, GB
- Improved error messages suggest batching when limit is exceeded
- Default increased 10x from 100MB to 1GB to support larger bulk imports
Database Management API
New REST API endpoints for managing databases programmatically, enabling pre-creation of databases before agents send data.
Endpoints:
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/databases | List all databases with measurement counts |
| POST | /api/v1/databases | Create a new database |
| GET | /api/v1/databases/:name | Get database info |
| GET | /api/v1/databases/:name/measurements | List measurements in a database |
| DELETE | /api/v1/databases/:name | Delete a database (requires delete.enabled=true) |
Example usage:
# List databases
curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/api/v1/databases
# Create a database
curl -X POST -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "production"}' \
http://localhost:8000/api/v1/databases
# Delete a database (requires confirmation)
curl -X DELETE -H "Authorization: Bearer $TOKEN" \
"http://localhost:8000/api/v1/databases/old_data?confirm=true"Key features:
- Database name validation (alphanumeric, underscore, hyphen; must start with letter; max 64 characters)
- Reserved names protected (system, internal, _internal)
- DELETE respects delete.enabled configuration for safety
- DELETE requires ?confirm=true query parameter
- Works with all storage backends (local, S3, Azure)
DuckDB S3 Query Support (httpfs)
Arc now configures the DuckDB httpfs extension automatically, enabling direct queries against Parquet files stored in S3.
Key improvements:
- Automatic httpfs extension installation and configuration
- S3 credentials passed to DuckDB for authenticated access
- SET GLOBAL used to persist credentials across connection pool
- Works with standard S3 buckets (note: S3 Express One Zone uses different auth and is not supported by httpfs)
Configuration:
[storage]
backend = "s3"
s3_bucket = "my-bucket"
s3_region = "us-east-2"
# Credentials via environment variables recommended:
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
Improvements
Storage Backend Interface Enhancements
- Added ListDirectories() method for efficient partition discovery
- Added ListObjects() method for listing files within partitions
- Both local and S3 backends implement the enhanced interface
Compaction Subprocess Improvements
- Fixed "argument list too long" error when compacting partitions with many files
- Job configuration now passed via stdin instead of command-line arguments
- Supports compaction of partitions with 15,000+ files
Arrow Writer Enhancements
- Added row-to-columnar conversion for efficient data ingestion
- Improved buffer management for high-throughput scenarios
Ingestion Pipeline Optimizations
- Zstd compression support: Added Zstd decompression for MessagePack payloads. Zstd achieves 9.57M rec/sec with only 5% overhead vs uncompressed (compared to 12% overhead with GZIP at 8.85M rec/sec). Auto-detected via magic bytes - no client configuration required. A client-side compression sketch follows this list.
- Consolidated type conversion helpers: Extracted common toInt64(), toFloat64(), firstNonNil() functions, eliminating ~100 lines of duplicate code across the ingestion pipeline.
- O(n log n) column sorting: Replaced O(n²) bubble sort with sort.Slice() for column ordering in schema inference.
- Single-pass timestamp normalization: Reduced from 2-3 passes to single pass for timestamp type conversion and unit normalization.
- Result: 7% throughput improvement (9.47M → 10.1M rec/s), 63% p50 latency reduction (8.40ms → 3.09ms), 84% p99 latency reduction (42.29ms → 6.73ms).
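Following up on the Zstd item above, a client-side sketch; the batch layout and write endpoint are not specified in these notes, so placeholders are used.
import msgpack
import zstandard as zstd

batch = {"example": [1, 2, 3]}  # placeholder; use Arc's MessagePack columnar batch format
body = zstd.ZstdCompressor().compress(msgpack.packb(batch))
# POST `body` as the request payload to Arc's MessagePack write endpoint;
# Zstd is auto-detected via magic bytes, so no extra client configuration is needed.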
Authentication Performance Optimizations
- Token lookup index: Added token_prefix column with database index for O(1) token lookup instead of O(n) full table scan. Reduces bcrypt comparisons from O(n/2) average to O(1-2) per cache miss.
- Atomic cache counters: Replaced mutex-protected counters with atomic.Int64 operations...
Arc 25.12.1
Arc v25.12.1 - Go Implementation
Major Release: Complete rewrite from Python to Go
Arc is a high-performance time-series database built on DuckDB, optimized for IoT, observability, and analytics workloads.
Migration Highlights
This release marks the complete migration from Python to Go, delivering:
Performance Improvements
- 9.47M records/sec MessagePack ingestion (125% faster than Python's 4.21M)
- 1.92M records/sec Line Protocol ingestion (76% faster than Python's 1.09M)
- 2.88M rows/sec Arrow query throughput
Reliability
- Memory stable: No memory leaks (Python leaked 372MB per 500 queries)
- Single binary: No Python dependencies, pip, or virtual environments
- Type-safe: Strong typing catches bugs at compile time
Full Feature Parity
- ✅ Authentication (user/password)
- ✅ Automatic Compaction (Parquet optimization)
- ✅ Write-Ahead Log (WAL for durability)
- ✅ Retention Policies (automatic data expiration)
- ✅ Continuous Queries (real-time aggregations)
- ✅ Delete API (selective data removal)
- ✅ S3/MinIO storage backend
- ✅ Arrow IPC query responses
Quick Start
Docker
docker run -d \
-p 8000:8000 \
-v arc-data:/app/data \
ghcr.io/basekick-labs/arc:25.12.1
Debian/Ubuntu (amd64, arm64)
# Download and install
wget https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc_25.12.1_amd64.deb
sudo dpkg -i arc_25.12.1_amd64.deb
# Start and enable
sudo systemctl enable arc
sudo systemctl start arc
# Check status
curl http://localhost:8000/health
RHEL/Fedora/Rocky (x86_64, aarch64)
# Download and install
wget https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc-25.12.1-1.x86_64.rpm
sudo rpm -i arc-25.12.1-1.x86_64.rpm
# Start and enable
sudo systemctl enable arc
sudo systemctl start arc
Kubernetes (Helm)
helm install arc https://github.com/basekick-labs/arc/releases/download/v25.12.1/arc-25.12.1.tgz
Download Artifacts
| Platform | Architecture | Package |
|---|---|---|
| Docker | amd64, arm64 | ghcr.io/basekick-labs/arc:25.12.1 |
| Debian/Ubuntu | amd64 | arc_25.12.1_amd64.deb |
| Debian/Ubuntu | arm64 | arc_25.12.1_arm64.deb |
| RHEL/Fedora | x86_64 | arc-25.12.1-1.x86_64.rpm |
| RHEL/Fedora | aarch64 | arc-25.12.1-1.aarch64.rpm |
| Kubernetes | - | arc-25.12.1.tgz (Helm) |
Breaking Changes
- Python version: The Python implementation is preserved in the python-legacy branch
- Configuration: TOML config format (unchanged, but verify your arc.toml)
Upgrading from Python
- Stop existing Arc service
- Backup your data directory
- Install the new Go binary (same config format)
- Start Arc - data is automatically migrated
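A shell sketch of those steps; the data directory path is a placeholder, and you should pick the .deb or .rpm package matching your platform.
sudo systemctl stop arc
cp -r /path/to/arc-data /path/to/arc-data.bak    # back up the data directory first
sudo dpkg -i arc_25.12.1_amd64.deb               # or: sudo rpm -i arc-25.12.1-1.x86_64.rpm
sudo systemctl start arc                         # data is migrated automatically on startup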
Documentation
Community
Arc 25.11.1
Welcome to our very first release 👯
One database for metrics, logs, traces, and events.
Query all your observability data with SQL. Built on DuckDB + Parquet. 6.57M records/sec unified ingestion.
Quick Start
Docker
docker run -d \
-p 8000:8000 \
-e STORAGE_BACKEND=local \
-v arc-data:/app/data \
ghcr.io/basekick-labs/arc:25.11.1
Kubernetes (Helm)
helm install arc https://github.com/Basekick-Labs/arc/releases/download/v25.11.1/arc-25.11.1.tgz
kubectl port-forward svc/arc 8000:8000
Features
High-Performance Ingestion
- 6.57M records/sec unified: Ingest metrics, logs, traces, and events simultaneously through one endpoint
- MessagePack columnar protocol: Zero-copy ingestion optimized for throughput
- InfluxDB Line Protocol: 240K records/sec for Telegraf compatibility and easy migration
Query & Analytics
- DuckDB SQL engine: Full analytical SQL with window functions, CTEs, joins, and aggregations
- Cross-database queries: Join metrics, logs, and traces in a single SQL query
- Query caching: Configurable result caching for repeated analytical queries
- Apache Arrow format: Zero-copy columnar data transfer for Pandas/Polars pipelines
Storage & Scalability
- Columnar Parquet storage: 3-5x compression ratios, optimized for analytical queries
- Flexible backends: Local filesystem, MinIO, AWS S3/R2, Google Cloud Storage, or any S3-compatible storage
- Multi-database architecture: Organize data by environment, tenant, or application with database namespaces
- Automatic compaction: Merges small files into optimized 512MB files for 10-50x faster queries
Data Management
- Retention policies: Time-based data lifecycle management with automatic cleanup
- Continuous queries: Downsampling and materialized views for long-term data aggregation
- GDPR-compliant deletion: Precise deletion with zero overhead on writes/queries
- Write-Ahead Log (WAL): Optional durability feature for zero data loss (disabled by default for max throughput)
Integrations & Tools
- VSCode Extension: Full-featured database manager with query editor, notebooks, CSV import, and alerting
- Apache Superset: Native dialect for BI dashboards and visualizations
- Grafana: Native Data Source
- Prometheus: Ingest via Telegraf bridge (native remote write coming Q1 2026)
- OpenTelemetry: Ingest via OTEL Collector (native receivers coming Q1 2026)
- Telegraf Arc output plugin (In progress)
Operations & Monitoring
- Health checks: /health and /ready endpoints for orchestration
- Prometheus metrics: Export operational metrics for monitoring
- Authentication: Token-based API authentication with cache for performance
- Production ready: Docker, native deployment, and systemd service management
Performance
Unified Ingestion Benchmark (Apple M3 Max, 14 cores):
- Metrics: 2.91M/sec
- Logs: 1.55M/sec
- Traces: 1.50M/sec
- Events: 1.54M/sec
- Total: 6.57M records/sec (all data types simultaneously)
ClickBench Results (AWS c6a.4xlarge, 100M rows):
- Cold run: 120.25s
- Warm run: 35.70s
- 12.4x faster than TimescaleDB
- 1.2x faster than QuestDB (Combined and Cold Run)
Known Issues
- High Availability and clustering not yet implemented (coming Q1 2026)
- Native Prometheus remote write endpoint in development
- Native OTLP receivers in development
- Performance penalty with Docker Desktop.
Roadmap
Q1 2026:
- Arc Cloud managed hosting
- Read Replicas
- Enhanced authentication (RBAC, SSO)
💬 Community
- Discord: Join community
- GitHub: Report issues
- Website: basekick.net