feat: comprehensive codebase improvements and Docker optimization #20
Conversation
- Enhanced error handling with comprehensive try-catch blocks
- Added environment validation with startup checks
- Implemented structured logging with configurable log levels
- Optimized Docker setup with multi-stage builds and security improvements
- Added development and production Docker configurations
- Improved Redis connection handling with reconnection strategies
- Enhanced RSS monitoring with better error recovery and cleanup
- Modularized codebase with utility functions and constants
- Added ESLint configuration for code quality
- Improved command structure with better embed formatting
- Added rate limiting and performance optimizations
- Created Docker management scripts for easier deployment
- Updated documentation with comprehensive setup instructions
- Added health checks for container monitoring
- Implemented graceful shutdown handling
- Enhanced API timeout configurations and error recovery

Breaking Changes:
- Environment variable validation is now enforced at startup
- Redis connection requires proper error handling
- Some command structures have been updated for consistency

Migration Guide:
1. Copy .env.docker to .env and configure your environment
2. Use Docker for deployment: ./docker-manage.sh start
3. For local development: npm run dev
4. Check environment: npm run validate
**Walkthrough**

Adds Docker tooling and management scripts, CI workflow, environment templates, ESLint, health-check, logging/validation utilities, Redis and RSS lifecycle refactors, dynamic async command loading, client AI/support and unit-conversion features, enhanced command UIs, and multiple example/config files.

**Changes**
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    participant App as App (index.js)
    participant Val as utils/validation
    participant Log as utils/logger
    participant Redis as redisClient
    participant Client as Discord Client
    participant RSS as rss.js
    App->>Val: validateEnvironment()
    Val-->>App: {isValid, missing, warnings}
    App->>Val: setEnvironmentDefaults()
    App->>Log: logValidationResults()
    App->>Redis: connect (reconnectStrategy)
    Redis-->>App: ready / error (non-fatal)
    App->>Client: loadCommands() (async)
    App->>Client: registerCommands()
    App->>Client: client.login()
    Client-->>App: ready
    alt ENABLE_RSS_MONITORING
        App->>RSS: startRSS(client)
        loop RSS polling
            RSS->>Redis: read/set dedupe keys
            RSS->>Client: post new items
        end
    end
    note over App,Client: Global handlers: SIGINT, unhandledRejection, uncaughtException
```
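The validation step in the startup flow above returns a `{isValid, missing, warnings}` object. A minimal runnable sketch of that shape, assuming the required-variable list shown elsewhere in this review (the exact checks and warning text are illustrative, not the repo's actual code):

```javascript
// Hypothetical sketch of an environment validator that returns
// { isValid, missing, warnings }, matching the startup diagram.
const REQUIRED_ENV_VARS = ['DISCORD_TOKEN', 'CLIENT_ID', 'REPO_ORG', 'REPO_NAME'];

function validateEnvironment(env = process.env) {
  // Collect every required variable that is unset or empty.
  const missing = REQUIRED_ENV_VARS.filter((name) => !env[name]);
  const warnings = [];
  if (!env.REDIS_URL) {
    warnings.push('REDIS_URL not set - falling back to redis://localhost:6379');
  }
  return { isValid: missing.length === 0, missing, warnings };
}

// Example: a partially configured environment
const result = validateEnvironment({ DISCORD_TOKEN: 'x', CLIENT_ID: 'y' });
console.log(result.isValid); // false
console.log(result.missing); // [ 'REPO_ORG', 'REPO_NAME' ]
```

In the diagram, `index.js` would branch on `isValid` before attempting `client.login()`.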
```mermaid
sequenceDiagram
    autonumber
    participant User as User
    participant Bot as Bot (commands/github/stats.js)
    participant GH as GitHub API
    User->>Bot: /stats
    Bot->>Bot: deferReply()
    par Parallel fetch
        Bot->>GH: GET /repos/{org}/{repo}
        Bot->>GH: GET /repos/{org}/{repo}/contributors
    end
    Bot-->>Bot: build repo embed + leaderboard
    Bot->>User: initial reply + select menu
    User->>Bot: select option
    Bot->>User: update message with selected embed
    Bot-->>User: remove components on timeout
```
```mermaid
sequenceDiagram
    autonumber
    participant U as User
    participant B as Bot (client.js)
    participant P as Perspective API
    participant AI as AI Service
    participant FB as Feedback API
    U->>B: Message in AI support channel
    B->>P: checkMessage(toxicity)
    alt toxic or API error
        B-->>U: deny or moderation notice
    else not toxic
        B->>B: rate-limit check
        alt over limit
            B-->>U: rate limit notice
        else ok
            B->>AI: query
            AI-->>B: response
            B->>U: reply + add 👍/👎 reactions
            U->>B: reaction
            B->>FB: submit feedback
        end
    end
```
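The rate-limit check in the flow above can be sketched as a simple fixed-window counter keyed by user ID. The window length and limit here are illustrative assumptions, not the bot's actual values:

```javascript
// Illustrative fixed-window rate limiter. A user's first message in a
// window starts a bucket; once count exceeds MAX_PER_WINDOW, deny.
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 5;
const buckets = new Map(); // userId -> { windowStart, count }

function isRateLimited(userId, now = Date.now()) {
  const bucket = buckets.get(userId);
  if (!bucket || now - bucket.windowStart >= WINDOW_MS) {
    buckets.set(userId, { windowStart: now, count: 1 });
    return false;
  }
  bucket.count += 1;
  return bucket.count > MAX_PER_WINDOW;
}
```

Per the branch order in the diagram, this check would run after the toxicity check and before the AI query.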
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
- Add GitHub Actions workflow to build :dev tagged Docker images
- Trigger on pushes to codebase-rework branch with smart change detection
- Create both :dev and :dev-codebase-rework tagged images
- Include manual workflow dispatch option for testing
- Use Docker build cache for improved performance
Actionable comments posted: 16
🧹 Nitpick comments (22)
.dockerignore (1)
`20-24`: Reconsider ignoring docs and Dockerfiles (optional)
- If your Dockerfile labels or build steps ever COPY README.md/CHANGELOG.md into the image, these ignores will prevent it.
- Similarly, ignoring Dockerfile*/compose files is fine, but be aware they won’t be available for COPY during multi-stage builds if you ever rely on that.
Also applies to: 42-45
redisClient.js (1)
`12-26`: Use centralized Logger and add readiness visibility (optional)
- Replace console.* with utils/logger Logger for consistent structured logs.
- Consider exporting a helper like `isRedisReady = () => redisClient.isOpen` for callers to guard operations if Redis is optional.

Example:

```diff
-import { createClient } from 'redis';
+import { createClient } from 'redis';
+import Logger from './utils/logger.js';
+const log = new Logger('REDIS');

-redisClient.on('error', (error) => {
-  console.error('[REDIS] Connection error:', error.message);
-});
+redisClient.on('error', (error) => log.error(`Connection error: ${error.message}`, error));

-redisClient.on('connect', () => {
-  console.log('[REDIS] Connected successfully');
-});
+redisClient.on('connect', () => log.info('Connected successfully'));

-redisClient.on('reconnecting', () => {
-  console.log('[REDIS] Reconnecting...');
-});
+redisClient.on('reconnecting', () => log.warn('Reconnecting...'));

-redisClient.on('ready', () => {
-  console.log('[REDIS] Ready for commands');
-});
+redisClient.on('ready', () => log.info('Ready for commands'));

-redisClient.connect().catch((error) => {
-  console.error('[REDIS] Failed to connect:', error.message);
-});
+redisClient.connect().catch((error) => {
+  log.error(`Failed to connect: ${error.message}`, error);
+});
```

Also applies to: 28-32
commands/bot/support.js (1)
`9-13`: Optional: adopt Logger for consistency

Use the new utils/logger Logger instead of console for consistent formatting and level control.
Also applies to: 12-26
utils/constants.js (1)
`5-67`: Looks good; centralized constants are clear

Values and structure are reasonable. Exporting both named and default is fine; consider standardizing on named-only to avoid import style drift.
Also applies to: 69-92
utils/logger.js (1)
`13-16`: Normalize and validate LOG_LEVEL

Uppercase and validate the env value to avoid unexpected behavior when an invalid value is supplied.

Apply this diff:

```diff
 constructor(module = 'MAIN') {
   this.module = module;
-  this.logLevel = process.env.LOG_LEVEL || 'INFO';
+  const envLevel = String(process.env.LOG_LEVEL || 'INFO').toUpperCase();
+  const allowed = ['ERROR', 'WARN', 'INFO', 'DEBUG'];
+  this.logLevel = allowed.includes(envLevel) ? envLevel : 'INFO';
 }
@@
 shouldLog(level) {
   const levels = ['DEBUG', 'INFO', 'WARN', 'ERROR'];
   const currentIndex = levels.indexOf(this.logLevel);
   const messageIndex = levels.indexOf(level);
   return messageIndex >= currentIndex;
 }
```

Also applies to: 34-39
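The level comparison above boils down to an index check over an ordered list. A standalone, runnable sketch of the same idea (the free function names are ours; the class methods live in utils/logger.js):

```javascript
// Ordered from most to least verbose; a message is emitted when its
// level index is at or above the configured level's index.
const LEVELS = ['DEBUG', 'INFO', 'WARN', 'ERROR'];

function normalizeLevel(raw) {
  const level = String(raw || 'INFO').toUpperCase();
  return LEVELS.includes(level) ? level : 'INFO'; // invalid input falls back
}

function shouldLog(configuredLevel, messageLevel) {
  return LEVELS.indexOf(messageLevel) >= LEVELS.indexOf(configuredLevel);
}

console.log(shouldLog(normalizeLevel('warn'), 'ERROR')); // true
console.log(shouldLog(normalizeLevel('warn'), 'INFO'));  // false
console.log(normalizeLevel('verbose'));                  // INFO
```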
rss.js (2)
`109-117`: Guard sending to text-capable channels.

Ensure the resolved channel supports send to avoid runtime errors on non-text channels.

```diff
-      const channel = await client.channels.fetch(guildConfig.channelId);
-      if (channel) {
+      const channel = await client.channels.fetch(guildConfig.channelId);
+      if (channel && typeof channel.isTextBased === 'function' && channel.isTextBased()) {
         const message = `📢 **${item.title}**\n${item.link || item.url}`;
         await channel.send(message);
         await redisClient.sAdd(sentIdsKey, uniqueId);
         await redisClient.expire(sentIdsKey, 60 * 60 * 24 * 7); // 7 days
         newItemsCount++;
       }
```
`161-187`: Validate subreddit slug to avoid malformed URLs.

Normalize/validate the subreddit to a safe pattern, e.g. `/^[A-Za-z0-9_]+$/`; skip otherwise.

```diff
   const subreddit = url
     .replace(/^https:\/\/(www\.)?reddit\.com\/r\//, '')
     .replace(/\/$/, '');
+  if (!/^[A-Za-z0-9_]+$/.test(subreddit)) {
+    console.warn(`[RSS] Invalid subreddit in URL: ${url}`);
+    return [];
+  }
```

.env.docker (2)
`20-20`: Quote complex list value to satisfy dotenv linters and parsers.

Wrap the JSON-like value in single quotes:

```diff
-CHANNELS_TO_IGNORE=["channel_id_1","channel_id_2"]
+CHANNELS_TO_IGNORE='["channel_id_1","channel_id_2"]'
```
`1-35`: Minor: address dotenv-linter warnings.
- Reorder keys as suggested.
- Ensure trailing newline at EOF.
You can run dotenv-linter to auto-fix.
health-check.js (2)
`19-35`: Deduplicate validation; use utils/validation.js for required vars.

Avoid drift by reusing validateEnvironment():

```diff
-  // Basic check - ensure we can import our main modules
-  await import('./utils/validation.js');
+  // Validate environment using shared utility
+  const { validateEnvironment } = await import('./utils/validation.js');
+  const result = validateEnvironment();
+  if (!result.isValid) {
+    console.error(`Health check failed: Missing environment variables: ${result.missing.join(', ')}`);
+    process.exit(1);
+  }

-  // Check if Redis connection would be available (if configured)
+  // Check if Redis URL looks configured (optional quick signal)
   if (process.env.REDIS_URL) {
     // This is a basic check - in a real scenario you might want to ping Redis
     console.log('Redis URL configured');
   }
-
-  // Check if required environment variables are set
-  const requiredVars = ['DISCORD_TOKEN', 'CLIENT_ID', 'REPO_ORG', 'REPO_NAME'];
-  const missing = requiredVars.filter(varName => !process.env[varName]);
-
-  if (missing.length > 0) {
-    console.error(`Health check failed: Missing environment variables: ${missing.join(', ')}`);
-    process.exit(1);
-  }
```
`8-9`: Remove unused import.

`fs` is not used:

```diff
-import { promises as fs } from 'fs';
```
utils/validation.js (1)
`8-13`: Optional: export constants for reuse (health checks, docs).

Export REQUIRED_ENV_VARS and OPTIONAL_ENV_VARS to centralize truth:

```diff
-const REQUIRED_ENV_VARS = [
+export const REQUIRED_ENV_VARS = [
   'DISCORD_TOKEN',
   'CLIENT_ID',
   'REPO_ORG',
   'REPO_NAME'
 ];

-const OPTIONAL_ENV_VARS = {
+export const OPTIONAL_ENV_VARS = {
   'REDIS_URL': 'redis://localhost:6379',
   'GITHUB_API_BASE': 'https://api.github.com',
   'ENABLE_RSS_MONITORING': 'false'
 };
```

Also applies to: 18-22
README.md (1)
`174-186`: Add language to fenced code block (MD040)

Specify a language to satisfy markdownlint and enable proper rendering. Apply:

````diff
-```
+```text
````

If markdownlint is wired in CI, re-run lint to confirm it passes.

Dockerfile (1)

`56-58`: EXPOSE 3000 looks unused

No web server is bound. Remove EXPOSE or document the port's purpose to avoid confusion.

package.json (1)

`26-26`: Upgrade axios to a patched version (SSRF hardening)

axios ^1.7.9 predates SSRF/absolute-URL handling fixes in later 1.x. If any URLs can be influenced by users/env, prefer >=1.8.2. Based on learnings. Please confirm usage patterns and consider:

```diff
-    "axios": "^1.7.9",
+    "axios": "^1.8.2",
```

Run linters/build after upgrade.
docker-manage.sh (3)
`4-4`: Stricter shell safety

Use errexit + nounset + pipefail for robustness:

```diff
-set -e
+set -Eeuo pipefail
```

`51-54`: Prefer 'docker compose' or support both CLIs

Many environments ship the plugin ('docker compose'), not the legacy binary ('docker-compose'). Add a small wrapper to choose whichever is available:

```diff
+# Detect compose command
+DC=${DC:-$(command -v docker-compose || echo "docker compose")}
 ...
-    docker-compose up -d
+    $DC up -d
 ...
-    print_status "Use 'docker-compose logs -f' to view logs"
+    print_status "Use '$DC logs -f' to view logs"
```

Replicate $DC for all other docker-compose invocations in this script.

`90-95`: 'docker system prune -f' is broad

This can remove unrelated resources on the host. Consider limiting scope to this project:

```diff
-    docker system prune -f
+    $DC down -v --remove-orphans || true
+    docker image prune -f
+# Optional: docker volume prune -f (be careful)
```

index.js (4)
`5-5`: Robust dynamic import of absolute paths

Dynamic import with a plain absolute path can be fragile. Convert to a file:// URL for Node ESM. Apply:

```diff
-import { fileURLToPath } from 'url';
+import { fileURLToPath, pathToFileURL } from 'url';

-      const importPromise = import(filePath)
+      const importPromise = import(pathToFileURL(filePath).href)
```

Test on Windows and Linux to ensure commands load consistently.

Also applies to: 66-68
`171-181`: Use PermissionFlagsBits for permissions check

Use the typed constant rather than a string key. Apply:

```diff
-import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder } from 'discord.js';
+import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder, PermissionFlagsBits } from 'discord.js';

-    if (!member.permissions.has('ManageMessages')) {
+    if (!member.permissions.has(PermissionFlagsBits.ManageMessages)) {
```

Based on learnings.
`268-278`: Handle SIGTERM for containerized shutdowns

Kubernetes/Docker send SIGTERM first. Mirror the SIGINT handling for SIGTERM. Apply:

```diff
 process.on('SIGINT', async () => {
   console.log('[BOT] Received SIGINT, shutting down gracefully...');
   try {
     await redisClient.quit();
     client.destroy();
   } catch (error) {
     console.error('[BOT] Error during shutdown:', error.message);
   }
   process.exit(0);
 });
+
+process.on('SIGTERM', async () => {
+  console.log('[BOT] Received SIGTERM, shutting down gracefully...');
+  try {
+    await redisClient.quit();
+    client.destroy();
+  } catch (error) {
+    console.error('[BOT] Error during shutdown:', error.message);
+  }
+  process.exit(0);
+});
```
`212-214`: Command lookup by display name may fail

client.commands keys are set by command.data.name (the slash command name: lowercase, no spaces). Looking up 'Report Message' likely misses. Ensure the key matches the slash command name (e.g., 'report') or store a secondary alias map. Example:

```diff
-    const command = client.commands.get('Report Message');
+    const command = client.commands.get('report');
```

Or maintain a map from modal IDs to command handlers.
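The alias-map alternative mentioned above can be sketched in a few lines. The map entries here are illustrative; only the 'Report Message' → 'report' pair comes from this review:

```javascript
// Illustrative alias map: display/modal IDs on the left, registered
// slash command names (lowercase, no spaces) on the right.
const COMMAND_ALIASES = new Map([
  ['Report Message', 'report'],
]);

// Resolve a requested name to the key used in client.commands.
function resolveCommandName(requested) {
  return COMMAND_ALIASES.get(requested) ?? requested.toLowerCase();
}

console.log(resolveCommandName('Report Message')); // report
console.log(resolveCommandName('Stats'));          // stats
```

Lookups would then become `client.commands.get(resolveCommandName(id))`, so a renamed display string only needs a map update.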
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)

- .dockerignore (1 hunks)
- .env.docker (1 hunks)
- .eslintrc.json (1 hunks)
- Dockerfile (1 hunks)
- README.md (1 hunks)
- client.js (1 hunks)
- commands/bot/support.js (1 hunks)
- commands/github/stats.js (1 hunks)
- docker-compose.dev.yml (1 hunks)
- docker-manage.sh (1 hunks)
- example.docker-compose.yml (1 hunks)
- health-check.js (1 hunks)
- index.js (2 hunks)
- package.json (1 hunks)
- redisClient.js (1 hunks)
- rss.js (1 hunks)
- utils/constants.js (1 hunks)
- utils/logger.js (1 hunks)
- utils/validation.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
- health-check.js (1)
  - utils/validation.js (1): missing (29-29)
- utils/validation.js (2)
  - health-check.js (1): missing (29-29)
  - index.js (1): validation (14-14)
- commands/bot/support.js (1)
  - commands/github/repo.js (1): embed (21-51)
- commands/github/stats.js (3)
  - index.js (1): interaction (171-171)
  - commands/github/repo.js (3): embed (21-51), filter (73-73), collector (74-74)
  - rss.js (1): response (166-175)
- client.js (3)
  - commands/github/stats.js (1): response (86-92)
  - rss.js (1): response (166-175)
  - commands/github/issue.js (3): response (33-40), response (52-52), repoName (20-20)
- index.js (3)
  - utils/validation.js (3): validateEnvironment (28-57), logValidationResults (74-92), setEnvironmentDefaults (62-68)
  - rss.js (2): startRSS (21-36), message (111-111)
  - redisClient.js (1): redisClient (3-10)
- rss.js (2)
  - commands/bot/monitor.js (9): keys (107-107), key (77-77), key (100-100), key (133-133), type (74-74), type (112-112), url (75-75), url (98-98), url (131-131)
  - redisClient.js (1): redisClient (3-10)
🪛 dotenv-linter (3.3.0)
.env.docker
[warning] 6-6: [UnorderedKey] The CLIENT_ID key should go before the DISCORD_TOKEN key
(UnorderedKey)
[warning] 10-10: [UnorderedKey] The REPO_NAME key should go before the REPO_ORG key
(UnorderedKey)
[warning] 14-14: [UnorderedKey] The GITHUB_API_BASE key should go before the GITHUB_TOKEN key
(UnorderedKey)
[warning] 18-18: [UnorderedKey] The AI_SUPPORTCHANNEL_ID key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The CHANNELS_TO_IGNORE key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 31-31: [UnorderedKey] The ENABLE_RSS_MONITORING key should go before the LOG_LEVEL key
(UnorderedKey)
[warning] 35-35: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
🪛 markdownlint-cli2 (0.18.1)
README.md
174-174: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (8)
client.js (2)
248-316
: ...
.dockerignore (1)
`8-13`: Good: env files excluded from build context

Avoids accidental secret leakage into images. No changes needed.
.eslintrc.json (1)
`1-25`: Solid baseline ESLint config

Good balance of recommended rules with pragmatic exceptions (no-console off). Looks ready.
commands/github/stats.js (1)
`84-97`: Axios version constraint covers SSRF fix

Your `^1.7.9` range (>=1.7.9 <2.0.0) will resolve to 1.12.2 on fresh installs, which includes the SSRF/URL-handling patches added in ≥1.8.2. Just update your lockfile (`npm update axios`) to pull in the fix.

docker-compose.dev.yml (1)
`8-14`: Remove redundant env_file configuration

dotenv.config() in index.js loads .env from the mounted project directory; adding env_file is unnecessary.

[optional] Consider running as non-root: `user: "node"`

Likely an incorrect or invalid review comment.
rss.js (1)
`4-11`: Leverage built-in timeout/maxRedirects or manual fetch; there is no customFetch option

The Parser constructor accepts `timeout`, `maxRedirects`, `headers`, and `requestOptions`. For finer control (e.g. using AbortController), perform the HTTP fetch yourself and invoke `parser.parseString(xml)`.

Likely an incorrect or invalid review comment.
example.docker-compose.yml (1)
`49-52`: depends_on health condition may require a recent Compose

Some Docker Compose variants don't support `condition: service_healthy`. Ensure your environment uses a Compose version that supports health-based dependency ordering.

Run `docker compose version` and confirm it's recent enough; otherwise, handle readiness in the app or with wait-for scripts.
```
# Optional AI Support Configuration
AI_APIKEY=your_ai_api_key
AI_SUPPRT_URL=your_ai_support_service_url
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Typo in AI support URL variable; standardize to AI_SUPPORT_URL.
Rename to avoid confusion; keep backward compat via code.
```diff
-# Optional AI Support Configuration
-AI_APIKEY=your_ai_api_key
-AI_SUPPRT_URL=your_ai_support_service_url
+# Optional AI Support Configuration
+AI_APIKEY=your_ai_api_key
+AI_SUPPORT_URL=your_ai_support_service_url
```
🏁 Script executed:
```bash
#!/bin/bash
# Find all occurrences of both spellings to align usage and plan a safe rename
rg -n 'AI_SUPPRT_URL|AI_SUPPORT_URL'
```
Length of output: 424
Standardize AI support URL env var

Rename every `AI_SUPPRT_URL` reference to `AI_SUPPORT_URL` across `.env.docker`, `docker-compose.yml`, `utils/validation.js`, `client.js`, and `README.md`. In code, use a fallback when reading the env var, e.g.:

```javascript
const supportUrl = process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL;
```
🤖 Prompt for AI Agents
In .env.docker around lines 22-25 and across docker-compose.yml,
utils/validation.js, client.js, and README.md, the environment variable is
misspelled as AI_SUPPRT_URL; rename the variable to AI_SUPPORT_URL everywhere
(update .env.docker and docker-compose service env entries, documentation in
README.md, and validation logic) and, in code files that read the env var,
implement a fallback read to maintain compatibility: read
process.env.AI_SUPPORT_URL first and fall back to process.env.AI_SUPPRT_URL;
ensure validation and any runtime usage reference AI_SUPPORT_URL as the primary
key.
```javascript
      await interaction.reply({
        embeds: [embed],
        flags: MessageFlags.Ephemeral
      });
```
Ephemeral handling: use `ephemeral: true` (not `flags`), and guard replies in catch

`flags: MessageFlags.Ephemeral` isn't the supported way for interaction replies in discord.js v14. Use `ephemeral: true`. Also, in catch, prefer `interaction.followUp` when already replied/deferred.
Apply this diff:
```diff
-import { SlashCommandBuilder, EmbedBuilder, MessageFlags } from 'discord.js';
+import { SlashCommandBuilder, EmbedBuilder } from 'discord.js';
@@
-      await interaction.reply({
-        embeds: [embed],
-        flags: MessageFlags.Ephemeral
-      });
+      await interaction.reply({
+        embeds: [embed],
+        ephemeral: true
+      });
@@
-      console.error('[SUPPORT] Error executing support command:', error.message);
-
-      await interaction.reply({
-        content: '❌ An error occurred while displaying support information.',
-        flags: MessageFlags.Ephemeral
-      }).catch(() => {});
+      console.error('[SUPPORT] Error executing support command:', error.message);
+      if (interaction.replied || interaction.deferred) {
+        await interaction.followUp({ content: '❌ An error occurred while displaying support information.', ephemeral: true }).catch(() => {});
+      } else {
+        await interaction.reply({ content: '❌ An error occurred while displaying support information.', ephemeral: true }).catch(() => {});
+      }
```
Also applies to: 49-56, 1-1
🤖 Prompt for AI Agents
In commands/bot/support.js around lines 1, 44-47 and 49-56, the interaction
reply is using flags: MessageFlags.Ephemeral which is not correct for discord.js
v14 and the catch handler doesn't guard whether the interaction was already
replied or deferred; replace flags with ephemeral: true when calling
interaction.reply, and in the catch branch detect interaction.replied or
interaction.deferred and call interaction.followUp(...) in that case, otherwise
call interaction.reply(...) (or interaction.deferReply() first as appropriate);
ensure error handling is wrapped in try/catch and use followUp only when already
replied/deferred to avoid double-reply errors.
```diff
@@ -1,93 +1,204 @@
-import { SlashCommandBuilder, StringSelectMenuBuilder, MessageFlags } from 'discord.js';
+import { SlashCommandBuilder, StringSelectMenuBuilder, MessageFlags, EmbedBuilder } from 'discord.js';
```
Fix ephemeral handling and component structure (discord.js v14)

- Use `ephemeral: true` on `deferReply` (you cannot make a non-ephemeral reply ephemeral later).
- Stop using `flags: MessageFlags.Ephemeral` in replies/edits; use `ephemeral: true` only where supported.
- Wrap the select menu in an `ActionRowBuilder`.

Apply this diff:
Apply this diff:
```diff
-import { SlashCommandBuilder, StringSelectMenuBuilder, MessageFlags, EmbedBuilder } from 'discord.js';
+import { SlashCommandBuilder, StringSelectMenuBuilder, ActionRowBuilder, EmbedBuilder } from 'discord.js';
@@
-    await interaction.deferReply();
+    await interaction.deferReply({ ephemeral: true });
@@
-    if (!leaderboard?.length) {
-      return interaction.editReply({
-        content: "❌ Unable to fetch contributor data",
-        flags: MessageFlags.Ephemeral
-      });
-    }
+    if (!leaderboard?.length) {
+      return interaction.editReply({ content: "❌ Unable to fetch contributor data" });
+    }
@@
-    if (!repoData) {
-      return interaction.editReply({
-        content: "❌ Unable to fetch repository information",
-        flags: MessageFlags.Ephemeral
-      });
-    }
+    if (!repoData) {
+      return interaction.editReply({ content: "❌ Unable to fetch repository information" });
+    }
@@
-    const message = await interaction.editReply({
-      embeds: [embed],
-      components: [{ type: 1, components: [selectMenu] }]
-    });
+    const message = await interaction.editReply({
+      embeds: [embed],
+      components: [new ActionRowBuilder().addComponents(selectMenu)]
+    });
@@
-    const errorResponse = {
-      content: "❌ An error occurred while fetching statistics",
-      flags: MessageFlags.Ephemeral
-    };
+    const errorResponse = { content: "❌ An error occurred while fetching statistics" };
-    if (interaction.deferred) {
-      await interaction.editReply(errorResponse);
-    } else {
-      await interaction.reply(errorResponse);
-    }
+    if (interaction.replied || interaction.deferred) {
+      await interaction.editReply(errorResponse);
+    } else {
+      await interaction.reply({ ...errorResponse, ephemeral: true });
+    }
```
Also applies to: 11-11, 18-23, 25-30, 35-38, 63-75
🤖 Prompt for AI Agents
In commands/github/stats.js around lines 1, 11, 18-23, 25-30, 35-38 and 63-75:
fix ephemeral handling and component structure for discord.js v14 by (1)
importing ActionRowBuilder and using deferReply({ ephemeral: true }) at the
start so the interaction is ephemeral from the outset (do not attempt to change
ephemerality later), (2) remove uses of MessageFlags.Ephemeral and the flags
option in replies/edits and instead pass { ephemeral: true } where the API
accepts it, and (3) wrap the StringSelectMenuBuilder in a new ActionRowBuilder
and pass components: [actionRow] to reply/edit calls; update imports accordingly
to include ActionRowBuilder and drop MessageFlags if no longer used.
```javascript
  name: "Watchers",
  value: repoData.watchers?.toLocaleString() || "0",
  inline: true
},
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify GitHub "Watchers" field

The GitHub API commonly exposes `watchers_count` (historically mirrors stars) and `subscribers_count` for notifications. You're using `watchers`; confirm this matches your intent.
Run this quick check to inspect the actual response fields:
🏁 Script executed:
```bash
#!/bin/bash
# Inspect GitHub repo fields (unauthenticated, low rate limits)
curl -s https://api.github.com/repos/streamyfin/streamyfin | jq '{watchers, watchers_count, stargazers_count, subscribers_count}'
```
Length of output: 103
Clarify metric for "Watchers"

`repoData.watchers` (alias for `watchers_count`) returns the star count, not actual watchers. Replace it with `repoData.subscribers_count` for true watcher numbers, or rename the label to "Stars" if you intend to show stargazers.
🤖 Prompt for AI Agents
In commands/github/stats.js around lines 118 to 121, the "Watchers" field uses
repoData.watchers (which maps to stargazers) instead of the actual watcher
count; either change the label to "Stars" if you intend to show stargazers, or
replace repoData.watchers?.toLocaleString() with
repoData.subscribers_count?.toLocaleString() (falling back to "0") so the label
"Watchers" shows the true subscribers_count; update the inline property
unchanged.
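The distinction can be made explicit with small accessors. The field names come from the GitHub REST repo payload as described in this review; the helper functions themselves are illustrative:

```javascript
// In GitHub's repo payload, `watchers`/`watchers_count` mirror
// stargazers_count, while `subscribers_count` is the real "watching" count.
function watcherCount(repoData) {
  return repoData.subscribers_count ?? 0;
}

function starCount(repoData) {
  return repoData.stargazers_count ?? repoData.watchers ?? 0;
}

// Sample payload fragment shaped like the review describes
const sample = { stargazers_count: 1200, watchers: 1200, subscribers_count: 35 };
console.log(watcherCount(sample)); // 35
console.log(starCount(sample));    // 1200
```

Labeling the embed field from `watcherCount()` keeps "Watchers" honest; `starCount()` covers the "Stars" option.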
```javascript
    reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
    connectTimeout: 10000,
  },
  retry_unfulfilled_commands: true,
```
Invalid option for node-redis v4: remove `retry_unfulfilled_commands`

`retry_unfulfilled_commands` isn't a supported option in node-redis v4 (it comes from ioredis). It's ignored and can mislead future maintainers.

Apply this diff:

```diff
   socket: {
     reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
     connectTimeout: 10000,
   },
-  retry_unfulfilled_commands: true,
 });
```
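For reference, the capped linear `reconnectStrategy` kept in the diff above produces these delays (pure arithmetic, directly from the config line):

```javascript
// Delay in ms before each reconnect attempt: 50ms per retry, capped at 1s.
const reconnectStrategy = (retries) => Math.min(retries * 50, 1000);

const delays = [1, 2, 5, 10, 20, 50].map(reconnectStrategy);
console.log(delays); // [ 50, 100, 250, 500, 1000, 1000 ]
```

So the client backs off linearly for the first 20 attempts, then retries every second indefinitely.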
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-  retry_unfulfilled_commands: true,
```
🤖 Prompt for AI Agents
In redisClient.js around line 9, the config includes the unsupported option
`retry_unfulfilled_commands`; remove this option from the redis client
configuration so only valid node-redis v4 options remain, and update any related
comments or tests that reference it to avoid confusion.
rss.js
Outdated
```javascript
  const keys = await redisClient.keys('monitor:*');
  console.log(`[RSS] Processing ${keys.length} configured feeds`);

  for (const key of keys) {
    if (isShuttingDown) break;

    try {
      await processSingleFeed(client, key);
    } catch (error) {
      console.error(`[RSS] Error processing feed ${key}:`, error.message);
    }
  }
```
Fix: KEYS pattern processes non-config keys and blocks Redis; use SCAN and filter by type/hash.

- `KEYS 'monitor:*'` will include `monitor:*:sent` and `monitor:*:lastCheck`; hGetAll on those causes WRONGTYPE errors and wasted work.
- KEYS is O(N) and blocks Redis. Prefer scanIterator with MATCH and TYPE=hash.

Apply:
Apply:
```diff
-  const keys = await redisClient.keys('monitor:*');
-  console.log(`[RSS] Processing ${keys.length} configured feeds`);
-
-  for (const key of keys) {
+  const feedKeys = [];
+  // Iterate non-blockingly over only hash configs
+  for await (const key of redisClient.scanIterator({ MATCH: 'monitor:*', TYPE: 'hash' })) {
+    feedKeys.push(key);
+  }
+  console.log(`[RSS] Processing ${feedKeys.length} configured feeds`);
+
+  for (const key of feedKeys) {
     if (isShuttingDown) break;
     try {
       await processSingleFeed(client, key);
     } catch (error) {
       console.error(`[RSS] Error processing feed ${key}:`, error.message);
     }
   }
   // Periodic cleanup
-  if (keys.length > CLEANUP_THRESHOLD) {
+  if (feedKeys.length > CLEANUP_THRESHOLD) {
     await cleanupOldEntries();
   }
```
Also applies to: 57-64
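When SCAN's `TYPE` filter isn't available (older server or client), the same effect can be approximated by filtering key names. The `:sent` / `:lastCheck` suffixes follow the `monitor:*` naming described in this comment; the helper itself is a sketch:

```javascript
// Fallback name-based filter: keep only feed-config keys, dropping the
// auxiliary `:sent` and `:lastCheck` keys that share the monitor:* prefix.
function isFeedConfigKey(key) {
  return key.startsWith('monitor:') &&
    !key.endsWith(':sent') &&
    !key.endsWith(':lastCheck');
}

const keys = ['monitor:123', 'monitor:123:sent', 'monitor:123:lastCheck', 'other:1'];
console.log(keys.filter(isFeedConfigKey)); // [ 'monitor:123' ]
```

A per-key `TYPE` call remains the more robust check, since name filtering assumes the suffix convention never changes.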
🤖 Prompt for AI Agents
In rss.js around lines 44-55 (and similarly for 57-64) the code uses
redisClient.keys('monitor:*') which returns monitor keys including non-hash
entries like monitor:*:sent and monitor:*:lastCheck and is O(N) blocking;
replace this with a non-blocking SCAN-based iterator (redisClient.scanIterator)
using MATCH 'monitor:*' and TYPE 'hash' if supported, or iterate MATCH
'monitor:*' and call TYPE per candidate and only process TYPE === 'hash' (or
filter by key suffix patterns) so you only hGetAll hashes; implement async
iteration so you can break on isShuttingDown and handle per-key errors as
before.
utils/validation.js
Outdated
```javascript
  if (!process.env.AI_APIKEY || !process.env.AI_SUPPRT_URL) {
    warnings.push('AI support not configured - AI features will be disabled');
  }
```
Fix AI support URL typo and allow backward compatibility.
Current code checks AI_SUPPRT_URL; correct to AI_SUPPORT_URL while honoring legacy var.
```diff
-  if (!process.env.AI_APIKEY || !process.env.AI_SUPPRT_URL) {
+  const aiSupportUrl = process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL;
+  if (!process.env.AI_APIKEY || !aiSupportUrl) {
     warnings.push('AI support not configured - AI features will be disabled');
   }
```
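The backward-compatible read suggested here can be isolated into one helper so every call site stays consistent (the helper name is ours):

```javascript
// Hypothetical helper: prefer the corrected AI_SUPPORT_URL, falling back
// to the legacy misspelled AI_SUPPRT_URL so existing deployments keep working.
function resolveAiSupportUrl(env = process.env) {
  return env.AI_SUPPORT_URL || env.AI_SUPPRT_URL || null;
}

console.log(resolveAiSupportUrl({ AI_SUPPRT_URL: 'https://legacy.example' })); // https://legacy.example
console.log(resolveAiSupportUrl({})); // null
```

Both the validation warning and the client.js request code could then call this one function instead of reading `process.env` directly.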
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const aiSupportUrl = process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL;
if (!process.env.AI_APIKEY || !aiSupportUrl) {
  warnings.push('AI support not configured - AI features will be disabled');
}
```
🤖 Prompt for AI Agents
In utils/validation.js around lines 44-46, the environment variable name is
misspelled as AI_SUPPRT_URL; update the validation to check for the correct
AI_SUPPORT_URL but preserve backward compatibility by accepting the legacy
AI_SUPPRT_URL if AI_SUPPORT_URL is not set (e.g., use supportUrl =
process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL) and adjust the
conditional to push the warning only when neither is present so existing
deployments using the old name keep working.
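The precedence rule ("new name wins, legacy name still honored") can be made explicit with a generic lookup; the helper name is hypothetical:

```javascript
// Hypothetical helper for honoring a legacy variable name during a rename:
// prefer the correctly spelled variable, fall back to the legacy one.
function envWithFallback(env, primary, legacy) {
  return env[primary] ?? env[legacy];
}

const legacyOnly = { AI_SUPPRT_URL: 'https://example.invalid/support' };
console.log(envWithFallback(legacyOnly, 'AI_SUPPORT_URL', 'AI_SUPPRT_URL'));
// → https://example.invalid/support
```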
- Add fetch-depth: 0 to ensure full history is available
- Improve commit validation with git cat-file existence check
- Enhanced fallback logic for missing or corrupted commit objects
- Better error handling to prevent 'fatal: bad object' errors

Fixes workflow failures when previous commit SHA is unavailable or corrupted.
Actionable comments posted: 20
♻️ Duplicate comments (2)
client.js (1)
248-249: Fix typo in environment variable name (see earlier comment).
The environment variable `AI_SUPPRT_URL` should be `AI_SUPPORT_URL`. This is the same typo identified in `utils/validation.js`. Also applies to: 305-305

.env.docker (1)
24-24: Fix typo in environment variable name (see earlier comment).
The environment variable `AI_SUPPRT_URL` should be `AI_SUPPORT_URL`. This is the same typo identified in `utils/validation.js` and `client.js`.
🧹 Nitpick comments (15)
README.md (1)
174-186: Add a language to the fenced code block (fixes MD040).
Specify a language for the project structure block to satisfy markdownlint. Apply: add `text` to the opening fence so the block reads:

```text
├── commands/   # Command implementations
...
└── rss.js      # RSS monitoring service
```

.eslintrc.json (1)
14-25: ESLint baseline looks good.
Rules are reasonable. Consider relaxing linebreak-style for cross-platform devs later.

redisClient.js (1)
12-26: Use centralized Logger instead of console.
Adopt utils/logger.js for consistent levels/format.

```diff
-import { createClient } from 'redis';
+import { createClient } from 'redis';
+import logger from './utils/logger.js';
@@
-redisClient.on('error', (error) => {
-  console.error('[REDIS] Connection error:', error.message);
-});
+redisClient.on('error', (error) => {
+  logger.error('[REDIS] Connection error', { err: error.message });
+});
@@
-redisClient.on('connect', () => {
-  console.log('[REDIS] Connected successfully');
-});
+redisClient.on('connect', () => {
+  logger.info('[REDIS] Connected successfully');
+});
@@
-redisClient.on('reconnecting', () => {
-  console.log('[REDIS] Reconnecting...');
-});
+redisClient.on('reconnecting', () => {
+  logger.warn('[REDIS] Reconnecting...');
+});
@@
-redisClient.on('ready', () => {
-  console.log('[REDIS] Ready for commands');
-});
+redisClient.on('ready', () => {
+  logger.info('[REDIS] Ready for commands');
+});
@@
-redisClient.connect().catch((error) => {
-  console.error('[REDIS] Failed to connect:', error.message);
+redisClient.connect().catch((error) => {
+  logger.error('[REDIS] Failed to connect', { err: error.message });
   // Don't exit the process, let the app continue without Redis
 });
```
commands/bot/support.js (1)
49-56: Also fix ephemeral in error path and use Logger.
Use `ephemeral: true` and centralized logger.

```diff
 } catch (error) {
-  console.error('[SUPPORT] Error executing support command:', error.message);
+  // import logger from '../../utils/logger.js' at top
+  logger.error('[SUPPORT] Error executing support command', { err: error.message });
@@
-  await interaction.reply({
-    content: '❌ An error occurred while displaying support information.',
-    flags: MessageFlags.Ephemeral
-  }).catch(() => {});
+  await interaction.reply({
+    content: '❌ An error occurred while displaying support information.',
+    ephemeral: true
+  }).catch(() => {});
```

Also consider importing CONFIG to use CONFIG.COLORS.PRIMARY for the embed color.
utils/constants.js (1)
8-11: Prefer ActivityType enum over magic number.
Use discord.js ActivityType for clarity.

```diff
+import { ActivityType } from 'discord.js';
@@
-  ACTIVITY_TYPE: 3, // Watching
+  ACTIVITY_TYPE: ActivityType.Watching,
```

Based on learnings.
commands/github/stats.js (2)
35-38: Use ActionRowBuilder for components.

```diff
-  const message = await interaction.editReply({
-    embeds: [embed],
-    components: [{ type: 1, components: [selectMenu] }]
-  });
+  const message = await interaction.editReply({
+    embeds: [embed],
+    components: [new ActionRowBuilder().addComponents(selectMenu)]
+  });
```

As per discord.js v14. Based on learnings.
63-75: Use Logger and simplify error reply (ephemeral already set).

```diff
-  console.error('[STATS] Error executing stats command:', error.message);
+  // import logger from '../../utils/logger.js'
+  logger.error('[STATS] Error executing stats command', { err: error.message });
@@
-  const errorResponse = {
-    content: "❌ An error occurred while fetching statistics",
-    flags: MessageFlags.Ephemeral
-  };
+  const errorResponse = {
+    content: "❌ An error occurred while fetching statistics"
+  };
@@
   if (interaction.deferred) {
     await interaction.editReply(errorResponse);
   } else {
-    await interaction.reply(errorResponse);
+    await interaction.reply({ ...errorResponse, ephemeral: true });
   }
```

Also consider using CONFIG.COLORS.PRIMARY instead of hardcoded color.
client.js (1)
235-241: Improve rate limit cleanup efficiency.
The cleanup logic only triggers when the Map exceeds 1000 entries, which could allow unbounded growth in high-traffic scenarios. Consider cleaning up on every request or using a more efficient approach.

Apply this diff to clean up on every request:

```diff
-  // Clean up old rate limit entries
-  if (this.userAIRateLimit.size > 1000) {
-    const cutoff = now - 60000; // 1 minute ago
-    for (const [id, timestamp] of this.userAIRateLimit.entries()) {
-      if (timestamp < cutoff) this.userAIRateLimit.delete(id);
-    }
-  }
+  // Clean up old rate limit entries (older than 1 minute)
+  const cutoff = now - 60000;
+  for (const [id, timestamp] of this.userAIRateLimit.entries()) {
+    if (timestamp < cutoff) {
+      this.userAIRateLimit.delete(id);
+    }
+  }
```
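The per-request sweep can be sketched in isolation (function and variable names are illustrative, not the bot's actual code); Map values are millisecond timestamps:

```javascript
// Illustrative sweep: drop entries older than the rate-limit window,
// return the number of entries that survive.
function sweepRateLimit(rateLimit, now, windowMs = 60_000) {
  const cutoff = now - windowMs;
  for (const [id, ts] of rateLimit.entries()) {
    if (ts < cutoff) rateLimit.delete(id);
  }
  return rateLimit.size;
}

const m = new Map([['a', 0], ['b', 59_000], ['c', 61_000]]);
console.log(sweepRateLimit(m, 120_000)); // → 1
```

Sweeping on every request keeps the Map bounded by the number of users active in the last minute, regardless of total traffic.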
22-43
: Consider quoting shell variables to prevent word splitting.The shell script has several unquoted variables that shellcheck flags. While these are likely safe in this context, quoting them follows best practices and prevents potential issues.
Apply this diff to add quotes:
if git rev-parse ${{ github.event.before }} 2>/dev/null; then - CHANGED=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }}) + CHANGED=$(git diff --name-only "${{ github.event.before }}" "${{ github.sha }}") else # Fallback: diff last two commits if previous SHA is missing CHANGED=$(git diff --name-only HEAD~1 HEAD)docker-manage.sh (3)
4-6
: Harden shell script execution (strict mode + stable working dir).Improve robustness and avoid surprises when run outside repo root.
```diff
-set -e
+set -Eeuo pipefail
+IFS=$'\n\t'
+# Ensure we run from the repo root (script directory)
+cd "$(dirname "$0")"
```
41-61
: Prefer ‘docker compose’ (plugin) over legacy ‘docker-compose’.Compat improvement on newer Docker installs. You can gate by fallback.
```diff
- docker-compose up -d
+ docker compose up -d
 ...
- docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --build
+ docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
 ...
- docker-compose down
+ docker compose down
 ...
- docker-compose logs -f streamyfin-discord-bot
+ docker compose logs -f streamyfin-discord-bot
 ...
- docker-compose pull
- docker-compose up -d
+ docker compose pull
+ docker compose up -d
 ...
- docker-compose down -v
+ docker compose down -v
 ...
- docker-compose ps
+ docker compose ps
```

Optionally add a tiny wrapper function to choose available command.
Also applies to: 76-86, 89-104
106-111
: Non-interactive exec for backups.Avoid TTY requirement in CI/headless runs.
```diff
- docker-compose exec redis redis-cli BGSAVE
+ docker compose exec -T redis redis-cli BGSAVE
```

index.js (1)
91-97
: Optional: use ActivityType enum instead of magic number.Improves readability and future-proofing.
```diff
-import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder } from 'discord.js';
+import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder, ActivityType } from 'discord.js';
 ...
- client.user.setActivity("over Streamyfin's issues 👀", { type: 3 });
+ client.user.setActivity("over Streamyfin's issues 👀", { type: ActivityType.Watching });
```

Based on learnings.
rss.js (2)
200-220
: Cleanup also uses KEYS; switch to SCAN.
Same blocking risk applies here. Refactor to SCAN with MATCH `monitor:*:sent` and set expirations as you iterate.
231-237
: Integrate stopRSS into app shutdown.
Expose and call `stopRSS()` on SIGINT/SIGTERM to shorten shutdown latency (break the sleep loop promptly). Hook it in index.js's shutdown handlers.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
.dockerignore (1 hunks)
.env.docker (1 hunks)
.eslintrc.json (1 hunks)
.github/workflows/dev-build.yml (1 hunks)
Dockerfile (1 hunks)
README.md (1 hunks)
client.js (1 hunks)
commands/bot/support.js (1 hunks)
commands/github/stats.js (1 hunks)
docker-compose.dev.yml (1 hunks)
docker-manage.sh (1 hunks)
example.docker-compose.yml (1 hunks)
health-check.js (1 hunks)
index.js (2 hunks)
package.json (1 hunks)
redisClient.js (1 hunks)
rss.js (1 hunks)
utils/constants.js (1 hunks)
utils/logger.js (1 hunks)
utils/validation.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
rss.js (2)
commands/bot/monitor.js (9)
keys
(107-107)key
(77-77)key
(100-100)key
(133-133)type
(74-74)type
(112-112)url
(75-75)url
(98-98)url
(131-131)redisClient.js (1)
redisClient
(3-10)
client.js (4)
commands/github/stats.js (1)
response
(86-92)rss.js (1)
response
(166-175)commands/github/issue.js (3)
response
(33-40)response
(52-52)repoName
(20-20)oldIndex.js (2)
axios
(3-3)releases
(27-29)
index.js (3)
utils/validation.js (3)
validateEnvironment
(28-57)logValidationResults
(74-92)setEnvironmentDefaults
(62-68)rss.js (2)
startRSS
(21-36)message
(111-111)redisClient.js (1)
redisClient
(3-10)
utils/validation.js (2)
health-check.js (1)
missing
(29-29)index.js (1)
validation
(14-14)
health-check.js (1)
utils/validation.js (1)
missing
(29-29)
commands/bot/support.js (1)
commands/github/repo.js (1)
embed
(21-51)
commands/github/stats.js (2)
commands/github/repo.js (3)
embed
(21-51)filter
(73-73)collector
(74-74)rss.js (1)
response
(166-175)
🪛 actionlint (1.7.7)
.github/workflows/dev-build.yml
22-22: shellcheck reported issue in this script: SC2086:info:2:35: Double quote to prevent globbing and word splitting
(shellcheck)
22-22: shellcheck reported issue in this script: SC2086:info:15:37: Double quote to prevent globbing and word splitting
(shellcheck)
22-22: shellcheck reported issue in this script: SC2086:info:18:38: Double quote to prevent globbing and word splitting
(shellcheck)
63-63: shellcheck reported issue in this script: SC2086:info:2:36: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 dotenv-linter (3.3.0)
.env.docker
[warning] 6-6: [UnorderedKey] The CLIENT_ID key should go before the DISCORD_TOKEN key
(UnorderedKey)
[warning] 10-10: [UnorderedKey] The REPO_NAME key should go before the REPO_ORG key
(UnorderedKey)
[warning] 14-14: [UnorderedKey] The GITHUB_API_BASE key should go before the GITHUB_TOKEN key
(UnorderedKey)
[warning] 18-18: [UnorderedKey] The AI_SUPPORTCHANNEL_ID key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The CHANNELS_TO_IGNORE key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 31-31: [UnorderedKey] The ENABLE_RSS_MONITORING key should go before the LOG_LEVEL key
(UnorderedKey)
[warning] 35-35: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
🪛 markdownlint-cli2 (0.18.1)
README.md
174-174: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (13)
utils/validation.js (1)
28-57
: LGTM! Well-structured validation with helpful warnings.
The validation logic correctly separates required variables from optional ones and provides actionable warnings for missing optional features. The structure makes it easy to maintain and extend.
utils/logger.js (1)
5-67
: LGTM! Clean and effective logger implementation.
The Logger class provides a well-structured logging facility with proper level filtering and formatting. The implementation is straightforward and follows good practices.
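The level-gated pattern the review praises can be sketched roughly as follows; this is an illustrative mini-implementation, not the actual utils/logger.js API:

```javascript
// Illustrative level-filtering logger: messages above the configured
// threshold are suppressed (returned as null here for demonstration).
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

class MiniLogger {
  constructor(level = 'info') {
    this.threshold = LEVELS[level] ?? LEVELS.info;
  }
  format(level, message) {
    if ((LEVELS[level] ?? LEVELS.info) > this.threshold) return null; // filtered out
    return `[${level.toUpperCase()}] ${message}`;
  }
}

const log = new MiniLogger('warn');
console.log(log.format('error', 'boom')); // → [ERROR] boom
console.log(log.format('info', 'hi'));    // → null
```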
package.json (1)
7-18
: LGTM! Comprehensive script additions.
The new scripts provide valuable development, linting, validation, and Docker management capabilities. The script set is well-organized and aligns with the broader improvements in the PR.
client.js (3)
16-53
: LGTM! Well-structured unit conversion system.
The unit conversion mappings are comprehensive and well-organized by category (temperature, distance, weight). The conversion functions are straightforward and include appropriate precision.
60-92
: LGTM! Clean conversion implementation.
The `convertUnits` and `findConversion` methods work together effectively to scan text and apply conversions. The regex pattern is comprehensive and handles multiple unit formats.
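A minimal sketch of the temperature path of such a regex-driven conversion pass (function names and the regex are illustrative, not the bot's actual code):

```javascript
// Illustrative Fahrenheit → Celsius conversion, rounded to one decimal.
function fahrenheitToCelsius(f) {
  return Math.round(((f - 32) * 5 / 9) * 10) / 10;
}

// Scan text for values like "98.6F" and append the converted form.
function convertTemps(text) {
  return text.replace(/(-?\d+(?:\.\d+)?)\s*°?F\b/g, (match, f) =>
    `${match} (${fahrenheitToCelsius(parseFloat(f))}°C)`
  );
}

console.log(convertTemps('It was 98.6F outside'));
// → It was 98.6F (37°C) outside
```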
97-115
: LGTM! Robust error handling for GitHub API.
The `fetchStats` method includes proper timeout, User-Agent header, and graceful error handling that returns an empty array on failure.

docker-compose.dev.yml (1)
1-18
: LGTM! Well-configured development environment.
The docker-compose configuration properly sets up the development environment with debugging port, live reload via volumes, and appropriate environment variables.
.github/workflows/dev-build.yml (1)
68-79
: LGTM! Proper Docker build and push configuration.
The workflow correctly builds and pushes dev images with appropriate tags and caching strategies.
health-check.js (1)
13-44
: LGTM! Robust health check implementation.
The health check script properly validates environment variables, handles timeouts, and provides clear error messages. The dynamic import verification is a good practice for ensuring module availability.
.env.docker (1)
20-20
: Parsing logic correctly handles JSON array input for `CHANNELS_TO_IGNORE` – `parseChannelsToIgnore()` invokes `JSON.parse` and returns a string array, so no changes needed.
210-225
: Modal handler: verify command key ‘Report Message’.Slash command names must be lowercase/no spaces; this lookup may never resolve.
Search and ensure the command’s
.data.name
matches exactly, or map appropriately. If actual name isreport
, use that key.example.docker-compose.yml (1)
26-29
: LOG_LEVEL casing may not match logger expectations.If your logger expects lowercase (info/warn/error),
INFO
could be ignored/mapped incorrectly.Confirm utils/logger.js accepted values. Consider
log_level=info
.Dockerfile (1)
23-27
: .env excluded by .dockerignore
The.dockerignore
file already lists.env
(plus.env.local
,.env.*.local
), so secrets won’t be copied into the image.
```
Dockerfile*
docker-compose*.yml
.dockerignore
```
🧩 Analysis chain
Don't ignore Dockerfile(s) and compose files.
Excluding these can break docker builds and tooling that rely on default locations.
Proposed change:
-# Docker files
-Dockerfile*
-docker-compose*.yml
.dockerignore
Verify your build script doesn’t depend on these being in context:
🏁 Script executed:
#!/bin/bash
# Search docker build invocations; ensure -f is used if Dockerfile is ignored
rg -nP 'docker\s+build' -C2
rg -nP 'docker-compose|docker compose' -C2
Length of output: 3215
Remove `Dockerfile*` and `docker-compose*.yml` from `.dockerignore`.
These files are required by default `docker build` (no `-f`) and `docker-compose up` commands.
-Dockerfile*
-docker-compose*.yml
```
.dockerignore
```
🤖 Prompt for AI Agents
In .dockerignore around lines 42 to 44, the entries "Dockerfile*" and
"docker-compose*.yml" are ignoring files that Docker and docker-compose expect
by default; remove those two patterns from .dockerignore so the default
Dockerfile and any docker-compose yml files are not excluded, leaving other
ignore entries (like .dockerignore) intact.
- Add .github/workflows/ to relevant change detection pattern
- Ensures Docker images are rebuilt when CI/CD workflows are modified
- Allows testing workflow changes in dev environment
Actionable comments posted: 2
🧹 Nitpick comments (4)
.github/workflows/dev-build.yml (4)
58-64: Guard Docker Hub login/push when secrets are missing.
Without secrets the job will fail; skip push gracefully or fail early with a clear message.
Apply this diff to gate login and push:
```diff
 - name: Log in to Docker Hub
-  if: env.RELEVANT_CHANGES == 'true'
+  if: env.RELEVANT_CHANGES == 'true' && secrets.DOCKER_USERNAME != '' && secrets.DOCKER_PASSWORD != ''
   uses: docker/login-action@v3
   with:
     username: ${{ secrets.DOCKER_USERNAME }}
     password: ${{ secrets.DOCKER_PASSWORD }}
+
+ - name: Notice when Docker Hub secrets are missing
+   if: env.RELEVANT_CHANGES == 'true' && (secrets.DOCKER_USERNAME == '' || secrets.DOCKER_PASSWORD == '')
+   run: |
+     echo "⚠️ Docker Hub secrets not configured; skipping push."
+     echo "DOCKER_PUSH=false" >> $GITHUB_ENV
+   shell: bash
+
+ - name: Enable push when authenticated
+   if: env.RELEVANT_CHANGES == 'true' && secrets.DOCKER_USERNAME != '' && secrets.DOCKER_PASSWORD != ''
+   run: echo "DOCKER_PUSH=true" >> $GITHUB_ENV
+   shell: bash
@@
 - name: Build and push Dev Docker image
   if: env.RELEVANT_CHANGES == 'true'
   uses: docker/build-push-action@v5
   with:
     context: .
-    push: true
+    push: ${{ env.DOCKER_PUSH != 'false' }}
     tags: |
       retardgerman/streamyfin-discord-bot:dev
-      retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_NAME }}
+      retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_SLUG }}
     cache-from: type=gha
     cache-to: type=gha,mode=max
```

Also applies to: 73-84
85-90: Actually verify pushed images (inspect digest).
Current step only echoes. Inspect to ensure tags exist and are resolvable.
Apply this diff:
```diff
 - name: Verify Docker image
   if: env.RELEVANT_CHANGES == 'true'
-  run: |
-    echo "✅ Dev Docker images built and pushed:"
-    echo "📦 retardgerman/streamyfin-discord-bot:dev"
-    echo "📦 retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_NAME }}"
+  run: |
+    echo "🔎 Inspecting pushed images..."
+    docker buildx imagetools inspect retardgerman/streamyfin-discord-bot:dev
+    docker buildx imagetools inspect retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_SLUG }}
+    echo "✅ Verified: retardgerman/streamyfin-discord-bot:dev and :dev-${{ env.BRANCH_SLUG }}"
```
22-38: Simplify/solidify change detection.
Shell-based diffing is brittle. Use a maintained action for path filtering.
Option A (replace the entire detection step):
```yaml
- name: Filter paths
  id: changes
  uses: dorny/paths-filter@v3
  with:
    filters: |
      relevant:
        - 'commands/**'
        - 'src/**'
        - 'lib/**'
        - 'bot/**'
        - 'index.js'
        - 'client.js'
        - 'package.json'
        - 'package-lock.json'
        - 'pnpm-lock.yaml'
        - 'yarn.lock'
        - 'Dockerfile*'
        - '.dockerignore'
        - 'docker-compose*.yml'
        - 'example.docker-compose.yml'
        - 'docker-manage.sh'
        - 'health-check.js'
        - '.github/workflows/**'

- name: Set RELEVANT_CHANGES
  run: echo "RELEVANT_CHANGES=${{ steps.changes.outputs.relevant }}" >> $GITHUB_ENV
```

Gate subsequent steps with if: env.RELEVANT_CHANGES == 'true'.
12-15: Add concurrency and timeout to avoid overlapping builds.
Prevents racing pushes on rapid commits.
Add at workflow top-level:

```yaml
concurrency:
  group: dev-build-${{ github.ref }}
  cancel-in-progress: true
```

And optionally set a job timeout:

```yaml
jobs:
  dev-build:
    runs-on: ubuntu-latest
    timeout-minutes: 30
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/dev-build.yml
(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: dev-build
.github/workflows/dev-build.yml
Outdated
```shell
if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package\.json|Dockerfile|\.github/workflows/)'; then
  echo "RELEVANT_CHANGES=true" >> $GITHUB_ENV
  echo "✅ Relevant code changes detected"
else
  echo "RELEVANT_CHANGES=false" >> $GITHUB_ENV
  echo "⚠️ No relevant code changes detected"
fi
```
🛠️ Refactor suggestion | 🟠 Major
Broaden change filter to cover Docker/lockfiles/scripts.
Current regex misses docker-compose*, lockfiles, scripts, .dockerignore, and health-check, causing false negatives.
Apply this diff:
- if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package\.json|Dockerfile|\.github/workflows/)'; then
+ if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package(-lock)?\.json|pnpm-lock\.yaml|yarn\.lock|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|example\.docker-compose\.ya?ml|docker-manage\.sh|health-check\.js|\.github/workflows/|\.eslintrc(\.(json|js|cjs|yml|yaml))?|tsconfig(\..*)?\.json)'; then
```shell
if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package(-lock)?\.json|pnpm-lock\.yaml|yarn\.lock|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|example\.docker-compose\.ya?ml|docker-manage\.sh|health-check\.js|\.github/workflows/|\.eslintrc(\.(json|js|cjs|yml|yaml))?|tsconfig(\..*)?\.json)'; then
  echo "RELEVANT_CHANGES=true" >> $GITHUB_ENV
  echo "✅ Relevant code changes detected"
else
  echo "RELEVANT_CHANGES=false" >> $GITHUB_ENV
  echo "⚠️ No relevant code changes detected"
fi
```
🤖 Prompt for AI Agents
.github/workflows/dev-build.yml around lines 41-47: the current grep regex
misses docker-compose files, lockfiles, script folders, .dockerignore and
health-check files; update the regex to also match docker-compose (and
variants), .dockerignore, scripts/, health-check files, and common lockfiles
(package-lock.json, yarn.lock, pnpm-lock.*) so those changes set
RELEVANT_CHANGES=true; keep the same grep -qE usage and add the new alternatives
to the existing pattern.
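The grep-based gate can be exercised locally before committing; this uses a trimmed, illustrative subset of the pattern, not the full production regex:

```shell
# Simulate the changed-files list (one path per line) and test the filter.
CHANGED=$'docker-compose.dev.yml\ndocs/notes.md'
if echo "$CHANGED" | grep -qE '^(commands/|src/|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|health-check\.js)'; then
  echo "RELEVANT_CHANGES=true"
else
  echo "RELEVANT_CHANGES=false"
fi
# → RELEVANT_CHANGES=true
```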
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 3
♻️ Duplicate comments (33)
rss.js (4)
44-44
: Use SCAN instead of KEYS to avoid blocking Redis.The
KEYS
command blocks Redis and may include non-hash keys likemonitor:*:sent
andmonitor:*:lastCheck
, causinghGetAll
to fail with WRONGTYPE errors.Apply this diff to use non-blocking SCAN:
- const keys = await redisClient.keys('monitor:*'); + const feedKeys = []; + // Use SCAN to avoid blocking Redis + for await (const key of redisClient.scanIterator({ MATCH: 'monitor:*', TYPE: 'hash' })) { + feedKeys.push(key); + } - console.log(`[RSS] Processing ${keys.length} configured feeds`); + console.log(`[RSS] Processing ${feedKeys.length} configured feeds`); - for (const key of keys) { + for (const key of feedKeys) {
58-58
: Update cleanup threshold check to use the corrected variable.After fixing line 44 to use
feedKeys
, update this reference accordingly.- if (keys.length > CLEANUP_THRESHOLD) { + if (feedKeys.length > CLEANUP_THRESHOLD) {
166-175
: Node's fetch doesn't support timeout option; implement AbortController.The
timeout
option is ignored by Node's native fetch and requests may hang indefinitely.Apply this diff:
- const response = await fetch( + const controller = new AbortController(); + const timeoutId = setTimeout(() => controller.abort(), 10_000); + const response = await fetch( `https://www.reddit.com/r/${subreddit}/new.json?limit=5`, { headers: { "User-Agent": "StreamyfinBot/1.0 (+https://github.com/streamyfin/streamyfin-discord-bot)", "Accept": "application/json" }, - timeout: 10000 + signal: controller.signal } - ); + ).finally(() => clearTimeout(timeoutId));
203-203
: Replace KEYS with SCAN in cleanup to avoid blocking Redis.Using
KEYS
in cleanup can block Redis on large keyspaces.Apply this diff:
- const keys = await redisClient.keys('monitor:*:sent'); let cleanedCount = 0; - for (const key of keys) { + for await (const key of redisClient.scanIterator({ MATCH: 'monitor:*:sent' })) {redisClient.js (1)
9-9
: Remove unsupported option for node-redis v5.The
retry_unfulfilled_commands
option doesn't exist in node-redis v5.x and will be ignored.Based on learnings.
Apply this diff:
socket: { reconnectStrategy: (retries) => Math.min(retries * 50, 1000), connectTimeout: 10000, }, - retry_unfulfilled_commands: true, });
.dockerignore (1)
42-44
: Don't ignore Dockerfile and docker-compose files.Excluding
Dockerfile*
anddocker-compose*.yml
breaks defaultdocker build
anddocker-compose up
commands that expect these files in the build context.Apply this diff:
# Docker files -Dockerfile* -docker-compose*.yml .dockerignorecommands/bot/support.js (3)
1-1
: Remove MessageFlags import; use ephemeral: true instead.discord.js v14 uses the
ephemeral
boolean option directly on interaction replies, not theMessageFlags.Ephemeral
flag.Based on learnings.
Apply this diff:
-import { SlashCommandBuilder, EmbedBuilder, MessageFlags } from 'discord.js'; +import { SlashCommandBuilder, EmbedBuilder } from 'discord.js';
44-47
: Replace flags with ephemeral option.discord.js v14 requires
ephemeral: true
in the reply options instead offlags: MessageFlags.Ephemeral
.Based on learnings.
Apply this diff:
await interaction.reply({ embeds: [embed], - flags: MessageFlags.Ephemeral + ephemeral: true });
49-56
: Fix error handler to check interaction state and use ephemeral correctly.The error handler should verify if the interaction was already replied to or deferred, and use
ephemeral: true
instead of flags.Based on learnings.
Apply this diff:
} catch (error) { console.error('[SUPPORT] Error executing support command:', error.message); - await interaction.reply({ - content: '❌ An error occurred while displaying support information.', - flags: MessageFlags.Ephemeral - }).catch(() => {}); + const errorMsg = { content: '❌ An error occurred while displaying support information.', ephemeral: true }; + if (interaction.replied || interaction.deferred) { + await interaction.followUp(errorMsg).catch(() => {}); + } else { + await interaction.reply(errorMsg).catch(() => {}); + } }.github/workflows/dev-build.yml (2)
41-41
: Broaden change filter to cover all relevant paths.The current regex misses docker-compose files, lockfiles (package-lock.json, yarn.lock, pnpm-lock.yaml), .dockerignore, health-check.js, and docker-manage.sh, which are all introduced in this PR and should trigger builds.
Apply this diff:
- if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package\.json|Dockerfile|\.github/workflows/)'; then + if echo "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|utils/|index\.js|client\.js|package(-lock)?\.json|pnpm-lock\.yaml|yarn\.lock|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|example\.docker-compose\.ya?ml|docker-manage\.sh|health-check\.js|\.github/workflows/|\.eslintrc(\.(json|js|cjs|yml|yaml))?|tsconfig(\..*)?\.json)'; then
65-72
: Sanitize branch name for Docker tag compatibility.Branch names containing slashes (e.g.,
feature/x
) produce invalid Docker tags. Generate a sanitized slug before using it in image tags.Apply this diff:
- - name: Extract branch name + - name: Extract and sanitize branch name if: env.RELEVANT_CHANGES == 'true' shell: bash run: | - BRANCH_NAME=${GITHUB_REF#refs/heads/} - echo "BRANCH_NAME=$BRANCH_NAME" >> $GITHUB_ENV - echo "Building for branch: $BRANCH_NAME" + BRANCH_RAW="${{ github.ref_name }}" + # Docker tag charset: [A-Za-z0-9_.-], lower-case recommended; no slashes + BRANCH_SLUG="$(echo "$BRANCH_RAW" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9._-]+/-/g' | sed -E 's/^-+|-+$//g' | cut -c1-120)" + echo "BRANCH_SLUG=$BRANCH_SLUG" >> $GITHUB_ENV + echo "Building for branch: $BRANCH_RAW (tag slug: $BRANCH_SLUG)"And update the tag reference:
tags: | retardgerman/streamyfin-discord-bot:dev - retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_NAME }} + retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_SLUG }}utils/validation.js (1)
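The sanitization pipeline can be checked in any POSIX shell with GNU sed; the sample branch name below is made up for illustration:

```shell
# Lower-case, collapse runs of invalid characters to '-', trim edge dashes,
# and cap the length so the result is a valid Docker tag component.
BRANCH_RAW="feature/My Thing#2"
BRANCH_SLUG="$(echo "$BRANCH_RAW" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9._-]+/-/g' | sed -E 's/^-+|-+$//g' | cut -c1-120)"
echo "$BRANCH_SLUG"
# → feature-my-thing-2
```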
44-46
: Fix typo in environment variable name.The environment variable is misspelled as
AI_SUPPRT_URL
(missing 'O'). This typo appears in multiple files across the codebase and will cause configuration issues.Apply this diff to fix the typo:
- if (!process.env.AI_APIKEY || !process.env.AI_SUPPRT_URL) { + if (!process.env.AI_APIKEY || !process.env.AI_SUPPORT_URL) { warnings.push('AI support not configured - AI features will be disabled'); }Additionally, verify and fix this typo in:
client.js
(search for AI_SUPPRT_URL).env.docker
(search for AI_SUPPRT_URL)package.json (1)
10-10
: Test script always exits 0, masking failures.This prevents CI from detecting test failures. Either implement real tests or make the script fail to encourage adding them.
Apply one of these options:
Option 1: Fail until tests are implemented
- "test": "echo \"Tests not implemented yet\" && exit 0", + "test": "echo \"No tests implemented\" >&2 && exit 1",Option 2: Skip in CI with a guard
- "test": "echo \"Tests not implemented yet\" && exit 0", + "test": "if [ \"$CI\" = \"true\" ]; then echo \"Tests required in CI\" >&2 && exit 1; else echo \"Tests not implemented\" && exit 0; fi",commands/github/stats.js (6)
1-1
: Fix imports for discord.js v14 compatibility.Remove
MessageFlags
(deprecated usage) and addActionRowBuilder
(required for components).Apply this diff:
-import { SlashCommandBuilder, StringSelectMenuBuilder, MessageFlags, EmbedBuilder } from 'discord.js'; +import { SlashCommandBuilder, StringSelectMenuBuilder, ActionRowBuilder, EmbedBuilder } from 'discord.js';Based on learnings
`11-11`: Defer reply as ephemeral from the start.

Ephemeral status must be set on the first response. If you intend this interaction to be ephemeral, defer with `{ ephemeral: true }`.

Apply this diff:

```diff
-    await interaction.deferReply();
+    await interaction.deferReply({ ephemeral: true });
```

Based on learnings
`18-23`: Remove incorrect MessageFlags usage.

`MessageFlags.Ephemeral` is not valid in `editReply()`. If deferred ephemerally, the reply is already ephemeral.

Apply this diff:

```diff
     if (!leaderboard?.length) {
-      return interaction.editReply({
-        content: "❌ Unable to fetch contributor data",
-        flags: MessageFlags.Ephemeral
-      });
+      return interaction.editReply({
+        content: "❌ Unable to fetch contributor data"
+      });
     }
```

Based on learnings
`25-30`: Remove MessageFlags from second error path.

Same issue as above: `editReply()` doesn't accept `flags`.

Apply this diff:

```diff
     if (!repoData) {
-      return interaction.editReply({
-        content: "❌ Unable to fetch repository information",
-        flags: MessageFlags.Ephemeral
-      });
+      return interaction.editReply({
+        content: "❌ Unable to fetch repository information"
+      });
     }
```

Based on learnings
`35-38`: Wrap select menu in ActionRowBuilder.

discord.js v14 requires components to be wrapped in `ActionRowBuilder`, not plain objects.

Apply this diff:

```diff
     const message = await interaction.editReply({
       embeds: [embed],
-      components: [{ type: 1, components: [selectMenu] }]
+      components: [new ActionRowBuilder().addComponents(selectMenu)]
     });
```

Based on learnings
`118-121`: Clarify "Watchers" metric.

`repoData.watchers` is an alias for `stargazers_count`, not the true watcher count. Use `repoData.subscribers_count` for actual watchers or rename the label to "Stars" to avoid confusion.

Apply one of these fixes:

Option 1: Show true watchers

```diff
     {
-      name: "Watchers",
-      value: repoData.watchers?.toLocaleString() || "0",
+      name: "Watchers",
+      value: repoData.subscribers_count?.toLocaleString() || "0",
       inline: true
     },
```

Option 2: Rename to "Stars" (if duplication with line 123-126 is intentional)

```diff
     {
-      name: "Watchers",
+      name: "Stars",
       value: repoData.watchers?.toLocaleString() || "0",
       inline: true
     },
```

index.js (8)
`47-88`: Serialize SlashCommandBuilder instances before registration.

At line 73, `props.default.data` is pushed directly into `tempCommands`, but if it's a `SlashCommandBuilder` instance, it needs to be serialized with `.toJSON()` before being sent to Discord's API.

Apply this fix to ensure builders are serialized:

```diff
-        tempCommands.push(props.default.data);
+        tempCommands.push(
+          typeof props.default.data?.toJSON === 'function'
+            ? props.default.data.toJSON()
+            : props.default.data
+        );
```
`110-122`: Use `ephemeral: true` instead of `flags: MessageFlags.Ephemeral`.

Discord.js v14 uses the boolean `ephemeral` option for interaction replies. Using `flags` may not behave as expected.

Apply:

```diff
       await interaction.reply({
         content: 'This channel is ignored by the bot.',
-        flags: MessageFlags.Ephemeral,
+        ephemeral: true,
       });
```

Based on learnings.
`124-135`: Replace deprecated `isCommand()` with `isChatInputCommand()`.

The `isCommand()` method is deprecated in discord.js v14 and also matches context-menu commands, not only slash commands. Use `isChatInputCommand()` to detect slash commands specifically.

Apply:

```diff
-    if (interaction.isCommand()) {
+    if (interaction.isChatInputCommand()) {
       await handleCommandInteraction(interaction);
```

Based on learnings.
`153-163`: Use `ephemeral: true` for error messages.

Update ephemeral replies to use the v14 boolean option.

Apply:

```diff
     const errorMessage = {
       content: 'There was an error while executing this command!',
-      flags: MessageFlags.Ephemeral,
+      ephemeral: true,
     };
```

Based on learnings.
`176-183`: Use `PermissionFlagsBits.ManageMessages` and `ephemeral: true`.

Discord.js v14 prefers bitfield enums for permission checks and the boolean ephemeral option.

Apply:

```diff
-    if (!member.permissions.has('ManageMessages')) {
+    if (!member.permissions.has(PermissionFlagsBits.ManageMessages)) {
       return interaction.reply({
         content: 'You do not have permission to resolve this report.',
-        flags: MessageFlags.Ephemeral,
+        ephemeral: true,
       });
     }
```

Add import at the top:

```diff
-import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder } from 'discord.js';
+import { GatewayIntentBits, REST, Routes, MessageFlags, EmbedBuilder, PermissionFlagsBits } from 'discord.js';
```

Based on learnings.
`196-202`: Use `ephemeral: true` for error replies.

Apply:

```diff
       await interaction.reply({
         content: 'There was an error while resolving this report!',
-        flags: MessageFlags.Ephemeral,
+        ephemeral: true,
       }).catch(() => {});
```

Based on learnings.

`217-222`: Use `ephemeral: true` for modal error replies.

Apply:

```diff
       await interaction.reply({
         content: 'There was an error while submitting your report!',
-        flags: MessageFlags.Ephemeral,
+        ephemeral: true,
       }).catch(() => {});
```

Based on learnings.
`268-278`: Add SIGTERM handler for graceful container shutdown.

Docker sends SIGTERM on stop, not SIGINT. Add a SIGTERM handler to ensure graceful shutdown in containerized environments.

Apply:

```diff
 process.on('SIGINT', async () => {
   console.log('[BOT] Received SIGINT, shutting down gracefully...');
   try {
     await redisClient.quit();
     client.destroy();
   } catch (error) {
     console.error('[BOT] Error during shutdown:', error.message);
   }
   process.exit(0);
 });
+
+process.on('SIGTERM', async () => {
+  console.log('[BOT] Received SIGTERM, shutting down gracefully...');
+  try {
+    await redisClient.quit();
+    client.destroy();
+  } catch (error) {
+    console.error('[BOT] Error during shutdown:', error.message);
+  }
+  process.exit(0);
+});
```

Dockerfile (2)
`29-46`: Production image inherits unnecessary build toolchain.

The production stage is based on the `base` stage which includes python3, make, and g++. This increases attack surface and image size.

Refactor production to use a clean base:

```diff
-FROM base AS production
+FROM node:22-alpine AS production
+WORKDIR /app
```

Retain user creation and subsequent steps as-is.

`35-40`: Production dependencies overwritten by build stage copy.

You run `npm ci --omit=dev` at line 36, but then copy the entire app from the build stage at line 39, which includes `node_modules` with dev dependencies. This overwrites your production-only install.

Create a dedicated prod-deps stage:

```diff
 FROM base AS dependencies
 # Install all dependencies (including dev dependencies)
 RUN npm ci --include=dev

+FROM base AS prod-deps
+# Install only production dependencies
+RUN npm ci --omit=dev
+
 FROM base AS build
 # Copy node_modules from dependencies stage
 COPY --from=dependencies /app/node_modules ./node_modules
 # Copy source code
 COPY . .
 # Run linting and validation (optional - can be disabled for faster builds)
 RUN npm run lint || echo "Linting failed but continuing build"

-FROM base AS production
+FROM node:22-alpine AS production
+WORKDIR /app
 # Create non-root user for security
 RUN addgroup -g 1001 -S nodejs && \
     adduser -S nodejs -u 1001
-
-# Install only production dependencies
-RUN npm ci --omit=dev && npm cache clean --force
+# Copy production-only node_modules
+COPY --from=prod-deps /app/node_modules ./node_modules
+RUN npm cache clean --force

 # Copy application files
 COPY --from=build --chown=nodejs:nodejs /app .
```

.env.docker (1)
`22-24`: Fix typo: `AI_SUPPRT_URL` should be `AI_SUPPORT_URL`.

This typo appears across the codebase (client.js, README.md, validation.js). Standardize to `AI_SUPPORT_URL`.

Apply:

```diff
 # Optional AI Support Configuration
 AI_APIKEY=your_ai_api_key
-AI_SUPPRT_URL=your_ai_support_service_url
+AI_SUPPORT_URL=your_ai_support_service_url
```

Also update references in:

- `client.js` (lines 248, 255, 305)
- `README.md` (line 146)
- `utils/validation.js` (if present)

README.md (1)
`146-146`: Fix environment variable typo: `AI_SUPPRT_URL` → `AI_SUPPORT_URL`.

Correct the spelling to match the standard variable name.

Apply:

```diff
-- `AI_SUPPRT_URL` - URL for AI support service
+- `AI_SUPPORT_URL` - URL for AI support service
```

example.docker-compose.yml (2)
`41-41`: Fix typo in AI support URL env var.

Still spelled `AI_SUPPRT_URL`, so the bot will miss `AI_SUPPORT_URL`. Please rename and, if needed, keep a fallback during migration.

```diff
-      - AI_SUPPRT_URL=your_ai_support_url
+      - AI_SUPPORT_URL=your_ai_support_url
```

`55-58`: Replace no-op healthcheck with real probe.

The current command always exits 0. Point it at the actual health-check script so orchestration can detect failures.

```diff
-      test: ["CMD", "node", "-e", "process.exit(0)"]
+      test: ["CMD", "node", "health-check.js"]
```

Ensure `health-check.js` is in the image and returns non-zero on failure.
🧹 Nitpick comments (4)
utils/constants.js (2)
`8-8`: Consider environment variable for BOT_USER_ID.

Hardcoding the bot user ID reduces flexibility across environments. Consider reading from `process.env.BOT_USER_ID` with this value as a fallback.

Apply this diff:

```diff
   BOT: {
-    USER_ID: '398161771476549654',
+    USER_ID: process.env.BOT_USER_ID || '398161771476549654',
     ACTIVITY: "over Streamyfin's issues 👀",
     ACTIVITY_TYPE: 3, // Watching
   },
```

`10-10`: Clarify magic number with discord.js enum or named constant.

The magic number `3` represents an Activity Type. For clarity and maintainability, reference discord.js's `ActivityType` enum or add a comment explaining the numeric mapping.

Example using discord.js enum:

```javascript
import { ActivityType } from 'discord.js';

export const CONFIG = {
  BOT: {
    USER_ID: '398161771476549654',
    ACTIVITY: "over Streamyfin's issues 👀",
    ACTIVITY_TYPE: ActivityType.Watching, // 3
  },
  // ...
};
```

utils/logger.js (1)
`1-67`: LGTM! Consider established logging library for production.

The custom Logger implementation is clean and functional. For production use, consider established libraries like `winston` or `pino` that offer structured logging, transports, log rotation, and better performance.

Example with winston:

```javascript
import winston from 'winston';

export default function createLogger(module = 'MAIN') {
  return winston.createLogger({
    level: process.env.LOG_LEVEL?.toLowerCase() || 'info',
    format: winston.format.combine(
      winston.format.timestamp(),
      winston.format.printf(({ timestamp, level, message }) =>
        `${timestamp} [${level.toUpperCase()}] [${module}] ${message}`
      )
    ),
    transports: [new winston.transports.Console()]
  });
}
```

README.md (1)
`174-186`: Add language identifier to code block.

The fenced code block should specify a language for syntax highlighting.

Apply:

````diff
-```
+```plaintext
 ├── commands/          # Command implementations
````
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- `.dockerignore` (1 hunks)
- `.env.docker` (1 hunks)
- `.eslintrc.json` (1 hunks)
- `.github/workflows/dev-build.yml` (1 hunks)
- `Dockerfile` (1 hunks)
- `README.md` (1 hunks)
- `client.js` (1 hunks)
- `commands/bot/support.js` (1 hunks)
- `commands/github/stats.js` (1 hunks)
- `docker-compose.dev.yml` (1 hunks)
- `docker-manage.sh` (1 hunks)
- `example.docker-compose.yml` (1 hunks)
- `health-check.js` (1 hunks)
- `index.js` (2 hunks)
- `package.json` (1 hunks)
- `redisClient.js` (1 hunks)
- `rss.js` (1 hunks)
- `utils/constants.js` (1 hunks)
- `utils/logger.js` (1 hunks)
- `utils/validation.js` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/dev-build.yml
24-24: shellcheck reported issue in this script: SC2086:info:2:35: Double quote to prevent globbing and word splitting
(shellcheck)
24-24: shellcheck reported issue in this script: SC2086:info:18:37: Double quote to prevent globbing and word splitting
(shellcheck)
24-24: shellcheck reported issue in this script: SC2086:info:21:38: Double quote to prevent globbing and word splitting
(shellcheck)
68-68: shellcheck reported issue in this script: SC2086:info:2:36: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 dotenv-linter (3.3.0)
.env.docker
[warning] 6-6: [UnorderedKey] The CLIENT_ID key should go before the DISCORD_TOKEN key
(UnorderedKey)
[warning] 10-10: [UnorderedKey] The REPO_NAME key should go before the REPO_ORG key
(UnorderedKey)
[warning] 14-14: [UnorderedKey] The GITHUB_API_BASE key should go before the GITHUB_TOKEN key
(UnorderedKey)
[warning] 18-18: [UnorderedKey] The AI_SUPPORTCHANNEL_ID key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The CHANNELS_TO_IGNORE key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 31-31: [UnorderedKey] The ENABLE_RSS_MONITORING key should go before the LOG_LEVEL key
(UnorderedKey)
[warning] 35-35: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
🪛 markdownlint-cli2 (0.18.1)
README.md
174-174: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (11)
rss.js (3)
`21-36`: LGTM: Main loop with graceful shutdown.

The RSS monitoring loop correctly handles errors and respects the shutdown flag, ensuring graceful termination.

`132-141`: LGTM: Type recovery for Redis keys.

The `ensureSetKey` function properly handles type mismatches by deleting and recreating keys with the correct type.

`9-9`: Confirm timeout support

rss-parser accepts a `timeout` option (milliseconds) when constructing a Parser (e.g. `new Parser({ timeout: 10000 })`), so no change is needed.

.eslintrc.json (1)
`1-25`: LGTM: Solid ESLint configuration.

The ESLint rules enforce consistent code style (indentation, quotes, semicolons) and modern JavaScript practices (prefer-const, no-var, template literals). The configuration is well-suited for an ES2022 Node.js project.
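For reference, a minimal config enforcing the rules described above might look like this (a sketch; the exact values in the PR's `.eslintrc.json` may differ):

```json
{
  "env": { "node": true, "es2022": true },
  "parserOptions": { "ecmaVersion": 2022, "sourceType": "module" },
  "rules": {
    "indent": ["error", 2],
    "quotes": ["error", "single"],
    "semi": ["error", "always"],
    "prefer-const": "error",
    "no-var": "error",
    "prefer-template": "error"
  }
}
```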
redisClient.js (1)
`12-32`: LGTM: Resilient connection handling with observability.

Event listeners provide clear visibility into Redis connection states, and the non-fatal error handling allows the application to continue without Redis, aligning with the PR's resilience improvements.
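As a sketch of what such a policy can look like with node-redis v4 (backoff numbers are illustrative, not necessarily what redisClient.js uses), the `reconnectStrategy` callback receives the retry count and returns the next delay in milliseconds:

```javascript
// Exponential backoff capped at 30 s; the returned value is the delay before
// the next reconnect attempt (node-redis calls this with the retry count).
function reconnectStrategy(retries) {
  return Math.min(1000 * 2 ** retries, 30000);
}

// Hypothetical wiring (createClient comes from the 'redis' package):
// const client = createClient({
//   url: process.env.REDIS_URL,
//   socket: { reconnectStrategy },
// });
// client.on('connect', () => console.log('[REDIS] connecting...'));
// client.on('ready',   () => console.log('[REDIS] ready'));
// client.on('error',   (err) => console.error('[REDIS] error:', err.message)); // non-fatal
```

Keeping the `error` listener non-fatal is what lets the bot keep running while Redis is down.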
.dockerignore (1)
`7-12`: LGTM: Environment files properly excluded.

Excluding `.env` files prevents sensitive credentials from being copied into Docker images, which is a security best practice.

commands/bot/support.js (1)

`10-42`: LGTM: Well-structured support embed.

The embed provides clear, organized support guidelines with appropriate fields and formatting. The use of emojis and structured information improves user experience.
package.json (1)
`7-18`: LGTM! Well-structured npm scripts.

The new scripts provide good developer ergonomics with dev mode, linting, validation, and Docker management. The validate script correctly uses dynamic import for ESM compatibility.
docker-manage.sh (1)
`1-169`: LGTM! Well-structured Docker management script.

The script provides comprehensive Docker lifecycle management with:

- Clear error handling with `set -e`
- Environment file validation before operations
- Colorized output for better UX
- Proper function separation
- Helpful command documentation
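As a rough illustration of the env-file guard and colorized output patterns praised above (function and variable names here are hypothetical, not taken from docker-manage.sh):

```shell
#!/usr/bin/env sh
# Sketch of the guard-and-report style described above; names are illustrative.
set -e

RED='\033[0;31m'; GREEN='\033[0;32m'; NC='\033[0m'

log_ok()  { printf "%b[OK]%b %s\n" "$GREEN" "$NC" "$1"; }
log_err() { printf "%b[ERR]%b %s\n" "$RED" "$NC" "$1" >&2; }

# Fail fast if the env file is missing, before any Docker command runs.
require_env_file() {
  if [ ! -f "$1" ]; then
    log_err "Missing $1 (copy .env.docker to .env first)"
    return 1
  fi
}

TMP_ENV="$(mktemp)"   # stands in for .env in this self-contained demo
require_env_file "$TMP_ENV" && log_ok "env file present"
rm -f "$TMP_ENV"
```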
docker-compose.dev.yml (1)
`1-18`: LGTM! Appropriate development configuration.

The development compose file correctly:

- Targets the `base` stage for faster rebuilds
- Sets development environment variables
- Mounts source code with volume excludes for `node_modules`
- Exposes debugging port 9229
- Enables interactive terminal support
health-check.js (1)
`1-55`: LGTM! Robust health check implementation.

The health check script properly:

- Validates required environment variables
- Tests module imports
- Implements a timeout mechanism
- Cleans up resources in finally block
- Exits with appropriate status codes
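The env-validation half of that flow reduces to a small pure function; a sketch (the variable names are examples, not necessarily the script's actual list):

```javascript
// Report which required environment variables are absent so a health check
// can exit non-zero before doing any network work.
function checkRequiredEnv(env, required) {
  const missing = required.filter((name) => !env[name]);
  return { healthy: missing.length === 0, missing };
}

// In a health-check script this would drive the exit code, e.g.:
// const { healthy, missing } = checkRequiredEnv(process.env, ['DISCORD_TOKEN', 'CLIENT_ID']);
// if (!healthy) { console.error('Missing:', missing.join(', ')); process.exit(1); }
```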
- Fix typo from AI_SUPPRT_URL to AI_SUPPORT_URL in both query and feedback endpoints
- Add fallback to old variable name for backward compatibility during migration
- Update both axios calls at lines ~256 and ~307 to use corrected variable
- Quote all variable expansions to prevent word splitting and glob expansion
- Use git rev-parse --verify to safely check commit existence before operations
- Add proper fallback logic for new repos without HEAD~1
- Initialize CHANGED variable and handle empty diff outputs safely
- Use printf instead of echo for safer variable output handling
- Add error handling with || echo "" fallbacks to prevent command failures
Actionable comments posted: 2
🧹 Nitpick comments (1)
client.js (1)
`15-53`: Consider extracting unit conversions to a separate constants file.

This large conversion mapping (38 lines) would be more maintainable in a dedicated `constants/unitConversions.js` file, making it easier to test, update, and reuse across the codebase.

Example structure:

```javascript
// constants/unitConversions.js
export const unitConversions = {
  temperature: { /* ... */ },
  distance: { /* ... */ },
  weight: { /* ... */ }
};
```

Then import in client.js:

```diff
+import { unitConversions } from './constants/unitConversions.js';
+
 export default class Streamyfin extends Client {
   constructor(...options) {
     super(...options);
     this.commands = new Collection();
     this.userId = '398161771476549654';
     this.repoOrg = process.env.REPO_ORG;
     this.repoName = process.env.REPO_NAME;
     this.githubToken = process.env.GITHUB_TOKEN;
     this.userAIRateLimit = new Map();
-
-    // Unit conversion mappings
-    this.unitConversions = {
-      temperature: { /* ... */ },
-      distance: { /* ... */ },
-      weight: { /* ... */ }
-    };
+    this.unitConversions = unitConversions;
   }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `client.js` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
client.js (2)
- commands/github/stats.js (1): `response` (86-92)
- oldIndex.js (2): `axios` (3-3), `releases` (27-29)
🔇 Additional comments (8)
client.js (8)
`13-13`: LGTM! The Map-based rate limiting state is appropriate for tracking per-user AI request timestamps.

`60-78`: LGTM! The multi-result conversion approach is a solid improvement. The regex pattern correctly uses word boundaries to avoid false matches, and the implementation properly collects and formats multiple conversions.

`85-92`: LGTM! The helper method provides good separation of concerns and is easy to understand.

`97-115`: LGTM! The enhanced error handling with timeout and User-Agent header is well-implemented. The safe fallback (empty array) ensures the application remains functional even when the API is unavailable.

`121-144`: LGTM! The enhancements maintain robustness with proper timeout, User-Agent header, and safe fallback values. The configurable base API URL adds deployment flexibility.

`151-170`: LGTM! The input sanitization and consistent return structure are good improvements. The timeout and error handling ensure reliability.

`177-209`: LGTM! The configuration guard prevents unnecessary API calls when the key is missing, and the safe default (returning `false` on error) ensures messages aren't incorrectly blocked.

`287-332`: LGTM! Extracting the feedback collection logic into a dedicated method improves code organization and maintainability. The error handling, timeout configuration, and reaction collector implementation are all well-structured.
Actionable comments posted: 0
♻️ Duplicate comments (2)
.github/workflows/dev-build.yml (2)
`47-53`: Still missing Docker/scripts/lockfiles in relevance filter.

Current grep ignores docker-compose*, .dockerignore, lockfiles, scripts, health-check files, etc., so Docker-impacting edits skip the build. Please expand the regex accordingly.

Apply this diff:

```diff
-          if [ -n "$CHANGED" ] && printf '%s' "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package\.json|Dockerfile|\.github/workflows/)'; then
+          if [ -n "$CHANGED" ] && printf '%s' "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package(-lock)?\.json|pnpm-lock\.yaml|yarn\.lock|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|example\.docker-compose\.ya?ml|docker-manage\.sh|health-check\.js|\.github/workflows/|\.eslintrc(\.(json|js|cjs|yml|yaml))?|tsconfig(\..*)?\.json)'; then
```
`71-96`: Sanitize branch before using it as a Docker tag.

`dev-${{ env.BRANCH_NAME }}` still uses the raw branch name; slashes and uppercase characters yield invalid tags and break pushes. Generate a safe slug (lowercase, replace invalid chars with `-`, trim, fallback if empty) and use it for tagging and logging.

Apply this diff:

```diff
-      - name: Extract branch name
+      - name: Extract and sanitize branch name
         if: env.RELEVANT_CHANGES == 'true'
         shell: bash
         run: |
-          BRANCH_NAME=${GITHUB_REF#refs/heads/}
-          echo "BRANCH_NAME=$BRANCH_NAME" >> $GITHUB_ENV
-          echo "Building for branch: $BRANCH_NAME"
+          BRANCH_RAW="${GITHUB_REF#refs/heads/}"
+          BRANCH_SLUG="$(printf '%s' "$BRANCH_RAW" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9._-]+/-/g' | sed -E 's/-+/-/g' | sed -E 's/^-+|-+$//g')"
+          if [ -z "$BRANCH_SLUG" ]; then
+            BRANCH_SLUG="unknown"
+          fi
+          echo "BRANCH_NAME=$BRANCH_RAW" >> "$GITHUB_ENV"
+          echo "BRANCH_SLUG=$BRANCH_SLUG" >> "$GITHUB_ENV"
+          echo "Building for branch: $BRANCH_RAW (tag slug: $BRANCH_SLUG)"
@@
-            retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_NAME }}
+            retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_SLUG }}
@@
-          echo "📦 retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_NAME }}"
+          echo "📦 retardgerman/streamyfin-discord-bot:dev-${{ env.BRANCH_SLUG }}"
```
🧹 Nitpick comments (5)
client.js (5)
`16-53`: Consider externalizing unit conversion mappings for maintainability.

The comprehensive unit conversion mappings are well-structured but significantly increase the constructor size. For improved modularity and testability, consider moving these to a separate constants or configuration file (e.g., `utils/unitConversions.js`).

`152-152`: Repository name sanitization may be incomplete.

The sanitization only replaces spaces with hyphens but doesn't handle other characters that are invalid in GitHub repository names (e.g., special characters like `!`, `@`, `#`; uppercase letters may need lowercasing depending on requirements). Consider using a more comprehensive sanitization approach.

For reference, GitHub repository names:

- Must start with a letter or number
- Can contain alphanumeric characters, hyphens, underscores, and periods
- Cannot contain consecutive hyphens

Apply this diff to improve sanitization:

```diff
-    const sanitizedRepoName = repoName.replace(/\s+/g, '-');
+    const sanitizedRepoName = repoName
+      .toLowerCase()
+      .trim()
+      .replace(/\s+/g, '-')
+      .replace(/[^a-z0-9._-]/g, '')
+      .replace(/-{2,}/g, '-');
```
`201-201`: Consider making toxicity thresholds configurable.

The toxicity and flirtation score thresholds are hardcoded at `0.2`. For flexibility across different use cases or communities with varying tolerance levels, consider making these configurable via environment variables or constants.

Example:

```diff
+const TOXICITY_THRESHOLD = parseFloat(process.env.TOXICITY_THRESHOLD || '0.2');
+const FLIRTATION_THRESHOLD = parseFloat(process.env.FLIRTATION_THRESHOLD || '0.2');
+
 async checkMessage(message) {
   // ... existing code ...
-  return toxicityScore >= 0.2 || flirtationScore >= 0.2;
+  return toxicityScore >= TOXICITY_THRESHOLD || flirtationScore >= FLIRTATION_THRESHOLD;
 }
```

`221-224`: Consider making rate limit duration configurable.

The 1-second rate limit is hardcoded. For better flexibility in managing load and user experience, consider making this configurable via environment variables or constants, similar to other configurable thresholds in the codebase.

Example:

```diff
+const AI_RATE_LIMIT_MS = parseInt(process.env.AI_RATE_LIMIT_MS || '1000', 10);
+
 async handleSupport(message) {
   // ... existing code ...
   if (this.userAIRateLimit.has(userID)) {
     const lastRequest = this.userAIRateLimit.get(userID);
-    if (now - lastRequest < 1000) return;
+    if (now - lastRequest < AI_RATE_LIMIT_MS) return;
   }
```

`325-327`: Silent error handling may confuse users.

When feedback submission fails, the error is logged but the user receives no indication that their feedback wasn't recorded. Consider adding a user-facing error message to inform them of the failure.

Apply this diff:

```diff
 } catch (err) {
   console.error('[CLIENT] Feedback error:', err.message);
+  await sentMessage.reply("Sorry, there was an issue recording your feedback. Please try again later.").catch(() => {});
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.github/workflows/dev-build.yml` (1 hunks)
- `client.js` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
client.js (4)
- commands/github/stats.js (1): `response` (86-92)
- rss.js (1): `response` (166-175)
- commands/github/issue.js (3): `response` (33-40), `response` (52-52), `repoName` (20-20)
- commands/github/repo.js (1): `repoName` (15-15)
🔇 Additional comments (3)
client.js (3)
`248-252`: AI support URL fallback properly addresses the previous typo concern.

The backward-compatible fallback `process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL` correctly handles the environment variable name correction while maintaining compatibility during migration. The same pattern is consistently applied at line 305 in the feedback collector.

`260-260`: Verify the 30-second timeout is appropriate for AI queries.

The timeout is set to 30 seconds, which is significantly longer than other API calls in this file (10 seconds). Confirm this extended timeout is intentional for AI processing and won't negatively impact user experience if the service is slow or unresponsive.

`61-61`: Unit conversion regex boundaries validated: tests confirm no unintended matches (e.g., "momentum").
- Replace all instances of AI_SUPPRT_URL with AI_SUPPORT_URL
- Remove fallback logic in client.js for cleaner configuration
- Update environment files, documentation, and validation
- Ensure consistent variable naming across all files

Files updated:

- client.js: removed fallback, use direct AI_SUPPORT_URL reference
- example.docker-compose.yml: environment variable name
- utils/validation.js: validation check
- README.md: documentation
- .env.docker: example environment file
Actionable comments posted: 1
♻️ Duplicate comments (4)
example.docker-compose.yml (1)
`55-58`: Replace the no-op healthcheck with the real probe.

`node -e "process.exit(0)"` always exits successfully, so Docker will never flag the bot as unhealthy. Please invoke the actual health-check script already shipped with the image (and optionally give it a start period so cold starts don't flap).

```diff
     healthcheck:
-      test: ["CMD", "node", "-e", "process.exit(0)"]
+      test: ["CMD", "node", "health-check.js"]
+      start_period: 30s
       interval: 30s
       timeout: 5s
       retries: 3
```

Adjust the script path if it lives elsewhere inside the container.
README.md (1)
`146-146`: Past typo issue has been resolved.

The environment variable name is correctly spelled as `AI_SUPPORT_URL` in the current version.

.env.docker (1)

`24-24`: Past typo issue has been resolved.

The environment variable name is correctly spelled as `AI_SUPPORT_URL` in the current version.

utils/validation.js (1)

`44-44`: Past typo issue has been resolved.

The environment variable name is correctly spelled as `AI_SUPPORT_URL` in the current version.
🧹 Nitpick comments (2)
README.md (1)
`174-174`: Add language identifier to code block.

The fenced code block should specify a language for proper syntax highlighting.

Apply this diff:

````diff
-```
+```plaintext
 ├── commands/          # Command implementations
````

Based on static analysis hints.
.env.docker (1)
`35-35`: Add blank line at end of file.

Convention is to end files with a newline character.

Apply this diff:

```diff
 REDIS_URL=redis://redis:6379
+
```
Based on static analysis hints.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- `.env.docker` (1 hunks)
- `README.md` (1 hunks)
- `client.js` (1 hunks)
- `example.docker-compose.yml` (1 hunks)
- `utils/validation.js` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
utils/validation.js (2)
- health-check.js (1): `missing` (29-29)
- index.js (1): `validation` (14-14)
client.js (4)
- commands/github/stats.js (1): `response` (86-92)
- rss.js (1): `response` (166-175)
- commands/github/issue.js (3): `response` (33-40), `response` (52-52), `repoName` (20-20)
- oldIndex.js (2): `axios` (3-3), `releases` (27-29)
🪛 dotenv-linter (3.3.0)
.env.docker
[warning] 6-6: [UnorderedKey] The CLIENT_ID key should go before the DISCORD_TOKEN key
(UnorderedKey)
[warning] 10-10: [UnorderedKey] The REPO_NAME key should go before the REPO_ORG key
(UnorderedKey)
[warning] 14-14: [UnorderedKey] The GITHUB_API_BASE key should go before the GITHUB_TOKEN key
(UnorderedKey)
[warning] 18-18: [UnorderedKey] The AI_SUPPORTCHANNEL_ID key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 20-20: [QuoteCharacter] The value has quote characters (', ")
(QuoteCharacter)
[warning] 20-20: [UnorderedKey] The CHANNELS_TO_IGNORE key should go before the FORUM_CHANNEL_ID key
(UnorderedKey)
[warning] 31-31: [UnorderedKey] The ENABLE_RSS_MONITORING key should go before the LOG_LEVEL key
(UnorderedKey)
[warning] 35-35: [EndingBlankLine] No blank line at the end of the file
(EndingBlankLine)
🪛 markdownlint-cli2 (0.18.1)
README.md
174-174: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (6)
utils/validation.js (1)
`28-57`: LGTM! Solid environment validation implementation.

The validation logic properly checks required variables, provides helpful warnings for optional configurations, and follows a clear structure. The return object with `isValid`, `missing`, and `warnings` fields is well-designed.

client.js (5)
`240-241`: LGTM! Backward compatibility for environment variable.

The fallback pattern `process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL` properly handles both the correct and legacy variable names during migration.

Also applies to: 297-297

`60-78`: LGTM! Robust unit conversion implementation.

The refactored conversion logic properly handles multiple units and aggregates results. The regex pattern and helper method separation is clean.

`97-115`: LGTM! Proper timeout and error handling.

The `fetchStats` method includes appropriate timeout and User-Agent headers, with graceful error handling that returns an empty array on failure.

`279-324`: LGTM! Well-structured feedback collection.

The `setupFeedbackCollector` method properly encapsulates the feedback flow with reaction handling, timeout, and error recovery. The 3-minute collection window is reasonable.

`177-209`: Verified Perspective API response structure. The code's access to `response.data.attributeScores.TOXICITY.summaryScore.value` (and similarly for FLIRTATION) matches the documented schema; no changes needed.
…ry leaks

- Add periodic cleanup interval that removes expired rate limit entries after 10 minutes
- Implement destroy() method to properly clean up resources on shutdown
- Cleanup runs every 5 minutes to maintain memory efficiency
Actionable comments posted: 0
🧹 Nitpick comments (2)
client.js (2)
`72-72`: Extract complex regex to a named constant.

The unit conversion regex is quite long and reduces readability when inlined. Consider extracting it to a class-level constant or a constant at the top of the file.

Example:

```diff
+  static UNIT_CONVERSION_REGEX = /(-?\d+(?:\.\d+)?)\s?(°?[FC]|celsius|fahrenheit|km|kilometers?|mi\b|miles?|kg|kilograms?|lbs?|pounds?|g|grams?|m\b|meters?|ft|feet|in|inches?)\b/gi;
+
   convertUnits(text) {
-    const unitConversionRegex = /(-?\d+(?:\.\d+)?)\s?(°?[FC]|celsius|fahrenheit|km|kilometers?|mi\b|miles?|kg|kilograms?|lbs?|pounds?|g|grams?|m\b|meters?|ft|feet|in|inches?)\b/gi;
+    const unitConversionRegex = Streamyfin.UNIT_CONVERSION_REGEX;
```

`135-135`: Consider extracting the API base URL fallback to a constant.

The hardcoded fallback URL `'https://api.github.com'` could be extracted to a constant for better maintainability and consistency across the codebase.

Example (add to top of file or utils/constants.js):

```javascript
const GITHUB_API_BASE_URL = 'https://api.github.com';
```

Then use:

```diff
-      `${process.env.GITHUB_API_BASE || 'https://api.github.com'}/repos/${this.repoOrg}/${this.repoName}/releases`,
+      `${process.env.GITHUB_API_BASE || GITHUB_API_BASE_URL}/repos/${this.repoOrg}/${this.repoName}/releases`,
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
client.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
client.js (4)
rss.js (1)
  response (166-175)
commands/github/stats.js (1)
  response (86-92)
commands/github/issue.js (3)
  response (33-40)
  response (52-52)
  repoName (20-20)
oldIndex.js (2)
  axios (3-3)
  releases (27-29)
🔇 Additional comments (9)
client.js (9)
13-24
: LGTM! Rate limiting cleanup properly implemented.

The TTL-based cleanup with a 10-minute retention window and 5-minute cleanup interval effectively addresses the memory leak concern raised in previous reviews. The implementation correctly iterates through entries and removes stale ones.
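As a rough illustration of the pattern this comment approves, the sketch below shows a TTL-based rate-limit map with a periodic cleanup timer and a matching `destroy()`. All names here are hypothetical, not the actual `client.js` members; only the 10-minute retention and 5-minute interval mirror the review's figures.

```javascript
// Illustrative sketch, not the real client.js implementation.
const RETENTION_MS = 10 * 60 * 1000;        // drop entries older than 10 minutes
const CLEANUP_INTERVAL_MS = 5 * 60 * 1000;  // sweep every 5 minutes

class RateLimiter {
  constructor() {
    this.lastSeen = new Map(); // userId -> timestamp of last message
    this.cleanupTimer = setInterval(() => this.cleanup(), CLEANUP_INTERVAL_MS);
  }

  // Returns true when the user sent a message within the last windowMs.
  isLimited(userId, windowMs = 1000) {
    const now = Date.now();
    const last = this.lastSeen.get(userId) ?? 0;
    this.lastSeen.set(userId, now);
    return now - last < windowMs;
  }

  // Removes entries older than RETENTION_MS (now is injectable for testing).
  cleanup(now = Date.now()) {
    for (const [userId, ts] of this.lastSeen) {
      if (now - ts > RETENTION_MS) this.lastSeen.delete(userId);
    }
  }

  // Mirrors the destroy() pattern: clear the timer so the process can exit.
  destroy() {
    clearInterval(this.cleanupTimer);
    this.cleanupTimer = null;
  }
}
```

The key design point the review praises is that the sweep is bounded and periodic, so the map cannot grow without limit between messages.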
27-64
: Well-structured unit conversion mappings.

The conversion functions are organized by category and provide consistent output format. The implementation covers common units comprehensively.
96-103
: LGTM! Clean helper method.

The lookup logic correctly iterates through categories to find the appropriate conversion function.
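The category-iterating lookup described here has roughly this shape. The mapping below contains only two example conversions for illustration, not the full table in `client.js`, and `findConversion` is an assumed name.

```javascript
// Illustrative category -> unit -> converter table (abbreviated example).
const CONVERSIONS = {
  temperature: {
    f: v => `${(((v - 32) * 5) / 9).toFixed(1)}°C`, // Fahrenheit -> Celsius
  },
  distance: {
    km: v => `${(v * 0.621371).toFixed(2)} mi`,     // kilometers -> miles
  },
};

// Iterate categories until a converter for the normalized unit is found.
function findConversion(unit) {
  const key = unit.toLowerCase().replace('°', '');
  for (const category of Object.values(CONVERSIONS)) {
    if (key in category) return category[key];
  }
  return null; // unknown unit: caller skips conversion
}
```

Returning `null` for unknown units lets the caller leave unmatched text untouched rather than throwing mid-message.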
108-126
: LGTM! Robust API request with proper error handling.

The timeout configuration and User-Agent header align with best practices seen in other files. Returning an empty array on error provides a safe fallback.
162-181
: LGTM! Repository check with proper sanitization.

The repo name sanitization and structured response format provide a clean interface. The GitHub API will handle any remaining validation of repo names.
189-192
: Verify toxicity checking is optional by design.

The method returns `false` (not toxic) when `PERSPECTIVE_APIKEY` is not configured. This means toxic messages won't be filtered if the API key is missing.

If toxicity checking is intended to be mandatory for production, consider validating the presence of `PERSPECTIVE_APIKEY` at startup (in utils/validation.js) rather than silently disabling the feature at runtime. If this is intentional (making toxicity checking optional), the current implementation is acceptable.
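A startup check of the kind suggested here could look like the sketch below. `validateToxicityConfig` and the `REQUIRE_TOXICITY_CHECK` flag are hypothetical names for illustration; only `PERSPECTIVE_APIKEY` comes from the PR.

```javascript
// Hypothetical fail-fast check (e.g. in utils/validation.js): throw at
// startup when toxicity filtering is required but the key is absent,
// instead of silently disabling the feature at runtime.
function validateToxicityConfig(env = process.env) {
  const required = env.REQUIRE_TOXICITY_CHECK === 'true'; // assumed flag
  const hasKey = Boolean(env.PERSPECTIVE_APIKEY);
  if (required && !hasKey) {
    throw new Error(
      'PERSPECTIVE_APIKEY must be set when REQUIRE_TOXICITY_CHECK=true'
    );
  }
  return hasKey; // caller can log a warning when false (optional mode)
}
```

An explicit flag keeps the optional behavior available while making the "mandatory in production" case loud at boot rather than a silent no-op.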
226-281
: LGTM! Comprehensive AI support handling with proper fallback.

The implementation includes:
- Rate limiting (1 message per second)
- Input validation
- Typing indicators with cleanup
- Proper error handling
- User-friendly error messages
The fallback pattern at line 251 (`process.env.AI_SUPPORT_URL || process.env.AI_SUPPRT_URL`) correctly supports backward compatibility during migration.
290-335
: LGTM! Well-structured feedback collection.

The method properly:
- Sets up reaction-based feedback with appropriate timeout (3 minutes)
- Uses the same fallback pattern for API URL consistency
- Handles all errors gracefully
- Provides user-friendly feedback messages
The try-catch wrapper ensures that any failures in setting up reactions won't crash the bot.
340-346
: LGTM! Proper cleanup implementation.

The method correctly clears the rate limit cleanup interval and nullifies the reference before calling `super.destroy()`.

Note: Active typing intervals and feedback collectors are per-message and will be cleaned up when their respective promises/collectors complete or timeout, so they don't need explicit cleanup here.
- Sanitize branch names for Docker tags in CI workflow
- Optimize Dockerfile production stage with fresh dependency install
- Replace Redis KEYS with SCAN for better performance
- Update Reddit API fetch to use AbortController
- Add scanIterator helper for efficient key scanning
Actionable comments posted: 4
♻️ Duplicate comments (1)
.github/workflows/dev-build.yml (1)
47-47
: Broaden the change filter to include Docker variants, lockfiles, scripts, and health checks.

Current regex misses docker-compose*, lockfiles, scripts, .dockerignore, and health-check files; relevant changes may be skipped.

Suggested update:

```diff
-          if [ -n "$CHANGED" ] && printf '%s' "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package\.json|Dockerfile|\.github/workflows/)'; then
+          if [ -n "$CHANGED" ] && printf '%s' "$CHANGED" | grep -qE '^(commands/|src/|lib/|bot/|index\.js|client\.js|package(-lock)?\.json|pnpm-lock\.yaml|yarn\.lock|Dockerfile.*|\.dockerignore|docker-compose.*\.ya?ml|example\.docker-compose\.ya?ml|docker-manage\.sh|health-check\.js|scripts/|\.github/workflows/|\.eslintrc(\.(json|js|cjs|ya?ml))?|tsconfig(\..*)?\.json)'; then
```
🧹 Nitpick comments (3)
rss.js (1)
44-48
: Consider processing keys per-batch to reduce memory usage.

Collecting all keys into memory before processing could cause issues with large keysets (10k+ feeds). Although `scanIterator` avoids blocking Redis, the current implementation buffers everything.

Either process keys per batch:

```diff
-  const keys = [];
-  for await (const batch of scanIterator('monitor:*')) {
-    keys.push(...batch);
-  }
-  console.log(`[RSS] Processing ${keys.length} configured feeds`);
-
-  for (const key of keys) {
+  let totalKeys = 0;
+  for await (const batch of scanIterator('monitor:*')) {
+    for (const key of batch) {
+      totalKeys++;
-    if (isShuttingDown) break;
-
-    try {
-      await processSingleFeed(client, key);
-    } catch (error) {
-      console.error(`[RSS] Error processing feed ${key}:`, error.message);
+      if (isShuttingDown) break;
+
+      try {
+        await processSingleFeed(client, key);
+      } catch (error) {
+        console.error(`[RSS] Error processing feed ${key}:`, error.message);
+      }
     }
+    if (isShuttingDown) break;
   }
+  console.log(`[RSS] Processed ${totalKeys} configured feeds`);

   // Periodic cleanup
-  if (keys.length > CLEANUP_THRESHOLD) {
+  if (totalKeys > CLEANUP_THRESHOLD) {
     await cleanupOldEntries();
   }
```

Or add TYPE filtering to scanIterator if your Redis version supports it (Redis 6.0+):

```diff
-async function* scanIterator(pattern) {
+async function* scanIterator(pattern, options = {}) {
   let cursor = '0';
   do {
-    const [next, keys] = await redisClient.scan(cursor, { MATCH: pattern, COUNT: 100 });
+    const [next, keys] = await redisClient.scan(cursor, { MATCH: pattern, COUNT: 100, ...options });
     cursor = next;
     yield keys;
   } while (cursor !== '0');
 }
```

Then call with TYPE filter:

```diff
-  for await (const batch of scanIterator('monitor:*')) {
+  for await (const batch of scanIterator('monitor:*', { TYPE: 'hash' })) {
```

Dockerfile (2)
38-41
: Improve build cache efficiency for prod deps.

Reinstalling after copying the whole app invalidates cache on any source change. Prefer a prod-deps stage (or copy package*.json first, install, then copy app). This speeds rebuilds and reduces network use.

Example approach:

```diff
+FROM base AS prod-deps
+WORKDIR /app
+COPY package*.json ./
+RUN npm ci --omit=dev
+
 FROM node:22-alpine AS production
 WORKDIR /app
@@
-# Copy application files first
-COPY --from=build --chown=nodejs:nodejs /app .
-
-# Ensure production-only dependencies
-RUN rm -rf node_modules \
-  && npm ci --omit=dev \
-  && npm cache clean --force
+# Copy application files
+COPY --from=build --chown=nodejs:nodejs /app .
+# Copy production node_modules
+COPY --from=prod-deps --chown=nodejs:nodejs /app/node_modules ./node_modules
+RUN npm cache clean --force
```
3-6
: Optional: pin base image by digest for supply-chain hardening.

Pin node:22-alpine to a digest to avoid surprise updates and improve reproducibility.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
.github/workflows/dev-build.yml (1 hunks)
Dockerfile (1 hunks)
rss.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
rss.js (3)
index.js (1)
  client (37-39)
commands/bot/monitor.js (9)
  keys (107-107)
  key (77-77)
  key (100-100)
  key (133-133)
  type (74-74)
  type (112-112)
  url (75-75)
  url (98-98)
  url (131-131)
redisClient.js (1)
  redisClient (3-10)
🪛 Biome (2.1.2)
rss.js
[error] 222-223: This code will never be reached ...
... because this statement will return from the function beforehand
(lint/correctness/noUnreachable)
[error] 245-246: This code will never be reached ...
... because either this statement will return from the function, ...
... this statement will throw an exception, ...
... this statement will return from the function, ...
... this statement will return from the function, ...
... or this statement will return from the function beforehand
(lint/correctness/noUnreachable)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: dev-build
🔇 Additional comments (9)
rss.js (8)
21-36
: LGTM! The main monitoring loop correctly implements graceful shutdown signaling, error recovery, and rate limiting between iterations.
74-129
: LGTM! The per-feed processing correctly implements:
- Configuration validation
- Rate limiting with 1-minute minimum interval
- Deduplication via Redis sets
- 7-day TTL on tracking keys
- Proper error handling per message
135-144
: LGTM!Defensive type checking prevents WRONGTYPE errors when keys are accidentally created with incorrect types.
169-180
: LGTM! Past issue resolved.

The AbortController-based timeout correctly addresses the previous critical issue where Node's fetch doesn't support a native timeout option. The implementation properly clears the timeout in the finally block.
Based on past review comments.
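The generic shape of the AbortController timeout pattern this comment approves is sketched below. `withAbortTimeout` is an illustrative helper, not the actual code in rss.js: it runs any signal-aware task and aborts it after `timeoutMs`.

```javascript
// Sketch of the AbortController timeout pattern (assumed helper name).
async function withAbortTimeout(task, timeoutMs) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // task receives the signal and should cancel/reject when it fires.
    return await task(controller.signal);
  } finally {
    clearTimeout(timer); // mirrors the finally-block cleanup noted above
  }
}

// Usage with fetch (Node 18+): the signal cancels the request on timeout.
// await withAbortTimeout(signal => fetch(url, { signal }), 10000);
```

Clearing the timer in `finally` matters: without it, a fast response would leave a pending timer keeping the event loop busy and firing a useless abort later.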
202-221
: LGTM! Past issue resolved.

The cleanup now uses `scanIterator` instead of blocking `KEYS`, and processes keys per-batch to avoid memory issues. This correctly addresses the previous major issue.

Based on past review comments.
228-235
: LGTM! The async generator correctly implements non-blocking SCAN-based iteration with cursor management.
251-254
: LGTM! The exported `stopRSS` function provides a clean graceful shutdown mechanism by signaling the monitoring loop to exit.
9-9
: Timeout option is supported — `rss-parser` supports the `timeout` constructor option; `timeout: 10000` is valid.

Dockerfile (1)
29-41
: Good fix: production no longer inherits build toolchain and avoids shipping dev deps.

Switching to node:22-alpine for production and reinstalling with --omit=dev resolves prior critical issues.
Actionable comments posted: 1
♻️ Duplicate comments (1)
rss.js (1)
243-244
: Remove unreachable duplicate code.

Lines 243-244 duplicate the return statement from lines 241-242 and are unreachable. This was flagged in a previous review but remains unaddressed.

Apply this diff:

```diff
 function sleep(ms) {
   return new Promise(resolve => setTimeout(resolve, ms));
 }
-  return new Promise(resolve => setTimeout(resolve, ms));
-}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
rss.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
rss.js (3)
index.js (1)
  client (37-39)
commands/bot/monitor.js (9)
  keys (107-107)
  key (77-77)
  key (100-100)
  key (133-133)
  type (74-74)
  type (112-112)
  url (75-75)
  url (98-98)
  url (131-131)
redisClient.js (1)
  redisClient (3-10)
🔇 Additional comments (2)
rss.js (2)
169-180
: LGTM: Fetch timeout properly implemented.

The AbortController pattern correctly addresses the previous review concern about Node.js fetch lacking native timeout support. The timeout is properly cleared in the `finally` block, preventing resource leaks.
206-214
: LGTM: Cleanup uses non-blocking SCAN.

The cleanup routine correctly uses `scanIterator` instead of blocking `KEYS` commands, processing batches to avoid memory spikes. This properly addresses the previous review concern.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
rss.js (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
rss.js (3)
index.js (1)
  client (37-39)
commands/bot/monitor.js (9)
  keys (107-107)
  key (77-77)
  key (100-100)
  key (133-133)
  type (74-74)
  type (112-112)
  url (75-75)
  url (98-98)
  url (131-131)
redisClient.js (1)
  redisClient (3-10)
🔇 Additional comments (2)
rss.js (2)
116-129
: Verify duplicate-send risk if Redis operations fail after message delivery.

If `channel.send` succeeds but the subsequent `sAdd` or `expire` operations fail (lines 122-123), the item won't be marked as sent. On the next poll, the same item will be resent to Discord, potentially spamming the channel.

Consider these options:

- Move `sAdd` before `channel.send` to ensure deduplication even if send fails (may result in missed messages but no duplicates).
- Add explicit error handling for `sAdd`/`expire` and log a warning about potential duplicates.
- Wrap the entire block in a transaction if Redis supports it in your client version.
Which approach aligns better with your requirements: preventing duplicates (option 1) or ensuring delivery (current approach)?
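Option 1 above (mark before send) can be sketched as follows. `deliverOnce`, the `:sent` key suffix, and the rollback step are illustrative assumptions; the `redisClient`/`channel` shapes assume node-redis-style `sAdd`/`sRem` and a Discord.js-style `send`, not the exact rss.js interfaces.

```javascript
// Sketch of "claim first, send second" deduplication (hypothetical helper).
async function deliverOnce(redisClient, channel, key, item) {
  // sAdd returns 1 when the member is new, 0 when already present.
  const added = await redisClient.sAdd(`${key}:sent`, item.id);
  if (!added) return false; // already delivered (or claimed) earlier

  try {
    await channel.send(item.content);
    return true;
  } catch (error) {
    // Roll back the claim so a later poll can retry this item once.
    await redisClient.sRem(`${key}:sent`, item.id);
    throw error;
  }
}
```

The rollback softens option 1's trade-off: a failed send releases the claim instead of permanently dropping the item, while a crash between `sAdd` and `send` still errs on the side of no duplicates.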
1-259
: Excellent refactor addressing all previous concerns.

The modular architecture with `scanIterator`, AbortController-based timeouts, suffix filtering, and graceful shutdown handling successfully addresses all major issues flagged in previous reviews. The code is now non-blocking, shutdown-aware, and robust.
```js
    return; // Too soon to check again
  }

  await redisClient.set(lastCheckKey, now);
```
Move `lastCheck` update after successful content fetch.

Setting `lastCheck` before fetching content means that if `fetchContent` fails or returns no items, the next check will still be delayed by the full interval. This could cause legitimate feeds to be skipped during transient failures.

Apply this diff to update `lastCheck` only after successful content retrieval:
```diff
-    await redisClient.set(lastCheckKey, now);
     const items = await fetchContent(guildConfig.type, guildConfig.url);
     if (!items?.length) {
       console.log(`[RSS] No new items for ${key}`);
       return;
     }
+
+    await redisClient.set(lastCheckKey, now);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
    // Only update lastCheck after we've successfully fetched items
    const lastCheckKey = `${key}:lastCheck`;
    const now = Date.now();

    const items = await fetchContent(guildConfig.type, guildConfig.url);
    if (!items?.length) {
      console.log(`[RSS] No new items for ${key}`);
      return;
    }

    await redisClient.set(lastCheckKey, now);
```
🤖 Prompt for AI Agents
In rss.js around line 95, the code sets lastCheck before fetching feed content;
move the redisClient.set(lastCheckKey, now) so it runs only after a successful
fetch. Specifically, call redisClient.set(...) after fetchContent resolves and
only when fetch returned items (e.g., items && items.length > 0), and ensure
fetchContent errors are caught so the timestamp is not updated on failure.
Breaking Changes:
Migration Guide:
Summary by CodeRabbit
New Features
Improvements
Build
CI
Documentation