feat: mcp service#1485

Open
tankerkiller125 wants to merge 3 commits into main from mk/mcp
Conversation


@tankerkiller125 tankerkiller125 commented May 10, 2026

What type of PR is this?

  • feature

What this PR does / why we need it:

Adds a basic read-only MCP server to Homebox for AI agents to use.

Summary by CodeRabbit

Release Notes

  • New Features
    • Added optional Model Context Protocol (MCP) server integration, disabled by default and configurable
    • MCP exposes tools for querying inventory data: search items, list groups, retrieve item details, view maintenance schedules, and browse entity types
    • All MCP endpoints enforce the same authentication and authorization as the main application

Review Change Stack


coderabbitai Bot commented May 10, 2026

Walkthrough

This PR introduces Model Context Protocol (MCP) support to Homebox, enabling AI tools to query inventory via HTTP. It adds optional endpoint mounting at /api/v1/mcp guarded by configuration, implements eight read-only query tools with per-group access control, and integrates with existing authentication middleware.

Changes

Model Context Protocol (MCP) Server Integration

Layer / File(s) Summary
Configuration & Dependencies
backend/internal/sys/config/conf.go, backend/go.mod
Added MCPConfig struct with Enabled boolean flag (default false) to Config. Updated Go module to require MCP SDK v1.5.0 and indirect JSON schema/encoding dependencies.
Authentication & Context Helpers
backend/internal/mcp/context.go
Exports ServiceCtx() to build authenticated request context and ResolveGroup() to scope context to optional group UUID with membership validation. Defines ErrNoAuthContext and ErrGroupNotMember errors.
Dependency Injection & Tool Registry
backend/internal/mcp/deps.go, backend/internal/mcp/registry.go
Deps struct holds service and repository bundles for tool handlers. Registrar function type and global registry enable per-file tool self-registration via init() + mcp.Register().
JSON Schema Generation
backend/internal/mcp/schema.go
MustSchema[T]() generic helper generates JSON schemas with UUID fields canonicalized to string/uuid format via type overrides in jsonschema inference.
Server Factory
backend/internal/mcp/server.go
NewHandler(deps) constructs MCP server (identified as "homebox" v0.1.0), invokes tool registration, and returns streamable HTTP handler.
HTTP Route Integration
backend/app/api/routes.go
Conditionally mounts GET/POST/DELETE /api/v1/mcp endpoints when conf.Mcp.Enabled is true, wrapping handlers with existing userMW authentication/authorization middleware.
Tools Framework Documentation
backend/internal/mcp/tools/doc.go
Package-level documentation explaining tool registration pattern (one tool per file, init-based registration) and context-derivation guidelines for user/group scoping.
Inventory Query Tools
backend/internal/mcp/tools/*.go (8 tools)
search_items: paginated query by text/type. get_item: full entity details. get_entity_path: breadcrumb hierarchy. get_group_statistics: aggregate counters. list_entity_types: entity type catalog. list_labels: tag listing. list_my_groups: user's group memberships. get_maintenance_schedule: filtered maintenance entries. All resolve group context and validate membership before querying repositories.
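The per-file self-registration pattern described for registry.go can be sketched as follows. This is a minimal stdlib-only sketch; Server, Deps, and the tool name are hypothetical stand-ins, not the actual Homebox or MCP SDK types.

```go
package main

import "fmt"

// Server stands in for the MCP SDK's server type and Deps for the
// service/repository bundle; both are hypothetical here.
type Server struct{ tools []string }

type Deps struct{}

// Registrar attaches one tool to the server.
type Registrar func(s *Server, d Deps)

var registry []Registrar

// Register is called from each tool file's init(), so importing the
// tools package is enough to wire up every tool. The nil guard is the
// defensive check suggested later in this review.
func Register(r Registrar) {
	if r == nil {
		return
	}
	registry = append(registry, r)
}

// registerAll replays every recorded registration against a server.
func registerAll(s *Server, d Deps) {
	for _, r := range registry {
		r(s, d)
	}
}

func init() {
	// In the real layout this init() would live in its own tool file,
	// e.g. search_items.go.
	Register(func(s *Server, d Deps) {
		s.tools = append(s.tools, "search_items")
	})
}

func main() {
	s := &Server{}
	registerAll(s, Deps{})
	fmt.Println(s.tools) // [search_items]
}
```

The design keeps each tool's wiring next to its handler, so adding a tool is a one-file change with no central switch statement to update.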

Security Recommendations

🔐 Key Security Observations:

  1. Authentication & Authorization: All MCP tools are correctly wrapped by userMW middleware, ensuring every request is authenticated and authorized before tool execution. The ResolveGroup() validation confirms user membership in requested groups—this is strong and prevents cross-group data leakage.

  2. Input Sanitization: Tool input structs (e.g., searchItemsInput, getMaintenanceScheduleInput) use explicit typed fields rather than opaque JSON. Recommend verifying that repository layer queries properly escape/parameterize any string inputs (query text in search_items) to prevent injection attacks.

  3. Limit/Pagination Enforcement: search_items and get_maintenance_schedule enforce reasonable limits (page_size capped at implicit bounds, maintenance limit capped at 200). Ensure these remain if tools are extended.

  4. Read-Only Operations: All eight tools are read-only queries—no mutations. This is excellent for limiting attack surface. If write tools are added in future PRs, ensure they are gated behind additional authorization checks.

  5. Error Message Leakage: Tool handlers wrap errors with descriptive prefixes (e.g., fmt.Errorf("get item: %w", err)). Ensure these error messages are never returned to clients in production; consider returning generic "tool call failed" messages in the HTTP response layer.

  6. Group Context Derivation: The pattern of extracting user/group from middleware context (via mcp.ServiceCtx()) rather than trusting tool input is correct. This prevents privilege escalation via input forgery.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • sysadminsmedia/homebox#1414: Both PRs modify backend/app/api/routes.go and affect route mounting logic; this PR adds conditional MCP endpoint mounting while the related PR restructures v1 routes.

Suggested labels

⬆️ enhancement, go, review needed

Suggested reviewers

  • katosdev
  • tonyaellie

Poem

🤖 A Protocol Blooms

With context and care, the tools take their place,
Eight queries shine forth in the MCP space,
Each group scoped tight, no data will leak,
AI speaks now with inventory's tongue sleek! ✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The description is incomplete. It includes the PR type and a brief summary but is missing required sections: detailed explanation of changes per file, decisions made, and which issue(s) the PR fixes. Resolution: add file-by-file change summaries, explain key design decisions (e.g., tool registration pattern, context/auth handling), and specify the issue(s) this PR fixes.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 35.71%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (3 passed)

  • Title check (✅ Passed): The title 'feat: mcp service' directly and clearly summarizes the main change: adding an MCP (Model Context Protocol) service to the codebase.
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.


@coderabbitai coderabbitai Bot requested review from katosdev and tonyaellie May 10, 2026 21:22
@coderabbitai coderabbitai Bot added ⬆️ enhancement New feature or request review needed A review is needed on this PR or Issue go Pull requests that update Go code labels May 10, 2026

@coderabbitai coderabbitai Bot left a comment


Warning

CodeRabbit couldn't request changes on this pull request because it doesn't have sufficient GitHub permissions.

Please grant CodeRabbit Pull requests: Read and write permission and re-run the review.


Actionable comments posted: 4

🧹 Nitpick comments (7)
backend/internal/mcp/server.go (1)

34-37: ⚡ Quick win

Add a request-size cap around the MCP transport handler.

Line 34 currently exposes the transport without an explicit body limit; wrapping it with http.MaxBytesHandler helps reduce DoS/memory-pressure risk on POST traffic.

Proposed hardening
 func NewHandler(d Deps) http.Handler {
 	server := mcpsdk.NewServer(&mcpsdk.Implementation{
 		Name:    ServerName,
 		Version: ServerVersion,
 	}, nil)

 	registerAll(server, d)

-	return mcpsdk.NewStreamableHTTPHandler(
+	h := mcpsdk.NewStreamableHTTPHandler(
 		func(*http.Request) *mcpsdk.Server { return server },
 		nil,
 	)
+	return http.MaxBytesHandler(h, 1<<20) // 1 MiB; tune as needed
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/server.go` around lines 34 - 37, The MCP transport
handler returned by mcpsdk.NewStreamableHTTPHandler currently has no request
body limit; wrap the resulting http.Handler with http.MaxBytesHandler to cap
request size (e.g., 10<<20 for 10MB or a configurable constant) before returning
it from the function that builds the handler (the closure that returns the
server passed into mcpsdk.NewStreamableHTTPHandler should remain the same);
ensure you wrap the handler returned by mcpsdk.NewStreamableHTTPHandler (the
value being returned now) with http.MaxBytesHandler(handler, maxBytes) so POST
bodies cannot exceed the set limit.
backend/app/api/routes.go (2)

235-251: ⚡ Quick win

Consider rate limiting for MCP endpoints.

The MCP endpoints are properly protected with userMW (authentication, tenant, and role checks), which is essential for securing AI agent access to inventory data. However, unlike the auth endpoints (lines 92, 95-96), there is no rate limiting applied.

Consider adding rate limiting to prevent potential abuse, especially since AI agents might issue many rapid queries.

💡 Suggested approach

Create an MCP-specific rate limiter similar to a.mwAuthRateLimit and apply it to the MCP endpoints:

 		if a.conf.Mcp.Enabled {
 			mcpHandler := hbmcp.NewHandler(hbmcp.Deps{
 				Services: a.services,
 				Repos:    a.repos,
 			})
 			mcpHandlerFunc := chain.ToHandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
 				mcpHandler.ServeHTTP(w, r)
 				return nil
-			}, userMW...)
+			}, append(userMW, a.mwMcpRateLimit)...)
 			r.Get("/mcp", mcpHandlerFunc)
 			r.Post("/mcp", mcpHandlerFunc)
 			r.Delete("/mcp", mcpHandlerFunc)
 		}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/app/api/routes.go` around lines 235 - 251, Add an MCP-specific rate
limiter and apply it to the MCP routes: when a.conf.Mcp.Enabled is true and you
construct hbmcp.NewHandler(hbmcp.Deps{...}) create a middleware similar to
a.mwAuthRateLimit (e.g., a.mwMcpRateLimit) and include it in the
chain.ToHandlerFunc call that produces mcpHandlerFunc (in addition to userMW) so
the GET/POST/DELETE "/mcp" routes are rate-limited; ensure the new middleware
follows the same signature/behavior as a.mwAuthRateLimit and is configured for
expected MCP throughput.

244-247: ⚡ Quick win

Error handling in external protocol handlers is intentional and correct.

Both the MCP and OpenTelemetry proxy handlers manage their own HTTP responses and error logging internally (OTEL uses http.Error() and log.Error(); MCP wraps the SDK's handler which handles its own response). Returning nil from the wrapper is appropriate since these aren't application logic errors for the error chain middleware to process.

However, consider adding panic recovery around external SDK handler calls for production robustness—if mcpHandler.ServeHTTP() or the OTEL SDK panics unexpectedly, the error chain middleware won't catch or log it. Both endpoints are already protected by authentication middleware.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/app/api/routes.go` around lines 244 - 247, Wrap the external SDK HTTP
handler calls in a panic-recovery block so panics inside mcpHandler.ServeHTTP
(and the OTEL handler) are caught, logged, and translated into an HTTP 500
instead of crashing the process; specifically, update the wrapper that creates
mcpHandlerFunc (and the analogous otel wrapper) to defer a recover() that logs
the panic (using the existing logger) and writes a 500 response, while still
returning nil to the chain so the middleware flow remains unchanged.
backend/internal/mcp/tools/search_items.go (1)

60-63: 💤 Low value

Consider enforcing maximum page size.

While the JSON schema comment suggests "max useful value is around 100," there's no enforcement. An AI agent could request an arbitrarily large page size, potentially causing performance issues.

Consider capping the page size:

 				pageSize := in.PageSize
 				if pageSize <= 0 {
 					pageSize = searchItemsDefaultPageSize
+				} else if pageSize > 100 {
+					pageSize = 100
 				}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/tools/search_items.go` around lines 60 - 63, The handler
currently sets pageSize from in.PageSize with only a default fallback (pageSize
and searchItemsDefaultPageSize); add an upper bound constant (e.g.,
searchItemsMaxPageSize) and clamp pageSize after the existing default logic so
pageSize = max(min(pageSize, searchItemsMaxPageSize), 1). Update any related
validation or comments and use the new searchItemsMaxPageSize constant wherever
pagination limits are documented or enforced.
backend/internal/mcp/tools/get_maintenance_schedule.go (3)

18-22: ⚡ Quick win

Consider adding offset/cursor for pagination beyond the first page.

The input struct accepts limit but no offset or cursor, preventing clients from accessing results beyond the first page. For a read-only MCP tool, pagination support would enable AI agents to explore larger maintenance schedules systematically.

📄 Proposed enhancement to add offset parameter
 type getMaintenanceScheduleInput struct {
 	Status  string    `json:"status,omitempty"   jsonschema:"scheduled | completed | both; defaults to scheduled"`
 	Limit   int       `json:"limit,omitempty"    jsonschema:"max number of entries to return; defaults to 50; capped at 200"`
+	Offset  int       `json:"offset,omitempty"   jsonschema:"number of entries to skip; defaults to 0; use with limit for pagination"`
 	GroupID uuid.UUID `json:"group_id,omitempty" jsonschema:"optional group to query; omit to use your default group; call list_my_groups to see available groups"`
 }

Then pass Offset to the repository query alongside Limit.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 18 - 22,
Add pagination to getMaintenanceScheduleInput by introducing an Offset (int) or
Cursor (string) field alongside Limit in the getMaintenanceScheduleInput struct
(e.g., add Offset int `json:"offset,omitempty"` or Cursor string
`json:"cursor,omitempty"`). Update the handler/logic that calls the repository
method (the code that consumes getMaintenanceScheduleInput and invokes the
repository query) to pass the new Offset/Cursor into the repository query
parameters and enforce sane defaults/limits (e.g., default limit 50, cap 200).
Also update any JSON schema tags for the new field and the repository method
signature if needed so the repository uses the offset/cursor when fetching
results.

51-54: Group access control relies on ResolveGroup authorization.

The tool allows specifying any group_id, relying on mcp.ResolveGroup to enforce permissions. Ensure that ResolveGroup properly validates that the authenticated user has access to the requested group before returning the scoped context.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 51 - 54,
The code calls mcp.ResolveGroup(ctx, in.GroupID) without confirming that
ResolveGroup enforces user authorization; update ResolveGroup (or add an
explicit authorization check immediately after the call) to verify the
authenticated user in ctx has access to the requested group ID before returning
the scoped context sctx. Specifically: ensure ResolveGroup performs permission
checks against the caller's identity/claims in ctx for the given group
(in.GroupID) and returns a non-nil error if unauthorized, or alternatively call
an authorization helper (e.g., AuthorizeGroupAccess(ctx, in.GroupID) or check
sctx.User/claims) inside getMaintenanceSchedule to deny access when the user
lacks permission. Ensure the error path returns an appropriate authorization
error instead of proceeding.

99-100: ⚡ Quick win

Use zero-time checks before serialization to respect omitempty intent.

e.ScheduledDate and e.CompletedDate are non-nullable time.Time values, so calling .String() is safe. However, zero time.Time values will serialize to "0001-01-01 00:00:00 +0000 UTC" instead of being omitted despite the omitempty JSON tag. Conditionally populate these fields only when non-zero, and consider using RFC3339 format for better interoperability with client applications.

💡 Suggested fix
 				item := getMaintenanceScheduleItem{
 					ID:            e.ID,
 					ItemID:        e.ItemID,
 					ItemName:      e.ItemName,
 					Name:          e.Name,
 					Description:   e.Description,
-					ScheduledDate: e.ScheduledDate.String(),
-					CompletedDate: e.CompletedDate.String(),
 					Cost:          e.Cost,
 				}
+				if !e.ScheduledDate.IsZero() {
+					item.ScheduledDate = e.ScheduledDate.Format(time.RFC3339)
+				}
+				if !e.CompletedDate.IsZero() {
+					item.CompletedDate = e.CompletedDate.Format(time.RFC3339)
+				}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 99 -
100, The ScheduledDate and CompletedDate fields are being set unconditionally
from e.ScheduledDate/e.CompletedDate which serializes zero times instead of
respecting omitempty; update the code in get_maintenance_schedule.go where these
fields are assigned (look for the block setting ScheduledDate: and
CompletedDate:) to only populate those JSON fields when e.ScheduledDate.IsZero()
/ e.CompletedDate.IsZero() are false, and format the times using
e.ScheduledDate.Format(time.RFC3339) and e.CompletedDate.Format(time.RFC3339) so
clients receive RFC3339 timestamps and zero values are omitted.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@backend/go.mod`:
- Line 24: The go.mod entry pinning github.com/modelcontextprotocol/go-sdk
v1.5.0 contains unresolved HIGH severity advisories; update the dependency to a
patched release (replace the version string for
github.com/modelcontextprotocol/go-sdk in go.mod and run `go get`/`go mod tidy`
to lock the fixed version) or remove/replace the module if no patch exists and
defer merging this PR until a safe version is available; ensure jsonschema-go
remains at v0.4.2 or higher and re-run `go list -m -u all` / security scanning
to confirm advisories are cleared before merging.

In `@backend/internal/mcp/registry.go`:
- Around line 17-19: Add a nil-check in the Register function to avoid appending
nil registrars: ensure the function (Register) returns early if the passed
Registrar is nil, so registry only contains non-nil entries and registerAll
won’t panic when replaying registrations; reference the Registrar type, the
Register function, and the registry slice to locate where to insert the guard.

In `@backend/internal/mcp/server.go`:
- Line 35: The code returns a shared mcpsdk.Server instance (the server
variable) for every HTTP request, but mcpsdk.Server v1.5.0 has mutable fields
(e.g., pendingNotifications, resourceSubscriptions) that are not
concurrency-safe; replace the singleton usage by constructing a fresh
*mcpsdk.Server per request inside the handler factory (or otherwise
clone/initialize a new server instance for each request) or wrap all accesses to
the shared server state with a mutex protecting pendingNotifications and
resourceSubscriptions and any other mutable fields used by methods invoked from
the HTTP request; update the function that currently returns server to instead
create and return a request-scoped *mcpsdk.Server (or ensure all handler calls
acquire the mutex) so concurrent requests cannot mutate the same internal maps.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go`:
- Around line 76-85: The current code fetches all entries via
Repos.MaintEntry.GetAllMaintenance and then truncates the returned entries slice
(entries[:limit]), which is inefficient and dangerous; update the
MaintenanceFilters type to include Limit and Offset (e.g., Limit *int, Offset
*int), modify GetAllMaintenance to accept and apply those filters at the query
level (use the query's Limit/Offset) and then call GetAllMaintenance passing the
limit (and offset if used) instead of fetching everything, and finally remove
the in-memory truncation of entries in the function that currently slices
entries by limit.
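The fix described above, applying Limit/Offset at the query layer rather than slicing the full result set in memory, can be sketched as follows. The types and the in-memory "store" are hypothetical; with a real database the filter would translate to LIMIT/OFFSET on the SQL query.

```go
package main

import "fmt"

type MaintenanceEntry struct{ Name string }

// MaintenanceFilters carries optional pagination down to the query
// layer, as the review suggests; nil means "no bound".
type MaintenanceFilters struct {
	Status string
	Limit  *int
	Offset *int
}

// fakeStore stands in for the repository's backing table.
var fakeStore = []MaintenanceEntry{{"a"}, {"b"}, {"c"}, {"d"}}

// GetAllMaintenance applies the filters at "query" level, so only the
// requested window is materialized instead of every row.
func GetAllMaintenance(f MaintenanceFilters) []MaintenanceEntry {
	rows := fakeStore
	if f.Offset != nil {
		if *f.Offset < len(rows) {
			rows = rows[*f.Offset:]
		} else {
			rows = nil
		}
	}
	if f.Limit != nil && *f.Limit < len(rows) {
		rows = rows[:*f.Limit]
	}
	return rows
}

func main() {
	limit, offset := 2, 1
	got := GetAllMaintenance(MaintenanceFilters{Limit: &limit, Offset: &offset})
	fmt.Println(len(got), got[0].Name) // 2 b
}
```

Pushing the bound into the query also removes the entries[:limit] hazard the prompt mentions: slicing a short result set past its length would panic, whereas a query-level LIMIT simply returns fewer rows.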

---

Nitpick comments:
In `@backend/app/api/routes.go`:
- Around line 235-251: Add an MCP-specific rate limiter and apply it to the MCP
routes: when a.conf.Mcp.Enabled is true and you construct
hbmcp.NewHandler(hbmcp.Deps{...}) create a middleware similar to
a.mwAuthRateLimit (e.g., a.mwMcpRateLimit) and include it in the
chain.ToHandlerFunc call that produces mcpHandlerFunc (in addition to userMW) so
the GET/POST/DELETE "/mcp" routes are rate-limited; ensure the new middleware
follows the same signature/behavior as a.mwAuthRateLimit and is configured for
expected MCP throughput.
- Around line 244-247: Wrap the external SDK HTTP handler calls in a
panic-recovery block so panics inside mcpHandler.ServeHTTP (and the OTEL
handler) are caught, logged, and translated into an HTTP 500 instead of crashing
the process; specifically, update the wrapper that creates mcpHandlerFunc (and
the analogous otel wrapper) to defer a recover() that logs the panic (using the
existing logger) and writes a 500 response, while still returning nil to the
chain so the middleware flow remains unchanged.

In `@backend/internal/mcp/server.go`:
- Around line 34-37: The MCP transport handler returned by
mcpsdk.NewStreamableHTTPHandler currently has no request body limit; wrap the
resulting http.Handler with http.MaxBytesHandler to cap request size (e.g.,
10<<20 for 10MB or a configurable constant) before returning it from the
function that builds the handler (the closure that returns the server passed
into mcpsdk.NewStreamableHTTPHandler should remain the same); ensure you wrap
the handler returned by mcpsdk.NewStreamableHTTPHandler (the value being
returned now) with http.MaxBytesHandler(handler, maxBytes) so POST bodies cannot
exceed the set limit.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go`:
- Around line 18-22: Add pagination to getMaintenanceScheduleInput by
introducing an Offset (int) or Cursor (string) field alongside Limit in the
getMaintenanceScheduleInput struct (e.g., add Offset int
`json:"offset,omitempty"` or Cursor string `json:"cursor,omitempty"`). Update
the handler/logic that calls the repository method (the code that consumes
getMaintenanceScheduleInput and invokes the repository query) to pass the new
Offset/Cursor into the repository query parameters and enforce sane
defaults/limits (e.g., default limit 50, cap 200). Also update any JSON schema
tags for the new field and the repository method signature if needed so the
repository uses the offset/cursor when fetching results.
- Around line 51-54: The code calls mcp.ResolveGroup(ctx, in.GroupID) without
confirming that ResolveGroup enforces user authorization; update ResolveGroup
(or add an explicit authorization check immediately after the call) to verify
the authenticated user in ctx has access to the requested group ID before
returning the scoped context sctx. Specifically: ensure ResolveGroup performs
permission checks against the caller's identity/claims in ctx for the given
group (in.GroupID) and returns a non-nil error if unauthorized, or alternatively
call an authorization helper (e.g., AuthorizeGroupAccess(ctx, in.GroupID) or
check sctx.User/claims) inside getMaintenanceSchedule to deny access when the
user lacks permission. Ensure the error path returns an appropriate
authorization error instead of proceeding.
- Around line 99-100: The ScheduledDate and CompletedDate fields are being set
unconditionally from e.ScheduledDate/e.CompletedDate which serializes zero times
instead of respecting omitempty; update the code in get_maintenance_schedule.go
where these fields are assigned (look for the block setting ScheduledDate: and
CompletedDate:) to only populate those JSON fields when e.ScheduledDate.IsZero()
/ e.CompletedDate.IsZero() are false, and format the times using
e.ScheduledDate.Format(time.RFC3339) and e.CompletedDate.Format(time.RFC3339) so
clients receive RFC3339 timestamps and zero values are omitted.

In `@backend/internal/mcp/tools/search_items.go`:
- Around line 60-63: The handler currently sets pageSize from in.PageSize with
only a default fallback (pageSize and searchItemsDefaultPageSize); add an upper
bound constant (e.g., searchItemsMaxPageSize) and clamp pageSize after the
existing default logic so pageSize = max(min(pageSize, searchItemsMaxPageSize),
1). Update any related validation or comments and use the new
searchItemsMaxPageSize constant wherever pagination limits are documented or
enforced.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 44b8f1b5-ae91-44ca-858b-ca26f9574bf1

📥 Commits

Reviewing files that changed from the base of the PR and between 89d4a9b and e8d96d8.

⛔ Files ignored due to path filters (1)
  • backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (17)
  • backend/app/api/routes.go
  • backend/go.mod
  • backend/internal/mcp/context.go
  • backend/internal/mcp/deps.go
  • backend/internal/mcp/registry.go
  • backend/internal/mcp/schema.go
  • backend/internal/mcp/server.go
  • backend/internal/mcp/tools/doc.go
  • backend/internal/mcp/tools/get_entity_path.go
  • backend/internal/mcp/tools/get_group_statistics.go
  • backend/internal/mcp/tools/get_item.go
  • backend/internal/mcp/tools/get_maintenance_schedule.go
  • backend/internal/mcp/tools/list_entity_types.go
  • backend/internal/mcp/tools/list_labels.go
  • backend/internal/mcp/tools/list_my_groups.go
  • backend/internal/mcp/tools/search_items.go
  • backend/internal/sys/config/conf.go

Comment thread backend/go.mod
github.com/gorilla/schema v1.4.1
github.com/hay-kot/httpkit v0.0.11
github.com/jackc/pgx/v5 v5.9.2
github.com/modelcontextprotocol/go-sdk v1.5.0


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

`Check OSV/GitHub advisories for:

  • github.com/modelcontextprotocol/go-sdk v1.5.0
  • github.com/google/jsonschema-go v0.4.2
    Return unresolved HIGH/CRITICAL findings and affected version ranges.`

💡 Result:

github.com/modelcontextprotocol/go-sdk v1.5.0 — unresolved HIGH/CRITICAL findings (as of 2026-05-10):
  • GO-2026-4569 (GHSA-wvj2-96wp-fq3f, CVE-2026-27896): Improper Handling of Case Sensitivity. HIGH severity. Affected: >=1.3.1 (no fixed version listed; review_status: UNREVIEWED). v1.5.0 is affected since the issue was introduced in 1.3.1 with no fix beyond it.[1][2][3]
  • Other HIGH advisories listed on the GitHub security page, published Feb-Mar 2026: GHSA-q382-vc8q-7jhj (null Unicode), GHSA-89xv-2j6f-qhc8 (Cross-Site Tool Execution, <=1.4.0), GHSA-xw59-hvm2-8pj6 (DNS Rebinding, <=1.4.0). GO-2026-4569 explicitly covers the >=1.3.1 range, including v1.5.0.[2][4][5]

github.com/google/jsonschema-go v0.4.2: No HIGH/CRITICAL unresolved findings in OSV/GitHub advisories, and no published security advisories on GitHub.[6][7] Note: unrelated vulnerabilities exist in other jsonschema packages (e.g., npm json-schema).[8]


Do not merge with unresolved HIGH severity security advisories in go-sdk dependency.

The MCP dependency chain contains an unresolved HIGH severity advisory (GO-2026-4569/CVE-2026-27896: Improper Handling of Case Sensitivity) affecting go-sdk v1.5.0. Additional HIGH advisories (GHSA-q382-vc8q-7jhj, GHSA-89xv-2j6f-qhc8, GHSA-xw59-hvm2-8pj6) also affect this version with no published fixes available. jsonschema-go v0.4.2 is clear. Either upgrade go-sdk to a patched version or defer this PR pending security resolution.

Also applies to: 156-156

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/go.mod` at line 24, The go.mod entry pinning
github.com/modelcontextprotocol/go-sdk v1.5.0 contains unresolved HIGH severity
advisories; update the dependency to a patched release (replace the version
string for github.com/modelcontextprotocol/go-sdk in go.mod and run `go get`/`go
mod tidy` to lock the fixed version) or remove/replace the module if no patch
exists and defer merging this PR until a safe version is available; ensure
jsonschema-go remains at v0.4.2 or higher and re-run `go list -m -u all` /
security scanning to confirm advisories are cleared before merging.

Comment on lines +17 to +19
func Register(r Registrar) {
registry = append(registry, r)
}


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Guard against nil registrars to prevent panic at registration replay.

If Register(nil) ever happens, Line 23 panics when registerAll runs. A defensive guard here is cheap and avoids brittle startup failures.

Proposed fix
 func Register(r Registrar) {
+	if r == nil {
+		panic("mcp.Register: nil registrar")
+	}
 	registry = append(registry, r)
 }

Also applies to: 22-23

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/registry.go` around lines 17 - 19, Add a nil-check in
the Register function to avoid appending nil registrars: ensure the function
(Register) returns early if the passed Registrar is nil, so registry only
contains non-nil entries and registerAll won’t panic when replaying
registrations; reference the Registrar type, the Register function, and the
registry slice to locate where to insert the guard.

registerAll(server, d)

return mcpsdk.NewStreamableHTTPHandler(
func(*http.Request) *mcpsdk.Server { return server },


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

In github.com/modelcontextprotocol/go-sdk/mcp v1.5.0, is *mcp.Server documented as safe for concurrent use across multiple simultaneous HTTP requests when reused as a singleton?

💡 Result:

No. In v1.5.0, the code/docs do not document mcp.Server as "safe for concurrent use" across multiple simultaneous HTTP requests when reused as a singleton.

What the authoritative docs say instead:

- The Go SDK's protocol documentation explicitly says MCP has no concurrency guarantees at the spec level, and the Go SDK only implements certain heuristics (e.g., async handling of tools/calls) rather than stating the underlying server object is concurrency-safe for reuse across concurrent HTTP requests. [1]

What's visible from the server implementation (indirectly relevant to the question):

- mcp.NewServer builds a *Server with internal mutable fields like maps (e.g., pendingNotifications, resourceSubscriptions) that are not documented here as protected for concurrent access. [2]

Practical implication for your scenario (singleton across concurrent HTTP requests):

- Because there is no explicit "safe for concurrent use" guarantee for the singleton *mcp.Server object in v1.5.0, you should not assume it is safe. Instead, assume that per-request/per-session state must be handled by the session/transport layers and your handlers must be concurrency-safe themselves. The SDK doc indicates calls are handled asynchronously relative to each other, which increases the likelihood of shared-state issues if you rely on a singleton server object without synchronization. [1]

If you want, share the specific HTTP entrypoint/transport you are using in v1.5.0 (e.g., streamable HTTP handler variant and how you store/route *mcp.Server), and I can check the exact transport/session code paths to determine where concurrency boundaries are enforced.


🏁 Script executed:

cat -n backend/internal/mcp/server.go | head -60

Repository: sysadminsmedia/homebox

Length of output: 1495


The shared *mcpsdk.Server instance is unsafe for concurrent use in v1.5.0—implement per-request server instances or add synchronization.

The web search confirms that mcp.Server v1.5.0 lacks documented concurrency guarantees and contains mutable internal fields (e.g., pendingNotifications, resourceSubscriptions) that are not protected for concurrent access. Reusing a singleton server across simultaneous HTTP requests creates a data race risk—concurrent tool invocations or protocol state updates may corrupt shared maps or introduce inconsistent state.

Mitigation options:

  • Create a new *mcpsdk.Server per HTTP request instead of reusing a singleton, or
  • Add mutex synchronization to protect all shared state within tool handlers

Security note: Data races in protocol handlers could allow users to access each other's context or tool results, particularly in multi-tenant scenarios where requests carry different user/tenant information through context.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/server.go` at line 35, the code returns a shared
mcpsdk.Server instance (the server variable) for every HTTP request, but
mcpsdk.Server v1.5.0 has mutable fields (e.g., pendingNotifications,
resourceSubscriptions) that are not concurrency-safe. Either construct a fresh
*mcpsdk.Server per request inside the handler factory (or otherwise
clone/initialize a new server instance for each request), or wrap all accesses
to the shared server state with a mutex protecting pendingNotifications,
resourceSubscriptions, and any other mutable fields used by methods invoked
from the HTTP request. Update the function that currently returns server so it
creates and returns a request-scoped *mcpsdk.Server (or ensure all handler
calls acquire the mutex), so concurrent requests cannot mutate the same
internal maps.
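A minimal sketch of the per-request option, using a hypothetical server struct in place of *mcpsdk.Server (the real SDK types and the exact NewStreamableHTTPHandler signature are not reproduced here). The point is that the handler factory allocates a fresh instance on each call instead of capturing one shared instance:

```go
package main

import (
	"fmt"
	"net/http"
)

// server is a stand-in for *mcpsdk.Server: it carries mutable per-session
// maps that are not safe to share across concurrent requests.
type server struct {
	pendingNotifications map[string]string
}

func newServer() *server {
	return &server{pendingNotifications: make(map[string]string)}
}

// perRequestFactory mirrors the shape of the factory the streamable HTTP
// handler accepts, but builds a fresh server per request rather than
// returning a captured singleton, so requests never share mutable state.
func perRequestFactory() func(*http.Request) *server {
	return func(*http.Request) *server {
		return newServer()
	}
}

func main() {
	factory := perRequestFactory()
	a := factory(nil)
	b := factory(nil)
	fmt.Println(a != b) // each call yields a distinct instance
}
```

The trade-off is re-running tool registration per request; if that proves too costly, the mutex route keeps the singleton but serializes access to its mutable maps.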

Comment on lines +76 to +85
entries, err := d.Repos.MaintEntry.GetAllMaintenance(ctx, sctx.GID, repo.MaintenanceFilters{
Status: status,
})
if err != nil {
return nil, getMaintenanceScheduleOutput{}, fmt.Errorf("get maintenance schedule: %w", err)
}

if len(entries) > limit {
entries = entries[:limit]
}

⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check MaintenanceFilters struct to see if it supports limit/offset

# Find the MaintenanceFilters definition
ast-grep --pattern $'type MaintenanceFilters struct {
  $$$
}'

# Also search for GetAllMaintenance signature to see what parameters it accepts
ast-grep --pattern $'func ($$$) GetAllMaintenance($$$) ($$$) {
  $$$
}'

Repository: sysadminsmedia/homebox

Length of output: 2922


Fetch-then-truncate causes inefficient database queries.

The code fetches ALL maintenance entries from the database, then truncates to the limit in memory (lines 83-85). With large maintenance datasets, this wastes database and memory resources.

Pass limit to GetAllMaintenance so the database can apply LIMIT directly. Note: MaintenanceFilters currently only includes a Status field—you'll need to extend it with Limit (and ideally Offset) fields first before updating the query.

Security note: This fetch-all-then-truncate pattern can become a DoS vector if a group accumulates thousands of maintenance entries, since the query returns all matching records regardless of the intended limit.

⚡ Proposed approach (requires MaintenanceFilters update first)
  1. Extend MaintenanceFilters in backend/internal/data/repo/repo_maintenance.go:
type MaintenanceFilters struct {
	Status MaintenanceFilterStatus `json:"status" schema:"status"`
	Limit  *int                    `json:"limit" schema:"limit"`
	Offset *int                    `json:"offset" schema:"offset"`
}
  2. Update GetAllMaintenance to apply limits at the query level (use Ent's .Limit() and .Offset() on the query).

  3. Remove the in-memory truncation at lines 83-85.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 76 - 85,
the current code fetches all entries via Repos.MaintEntry.GetAllMaintenance and
then truncates the returned slice in memory (entries[:limit]), which is
inefficient and dangerous. Extend the MaintenanceFilters type with Limit and
Offset fields (e.g., Limit *int, Offset *int), modify GetAllMaintenance to
accept and apply those filters at the query level (use the query's
Limit/Offset), call GetAllMaintenance passing the limit (and offset if used)
instead of fetching everything, and finally remove the in-memory truncation
that currently slices entries by limit.
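A runnable sketch of the intended push-down, with an in-memory slice standing in for the Ent query; the filters struct and fetch helper below are illustrative, not the repo's actual MaintenanceFilters/GetAllMaintenance API:

```go
package main

import "fmt"

// filters mirrors the proposed MaintenanceFilters extension with
// optional Limit and Offset; nil means "no constraint".
type filters struct {
	Status string
	Limit  *int
	Offset *int
}

// fetch simulates a data layer that applies Offset and Limit while
// producing the result set, instead of returning every matching row
// and truncating afterward in the caller.
func fetch(all []string, f filters) []string {
	out := all
	if f.Offset != nil {
		if *f.Offset >= len(out) {
			return nil
		}
		out = out[*f.Offset:]
	}
	if f.Limit != nil && *f.Limit < len(out) {
		out = out[:*f.Limit]
	}
	return out
}

func main() {
	entries := []string{"a", "b", "c", "d", "e"}
	limit, offset := 2, 1
	fmt.Println(fetch(entries, filters{Limit: &limit, Offset: &offset}))
}
```

Against a real database the same two constraints would translate to the query builder's Limit/Offset calls, so the row count returned to the application never exceeds the requested page.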


Labels

⬆️ enhancement New feature or request go Pull requests that update Go code review needed A review is needed on this PR or Issue
