feat: mcp service #1485
Conversation
Walkthrough
This PR introduces Model Context Protocol (MCP) support to Homebox, enabling AI tools to query inventory via HTTP. It adds optional endpoint mounting at …
Changes
Model Context Protocol (MCP) Server Integration
Security Recommendations
🔐 Key Security Observations:
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 3 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
Warning
CodeRabbit couldn't request changes on this pull request because it doesn't have sufficient GitHub permissions.
Please grant CodeRabbit Pull requests: Read and write permission and re-run the review.
Actionable comments posted: 4
🧹 Nitpick comments (7)
backend/internal/mcp/server.go (1)
34-37: ⚡ Quick win
Add a request-size cap around the MCP transport handler.
Line 34 currently exposes the transport without an explicit body limit; wrapping it with http.MaxBytesHandler helps reduce DoS/memory-pressure risk on POST traffic.
Proposed hardening
 func NewHandler(d Deps) http.Handler {
 	server := mcpsdk.NewServer(&mcpsdk.Implementation{
 		Name:    ServerName,
 		Version: ServerVersion,
 	}, nil)
 	registerAll(server, d)
-	return mcpsdk.NewStreamableHTTPHandler(
+	h := mcpsdk.NewStreamableHTTPHandler(
 		func(*http.Request) *mcpsdk.Server { return server },
 		nil,
 	)
+	return http.MaxBytesHandler(h, 1<<20) // 1 MiB; tune as needed
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/internal/mcp/server.go` around lines 34 - 37, The MCP transport handler returned by mcpsdk.NewStreamableHTTPHandler currently has no request body limit; wrap the resulting http.Handler with http.MaxBytesHandler to cap request size (e.g., 10<<20 for 10MB or a configurable constant) before returning it from the function that builds the handler (the closure that returns the server passed into mcpsdk.NewStreamableHTTPHandler should remain the same); ensure you wrap the handler returned by mcpsdk.NewStreamableHTTPHandler (the value being returned now) with http.MaxBytesHandler(handler, maxBytes) so POST bodies cannot exceed the set limit.
backend/app/api/routes.go (2)
235-251: ⚡ Quick win
Consider rate limiting for MCP endpoints.
The MCP endpoints are properly protected with userMW (authentication, tenant, and role checks), which is essential for securing AI agent access to inventory data. However, unlike the auth endpoints (lines 92, 95-96), there is no rate limiting applied.
Consider adding rate limiting to prevent potential abuse, especially since AI agents might issue many rapid queries.
💡 Suggested approach
Create an MCP-specific rate limiter similar to a.mwAuthRateLimit and apply it to the MCP endpoints:
 if a.conf.Mcp.Enabled {
 	mcpHandler := hbmcp.NewHandler(hbmcp.Deps{
 		Services: a.services,
 		Repos:    a.repos,
 	})
 	mcpHandlerFunc := chain.ToHandlerFunc(func(w http.ResponseWriter, r *http.Request) error {
 		mcpHandler.ServeHTTP(w, r)
 		return nil
-	}, userMW...)
+	}, append(userMW, a.mwMcpRateLimit)...)
 	r.Get("/mcp", mcpHandlerFunc)
 	r.Post("/mcp", mcpHandlerFunc)
 	r.Delete("/mcp", mcpHandlerFunc)
 }
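For reference, a minimal sketch of what such a limiter could look like as a plain net/http middleware. The name newMcpRateLimit, the limits, and the single global bucket are assumptions, not Homebox's existing mwAuthRateLimit implementation:

package api

import (
	"net/http"

	"golang.org/x/time/rate"
)

// newMcpRateLimit builds a middleware that rejects requests once the shared
// token bucket is exhausted. A production version would likely key limiters
// per user or per client IP instead of using one global bucket.
func newMcpRateLimit(rps float64, burst int) func(http.Handler) http.Handler {
	limiter := rate.NewLimiter(rate.Limit(rps), burst)
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if !limiter.Allow() {
				http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
				return
			}
			next.ServeHTTP(w, r)
		})
	}
}

The returned middleware would still need adapting to whatever middleware signature userMW uses before being appended to the chain.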
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/api/routes.go` around lines 235 - 251, Add an MCP-specific rate limiter and apply it to the MCP routes: when a.conf.Mcp.Enabled is true and you construct hbmcp.NewHandler(hbmcp.Deps{...}) create a middleware similar to a.mwAuthRateLimit (e.g., a.mwMcpRateLimit) and include it in the chain.ToHandlerFunc call that produces mcpHandlerFunc (in addition to userMW) so the GET/POST/DELETE "/mcp" routes are rate-limited; ensure the new middleware follows the same signature/behavior as a.mwAuthRateLimit and is configured for expected MCP throughput.
244-247: ⚡ Quick win
Error handling in external protocol handlers is intentional and correct.
Both the MCP and OpenTelemetry proxy handlers manage their own HTTP responses and error logging internally (OTEL uses http.Error() and log.Error(); MCP wraps the SDK's handler, which handles its own response). Returning nil from the wrapper is appropriate since these aren't application logic errors for the error chain middleware to process.
However, consider adding panic recovery around external SDK handler calls for production robustness: if mcpHandler.ServeHTTP() or the OTEL SDK panics unexpectedly, the error chain middleware won't catch or log it. Both endpoints are already protected by authentication middleware.
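A generic sketch of such a recovery wrapper (the helper name and the logging hook are illustrative; in practice Homebox's existing logger would be plugged in):

package api

import "net/http"

// recoverPanics wraps an http.Handler so an unexpected panic inside it is
// logged and translated into a 500 response instead of crashing the process.
func recoverPanics(next http.Handler, logf func(format string, args ...any)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if rec := recover(); rec != nil {
				logf("panic in external handler: %v", rec)
				http.Error(w, "internal server error", http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}

The MCP handler could then be wrapped (e.g., mcpHandler = recoverPanics(mcpHandler, log.Printf)) before being passed into chain.ToHandlerFunc, keeping the existing return-nil behavior unchanged.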
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/app/api/routes.go` around lines 244 - 247, Wrap the external SDK HTTP handler calls in a panic-recovery block so panics inside mcpHandler.ServeHTTP (and the OTEL handler) are caught, logged, and translated into an HTTP 500 instead of crashing the process; specifically, update the wrapper that creates mcpHandlerFunc (and the analogous otel wrapper) to defer a recover() that logs the panic (using the existing logger) and writes a 500 response, while still returning nil to the chain so the middleware flow remains unchanged.
backend/internal/mcp/tools/search_items.go (1)
60-63: 💤 Low value
Consider enforcing maximum page size.
While the JSON schema comment suggests "max useful value is around 100," there's no enforcement. An AI agent could request an arbitrarily large page size, potentially causing performance issues.
Consider capping the page size:
 pageSize := in.PageSize
 if pageSize <= 0 {
 	pageSize = searchItemsDefaultPageSize
+} else if pageSize > 100 {
+	pageSize = 100
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/internal/mcp/tools/search_items.go` around lines 60 - 63, The handler currently sets pageSize from in.PageSize with only a default fallback (pageSize and searchItemsDefaultPageSize); add an upper bound constant (e.g., searchItemsMaxPageSize) and clamp pageSize after the existing default logic so pageSize = max(min(pageSize, searchItemsMaxPageSize), 1). Update any related validation or comments and use the new searchItemsMaxPageSize constant wherever pagination limits are documented or enforced.
backend/internal/mcp/tools/get_maintenance_schedule.go (3)
18-22: ⚡ Quick win
Consider adding offset/cursor for pagination beyond the first page.
The input struct accepts limit but no offset or cursor, preventing clients from accessing results beyond the first page. For a read-only MCP tool, pagination support would enable AI agents to explore larger maintenance schedules systematically.
📄 Proposed enhancement to add offset parameter
 type getMaintenanceScheduleInput struct {
 	Status  string    `json:"status,omitempty" jsonschema:"scheduled | completed | both; defaults to scheduled"`
 	Limit   int       `json:"limit,omitempty" jsonschema:"max number of entries to return; defaults to 50; capped at 200"`
+	Offset  int       `json:"offset,omitempty" jsonschema:"number of entries to skip; defaults to 0; use with limit for pagination"`
 	GroupID uuid.UUID `json:"group_id,omitempty" jsonschema:"optional group to query; omit to use your default group; call list_my_groups to see available groups"`
 }
Then pass Offset to the repository query alongside Limit.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 18 - 22, Add pagination to getMaintenanceScheduleInput by introducing an Offset (int) or Cursor (string) field alongside Limit in the getMaintenanceScheduleInput struct (e.g., add Offset int `json:"offset,omitempty"` or Cursor string `json:"cursor,omitempty"`). Update the handler/logic that calls the repository method (the code that consumes getMaintenanceScheduleInput and invokes the repository query) to pass the new Offset/Cursor into the repository query parameters and enforce sane defaults/limits (e.g., default limit 50, cap 200). Also update any JSON schema tags for the new field and the repository method signature if needed so the repository uses the offset/cursor when fetching results.
51-54: Group access control relies on ResolveGroup authorization.
The tool allows specifying any group_id, relying on mcp.ResolveGroup to enforce permissions. Ensure that ResolveGroup properly validates that the authenticated user has access to the requested group before returning the scoped context.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 51 - 54, The code calls mcp.ResolveGroup(ctx, in.GroupID) without confirming that ResolveGroup enforces user authorization; update ResolveGroup (or add an explicit authorization check immediately after the call) to verify the authenticated user in ctx has access to the requested group ID before returning the scoped context sctx. Specifically: ensure ResolveGroup performs permission checks against the caller's identity/claims in ctx for the given group (in.GroupID) and returns a non-nil error if unauthorized, or alternatively call an authorization helper (e.g., AuthorizeGroupAccess(ctx, in.GroupID) or check sctx.User/claims) inside getMaintenanceSchedule to deny access when the user lacks permission. Ensure the error path returns an appropriate authorization error instead of proceeding.
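A minimal sketch of the shape of the check being asked for. The helper and the injected membership lookup are hypothetical, not existing Homebox APIs; the real check would live in or right after ResolveGroup:

package mcp

import (
	"context"
	"fmt"

	"github.com/google/uuid"
)

// authorizeGroupAccess denies access unless the authenticated user belongs to
// the requested group. The membership lookup is passed in as a callback
// because the concrete repository method here is an assumption.
func authorizeGroupAccess(ctx context.Context, userID, groupID uuid.UUID,
	isMember func(ctx context.Context, userID, groupID uuid.UUID) (bool, error),
) error {
	ok, err := isMember(ctx, userID, groupID)
	if err != nil {
		return fmt.Errorf("check group membership: %w", err)
	}
	if !ok {
		return fmt.Errorf("user %s has no access to group %s", userID, groupID)
	}
	return nil
}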
99-100: ⚡ Quick win
Use zero-time checks before serialization to respect omitempty intent.
e.ScheduledDate and e.CompletedDate are non-nullable time.Time values, so calling .String() is safe. However, zero time.Time values will serialize to "0001-01-01 00:00:00 +0000 UTC" instead of being omitted despite the omitempty JSON tag. Conditionally populate these fields only when non-zero, and consider using RFC3339 format for better interoperability with client applications.
💡 Suggested fix
 item := getMaintenanceScheduleItem{
 	ID:          e.ID,
 	ItemID:      e.ItemID,
 	ItemName:    e.ItemName,
 	Name:        e.Name,
 	Description: e.Description,
-	ScheduledDate: e.ScheduledDate.String(),
-	CompletedDate: e.CompletedDate.String(),
 	Cost:        e.Cost,
 }
+if !e.ScheduledDate.IsZero() {
+	item.ScheduledDate = e.ScheduledDate.Format(time.RFC3339)
+}
+if !e.CompletedDate.IsZero() {
+	item.CompletedDate = e.CompletedDate.Format(time.RFC3339)
+}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 99 - 100, The ScheduledDate and CompletedDate fields are being set unconditionally from e.ScheduledDate/e.CompletedDate which serializes zero times instead of respecting omitempty; update the code in get_maintenance_schedule.go where these fields are assigned (look for the block setting ScheduledDate: and CompletedDate:) to only populate those JSON fields when e.ScheduledDate.IsZero() / e.CompletedDate.IsZero() are false, and format the times using e.ScheduledDate.Format(time.RFC3339) and e.CompletedDate.Format(time.RFC3339) so clients receive RFC3339 timestamps and zero values are omitted.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@backend/go.mod`:
- Line 24: The go.mod entry pinning github.com/modelcontextprotocol/go-sdk
v1.5.0 contains unresolved HIGH severity advisories; update the dependency to a
patched release (replace the version string for
github.com/modelcontextprotocol/go-sdk in go.mod and run `go get`/`go mod tidy`
to lock the fixed version) or remove/replace the module if no patch exists and
defer merging this PR until a safe version is available; ensure jsonschema-go
remains at v0.4.2 or higher and re-run `go list -m -u all` / security scanning
to confirm advisories are cleared before merging.
In `@backend/internal/mcp/registry.go`:
- Around line 17-19: Add a nil-check in the Register function to avoid appending
nil registrars: ensure the function (Register) returns early if the passed
Registrar is nil, so registry only contains non-nil entries and registerAll
won’t panic when replaying registrations; reference the Registrar type, the
Register function, and the registry slice to locate where to insert the guard.
In `@backend/internal/mcp/server.go`:
- Line 35: The code returns a shared mcpsdk.Server instance (the server
variable) for every HTTP request, but mcpsdk.Server v1.5.0 has mutable fields
(e.g., pendingNotifications, resourceSubscriptions) that are not
concurrency-safe; replace the singleton usage by constructing a fresh
*mcpsdk.Server per request inside the handler factory (or otherwise
clone/initialize a new server instance for each request) or wrap all accesses to
the shared server state with a mutex protecting pendingNotifications and
resourceSubscriptions and any other mutable fields used by methods invoked from
the HTTP request; update the function that currently returns server to instead
create and return a request-scoped *mcpsdk.Server (or ensure all handler calls
acquire the mutex) so concurrent requests cannot mutate the same internal maps.
In `@backend/internal/mcp/tools/get_maintenance_schedule.go`:
- Around line 76-85: The current code fetches all entries via
Repos.MaintEntry.GetAllMaintenance and then truncates the returned entries slice
(entries[:limit]), which is inefficient and dangerous; update the
MaintenanceFilters type to include Limit and Offset (e.g., Limit *int, Offset
*int), modify GetAllMaintenance to accept and apply those filters at the query
level (use the query's Limit/Offset) and then call GetAllMaintenance passing the
limit (and offset if used) instead of fetching everything, and finally remove
the in-memory truncation of entries in the function that currently slices
entries by limit.
---
Nitpick comments:
In `@backend/app/api/routes.go`:
- Around line 235-251: Add an MCP-specific rate limiter and apply it to the MCP
routes: when a.conf.Mcp.Enabled is true and you construct
hbmcp.NewHandler(hbmcp.Deps{...}) create a middleware similar to
a.mwAuthRateLimit (e.g., a.mwMcpRateLimit) and include it in the
chain.ToHandlerFunc call that produces mcpHandlerFunc (in addition to userMW) so
the GET/POST/DELETE "/mcp" routes are rate-limited; ensure the new middleware
follows the same signature/behavior as a.mwAuthRateLimit and is configured for
expected MCP throughput.
- Around line 244-247: Wrap the external SDK HTTP handler calls in a
panic-recovery block so panics inside mcpHandler.ServeHTTP (and the OTEL
handler) are caught, logged, and translated into an HTTP 500 instead of crashing
the process; specifically, update the wrapper that creates mcpHandlerFunc (and
the analogous otel wrapper) to defer a recover() that logs the panic (using the
existing logger) and writes a 500 response, while still returning nil to the
chain so the middleware flow remains unchanged.
In `@backend/internal/mcp/server.go`:
- Around line 34-37: The MCP transport handler returned by
mcpsdk.NewStreamableHTTPHandler currently has no request body limit; wrap the
resulting http.Handler with http.MaxBytesHandler to cap request size (e.g.,
10<<20 for 10MB or a configurable constant) before returning it from the
function that builds the handler (the closure that returns the server passed
into mcpsdk.NewStreamableHTTPHandler should remain the same); ensure you wrap
the handler returned by mcpsdk.NewStreamableHTTPHandler (the value being
returned now) with http.MaxBytesHandler(handler, maxBytes) so POST bodies cannot
exceed the set limit.
In `@backend/internal/mcp/tools/get_maintenance_schedule.go`:
- Around line 18-22: Add pagination to getMaintenanceScheduleInput by
introducing an Offset (int) or Cursor (string) field alongside Limit in the
getMaintenanceScheduleInput struct (e.g., add Offset int
`json:"offset,omitempty"` or Cursor string `json:"cursor,omitempty"`). Update
the handler/logic that calls the repository method (the code that consumes
getMaintenanceScheduleInput and invokes the repository query) to pass the new
Offset/Cursor into the repository query parameters and enforce sane
defaults/limits (e.g., default limit 50, cap 200). Also update any JSON schema
tags for the new field and the repository method signature if needed so the
repository uses the offset/cursor when fetching results.
- Around line 51-54: The code calls mcp.ResolveGroup(ctx, in.GroupID) without
confirming that ResolveGroup enforces user authorization; update ResolveGroup
(or add an explicit authorization check immediately after the call) to verify
the authenticated user in ctx has access to the requested group ID before
returning the scoped context sctx. Specifically: ensure ResolveGroup performs
permission checks against the caller's identity/claims in ctx for the given
group (in.GroupID) and returns a non-nil error if unauthorized, or alternatively
call an authorization helper (e.g., AuthorizeGroupAccess(ctx, in.GroupID) or
check sctx.User/claims) inside getMaintenanceSchedule to deny access when the
user lacks permission. Ensure the error path returns an appropriate
authorization error instead of proceeding.
- Around line 99-100: The ScheduledDate and CompletedDate fields are being set
unconditionally from e.ScheduledDate/e.CompletedDate which serializes zero times
instead of respecting omitempty; update the code in get_maintenance_schedule.go
where these fields are assigned (look for the block setting ScheduledDate: and
CompletedDate:) to only populate those JSON fields when e.ScheduledDate.IsZero()
/ e.CompletedDate.IsZero() are false, and format the times using
e.ScheduledDate.Format(time.RFC3339) and e.CompletedDate.Format(time.RFC3339) so
clients receive RFC3339 timestamps and zero values are omitted.
In `@backend/internal/mcp/tools/search_items.go`:
- Around line 60-63: The handler currently sets pageSize from in.PageSize with
only a default fallback (pageSize and searchItemsDefaultPageSize); add an upper
bound constant (e.g., searchItemsMaxPageSize) and clamp pageSize after the
existing default logic so pageSize = max(min(pageSize, searchItemsMaxPageSize),
1). Update any related validation or comments and use the new
searchItemsMaxPageSize constant wherever pagination limits are documented or
enforced.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 44b8f1b5-ae91-44ca-858b-ca26f9574bf1
⛔ Files ignored due to path filters (1)
backend/go.sum is excluded by !**/*.sum
📒 Files selected for processing (17)
backend/app/api/routes.go
backend/go.mod
backend/internal/mcp/context.go
backend/internal/mcp/deps.go
backend/internal/mcp/registry.go
backend/internal/mcp/schema.go
backend/internal/mcp/server.go
backend/internal/mcp/tools/doc.go
backend/internal/mcp/tools/get_entity_path.go
backend/internal/mcp/tools/get_group_statistics.go
backend/internal/mcp/tools/get_item.go
backend/internal/mcp/tools/get_maintenance_schedule.go
backend/internal/mcp/tools/list_entity_types.go
backend/internal/mcp/tools/list_labels.go
backend/internal/mcp/tools/list_my_groups.go
backend/internal/mcp/tools/search_items.go
backend/internal/sys/config/conf.go
github.com/gorilla/schema v1.4.1
github.com/hay-kot/httpkit v0.0.11
github.com/jackc/pgx/v5 v5.9.2
github.com/modelcontextprotocol/go-sdk v1.5.0
🧩 Analysis chain
🌐 Web query:
`Check OSV/GitHub advisories for:
- github.com/modelcontextprotocol/go-sdk v1.5.0
- github.com/google/jsonschema-go v0.4.2
Return unresolved HIGH/CRITICAL findings and affected version ranges.`
💡 Result:
github.com/modelcontextprotocol/go-sdk v1.5.0: Unresolved HIGH/CRITICAL findings (as of 2026-05-10): - GO-2026-4569 (GHSA-wvj2-96wp-fq3f, CVE-2026-27896): Improper Handling of Case Sensitivity. HIGH severity. Affected: >=1.3.1 (no fixed version listed; review_status: UNREVIEWED). v1.5.0 affected since introduced through 1.3.1 and no explicit fix beyond.[1][2][3] Other HIGH advisories listed on GitHub security page, published Feb-Mar 2026 (e.g., GHSA-q382-vc8q-7jhj null Unicode, GHSA-89xv-2j6f-qhc8 Cross-Site Tool Execution <=1.4.0, GHSA-xw59-hvm2-8pj6 DNS Rebinding <=1.4.0), but GO-2026-4569 explicitly covers >=1.3.1 range including v1.5.0.[2][4][5] github.com/google/jsonschema-go v0.4.2: No HIGH/CRITICAL unresolved findings in OSV/GitHub advisories. No published security advisories on GitHub.[6][7] Note: Unrelated vulns in other jsonschema packages (e.g., npm json-schema).[8]
Citations:
- 1: https://osv.dev/vulnerability/GO-2026-4569
- 2: https://github.com/modelcontextprotocol/go-sdk/security
- 3: https://osv.dev/vulnerability/GO-2026-4770
- 4: https://osv.dev/vulnerability/GHSA-89xv-2j6f-qhc8
- 5: https://osv.dev/vulnerability/GHSA-xw59-hvm2-8pj6
- 6: https://github.com/google/jsonschema-go/security
- 7: https://pkg.go.dev/github.com/google/jsonschema-go
- 8: https://osv.dev/GHSA-896r-f27r-55mw
Do not merge with unresolved HIGH severity security advisories in go-sdk dependency.
The MCP dependency chain contains an unresolved HIGH severity advisory (GO-2026-4569/CVE-2026-27896: Improper Handling of Case Sensitivity) affecting go-sdk v1.5.0. Additional HIGH advisories (GHSA-q382-vc8q-7jhj, GHSA-89xv-2j6f-qhc8, GHSA-xw59-hvm2-8pj6) also affect this version with no published fixes available. jsonschema-go v0.4.2 is clear. Either upgrade go-sdk to a patched version or defer this PR pending security resolution.
Also applies to: 156-156
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/go.mod` at line 24, The go.mod entry pinning
github.com/modelcontextprotocol/go-sdk v1.5.0 contains unresolved HIGH severity
advisories; update the dependency to a patched release (replace the version
string for github.com/modelcontextprotocol/go-sdk in go.mod and run `go get`/`go
mod tidy` to lock the fixed version) or remove/replace the module if no patch
exists and defer merging this PR until a safe version is available; ensure
jsonschema-go remains at v0.4.2 or higher and re-run `go list -m -u all` /
security scanning to confirm advisories are cleared before merging.
func Register(r Registrar) {
	registry = append(registry, r)
}
Guard against nil registrars to prevent panic at registration replay.
If Register(nil) ever happens, Line 23 panics when registerAll runs. A defensive guard here is cheap and avoids brittle startup failures.
Proposed fix
func Register(r Registrar) {
+ if r == nil {
+ panic("mcp.Register: nil registrar")
+ }
registry = append(registry, r)
}
Also applies to: 22-23
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/internal/mcp/registry.go` around lines 17 - 19, Add a nil-check in
the Register function to avoid appending nil registrars: ensure the function
(Register) returns early if the passed Registrar is nil, so registry only
contains non-nil entries and registerAll won’t panic when replaying
registrations; reference the Registrar type, the Register function, and the
registry slice to locate where to insert the guard.
	registerAll(server, d)

	return mcpsdk.NewStreamableHTTPHandler(
		func(*http.Request) *mcpsdk.Server { return server },
🧩 Analysis chain
🌐 Web query:
In github.com/modelcontextprotocol/go-sdk/mcp v1.5.0, is *mcp.Server documented as safe for concurrent use across multiple simultaneous HTTP requests when reused as a singleton?
💡 Result:
No. In v1.5.0, the code/docs do not document mcp.Server as “safe for concurrent use” across multiple simultaneous HTTP requests when reused as a singleton. What the authoritative docs do say instead: - The Go SDK’s protocol documentation explicitly says MCP has no concurrency guarantees at the spec level, and the Go SDK only implements certain heuristics (e.g., async handling of tools/calls) rather than stating the underlying server object is concurrency-safe for reuse across concurrent HTTP requests. [1] What’s visible from the server implementation (indirectly relevant to the question): - mcp.NewServer builds a *Server with internal mutable fields like maps (e.g., pendingNotifications, resourceSubscriptions) that are not documented here as protected for concurrent access. [2] Practical implication for your scenario (singleton across concurrent HTTP requests): - Because there is no explicit “safe for concurrent use” guarantee for the singleton *mcp.Server object in v1.5.0, you should not assume it is safe. Instead, assume that per-request/per-session state must be handled by the session/transport layers and your handlers must be concurrency-safe themselves. The SDK doc indicates calls are handled asynchronously relative to each other, which increases the likelihood of shared-state issues if you rely on a singleton server object without synchronization. [1] If you want, share the specific HTTP entrypoint/transport you are using in v1.5.0 (e.g., streamable HTTP handler variant and how you store/route *mcp.Server), and I can check the exact transport/session code paths to determine where concurrency boundaries are enforced.
Citations:
- 1: https://github.com/modelcontextprotocol/go-sdk/blob/main/docs/protocol.md
- 2: https://raw.githubusercontent.com/modelcontextprotocol/go-sdk/main/mcp/server.go
🏁 Script executed:
cat -n backend/internal/mcp/server.go | head -60
Repository: sysadminsmedia/homebox
Length of output: 1495
The shared *mcpsdk.Server instance is unsafe for concurrent use in v1.5.0—implement per-request server instances or add synchronization.
The web search confirms that mcp.Server v1.5.0 lacks documented concurrency guarantees and contains mutable internal fields (e.g., pendingNotifications, resourceSubscriptions) that are not protected for concurrent access. Reusing a singleton server across simultaneous HTTP requests creates a data race risk—concurrent tool invocations or protocol state updates may corrupt shared maps or introduce inconsistent state.
Mitigation options:
- Create a new *mcpsdk.Server per HTTP request instead of reusing a singleton, or
- Add mutex synchronization to protect all shared state within tool handlers
Security note: Data races in protocol handlers could allow users to access each other's context or tool results, particularly in multi-tenant scenarios where requests carry different user/tenant information through context.
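A minimal sketch of the per-request option, reusing the names already visible in this PR's server.go (NewHandler, Deps, registerAll, ServerName, ServerVersion); whether per-request construction is the SDK's recommended pattern has not been verified here:

func NewHandler(d Deps) http.Handler {
	return mcpsdk.NewStreamableHTTPHandler(
		func(*http.Request) *mcpsdk.Server {
			// Build a fresh server per request so no mutable SDK state
			// (pendingNotifications, resourceSubscriptions, ...) is shared
			// across concurrent connections.
			srv := mcpsdk.NewServer(&mcpsdk.Implementation{
				Name:    ServerName,
				Version: ServerVersion,
			}, nil)
			registerAll(srv, d)
			return srv
		},
		nil,
	)
}

The trade-off is re-running tool registration on every request; if that proves too costly, the mutex option above is the alternative.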
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/internal/mcp/server.go` at line 35, The code returns a shared
mcpsdk.Server instance (the server variable) for every HTTP request, but
mcpsdk.Server v1.5.0 has mutable fields (e.g., pendingNotifications,
resourceSubscriptions) that are not concurrency-safe; replace the singleton
usage by constructing a fresh *mcpsdk.Server per request inside the handler
factory (or otherwise clone/initialize a new server instance for each request)
or wrap all accesses to the shared server state with a mutex protecting
pendingNotifications and resourceSubscriptions and any other mutable fields used
by methods invoked from the HTTP request; update the function that currently
returns server to instead create and return a request-scoped *mcpsdk.Server (or
ensure all handler calls acquire the mutex) so concurrent requests cannot mutate
the same internal maps.
entries, err := d.Repos.MaintEntry.GetAllMaintenance(ctx, sctx.GID, repo.MaintenanceFilters{
	Status: status,
})
if err != nil {
	return nil, getMaintenanceScheduleOutput{}, fmt.Errorf("get maintenance schedule: %w", err)
}

if len(entries) > limit {
	entries = entries[:limit]
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Check MaintenanceFilters struct to see if it supports limit/offset
# Find the MaintenanceFilters definition
ast-grep --pattern $'type MaintenanceFilters struct {
$$$
}'
# Also search for GetAllMaintenance signature to see what parameters it accepts
ast-grep --pattern $'func ($$$) GetAllMaintenance($$$) ($$$) {
$$$
}'
Repository: sysadminsmedia/homebox
Length of output: 2922
Fetch-then-truncate causes inefficient database queries.
The code fetches ALL maintenance entries from the database, then truncates to the limit in memory (lines 83-85). With large maintenance datasets, this wastes database and memory resources.
Pass limit to GetAllMaintenance so the database can apply LIMIT directly. Note: MaintenanceFilters currently only includes a Status field—you'll need to extend it with Limit (and ideally Offset) fields first before updating the query.
Security note: This fetch-all-then-truncate pattern can become a DoS vector if a group accumulates thousands of maintenance entries, since the query returns all matching records regardless of the intended limit.
⚡ Proposed approach (requires MaintenanceFilters update first)
- Extend MaintenanceFilters in backend/internal/data/repo/repo_maintenance.go:
type MaintenanceFilters struct {
	Status MaintenanceFilterStatus `json:"status" schema:"status"`
	Limit  *int                    `json:"limit" schema:"limit"`
	Offset *int                    `json:"offset" schema:"offset"`
}
- Update GetAllMaintenance to apply limits at the query level (use Ent's .Limit() and .Offset() on the query); see the sketch after this list.
- Remove the in-memory truncation at lines 83-85.
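A rough sketch of what the query-level version might look like. The receiver, Ent client/entity names, and the mapping helper are placeholders, not Homebox's actual repository code; only the .Limit()/.Offset()/.All() calls are standard Ent query methods:

func (r *MaintEntryRepository) GetAllMaintenance(ctx context.Context, gid uuid.UUID, f MaintenanceFilters) ([]MaintenanceEntryWithDetails, error) {
	q := r.db.MaintenanceEntry.Query() // hypothetical Ent entity/client names
	// ... existing group/status predicates stay unchanged ...
	if f.Limit != nil && *f.Limit > 0 {
		q = q.Limit(*f.Limit) // emitted as SQL LIMIT
	}
	if f.Offset != nil && *f.Offset > 0 {
		q = q.Offset(*f.Offset) // emitted as SQL OFFSET
	}
	rows, err := q.All(ctx)
	if err != nil {
		return nil, err
	}
	return mapMaintenanceEntries(rows), nil // hypothetical mapping helper
}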
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@backend/internal/mcp/tools/get_maintenance_schedule.go` around lines 76 - 85,
The current code fetches all entries via Repos.MaintEntry.GetAllMaintenance and
then truncates the returned entries slice (entries[:limit]), which is
inefficient and dangerous; update the MaintenanceFilters type to include Limit
and Offset (e.g., Limit *int, Offset *int), modify GetAllMaintenance to accept
and apply those filters at the query level (use the query's Limit/Offset) and
then call GetAllMaintenance passing the limit (and offset if used) instead of
fetching everything, and finally remove the in-memory truncation of entries in
the function that currently slices entries by limit.
What type of PR is this?
What this PR does / why we need it:
Adds a basic read-only MCP server to Homebox so AI agents can use it.
Summary by CodeRabbit
Release Notes