Conversation
Container images for this PR have been built successfully! Built from commit 6a43d22
This pull request has merge conflicts. Please resolve the conflicts so the PR can stay up-to-date and reviewed.
```diff
@@ -80,6 +103,7 @@ func (s *SystemService) PruneAll(ctx context.Context, req system.PruneAllRequest
 	if req.Images != nil && req.Images.Mode != system.PruneImageModeNone {
 		g.Go(func() error {
 			s.appendSystemPruneActivityMessageInternal(groupCtx, activityID, "Pruning images", 40)
 			slog.InfoContext(groupCtx, "Pruning images...", "mode", req.Images.Mode, "until", req.Images.Until)
 			localResult := &system.PruneAllResult{}
```
Concurrent prune goroutines not guarded
`StartPruneAll` spawns an unguarded background goroutine on every call. If the endpoint is hit twice in quick succession, two goroutines will run simultaneously against the same Docker daemon, each creating its own activity record and competing for the same Docker resources (containers, images, volumes). The old synchronous `PruneAll` path naturally serialised these operations through the HTTP request lifecycle. The new background path has no such protection, so a rapid double-click from the frontend is enough to trigger two concurrent prune operations and confuse the activity feed with duplicate in-progress records.
Path: backend/internal/services/system_service.go, lines 94-108.
Checklist
- Based on the main branch

What This PR Implements
Fixes: #2415

Changes Made

Testing Done
- ./scripts/development/dev.sh start
- just lint all
- just test backend

AI Tool Used (if applicable)
AI Tool:
Assistance Level:
What AI helped with:
- I reviewed and edited all AI-generated output
- I ran all required tests and manually verified changes

Additional Context
Greptile Summary
This PR introduces a full "background activity" system for Arcane — a new `activities` + `activity_messages` table pair, an `ActivityService` with SSE fan-out for real-time updates, and a frontend activity-center UI. Activity tracking is wired into image-update checks, system prune, vulnerability scans, auto-update, project operations, and container bulk actions, with history pruned by the existing event-cleanup scheduler job.

- `ActivityService` with subscriber channels, snapshot/heartbeat SSE streaming, and per-environment history pruning integrated into the event cleanup job.
- All existing services now accept an `*ActivityService` dependency and emit start/append/complete lifecycle events.

Confidence Score: 4/5
Safe to merge with the concurrent-prune issue addressed; all other changes are additive and well-guarded. `StartPruneAll` launches an unguarded background goroutine on every HTTP call. Back-to-back requests from the frontend will start multiple simultaneous Docker prune goroutines and create duplicate in-progress activity records. Everything else — the SSE streaming, activity CRUD, history pruning, and migration — looks correct and well-tested.

backend/internal/services/system_service.go — the `StartPruneAll` method needs a concurrency guard before merging.
Reviews (4): Last reviewed commit: "feat: full background activity tasks"