fix(daemon): make daemon start idempotent #1334
base: main
Conversation
Add resolveExternalDependencies() function that queries raw dependency records, finds external refs (external:project:id format), and resolves them via routing to fetch actual issue metadata from target rigs. This fixes cross-rig convoy tracking where bd dep list would return empty results for issues stored as external references. Fixes: steveyegge#1332 Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When `bd daemon start` is called and a compatible daemon is already running, exit successfully (code 0) instead of failing (code 1). This fixes race conditions during `gt reload` where something auto-starts the bd daemon between the killall and start phases, causing "already running" errors. The fix is simple: if daemon is running AND healthy AND version-compatible, print a message and exit 0. Previously this was treated as an error. Fixes: steveyegge#1331 Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Thanks for the PR! One issue found during review.
What's Good
- ✅ The daemon idempotency change is correct and minimal
- ✅ Good verbose logging in resolveExternalDependencies for debugging
- ✅ Graceful failure - returns nil on errors so local deps still work
- ✅ Proper resource cleanup (closes DB connections)
Hopefully not too many notifications. I was experimenting with gh cli and making mistakes.
```go
// Open storage for the target rig
targetDBPath := filepath.Join(targetBeadsDir, "beads.db")
targetStore, err := sqlite.New(ctx, targetDBPath)
```
Uses wrong database initialization pattern
The PR uses raw `sqlite.New()`, but other cross-rig commands use `factory.NewFromConfig()`, which respects backend configuration:

```go
// Current (inconsistent):
targetDBPath := filepath.Join(targetBeadsDir, "beads.db")
targetStore, err := sqlite.New(ctx, targetDBPath)

// Established pattern (from create.go:947, move.go, refile.go):
targetStore, err := factory.NewFromConfig(ctx, targetBeadsDir)
```
Using factory.NewFromConfig ensures correct backend selection and consistent behavior with other cross-rig operations.
```go
var result []*types.IssueWithDependencyMetadata
beadsDir := getBeadsDir()
// ...
for _, dep := range deps {
```
Performance consideration in resolveExternalDependencies
The function opens a new database connection for each external dependency. If multiple deps reference the same project, this is inefficient:
```go
// Current: opens/closes DB per dependency
for _, dep := range deps {
	targetStore, err := sqlite.New(ctx, targetDBPath) // opens here
	issue, err := targetStore.GetIssue(ctx, targetID)
	_ = targetStore.Close() // closes here
}
```

Consider grouping by project first, then batch-fetching:
```go
// Group deps by project, open each DB once
depsByProject := groupByProject(deps)
for project, projectDeps := range depsByProject {
	targetStore := openOnce(project)
	for _, dep := range projectDeps {
		// fetch issues
	}
	// Close per project; defer inside the loop would hold every
	// connection open until the function returns.
	_ = targetStore.Close()
}
```
Summary

Make `bd daemon start` idempotent: when a compatible daemon is already running, exit successfully (code 0) instead of failing (code 1).

Problem

During `gt reload`, there's a race condition where something auto-starts the bd daemon between the killall and start phases. This causes "already running" errors even though the reload should succeed.

The root cause is that `bd` auto-starts daemons when running any command. A hook, background task, or any `bd` command can trigger this.

Solution

If the daemon is already running AND healthy AND version-compatible, print a message and exit 0 instead of exit 1. This makes the operation idempotent.

Test plan

- `bd daemon start` returns exit 0 when daemon already running
- `gt reload` completes successfully without race condition errors

Fixes: #1331
🤖 Generated with Claude Code