feat(devservices): Add healthcheck wait condition and parallelize starting of containers #178
Conversation
I think relay sometimes doesn't pass the healthcheck; I may need to increase the retries there.
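(For context, a minimal sketch of the kind of retry loop this refers to, assuming docker healthchecks are polled via `docker inspect`; the helper name and the retry/interval values below are illustrative, not the actual devservices API.)

```python
import subprocess
import time

# Illustrative values; the real retry count and interval in devservices may differ.
HEALTHCHECK_RETRIES = 10
HEALTHCHECK_INTERVAL_SECONDS = 3


def wait_until_healthy(container_name: str) -> None:
    """Poll `docker inspect` until the container reports a healthy status."""
    for _ in range(HEALTHCHECK_RETRIES):
        result = subprocess.run(
            ["docker", "inspect", "--format", "{{.State.Health.Status}}", container_name],
            capture_output=True,
            text=True,
        )
        if result.stdout.strip() == "healthy":
            return
        time.sleep(HEALTHCHECK_INTERVAL_SECONDS)
    raise RuntimeError(f"{container_name} failed to become healthy")
```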
Codecov Report

All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.

@@            Coverage Diff             @@
##             main     #178      +/-   ##
==========================================
+ Coverage   92.56%   92.85%   +0.29%
==========================================
  Files          22       22
  Lines        1412     1470      +58
==========================================
+ Hits         1307     1365      +58
  Misses        105      105

☔ View full report in Codecov by Sentry.
imo --wait should be default
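(For reference, `docker compose up --wait` makes compose itself block until started containers are running or healthy. A hedged sketch of defaulting that flag to on; the wrapper name and signature are assumptions, not the actual devservices command.)

```python
import subprocess


def compose_up(project_name: str, wait: bool = True) -> None:
    """Start the project's services, blocking on container health when wait=True.

    Illustrative wrapper only: with --wait, docker compose does not return
    until containers are running/healthy, so callers don't need their own poll.
    """
    cmd = ["docker", "compose", "-p", project_name, "up", "-d"]
    if wait:
        cmd.append("--wait")
    subprocess.run(cmd, check=True)
```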
Can we also define a redis healthcheck in sentry's devservices config? Otherwise we could run into "Redis is loading the dataset in memory" during migrations in sentry's sync.py.
I agree
It is already defined:
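(The linked definition isn't reproduced in this thread. For illustration only, a readiness probe along these lines, requiring a PONG reply, is what guards against the LOADING state; the container name and probe shape are assumptions, not sentry's actual config.)

```python
import subprocess


def redis_is_ready(container_name: str) -> bool:
    """Return True only once redis answers PING with PONG.

    While redis is still loading its dataset, commands typically get a
    "LOADING Redis is loading the dataset in memory" reply instead, so
    requiring PONG avoids running migrations against a half-loaded instance.
    Illustrative probe; sentry's actual healthcheck may differ.
    """
    result = subprocess.run(
        ["docker", "exec", container_name, "redis-cli", "ping"],
        capture_output=True,
        text=True,
    )
    return "PONG" in result.stdout
```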
Decided to use in-house logic for healthchecks because it seems like …
devservices/utils/docker.py (outdated)

@@ -30,13 +87,14 @@ def get_matching_containers(label: str) -> list[str]:
    [
        "docker",
        "ps",
        "-q",
        "--format",
Docker container names are unique, so this will be safe. Decided to make this change so that we could print a better status message if containers fail healthchecks.
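(For illustration, switching from `-q` (IDs) to `--format` lets the command return container names directly. A sketch under that assumption; the real get_matching_containers in devservices is not reproduced exactly here.)

```python
import subprocess


def get_matching_container_names(label: str) -> list[str]:
    """List the names of running containers carrying a given label.

    Sketch only: `--format {{.Names}}` yields names rather than IDs, which
    reads better in a "container X failed its healthcheck" status message.
    """
    result = subprocess.run(
        ["docker", "ps", "--filter", f"label={label}", "--format", "{{.Names}}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.splitlines()
```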
Force-pushed from d4f7b3d to 9814382.
Looks good! Left a few comments, but otherwise we should be good to go.
container_names = get_container_names_for_project(
    cmd.project_name, cmd.config_path
)
containers_to_check.extend(container_names)
Nit: since we don't use container_names anywhere else, why don't we combine these two lines?
I actually opted to do this here, since I think it's more readable than:

containers_to_check.extend(get_container_names_for_project(cmd.project_name, cmd.config_path))
This starts containers in parallel and makes sure devservices waits until containers are healthy before succeeding.
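(As a rough sketch of that behavior, parallel startup plus a blocking healthcheck wait, under stated assumptions: the helper names and the use of `docker start`/`docker inspect` below are illustrative, not the actual devservices internals.)

```python
import concurrent.futures
import subprocess
import time


def _start(name: str) -> None:
    # Hypothetical helper; the real implementation goes through docker compose.
    subprocess.run(["docker", "start", name], check=True)


def _is_healthy(name: str) -> bool:
    # Read the container's health status from docker inspect.
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True,
        text=True,
    )
    return result.stdout.strip() == "healthy"


def start_all_and_wait(names: list[str], retries: int = 10, delay: float = 3.0) -> None:
    """Start containers in parallel, then poll until each is healthy or retries run out."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for future in [pool.submit(_start, n) for n in names]:
            future.result()  # surface any startup error

    unhealthy = set(names)
    for _ in range(retries):
        unhealthy = {n for n in unhealthy if not _is_healthy(n)}
        if not unhealthy:
            return
        time.sleep(delay)
    raise RuntimeError(f"Containers failed healthchecks: {', '.join(sorted(unhealthy))}")
```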