Remove internal bazel build and Circle CI #1145
Conversation
@@ -941,31 +941,6 @@ def fork_func(param: int) -> int:  # type: ignore
    assert result == 42

def test_subprocess_no_cleanup(
This test was:
(a) not actually testing what it claimed to be testing
(b) claiming to test behavior that I don't think we want to implement for now
(c) causing pytest itself to fork, so that subsequent tests were run twice
Thus I'm removing it.
Explanation:
(a) it claims to be confirming that if the body of a Sematic func forks and doesn't clean up one of the processes by an explicit exit, the Sematic post-processing will only occur for one of the processes. However, the way that it attempts to test this is simply by confirming that the return value from the func is what's expected. But as written, that return value will be the same for both forks, regardless of whether Sematic post-processing occurs. Thus it passes even if the behavior it wants to guard against is actually occurring.
(b) enforcing the behavior this is trying to guard against would require some fairly fiddly and race-condition-prone logic that I don't think we want to address at this moment. It should also be a fairly rare edge-case anyway, and generally will be innocuous even if it occurs.
(c) since neither fork is cleaned and the fork is from the interpreter running pytest, pytest itself continues to run in both forks. This wasn't much of an issue with bazel as the test runner because it ran each test file in its own sandboxed interpreter. So the only duplication was at the test file level, not at the whole test-suite level.
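Point (a) can be illustrated with a minimal sketch (hypothetical names, not the removed test's actual code; assumes a POSIX system where `os.fork` is available). Because the forked func returns the same value in both processes, an assertion on the return value passes regardless of whether any per-process post-processing happened once or twice:

```python
import os


def fork_func(param: int) -> int:
    # Hypothetical stand-in for the removed test's func body: it forks,
    # and since neither process exits explicitly, BOTH return from here.
    os.fork()
    return param


read_fd, write_fd = os.pipe()
parent_pid = os.getpid()
result = fork_func(42)
if os.getpid() != parent_pid:
    # Child: report its result and exit immediately so it doesn't keep
    # executing the rest of the script (under pytest, a child that falls
    # through keeps running the rest of the test suite -- point (c)).
    os.write(write_fd, str(result).encode())
    os._exit(0)
os.close(write_fd)
os.waitpid(0, 0)
child_result = int(os.read(read_fd, 16).decode())
# Parent and child computed identical values, so asserting on the
# return value cannot distinguish one post-processing pass from two.
assert result == child_result == 42
```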
Fantastic 🔪🔪🔪
Thank you!
Thank you @augray
Now that we've migrated to a new tooling stack based on uv/GitHub Actions, we can drop bazel and CircleCI. This PR removes the pieces that are no longer essential.
With regard to bazel, there are two factors to consider:
(1) Support for bazel to manage development and maintenance of Sematic itself
(2) Making it possible and convenient for Sematic to run pipelines for python code & images produced with bazel
This PR drops (1) but not (2). However, the bazel support for the examples was also dropped; the intent is to point to https://github.com/sematic-ai/example_bazel as an example of how one can create a pipeline using Sematic + bazel.
With regard to bazel integration for (2), there are a number of accommodations in the code, the most complex being support for producing images capable of being used as Ray head and worker nodes along with the pipeline. To ensure that these changes did not break that, I modified our example_bazel repo to use a small Ray cluster and ran it against the version of our bazel workspace from this branch. The branch used for this testing is here. I confirmed that the pipeline ran, spun up an ephemeral Ray cluster, and used it before exiting.