Fix detectron2 installation in docker files
#41975
Conversation
Pull Request Overview
This PR adds the --no-build-isolation flag to pip install commands for detectron2 in Docker images. This flag prevents pip from creating an isolated build environment, which can help avoid build issues with detectron2's dependencies.
- Added the `--no-build-isolation` flag to the detectron2 installation commands in two Dockerfiles
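The change amounts to adding the flag to the existing `pip install` line in each Dockerfile. A minimal sketch of what such a line looks like (the exact repository URL and surrounding `RUN` command are illustrative, not copied from the PR):

```dockerfile
# Illustrative sketch, not the exact line from the diff:
# build detectron2 against the torch already installed in the image,
# skipping pip's isolated build environment
RUN python3 -m pip install --no-build-isolation \
    'git+https://github.com/facebookresearch/detectron2.git'
```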
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| docker/transformers-doc-builder/Dockerfile | Added `--no-build-isolation` flag to the pip install command for detectron2 |
| docker/transformers-all-latest-gpu/Dockerfile | Added `--no-build-isolation` flag to the conditional pip install command for detectron2 |

@copilot Please let me know your opinion on the changes
* detectron2 - part 1
* detectron2 - part 2

Co-authored-by: ydshieh <[email protected]>

Related commits merged alongside this change:

* make test forward and backward more robust
* refactor compile part of test tensor parallel
* linting
* pass rank around instead of calling it over and over
* Run slow v2 (#41914)
* Fix `detectron2` installation in docker files (#41975)
* Fix `autoawq[kernels]` installation in quantization docker file (#41978)
* add support for saving encoder only so any parakeet model can be loaded for inference (#41969), using `convolution_bias` in the conversion script

Signed-off-by: nithinraok <[email protected]>
Co-authored-by: Yih-Dar <[email protected]>
Co-authored-by: ydshieh <[email protected]>
Co-authored-by: Nithin Rao <[email protected]>
Co-authored-by: Eustache Le Bihan <[email protected]>
Co-authored-by: eustlb <[email protected]>
What does this PR do?
See the issue reported in facebookresearch/detectron2#5495
`--no-build-isolation` fixes the issue. Merge directly.
The effect is demonstrated in https://github.com/huggingface/transformers/actions/runs/19012508548
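For context on why the flag helps: with build isolation enabled (the pip default), the build runs in a clean environment containing only the build requirements a package declares, so a package whose `setup.py` imports a dependency from the surrounding environment at build time may fail there. The toy model below sketches this distinction; `can_build` and its arguments are hypothetical illustrations, not pip or detectron2 APIs:

```python
def can_build(host_packages, declared_build_requires, isolated):
    """Toy model of build-time dependency visibility under pip.

    With isolation, the build only sees explicitly declared build
    requirements; without it, the build also sees the host environment.
    """
    needed = "torch"  # example of a package imported at build time
    visible = set(declared_build_requires)
    if not isolated:
        visible |= set(host_packages)
    return needed in visible

# Host has torch installed, but the build requirements don't declare it:
print(can_build({"torch", "pip"}, set(), isolated=True))   # isolated build cannot see host torch
print(can_build({"torch", "pip"}, set(), isolated=False))  # non-isolated build can
```

This is only a sketch of the mechanism, not a claim about the exact failure in facebookresearch/detectron2#5495; the linked issue and CI run above show the concrete behavior.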