Tegra, deactivation, CUDA, autograd #491
Co-authored-by: Michał Górny <mgorny@gentoo.org>
pytorch itself already depends on pybind11 at runtime
…6.02.23.21.20.04
Other tools:
- conda-build 26.1.0
- rattler-build 0.57.2
- rattler-build-conda-compat 1.4.11
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/22352622138. Examine the logs at this URL for more detail.

Windows is not all that important in this PR; will need to fix conda-forge/.cirun#150

Le sigh.
mgorny left a comment
Code-wise, the changes look good to me.
That doesn't look good, though.

Should be fixable: conda-forge/.cirun#166 (but the new images still need some work)
So I fixed the path-length issue, but the problem is that this only applies to the "new" Windows VM images, which are currently not usable due to the minor oversight of not containing any compilers (details). However, this PR does not really touch Windows-related things, so I'm fine to merge it with Windows being "broken". Hopefully we'll get the new images up and running soon; in a perfect world, this will also solve the upload issues here (see here).
WFM — though we're going to need working Windows images for the RCs. |
In related news: water is wet. 🙃 We have multiple alternatives:
Sidenote: cirun is down completely at the moment; looks like you jinxed it. ;-)
Should we also pick #459 (comment)?
Good point, I had forgotten about this. Since this is a trivial patch, I won't push it here as a separate commit, but will include it in the merge commit directly.
Regarding the Windows file lengths: I just checked, and with the file name itself we're actually a fair bit away from the Windows maximum of 260 characters for the full path, but the checkout somehow doubles the feedstock name in the containing folder, which brings us over the edge:

```python
>>> q = ".ci_support/linux_aarch64_c_compiler_version13c_stdlib_version2.17channel_targetsconda-forge_maincuda_compiler_version12.9cxx_compiler_version13github_actions_labelscirun-openstack-gpu-2xlargeis_rcFalse.yaml"
>>> len(q)
207
>>> len("C:/Users/runnerx/_work/pytorch-cpu-feedstock/pytorch-cpu-feedstock/")
67
```
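Putting those two numbers together as a self-contained check (a minimal sketch using the base path and config filename quoted above; `MAX_PATH = 260` is the classic Windows full-path limit):

```python
# Check whether the checkout base path (with the doubled feedstock name)
# plus the longest .ci_support config filename exceeds Windows' MAX_PATH.
MAX_PATH = 260

base = "C:/Users/runnerx/_work/pytorch-cpu-feedstock/pytorch-cpu-feedstock/"
config = (
    ".ci_support/linux_aarch64_c_compiler_version13"
    "c_stdlib_version2.17channel_targetsconda-forge_main"
    "cuda_compiler_version12.9cxx_compiler_version13"
    "github_actions_labelscirun-openstack-gpu-2xlarge"
    "is_rcFalse.yaml"
)

total = len(base) + len(config)
print(len(base), len(config), total, total > MAX_PATH)
# 67 207 274 True -> 14 characters over the limit
```

So the config filename alone is fine; it's the doubled `pytorch-cpu-feedstock/` directory that tips the full path over 260.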
Sigh. Just after I spent my afternoon fixing the windows stuff so we can get the autograd fixes out, something in the Rube Goldberg machine that is our infrastructure cancelled the remaining two CUDA builds. However, both had already passed the build stage and were running the tests (successfully, until the interrupt). So all in all I think we can merge here.
Thanks a lot @h-vetinari! FYI @Tobias-Fischer @ruben-arts, once the builds are up it would be great to test them on a Jetson Orin.
For completeness, here are the two commits that were merged as "part" of this PR, but won't show up in the diff here due to the way git and the GH UI work:
You're lucky, the tegra build got drawn into the first round of jobs that got an agent. Based on the previous build (and assuming no other hiccups), it should be available in 11-12h.
As a side quest, I tried to fix the Windows upload issues here with an idea taken from magma (credit to @carterbox, this hadn't occurred to me before 🤦). Unfortunately, 80d18ca did not move the needle substantially; there was an improvement (524MB -> 482MB), but not enough to fix the upload issues on Windows. So here's the manual upload of win+CUDA12.8:

```shell
$ gh run download 22476589969 --repo conda-forge/pytorch-cpu-feedstock --name conda_artifacts_22476589969_win_64_channel_targetsconda-forge_maincu_hca575dce
$ unzip pytorch-cpu-feedstock_conda_artifacts_.zip
$ cd bld/win-64 && rm current_repodata.json index.html repodata*
$ ls
libtorch-2.10.0-cuda128_mkl_h0dfedc6_303.conda        pytorch-gpu-2.10.0-cuda128_mkl_hc88b545_303.conda
pytorch-2.10.0-cuda128_mkl_py310_hb45e230_303.conda   pytorch-tests-2.10.0-cuda128_mkl_py310_hf0eca92_303.conda
pytorch-2.10.0-cuda128_mkl_py311_h1fa9b41_303.conda   pytorch-tests-2.10.0-cuda128_mkl_py311_hc85c64c_303.conda
pytorch-2.10.0-cuda128_mkl_py312_hf522e72_303.conda   pytorch-tests-2.10.0-cuda128_mkl_py312_hb3d0777_303.conda
pytorch-2.10.0-cuda128_mkl_py313_hef97583_303.conda   pytorch-tests-2.10.0-cuda128_mkl_py313_hd85d54a_303.conda
pytorch-2.10.0-cuda128_mkl_py314_h26fed52_303.conda   pytorch-tests-2.10.0-cuda128_mkl_py314_hfe9566a_303.conda
$ ls | xargs anaconda upload
$ DELEGATE=h-vetinari
PACKAGE_VERSION=2.10.0
for package in libtorch pytorch pytorch-gpu pytorch-tests; do
    anaconda copy --from-label main --to-label main --to-owner conda-forge ${DELEGATE}/${package}/${PACKAGE_VERSION}
done
```
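Since the upload failures are size-related, a pre-check of the artifacts before invoking `anaconda upload` can save a round trip. This is a minimal sketch; the 500 MB `LIMIT` is a hypothetical illustration, not the actual server-side threshold, which isn't stated in this thread:

```python
import os

# Hypothetical upload-size threshold (illustration only).
LIMIT = 500 * 1024 * 1024  # 500 MB

def oversized(directory, limit=LIMIT):
    """Return the names of .conda files in `directory` larger than `limit` bytes."""
    return [
        name
        for name in sorted(os.listdir(directory))
        if name.endswith(".conda")
        and os.path.getsize(os.path.join(directory, name)) > limit
    ]

if __name__ == "__main__":
    too_big = oversized("bld/win-64")
    if too_big:
        print("would likely fail to upload:", *too_big, sep="\n  ")
```

Anything this flags would need the manual-copy route (or further size reductions like the magma trick) rather than the normal CI upload.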
Cc @AdamDHines, let's try.
Picks up #485 & #489, as well as the suggested fixes for #487 & #459
Fixes #487
Fixes #459
Closes #489
Closes #485