remove dependency on cugraph-ops #99

Merged: 22 commits merged from the rm-cugraph-ops branch into rapidsai:branch-25.02 on Jan 17, 2025

Conversation

tingyu66
Member

Addresses #81

@tingyu66 tingyu66 requested review from a team as code owners December 19, 2024 03:59
@tingyu66 tingyu66 marked this pull request as draft December 19, 2024 04:00

copy-pr-bot bot commented Dec 19, 2024

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@tingyu66
Member Author

/ok to test

@tingyu66 tingyu66 added the breaking (Introduces a breaking change) and improvement (Improves an existing functionality) labels on Dec 19, 2024
dependencies.yaml (excerpt under review):

```yaml
packages:
- pytorch-cuda=12.4
- matrix: {cuda: "11.8"}
- pytorch-gpu>=2.3=*cuda120*
```
Contributor
@bdice bdice Dec 19, 2024

This is already using conda-forge, I think? pytorch-gpu is a conda-forge package, not a pytorch channel package. Also, the latest conda-forge builds are built with CUDA 12.6. CUDA 12.0 is no longer used to build.

Contributor

For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with cuda120) for now. We will hopefully be able to relax this in the future.

Member Author

Yes, this PR switches to conda-forge::pytorch-gpu since the pytorch channel is being discontinued.

> Also, the latest conda-forge builds are built with CUDA 12.6. CUDA 12.0 is no longer used to build.

> For compatibility reasons we may want to stick to older builds of pytorch-gpu (built with cuda120) for now. We will hopefully be able to relax this in the future.

Oh, I had not noticed that the most recent build (_306) is only against 12.6. I agree with keeping 12.0 for better backward compatibility. However, the CUDA 11 build seems to be missing. Do we have details on their build matrix?

Contributor

CUDA 11 builds were dropped recently. You may need an older version for CUDA 11 compatibility. I also saw this while working on rapidsai/cudf#17475. `mamba search -c conda-forge "pytorch=*=cuda118*"` indicates that the latest version with CUDA 11 support is 2.5.1 build 303, while the latest build overall is 2.5.1 build 306.

Contributor

For completeness, the latest CUDA 12.0 build was also 2.5.1 build 303.

Member Author

Got it, thanks. It shouldn't be a dealbreaker unless another test component ends up requiring a newer version of torch on CUDA 11 down the line.

Contributor
@bdice bdice Jan 6, 2025

pytorch-gpu requires __cuda, and is not installable on systems without a CUDA driver. This makes it impossible to resolve the conda environment needed for devcontainers jobs in CI, which are CPU-only.

Note: many CUDA packages, including RAPIDS, are explicitly designed not to have __cuda as a run requirement, because it makes it impossible to install on a CPU node before using that environment on another system with a GPU.

It looks like if we just use pytorch instead of pytorch-gpu, we still get GPU builds:

```shell
# CUDA 11 driver present:
CONDA_OVERRIDE_CUDA="11.8" conda create -n test --dry-run pytorch
# shows: pytorch  2.5.1  cuda118_py313h40cdc2d_303  conda-forge

# CUDA 12 driver present:
CONDA_OVERRIDE_CUDA="12.5" conda create -n test --dry-run pytorch
# shows: pytorch  2.5.1  cuda126_py313hae2543e_306  conda-forge

# No CUDA driver present:
CONDA_OVERRIDE_CUDA="" mamba create -n test --dry-run pytorch
# shows: pytorch  2.5.1  cpu_mkl_py313_h90df46e_108  conda-forge
```

This should be sufficient. Let's try using just pytorch instead of pytorch-gpu with specific CUDA build selectors.

Contributor

There are two benefits here, if my proposal above works:

  1. The devcontainers CI job would get CPU-only builds, which should still be fine for building.
  2. We don't need to specify CUDA versions, so this dependency doesn't have to be "specific" to CUDA 11/12.

Member

I agree, let's try with "pytorch" instead of "pytorch-gpu".

That opens up the risk that, because of some conflict, the solver chooses a CPU-only build, but hopefully cugraph-pyg can detect that with torch.cuda.is_available() or similar and raise an informative error saying something like "if using conda, try 'conda install cugraph-pyg pytorch-gpu'".
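A minimal sketch of such a check, assuming it would run somewhere on cugraph-pyg's import path (the placement and the message text are illustrative, not what the package actually ships):

```python
import torch

# Illustrative guard: fail fast with an actionable message if the conda
# solver ended up installing a CPU-only PyTorch build.
if not torch.cuda.is_available():
    raise RuntimeError(
        "cugraph-pyg requires a CUDA-enabled PyTorch build, but no CUDA "
        "device was detected. If using conda, try "
        "'conda install cugraph-pyg pytorch-gpu'."
    )
```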

> We don't need to specify CUDA versions, so this dependency doesn't have to be "specific" to CUDA 11/12

I looked into this today... we shouldn't have needed to specify CUDA versions in build strings for pytorch-gpu anyway, as long as we're pinning the cuda-version package somewhere (for example, in the run: dependencies of cugraph).

Looks like pytorch-gpu is pinned with == to a specific pytorch version.

(screenshot: the conda-forge pytorch-gpu recipe showing the exact == pin on pytorch)

And the pytorch CUDA builds all have run: dependencies on cuda-version.

(screenshot: pytorch CUDA build metadata listing cuda-version among its run dependencies)

So here in cugraph-pyg, just having cuda-version as a run: dependency would be enough to ensure a compatible pytorch-gpu / pytorch is pulled.

Contributor

@jameslamb These simplifications to drop build string info are only possible now with conda-forge, iirc. I believe more complexity was required when we used the pytorch channel, and we probably just carried that over when switching to conda-forge.


copy-pr-bot bot commented Dec 22, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@tingyu66
Member Author

/ok to test

@jameslamb
Member

/ok to test

@jameslamb jameslamb self-requested a review January 6, 2025 20:15
Member
@jameslamb jameslamb left a comment

Hey @tingyu66 , I'd like to help keep this moving forward. Left some questions for your consideration.

dependencies.yaml (review thread resolved)
python/cugraph-pyg/cugraph_pyg/nn/conv/__init__.py (review thread outdated, resolved)
@jameslamb
Member

/ok to test

@tingyu66
Member Author

tingyu66 commented Jan 6, 2025

/ok to test

@tingyu66
Member Author

tingyu66 commented Jan 7, 2025

/ok to test

@tingyu66 tingyu66 marked this pull request as ready for review January 7, 2025 22:36
@jameslamb
Member

/ok to test

@jameslamb jameslamb self-requested a review January 16, 2025 20:24
Member
@jameslamb jameslamb left a comment

These changes look good to me as-is.

However, it looks like this would still leave pylibwholegraph with a hard runtime dependency on pylibcugraphops.

I pulled this branch today and looked around like this:

```shell
git grep -E -i 'cugraph.*ops'
```

```python
from pylibcugraphops.pytorch.operators import mha_gat_n2n as GATConvAgg
from pylibcugraphops.pytorch import SampledCSC

from pylibcugraphops.pytorch.operators import agg_concat_n2n as SAGEConvAgg
from pylibcugraphops.pytorch import SampledCSC
```

Can that be addressed in this PR? Without removing that, we can't stop building and shipping pylibcugraphops packages.

And there are other places (like cugraph-pyg tests decorated with @pytest.mark.cugraph_ops) that @bdice mentioned here #99 (review) ... can all of those also be removed?
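A hard runtime dependency here means the pylibcugraphops imports above execute unconditionally at import time. A generic way to soften that is an optional-import guard, sketched below with a symbol name taken from the grep output; this is only an illustration, not a claim about how the PR ultimately resolves it:

```python
# Illustrative optional-import guard; the PR may instead delete these
# code paths entirely.
try:
    from pylibcugraphops.pytorch.operators import mha_gat_n2n as GATConvAgg
except ImportError:  # pylibcugraphops is not installed
    GATConvAgg = None


def _require_cugraph_ops():
    # Hypothetical helper: call before any code path that needs cugraph-ops.
    if GATConvAgg is None:
        raise RuntimeError(
            "This code path requires pylibcugraphops, which is not installed."
        )
```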

@tingyu66
Member Author

> Can that be addressed in this PR? Without removing that, we can't stop building and shipping pylibcugraphops packages.
>
> And there are other places (like cugraph-pyg tests decorated with @pytest.mark.cugraph_ops) ... can all of those also be removed?

I will remove the usage of the pytest decorator. I didn't realize that pylibwholegraph.torch.gnn_model also includes the cugraph-ops code path. I'll address it once I’m back at my keyboard today.

@jameslamb
Member

jameslamb commented Jan 16, 2025

Thank you! Sorry for not noting it earlier. I recommend you look at everything matched by this:

`git grep -E -i 'cugraph.*ops'`

And see if it can be removed.

Also, update this branch to the latest branch-25.02 to pull in Alex's testing fixes from #82.

@bdice
Contributor

bdice commented Jan 16, 2025

CI is currently passing. This PR is already pretty large. If this is in a relatively good state, albeit partially incomplete, perhaps we could merge this and continue the remaining work in a follow-up PR.

@tingyu66 tingyu66 requested review from a team as code owners January 17, 2025 03:32
@tingyu66
Member Author

/ok to test

@tingyu66
Member Author

> CI is currently passing. This PR is already pretty large. If this is in a relatively good state, albeit partially incomplete, perhaps we could merge this and continue the remaining work in a follow-up PR.

@bdice While we're still waiting for the codeowner review, I went ahead and squeezed in the fix. Thanks!

```shell
git grep -E -i 'cugraph.*ops'
```

```
cugraph-dgl/cugraph_dgl/dataloading/utils/sampling_helpers.py:    # and int64 respectively. Since pylibcugraphops binding code doesn't
cugraph-pyg/cugraph_pyg/tests/sampler/test_sampler_utils.py:@pytest.mark.cugraph_ops
cugraph-pyg/cugraph_pyg/tests/sampler/test_sampler_utils.py:@pytest.mark.cugraph_ops
cugraph-pyg/cugraph_pyg/tests/sampler/test_sampler_utils_mg.py:@pytest.mark.cugraph_ops
cugraph-pyg/cugraph_pyg/tests/sampler/test_sampler_utils_mg.py:@pytest.mark.cugraph_ops
cugraph-pyg/pytest.ini:          cugraph_ops: Tests requiring cugraph-ops
```

@jameslamb Above are the only remaining cugraph-ops occurrences. The first one is just a helpful code comment. Regarding the pytest markers, I've lost track of why those tests were marked that way in the first place, as they do not call cugraph-ops internally. @alexbarghi-nv, are the uniform_neighbor_sample calls in these tests specific to cugraph-ops use cases?

@alexbarghi-nv
Member

Originally they used cugraph-ops at the C++ level but it's been taken out now. We can safely remove these markers.
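For readers unfamiliar with the marker, this is roughly what it looks like in the test files, as an illustrative reconstruction from the grep output above (the test name here is made up):

```python
import pytest


# The marker is registered in cugraph-pyg/pytest.ini as
# "cugraph_ops: Tests requiring cugraph-ops" and applied to tests like this:
@pytest.mark.cugraph_ops
def test_sampler_utils_example():  # hypothetical test name
    ...
```

Removing it means dropping both the decorator uses and the pytest.ini registration shown in the grep output.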

@tingyu66
Member Author

/ok to test

@tingyu66 tingyu66 changed the title make cugraph-ops optional for cugraph-gnn packages remove dependency on cugraph-ops Jan 17, 2025
Member
@jameslamb jameslamb left a comment

Thank you so much! I think all of my suggestions have been addressed and that we should merge this.

@alexbarghi-nv it looks like your approval would count for the other codeowners groups that need to approve here.

Contributor
@bdice bdice left a comment

Build changes look good! Thanks for all this cleanup work!

@bdice bdice removed the request for review from KyleFromNVIDIA January 17, 2025 16:11
@alexbarghi-nv
Member

/merge

@rapids-bot rapids-bot bot merged commit d38b832 into rapidsai:branch-25.02 Jan 17, 2025
82 checks passed
@tingyu66 tingyu66 deleted the rm-cugraph-ops branch January 17, 2025 20:03
raydouglass pushed a commit to rapidsai/workflows that referenced this pull request Jan 17, 2025
Contributes to rapidsai/build-infra#155
(private issue)

Stops triggering nightly builds and tests of `cugraph-ops`.

## Notes for Reviewers

This should not be merged until the following are complete:

* [x] rapidsai/cugraph-gnn#99
bdice pushed a commit to rapidsai/devcontainers that referenced this pull request Jan 18, 2025
Contributes to rapidsai/build-infra#155
(private issue)

Removes references to `cugraph-ops`. RAPIDS is dropping `cugraph-ops`
completely (archiving https://github.com/rapidsai/cugraph-ops and no
longer publishing packages) in v25.02.

## Notes for Reviewers

Everything in the diff is based on this search:

```shell
git grep -i ops
```

This should not be merged until the following are complete:

* [x] rapidsai/cugraph-gnn#99
* [x] rapidsai/workflows#70