
how to process backend on multi-gpus? #42

Open
liangyongshi opened this issue Apr 20, 2022 · 8 comments

Comments

@liangyongshi

how to process backend on multi-gpus?

@liangyongshi (Author)

Can the frontend and backend be processed on two separate GPUs, respectively?
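For what it's worth, that split could be sketched roughly as below. This is only an assumption about how one might pin the two stages to different devices (all module names are hypothetical, and it falls back to CPU when two GPUs are not available):

```python
import torch

def pick_devices():
    """Return (frontend_device, backend_device), using two GPUs when present."""
    if torch.cuda.device_count() >= 2:
        return torch.device("cuda:0"), torch.device("cuda:1")
    return torch.device("cpu"), torch.device("cpu")

frontend_device, backend_device = pick_devices()

# Each stage's module would be moved to its device once at startup, e.g.
#   frontend_net = frontend_net.to(frontend_device)
#   backend_net  = backend_net.to(backend_device)
# and any tensor handed from frontend to backend copied across explicitly:
poses = torch.zeros(8, 7, device=frontend_device)  # frontend output (dummy)
poses_for_ba = poses.to(backend_device)            # explicit cross-device copy
```

The key point is that tensors do not move between devices implicitly; every hand-off between the two stages needs an explicit `.to(...)` copy.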

@liangyongshi (Author)

I run the frontend on 5 GPUs, and it reports this error:
ii, jj = torch.as_tensor(es, device=self.device).unbind(dim=-1)
ValueError: not enough values to unpack (expected 2, got 0)
How can I run the backend on multiple GPUs?
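A minimal reproduction of that ValueError, under the assumption that `es` is the list of factor-graph edges `(i, j)`: with a non-empty edge list, unbinding along the last dimension yields the two index tensors `ii` and `jj`, but if no edges were built (e.g. on a GPU whose partition received no frames), the tensor has shape `(0,)` and `unbind` returns an empty tuple, so the two-way unpack fails.

```python
import torch

# Non-empty edge list: unbind along the last dim gives (ii, jj).
es = [(0, 1), (1, 2), (2, 3)]
ii, jj = torch.as_tensor(es).unbind(dim=-1)
print(ii.tolist(), jj.tolist())  # [0, 1, 2] [1, 2, 3]

# Empty edge list: the tensor has shape (0,), unbind returns an empty
# tuple, and the unpack raises
# "ValueError: not enough values to unpack (expected 2, got 0)".
try:
    ii, jj = torch.as_tensor([]).unbind(dim=-1)
except ValueError as err:
    print(err)
```

So the error suggests the multi-GPU frontend produced an empty edge list on at least one device, rather than a problem in `unbind` itself.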

@xhangHU

xhangHU commented Apr 27, 2022

Hi, I also encountered this problem. Did you solve it?

@liangyongshi (Author)

liangyongshi commented Apr 29, 2022 via email

@billamiable

billamiable commented Jun 23, 2022

I'm having the same issue; can anyone help? Meanwhile, the current implementation actually runs global BA only just before system termination, which is not real-time performance.

@buenos-dan

> I'm having the same issue; can anyone help? Meanwhile, the current implementation actually runs global BA only just before system termination, which is not real-time performance.

I have the same question.

@liangyongshi (Author)

liangyongshi commented Jul 26, 2022 via email

@liangyongshi (Author)

liangyongshi commented Oct 11, 2022 via email
