Hi,

This is a general question. Sometimes we need a customized operation that the framework library does not provide, such as deformable convolution. In PyTorch or TensorFlow this is typically done with custom CUDA programming.
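For context, the heart of deformable convolution is sampling a feature map at learned fractional offsets, and that part can be written in pure JAX and handed to XLA. Below is a minimal sketch, assuming a single-channel image and per-point coordinates already computed; the function and variable names are illustrative, not any library's API:

```python
import jax
import jax.numpy as jnp

def bilinear_sample(image, y, x):
    """Sample a (H, W) image at fractional coordinates (y, x) with bilinear weights."""
    h, w = image.shape
    y0 = jnp.clip(jnp.floor(y).astype(jnp.int32), 0, h - 2)
    x0 = jnp.clip(jnp.floor(x).astype(jnp.int32), 0, w - 2)
    dy, dx = y - y0, x - x0
    v00 = image[y0, x0]
    v01 = image[y0, x0 + 1]
    v10 = image[y0 + 1, x0]
    v11 = image[y0 + 1, x0 + 1]
    return ((1 - dy) * (1 - dx) * v00 + (1 - dy) * dx * v01
            + dy * (1 - dx) * v10 + dy * dx * v11)

# vmap batches over many sample points; jit hands the whole thing to XLA,
# which compiles the gathers and arithmetic into fused GPU kernels --
# no hand-written CUDA involved.
sample_many = jax.jit(jax.vmap(bilinear_sample, in_axes=(None, 0, 0)))

image = jnp.arange(16.0).reshape(4, 4)
ys = jnp.array([0.5, 1.25, 2.0])
xs = jnp.array([0.5, 0.75, 1.5])
print(sample_many(image, ys, xs))
```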
Since JAX/Flax already has its own mechanism for parallel acceleration (XLA), the question arises: does that mean we do not need CUDA programming in JAX/Flax? And theoretically, how does the performance compare with hand-written CUDA?
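On the performance question, one practical answer is to measure rather than reason theoretically: XLA fuses chains of elementwise operations into single kernels, which is often competitive with hand-written CUDA for memory-bound work. Here is a hedged micro-benchmark sketch; `fused_op` is a stand-in for any pure-JAX custom operation (such as the sampler above), and the timing accounts for JAX's asynchronous dispatch:

```python
import time
import jax
import jax.numpy as jnp

@jax.jit
def fused_op(x):
    # Several elementwise ops that XLA fuses into one GPU kernel.
    return jnp.tanh(x) * jax.nn.sigmoid(x) + 0.5 * x

x = jax.random.normal(jax.random.PRNGKey(0), (4096, 4096))
fused_op(x).block_until_ready()  # warm-up: triggers compilation

t0 = time.perf_counter()
for _ in range(100):
    out = fused_op(x)
out.block_until_ready()          # wait for async dispatch to finish
print(f"mean per call: {(time.perf_counter() - t0) / 100 * 1e3:.3f} ms")
```

The `block_until_ready()` calls matter: JAX returns control to Python before the GPU finishes, so timing without them measures only dispatch overhead.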
Replies: 1 comment · 3 replies

I'm not sure if the repository is up to date, but this might help: https://github.com/dfm/extending-jax
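That repository walks through binding a real C++/CUDA kernel to JAX through XLA's custom-call mechanism. As a lighter-weight sketch of the same "call something XLA doesn't know about" idea, JAX itself ships `jax.pure_callback`, which calls back into host code from inside `jit`. This is not the repo's custom-call machinery, and it runs on the host (so it is for prototyping, not GPU performance); plain NumPy stands in for an external kernel here:

```python
import numpy as np
import jax
import jax.numpy as jnp

def external_kernel(x):
    # Pretend this is a library routine that JAX/XLA does not provide.
    return np.asarray(np.sinc(x), dtype=x.dtype)

@jax.jit
def f(x):
    out_spec = jax.ShapeDtypeStruct(x.shape, x.dtype)  # declared output shape/dtype
    y = jax.pure_callback(external_kernel, out_spec, x)
    return y * 2.0

print(f(jnp.linspace(0.0, 1.0, 5)))
```

For real GPU speed on an op like deformable convolution, you would register an actual CUDA kernel as an XLA custom call, which is exactly what extending-jax demonstrates.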