@@ -13,12 +13,13 @@ Current device/context
The :meth:`Device.set_current` method ensures that the calling host thread has
an active CUDA context set to current. This CUDA context can be seen and accessed
by other GPU libraries without any code change. For libraries built on top of
- the CUDA runtime (``cudart``), this is as if ``cudaSetDevice`` is called.
+ the `CUDA runtime <https://docs.nvidia.com/cuda/cuda-runtime-api/index.html>`_,
+ this is as if ``cudaSetDevice`` is called.

Since CUDA contexts are per-thread constructs, in a multi-threaded program each
host thread should call this method.

- Conversely, if any GPU library already set a device (or context) to current, this
+ Conversely, if any GPU library already sets a device (or context) to current, this
method ensures that the same device/context is picked up by and shared with
``cuda.core``.

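For illustration, here is a minimal sketch of calling this method from each host thread before any GPU work is done (assuming the ``cuda.core.experimental`` namespace; the device index and the worker function are placeholders, not taken from the source):

.. code-block:: python

    import threading

    from cuda.core.experimental import Device

    def worker():
        # Bind a CUDA context for device 0 to *this* host thread.
        Device(0).set_current()
        # ... create streams, launch kernels, etc., on this thread ...

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each worker thread makes the call itself, since the context-current state is a per-host-thread property.
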
@@ -34,7 +35,7 @@ exposing their own stream types.
To address this issue, we propose the ``__cuda_stream__`` protocol (currently version
0) as follows: For any Python objects that are meant to be interpreted as a stream, they
should add a ``__cuda_stream__`` attribute that returns a 2-tuple: The version number
- (``0``) and the address of ``cudaStream_t``:
+ (``0``) and the address of ``cudaStream_t`` (both as Python `int`):

.. code-block:: python
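As an illustrative sketch of the protocol just described (the ``WrappedStream`` class and its ``_handle`` attribute are hypothetical names, not taken from the source), a stream-like object could expose the attribute as a property:

.. code-block:: python

    class WrappedStream:
        """Hypothetical wrapper around a raw CUDA stream handle."""

        def __init__(self, handle: int):
            # The address of the underlying cudaStream_t, as a Python int.
            self._handle = handle

        @property
        def __cuda_stream__(self):
            # Protocol version 0: (version, stream address), both Python ints.
            return (0, self._handle)

Any consumer that understands the protocol can then read ``obj.__cuda_stream__``, check that the version is ``0``, and use the returned address to refer to the stream without copying or re-creating it.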