Reorganized Backends Docs Page (#2481)
* Updated Backends Section
* Removed logos
* Larger image text size
* Added table and cloud section
* Added Python / C++ tabs and Efrat's comments
* Updated table, Backend figure, and condensed fp64
* fixed typo

* Update docs/sphinx/using/backends/backends.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* Review Changes 1/17
* Figure fix

* DCO Remediation Commit for Mark Wolf <[email protected]>

I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: fa6ff04
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: 0bc2771
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: 6c4305e
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: a6efc5d
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: 818e8ea
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: de37d2f
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: eead204
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: d9bf00f
I, Mark Wolf <[email protected]>, hereby add my Signed-off-by to this commit: e57067a

Signed-off-by: Mark Wolf <[email protected]>

* Merging with mainline
* Resolved conflict in `simulators.rst` by adding `photonics.rst`

* DCO Remediation Commit for Pradnya Khalate <[email protected]>

I, Pradnya Khalate <[email protected]>, hereby add my Signed-off-by to this commit: fa09854

Signed-off-by: Pradnya Khalate <[email protected]>

* Fix spellings

Signed-off-by: Pradnya Khalate <[email protected]>

* Photonics plus multi-gpu examples and some ref updates

Signed-off-by: mawolf2023 <[email protected]>

* Fix links for docs generation
* Code formatting
* Spelling fixes
* Updates to the simulator table
* Removed shortened names from titles

Signed-off-by: Pradnya Khalate <[email protected]>

* white figure backgrounds

Signed-off-by: mawolf2023 <[email protected]>

* new orca logo

Signed-off-by: mawolf2023 <[email protected]>

* Update docs/sphinx/using/backends/sims/photonics.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* Update docs/sphinx/using/backends/sims/photonics.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* Update docs/sphinx/using/examples/multi_gpu_workflows.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* Update docs/sphinx/using/examples/multi_gpu_workflows.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* Update docs/sphinx/using/examples/multi_gpu_workflows.rst

Co-authored-by: efratshabtai <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>

* edits 2/4

Signed-off-by: mawolf2023 <[email protected]>

---------

Signed-off-by: mawolf2023 <[email protected]>
Signed-off-by: Mark Wolf <[email protected]>
Signed-off-by: Pradnya Khalate <[email protected]>
Signed-off-by: mawolf2023 <[email protected]>
Co-authored-by: Eric Schweitz <[email protected]>
Co-authored-by: efratshabtai <[email protected]>
Co-authored-by: Ben Howe <[email protected]>
Co-authored-by: Pradnya Khalate <[email protected]>
Co-authored-by: mawolf2023 <[email protected]>
Co-authored-by: Pradnya Khalate <[email protected]>
7 people authored Feb 4, 2025
1 parent 0a8c67e commit 0876ddb
Showing 34 changed files with 2,055 additions and 2,011 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/config/spelling_allowlist.txt
Original file line number Diff line number Diff line change
Expand Up @@ -84,11 +84,13 @@ POSIX
PSIRT
Pauli
Paulis
Photonic
Photonics
PyPI
Pygments
QAOA
QCaaS
QEC
QIR
QIS
QPP
Expand All @@ -109,6 +111,7 @@ SLED
SLES
SLURM
SVD
Sqale
Stim
Superpositions
Superstaq
Expand Down Expand Up @@ -260,6 +263,7 @@ parallelizing
parameterization
performant
photonic
photonics
precompute
precomputed
prepend
Expand Down
2 changes: 1 addition & 1 deletion docs/sphinx/applications/python/vqe_advanced.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -480,7 +480,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, run the code again (the three previous cells) and specify `num_qpus` to be more than one if you have access to multiple GPUs and notice resulting speedup. Thanks to CUDA-Q, this code could be used without modification in a setting where multiple physical QPUs were availible."
"Now, run the code again (the three previous cells) and specify `num_qpus` to be more than one if you have access to multiple GPUs and notice resulting speedup. Thanks to CUDA-Q, this code could be used without modification in a setting where multiple physical QPUs were available."
]
},
{
Expand Down
171 changes: 0 additions & 171 deletions docs/sphinx/examples/python/executing_photonic_kernels.ipynb

This file was deleted.

41 changes: 34 additions & 7 deletions docs/sphinx/examples/python/measuring_kernels.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -69,32 +69,59 @@
"id": "fb5dd767-5db7-4847-b04e-ae5695066800",
"metadata": {},
"source": [
"### Midcircuit Measurement and Conditional Logic\n",
"### Mid-circuit Measurement and Conditional Logic\n",
"\n",
"In certain cases, it it is helpful for some operations in a quantum kernel to depend on measurement results following previous operations. This is accomplished in the following example by performing a Hadamard on qubit 0, then measuring qubit 0 and savig the result as `b0`. Then, an if statement performs a Hadamard on qubit 1 only if `b0` is 1. Measuring this qubit 1 verifies this process as a 1 is the result 25% of the time."
"In certain cases, it is helpful for some operations in a quantum kernel to depend on measurement results following previous operations. This is accomplished in the following example by performing a Hadamard on qubit 0, then measuring qubit 0 and saving the result as `b0`. Then, qubit 0 can be reset and used later in the computation. In this case it is flipped to a 1. Finally, an if statement performs a Hadamard on qubit 1 if `b0` is 1. \n",
"\n",
"The results show qubit 0 is 1, indicating the reset worked, and qubit 1 has a 75/25 distribution, demonstrating the mid-circuit measurement worked as expected."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"id": "44001a51-3733-472c-8bc1-ee694e957708",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{ \n",
" __global__ : { 10:728 11:272 }\n",
" b0 : { 0:505 1:495 }\n",
"}\n",
"\n"
]
}
],
"source": [
"@cudaq.kernel\n",
"def kernel():\n",
" q = cudaq.qvector(2)\n",
" \n",
" h(q[0])\n",
" b0 = mz(q[0])\n",
" reset(q[0])\n",
" x(q[0])\n",
" \n",
" if b0:\n",
" h(q[1])\n",
" mz(q[1])"
" h(q[1]) \n",
"\n",
"print(cudaq.sample(kernel))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d525be71-a745-43a5-a7ca-a2720c536f8c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
Expand Down
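The notebook hunk above claims a 75/25 distribution on qubit 1. That follows from basic probability, and can be checked with a small classical stand-in for the kernel (a hedged sketch using plain Python, not CUDA-Q):

```python
import random

random.seed(0)


def run_once():
    # H on qubit 0: b0 is 0 or 1, each with probability 1/2.
    b0 = random.randint(0, 1)
    # reset(q[0]); x(q[0]): qubit 0 always ends in state 1.
    q0 = 1
    # H on qubit 1 only when b0 == 1: uniform outcome in that branch,
    # otherwise qubit 1 stays 0.
    q1 = random.randint(0, 1) if b0 else 0
    return q0, q1


shots = 100_000
ones_q1 = sum(run_once()[1] for _ in range(shots))
# P(q1 = 1) = P(b0 = 1) * 1/2 = 0.25, matching the 75/25 split in the cell output.
print(ones_q1 / shots)
```

The `__global__` register in the cell output (`10` roughly 75% of the time, `11` roughly 25%) matches this calculation: the first bit is always 1 after the reset-and-flip, and the second bit is 1 only a quarter of the time.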
2 changes: 1 addition & 1 deletion docs/sphinx/index.rst
Original file line number Diff line number Diff line change
Expand Up @@ -31,4 +31,4 @@ You are browsing the documentation for |version| version of CUDA-Q. You can find
Other Versions <versions.rst>

.. |---| unicode:: U+2014 .. EM DASH
:trim:
:trim:
2 changes: 1 addition & 1 deletion docs/sphinx/releases.rst
Original file line number Diff line number Diff line change
Expand Up @@ -87,7 +87,7 @@ The full change log can be found `here <https://github.com/NVIDIA/cuda-quantum/r

**0.7.0**

The 0.7.0 release adds support for using :doc:`NVIDIA Quantum Cloud <using/backends/nvqc>`,
The 0.7.0 release adds support for using :doc:`NVIDIA Quantum Cloud <using/backends/cloud/nvqc>`,
giving you access to our most powerful GPU-accelerated simulators even if you don't have an NVIDIA GPU.
With 0.7.0, we have furthermore greatly increased expressiveness of the Python and C++ language frontends.
Check out our `documentation <https://nvidia.github.io/cuda-quantum/0.7.0/using/quick_start.html>`__
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -16,59 +16,67 @@
exit(0)

np.random.seed(1)
cudaq.set_target("nvidia", option="mqpu")
cudaq.set_target("nvidia")

qubit_count = 5
sample_count = 10000
h = spin.z(0)
parameter_count = qubit_count

# Below we run a circuit for 10000 different input parameters.
# Prepare 10000 different input parameter sets.
parameters = np.random.default_rng(13).uniform(low=0,
high=1,
size=(sample_count,
parameter_count))

kernel, params = cudaq.make_kernel(list)

qubits = kernel.qalloc(qubit_count)
qubits_list = list(range(qubit_count))
@cudaq.kernel
def kernel(params: list[float]):

qubits = cudaq.qvector(5)

for i in range(5):
rx(params[i], qubits[i])


for i in range(qubit_count):
kernel.rx(params[i], qubits[i])
# [End prepare]

# [Begin single]
import timeit
import time

start_time = time.time()
cudaq.observe(kernel, h, parameters)
end_time = time.time()
print(end_time - start_time)

timeit.timeit(lambda: cudaq.observe(kernel, h, parameters),
number=1) # Single GPU result.
# [End single]

# [Begin split]
print('We have', parameters.shape[0],
'parameters which we would like to execute')
print('There are', parameters.shape[0], 'parameter sets to execute')

xi = np.split(
parameters,
4) # We split our parameters into 4 arrays since we have 4 GPUs available.
4) # Split the parameters into 4 arrays since 4 GPUs are available.

print('We split this into', len(xi), 'batches of', xi[0].shape[0], ',',
print('Split parameters into', len(xi), 'batches of', xi[0].shape[0], ',',
xi[1].shape[0], ',', xi[2].shape[0], ',', xi[3].shape[0])
# [End split]

# [Begin multiple]
# Timing the execution on a single GPU vs 4 GPUs,
# one will see a 4x performance improvement if 4 GPUs are available.
# one will see a nearly 4x performance improvement if 4 GPUs are available.

cudaq.set_target("nvidia", option="mqpu")
asyncresults = []
num_gpus = cudaq.num_available_gpus()

start_time = time.time()
for i in range(len(xi)):
for j in range(xi[i].shape[0]):
qpu_id = i * num_gpus // len(xi)
asyncresults.append(
cudaq.observe_async(kernel, h, xi[i][j, :], qpu_id=qpu_id))

result = [res.get() for res in asyncresults]
end_time = time.time()
print(end_time - start_time)
# [End multiple]
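The batching logic in the rewritten script above is plain NumPy plus integer arithmetic for the `qpu_id`; it can be sketched without CUDA-Q at all (assuming, as in the diff, 10000 parameter sets, 4 batches, and 4 GPUs):

```python
import numpy as np

sample_count, parameter_count = 10000, 5
parameters = np.random.default_rng(13).uniform(low=0,
                                               high=1,
                                               size=(sample_count,
                                                     parameter_count))

# Split the 10000 parameter sets into 4 equal batches, one per GPU.
batches = np.split(parameters, 4)
print([b.shape for b in batches])  # four (2500, 5) arrays

# The qpu_id formula from the diff maps batch index i to a device id.
num_gpus = 4
assignments = [i * num_gpus // len(batches) for i in range(len(batches))]
print(assignments)  # batch i goes to GPU i when batches == GPUs
```

With equal batch and GPU counts the formula reduces to `qpu_id = i`; it only does nontrivial work when the number of batches differs from the number of available devices.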
