Update theme #142

Merged: 2 commits, Jan 29, 2025
28 changes: 28 additions & 0 deletions .codespell_exclude_lines.txt
@@ -18,3 +18,31 @@ command-line argument to the ``az aks nodepool add`` command.
`Skip GPU driver installation (preview) <https://learn.microsoft.com/en-us/azure/aks/gpu-cluster?source=recommendations&tabs=add-ubuntu-gpu-node-pool#skip-gpu-driver-installation-preview>`__
After you start your Azure AKS cluster with an image that includes a preinstalled NVIDIA GPU Driver
Azure AKS <microsoft-aks.rst>
.. |prod-name-short| replace:: MKE
Mirantis Kubernetes Engine (MKE) gives you the power to build, run, and scale cloud-native
* - MKE 3.6.2+ and 3.5.7+
* A running MKE cluster with at least one control plane node and two worker nodes.
* A seed node to connect to the MKE instance, with Helm 3.x installed on the seed node.
* The kubeconfig file for the MKE cluster on the seed node.
You can get the file from the MKE web interface by downloading a client bundle.
Alternatively, if the MKE cluster is a managed cluster of a Mirantis Container Cloud (MCC) instance,
In this case, the MKE web interface can be accessed from the MCC web interface.
* You have an MKE administrator user name and password, and you have the MKE host URL.
Perform the following steps to prepare the MKE cluster:
#. MKE does not apply a label to worker nodes.
$ export MKE_USERNAME=<mke-username> \
MKE_PASSWORD=<mke-password> \
MKE_HOST=<mke-fqdn-or-ip-address>
#. Get an API key from MKE so that you can make API calls later:
'{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}' \
https://$MKE_HOST/auth/login | jq --raw-output .auth_token)
#. Download the MKE configuration file:
$ curl --silent --insecure -X GET "https://$MKE_HOST/api/ucp/config-toml" \
#. Upload the edited MKE configuration file:
https://$MKE_HOST/api/ucp/config-toml
The MKE cluster is ready for you to install the GPU Operator with Helm.
Refer to the MKE product documentation for information about working with MKE.
* https://docs.mirantis.com/mke/3.6/overview.html
$ cat <<EOF > nvidia-container-microshift.te
$ checkmodule -m -M -o nvidia-container-microshift.mod nvidia-container-microshift.te
2023/06/22 14:25:38 Retreiving plugins.
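The MKE fragments above (the exported variables, the `/auth/login` call, and the `config-toml` endpoints) outline one token-retrieval flow. A minimal Python sketch of the payload and URL construction, using placeholder credentials and leaving the actual network calls as comments:

```python
import json

# Illustrative placeholders; in the documented flow these come from the
# MKE_USERNAME, MKE_PASSWORD, and MKE_HOST environment variables.
username, password, host = "admin", "secret", "mke.example.com"

# Same payload the excerpt builds by shell string concatenation:
# '{"username":"'$MKE_USERNAME'","password":"'$MKE_PASSWORD'"}'
auth_payload = json.dumps({"username": username, "password": password},
                          separators=(",", ":"))

login_url = f"https://{host}/auth/login"            # POST the payload here ...
config_url = f"https://{host}/api/ucp/config-toml"  # ... then GET/PUT the config

# The excerpt extracts the token with `jq --raw-output .auth_token`;
# in Python that would be response.json()["auth_token"] (request omitted).
print(auth_payload)
```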
2 changes: 1 addition & 1 deletion .gitlab-ci.yml
@@ -2,7 +2,7 @@ variables:
CONTAINER_TEST_IMAGE: "${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}"
CONTAINER_RELEASE_IMAGE: "${CI_REGISTRY_IMAGE}:0.5.0"
BUILDER_IMAGE: ghcr.io/nvidia/cloud-native-docs:0.2.0
-PUBLISHER_IMAGE: "${CI_REGISTRY_PUBLISHER}/publisher:2.0.0"
+PUBLISHER_IMAGE: "${CI_REGISTRY_PUBLISHER}/publisher:3.1.0"

stages:
- .pre
47 changes: 23 additions & 24 deletions README.md
@@ -108,25 +108,24 @@ Always update the openshift docset when there is a new gpu-operator docset version.
copyright_start = 2020
```

-1. Update the version in `<component-name>/versions.json`:
+1. Update the version in `<component-name>/versions1.json`:

```diff
diff --git a/container-toolkit/versions.json b/container-toolkit/versions.json
index 334338a..b15af73 100644
--- a/container-toolkit/versions.json
+++ b/container-toolkit/versions.json
@@ -1,7 +1,10 @@
{
- "latest": "1.13.1",
+ "latest": "NEW_VERSION",
"versions":
[
+ {
+ "version": "NEW_VERSION"
+ },
{
"version": "1.13.1"
},
diff --git a/container-toolkit/versions1.json b/container-toolkit/versions1.json
index 95429953..e2738987 100644
--- a/container-toolkit/versions1.json
+++ b/container-toolkit/versions1.json
@@ -1,6 +1,10 @@
[
{
"preferred": "true",
+ "url": "../1.17.4",
+ "version": "1.17.4"
+ },
+ {
"url": "../1.17.3",
"version": "1.17.3"
},
```
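The versions1.json edit shown in the example can also be scripted. `add_version` is a hypothetical helper for illustration, not part of this repository:

```python
import json

def add_version(entries, new_version):
    """Prepend new_version and move the "preferred" flag to it,
    mirroring the versions1.json edit shown in the diff above."""
    updated = [{"preferred": "true",
                "url": f"../{new_version}",
                "version": new_version}]
    for entry in entries:
        entry = dict(entry)
        entry.pop("preferred", None)  # only the newest entry is preferred
        updated.append(entry)
    return updated

entries = json.loads('[{"preferred": "true", "url": "../1.17.3", "version": "1.17.3"},'
                     ' {"url": "../1.17.2", "version": "1.17.2"}]')
print(json.dumps(add_version(entries, "1.17.4"), indent=2))
```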

These values control the menu at the bottom of the table of contents and
@@ -137,13 +136,13 @@ Always update the openshift docset when there is a new gpu-operator docset version.
The documentation for the older releases is not removed; readers are just
less likely to browse the older releases.

-### Tagging and Special Branch Naming
+### Tagging for Publication

Changes to the default branch are not published on docs.nvidia.com.

-Only tags or specially-named branches are published to docs.nvidia.com.
+Only tags are published to docs.nvidia.com.

-1. Create a tag or specially-named branch from your commit with the following naming pattern: `<component-name>-v<version>`.
+1. Create a tag from your commit with the following naming pattern: `<component-name>-v<version>`.

*Example*

@@ -152,13 +151,13 @@ Only tags or specially-named branches are published to docs.nvidia.com.
```

The first three fields of the semantic version are used.
-For a "do over," push a tag like `gpu-operator-v23.3.1.1`.
+For a "do over," push a tag like `gpu-operator-v23.3.1-1`.

-Always tag the openshift docset and driver-containers docset for each new gpu-operator docset release.
+Always tag the openshift docset for each new gpu-operator docset release.

-1. Push the tag or specially-named branch to the repository.
+1. Push the tag to the repository.

-CI builds the documentation for the Git ref---currently for all software components.
+CI builds the documentation for the Git ref, for all software components.
However, only the documentation for the `component-name` and specified version is updated on the web.
By default, the documentation for the "latest" URL is updated.
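The `<component-name>-v<version>` convention above lends itself to a mechanical check. The regex and `parse_tag` helper below are hypothetical illustrations, not part of the repository's CI:

```python
import re

# `<component-name>-v<version>` with an optional "do over" suffix,
# such as gpu-operator-v23.3.1-1 (see the tagging steps above).
TAG_RE = re.compile(
    r"^(?P<component>[a-z0-9-]+)-v(?P<version>\d+\.\d+\.\d+)(?:-(?P<redo>\d+))?$"
)

def parse_tag(tag):
    """Return (component, version, redo) for a publishable tag, else None."""
    match = TAG_RE.match(tag)
    if match is None:
        return None
    return match.group("component"), match.group("version"), match.group("redo")

print(parse_tag("gpu-operator-v23.3.1"))
print(parse_tag("gpu-operator-v23.3.1-1"))
```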

8 changes: 0 additions & 8 deletions container-toolkit/cdi-support.md
@@ -8,14 +8,6 @@

# Support for Container Device Interface

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## About the Container Device Interface

As of the `v1.12.0` release the NVIDIA Container Toolkit includes support for generating Container Device Interface (CDI) specifications.
8 changes: 0 additions & 8 deletions container-toolkit/docker-specialized.md
@@ -4,14 +4,6 @@

# Specialized Configurations with Docker

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## Environment variables (OCI spec)

Users can control the behavior of the NVIDIA container runtime using environment variables - especially for
8 changes: 0 additions & 8 deletions container-toolkit/install-guide.md
@@ -4,14 +4,6 @@

# Installing the NVIDIA Container Toolkit

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## Installation

(pre-requisites)=
8 changes: 0 additions & 8 deletions container-toolkit/sample-workload.md
@@ -1,13 +1,5 @@
# Running a Sample Workload

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## Running a Sample Workload with Docker

After you install and configure the toolkit and install an NVIDIA GPU Driver,
7 changes: 0 additions & 7 deletions container-toolkit/supported-platforms.md
@@ -6,13 +6,6 @@

# Supported Platforms

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## Linux Distributions

12 changes: 2 additions & 10 deletions container-toolkit/troubleshooting.md
@@ -6,14 +6,6 @@

# Troubleshooting

```{contents}
---
depth: 2
local: true
backlinks: none
---
```

## Troubleshooting with Docker

### Generating Debugging Logs
@@ -67,7 +59,7 @@ The conflicting repository references can be obtained by running and inspecting
$ grep "nvidia.github.io" /etc/apt/sources.list.d/*
```

-The list of files with (possibly) conflicting references can be optained by running:
+The list of files with possibly conflicting references can be obtained by running:

```console
$ grep -l "nvidia.github.io" /etc/apt/sources.list.d/* | grep -vE "/nvidia-container-toolkit.list\$"
@@ -105,7 +97,7 @@ allow this access for now by executing:

This occurs because `nvidia-docker` forwards the command line arguments with minor modifications to the `docker` executable.

-To address this it is recommeded that the `docker` command be used directly specifying the `nvidia` runtime:
+To address this, specify the NVIDIA runtime in the `docker` command:

```console
$ sudo docker run --gpus=all --runtime=nvidia --rm nvcr.io/nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
2 changes: 1 addition & 1 deletion container-toolkit/versions.json
@@ -25,6 +25,6 @@
},
{
"version": "1.16.0"
-},
+}
]
}
31 changes: 31 additions & 0 deletions container-toolkit/versions1.json
@@ -0,0 +1,31 @@
[
{
"preferred": "true",
"url": "../1.17.3",
"version": "1.17.3"
},
{
"url": "../1.17.2",
"version": "1.17.2"
},
{
"url": "../1.17.1",
"version": "1.17.1"
},
{
"url": "../1.17.0",
"version": "1.17.0"
},
{
"url": "../1.16.2",
"version": "1.16.2"
},
{
"url": "../1.16.1",
"version": "1.16.1"
},
{
"url": "../1.16.0",
"version": "1.16.0"
}
]
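A consumer of the new versions1.json (such as the version menu described in the README) might resolve the preferred release as sketched below; `preferred_entry` is an illustrative helper, not actual site code:

```python
import json

# Abbreviated copy of the versions1.json shape added in this change.
VERSIONS1 = """[
  {"preferred": "true", "url": "../1.17.3", "version": "1.17.3"},
  {"url": "../1.17.2", "version": "1.17.2"},
  {"url": "../1.16.0", "version": "1.16.0"}
]"""

def preferred_entry(entries):
    """Return the entry flagged "preferred", falling back to the first one."""
    for entry in entries:
        if entry.get("preferred") == "true":
            return entry
    return entries[0]

entry = preferred_entry(json.loads(VERSIONS1))
print(entry["version"], entry["url"])
```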
38 changes: 6 additions & 32 deletions css/custom.css
@@ -1,33 +1,7 @@
/* style field lists */

img.broder {
border: 1px solid black;
}

.fa-chevron-right {
font-size: 10px;
}

ul.wy-breadcrumbs li {
font-size: 10pt;
}

/* adds scrollbar to sidenav */
.wy-side-scroll {
width: auto;
overflow-y: auto;
}

td > div.line-block, th > div.line-block {
margin-bottom: 0px !important;
}

table.docutils td > p {
margin-top: 16px;
white-space: normal;
}

table.docutils td > p:nth-of-type(1) {
margin-top: 0px;
white-space: normal;
/*!
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: Apache-2.0
*/
html[data-theme=light] .highlight .go {
font-style:unset
}