By default, {product-title} does not manage the boot image. You can configure your cluster to update the boot image whenever you update your cluster by modifying the `MachineConfiguration` object.
Enabling the feature updates the boot image to the {op-system-first} boot image version appropriate for your cluster. If the cluster is updated to a new {product-title} version again in the future, the boot image is updated again. New nodes created after enabling the feature use the updated boot image. This feature has no effect on existing nodes.
To enable the boot image management feature for control plane machine sets or to re-enable the boot image management feature for worker machine sets where it was disabled, edit the `MachineConfiguration` object. You can enable the feature for all of the machine sets in the cluster or specific machine sets.
.Prerequisites
* If you are enabling boot image management for control plane machine sets, you enabled the required Technology Preview features for your cluster by editing the `FeatureGate` CR named `cluster`. For more information, see "Enabling features using feature gates" in the _Additional resources_ section.
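As an illustrative sketch only, a `MachineConfiguration` object that enables boot image management for all machine sets might look like the following example. The `managedBootImages` stanza follows the `operator.openshift.io/v1` API; verify the exact field names against the API reference for your {product-title} version.

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster
spec:
  managedBootImages:
    machineManagers:
    - resource: machinesets
      apiGroup: machine.openshift.io
      selection:
        mode: All
----

The API also supports selecting only specific machine sets instead of all of them; consult the `MachineConfiguration` API reference for the partial-selection fields.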
// modules/mco-update-boot-images-disable.adoc
[id="mco-update-boot-images-disable_{context}"]
= Disabling updated boot images
[role="_abstract"]
You can disable the boot image management feature so that the Machine Config Operator (MCO) no longer manages or updates the boot image in the affected machine sets. For example, you might disable this feature for the worker nodes in order to use a custom boot image that you do not want changed. To disable the feature, edit the `MachineConfiguration` object so that the `machineManagers` field is an empty array.
If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not roll back the nodes or machine sets to the originally installed boot image. The machine sets retain the boot image version that was present when the feature was enabled, and that version is not updated again when the cluster is upgraded to a new {product-title} version in the future.
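Continuing the description above, a sketch of the disabled configuration sets `machineManagers` to an empty array. Verify the field path against the `operator.openshift.io/v1` API reference for your version.

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster
spec:
  managedBootImages:
    machineManagers: []
----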
// modules/nodes-nodes-viewing-listing-pods.adoc
[id="nodes-nodes-viewing-listing-pods_{context}"]
= Listing pods on a node in your cluster
[role="_abstract"]
You can list all of the pods on a node by using the `oc get pods` command with specific flags. The output shows the number of pods on that node, the state of each pod, the number of pod restarts, and the age of each pod.
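As a hedged example, the standard `--field-selector` flag of `oc get` can scope pods to one node; the node name here is a placeholder:

[source,terminal]
----
$ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>
----

The `-o wide` output includes the `READY`, `STATUS`, `RESTARTS`, and `AGE` columns for each pod on the selected node.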
OutOfDisk        False   Wed, 13 Feb 2019 15:09:42 -0500   Wed, 13 Feb 2019 11:05:57 -0500   KubeletHasSufficientDisk     kubelet has sufficient disk space available
MemoryPressure   False   Wed, 13 Feb 2019 15:09:42 -0500   Wed, 13 Feb 2019 11:05:57 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Wed, 13 Feb 2019 15:09:42 -0500   Wed, 13 Feb 2019 11:05:57 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Wed, 13 Feb 2019 15:09:42 -0500   Wed, 13 Feb 2019 11:05:57 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Wed, 13 Feb 2019 15:09:42 -0500   Wed, 13 Feb 2019 11:07:09 -0500   KubeletReady                 kubelet is posting ready status
Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID
Normal Starting 6d kubelet, m01.example.com Starting kubelet.
#...
----
ifndef::openshift-rosa,openshift-dedicated[]
where:

--
`Names`:: Specifies the name of the node.
`Roles`:: Specifies the role of the node, either `master` or `worker`.
`Labels`:: Specifies the labels applied to the node.
`Annotations`:: Specifies the annotations applied to the node.
`Taints`:: Specifies the taints applied to the node.
`Conditions`:: Specifies the node conditions and status. The `conditions` stanza lists the `Ready`, `PIDPressure`, `MemoryPressure`, `DiskPressure`, and `OutOfDisk` statuses. These conditions are described later in this section.
`Addresses`:: Specifies the IP address and hostname of the node.
`Capacity`:: Specifies the pod resources and allocatable resources.
`Information`:: Specifies information about the node host.
`Non-terminated Pods`:: Specifies the pods on the node.
`Events`:: Specifies the events reported by the node.
The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see _Understanding how to update labels on nodes_ in the _Additional resources_ section.
====
endif::openshift-rosa,openshift-dedicated[]
Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section:
// modules/nodes-nodes-viewing-memory.adoc
[id="nodes-nodes-viewing-memory_{context}"]
= Viewing memory and CPU usage statistics on your nodes
[role="_abstract"]
You can display usage statistics about nodes, including CPU, memory, and storage consumption. These statistics can help you ensure your cluster is running efficiently.
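For example, you can display the current usage statistics for all nodes with the `oc adm top` command. The exact output columns depend on your {product-title} version.

[source,terminal]
----
$ oc adm top nodes
----

To view the statistics for a single node, pass the node name, for example `oc adm top node <node_name>`.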
You can delete a node from a {product-title} cluster that does not use machine sets by using the `oc delete node` command and decommissioning the node.
When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to {product-title}. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
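The deletion itself uses the `oc delete node` command mentioned above; for example, with a placeholder node name:

[source,terminal]
----
$ oc delete node <node_name>
----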
The following procedure deletes a node from an {product-title} cluster running on bare metal.

.Procedure
. Mark the node as unschedulable:
+
[source,terminal]
----
$ oc adm cordon <node_name>
----

. Drain all pods on the node:
+
[source,terminal]
----
$ oc adm drain <node_name> --force=true
----
+
This step might fail if the node is offline or unresponsive. Even if the node does not respond, the node might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.
** Set the length of time to wait before giving up by using the `--timeout` option with the `oc adm drain` command. A value of `0` sets an infinite length of time.
+
[source,terminal]
----
$ oc adm drain <node1> <node2> --timeout=5s
----
** Delete pods even if there are pods using `emptyDir` volumes by setting the `--delete-emptydir-data=true` option with the `oc adm drain` command. Local data is deleted when the node is drained.
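For example, following the pattern of the `--timeout` example above (the node names are placeholders):

[source,terminal]
----
$ oc adm drain <node1> <node2> --delete-emptydir-data=true
----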