OSDOCS-14996-17: Documented the 4.17.34 z-stream RNs #95031

Merges 1 commit into base branch `enterprise-4.17`.
44 additions, 0 deletions in `release_notes/ocp-4-17-release-notes.adoc`:

This section will continue to be updated over time to provide notes on enhancements
For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.
====

// 4.17.34
[id="ocp-4-17-34_{context}"]
=== RHBA-2025:9289 - {product-title} {product-version}.34 bug fix update

Issued: 25 June 2025

{product-title} release {product-version}.34 is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHBA-2025:9289[RHBA-2025:9289] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHBA-2025:9290[RHBA-2025:9290] advisory.

Space precluded documenting all of the container images for this release in the advisory.

You can view the container images in this release by running the following command:

[source,terminal]
----
$ oc adm release info 4.17.34 --pullspecs
----

[id="ocp-4-17-34-known-issues_{context}"]
==== Known issues

* A known issue exists where a Technology Preview-enabled cluster includes Sigstore verification for payload images in `policy.json`. If the Podman version in the base image does not support the Sigstore configuration, a new node fails to become available. As a workaround, if the base image is version 4.11 or earlier, use the default `policy.json` that does not include Sigstore verification so that the new node starts running. (link:https://issues.redhat.com/browse/OCPBUGS-52313[OCPBUGS-52313])
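
An illustrative sketch of a default `policy.json` that does not include Sigstore verification follows. The exact file content can vary by release; this minimal example assumes the standard containers default policy, which accepts any image without signature checks:

[source,json]
----
{
  "default": [
    {
      "type": "insecureAcceptAnything"
    }
  ]
}
----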

[id="ocp-4-17-34-bug-fixes_{context}"]
==== Bug fixes

* Previously, if you tried to update a hosted cluster that used in-place updates, the proxy variables were not honored and the update failed. With this release, the pod that performs in-place upgrades honors the cluster proxy settings. As a result, updates now work for hosted clusters that use in-place updates. (link:https://issues.redhat.com/browse/OCPBUGS-57432[OCPBUGS-57432])

* Previously, when you defined multiple bring-your-own (BYO) subnet CIDRs for the `machineNetwork` parameter in the `install-config.yaml` configuration file, the installation failed at the bootstrap stage. This situation occurred because the control plane nodes were blocked from reaching the machine config server (MCS) to get their necessary setup configurations. The root cause was an overly strict {aws-short} security group rule that limited MCS access to only the first specified machine network CIDR. With this release, a fix to the {aws-short} security group means that the installation succeeds when multiple CIDRs are specified in the `machineNetwork` parameter of the `install-config.yaml`. (link:https://issues.redhat.com/browse/OCPBUGS-57292[OCPBUGS-57292])

// SME
* Previously, the Machine Config Operator (MCO) incorrectly set the `Upgradeable=False` condition, with a reason of `PoolUpdating`, on all new nodes that were added to a cluster. With this release, the MCO correctly sets the `Upgradeable=True` condition on all new nodes that are added to a cluster, so the issue no longer exists. (link:https://issues.redhat.com/browse/OCPBUGS-57135[OCPBUGS-57135])

* Previously, the installer was not checking for ESXi hosts that were powered off within a {vmw-first} cluster, which caused the installation to fail because the OVA could not be uploaded. With this release, the installer now checks the power status of each ESXi host and skips any that are powered off, which resolves the issue and allows the OVA to be imported successfully. (link:https://issues.redhat.com/browse/OCPBUGS-56448[OCPBUGS-56448])

* Previously, in certain situations the gateway IP address for a node changed, which caused the `OVN` cluster router, which manages the static route to the cluster subnet, to add a new static route with the new gateway IP address without deleting the original one. As a result, a stale route still pointed to the switch subnet, causing intermittent drops in egress traffic. With this release, a patch applied to the `OVN` cluster router ensures that if the gateway IP address changes, the router updates the existing static route with the new gateway IP address. Stale routes no longer remain on the `OVN` cluster router, so egress traffic no longer drops intermittently. (link:https://issues.redhat.com/browse/OCPBUGS-56443[OCPBUGS-56443])

* Previously, a pod with an IP address on an OVN localnet network was unreachable from other pods that ran on the same node but used the default network for communication. Communication between pods on different nodes was not affected by this issue. With this release, a localnet pod and a default network pod that run on the same node can communicate, so the issue no longer exists. (link:https://issues.redhat.com/browse/OCPBUGS-56244[OCPBUGS-56244])

* Previously, an `iptables-alerter` pod had to make several calls to the `crictl` command-line interface (CLI) for each pod that existed in a node to fetch information for the cluster. These calls required high CPU usage that impacted cluster performance. With this release, an `iptables-alerter` pod only needs to make a single call to `crictl` to fetch information for all pods that exist in a node. (link:https://issues.redhat.com/browse/OCPBUGS-55518[OCPBUGS-55518])
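
As an illustrative sketch of the scenario fixed in link:https://issues.redhat.com/browse/OCPBUGS-57292[OCPBUGS-57292], an `install-config.yaml` networking stanza that defines multiple bring-your-own `machineNetwork` CIDRs might look like the following. All CIDR values here are hypothetical:

[source,yaml]
----
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 10.0.0.0/17
  - cidr: 10.0.128.0/17
----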

[id="ocp-4-17-34-updating_{context}"]
==== Updating

To update an {product-title} 4.17 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI].
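
For example, to target this release directly with the CLI, assuming a recommended update path to 4.17.34 exists for your cluster:

[source,terminal]
----
$ oc adm upgrade --to=4.17.34
----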

// 4.17.33
[id="ocp-4-17-33_{context}"]
=== RHSA-2025:8552 - {product-title} {product-version}.33 bug fix update and security update