Updating a cluster between minor versions

    • Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions.

    • Have a recent etcd backup in case your upgrade fails and you must restore your cluster to a previous state.

    • Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid upgrade path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster upgrade. See Updating installed Operators for more information.

    • Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. An example check follows this list.

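    For example, you can list the MCPs and inspect the spec.paused field with standard oc commands such as the following. Pools that have never been paused might show an empty value for the field:

      $ oc get machineconfigpools
      $ oc get machineconfigpools -o jsonpath='{range .items[*]}{.metadata.name}{": paused="}{.spec.paused}{"\n"}{end}'
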
    Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster upgrades. You must remove this setting before you can upgrade your cluster.

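    As a quick check, you can inspect an Operator configuration resource for this field before you upgrade. The following sketch uses the etcd cluster Operator resource as an example; the same pattern applies to other Operator configuration resources that expose unsupportedConfigOverrides:

      $ oc get etcd cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'
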
    If you are running cluster monitoring with an attached PVC for Prometheus, you might experience OOM kills during cluster upgrade. When persistent storage is in use for Prometheus, Prometheus memory usage doubles during cluster upgrade and for several hours after the upgrade is complete. To avoid the OOM kill issue, provision worker nodes with double the memory that was available prior to the upgrade. For example, if you are running monitoring on the minimum recommended nodes, which have 2 cores and 8 GB of RAM, increase memory to 16 GB. For more information, see BZ#1925061.

    About the OpenShift Update Service

    The OpenShift Update Service (OSUS) provides over-the-air updates to OKD, including Fedora CoreOS (FCOS). It provides a graph, or diagram, that contains vertices and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components.

    The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the release image for that update to upgrade your cluster. The release artifacts are hosted in Quay as container images.

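    You can view the channel that your cluster uses and the updates that the CVO currently reports as available from the command line, for example:

      $ oc adm upgrade
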
    To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available.

    Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available.

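    During a rollout, you can follow the per-Operator and overall update status that these controllers report, for example:

      $ oc get clusteroperators
      $ oc get clusterversion
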
    Only upgrading to a newer version is supported. Reverting or rolling back your cluster to a previous version is not supported. If your upgrade fails, contact Red Hat support.

    During the upgrade process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes as specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1. The MCO then applies the new configuration and reboots the machine.

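    For example, to let the MCO take up to two worker nodes out of service at a time, you might raise maxUnavailable on the worker pool; the value of 2 is only illustrative:

      $ oc patch machineconfigpool/worker --type merge -p '{"spec":{"maxUnavailable":2}}'
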
    If you use Fedora machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first.

    With the specification for the new version applied to the old kubelet, the Fedora machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service.

    The OpenShift Update Service is composed of an Operator and one or more application instances.

    During the upgrade process, nodes in the cluster might become temporarily unavailable. The MachineHealthCheck might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, remove any MachineHealthCheck resource that you have deployed before updating the cluster. However, a MachineHealthCheck resource that is deployed by default (such as machine-api-termination-handler) cannot be removed and will be recreated.

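    For example, you can list the MachineHealthCheck resources in the openshift-machine-api namespace and delete the ones that you deployed yourself; the resource name shown here is a placeholder:

      $ oc get machinehealthcheck -n openshift-machine-api
      $ oc delete machinehealthcheck <your-machine-health-check> -n openshift-machine-api
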
    OKD upgrade channels and releases

    In OKD 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. Upgrade channels are tied to a minor version of OKD. For instance, OKD 4.8 upgrade channels recommend upgrades to 4.8 and upgrades within 4.8. They also recommend upgrades within 4.7 and from 4.7 to 4.8, to allow clusters on 4.7 to eventually upgrade to 4.8. They do not recommend upgrades to 4.9 or later releases. This strategy ensures that administrators explicitly decide to upgrade to the next minor version of OKD.

    Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OKD always installs that version.

    Releases are added to the stable-4 channel after passing all tests.

    You can use the stable-4 channel to upgrade from a previous minor version of OKD.

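    You can check or change the channel for your cluster from the command line as well as from the web console. The following commands are one way to do this:

      $ oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'
      $ oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4"}}'
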
    OKD maintains an upgrade recommendation service that understands the version of OKD you have installed as well as the path to take within the channel you choose to get you to the next release.

    For example, the stable-4 channel might contain the following releases:

    • 4.8.0

    • 4.8.1

    • 4.8.3

    • 4.8.4

    The service recommends only upgrades that have been tested and have no serious issues. It will not suggest updating to a version of OKD that contains known vulnerabilities. For example, if your cluster is on 4.8.1 and OKD suggests 4.8.4, then it is safe for you to update from 4.8.1 to 4.8.4. Do not rely on consecutive patch numbers. In this example, 4.8.2 is not and never was available in the channel.

    The presence of an update recommendation in the stable-4 channel at any point is a declaration that the update is supported. While releases will never be removed from the channel, update recommendations that exhibit serious issues will be removed from the channel. Updates initiated after the update recommendation has been removed are still supported.

    If you manage the container images for your OKD clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact upgrades. During upgrade, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings.

    Performing a canary rollout update

    In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. These use cases include, but are not limited to:

    • You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update.

    • You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows.

    The canary rollout update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you to execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a canary rollout update and carefully plan the implementation of the process before you start.

    The canary rollout update process described in this topic involves the following steps; an example configuration follows the list:

    • Creating one or more custom machine config pools (MCPs).

    • Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs.

    • Pausing those custom MCPs, which prevents updates to those nodes.

    • Performing the cluster update.

    • Unpausing one custom MCP, which triggers the update on those nodes.

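    The following is a minimal sketch of these steps. The pool name workerpool-canary, the node name, and the label are examples only; adapt them to your environment:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfigPool
      metadata:
        name: workerpool-canary
      spec:
        machineConfigSelector:
          matchExpressions:
            - key: machineconfiguration.openshift.io/role
              operator: In
              values:
                - worker
                - workerpool-canary
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/workerpool-canary: ""

      # Move a node into the custom pool, then pause the pool before the cluster update:
      $ oc label node <node-name> node-role.kubernetes.io/workerpool-canary=
      $ oc patch machineconfigpool/workerpool-canary --type merge -p '{"spec":{"paused":true}}'

      # After the cluster update completes, unpause the pool to update its nodes:
      $ oc patch machineconfigpool/workerpool-canary --type merge -p '{"spec":{"paused":false}}'
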
    If you want to use the canary rollout update process, see Performing a canary rollout update.

    Updating a cluster by using the web console

    If updates are available, you can update your cluster from the web console.

    You can find information about available OKD advisories and updates in the errata section of the Customer Portal.

    Prerequisites

    • Have access to the web console as a user with admin privileges.

    Procedure

    1. From the web console, click Administration → Cluster Settings and review the contents of the Details tab.

    2. For production clusters, ensure that the Channel is set to the correct channel for your current minor version, such as stable-4.

      • If the Update status is not Updates available, you cannot upgrade your cluster.

      • Select channel indicates the cluster version that your cluster is running or is updating to.

    3. Select the highest available version and click Save.

      The Update status changes to Update to <product-version> in progress, and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes.

      If you are updating your cluster to the next minor version, such as from version 4.y to 4.(y+1), it is recommended that you confirm that your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page.

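      From the command line, one way to confirm this is to check that every machine config pool reports UPDATED and that each node reports the expected kubelet version:

        $ oc get machineconfigpools
        $ oc get nodes
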
    4. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel.

      • If updates are available, continue to perform updates in the current channel until you can no longer update.

      • If no updates are available, change the Channel to the stable-* channel for the next minor version, and update to the version that you want in that channel.

      You might need to perform several intermediate updates until you reach the version that you want.

    Changing the update server by using the web console

    Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates.

    Procedure

    1. Navigate to Administration → Cluster Settings and click version.

    2. Click the YAML tab and then edit the upstream parameter value:

      Example

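      The following is a minimal sketch of the relevant ClusterVersion fields; the channel value is only a placeholder, and the upstream line is the value to add or edit:

        apiVersion: config.openshift.io/v1
        kind: ClusterVersion
        metadata:
          name: version
        spec:
          channel: stable-4
          upstream: '<update-server-url>'
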
      The <update-server-url> variable specifies the URL for the update server.
    3. Click Save.