Updating a cluster using the CLI

    • Have access to the cluster as a user with admin privileges. See .

    • Have a recent etcd backup in case your update fails and you must .

    • Support for Fedora 7 workers is removed in OKD 4.13. You must replace Fedora 7 workers with Fedora 8 or FCOS workers before upgrading to OKD 4.13. Red Hat does not support in-place Fedora 7 to Fedora 8 updates for Fedora workers; those hosts must be replaced with a clean operating system install.

    • Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the next during a cluster update. See Updating installed Operators for more information.

    • Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. See the example checks after this list for one way to verify that no MCP is paused.

    • If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see .

    • Ensure that you address all Upgradeable=False conditions so the cluster allows an update to the next minor version. An alert displays at the top of the Cluster Settings page when you have one or more cluster Operators that cannot be upgraded. You can still update to the next available patch update for the minor release you are currently on.

    • Review the list of APIs that were removed in Kubernetes 1.26, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see Preparing to update to OKD 4.13.

    • If you run an Operator or you have configured any application with a pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in a PodDisruptionBudget, the eviction process can be blocked when nodes are drained to apply pending machine configs. If several nodes are rebooted, all of the pods might end up running on only one node, and the PodDisruptionBudget can then prevent the node drain. See the example checks after this list.
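
    For example, the following checks confirm that no machine config pool is paused, surface any Upgradeable=False condition, and list the pod disruption budgets across namespaces. This is a minimal sketch; the jsonpath expressions are illustrative:

      $ oc get machineconfigpools
      $ oc get machineconfigpool -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.paused}{"\n"}{end}'
      $ oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}{"\n"}'
      $ oc get poddisruptionbudget --all-namespaces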

    Pausing a MachineHealthCheck resource

    During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster.

    Prerequisites

    • Install the OpenShift CLI (oc).

    Procedure

    1. To list all the available MachineHealthCheck resources that you want to pause, run the following command:
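
      $ oc get machinehealthcheck -n openshift-machine-api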

    2. To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command:

      $ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=""

      The annotated MachineHealthCheck resource resembles the following YAML file:

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineHealthCheck
      metadata:
        name: example
        namespace: openshift-machine-api
        annotations:
          cluster.x-k8s.io/paused: ""
      spec:
        selector:
          matchLabels:
            role: worker
        unhealthyConditions:
        - type: "Ready"
          status: "Unknown"
          timeout: "300s"
        - type: "Ready"
          status: "False"
          timeout: "300s"
        maxUnhealthy: "40%"
      status:
        currentHealthy: 5
        expectedMachines: 5

      Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command:

      $ oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-

    About updating a single-node OKD cluster

    You can update, or upgrade, a single-node OKD cluster by using either the console or CLI.

    However, note the following limitations:

    • Restoring a single-node OKD cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your upgrade fails. If your control plane is healthy, you might be able to restore your cluster to a previous state by using the backup.

    • Updating a single-node OKD cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios:

      • If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads.

      • If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OKD because there is no other node in the cluster to reschedule the workloads to.

      • If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly.

    There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not roll back automatically.

    Updating a cluster by using the CLI

    If updates are available, you can update your cluster by using the OpenShift CLI (oc).

    You can find information about available OKD advisories and updates in the errata section of the Customer Portal.

    Prerequisites

    • Install the OpenShift CLI (oc) that matches the version of your target update. A quick way to confirm the client version is shown after this list.

    • Log in to the cluster as a user with cluster-admin privileges.

    • Pause all MachineHealthCheck resources.
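
    For example, the following commands confirm the oc client version and the account you are logged in with. This is a minimal sanity check before you start the update:

      $ oc version --client
      $ oc whoami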

    Procedure

    1. View the available updates and note the version number of the update that you want to apply:

      $ oc adm upgrade

      Example output

      Cluster version is 4.9.23
      Upstream is unset, so the cluster will use an appropriate default.
      Channel: stable-4.9 (available channels: candidate-4.10, candidate-4.9, fast-4.10, fast-4.9, stable-4.10, stable-4.9, eus-4.10)
      Recommended updates:
        VERSION   IMAGE
        4.9.24    quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032
        4.9.25    quay.io/openshift-release-dev/ocp-release@sha256:2eafde815e543b92f70839972f585cc52aa7c37aa72d5f3c8bc886b0fd45707a
        4.9.26    quay.io/openshift-release-dev/ocp-release@sha256:3ccd09dd08c303f27a543351f787d09b83979cd31cf0b4c6ff56cd68814ef6c8
        4.9.27    quay.io/openshift-release-dev/ocp-release@sha256:1c7db78eec0cf05df2cead44f69c0e4b2c3234d5635c88a41e1b922c3bedae16
        4.9.29    quay.io/openshift-release-dev/ocp-release@sha256:b04ca01d116f0134a102a57f86c67e5b1a3b5da1c4a580af91d521b8fa0aa6ec
        4.9.31    quay.io/openshift-release-dev/ocp-release@sha256:2a28b8ebb53d67dd80594421c39e36d9896b1e65cb54af81fbb86ea9ac3bf2d7
        4.9.32    quay.io/openshift-release-dev/ocp-release@sha256:ecdb6d0df547b857eaf0edb5574ddd64ca6d9aff1fa61fd1ac6fb641203bedfa
    2. Based on your organization requirements, set the appropriate upgrade channel. For example, you can set your channel to stable-4.13, fast-4.13, or eus-4.13. For more information about channels, refer to Understanding update channels and releases listed in the Additional resources section.

      For example, to set the channel to stable-4.13:

      $ oc adm upgrade channel stable-4.13

      For production clusters, you must subscribe to a stable-*, eus-*, or fast-* channel.

      When you are ready to move to the next minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time.
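
      To confirm that the channel change was applied, one option is to read it back from the ClusterVersion resource (the resource name version is the default):

        $ oc get clusterversion version -o jsonpath='{.spec.channel}{"\n"}'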

    3. Apply an update:

      • To update to the latest version:

        $ oc adm upgrade --to-latest=true (1)
      • To update to a specific version:

        $ oc adm upgrade --to=<version> (1)

        (1) <version> is the update version that you obtained from the output of the previous command.
    4. Review the status of the Cluster Version Operator:

      $ oc adm upgrade
    5. After the update completes, you can confirm that the cluster version has updated to the new version:

      $ oc get clusterversion

      Example output

      If the oc get clusterversion command displays the following error while the PROGRESSING status is True, you can ignore the error.

      NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
      version   4.10.26   True        True          24m     Unable to apply 4.11.0-rc.7: an unknown error has occurred: MultipleErrors
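
      While PROGRESSING is True, you can also watch the individual cluster Operators roll out to the new version. A simple way to follow progress, assuming the standard oc client, is:

        $ oc get clusteroperators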
    6. If you are upgrading your cluster to the next minor version, such as version X.y to X.(y+1), it is recommended that you confirm your nodes are updated to the new version before deploying workloads that rely on a new feature:

      $ oc get nodes

      Example output

      NAME                           STATUS   ROLES    AGE   VERSION
      ip-10-0-168-251.ec2.internal   Ready    master   82m   v1.26.0
      ip-10-0-170-223.ec2.internal   Ready    master   82m   v1.26.0
      ip-10-0-179-95.ec2.internal    Ready    worker   70m   v1.26.0
      ip-10-0-182-134.ec2.internal   Ready    worker   70m   v1.26.0
      ip-10-0-211-16.ec2.internal    Ready    master   82m   v1.26.0
      ip-10-0-250-100.ec2.internal   Ready    worker   69m   v1.26.0

    Updating along a conditional update path

    You can update along a recommended conditional update path using the web console or the OpenShift CLI (oc). When a conditional update is not recommended for your cluster, you can update along a conditional update path using the OpenShift CLI (oc) 4.10 or later.

    Procedure

    1. To view the description of the update when it is not recommended because a risk might apply, run the following command:

      $ oc adm upgrade --include-not-recommended
    2. If the cluster administrator evaluates the potential known risks and decides that they are acceptable for the current cluster, the administrator can waive the safety guards and proceed with the update by running the following command:

      $ oc adm upgrade --allow-not-recommended --to <version> (1)

      (1) <version> is the supported but not recommended update version that you obtained from the output of the previous command.



    Changing the update server by using the CLI

    Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. The default value for upstream is https://api.openshift.com/api/upgrades_info/v1/graph.

    Procedure

    • Change the upstream parameter value in the cluster version:
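
      For example, assuming your local update service is reachable at <update-server-url>, a merge patch on the ClusterVersion resource sets the upstream value:

        $ oc patch clusterversion/version --type=merge --patch '{"spec":{"upstream":"<update-server-url>"}}'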

      Example output

      clusterversion.config.openshift.io/version patched