Updating OpenShift Logging

    To upgrade from cluster logging in OKD version 4.6 and earlier to OpenShift Logging 5.x, you update the OKD cluster to version 4.7 or 4.8. Then, you update the following operators:

    • From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x

    • From Cluster Logging Operator 4.x to Red Hat OpenShift Logging Operator 5.x

    To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.

    OKD 4.7 made the following name changes:

    • The cluster logging feature became the Red Hat OpenShift Logging 5.x product.

    • The Cluster Logging Operator became the Red Hat OpenShift Logging Operator.

    • The Elasticsearch Operator became OpenShift Elasticsearch Operator.


    If you update the operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.
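    The workaround above can be scripted. This is a sketch, not part of the product documentation: the pod-name prefix cluster-logging-operator is an assumption based on a default install, so verify it on your cluster (for example with oc get pods -n openshift-logging) before relying on it.

```shell
# Print the name of the Red Hat OpenShift Logging Operator pod from
# `oc get pods` output supplied on stdin. The pod-name prefix
# "cluster-logging-operator" is an assumption; verify it on your cluster.
find_logging_operator_pod() {
  awk '$1 ~ /^cluster-logging-operator/ { print $1 }'
}

# Live usage (requires a cluster):
#   pod=$(oc get pods -n openshift-logging | find_logging_operator_pod)
#   oc delete pod -n openshift-logging "$pod"
#   # after the operator pod redeploys, confirm the Kibana CR was created:
#   oc get kibana kibana -n openshift-logging
```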

    Prerequisites

    • The OKD version is 4.7 or later.

    • The OpenShift Logging status is healthy:

      • All pods are ready.

      • The Elasticsearch cluster is healthy.

    • Your Elasticsearch and Kibana data is backed up.

    Procedure

    1. Update the OpenShift Elasticsearch Operator:

      1. From the web console, click Operators → Installed Operators.

      2. Select the openshift-operators-redhat project.

      3. Click the OpenShift Elasticsearch Operator.

      4. Click Subscription → Channel.

      5. In the Change Subscription Update Channel window, select 5.0 or stable-5.x and click Save.

      6. Wait for a few seconds, then click Operators → Installed Operators.

        Verify that the OpenShift Elasticsearch Operator version is 5.x.x.

        Wait for the Status field to report Succeeded.
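        If you prefer the CLI, the channel switch in steps 4 and 5 can also be done by patching the operator's Subscription. The following is a sketch: the subscription name elasticsearch-operator is an assumption, so list the real names first with oc get subscriptions -n openshift-operators-redhat.

```shell
# Build the JSON merge patch that switches a Subscription to a new
# update channel.
channel_patch() {
  printf '{"spec":{"channel":"%s"}}' "$1"
}

# Live usage (requires a cluster; the subscription name is an assumption):
#   oc patch subscription elasticsearch-operator \
#     -n openshift-operators-redhat \
#     --type merge -p "$(channel_patch stable-5.x)"
```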

    2. Update the Cluster Logging Operator:

      1. Select the openshift-logging project.

      2. Click the Cluster Logging Operator.

      3. Click Subscription → Channel.

      4. In the Change Subscription Update Channel window, select 5.0 or stable-5.x and click Save.

      5. Wait for a few seconds, then click Operators → Installed Operators.

        Verify that the Red Hat OpenShift Logging Operator version is 5.0.x or 5.x.x.

        Wait for the Status field to report Succeeded.

    3. Check the logging components:

      1. Ensure that all Elasticsearch pods are in the Ready status:

        $ oc get pod -n openshift-logging --selector component=elasticsearch

        Example output

        NAME                                            READY   STATUS    RESTARTS   AGE
        elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
        elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
        elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
      2. Ensure that the Elasticsearch cluster is healthy:

        $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health

        Example output

        {
          "cluster_name" : "elasticsearch",
          "status" : "green",
        }
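        For scripting this health check, a small filter can pull the status field out of the output above. This is a sketch that assumes the pretty-printed JSON shape shown in this document.

```shell
# Extract the "status" value ("green", "yellow", or "red") from the
# health output supplied on stdin.
es_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# Live usage (requires a cluster): wait until the cluster reports green.
#   until oc exec -n openshift-logging -c elasticsearch \
#       elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health \
#       | es_status | grep -qx green; do sleep 10; done
```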
      3. Ensure that the Elasticsearch cron jobs are created:

        $ oc project openshift-logging

        $ oc get cronjob

        Example output

        NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
        elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
        elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
        elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
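        To script this check, the helper below compares oc get cronjob output against the three index-management jobs shown above. It is a sketch that assumes those default job names.

```shell
# Read `oc get cronjob` output on stdin and print any of the expected
# index-management cron jobs that are missing.
missing_cronjobs() {
  awk 'NR > 1 { seen[$1] = 1 }
       END {
         n = split("elasticsearch-im-app elasticsearch-im-audit elasticsearch-im-infra", want, " ")
         for (i = 1; i <= n; i++)
           if (!(want[i] in seen)) print want[i]
       }'
}

# Live usage (requires a cluster): no output means all three jobs exist.
#   oc get cronjob -n openshift-logging | missing_cronjobs
```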
      4. Verify that the log store is updated to 5.0 or 5.x and the indices are green:

        $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices

        Verify that the output includes the app-00000x, infra-00000x, audit-00000x, and .security indices, and that each index has a green status.
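        This check can be scripted as well. The filter below prints any index row that is not green; it assumes the indices script emits a header line followed by rows whose first column is the index health, so adjust it if your output is formatted differently.

```shell
# Print index rows that are not green from `indices` output on stdin,
# assuming the first column of each data row is the index health.
non_green_indices() {
  awk 'NR > 1 && $1 != "green" { print }'
}

# Live usage (requires a cluster): no output means every index is green.
#   oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices \
#     | non_green_indices
```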

      5. Verify that the log collector is updated to 5.0 or 5.x:

        $ oc get ds fluentd -o json | grep fluentd-init

        Verify that the output includes a fluentd-init container:

        "containerName": "fluentd-init"
      6. Verify that the log visualizer is updated to 5.0 or 5.x using the Kibana CRD:

        $ oc get kibana kibana -o json

        Verify that the output includes a Kibana pod with the ready status:

        Sample output with a ready Kibana pod

        [
          {
            "clusterCondition": {
              "kibana-5fdd766ffd-nb2jj": [
                {
                  "lastTransitionTime": "2020-06-30T14:11:07Z",
                  "reason": "ContainerCreating",
                  "status": "True",
                  "type": ""
                },
                {
                  "lastTransitionTime": "2020-06-30T14:11:07Z",
                  "reason": "ContainerCreating",
                  "status": "True",
                  "type": ""
                }
              ]
            },
            "deployment": "kibana",
            "pods": {
              "failed": [],
              "notReady": [],
              "ready": []
            },
            "replicaSets": [
              "kibana-5fdd766ffd"
            ],
            "replicas": 1
          }
        ]
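        Rather than reading the JSON by eye, the sketch below lists any pods that appear under the failed or notReady arrays of the Kibana CR status. It is a text heuristic keyed to the pretty-printed shape shown above, not a real JSON parser.

```shell
# Print pod names found inside nonempty "failed" or "notReady" arrays of
# the Kibana CR JSON supplied on stdin.
kibana_unhealthy_pods() {
  awk '
    /"(failed|notReady)": \[/ {        # start of a failed/notReady array
      if ($0 ~ /\[\]/) next            # empty inline array: nothing wrong
      bad = 1; next
    }
    bad && /\]/ { bad = 0; next }      # array closed
    bad { gsub(/[ ",]/, ""); print }   # a pod name inside the array
  '
}

# Live usage (requires a cluster): no output means no failed or
# not-ready pods.
#   oc get kibana kibana -o json | kibana_unhealthy_pods
```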

    Updating OpenShift Logging to the current version

    To update OpenShift Logging from 5.x to the current version, you change the subscriptions for the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator.

    You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. You must also update both Operators to the same version.

    If you update the operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again.

    Prerequisites

    • The OKD version is 4.7 or later.

    • The OpenShift Logging status is healthy:

      • All pods are ready.

      • The Elasticsearch cluster is healthy.

    • Your Elasticsearch and Kibana data is backed up.

    Procedure

    1. Update the OpenShift Elasticsearch Operator:

      1. From the web console, click Operators → Installed Operators.

      2. Select the openshift-operators-redhat project.

      3. Click the OpenShift Elasticsearch Operator.

      4. Click Subscription → Channel.

      5. In the Change Subscription Update Channel window, select stable-5.x and click Save.

      6. Wait for a few seconds, then click Operators → Installed Operators.

        Verify that the OpenShift Elasticsearch Operator version is 5.x.x.

        Wait for the Status field to report Succeeded.

    2. Update the Red Hat OpenShift Logging Operator:

      1. From the web console, click Operators → Installed Operators.

      2. Select the openshift-logging project.

      3. Click the Red Hat OpenShift Logging Operator.

      4. Click Subscription → Channel.

      5. In the Change Subscription Update Channel window, select stable-5.x and click Save.

      6. Wait for a few seconds, then click Operators → Installed Operators.

        Verify that the Red Hat OpenShift Logging Operator version is 5.x.x.

        Wait for the Status field to report Succeeded.

    3. Check the logging components:

      1. Ensure that all Elasticsearch pods are in the Ready status:

        $ oc get pod -n openshift-logging --selector component=elasticsearch

        Example output

        NAME                                            READY   STATUS    RESTARTS   AGE
        elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk   2/2     Running   0          31m
        elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk   2/2     Running   0          30m
        elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc     2/2     Running   0          29m
      2. Ensure that the Elasticsearch cluster is healthy:

        $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health
      3. Ensure that the Elasticsearch cron jobs are created:

        $ oc project openshift-logging

        $ oc get cronjob

        Example output

        NAME                     SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
        elasticsearch-im-app     */15 * * * *   False     0        <none>          56s
        elasticsearch-im-audit   */15 * * * *   False     0        <none>          56s
        elasticsearch-im-infra   */15 * * * *   False     0        <none>          56s
      4. Verify that the log collector is updated to 5.x:

        $ oc get ds fluentd -o json | grep fluentd-init

        Verify that the output includes a fluentd-init container:

        "containerName": "fluentd-init"
      5. Verify that the log visualizer is updated to 5.x using the Kibana CRD:

        $ oc get kibana kibana -o json

        Verify that the output includes a Kibana pod with the ready status:

        Sample output with a ready Kibana pod

        [
          {
            "clusterCondition": {
              "kibana-5fdd766ffd-nb2jj": [
                {
                  "lastTransitionTime": "2020-06-30T14:11:07Z",
                  "reason": "ContainerCreating",
                  "status": "True",
                  "type": ""
                },
                {
                  "lastTransitionTime": "2020-06-30T14:11:07Z",
                  "reason": "ContainerCreating",
                  "status": "True",
                  "type": ""
                }
              ]
            },
            "deployment": "kibana",
            "pods": {
              "failed": [],
              "notReady": [],
              "ready": []
            },
            "replicaSets": [
              "kibana-5fdd766ffd"
            ],
            "replicas": 1
          }
        ]