Viewing OpenShift Logging status

    You can view the status of your Red Hat OpenShift Logging Operator.

    Prerequisites

    • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

    Procedure

    1. Change to the openshift-logging project.

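      For example, assuming the default openshift-logging project that the rest of this procedure uses:

      $ oc project openshift-logging
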
    2. To view the OpenShift Logging status:

      1. Get the OpenShift Logging status:

        $ oc get clusterlogging instance -o yaml

        Example output

        apiVersion: logging.openshift.io/v1
        kind: ClusterLogging
        ....
        status:  (1)
          collection:
            logs:
              fluentdStatus:
                daemonSet: fluentd  (2)
                nodes:
                  fluentd-2rhqp: ip-10-0-169-13.ec2.internal
                  fluentd-6fgjh: ip-10-0-165-244.ec2.internal
                  fluentd-6l2ff: ip-10-0-128-218.ec2.internal
                  fluentd-54nx5: ip-10-0-139-30.ec2.internal
                  fluentd-flpnn: ip-10-0-147-228.ec2.internal
                  fluentd-n2frh: ip-10-0-157-45.ec2.internal
                pods:
                  failed: []
                  notReady: []
                  ready:
                  - fluentd-2rhqp
                  - fluentd-54nx5
                  - fluentd-6fgjh
                  - fluentd-6l2ff
                  - fluentd-flpnn
                  - fluentd-n2frh
          logstore:  (3)
            elasticsearchStatus:
            - ShardAllocationEnabled: all
              cluster:
                activePrimaryShards: 5
                activeShards: 5
                initializingShards: 0
                numDataNodes: 1
                numNodes: 1
                pendingTasks: 0
                relocatingShards: 0
                status: green
              clusterName: elasticsearch
              nodeConditions:
                elasticsearch-cdm-mkkdys93-1:
              nodeCount: 1
              pods:
                client:
                  failed:
                  notReady:
                  ready:
                  - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
                data:
                  failed:
                  notReady:
                  ready:
                  - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
                master:
                  failed:
                  notReady:
                  ready:
                  - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c
          visualization:  (4)
            kibanaStatus:
            - deployment: kibana
              pods:
                failed: []
                notReady: []
                ready:
                - kibana-7fb4fd4cc9-f2nls
              replicaSets:
              - kibana-7fb4fd4cc9
              replicas: 1

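    To check a single status field without reading the full YAML, a JSONPath query can be run against the same resource. The following is a minimal sketch that assumes the field names exactly as they appear in the example output above, here pulling the Elasticsearch cluster health:

    $ oc get clusterlogging instance -o jsonpath='{.status.logstore.elasticsearchStatus[0].cluster.status}'
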
    The following are examples of some condition messages from the Status.Nodes section of the OpenShift Logging instance.

    A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node:

    Example output

    nodes:
    - conditions:
      - lastTransitionTime: 2019-03-15T15:57:22Z
        message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not
          be allocated on this node.
        reason: Disk Watermark Low
        status: "True"
        type: NodeStorage
      deploymentName: example-elasticsearch-clientdatamaster-0-1
      upgradeStatus: {}

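    These node conditions can also be viewed in the indented describe format that several of the later examples use. The following is a minimal sketch that assumes the default instance name in the openshift-logging project:

    $ oc -n openshift-logging describe clusterlogging instance
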
    A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes:

    Example output

    nodes:
    - conditions:
      - lastTransitionTime: 2019-03-15T16:04:45Z
        message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated
        reason: Disk Watermark High
        type: NodeStorage
      deploymentName: cluster-logging-operator
      upgradeStatus: {}

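    To see the per-node disk usage behind these watermark conditions, the Elasticsearch _cat API can be queried from one of the Elasticsearch pods. The following is a minimal sketch that assumes the es_util helper shipped in the Elasticsearch image and uses a placeholder pod name:

    $ oc -n openshift-logging exec -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/allocation?v
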
    A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster:

    Example output

    A status message similar to the following indicates that the requested PVC could not bind to a PV:

    Example output

    Node Conditions:
      elasticsearch-cdm-mkkdys93-1:
        Last Transition Time:  2019-06-26T03:37:32Z
        Message:               pod has unbound immediate PersistentVolumeClaims (repeated 5 times)
        Reason:                Unschedulable
        Status:                True
        Type:                  Unschedulable

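    When this condition appears, the persistent volume claims in the project can be inspected directly, for example:

    $ oc -n openshift-logging get pvc
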
    A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes:

    Example output

    Status:
      Collection:
        Logs:
          Fluentd Status:
            Daemon Set:  fluentd
            Nodes:
            Pods:
              Failed:
              Not Ready:
              Ready:

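    One way to confirm a node selector mismatch is to compare the daemon set's node selector with the labels on the cluster nodes. The following is a minimal sketch that assumes the fluentd daemon set shown above:

    $ oc -n openshift-logging get daemonset fluentd -o jsonpath='{.spec.template.spec.nodeSelector}'
    $ oc get nodes --show-labels
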
    Viewing the status of logging subsystem components

    You can view the status for a number of logging subsystem components.

    Prerequisites

    • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

    Procedure

    1. Change to the openshift-logging project.

      $ oc project openshift-logging
    2. View the status of the logging subsystem for Red Hat OpenShift:

      $ oc describe deployment cluster-logging-operator

      Example output

    3. View the status of the logging subsystem replica set:

      1. Get the name of a replica set:

        $ oc get replicaset

        Example output

        NAME                                      DESIRED   CURRENT   READY   AGE
        cluster-logging-operator-574b8987df       1         1         1       159m
        elasticsearch-cdm-uhr537yu-1-6869694fb    1         1         1       157m
        elasticsearch-cdm-uhr537yu-2-857b6d676f   1         1         1       156m
        elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd   1         1         1       155m
        kibana-5bd5544f87                         1         1         1       157m
      2. Get the status of the replica set:

        $ oc describe replicaset cluster-logging-operator-574b8987df

        Example output

        Name:           cluster-logging-operator-574b8987df
        ....
        Replicas:       1 current / 1 desired
        Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
        ....
        Events:
          Type    Reason    Age    From    Message
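
        If the replica set events do not explain a failure, the events for the whole project can be listed as well, for example:

        $ oc -n openshift-logging get events --sort-by='.lastTimestamp'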