Monitor Node Health

    To learn how to install and use Node Problem Detector, see the Node Problem Detector project documentation.

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.

    Limitations

    • Node Problem Detector only supports file-based kernel logs. Log tools such as journald are not supported.

    • Node Problem Detector uses the kernel log format for reporting kernel issues. To learn how to extend the kernel log format, see Add support for another log format.

    Some cloud providers enable Node Problem Detector as an Addon. You can also enable Node Problem Detector with kubectl or by creating an Addon pod.

    Using kubectl to enable Node Problem Detector

    kubectl provides the most flexible management of Node Problem Detector. You can overwrite the default configuration to fit it into your environment or to detect customized node problems. For example:

    1. Create a Node Problem Detector configuration similar to node-problem-detector.yaml: a DaemonSet manifest like the one shown under Overwrite the configuration below, but without the ConfigMap volume and mount.

    2. Start Node Problem Detector with kubectl:

      kubectl apply -f https://k8s.io/examples/debug/node-problem-detector.yaml
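
    Once the DaemonSet is running, Node Problem Detector reports permanent problems as NodeConditions and temporary problems as Events. A quick check with standard kubectl (replace the node name with one of yours):

      # List the Node Problem Detector pods created by the DaemonSet
      kubectl get pods --all-namespaces | grep node-problem-detector

      # Inspect the conditions and recent events reported for a node
      kubectl describe node <your-node-name>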

    Using an Addon pod to enable Node Problem Detector

    If you are using a custom cluster bootstrap solution and don’t need to overwrite the default configuration, you can leverage the Addon pod to further automate the deployment.

    Create node-problem-detector.yaml, and save the configuration in the Addon pod’s directory /etc/kubernetes/addons/node-problem-detector on a control plane node.
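
    For example (a sketch; how the manifest reaches the control plane node depends on your bootstrap tooling):

      # On the control plane node, place the manifest in the Addon pod's directory
      sudo mkdir -p /etc/kubernetes/addons/node-problem-detector
      sudo cp node-problem-detector.yaml /etc/kubernetes/addons/node-problem-detector/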

    Overwrite the configuration

    The default configuration is embedded when building the Docker image of Node Problem Detector.

    However, you can use a ConfigMap to overwrite the configuration:

    1. Change the configuration files in config/

    2. Create the ConfigMap node-problem-detector-config (a command sketch follows this list).

    3. Change the node-problem-detector.yaml to use the ConfigMap:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: node-problem-detector-v0.1
        labels:
          k8s-app: node-problem-detector
          version: v0.1
          kubernetes.io/cluster-service: "true"
      spec:
        selector:
          matchLabels:
            version: v0.1
            kubernetes.io/cluster-service: "true"
        template:
          metadata:
            labels:
              k8s-app: node-problem-detector
              version: v0.1
              kubernetes.io/cluster-service: "true"
          spec:
            hostNetwork: true
            containers:
            - name: node-problem-detector
              image: k8s.gcr.io/node-problem-detector:v0.1
              securityContext:
                privileged: true
              resources:
                limits:
                  cpu: "200m"
                requests:
                  cpu: "20m"
                  memory: "20Mi"
              volumeMounts:
              - name: log
                mountPath: /log
              - name: config # Overwrite the config/ directory with ConfigMap volume
                mountPath: /config
                readOnly: true
            volumes:
            - name: log
              hostPath:
                path: /var/log/
            - name: config # Define ConfigMap volume
              configMap:
                name: node-problem-detector-config
    4. Recreate the Node Problem Detector with the new configuration file (a command sketch follows this list).
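
    A sketch of the commands for steps 2 and 4, assuming the modified manifest is saved locally as node-problem-detector.yaml:

      # Step 2: build the ConfigMap from the local config/ directory
      kubectl create configmap node-problem-detector-config --from-file=config/

      # Step 4: recreate the DaemonSet so it mounts the ConfigMap
      kubectl delete -f node-problem-detector.yaml
      kubectl apply -f node-problem-detector.yaml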

    Note: This approach only applies to a Node Problem Detector started with kubectl.

    Overwriting a configuration is not supported if a Node Problem Detector runs as a cluster Addon. The Addon manager does not support ConfigMap.

    Kernel Monitor

    Kernel Monitor is a system log monitor daemon supported in the Node Problem Detector. Kernel Monitor watches the kernel log and detects known kernel issues following predefined rules.

    The Kernel Monitor matches kernel issues according to a predefined rule list in config/kernel-monitor.json. The rule list is extensible, and you can expand it by overwriting the configuration.
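
    For instance, the upstream default configuration includes temporary-problem rules along these lines (paraphrased; see config/kernel-monitor.json in the Node Problem Detector repository for the exact rules and patterns):

      {
        "type": "temporary",
        "reason": "KernelOops",
        "pattern": "BUG: unable to handle kernel NULL pointer dereference at .*"
      }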

    Add new NodeConditions

    To support a new NodeCondition, create a condition definition within the conditions field in config/kernel-monitor.json, for example:

      {
        "type": "NodeConditionType",
        "reason": "CamelCaseDefaultNodeConditionReason",
        "message": "arbitrary default node condition message"
      }

    Detect new problems

    To detect new problems, you can extend the rules field in config/kernel-monitor.json with a new rule definition, for example (a sketch; depending on your Node Problem Detector version the regexp field is named pattern or message, and a permanent problem also names the NodeCondition it sets):
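
      {
        "type": "temporary/permanent",
        "condition": "NodeConditionOfPermanentIssue",
        "reason": "CamelCaseShortReason",
        "pattern": "regexp matching the issue in the kernel log"
      }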

    Configure log path location

    Check the kernel log path location for your operating system (OS) distribution. The Linux kernel log device is usually presented as /dev/kmsg, but the path varies by OS distribution. The log field in config/kernel-monitor.json represents the log path inside the container. You can configure that field to match the device path as seen by the Node Problem Detector.
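
    For example, the corresponding entry inside config/kernel-monitor.json, assuming the kmsg device (newer Node Problem Detector releases name this field logPath):

      "log": "/dev/kmsg"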

    Add support for another log format

    Kernel Monitor uses the Translator plugin to translate the internal data structure of the kernel log. You can implement a new translator for a new log format.

    Recommendations and restrictions

    It is recommended to run the Node Problem Detector in your cluster to monitor node health. When running the Node Problem Detector, you can expect extra resource overhead on each node. Usually this is fine, because:

    • The kernel log grows relatively slowly.
    • A resource limit is set for the Node Problem Detector.
    • Even under high load, the resource usage is acceptable. For more information, see the Node Problem Detector benchmark result.