Managing alerts

    • Alerting rules. Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed.

    • Alerts. An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances is present within an OKD cluster.

    • Silences. A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue.

    The Alerting UI is accessible through the Administrator perspective and the Developer perspective in the OKD web console.

    • In the Administrator perspective, select Monitoring → Alerting. The three main pages in the Alerting UI in this perspective are the Alerts, Silences, and Alerting Rules pages.

    • In the Developer perspective, select Monitoring → <project_name> → Alerts. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts page. The results shown in the Alerts page are specific to the selected project.

    In the Developer perspective, you can select from core OKD and user-defined projects that you have access to in the Project: list. However, alerts, silences, and alerting rules relating to core OKD projects are not displayed if you do not have cluster-admin privileges.

    Searching and filtering alerts, silences, and alerting rules

    You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options.

    Understanding alert filters

    In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OKD and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown.

    You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option:

    • Alert State filters:

      • Firing. The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true.

      • Pending. The alert is active but is waiting for the duration that is specified in the alerting rule before it fires.

      • Silenced. The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions.

    • Severity filters:

      • Critical. The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team.

      • Warning. The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review.

      • Info. The alert is provided for informational purposes only.

      • None. The alert has no defined severity.

      • You can also create custom severity definitions for alerts relating to user-defined projects.

    • Source filters:

      • Platform. Platform-level alerts relate only to default OKD projects. These projects provide core OKD functionality.

      • User. User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads.

    Understanding silence filters

    In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OKD and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends.

    You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option:

    • Silence State filters:

      • Active. The silence is active and the alert will be muted until the silence is expired.

      • Pending. The silence has been scheduled and it is not yet active.

      • Expired. The silence has expired and notifications will be sent if the conditions for an alert are true.

    Understanding alerting rule filters

    In the Administrator perspective, the Alerting Rules page in the Alerting UI provides details about alerting rules relating to default OKD and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule.

    You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option:

    • Alert State filters:

      • Firing. The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true.

      • Pending. The alert is active but is waiting for the duration that is specified in the alerting rule before it fires.

      • Silenced. The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions.

      • Not Firing. The alert is not firing.

    • Severity filters:

      • Critical. The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team.

      • Warning. The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review.

      • Info. The alerting rule provides informational alerts only.

      • None. The alerting rule has no defined severity.

      • You can also create custom severity definitions for alerting rules relating to user-defined projects.

    • Source filters:

      • Platform. Platform-level alerting rules relate only to default OKD projects. These projects provide core OKD functionality.

      • User. User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads.

    Searching and filtering alerts, silences, and alerting rules in the Developer perspective

    In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert.

    In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective.

    Getting information about alerts, silences, and alerting rules

    The Alerting UI provides detailed information about alerts and their governing alerting rules and silences.

    Prerequisites

    • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.

    Procedure

    To obtain information about alerts in the Administrator perspective:

    1. Open the OKD web console and navigate to the Monitoring → Alerting → Alerts page.

    2. Optional: Search for alerts by name using the Name field in the search list.

    3. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list.

    4. Optional: Sort the alerts by clicking one or more of the Name, Severity, State, and Source column headers.

    5. Select the name of an alert to navigate to its Alert Details page. The page includes a graph that illustrates alert time series data. It also provides information about the alert, including:

      • A description of the alert

      • Messages associated with the alert

      • Labels attached to the alert

      • A link to its governing alerting rule

      • Silences for the alert, if any exist

    To obtain information about silences in the Administrator perspective:

    1. Navigate to the Monitoring → Alerting → Silences page.

    2. Optional: Filter the silences by name using the Search by name field.

    3. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied.

    4. Optional: Sort the silences by clicking one or more of the Name, Firing Alerts, and State column headers.

    5. Select the name of a silence to navigate to its Silence Details page. The page includes the following details:

      • Alert specification

      • Start time

      • End time

      • Silence state

      • Number and list of firing alerts

    To obtain information about alerting rules in the Administrator perspective:

    1. Navigate to the Monitoring → Alerting → Alerting Rules page.

    2. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list.

    3. Optional: Sort the alerting rules by clicking one or more of the Name, Severity, Alert State, and Source column headers.

    4. Select the name of an alerting rule to navigate to its Alerting Rule Details page. The page provides the following details about the alerting rule:

      • The expression that defines the condition for firing the alert

      • The time for which the condition should be true for an alert to fire

      • A graph for each alert governed by the alerting rule, showing the value with which the alert is firing

      • A table of all alerts governed by the alerting rule

    To obtain information about alerts, silences, and alerting rules in the Developer perspective:

    1. Navigate to the Monitoring → <project_name> → Alerts page.

    2. View details for an alert, silence, or an alerting rule:

      • Alert Details can be viewed by selecting > to the left of an alert name and then selecting the alert in the list.

      • Silence Details can be viewed by selecting a silence in the Silenced By section of the Alert Details page. The Silence Details page includes the following information:

        • Start time

        • End time

        • Silence state

        • Number and list of firing alerts

      • Alerting Rule Details can be viewed by selecting View Alerting Rule in the kebab menu on the right of an alert in the Alerts page.

    Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective.

    Managing alerting rules

    OKD monitoring ships with a set of default alerting rules. As a cluster administrator, you can view the default alerting rules.

    In OKD 4.8, you can create, view, edit, and remove alerting rules in user-defined projects.

    Alerting rule considerations

    • The default alerting rules are used specifically for the OKD cluster.

    • Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both.

    • Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing.

    You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules:

    • Minimize the number of alerting rules that you create for your project. Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you.

    • Create alerting rules for symptoms instead of causes. Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed.

    • Plan before you write your alerting rules. Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom.

    • Provide clear alert messaging. State the symptom and recommended actions in the alert message.

    • Include severity levels in your alerting rules. The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team.

    • Optimize alert routing. Deploy an alerting rule directly on the Prometheus instance in the openshift-user-workload-monitoring project if the rule does not query default OKD metrics. This reduces latency for alerting rules and minimizes the load on monitoring components.
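    Taken together, these recommendations might translate into a rule like the following sketch. The HighErrorRate name, the http_requests_total metric, the job label value, and the threshold are illustrative assumptions, not defaults shipped with OKD:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-recommendations
  namespace: ns1
spec:
  groups:
  - name: example
    rules:
    - alert: HighErrorRate                # named for the symptom, not the cause
      expr: rate(http_requests_total{job="example-app",code=~"5.."}[5m]) > 1
      for: 10m                            # the condition must hold before the alert fires
      labels:
        severity: warning                 # severity level used for routing
      annotations:
        message: Error rate has exceeded 1 req/s for 10 minutes. Check the example-app logs and recent deployments.
```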

    Creating alerting rules for user-defined projects

    You can create alerting rules for user-defined projects. Those alerting rules will fire alerts based on the values of chosen metrics.

    Prerequisites

    • You have enabled monitoring for user-defined projects.

    • You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule.

    • You have installed the OpenShift CLI (oc).

    Procedure

    1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml.

    2. Add an alerting rule configuration to the YAML file. For example:
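      A minimal rule matching the description that follows might look like this; the ns1 namespace is an assumption for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alert
  namespace: ns1
spec:
  groups:
  - name: example
    rules:
    - alert: VersionAlert
      # Fires when the version metric exposed by the sample service becomes 0
      expr: version{job="prometheus-example-app"} == 0
```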

      When you create an alerting rule, a project label is enforced on it if a rule with the same name exists in another project.

      This configuration creates an alerting rule named example-alert. The alerting rule fires an alert when the version metric exposed by the sample service becomes 0.

      A user-defined alerting rule can include metrics for its own project and cluster metrics. You cannot include metrics for another user-defined project.

      For example, an alerting rule for the user-defined project ns1 can have metrics from ns1 and cluster metrics, such as the CPU and memory metrics. However, the rule cannot include metrics from ns2.

      Additionally, you cannot create alerting rules for the openshift-* core OKD projects. OKD monitoring by default provides a set of alerting rules for these projects.

    3. Apply the configuration file to the cluster:

      $ oc apply -f example-app-alerting-rule.yaml

      It takes some time to create the alerting rule.

    Reducing latency for alerting rules that do not query platform metrics

    If an alerting rule for a user-defined project does not query default cluster metrics, you can deploy the rule directly on the Prometheus instance in the openshift-user-workload-monitoring project. This reduces latency for alerting rules by bypassing Thanos Ruler when it is not required. This also helps to minimize the overall load on monitoring components.

    Prerequisites

    • You have enabled monitoring for user-defined projects.

    • You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule.

    • You have installed the OpenShift CLI (oc).

    Procedure

    1. Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml.

    2. Add an alerting rule configuration to the YAML file that includes a label with the key openshift.io/prometheus-rule-evaluation-scope and value leaf-prometheus. For example:

      apiVersion: monitoring.coreos.com/v1
      kind: PrometheusRule
      metadata:
        name: example-alert
        namespace: ns1
        labels:
          openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus
      spec:
        groups:
        - name: example
          rules:
          - alert: VersionAlert
            expr: version{job="prometheus-example-app"} == 0

    If that label is present, the alerting rule is deployed on the Prometheus instance in the openshift-user-workload-monitoring project. If the label is not present, the alerting rule is deployed to Thanos Ruler.

    3. Apply the configuration file to the cluster:

      $ oc apply -f example-app-alerting-rule.yaml

      It takes some time to create the alerting rule.

    • See the OKD 4.8 monitoring architecture documentation for more details.

    Accessing alerting rules for user-defined projects

    To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view role for the project.

    Prerequisites

    • You have enabled monitoring for user-defined projects.

    • You are logged in as a user that has the monitoring-rules-view role for your project.

    • You have installed the OpenShift CLI (oc).

    Procedure

    1. To list alerting rules in <project>, run the following:

      $ oc -n <project> get prometheusrule

    2. To list the configuration of an alerting rule, run the following:

      $ oc -n <project> get prometheusrule <rule> -o yaml

    Listing alerting rules for all projects in a single view

    As a cluster administrator, you can list alerting rules for core OKD and user-defined projects together in a single view.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    • You have installed the OpenShift CLI (oc).

    Procedure

    1. In the Administrator perspective, navigate to Monitoring → Alerting → Alerting Rules.

    2. Select the Platform and User sources in the Filter drop-down menu.

      The Platform source is selected by default.

    Removing alerting rules for user-defined projects

    You can remove alerting rules for user-defined projects.

    Prerequisites

    • You have enabled monitoring for user-defined projects.

    • You are logged in as a user that has the monitoring-rules-edit role for the project where you want to remove an alerting rule.

    • You have installed the OpenShift CLI (oc).

    Procedure

    • To remove rule <foo> in <namespace>, run the following:

      $ oc -n <namespace> delete prometheusrule <foo>

    Managing silences

    You can create a silence to stop receiving notifications about an alert when it is firing. It might be useful to silence an alert after being first notified, while you resolve the underlying issue.

    When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires.

    You can view, edit, and expire existing silences.

    Silencing alerts

    You can either silence a specific alert or silence alerts that match a specification that you define.

    Prerequisites

    • You have access to the cluster as a developer or as a user with edit permissions for the project that you are viewing metrics for.

    Procedure

    To silence a specific alert:

    • In the Administrator perspective:

      1. Navigate to the Monitoring → Alerting → Alerts page of the OKD web console.

      2. For the alert that you want to silence, select the kebab in the right-hand column and select Silence Alert. The Silence Alert form will appear with a pre-populated specification for the chosen alert.

      3. Optional: Modify the silence.

      4. You must add a comment before creating the silence.

      5. To create the silence, select Silence.

    • In the Developer perspective:

      1. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert.

      2. Select Silence Alert. The Silence Alert form will appear with a prepopulated specification for the chosen alert.

      3. Optional: Modify the silence.

      4. You must add a comment before creating the silence.

      5. To create the silence, select Silence.

    To silence a set of alerts by creating an alert specification in the Administrator perspective:

    1. Navigate to the Monitoring → Alerting → Silences page in the OKD web console.

    2. Set the schedule, duration, and label details for an alert in the Create Silence form. You must also add a comment for the silence.

    3. To create silences for alerts that match the label selectors that you entered in the previous step, select Silence.

    Editing silences

    You can edit a silence, which will expire the existing silence and create a new one with the changed configuration.

    Procedure

    To edit a silence in the Administrator perspective:

    1. Navigate to the Monitoring → Alerting → Silences page.

    2. For the silence you want to modify, select the kebab in the last column and choose Edit silence.

      Alternatively, you can select Actions → Edit Silence in the Silence Details page for a silence.

    3. In the Edit Silence page, enter your changes and select Silence. This will expire the existing silence and create one with the chosen configuration.

    To edit a silence in the Developer perspective:

    1. Navigate to the Monitoring → <project_name> → Alerts page.

    2. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert.

    3. Select the name of a silence in the Silenced By section of that page to navigate to its Silence Details page.

    4. Select Actions → Edit Silence in the Silence Details page.

    5. In the Edit Silence page, enter your changes and select Silence. This will expire the existing silence and create one with the chosen configuration.

    Expiring silences

    You can expire a silence. Expiring a silence deactivates it permanently.

    Procedure

    To expire a silence in the Administrator perspective:

    1. Navigate to the Monitoring → Alerting → Silences page.

    2. For the silence you want to modify, select the kebab in the last column and choose Expire silence.

      Alternatively, you can select Actions → Expire Silence in the Silence Details page for a silence.

    To expire a silence in the Developer perspective:

    1. Navigate to the Monitoring → <project_name> → Alerts page.

    2. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert.

    3. Select the name of a silence in the Silenced By section of that page to navigate to its Silence Details page.

    4. Select Actions → Expire Silence in the Silence Details page.

    Sending notifications to external systems

    In OKD 4.8, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OKD to send alerts to the following receiver types:

    • PagerDuty

    • Webhook

    • Email

    • Slack

    Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review.

    Checking that alerting is operational by using the watchdog alert

    OKD monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider.
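    As a sketch of how this is commonly wired up, you might route the Watchdog alert to a dedicated webhook receiver monitored by an external dead man's switch service that pages you when the stream of notifications stops. The receiver name and URL here are placeholders, not OKD defaults:

```yaml
route:
  routes:
  - match:
      alertname: Watchdog
    receiver: watchdog
    repeat_interval: 5m      # keep re-sending so the external service sees a steady heartbeat
receivers:
- name: watchdog
  webhook_configs:
  - url: "https://nosignal.example.com/<your_check_id>"   # placeholder dead man's switch endpoint
```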

    Configuring alert receivers

    You can configure alert receivers to ensure that you learn about important issues with your cluster.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    1. In the Administrator perspective, navigate to Administration → Cluster Settings → Global Configuration → Alertmanager.

      Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OKD web console and choose Configure in the AlertmanagerReceiverNotConfigured alert.

    2. Select Create Receiver in the Receivers section of the page.

    3. In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list.

    4. Edit the receiver configuration:

      • For PagerDuty receivers:

        1. Choose an integration type and add a PagerDuty integration key.

        2. Add the URL of your PagerDuty installation.

        3. Select Show advanced configuration if you want to edit the client and incident details or the severity specification.

      • For webhook receivers:

        1. Add the endpoint to send HTTP POST requests to.

        2. Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver.

      • For email receivers:

        1. Add the email address to send notifications to.

        2. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details.

        3. Choose whether TLS is required.

        4. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration.

      • For Slack receivers:

        1. Add the URL of the Slack webhook.

        2. Add the Slack channel or user name to send notifications to.

        3. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames.

    5. By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver:

      1. Add routing label names and values in the Routing Labels section of the form.

      2. Select Regular Expression if you want to use a regular expression.

      3. Select Add Label to add further routing labels.

    6. Select Create to create the receiver.

    Applying a custom Alertmanager configuration

    You can overwrite the default Alertmanager configuration by editing the alertmanager-main secret in the openshift-monitoring project.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    To change the Alertmanager configuration from the CLI:

    1. Print the currently active Alertmanager configuration into file alertmanager.yaml:

      $ oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

    2. Edit the configuration in alertmanager.yaml:

      global:
        resolve_timeout: 5m
      route:
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: default
        routes:
        - match:
            alertname: Watchdog
          repeat_interval: 5m
          receiver: watchdog
        - match:
            service: <your_service> (1)
          routes:
          - match:
              <your_matching_rules> (2)
            receiver: <receiver> (3)
      receivers:
      - name: default
      - name: watchdog
      - name: <receiver>
        # <receiver_configuration>

      (1) service specifies the service that fires the alerts.
      (2) <your_matching_rules> specifies the alerts to match.
      (3) receiver specifies the receiver to use for the alert.

      The following Alertmanager configuration example configures PagerDuty as an alert receiver:
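      A minimal sketch of such a configuration; the integration key value is a placeholder that you obtain from PagerDuty:

```yaml
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - match:
      service: example-app
    routes:
    - match:
        severity: critical
      receiver: team-frontend-page   # page critical example-app alerts
receivers:
- name: default
- name: team-frontend-page
  pagerduty_configs:
  - service_key: "<your_pagerduty_integration_key>"   # placeholder
```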

      With this configuration, alerts of critical severity that are fired by the example-app service are sent using the team-frontend-page receiver. Typically these types of alerts would be paged to an individual or a critical response team.

    3. Apply the new configuration in the file:

      $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-

    To change the Alertmanager configuration from the OKD web console:

    1. Navigate to the Administration → Cluster Settings → Global Configuration → Alertmanager → YAML page of the web console.

    2. Modify the YAML configuration file.

    3. Select Save.
