Alerting

    • Writing triggered: the user writes data to the original time series, and every time a piece of data is inserted, the trigger's judgment logic is invoked. If the alerting conditions are met, an alert is sent to the data sink, and the data sink forwards the alert to the external terminal.

      • This mode is suitable for scenarios that need to monitor every piece of data in real time.
      • Since the operations performed in the trigger affect write performance, this mode is suitable for scenarios that are not sensitive to the write performance of the original data.
    • Continuous query: the user writes data to the original time series, and a ContinuousQuery periodically queries the original time series and writes the query results into a new time series. Each such write invokes the trigger's judgment logic; if the alerting conditions are met, an alert is sent to the data sink, and the data sink forwards the alert to the external terminal.

      • This mode is suitable for scenarios where the data only needs to be checked at regular intervals.
      • Since the periodic query hardly affects writes to the original time series, it is suitable for scenarios that are sensitive to the write performance of the original data.
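    The tradeoff between the two modes can be sketched in plain Java. This is a generic illustration, not the IoTDB trigger or continuous-query API; the class name, method names, and threshold are hypothetical:

```java
import java.util.List;

// Hypothetical sketch of the two alerting modes (not the IoTDB API).
public class AlertingModes {
    static final double THRESHOLD = 100.0;

    // Writing-triggered mode: every inserted value is checked immediately,
    // so the check runs on the write path.
    static boolean onWrite(double value) {
        return value > THRESHOLD;
    }

    // Continuous-query style: values are buffered into a window and only an
    // aggregate (here the window maximum) is checked once per interval.
    static boolean periodicCheck(List<Double> window) {
        return window.stream()
                .mapToDouble(Double::doubleValue)
                .max()
                .orElse(0.0) > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(onWrite(101.5));                          // true
        System.out.println(periodicCheck(List.of(95.0, 101.5, 99.0))); // true
    }
}
```

    The per-write check reacts immediately but adds work to every insert, while the windowed check batches that cost at the price of some detection latency.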

    With the trigger module and the sink module now available in IoTDB, users can combine these two modules with AlertManager to implement the writing-triggered alerting mode.

    Precompiled binaries

    The precompiled binary file can be downloaded from the Prometheus download page.

    Running command:

    ./alertmanager --config.file=alertmanager.yml

    Docker image

    Running command:

    docker run --name alertmanager -d -p 127.0.0.1:9093:9093 quay.io/prometheus/alertmanager

    The following example covers most of the common configuration rules. For detailed configuration rules, see the official AlertManager configuration documentation.

    Example:

    # alertmanager.yml
    global:
      smtp_smarthost: ''
      smtp_from: ''
      smtp_auth_username: ''
      smtp_auth_password: ''
      smtp_require_tls: false

    route:
      group_by: ['alertname']
      group_wait: 1m
      group_interval: 10m
      repeat_interval: 10h
      receiver: 'email'  # the default receiver (required by AlertManager)

    receivers:
      - name: 'email'
        email_configs:
          - to: ''

    inhibit_rules:
      - source_match:
          severity: 'critical'
        target_match:
          severity: 'warning'
        equal: ['alertname']

    The AlertManager API is divided into two versions, v1 and v2. The current AlertManager API version is v2 (for its specification, see api/v2/openapi.yaml).
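    As a sketch of what the v2 API expects, the following builds the minimal payload for POST /api/v2/alerts. The endpoint path and the payload shape (a JSON array of alerts, each with a labels object) follow the AlertManager v2 OpenAPI specification; the address 127.0.0.1:9093 matches the instance started above, and the class and method names are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: constructing a request for AlertManager's v2 alerts endpoint.
public class PostAlert {

    // Minimal v2 payload: a JSON array of alerts, each carrying labels.
    static String alertJson(String alertname, String severity) {
        return "[{\"labels\":{\"alertname\":\"" + alertname
                + "\",\"severity\":\"" + severity + "\"}}]";
    }

    public static void main(String[] args) {
        String body = alertJson("test_alert", "warning");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:9093/api/v2/alerts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        System.out.println(request.method() + " " + request.uri());
        System.out.println(body);
        // To actually send it (requires a running AlertManager instance):
        // java.net.http.HttpClient.newHttpClient()
        //     .send(request, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```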

    The user defines a trigger by creating a Java class and writing the judgment logic in its hooks. Please refer to the trigger documentation for the specific configuration process, and to the Sink module documentation for the usage of the AlertManagerSink related tools.

    The following example creates the org.apache.iotdb.trigger.AlertingExample class, whose alertManagerHandler member variable sends alerts to the AlertManager instance at http://127.0.0.1:9093/.

    When value > 100.0, an alert of critical severity is sent; when 50.0 < value <= 100.0, an alert of warning severity is sent.
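    The threshold logic above can be isolated in a small, self-contained sketch. The class name AlertingLogic and the classify helper are hypothetical; the real AlertingExample implements IoTDB's trigger hooks and forwards alert events through its alertManagerHandler:

```java
// Hypothetical, simplified sketch of the severity decision used by
// org.apache.iotdb.trigger.AlertingExample. Only the threshold logic is
// shown; sending the alert to AlertManager is omitted.
public class AlertingLogic {

    /** Returns the alert severity for a value, or null if no alert is needed. */
    static String classify(double value) {
        if (value > 100.0) {
            return "critical";  // value > 100.0 -> critical alert
        } else if (value > 50.0) {
            return "warning";   // 50.0 < value <= 100.0 -> warning alert
        }
        return null;            // value <= 50.0 -> no alert
    }

    public static void main(String[] args) {
        System.out.println(classify(120.5)); // critical
        System.out.println(classify(75.0));  // warning
        System.out.println(classify(10.0));  // null
    }
}
```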

    The following SQL statement registers a trigger named root-ln-wf01-wt01-alert on the root.ln.wf01.wt01.temperature time series; its operation logic is defined by the org.apache.iotdb.trigger.AlertingExample Java class.

    CREATE TRIGGER `root-ln-wf01-wt01-alert`
    AFTER INSERT
    ON root.ln.wf01.wt01.temperature
    AS "org.apache.iotdb.trigger.AlertingExample"

    Once AlertManager has been deployed and started and the trigger has been created, we can test the alerting by writing data to the time series.
