Use file-based service discovery to discover scrape targets

    In this guide, we will:

    • Install and run a Prometheus Node Exporter locally
    • Create a targets.json file specifying the host and port information for the Node Exporter
    • Install and run a Prometheus instance that is configured to discover the Node Exporter using the targets.json file

    For instructions on installing and running the Node Exporter, see the Prometheus Node Exporter guide. The Node Exporter runs on port 9100 by default. To ensure that the Node Exporter is exposing metrics, check its metrics endpoint.
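
    A quick way to do this is with curl (the /metrics path is the Node Exporter's default):

        curl http://localhost:9100/metrics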

    The metrics output should look something like this:

        # HELP go_gc_duration_seconds A summary of the GC invocation durations.
        # TYPE go_gc_duration_seconds summary
        go_gc_duration_seconds{quantile="0"} 0
        go_gc_duration_seconds{quantile="0.25"} 0
        go_gc_duration_seconds{quantile="0.5"} 0
        ...

    Like the Node Exporter, Prometheus is a single static binary that you can install via tarball. Download the latest release for your platform and untar it:

        wget https://github.com/prometheus/prometheus/releases/download/v*/prometheus-*.*-amd64.tar.gz
        tar xvf prometheus-*.*-amd64.tar.gz
        cd prometheus-*.*
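
    Before starting Prometheus, create a configuration file for it, named prometheus.yml, in the untarred directory. A minimal configuration along the lines of the sketch below uses Prometheus' file_sd_configs option to read scrape targets from a targets.json file (the job name and file name are choices made for this guide):

        scrape_configs:
        - job_name: 'node'
          file_sd_configs:
          - files:
            - 'targets.json'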

    This configuration specifies that there is a job called node (for the Node Exporter) that retrieves host and port information for Node Exporter instances from a targets.json file.

    Now create that targets.json file and add this content to it:

        [
          {
            "labels": {
              "job": "node"
            },
            "targets": [
              "localhost:9100"
            ]
          }
        ]

    NOTE: In this guide we’ll work with JSON service discovery configurations manually for the sake of brevity. In general, however, we recommend that you use some kind of JSON-generating process or tool instead.

    This configuration specifies that there is a node job with one target: localhost:9100.
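
    As a sketch of the kind of JSON-generating step mentioned in the note above, a jq one-liner could produce an equivalent file (jq and the hard-coded target address are illustrative choices; any templating or configuration-management tool could generate the same output):

        jq -n --arg host "localhost:9100" \
          '[{"targets": [$host], "labels": {"job": "node"}}]' > targets.json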

    Now start up Prometheus from the directory containing your prometheus.yml and targets.json files:

        ./prometheus

    If Prometheus has started up successfully, you should see a log line stating that the server is ready to receive web requests.

    With Prometheus up and running, you can explore metrics exposed by the node service using the Prometheus expression browser. If you explore the up{job="node"} metric, for example, you can see that the Node Exporter is being appropriately discovered.
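
    You can also run the same query against the Prometheus HTTP API from the command line (assuming the default port 9090; --data-urlencode takes care of encoding the selector):

        curl http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="node"}'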

    When using Prometheus’ file-based service discovery mechanism, the Prometheus instance will listen for changes to the file and automatically update the scrape target list, without requiring an instance restart. To demonstrate this, start up a second Node Exporter instance on port 9200. First navigate to the directory containing the Node Exporter binary and run this command in a new terminal window:

        ./node_exporter --web.listen-address=":9200"
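
    As before, you can confirm that this second instance is exposing metrics (again via the default /metrics path):

        curl http://localhost:9200/metrics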

    Now modify the config in targets.json by adding an entry for the new Node Exporter:

        [
          {
            "targets": [
              "localhost:9100"
            ],
            "labels": {
              "job": "node"
            }
          },
          {
            "targets": [
              "localhost:9200"
            ],
            "labels": {
              "job": "node"
            }
          }
        ]
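
    Once the file is saved, Prometheus should pick up the new target automatically. Re-running the up{job="node"} query should then return two series, roughly along these lines (a sketch; the exact label set and ordering may differ):

        up{instance="localhost:9100", job="node"}  1
        up{instance="localhost:9200", job="node"}  1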

    In this guide, you installed and ran a Prometheus Node Exporter and configured Prometheus to discover and scrape metrics from the Node Exporter using file-based service discovery.