If you deploy the TiKV cluster using TiUP, the monitoring and alert services are automatically deployed, and no manual deployment is needed.
Assume that the TiKV cluster topology is as follows:

| Name  | Host IP         | Services                                |
| ----- | --------------- | --------------------------------------- |
| Node1 | 192.168.199.113 | PD1, node_exporter, Prometheus, Grafana |
| Node2 | 192.168.199.114 | PD2, node_exporter                      |
| Node3 | 192.168.199.115 | PD3, node_exporter                      |
| Node4 | 192.168.199.116 | TiKV1, node_exporter                    |
| Node5 | 192.168.199.117 | TiKV2, node_exporter                    |
| Node6 | 192.168.199.118 | TiKV3, node_exporter                    |

Step 1: Download the binary packages
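The extraction commands below assume that the Prometheus, node_exporter, and Grafana release tarballs are already present on the corresponding nodes. A minimal download sketch follows; the URLs are assumptions based on the upstream release naming for these versions and should be verified before use:

# Downloads the packages (URLs follow the upstream release naming and are assumptions).
wget https://github.com/prometheus/prometheus/releases/download/v2.8.1/prometheus-2.8.1.linux-amd64.tar.gz
wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz
wget https://dl.grafana.com/oss/release/grafana-6.1.6.linux-amd64.tar.gz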
# Extracts the package.
tar -xzf prometheus-2.8.1.linux-amd64.tar.gz
tar -xzf node_exporter-0.17.0.linux-amd64.tar.gz
tar -xzf grafana-6.1.6.linux-amd64.tar.gz
Step 2: Start node_exporter on all nodes
cd node_exporter-0.17.0.linux-amd64
# Starts the node_exporter service.
$ ./node_exporter --web.listen-address=":9100" \
--log.level="info" &
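To verify that node_exporter is up on a node, you can fetch its metrics endpoint (a quick sanity check; run it on, or against, each node):

# Prints the first few exposed metrics if node_exporter is serving on port 9100.
$ curl -s http://localhost:9100/metrics | head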
Step 3: Start Prometheus on Node1
Edit the Prometheus configuration file:
...
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default value (10s).
  external_labels:
    cluster: 'test-cluster'
    monitor: "prometheus"

scrape_configs:
  - job_name: 'overwritten-nodes'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
    - targets:
      - '192.168.199.113:9100'
      - '192.168.199.114:9100'
      - '192.168.199.115:9100'
      - '192.168.199.116:9100'
      - '192.168.199.117:9100'
      - '192.168.199.118:9100'

  - job_name: 'pd'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
    - targets:
      - '192.168.199.113:2379'
      - '192.168.199.114:2379'
      - '192.168.199.115:2379'

  - job_name: 'tikv'
    honor_labels: true # Do not overwrite job & instance labels.
    static_configs:
    - targets:
      - '192.168.199.116:20180'
      - '192.168.199.117:20180'
      - '192.168.199.118:20180'
...
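Before starting Prometheus, you can optionally validate the edited configuration with promtool, which is shipped in the same Prometheus tarball:

# Checks the syntax of the Prometheus configuration file.
$ ./promtool check config prometheus.yml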
Start the Prometheus service:
$ ./prometheus \
--config.file="./prometheus.yml" \
--web.listen-address=":9090" \
--web.external-url="http://192.168.199.113:9090/" \
--web.enable-admin-api \
--log.level="info" \
--storage.tsdb.path="./data.metrics" \
--storage.tsdb.retention="15d" &
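Once Prometheus is running, a quick way to confirm that the node_exporter, PD, and TiKV targets are all being scraped is to query its HTTP API (the address matches the --web.external-url used above; adjust it if your deployment differs):

# Lists the configured scrape targets and their current health.
$ curl -s http://192.168.199.113:9090/api/v1/targets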
Step 4: Start Grafana on Node1
Edit the Grafana configuration file:
...
[paths]
data = ./data
logs = ./data/log
plugins = ./data/plugins
[server]
http_port = 3000
[database]
[session]
[analytics]
check_for_updates = true
[security]
admin_user = admin
admin_password = admin
[snapshots]
[auth.anonymous]
[auth.basic]
[auth.ldap]
[smtp]
[emails]
[log]
mode = file
[log.console]
[log.file]
level = info
format = text
[log.syslog]
[event_publisher]
[dashboards.json]
enabled = false
path = ./data/dashboards
[metrics]
[grafana_net]
url = https://grafana.net
...
Start the Grafana service:
$ ./bin/grafana-server \
--config="./conf/grafana.ini" &
To configure Grafana, first add a Prometheus data source, and then import the Grafana dashboards for the PD and TiKV components.
Step 1: Add a Prometheus data source
Log in to the Grafana Web interface.
- Default address: http://localhost:3000
- Default account: admin
- Default password: admin
For the Change Password step, you can choose Skip.
In the Grafana sidebar menu, click Data Source under Configuration.
Click Add data source.
Specify the data source information.
- Specify a Name for the data source.
- For Type, select Prometheus.
- For URL, specify the Prometheus address.
- Specify other fields as needed.
- Click Add to save the new data source.
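As an alternative to the UI steps above, the same Prometheus data source can be created through Grafana's HTTP API. This is only a sketch: the admin/admin credentials, the data source name test-cluster, and the Prometheus address are assumptions taken from the defaults used earlier in this guide.

# Creates a Prometheus data source pointing at the Prometheus address (name and credentials are assumptions).
$ curl -s -X POST -u admin:admin \
    -H 'Content-Type: application/json' \
    -d '{"name":"test-cluster","type":"prometheus","url":"http://192.168.199.113:9090","access":"proxy"}' \
    http://localhost:3000/api/datasources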
Step 2: Import a Grafana dashboard
To import the Grafana dashboards for the PD server and the TiKV server, take the following steps for each dashboard:
In the sidebar menu, click Dashboards -> Import to open the Import Dashboard window.
Click Upload .json File to upload a JSON file (download the TiKV Grafana configuration file and the PD Grafana configuration file).
Click Load.
Select a Prometheus data source and click Import.
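After the import completes, you can confirm that the dashboards are registered by listing them through Grafana's HTTP API (again assuming the default admin credentials):

# Lists all dashboards currently known to Grafana, including the newly imported ones.
$ curl -s -u admin:admin http://localhost:3000/api/search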
To view component metrics, click New dashboard in the top menu and choose the dashboard you want to view.