Telemetry

The Consul agent collects various runtime metrics about the performance of its libraries and subsystems. These metrics are aggregated on a ten second interval and retained for one minute. When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval.

To view this data, you must send a signal to the Consul process: on Unix, this is USR1, while on Windows it is BREAK. Once Consul receives the signal, it will dump the current telemetry information to the agent's stderr.
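For example, a minimal shell sketch for Unix, assuming a single local consul process and that pgrep is available:

```shell
# Send USR1 to the local Consul agent; it dumps current telemetry to its stderr.
kill -USR1 "$(pgrep -x consul)"
```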

This telemetry information can be used for debugging or otherwise getting a better view of what Consul is doing. Review the Monitoring and Metrics tutorial to learn how to collect and interpret Consul data.

Additionally, if the telemetry configuration options are provided, the telemetry information will be streamed to a statsite or statsd server where it can be aggregated and flushed to Graphite or any other metrics store. For a Telegraf configuration example, review the Monitoring with Telegraf tutorial.
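As a sketch, the agent's telemetry stanza might look like the following; the address is illustrative, and you would use either statsite_address or statsd_address depending on your collector:

```hcl
telemetry {
  # Stream metrics to a local statsite daemon (example address).
  statsite_address = "127.0.0.1:8125"

  # Or, for a statsd-compatible collector such as Telegraf:
  # statsd_address = "127.0.0.1:8125"
}
```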

This information can also be viewed with the metrics endpoint (/v1/agent/metrics) in JSON format or in Prometheus format.
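For example, assuming an agent listening on the default local HTTP address, the endpoint can be queried as follows (the Prometheus format requires prometheus_retention_time to be set in the telemetry configuration):

```shell
# JSON format
curl http://127.0.0.1:8500/v1/agent/metrics

# Prometheus exposition format
curl http://127.0.0.1:8500/v1/agent/metrics?format=prometheus
```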

Sample output of telemetry dump

Key Metrics

These are some emitted metrics that can help you understand the health of your cluster at a glance. A Grafana dashboard maintained by the Consul team is also available and displays these metrics for easy visualization. For a full list of metrics emitted by Consul, see the metrics reference later on this page.

Transaction timing

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
| consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
| consul.raft.apply | Counts the number of Raft transactions applied during the interval. This metric is only reported on the leader. | raft transactions / interval | counter |
| consul.raft.commitTime | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |

Why they’re important: Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves.

What to look for: Deviations (in any of these metrics) of more than 50% from baseline over the previous hour.
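As a rough illustration of sampling one of these timers for a baseline comparison, the following sketch pulls the latest mean consul.raft.commitTime from the JSON metrics endpoint. The field names are assumptions about the default in-memory sink's JSON output, jq is assumed to be installed, and metric names may carry a hostname segment unless telemetry's disable_hostname is set.

```shell
# Print the most recent mean commit time in milliseconds.
curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Samples[] | select(.Name | endswith("raft.commitTime")) | .Mean'
```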

Leadership changes

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.raft.leader.lastContact | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. | ms | timer |
| consul.raft.state.candidate | Increments whenever a Consul server starts an election. | elections | counter |
| consul.raft.state.leader | Increments whenever a Consul server becomes a leader. | leaders | counter |
| consul.server.isLeader | Tracks if a server is a leader (1) or not (0). | 1 or 0 | gauge |

Why they’re important: Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load.

What to look for: For a healthy cluster, you’re looking for a lastContact lower than 200ms, leader > 0 and candidate == 0. Deviations from this might indicate flapping leadership.
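A minimal check along these lines, again using the JSON metrics endpoint and jq (field names and the 200 ms threshold follow the guidance above; exact metric names may vary with the disable_hostname setting):

```shell
# Warn if the leader's last follower contact exceeds 200 ms.
last=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Samples[] | select(.Name | endswith("raft.leader.lastContact")) | .Max')
awk -v v="$last" 'BEGIN { if (v >= 200) print "WARN: raft.leader.lastContact max is " v " ms" }'
```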

Autopilot

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.autopilot.healthy | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | health state | gauge |

Why it’s important: Autopilot can expose the overall health of your cluster with a simple boolean.

What to look for: Alert if healthy is 0 (see the sketch after this list). Some other indicators of an unhealthy cluster would be:

  • consul.raft.commitTime - This can help reflect the speed of state store changes being performed by the agent. If this number is rising, the server may be experiencing an issue due to degraded resources on the host.
  • The leadership change metrics described above - Check for deviation from the recommended values. This can indicate failed leadership elections or flapping nodes.
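A minimal sketch of the healthy check mentioned above, using the JSON metrics endpoint and jq (endpoint address and field names are assumptions based on a default local agent):

```shell
# Alert if Autopilot reports the server cluster as unhealthy (gauge value 0).
healthy=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Gauges[] | select(.Name | endswith("autopilot.healthy")) | .Value')
[ "$healthy" = "1" ] || echo "ALERT: consul.autopilot.healthy is ${healthy:-missing}"
```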

Memory usage

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.runtime.alloc_bytes | Measures the number of bytes allocated by the Consul process. | bytes | gauge |
| consul.runtime.sys_bytes | Measures the total number of bytes of memory obtained from the OS. | bytes | gauge |

Why they’re important: Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash.

What to look for: If consul.runtime.sys_bytes exceeds 90% of total available system memory.
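One way to approximate this check on Linux is to compare the gauge against /proc/meminfo. A rough sketch; the JSON field names are assumptions and the output is informational only:

```shell
# Percentage of total system memory obtained by the Consul process.
sys=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Gauges[] | select(.Name | endswith("runtime.sys_bytes")) | .Value')
total=$(awk '/MemTotal/ { print $2 * 1024 }' /proc/meminfo)
awk -v s="$sys" -v t="$total" 'BEGIN { printf "consul sys_bytes: %.1f%% of system memory\n", 100 * s / t }'
```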

NOTE: This metric is calculated using Go's runtime package. This will have a different output than using information gathered from top. For more information, see GH-4734.

Garbage collection

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.runtime.total_gc_pause_ns | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. | ns | gauge |

Why it's important: GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul.

What to look for: Warning if total_gc_pause_ns exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute.

NOTE: total_gc_pause_ns is a cumulative counter, so in order to calculate rates (such as GC/minute), you will need to apply a function such as InfluxDB's non_negative_difference().
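If you only have the raw JSON endpoint available, the same rate can be approximated by sampling the cumulative value twice. A sketch; the sampling interval and field names are assumptions:

```shell
# Approximate GC pause time accumulated over one minute.
a=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Gauges[] | select(.Name | endswith("runtime.total_gc_pause_ns")) | .Value')
sleep 60
b=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Gauges[] | select(.Name | endswith("runtime.total_gc_pause_ns")) | .Value')
awk -v a="$a" -v b="$b" 'BEGIN { printf "GC pause over the last minute: %.1f ms\n", (b - a) / 1e6 }'
```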

Network activity - RPC Count

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.client.rpc | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. | requests | counter |
| consul.client.rpc.exceeded | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and gets rate limited by that agent's configuration. | requests | counter |
| consul.client.rpc.failed | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter |

Why they're important: These measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from consul.client.rpc.exceeded, which means that requests are being rate-limited, could imply a misconfigured Consul agent.

What to look for: Sudden large changes to the consul.client.rpc metrics (greater than 50% deviation from baseline), or a consul.client.rpc.exceeded or consul.client.rpc.failed count > 0, as this implies that an agent is being rate-limited or is failing to make RPC requests to a Consul server.
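A simple sketch for the second condition, checking the rate-limit and failure counters reported over the current aggregation interval (field names are assumptions about the JSON endpoint's counter output):

```shell
# Flag any rate-limited or failed client RPCs seen in the current interval.
curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq -r '.Counters[]
         | select((.Name | endswith("client.rpc.exceeded")) or (.Name | endswith("client.rpc.failed")))
         | "\(.Name): \(.Count)"'
```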

Raft Replication Capacity Issues

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.raft.fsm.lastRestoreDuration | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge |
| consul.raft.leader.oldestLogAge | The number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large, as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. | ms | gauge |
| consul.raft.rpc.installSnapshot | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer |

Why they’re important: These metrics allow operators to monitor the health and capacity of raft replication on servers. When Consul is handling large amounts of data and high write throughput it is possible for the cluster to get into the following state:

  • Write throughput is high (say 500 commits per second or more) and constant
  • The leader is writing out a large snapshot every minute or so
  • The snapshot is large enough that it takes considerable time to restore from disk on a restart or from the leader if a follower gets behind
  • Disk IO available allows the leader to write a snapshot faster than it can be restored from disk on a follower

In this state, followers must be able to restore a snapshot into memory and resume replication in under 80 seconds, otherwise they will never be able to rejoin the cluster until write rates reduce. If they take more than 20 seconds, there is a chance that they are unlucky with timing when they restart and have to download a snapshot again from the servers one or more times. If they take 50 seconds or more, they will likely fail to catch up more often than they succeed and will remain non-voters for some time, until they happen to complete the restore just before the leader truncates its logs.

In the worst case, the follower will be left continually downloading snapshots from the leader which are always too old to use by the time they are restored. This can put additional strain on the leader transferring large snapshots repeatedly as well as reduce the fault tolerance and serving capacity of the cluster.

Since Consul 1.5.3, raft_trailing_logs has been configurable. Increasing it allows the leader to retain more logs and gives followers more time to restore and catch up. The tradeoff is potentially slower appends, which may eventually affect write throughput and latency negatively, so setting it arbitrarily high is not recommended. Before Consul 1.10.0, changing this configuration required a rolling restart, and since no followers could restart without losing health in this state, that could mean losing cluster availability and needing to recover the cluster from a loss of quorum.

Since Consul 1.10.0, raft_trailing_logs is reloadable with consul reload or SIGHUP, allowing operators to increase it without the leader restarting or losing leadership, so the cluster can be recovered gracefully.

Monitoring these metrics can help avoid or diagnose this state.

What to look for:

consul.raft.leader.oldestLogAge should look like a saw-tooth wave increasing linearly with time until the leader takes a snapshot and then jumping down as the oldest logs are truncated. The lowest point on that line should remain comfortably higher (i.e. 2x or more) than the time it takes to restore a snapshot.

There are two ways a snapshot can be restored on a follower: from disk on startup, or from the leader during an installSnapshot RPC. The leader only sends an installSnapshot RPC if the follower is new and has no state, or if its state is too old for it to catch up with the leader's logs.

consul.raft.fsm.lastRestoreDuration shows the time it took to restore from either source the last time it happened. Most of the time this is when the server was started. It’s a gauge that will always show the last restore duration (in Consul 1.10.0 and later) however long ago that was.

consul.raft.rpc.installSnapshot is the timing information from the leader’s perspective when it installs a new snapshot on a follower. It includes the time spent transferring the data as well as the follower restoring it. Since these events are typically infrequent, you may need to graph the last value observed, for example using max_over_time with a large range in Prometheus. While the restore part will also be reflected in lastRestoreDuration, it can be useful to observe this too since the logs need to be able to cover this entire operation including the snapshot delivery to ensure followers can always catch up safely.

Graphing consul.raft.leader.oldestLogAge on the same axes as the other two metrics here can help see at a glance if restore times are creeping dangerously close to the limit of what the leader is retaining at the current write rate.
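As a rough sketch of that comparison on a single server (in practice, compare the leader's oldestLogAge against the slowest follower's restore time; the endpoint and field names are assumptions, as above):

```shell
# Ratio of oldest retained log age to the last snapshot restore duration (want 2x or more).
m=$(curl -s http://127.0.0.1:8500/v1/agent/metrics)
old=$(echo "$m" | jq '.Gauges[] | select(.Name | endswith("raft.leader.oldestLogAge")) | .Value')
restore=$(echo "$m" | jq '.Gauges[] | select(.Name | endswith("raft.fsm.lastRestoreDuration")) | .Value')
awk -v o="$old" -v r="$restore" 'BEGIN { if (r > 0) printf "oldestLogAge / lastRestoreDuration = %.1fx\n", o / r }'
```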

Note that if servers don’t restart often, then the snapshot could have grown significantly since the last restore happened so last restore times might not reflect what would happen if an agent restarts now.

License Expiration

Enterprise

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.system.licenseExpiration | Number of hours until the Consul Enterprise license will expire. | hours | gauge |

Why they’re important:

This measurement indicates how many hours are left before the Consul Enterprise license expires. When the license expires some Consul Enterprise features will cease to work. An example of this is that after expiration, it is no longer possible to create or modify resources in non-default namespaces or to manage namespace definitions themselves even though reads of namespaced resources will still work.

What to look for:

This metric should be monitored to ensure that the license doesn’t expire to prevent degradation of functionality.
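For example, a sketch that warns when fewer than 30 days (720 hours) of license remain; the threshold is an arbitrary illustration rather than a recommendation, and the field names follow the same assumptions as the earlier examples:

```shell
# Warn when the Enterprise license has less than 720 hours (30 days) remaining.
hours=$(curl -s http://127.0.0.1:8500/v1/agent/metrics | \
  jq '.Gauges[] | select(.Name | endswith("system.licenseExpiration")) | .Value')
awk -v h="$hours" 'BEGIN { if (h != "" && h < 720) print "WARN: Consul Enterprise license expires in " h " hours" }'
```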

Bolt DB Performance

| Metric Name | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.raft.boltdb.freelistBytes | Represents the number of bytes necessary to encode the freelist metadata. When raft_boltdb.NoFreelistSync is set to false, these metadata bytes must also be written to disk for each committed log. | bytes | gauge |
| consul.raft.boltdb.logsPerBatch | Measures the number of logs being written per batch to the db. | logs | sample |
| consul.raft.boltdb.storeLogs | Measures the amount of time spent writing logs to the db. | ms | timer |
| consul.raft.boltdb.writeCapacity | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it is highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform. | logs/second | sample |

Requirements:

Why they’re important:

The consul.raft.boltdb.storeLogs metric is a direct indicator of disk write performance of a Consul server. If there are issues with the disk or performance degradations related to Bolt DB, these metrics will show the issue and potentially the cause as well.

What to look for:

The primary thing to look for is an increase in the consul.raft.boltdb.storeLogs times. Its value directly governs an upper limit to the throughput of write operations within Consul.

There are a number of potential issues that can cause this. Often it is the performance of the underlying disks. Other times it may be caused by Bolt DB behavior. Bolt DB keeps track of free space within the raft.db file. When it needs to allocate data, it will use existing free space first before further expanding the file. By default, Bolt DB writes a data structure containing metadata about free pages within the DB to disk for every log storage operation. Therefore, if the free space within the database grows excessively large, such as after a large spike in writes beyond the normal steady state followed by a slowdown in the write rate, Bolt DB could end up writing a large amount of extra data to disk for each log storage operation. This has the potential to drastically increase disk write throughput, potentially beyond what the underlying disks can keep up with. To detect this situation, look at the consul.raft.boltdb.freelistBytes metric: a count of the extra bytes being written for each log storage operation beyond the log data itself. While not a clear indicator of an actual issue on its own, this metric can be used to diagnose why the consul.raft.boltdb.storeLogs metric is high.

If Bolt DB log storage performance becomes an issue and is caused by free list management, then setting raft_boltdb.NoFreelistSync to true in the server's configuration may help to reduce disk IO and log storage operation times. Disabling free list syncing will, however, increase the startup time for a server, as it must scan the raft.db file for free space instead of loading the already populated free list structure.
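A minimal server configuration sketch for that change; only apply it after confirming free list management is actually the bottleneck:

```hcl
# Skip syncing the freelist to disk on every commit; trades slower startup for less write IO.
raft_boltdb {
  NoFreelistSync = true
}
```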

This is a full list of metrics emitted by Consul.

These metrics are used to monitor the health of the Consul servers.

| Metric | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.acl.ResolveToken | Measures the time it takes to resolve an ACL token. | ms | timer |
| consul.acl.ResolveTokenToIdentity | Measures the time it takes to resolve an ACL token to an Identity. This metric was removed in Consul 1.12. The time will now be reflected in consul.acl.ResolveToken. | ms | timer |
| consul.acl.token.cache_hit | Increments if Consul is able to resolve a token's identity, or a legacy token, from the cache. | cache read op | counter |
| consul.acl.token.cache_miss | Increments if Consul cannot resolve a token's identity, or a legacy token, from the cache. | cache read op | counter |
| consul.cache.bypass | Counts how many times a request bypassed the cache because no cache-key was provided. | counter | counter |
| consul.cache.fetch_success | Counts the number of successful fetches by the cache. | counter | counter |
| consul.cache.fetch_error | Counts the number of failed fetches by the cache. | counter | counter |
| consul.cache.evict_expired | Counts the number of expired entries that are evicted. | counter | counter |
| consul.raft.applied_index | Represents the raft applied index. | index | gauge |
| consul.raft.apply | Counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. | raft transactions / interval | counter |
| consul.raft.barrier | Counts the number of times the agent has started the barrier, i.e. the number of times it has issued a blocking call to ensure that the agent has all the pending operations that were queued to be applied to the agent's FSM. | blocks / interval | counter |
| consul.raft.boltdb.freelistBytes | Represents the number of bytes necessary to encode the freelist metadata. When raft_boltdb.NoFreelistSync is set to false, these metadata bytes must also be written to disk for each committed log. | bytes | gauge |
| consul.raft.boltdb.freePageBytes | Represents the number of bytes of free space within the raft.db file. | bytes | gauge |
| consul.raft.boltdb.getLog | Measures the amount of time spent reading logs from the db. | ms | timer |
| consul.raft.boltdb.logBatchSize | Measures the total size in bytes of logs being written to the db in a single batch. | bytes | sample |
| consul.raft.boltdb.logsPerBatch | Measures the number of logs being written per batch to the db. | logs | sample |
| consul.raft.boltdb.logSize | Measures the size of logs being written to the db. | bytes | sample |
| consul.raft.boltdb.numFreePages | Represents the number of free pages within the raft.db file. | pages | gauge |
| consul.raft.boltdb.numPendingPages | Represents the number of pending pages within the raft.db that will soon become free. | pages | gauge |
| consul.raft.boltdb.openReadTxn | Represents the number of open read transactions against the db. | transactions | gauge |
| consul.raft.boltdb.totalReadTxn | Represents the total number of started read transactions against the db. | transactions | gauge |
| consul.raft.boltdb.storeLogs | Measures the amount of time spent writing logs to the db. | ms | timer |
| consul.raft.boltdb.txstats.cursorCount | Counts the number of cursors created since Consul was started. | cursors | counter |
| consul.raft.boltdb.txstats.nodeCount | Counts the number of node allocations within the db since Consul was started. | allocations | counter |
| consul.raft.boltdb.txstats.nodeDeref | Counts the number of node dereferences in the db since Consul was started. | dereferences | counter |
| consul.raft.boltdb.txstats.pageAlloc | Represents the number of bytes allocated within the db since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | bytes | gauge |
| consul.raft.boltdb.txstats.pageCount | Represents the number of pages allocated since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | pages | gauge |
| consul.raft.boltdb.txstats.rebalance | Counts the number of node rebalances performed in the db since Consul was started. | rebalances | counter |
| consul.raft.boltdb.txstats.rebalanceTime | Measures the time spent rebalancing nodes in the db. | ms | timer |
| consul.raft.boltdb.txstats.spill | Counts the number of nodes spilled in the db since Consul was started. | spills | counter |
| consul.raft.boltdb.txstats.spillTime | Measures the time spent spilling nodes in the db. | ms | timer |
| consul.raft.boltdb.txstats.split | Counts the number of nodes split in the db since Consul was started. | splits | counter |
| consul.raft.boltdb.txstats.write | Counts the number of writes to the db since Consul was started. | writes | counter |
| consul.raft.boltdb.txstats.writeTime | Measures the amount of time spent performing writes to the db. | ms | timer |
| consul.raft.boltdb.writeCapacity | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it is highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform. | logs/second | sample |
| consul.raft.commitNumLogs | Measures the count of logs processed for application to the FSM in a single batch. | logs | gauge |
| consul.raft.commitTime | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer |
| consul.raft.fsm.lastRestoreDuration | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge |
| consul.raft.fsm.snapshot | Measures the time taken by the FSM to record the current state for the snapshot. | ms | timer |
| consul.raft.fsm.apply | Measures the time to apply a log to the FSM. | ms | timer |
| consul.raft.fsm.enqueue | Measures the amount of time to enqueue a batch of logs for the FSM to apply. | ms | timer |
| consul.raft.fsm.restore | Measures the time taken by the FSM to restore its state from a snapshot. | ms | timer |
| consul.raft.last_index | Represents the raft applied index. | index | gauge |
| consul.raft.leader.dispatchLog | Measures the time it takes for the leader to write log entries to disk. | ms | timer |
| consul.raft.leader.dispatchNumLogs | Measures the number of logs committed to disk in a batch. | logs | gauge |
| consul.raft.leader.lastContact | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. It can be used as a measure of how stable the Raft timing is and how close the leader is to timing out its lease. The lease timeout is 500 ms times the raft_multiplier configuration, so this telemetry value should not be getting close to that configured value, otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the Server Performance guide for more details. | ms | timer |
| consul.raft.leader.oldestLogAge | The number of milliseconds since the oldest log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large, as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with consul.raft.fsm.lastRestoreDuration and consul.raft.rpc.installSnapshot to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. Note: this metric won't be emitted until the leader writes a snapshot. After an upgrade to Consul 1.10.0 it won't be emitted until the oldest log was written after the upgrade. | ms | gauge |
| consul.raft.replication.heartbeat | Measures the time taken to invoke appendEntries on a peer, so that it doesn't timeout on a periodic basis. | ms | timer |
| consul.raft.replication.appendEntries | Measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. | ms | timer |
| consul.raft.replication.appendEntries.rpc | Measures the time taken by the append entries RPC to replicate the log entries of a leader agent onto its follower agent(s). | ms | timer |
| consul.raft.replication.appendEntries.logs | Measures the number of logs replicated to an agent to bring it up to speed with the leader's logs. | logs appended / interval | counter |
| consul.raft.restore | Counts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state. | operation invoked / interval | counter |
| consul.raft.restoreUserSnapshot | Measures the time taken by the agent to restore the FSM state from a user's snapshot. | ms | timer |
| consul.raft.rpc.appendEntries | Measures the time taken to process an append entries RPC call from an agent. | ms | timer |
| consul.raft.rpc.appendEntries.storeLogs | Measures the time taken to add any outstanding logs for an agent since the last appendEntries was invoked. | ms | timer |
| consul.raft.rpc.appendEntries.processLogs | Measures the time taken to process the outstanding log entries of an agent. | ms | timer |
| consul.raft.rpc.installSnapshot | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer |
| consul.raft.rpc.processHeartBeat | Measures the time taken to process a heartbeat request. | ms | timer |
| consul.raft.rpc.requestVote | Measures the time taken to process the request vote RPC call. | ms | timer |
| consul.raft.snapshot.create | Measures the time taken to initialize the snapshot process. | ms | timer |
| consul.raft.snapshot.persist | Measures the time taken to dump the current snapshot taken by the Consul agent to the disk. | ms | timer |
| consul.raft.snapshot.takeSnapshot | Measures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent. | ms | timer |
| consul.serf.snapshot.appendLine | Measures the time taken by the Consul agent to append an entry into the existing log. | ms | timer |
| consul.serf.snapshot.compact | Measures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction. | ms | timer |
| consul.raft.state.candidate | Increments whenever a Consul server starts an election. If this increments without a leadership change occurring, it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter |
| consul.raft.state.leader | Increments whenever a Consul server becomes a leader. If there are frequent leadership changes, this may be an indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter |
| consul.raft.state.follower | Counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. | follower state entered / interval | counter |
| consul.raft.transition.heartbeat_timeout | The number of times an agent has transitioned to the Candidate state after receiving no heartbeat messages from the last known leader. | timeouts / interval | counter |
| consul.raft.verify_leader | This metric doesn't have a direct correlation to the leader change. It just counts the number of times an agent checks if it is still the leader or not. For example, during every consistent read, the check is done. Depending on the load in the system, this metric count can be high as it is incremented each time a consistent read is completed. | checks / interval | counter |
| consul.rpc.accept_conn | Increments when a server accepts an RPC connection. | connections | counter |
| consul.catalog.register | Measures the time it takes to complete a catalog register operation. | ms | timer |
| consul.catalog.deregister | Measures the time it takes to complete a catalog deregister operation. | ms | timer |
| consul.server.isLeader | Tracks if a server is a leader (1) or not (0). | 1 or 0 | gauge |
| consul.fsm.register | Measures the time it takes to apply a catalog register operation to the FSM. | ms | timer |
| consul.fsm.deregister | Measures the time it takes to apply a catalog deregister operation to the FSM. | ms | timer |
| consul.fsm.session.<op> | Measures the time it takes to apply the given session operation to the FSM. | ms | timer |
| consul.fsm.kvs.<op> | Measures the time it takes to apply the given KV operation to the FSM. | ms | timer |
| consul.fsm.tombstone.<op> | Measures the time it takes to apply the given tombstone operation to the FSM. | ms | timer |
| consul.fsm.coordinate.batch-update | Measures the time it takes to apply the given batch coordinate update to the FSM. | ms | timer |
| consul.fsm.prepared-query.<op> | Measures the time it takes to apply the given prepared query update operation to the FSM. | ms | timer |
| consul.fsm.txn | Measures the time it takes to apply the given transaction update to the FSM. | ms | timer |
| consul.fsm.autopilot | Measures the time it takes to apply the given autopilot update to the FSM. | ms | timer |
| consul.fsm.persist | Measures the time it takes to persist the FSM to a raft snapshot. | ms | timer |
| consul.fsm.intention | Measures the time it takes to apply an intention operation to the state store. | ms | timer |
| consul.fsm.ca | Measures the time it takes to apply CA configuration operations to the FSM. | ms | timer |
| consul.fsm.ca.leaf | Measures the time it takes to apply an operation while signing a leaf certificate. | ms | timer |
| consul.fsm.acl.token | Measures the time it takes to apply an ACL token operation to the FSM. | ms | timer |
| consul.fsm.acl.policy | Measures the time it takes to apply an ACL policy operation to the FSM. | ms | timer |
| consul.fsm.acl.bindingrule | Measures the time it takes to apply an ACL binding rule operation to the FSM. | ms | timer |
| consul.fsm.acl.authmethod | Measures the time it takes to apply an ACL authmethod operation to the FSM. | ms | timer |
| consul.fsm.system_metadata | Measures the time it takes to apply a system metadata operation to the FSM. | ms | timer |
| consul.kvs.apply | Measures the time it takes to complete an update to the KV store. | ms | timer |
| consul.leader.barrier | Measures the time spent waiting for the raft barrier upon gaining leadership. | ms | timer |
| consul.leader.reconcile | Measures the time spent updating the raft store from the serf member information. | ms | timer |
| consul.leader.reconcileMember | Measures the time spent updating the raft store for a single serf member's information. | ms | timer |
| consul.leader.reapTombstones | Measures the time spent clearing tombstones. | ms | timer |
| consul.leader.replication.acl-policies.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL policy replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-policies.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL policies in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.acl-roles.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL role replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-roles.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL roles in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.acl-tokens.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL token replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.acl-tokens.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL tokens in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.config-entries.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of config entry replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.config-entries.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of config entries in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.federation-state.status | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of federation state replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.federation-state.index | This will only be emitted by the leader in a secondary datacenter. Increments to the index of federation states in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.leader.replication.namespaces.status | Enterprise only. This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of namespace replication was successful or 0 if there was an error. | healthy | gauge |
| consul.leader.replication.namespaces.index | Enterprise only. This will only be emitted by the leader in a secondary datacenter. Increments to the index of namespaces in the primary datacenter that have been successfully replicated. | index | gauge |
| consul.prepared-query.apply | Measures the time it takes to apply a prepared query update. | ms | timer |
| consul.prepared-query.explain | Measures the time it takes to process a prepared query explain request. | ms | timer |
| consul.prepared-query.execute | Measures the time it takes to process a prepared query execute request. | ms | timer |
| consul.prepared-query.execute_remote | Measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. | ms | timer |
| consul.rpc.raft_handoff | Increments when a server accepts a Raft-related RPC connection. | connections | counter |
| consul.rpc.request_error | Increments when a server returns an error from an RPC request. | errors | counter |
| consul.rpc.request | Increments when a server receives a Consul-related RPC request. | requests | counter |
| consul.rpc.query | Increments when a server receives a read RPC request, indicating the rate of new read queries. See consul.rpc.queries_blocking for the current number of in-flight blocking RPC calls. This metric changed in 1.7.0 to only increment on the start of a query. The rate of queries will appear lower, but is more accurate. | queries | counter |
| consul.rpc.queries_blocking | The current number of in-flight blocking queries the server is handling. | queries | gauge |
| consul.rpc.cross-dc | Increments when a server sends a (potentially blocking) cross datacenter RPC query. | queries | counter |
| consul.rpc.consistentRead | Measures the time spent confirming that a consistent read can be performed. | ms | timer |
| consul.session.apply | Measures the time spent applying a session update. | ms | timer |
| consul.session.renew | Measures the time spent renewing a session. | ms | timer |
| consul.session_ttl.invalidate | Measures the time spent invalidating an expired session. | ms | timer |
| consul.txn.apply | Measures the time spent applying a transaction operation. | ms | timer |
| consul.txn.read | Measures the time spent returning a read transaction. | ms | timer |
| consul.grpc.client.request.count | Counts the number of gRPC requests made by the client agent to a Consul server. | requests | counter |
| consul.grpc.client.connection.count | Counts the number of new gRPC connections opened by the client agent to a Consul server. | connections | counter |
| consul.grpc.client.connections | Measures the number of active gRPC connections open from the client agent to any Consul servers. | connections | gauge |
| consul.grpc.server.request.count | Counts the number of gRPC requests received by the server. | requests | counter |
| consul.grpc.server.connection.count | Counts the number of new gRPC connections received by the server. | connections | counter |
| consul.grpc.server.connections | Measures the number of active gRPC connections open on the server. | connections | gauge |
| consul.grpc.server.stream.count | Counts the number of new gRPC streams received by the server. | streams | counter |
| consul.grpc.server.streams | Measures the number of active gRPC streams handled by the server. | streams | gauge |
| consul.xds.server.streams | Measures the number of active xDS streams handled by the server, split by protocol version. | streams | gauge |

Server Workload

Requirements:

  • Consul 1.12.0+

Label-based RPC metrics were added in Consul 1.12.0 as a beta feature to better understand the workload on a Consul server and where that workload is coming from. The following metric provides that insight:

| Metric | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.rpc.server.call | Measures the elapsed time taken to complete an RPC call. | ms | summary |

Note that values of consul.rpc.server.call may be emitted as 0 ms. That means the elapsed time was less than 1 ms.

Labels

The server workload metrics above come with the following labels:

| Label Name | Description | Possible values |
| --- | --- | --- |
| method | The name of the RPC method. | The value of any RPC request in Consul. |
| errored | Indicates whether the RPC call errored. | true or false. |
| request_type | Whether it is a read or write request. | read, write or unreported. |
| rpc_type | The RPC implementation. | net/rpc or internal. |
| leader | Whether the server was a leader or not at the time of the request. | true, false or unreported. |

Label Explanations

The internal value for the rpc_type in the table above refers to leader and cluster management RPC operations that Consul performs. Historically, internal RPC operation metrics were accounted under the same metric names.

The unreported value for the request_type in the table above refers to RPC requests within Consul where it is difficult to ascertain whether a request is read or write type.

The unreported value for the leader label in the table above refers to RPC requests where Consul cannot determine the leadership status for a server.

Read Request Labels

In addition to the labels above, for read requests, the following may be populated:

| Label Name | Description | Possible values |
| --- | --- | --- |
| blocking | Whether the read request passed in a MinQueryIndex. | true if a MinQueryIndex was passed, false otherwise. |
| target_datacenter | The target datacenter for the read request. | The string value of the target datacenter for the request. |
| locality | Gives an indication of whether the RPC request is local or has been forwarded. | local if the current server's datacenter is the same as target_datacenter, otherwise forwarded. |

Here is a Prometheus style example of an RPC metric and its labels:

Sample output of telemetry dump
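An illustrative line in Prometheus exposition format, with an assumed method name and value, would look roughly like:

```
consul_rpc_server_call{errored="false",method="Catalog.ListNodes",request_type="read",rpc_type="net/rpc",leader="true",quantile="0.5"} 0.31
```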

Any metric in this section can be turned off with the prefix_filter.
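For example, a telemetry stanza sketch that filters metrics out by prefix; the exact prefixes to block are up to you:

```hcl
telemetry {
  # "-" blocks a metric prefix, "+" explicitly allows one.
  prefix_filter = ["-consul.proxy"]
}
```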

These metrics give insight into the health of the cluster as a whole.

Consul Connect’s built-in proxy is by default configured to log metrics to the same sink as the agent that starts it.

When running in this mode it emits some basic metrics. These will be expanded upon in the future.

All metrics are prefixed with consul.proxy.<proxied-service-id> to distinguish between multiple proxies on a given host. The table below uses web as an example service name for brevity.

Labels

Most metrics have a dst label and some also have a src label. When using metrics sinks and timeseries stores that support labels or tags, these allow aggregating the connections by service name.

Assuming all services are using a managed built-in proxy, you can get a complete overview of both number of open connections and bytes sent and received between all services by aggregating over these metrics.

For example, by aggregating over all upstream (i.e. outbound) connections, which have both src and dst labels, you can get a sum of all the bandwidth in and out of a given service or the total number of connections between two services.

Metrics Reference

| Metric | Description | Unit | Type |
| --- | --- | --- | --- |
| consul.proxy.web.runtime.* | The same Go runtime metrics as documented for the agent above. | mixed | mixed |
| consul.proxy.web.inbound.conns | Shows the current number of connections open from inbound requests to the proxy. Where supported, a dst label is added indicating the service name the proxy represents. | connections | gauge |
| consul.proxy.web.inbound.rx_bytes | Increments by the number of bytes received from an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
| consul.proxy.web.inbound.tx_bytes | Increments by the number of bytes transferred to an inbound client connection. Where supported, a dst label is added indicating the service name the proxy represents. | bytes | counter |
| consul.proxy.web.upstream.conns | Shows the current number of connections open from a proxy instance to an upstream. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | connections | gauge |
| consul.proxy.web.upstream.rx_bytes | Increments by the number of bytes received from an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |
| consul.proxy.web.upstream.tx_bytes | Increments by the number of bytes transferred to an upstream connection. Where supported, a src label is added indicating the service name the proxy represents, and a dst label is added indicating the service name the upstream is connecting to. | bytes | counter |