Configuring TLS on an Existing Cluster
If you’re not using Consul Connect, follow this process:

1. Run a Helm upgrade that enables TLS (`global.tls.enabled: true`) and sets `global.tls.verify` to `false`. This upgrade will trigger a rolling update of the clients, as well as of any other consul-k8s components, such as the sync catalog or client snapshot deployments.
2. Perform a rolling upgrade of the servers, as described in Upgrade Consul Servers.
3. Repeat steps 1 and 2, this time turning on TLS verification by setting `global.tls.verify` to `true`.
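A values sketch for the two phases (assuming the standard consul-helm `global.tls` stanza; the release and chart names in the `helm upgrade` command are placeholders for your own install):

```yaml
# Phase 1 (steps 1 and 2): enable TLS, but don't require it yet,
# so TLS-disabled and TLS-enabled components can still communicate.
global:
  tls:
    enabled: true
    verify: false
```

```yaml
# Phase 2 (repeat of steps 1 and 2): require TLS verification.
global:
  tls:
    enabled: true
    verify: true
```

Each phase would be applied with something like `helm upgrade <release> hashicorp/consul -f values.yaml`, followed by the server rolling upgrade.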
Gradual TLS Rollout with Consul Connect
1. Add a new, identical node pool.
2. Cordon all nodes in the old pool by running `kubectl cordon` to ensure Kubernetes doesn’t schedule any new workloads on those nodes and instead schedules them onto the new nodes, which will shortly be TLS-enabled.
3. Create the following Helm config file for the upgrade:
```yaml
global:
  tls:
    enabled: true
    # This configuration sets `verify_outgoing`, `verify_server_hostname`,
    # and `verify_incoming` to `false` on servers and clients,
    # which allows TLS-disabled nodes to join the cluster.
    verify: false
server:
  updatePartition: <number_of_server_replicas>
client:
  updateStrategy: |
    type: OnDelete
```
In this configuration, we’re setting `server.updatePartition` to the number of server replicas, as described in Upgrade Consul Servers, and `client.updateStrategy` to `OnDelete` to manually trigger an upgrade of the clients.

4. Run `helm upgrade` with the above config file. The upgrade will trigger an update of all components except clients and servers, such as the Consul Connect webhook deployment or the sync catalog deployment. Note that the sync catalog and client snapshot deployments will not be in the `ready` state until the clients on their nodes are upgraded. It is OK to proceed to the next step before they are ready, because Kubernetes will keep the old deployment pods around, so there will be no downtime. At this point, all components (e.g., the Consul Connect webhook and sync catalog) should be running on the new node pool.
5. Redeploy all your Connect-enabled applications. One way to trigger a redeploy is to run `kubectl drain` on the nodes in the old pool. Now that the Connect webhook is TLS-aware, it will add TLS configuration to the sidecar proxy. Also, Kubernetes should schedule these applications onto the new node pool.
6. Perform a rolling upgrade of the servers, as described in Upgrade Consul Servers.
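The rolling upgrade of the servers can be driven by decrementing `server.updatePartition` one step at a time, a sketch assuming three server replicas:

```yaml
# Run `helm upgrade` once per decrement (2, then 1, then 0),
# waiting for each upgraded server pod to rejoin and for the
# cluster to report healthy before the next decrement.
server:
  updatePartition: 2
```

Once `updatePartition` reaches `0`, all servers have been upgraded.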
7. If everything is healthy, delete the old node pool.