Security Warning: This tutorial is not for production use. By default, the chart will install an insecure configuration of Consul. Please refer to the Consul on Kubernetes security documentation to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended to use a properly secured Kubernetes cluster or make sure that you understand and enable the recommended security features.
To follow this tutorial, you will need the Google Cloud SDK (gcloud), as well as kubectl and helm.
Reference the Google Cloud SDK installation instructions and general documentation for setting up gcloud. To initialize the Google Cloud command-line tool to use the Google Cloud SDK, you can run gcloud init:
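$ gcloud init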
Reference the following instructions to download kubectl and helm:
Installing helm and kubectl with Homebrew on macOS
Homebrew allows you to quickly install both Helm and kubectl on macOS and Linux.
Install kubectl with Homebrew.
$ brew install kubernetes-cli
Install helm on macOS with Homebrew.
$ brew install kubernetes-helm
Service account authentication (optional)
You should create a GCP IAM service account and authenticate with it on the command line.
- To learn more, review the GCP IAM service account documentation.
Once you have obtained your GCP IAM service account key file, you can authenticate your local gcloud CLI by running the following:
$ gcloud auth activate-service-account --key-file="<path-to/my-consul-service-account.json>"
Review the Google Kubernetes Engine (GKE) documentation for creating and administering a Kubernetes cluster within GCP. Note: for a quick start, you can also easily create a GKE cluster from the GCP console by clicking “Create Cluster”, using the defaults, and clicking “Create.”
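Alternatively, you can create the cluster from the command line. The following is a minimal sketch that assumes a three-node cluster named my-consul-cluster in the us-west1-b zone of the my-project project (matching the names used in the next step); adjust these values for your environment.
$ gcloud container clusters create my-consul-cluster --zone us-west1-b --num-nodes 3 --project my-project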
Configure kubectl to talk to your cluster
$ gcloud container clusters get-credentials my-consul-cluster --zone us-west1-b --project my-project
You can then run kubectl cluster-info to verify you are connected to your Kubernetes cluster:
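$ kubectl cluster-info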
Kubernetes master is running at https://<your GKE ip(s)>
GLBCDefaultBackend is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<your GKE ip(s)>/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers as well as one client per Kubernetes node into your GKE cluster. You can review the Consul Kubernetes installation documentation to learn more about these installation options.
To customize your deployment, you can pass a YAML file to be used during the deployment; it will override the Helm chart’s default values. The following values change your datacenter name and enable the Consul UI via a LoadBalancer service.
helm-consul-values.yaml

global:
  name: consul
  datacenter: hashidc1
ui:
  enabled: true
  service:
    type: LoadBalancer
Install Consul in your cluster
You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.
$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories
$ helm install --values helm-consul-values.yaml consul hashicorp/consul --version "0.40.0"
Note: You can review the official Helm chart values to learn more about the default settings.
Run the command kubectl get pods to verify three servers and three clients were successfully created.
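$ kubectl get pods

Then, list the services to retrieve the external IP address of the Consul UI load balancer: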
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-dns ClusterIP 10.23.247.215 <none> 53/TCP,53/UDP 3m20s
consul-ui LoadBalancer 10.23.254.244 34.82.43.9 80:30749/TCP 3m20s
kubernetes ClusterIP 10.23.240.1 <none> 443/TCP 110m
You can verify that, in this case, the UI is exposed at http://34.82.43.9 over port 80. Navigate to the load balancer DNS name or external IP in your browser to interact with the Consul UI.
Click the Nodes tab and you can observe several Consul servers and agents running.
In addition to accessing Consul with the UI, you can manage Consul by directly connecting to the pod with kubectl.
You can also use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Feel free to explore the Consul API documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
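For a quick check, you can also reach the HTTP API through the consul-ui load balancer shown earlier, since that service fronts the servers' HTTP port. This example assumes the external IP from the service output above and queries the standard catalog nodes endpoint:
$ curl http://34.82.43.9/v1/catalog/nodes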
To access the pod and data directory, you can remotely execute into the pod with the kubectl exec command to start a shell session.
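For example, a minimal sketch assuming you connect to the first server pod (consul-server-0, as shown in the member list below):
$ kubectl exec --stdin --tty consul-server-0 -- /bin/sh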
This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members.
$ consul members
Node Address Status Type Build Protocol DC Segment
consul-server-0 10.8.1.9:8301 alive server 1.11.2 2 hashidc1 <all>
consul-server-1 10.8.2.4:8301 alive server 1.11.2 2 hashidc1 <all>
consul-server-2 10.8.0.8:8301 alive server 1.11.2 2 hashidc1 <all>
gke-standard-cluster-1-default-pool-60f986c7-19nq 10.8.0.7:8301 alive client 1.11.2 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-q7mn 10.8.1.8:8301 alive client 1.11.2 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-xwz6 10.8.2.3:8301 alive client 1.11.2 2 hashidc1 <default>
When you have finished interacting with the pod, exit the shell.
$ exit
Using Consul environment variables
You can also access the Consul datacenter with your local Consul binary by setting environment variables. You can read more about Consul environment variables in the Consul documentation.
In this case, since you are exposing HTTP via the load balancer/UI service, you can export the CONSUL_HTTP_ADDR variable to point to the load balancer DNS name (or external IP) of your Consul UI service.
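For example, using the external IP and port from the consul-ui service output above:
$ export CONSUL_HTTP_ADDR=http://34.82.43.9:80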
You can now use your local installation of the Consul binary to run Consul commands:
$ consul members
Node Address Status Type Build Protocol DC Segment
consul-server-0 10.8.1.9:8301 alive server 1.11.2 2 hashidc1 <all>
consul-server-1 10.8.2.4:8301 alive server 1.11.2 2 hashidc1 <all>
consul-server-2 10.8.0.8:8301 alive server 1.11.2 2 hashidc1 <all>
gke-standard-cluster-1-default-pool-60f986c7-19nq 10.8.0.7:8301 alive client 1.11.2 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-q7mn 10.8.1.8:8301 alive client 1.11.2 2 hashidc1 <default>
gke-standard-cluster-1-default-pool-60f986c7-xwz6 10.8.2.3:8301 alive client 1.11.2 2 hashidc1 <default>