Security Warning: This tutorial is not for production use. By default, the Helm chart installs an insecure configuration of Consul. Refer to the Kubernetes deployment guide to learn how to secure Consul on Kubernetes in production. Additionally, we recommend using a properly secured Kubernetes cluster, and making sure that you understand and enable the recommended security features.

First, you’ll need to follow the directions for installing kind.

You will also need to install kubectl and helm.


Install kubectl with Homebrew.

  $ brew install kubernetes-cli

Install helm with Homebrew.

  $ brew install kubernetes-helm

Start a Kind cluster

Once kind is installed, you can spin up any number of clusters. By default, kind names your cluster “kind”, but you may name it anything you like by specifying the --name option. This tutorial assumes the cluster is named dc1. Refer to the kind documentation for information about how to specify additional parameters using a YAML configuration file.

  $ kind create cluster --name dc1
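For example, a multi-node cluster can be sketched with a kind configuration file. The file name and node layout below are illustrative, not part of this tutorial's required setup:

```shell
# Hypothetical kind configuration: one control-plane node and one worker.
# kind: Cluster / apiVersion: kind.x-k8s.io/v1alpha4 is kind's own
# cluster-configuration schema.
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF
# You would then create the cluster from it (not run here):
#   kind create cluster --name dc1 --config kind-config.yaml
```

A single-node cluster, as created above, is sufficient for the rest of this tutorial.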

The output will be similar to the following.

  Creating cluster "dc1" ...
  Ensuring node image (kindest/node:v1.18.2) 🖼
  Preparing nodes 📦
  Writing configuration 📜
  Starting control-plane 🕹️
  Installing CNI 🔌
  Installing StorageClass 💾
  Set kubectl context to "kind-dc1"
  You can now use your cluster with:

  kubectl cluster-info --context kind-dc1

  Have a nice day! 👋

Note: kind does not ship with the Kubernetes Dashboard by default. If you wish to install the Kubernetes Dashboard, refer to the Kubernetes Dashboard project for instructions on how to install and view it.

You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. Feel free to review the Consul on Kubernetes installation documentation to learn more about these installation options.

To customize your deployment, you can pass a YAML file to be used during the deployment; it will override the Helm chart’s default values. The chart comes with reasonable defaults; however, you will override a few values to integrate more easily with kind and enable useful features.

Create a custom values file called helm-consul-values.yaml with the following contents. This configuration will:

  • Set the prefix used for all resources in the Helm chart to consul
  • Name the Consul datacenter dc1
  • Configure the datacenter to run only 1 server
  • Enable the Consul UI and expose it via a NodePort
  • Enable Consul service mesh features by setting connectInject.enabled to true
  • Enable Consul service mesh CRDs by setting controller.enabled to true

With Transparent Proxy

  $ cat > helm-consul-values.yaml <<EOF
  global:
    name: consul
    datacenter: dc1
  server:
    replicas: 1
  ui:
    enabled: true
    service:
      type: 'NodePort'
  connectInject:
    enabled: true
  controller:
    enabled: true
  EOF

Without Transparent Proxy

  $ cat > helm-consul-values.yaml <<EOF
  global:
    name: consul
    datacenter: dc1
  server:
    replicas: 1
  ui:
    enabled: true
    service:
      type: 'NodePort'
  connectInject:
    enabled: true
    transparentProxy:
      defaultEnabled: false
  controller:
    enabled: true
  EOF

Note: Transparent proxy has been the default method for service-to-service communication within the service mesh since Consul 1.10. Check out the transparent proxy documentation to learn more.

Install Consul in your cluster

You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.


  $ helm repo add hashicorp https://helm.releases.hashicorp.com
  "hashicorp" has been added to your repositories

  $ helm install --values helm-consul-values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "0.43.0"

Note: You can review the official Helm chart values to learn more about the default settings.

Access the Consul UI

Verify Consul was deployed properly by accessing the Consul UI. First, run kubectl get pods to confirm your Consul resources were successfully created, and find the pod with consul-server in its name.

  $ kubectl get pods --namespace consul
  NAME                                           READY   STATUS    RESTARTS   AGE
  consul-client-26lm7                            1/1     Running   0          62s
  consul-connect-injector-7f5f9b9554-m8cr2       1/1     Running   0          62s
  consul-connect-injector-7f5f9b9554-w2mbz       1/1     Running   0          62s
  consul-controller-559465fd96-m7w2b             1/1     Running   0          62s
  consul-server-0                                1/1     Running   0          62s
  consul-webhook-cert-manager-8595bff784-pj2z6   1/1     Running   0          62s

Now, expose the Consul UI with kubectl port-forward with the consul-server-0 pod name as the target.

  $ kubectl port-forward consul-server-0 --namespace consul 8500:8500

Visit the Consul UI at localhost:8500 in a browser on your development machine. You will observe a list of Consul’s services, nodes, and other resources. Currently, you should only find the consul service listed.

In addition to accessing Consul with the UI, you can manage Consul with the HTTP API or by directly connecting to the pod with kubectl.

To access the pod and its data directory, you can use kubectl exec to start a shell session inside the pod.

  $ kubectl exec --stdin --tty consul-server-0 --namespace consul -- /bin/sh

This allows you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members.

  $ consul members
  Node               Address          Status  Type    Build   Protocol  DC   Partition  Segment
  consul-server-0    10.244.0.8:8301  alive   server  1.11.2  2         dc1  default    <all>
  dc1-control-plane  10.244.0.5:8301  alive   client  1.11.2  2         dc1  default    <default>

When you have finished interacting with the pod, exit the shell.

  $ exit

Consul HTTP API

You can use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Read the documentation to learn more about using the Consul HTTP API with Kubernetes.
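As a sketch, with the port-forward from earlier still running, you could list the service catalog through the HTTP API. The address variable below is an assumption based on the 8500:8500 port-forward; /v1/catalog/services is Consul's catalog endpoint:

```shell
# Assumes `kubectl port-forward consul-server-0 --namespace consul 8500:8500`
# is running, so the agent's HTTP API is reachable on localhost:8500.
CONSUL_HTTP_ADDR="http://localhost:8500"
SERVICES_URL="${CONSUL_HTTP_ADDR}/v1/catalog/services"
# Uncomment once the port-forward is active:
#   curl --silent "${SERVICES_URL}"
echo "${SERVICES_URL}"
```

The /v1/catalog/services endpoint returns a JSON map of registered services; once the demo services below are deployed, you should find counting and dashboard listed in it.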

Deploy services with Kubernetes

Now that you have a running Consul service mesh, you can deploy services to it.

You will now deploy a two-tier application made of a backend data service that returns a number (the counting service), and a frontend dashboard that pulls from the counting service over HTTP and displays the number.

Create a deployment definition, service, and service account for the counting service named counting.yaml.

The counting service definition is the same with or without transparent proxy.

  $ cat > counting.yaml <<EOF
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: counting
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: counting
  spec:
    selector:
      app: counting
    ports:
      - port: 9001
        targetPort: 9001
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: counting
    name: counting
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: counting
    template:
      metadata:
        annotations:
          'consul.hashicorp.com/connect-inject': 'true'
        labels:
          app: counting
      spec:
        containers:
          - name: counting
            image: hashicorp/counting-service:0.0.2
            ports:
              - containerPort: 9001
  EOF

Next, create a deployment definition, service, and service account for the dashboard service named dashboard.yaml. With transparent proxy, the dashboard reaches the counting service through its Kubernetes service name; without it, the upstream is declared with an annotation and reached on localhost.

With Transparent Proxy

  $ cat > dashboard.yaml <<EOF
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: dashboard
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: dashboard
  spec:
    selector:
      app: dashboard
    ports:
      - port: 9002
        targetPort: 9002
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: dashboard
    name: dashboard
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: dashboard
    template:
      metadata:
        annotations:
          'consul.hashicorp.com/connect-inject': 'true'
        labels:
          app: dashboard
      spec:
        containers:
          - name: dashboard
            image: hashicorp/dashboard-service:0.0.4
            ports:
              - containerPort: 9002
            env:
              - name: COUNTING_SERVICE_URL
                value: 'http://counting:9001'
  EOF

Without Transparent Proxy

  $ cat > dashboard.yaml <<EOF
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: dashboard
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: dashboard
  spec:
    selector:
      app: dashboard
    ports:
      - port: 9002
        targetPort: 9002
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: dashboard
    name: dashboard
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: dashboard
    template:
      metadata:
        annotations:
          'consul.hashicorp.com/connect-inject': 'true'
          'consul.hashicorp.com/connect-service-upstreams': 'counting:9001'
        labels:
          app: dashboard
      spec:
        containers:
          - name: dashboard
            image: hashicorp/dashboard-service:0.0.4
            ports:
              - containerPort: 9002
            env:
              - name: COUNTING_SERVICE_URL
                value: 'http://localhost:9001'
  EOF

Use kubectl to deploy the counting service.

  $ kubectl apply -f counting.yaml
  serviceaccount/counting created
  service/counting created
  deployment.apps/counting created

Use kubectl to deploy the dashboard service.

  $ kubectl apply -f dashboard.yaml
  serviceaccount/dashboard created
  service/dashboard created
  deployment.apps/dashboard created

To verify the services were deployed, refresh the Consul UI until you observe that the counting and dashboard services are running.


View the dashboard

To visit the dashboard, use kubectl port-forward to forward port 9002, where the dashboard service is listening, from the dashboard deployment to the same port on your local machine.

  $ kubectl port-forward deploy/dashboard 9002:9002
  Forwarding from 127.0.0.1:9002 -> 9002
  Forwarding from [::1]:9002 -> 9002

Visit localhost:9002 in your web browser. It will display the dashboard UI with a number retrieved from the counting service using Consul service discovery.

Consul intentions provide you the ability to control which services are allowed to communicate. Next, you will use intentions to test the communication between the dashboard and counting services.

You can use a Consul ServiceIntentions CRD to create an intention that prevents the dashboard service from reaching its upstream counting service.

Create a file named deny.yaml that denies communication between the two services.

  $ cat > deny.yaml <<EOF
  apiVersion: consul.hashicorp.com/v1alpha1
  kind: ServiceIntentions
  metadata:
    name: dashboard-to-counting
  spec:
    destination:
      name: counting
    sources:
      - name: dashboard
        action: deny
  EOF

Use kubectl to apply the intention.

  $ kubectl apply -f deny.yaml
  serviceintentions.consul.hashicorp.com/dashboard-to-counting created

Verify the services are no longer allowed to communicate by returning to the dashboard UI. The service will display a message that the “Counting Service is Unreachable”, and the count will display as “-1”.

Allow the application dashboard to communicate with the Counting service

Finally, remove the intention so that the services can communicate again.

  $ kubectl delete -f deny.yaml

Intentions take effect rather quickly. The next time you visit the dashboard UI, you’ll notice that it’s successfully communicating with the backend counting service again.

Next steps

To learn more about Consul service mesh on Kubernetes, review the Consul service mesh documentation. To learn how to deploy Consul on a Kubernetes cluster, review the production deployment tutorial. To learn how to secure Consul and your services for production, read the Consul security tutorial.