Security Warning This tutorial is not for production use. By default, the chart installs an insecure configuration of Consul. Please refer to the Consul on Kubernetes security documentation to determine how you can secure Consul on Kubernetes in production. Additionally, it is highly recommended that you use a properly secured Kubernetes cluster, or make sure that you understand and enable the recommended security features.

To follow this tutorial, you will need the Consul binary installed, as well as kubectl and helm.

Reference the following instructions for setting up the AWS CLI, as well as its general documentation:

Reference the following instructions to download kubectl and helm:

Installing helm and kubectl with Homebrew

Homebrew allows you to quickly install both Helm and kubectl on macOS and Linux.

Install kubectl with Homebrew.
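
For example (the kubectl formula name is an alias for Homebrew's kubernetes-cli):

    $ brew install kubectl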

Install helm with Homebrew.
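
For example:

    $ brew install helm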

    VPC and security group creation

    The AWS documentation for creating an EKS cluster assumes that you have already created a VPC and a dedicated security group. The instructions on how to create these are here:

    You will need the SecurityGroups, VpcId, and SubnetId values for the EKS cluster creation step.
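
    If you created these resources with a CloudFormation stack, one way to retrieve the values is sketched below; the stack name my-eks-vpc is a hypothetical placeholder:

        $ aws cloudformation describe-stacks --stack-name my-eks-vpc --query "Stacks[0].Outputs"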

    At least a three-node EKS cluster is required to deploy Consul using the official Consul Helm chart. Create a three-node cluster on EKS by following the EKS AWS documentation.

    Note: If using eksctl, you can use the following command to create a three-node cluster:

        $ eksctl create cluster --name=<YOUR CLUSTER NAME> --region=<YOUR REGION> --nodes=3

    Configure kubectl to talk to your cluster

        $ aws eks update-kubeconfig --region <region where you deployed your cluster> --name <your cluster name>

    You can then run the command kubectl cluster-info to verify you are connected to your Kubernetes cluster:

        $ kubectl cluster-info
        Kubernetes master is running at https://<your K8s master location>.eks.amazonaws.com
        CoreDNS is running at https://<your CoreDNS location>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    You can also review the documentation for configuring kubectl with EKS here:

    You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods install a total of three Consul servers as well as one client per Kubernetes node into your EKS cluster. You can review the Consul Kubernetes installation documentation to learn more about these installation options.

    To customize your deployment, you can pass a YAML file to be used during the deployment; it will override the Helm chart's default values. The following values change your datacenter name and expose the Consul UI via a service.

    helm-consul-values.yaml
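
    A minimal sketch of this file, assuming the chart's standard value names (the hashidc1 datacenter name matches the consul members output later in this tutorial, and a LoadBalancer service type matches the ELB address used below):

        global:
          name: consul
          datacenter: hashidc1
        ui:
          enabled: true
          service:
            type: LoadBalancer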

    Install Consul in your cluster

    You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.
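
    If you prefer the Consul K8S CLI, a sketch of the equivalent install using the same values file (assuming the consul-k8s binary is installed) is:

        $ consul-k8s install -config-file=helm-consul-values.yaml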


    To install with Helm, first add the HashiCorp Helm repository:

        $ helm repo add hashicorp https://helm.releases.hashicorp.com
        "hashicorp" has been added to your repositories

    Then install Consul using your custom values file:

        $ helm install --values helm-consul-values.yaml consul hashicorp/consul --version "0.40.0"

    Note: You can review the official Helm chart documentation to learn more about the default settings.

    Run the command kubectl get pods to verify three servers and three clients were successfully created.

        $ kubectl get pods
        NAME              READY   STATUS    RESTARTS   AGE
        consul-5fkt7      1/1     Running   0          69s
        consul-8zkjc      1/1     Running   0          69s
        consul-lnr74      1/1     Running   0          69s
        consul-server-0   1/1     Running   0          69s
        consul-server-1   1/1     Running   0          69s
        consul-server-2   1/1     Running   0          69s
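
    You can find the external address of the UI service with kubectl; a sketch, assuming the chart's default consul-ui service name:

        $ kubectl get service consul-ui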

    You can verify that, in this case, the UI is exposed at http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com over port 80. Navigate to the load balancer DNS name or external IP in your browser to interact with the Consul UI.

    Click the Nodes tab and you can observe several Consul servers and agents running.

    Consul UI nodes tab

    In addition to accessing Consul with the UI, you can manage Consul by directly connecting to the pod with kubectl.

    You can also use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Feel free to explore the Consul API documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
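
    For example, a minimal sketch that lists the registered nodes through the exposed UI service, reusing the load balancer address above (/v1/catalog/nodes is a standard Consul HTTP API endpoint):

        $ curl http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com/v1/catalog/nodes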

    To access the pod and data directory, you can use kubectl to remotely execute into the pod and start a shell session.

        $ kubectl exec --stdin --tty consul-server-0 -- /bin/sh

    This will allow you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members:

        $ consul members
        Node                        Address          Status  Type    Build   Protocol  DC        Segment
        consul-server-0             10.0.3.70:8301   alive   server  1.10.3  2         hashidc1  <all>
        consul-server-1             10.0.2.253:8301  alive   server  1.10.3  2         hashidc1  <all>
        consul-server-2             10.0.1.39:8301   alive   server  1.10.3  2         hashidc1  <all>
        ip-10-0-1-139.ec2.internal  10.0.1.148:8301  alive   client  1.10.3  2         hashidc1  <default>
        ip-10-0-2-47.ec2.internal   10.0.2.59:8301   alive   client  1.10.3  2         hashidc1  <default>
        ip-10-0-3-94.ec2.internal   10.0.3.225:8301  alive   client  1.10.3  2         hashidc1  <default>

    When you have finished interacting with the pod, exit the shell.

        $ exit

    Using Consul environment variables

    You can also access the Consul datacenter with your local Consul binary by setting environment variables. You can read more about Consul environment variables here.

    In this case, since you are exposing HTTP via the load balancer/UI service, you can export the CONSUL_HTTP_ADDR variable to point to the load balancer DNS name (or external IP) of your Consul UI service:
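
        # load balancer DNS name taken from the UI service shown earlier
        $ export CONSUL_HTTP_ADDR=http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com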

    You can now use your local installation of the Consul binary to run Consul commands:

        $ consul members
        Node                        Address          Status  Type    Build   Protocol  DC        Partition  Segment
        consul-server-0             10.0.3.70:8301   alive   server  1.10.3  2         hashidc1  default    <all>
        consul-server-1             10.0.2.253:8301  alive   server  1.10.3  2         hashidc1  default    <all>
        consul-server-2             10.0.1.39:8301   alive   server  1.10.3  2         hashidc1  default    <all>
        ip-10-0-1-139.ec2.internal  10.0.1.148:8301  alive   client  1.10.3  2         hashidc1  default    <default>
        ip-10-0-2-47.ec2.internal   10.0.2.59:8301   alive   client  1.10.3  2         hashidc1  default    <default>
        ip-10-0-3-94.ec2.internal   10.0.3.225:8301  alive   client  1.10.3  2         hashidc1  default    <default>