Protect Kubernetes nodes

    Value

    Calico can automatically create host endpoints for your Kubernetes nodes. This means Calico can manage the lifecycle of host endpoints as your cluster evolves, ensuring nodes are always protected by policy.

    Features

    This how-to guide uses the following Calico features:

    • HostEndpoint
    • KubeControllersConfiguration
    • GlobalNetworkPolicy

    Host endpoints

    Each host has one or more network interfaces that it uses to communicate externally. You can represent these interfaces in Calico using host endpoints and then use network policy to secure them.

    Calico host endpoints can have labels, which work the same way as labels on workload endpoints. Network policy rules can apply to both workload and host endpoints using label selectors.

    Automatic host endpoints secure all of the host’s interfaces (i.e. in Linux, all the interfaces in the host network namespace). They are created by setting spec.controllers.node.hostEndpoint.autoCreate in the default KubeControllersConfiguration resource.

    Automatic host endpoints

    Calico creates a wildcard host endpoint for each node, containing the same labels and IP addresses as its corresponding node. Calico ensures these managed host endpoints keep the same labels and IP addresses as their nodes through periodic syncs. This means that policy targeting automatic host endpoints continues to function correctly even if a node’s IPs or labels change over time.

    Automatic host endpoints are differentiated from other host endpoints by the label projectcalico.org/created-by: calico-kube-controllers. Enable or disable automatic host endpoints by configuring the default KubeControllersConfiguration resource.

    How to

    To enable automatic host endpoints, edit the default KubeControllersConfiguration instance, and set spec.controllers.node.hostEndpoint.autoCreate to true:
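    For example, you can patch the resource with calicoctl. A sketch, with the caveat that the exact field value is version-dependent: some Calico releases expect the string value "Enabled" rather than a boolean true:

    ```shell
    calicoctl patch kubecontrollersconfiguration default --patch \
      '{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
    ```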

    If successful, host endpoints are created for each of your cluster’s nodes:

    calicoctl get heps -owide

    The output may look similar to this:

    $ calicoctl get heps -owide
    NAME                                                    NODE                                           INTERFACE   IPS                              PROFILES
    ip-172-16-101-147.us-west-2.compute.internal-auto-hep   ip-172-16-101-147.us-west-2.compute.internal   *           172.16.101.147,192.168.228.128   projectcalico-default-allow
    ip-172-16-101-54.us-west-2.compute.internal-auto-hep    ip-172-16-101-54.us-west-2.compute.internal    *           172.16.101.54,192.168.107.128    projectcalico-default-allow
    ip-172-16-101-79.us-west-2.compute.internal-auto-hep    ip-172-16-101-79.us-west-2.compute.internal    *           172.16.101.79,192.168.91.64      projectcalico-default-allow
    ip-172-16-102-63.us-west-2.compute.internal-auto-hep    ip-172-16-102-63.us-west-2.compute.internal    *           172.16.102.63,192.168.108.192    projectcalico-default-allow

    Apply network policy to automatic host endpoints

    To apply policy that targets all Kubernetes nodes, first add a label to the nodes. The label will be synced to their automatic host endpoints.

    For example, to add the label kubernetes-host to all nodes and their host endpoints:
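    A sketch of such a command, using the kubernetes-host label key from the text (the empty value after = sets a key-only label on every node):

    ```shell
    kubectl label nodes --all kubernetes-host=
    ```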

    And an example policy snippet:

    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: all-nodes-policy
    spec:
      selector: has(kubernetes-host)
      <rest of the policy>

    To select a specific set of host endpoints (and their corresponding Kubernetes nodes), use a policy selector that selects a label unique to that set of host endpoints. For example, if we want to add the label environment=dev to nodes named node1 and node2:

    kubectl label node node1 environment=dev
    kubectl label node node2 environment=dev
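    A policy targeting just those nodes could then select on that label. A sketch (the policy name is a placeholder, and the rules are elided as in the earlier snippet):

    ```yaml
    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: dev-nodes-policy
    spec:
      selector: environment == 'dev'
      <rest of the policy>
    ```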

    Tutorial

    This tutorial locks down Kubernetes node ingress to allow only SSH and the ports required for Kubernetes to function. We will apply two policies: one for the master nodes, and one for the worker nodes.

    Note: This tutorial was tested on a cluster created with kubeadm v1.18.2 on AWS, using a “stacked etcd” topology. Stacked etcd topology means the etcd pods run on the masters; kubeadm uses stacked etcd by default. If your Kubernetes cluster is on a different platform, runs a variant of Kubernetes, or uses a topology with an external etcd cluster, review the required ports for master and worker nodes in your cluster and adjust the policies in this tutorial as needed.

    First, let’s restrict ingress traffic to the master nodes. The ingress policy below contains three rules. The first rule allows access to the API server port from anywhere. The second rule allows all traffic to localhost, which lets Kubernetes access its control plane processes; these include the etcd server client API, the scheduler, and the controller-manager. This rule also allows localhost access to the kubelet API and calico/node health checks. The final rule allows the etcd pods to peer with each other and allows the masters to access each other’s kubelet API.

    If you have not modified the failsafe ports, you should still have SSH access to the nodes after applying this policy. Now apply the ingress policy for the Kubernetes masters:

    calicoctl apply -f - << EOF
    apiVersion: projectcalico.org/v3
    kind: GlobalNetworkPolicy
    metadata:
      name: ingress-k8s-masters
    spec:
      selector: has(node-role.kubernetes.io/master)
      # This rule allows ingress to the Kubernetes API server.
      ingress:
        - action: Allow
          protocol: TCP
          destination:
            ports:
              # kube API server (6443 is the kubeadm default)
              - 6443
        # This rule allows all traffic to localhost.
        - action: Allow
          destination:
            nets:
              - 127.0.0.0/8
        # This rule is required in multi-master clusters where etcd pods are colocated with the masters.
        # Allow the etcd pods on the masters to communicate with each other. 2380 is the etcd peer port.
        # This rule also allows the masters to access the kubelet API on other masters (including itself).
        - action: Allow
          protocol: TCP
          source:
            selector: has(node-role.kubernetes.io/master)
          destination:
            ports:
              - 2380
              - 10250
    EOF

    Note that the above policy selects the standard node-role.kubernetes.io/master label that kubeadm sets on master nodes.

    Next, we need to apply policy to restrict ingress to the Kubernetes workers. Before adding the policy we will add a label to all of our worker nodes, which then gets synced to their automatic host endpoints. For this tutorial we will use the label kubernetes-worker. An example command to add the label to worker nodes:
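    A sketch, assuming worker nodes named node1 and node2 (substitute your own worker node names; the empty value after = sets a key-only label):

    ```shell
    kubectl label node node1 kubernetes-worker=
    kubectl label node node2 kubernetes-worker=
    ```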