Protect hosts tutorial

    In this example we will use pre-DNAT policy, applied to the external interfaces of each cluster node, to disallow incoming traffic from outside the cluster by default, and then to allow incoming traffic to particular NodePorts.

    We use pre-DNAT policy for these purposes, instead of normal host endpoint policy, because pre-DNAT policy is enforced for all incoming traffic through those interfaces, including traffic that is forwarded on to a pod, whereas normal host endpoint policy is not always enforced for forwarded traffic; and because pre-DNAT policy takes effect before kube-proxy’s DNAT, so it can match NodePorts by their advertised port numbers.

    Note: This tutorial is intended to be used with named host endpoints, i.e. host endpoints with interfaceName set to a specific interface name. This tutorial does not work, as-is, with host endpoints with interfaceName: "*".

    Here is the pre-DNAT policy that we need to disallow incoming external traffic in general:
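
    A sketch of what those policies could look like, applied with calicoctl; the policy names and order values match the note below, and the CIDRs are the example values discussed below, so adjust them for your own cluster:

        calicoctl apply -f - <<EOF
        # Allow traffic from cluster-internal IP ranges (nodes and pods).
        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkPolicy
        metadata:
          name: allow-cluster-internal-ingress
        spec:
          order: 10
          preDNAT: true
          applyOnForward: true
          ingress:
            - action: Allow
              source:
                nets: [10.240.0.0/16, 192.168.0.0/16]
          selector: has(host-endpoint)
        ---
        # Deny all other ingress traffic; this applies after the Allow policy
        # because of its higher order value.
        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkPolicy
        metadata:
          name: drop-other-ingress
        spec:
          order: 20
          preDNAT: true
          applyOnForward: true
          ingress:
            - action: Deny
          selector: has(host-endpoint)
        EOF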

    Specifically, these policies allow traffic coming from IP addresses that are known to be cluster-internal, and deny traffic from any other sources. For the cluster-internal IP addresses in this example, we assume 10.240.0.0/16 for the nodes’ own IP addresses, and 192.168.0.0/16 for IP addresses that Kubernetes will assign to pods; obviously you should adjust for the CIDRs that are in use in your own cluster.

    Note: The drop-other-ingress policy has a higher order value than allow-cluster-internal-ingress, so that it applies after allow-cluster-internal-ingress. The explicit policy is needed because there is no automatic default-drop semantic for pre-DNAT policy. There is a default-drop semantic for normal host endpoint policy but—as noted above—normal host endpoint policy is not always enforced.

    We also need policy to allow egress traffic through each node’s external interface. Otherwise, when we define host endpoints for those interfaces, no egress traffic will be allowed from local processes (except for traffic that is allowed by the failsafe rules). Because there is no default-deny rule for forwarded traffic, forwarded traffic will be allowed for host endpoints.
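
    A minimal sketch of such an egress policy, using the same has(host-endpoint) selector as the policies above (the policy name here is illustrative):

        calicoctl apply -f - <<EOF
        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkPolicy
        metadata:
          name: allow-outbound-external
        spec:
          order: 10
          # Normal (non-preDNAT) policy; applyOnForward defaults to false,
          # so this does not affect traffic forwarded from local pods.
          egress:
            - action: Allow
          selector: has(host-endpoint)
        EOF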

    These egress rules are defined as normal host endpoint policies, not pre-DNAT, because pre-DNAT policy does not support egress rules. (This is because pre-DNAT policies are enforced at a point in the Linux networking stack where a packet’s outgoing interface has not yet been determined.) Because these are normal host endpoint policies that do not apply to forwarded traffic (applyOnForward is false), they are not enforced for traffic that is sent from a local pod. The policy above allows applications or server processes running on the nodes themselves (as opposed to in pods) to connect outbound to any destination. If you need to restrict outbound connections to particular IP addresses, you can do that by adding a corresponding destination spec to the egress rule.
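
    For example, a sketch of an egress rule restricted to a particular destination range (the CIDR here is purely illustrative):

        egress:
          - action: Allow
            destination:
              nets: [203.0.113.0/24]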

    Now we can define a host endpoint for the outwards-facing interface of each node. The policies above all have a selector that makes them applicable to any endpoint with a host-endpoint label, so we should include that label in our definitions. For example, for eth0 on a given node:
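
    A sketch of such a host endpoint definition, assuming a node named node1 whose eth0 interface has the address 10.240.0.11 (both values are illustrative):

        calicoctl apply -f - <<EOF
        apiVersion: projectcalico.org/v3
        kind: HostEndpoint
        metadata:
          name: node1-eth0
          labels:
            host-endpoint: ingress   # any value works; the policies match has(host-endpoint)
        spec:
          interfaceName: eth0
          node: node1                # must match the Calico node name
          expectedIPs: ["10.240.0.11"]
        EOF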

    After defining host endpoints for each node, you should find that internal cluster communications are all still working as normal—for example, that you can successfully execute commands like calicoctl get hep and calicoctl get pol—but that it is impossible to connect into the cluster from outside (except for any traffic allowed by the failsafe rules).
    For example, if the cluster includes a Kubernetes Service that is exposed as NodePort 31852, you should find, at this point, that that NodePort works from within the cluster, but not from outside.

    To open a pinhole for that NodePort, for external access, you can configure a pre-DNAT policy like this:
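
    A sketch of such a pinhole policy for NodePort 31852 (the policy name is illustrative; the order value just needs to be lower than that of drop-other-ingress so that it is applied first):

        calicoctl apply -f - <<EOF
        apiVersion: projectcalico.org/v3
        kind: GlobalNetworkPolicy
        metadata:
          name: allow-nodeport
        spec:
          order: 10
          preDNAT: true
          applyOnForward: true
          ingress:
            - action: Allow
              protocol: TCP
              destination:
                ports: [31852]
          selector: has(host-endpoint)
        EOF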

    If you wanted to make that NodePort accessible only through particular nodes, you could achieve that by giving those nodes a particular host-endpoint label:
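
    For example, in the metadata of the host endpoint definitions for those nodes (a sketch; <special-value> stands for whatever label value you choose):

        metadata:
          name: node1-eth0
          labels:
            host-endpoint: <special-value>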

    and then using host-endpoint=='<special-value>' as the selector of the allow-nodeport policy, instead of has(host-endpoint).
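
    That is, the allow-nodeport policy’s spec would then use something like:

        selector: host-endpoint == '<special-value>'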