Calico

    Calico combines flexible networking capabilities with run-anywhere security enforcement to provide a solution with native Linux kernel performance and true cloud-native scalability. Calico provides developers and cluster operators with a consistent experience and set of capabilities whether running in public cloud or on-prem, on a single node or across a multi-thousand node cluster.

To use Calico, specify the following in the cluster spec.
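The snippet itself is missing here; a minimal sketch of the cluster-spec fragment, assuming the standard kOps `networking` section, would be:

```yaml
networking:
  calico: {}
```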

    The following command sets up a cluster using Calico.

```sh
kops create cluster \
  --zones $ZONES \
  --networking calico \
  --yes \
  --name myclustername.mydns.io
```

In order to send network traffic to and from Kubernetes pods, Calico can use either of two networking encapsulation modes: IP-in-IP or VXLAN. Though IP-in-IP encapsulation uses fewer bytes of overhead per packet than VXLAN encapsulation, [VXLAN can be a better choice when used in concert with Calico's eBPF dataplane](https://docs.projectcalico.org/maintenance/troubleshoot/troubleshoot-ebpf#poor-performance). In particular, eBPF programs can redirect packets between Layer 2 devices, but not between devices at Layer 2 and Layer 3, as is required to use IP-in-IP tunneling.

kOps chooses the IP-in-IP encapsulation mode by default, as it remains the Calico project's default choice. This is equivalent to writing the following in the cluster spec:

```yaml
networking:
  calico:
    encapsulationMode: ipip
```

    To use the VXLAN encapsulation mode instead, add the following to the cluster spec:

```yaml
networking:
  calico:
    encapsulationMode: vxlan
```

    As of Calico version 3.17, in order to use IP-in-IP encapsulation, Calico must use its BIRD networking backend, in which it runs the BIRD BGP daemon in each “calico-node” container to distribute routes to each machine. With the BIRD backend Calico can use either IP-in-IP or VXLAN encapsulation between machines. For now, IP-in-IP encapsulation requires maintaining the routes with BGP, whereas VXLAN encapsulation does not. Conversely, with the VXLAN backend, Calico does not run the BIRD daemon and does not use BGP to maintain routes. This rules out use of IP-in-IP encapsulation, and allows only VXLAN encapsulation. Calico may remove this need for BGP with IP-in-IP encapsulation in the future.

    Enable Cross-Subnet mode in Calico

    Calico supports a new option for both of its IP-in-IP and VXLAN encapsulation modes where traffic is only encapsulated when it’s destined to subnets with intermediate infrastructure lacking Calico route awareness, for example, across heterogeneous public clouds or on AWS where traffic is crossing availability zones.

    With this mode, encapsulation is only performed selectively. This provides better performance in AWS multi-AZ deployments, or those spanning multiple VPC subnets within a single AZ, and in general when deploying on networks where pools of nodes with L2 connectivity are connected via a router.

    Read more here:

To enable this mode in a cluster, add the corresponding options to the cluster spec, as shown below.
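In older kOps releases this was expressed with a single boolean flag; the following is a sketch assuming the legacy `crossSubnet` field, which has since been superseded by the explicit per-mode settings:

```yaml
networking:
  calico:
    crossSubnet: true
```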

    In the case of AWS, EC2 instances’ ENIs have source/destination checks enabled by default. When you enable cross-subnet mode in kOps 1.19+, it is equivalent to either:

```yaml
networking:
  calico:
    awsSrcDstCheck: Disable
    ipipMode: CrossSubnet
```

    or

```yaml
networking:
  calico:
    awsSrcDstCheck: Disable
    encapsulationMode: vxlan
```

    depending on which encapsulation mode you have selected.

    Cross-subnet mode is the default mode in kOps 1.22+ for both IP-in-IP and VXLAN encapsulation. It can be disabled or adjusted by setting the ipipMode, vxlanMode and awsSrcDstCheck options.

In AWS, an IAM policy will be added to all nodes to allow Calico to execute ec2:DescribeInstances and ec2:ModifyNetworkInterfaceAttribute, as required when awsSrcDstCheck is set. For older versions of kOps, an addon controller (k8s-ec2-srcdst) will be deployed as a Pod (which will be scheduled on one of the masters) to facilitate the disabling of said source/destination address checks. Only the control plane nodes have an IAM policy to allow k8s-ec2-srcdst to execute ec2:ModifyInstanceAttribute.

The Calico MTU is configurable by editing the cluster and setting the mtu field in the Calico configuration. If left at its default empty value, Calico will inspect the network devices and choose a suitable MTU value automatically. If you decide to override this automatic tuning, specify a positive value for the mtu field. In AWS, VPCs support jumbo frames of size 9,001, so the suitable MTU is either 8,981 for IP-in-IP encapsulation, 8,951 for VXLAN encapsulation, or 8,941 for WireGuard, in each case deducting the appropriate overhead for the encapsulation format.

```yaml
spec:
  networking:
    calico:
      mtu: 8981
```
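The overhead arithmetic above can be sanity-checked with a quick shell sketch (the per-format overhead values are taken from the MTU figures in the text):

```shell
# Start from the AWS jumbo-frame size (9,001) and subtract each
# encapsulation format's per-packet overhead.
JUMBO=9001
echo "IP-in-IP MTU:  $((JUMBO - 20))"   # prints 8981 (20-byte outer IP header)
echo "VXLAN MTU:     $((JUMBO - 50))"   # prints 8951 (50 bytes of VXLAN overhead)
echo "WireGuard MTU: $((JUMBO - 60))"   # prints 8941 (60 bytes of WireGuard overhead)
```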

    Configuring Calico to use Typha

As of kOps 1.12, Calico uses the kube-apiserver as its datastore. The default setup does not make use of Typha, a component intended to lower the impact of Calico on the Kubernetes API server, which is recommended in clusters over 50 nodes and strongly recommended in clusters of 100+ nodes. It is possible to configure Calico to use Typha by editing the cluster and setting the typhaReplicas option to a positive value in the Calico spec.
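A sketch of that spec change, where the replica count of 3 is purely illustrative:

```yaml
networking:
  calico:
    typhaReplicas: 3
```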

For more details on enabling the eBPF dataplane, please refer to the Calico eBPF documentation.

    Enable the eBPF dataplane in kOps—while also disabling use of kube-proxy—as follows:

```yaml
kubeProxy:
  enabled: false
networking:
  calico:
    bpfEnabled: true
```

You can further tune Calico’s eBPF dataplane with additional options, such as enabling DSR mode to eliminate network hops in node port traffic (feasible only when your cluster conforms to Calico’s requirements for DSR mode) or increasing the log verbosity for Calico’s eBPF programs:

```yaml
kubeProxy:
  enabled: false
networking:
  calico:
    bpfEnabled: true
    bpfExternalServiceMode: DSR
    bpfLogLevel: Debug
```

    Note: Transitioning to or from Calico’s eBPF dataplane in an existing cluster is disruptive. kOps cannot orchestrate this transition automatically today.

    Configuring WireGuard

| Introduced | Minimum K8s Version |
|------------|---------------------|
| kOps 1.19  | k8s 1.16            |

Calico supports WireGuard to encrypt pod-to-pod traffic. If you enable this option, WireGuard encryption is automatically enabled for all nodes. At the moment, kOps installs WireGuard automatically only when the host OS is Ubuntu. For other OSes, WireGuard has to be part of the base image or installed via a hook.

For more details on Calico WireGuard, please refer to the Calico docs.

```yaml
networking:
  calico:
    wireguardEnabled: true
```
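For non-Ubuntu nodes, a cluster-spec hook along the following lines could install WireGuard at boot. This is a hedged sketch: the unit name, ordering, package manager, and package name (apt-get and wireguard here) are assumptions that depend on your base image:

```yaml
spec:
  hooks:
  - name: install-wireguard.service
    before:
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/usr/bin/env apt-get install -y wireguard
```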

For help with Calico or to report any issues:

* Calico Users Slack

For more general information on options available with Calico, see the official Calico docs:

* See Calico Network Policy for details on the additional features not available with Kubernetes Network Policy.
* See the Calico networking documentation for help with the network options available with Calico.

Delays in programming routes for new nodes are caused by nodes in the Calico etcd nodestore no longer existing. Due to the ephemeral nature of AWS EC2 instances, new nodes are brought up with different hostnames, and nodes that are taken offline remain in the Calico nodestore. This is unlike most datacentre deployments, where the hostnames are mostly static in a cluster. Read [this issue](https://github.com/kubernetes/kops/issues/3224) for more detail.

• Use kOps to update the cluster and wait for the calico-kube-controllers deployment and calico-node daemonset pods to be updated.
• Decommission all invalid nodes.
• All nodes that are deleted from the cluster after these actions should be cleaned from Calico’s etcd storage, and the delay in programming routes should be solved.
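A hedged sketch of inspecting and cleaning the stale entries manually, assuming calicoctl is installed and configured against the cluster’s datastore (the node name is illustrative):

```shell
# Compare the nodes Calico knows about against the live Kubernetes nodes.
calicoctl get nodes
kubectl get nodes

# Delete a Calico node entry whose EC2 instance no longer exists
# (the hostname below is a made-up example).
calicoctl delete node ip-10-0-1-23.ec2.internal
```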