Get started with VPP networking

    caution

    The VPP dataplane is in beta and should not be used in production clusters. It has had lots of testing and is pretty stable; however, chances are that some bugs are still lurking around (please report any you find on GitHub). In addition, it does not yet support all the features of Calico.

    Value

    The VPP dataplane mode has several advantages over the standard Linux networking pipeline:

    • Scales to higher throughput, especially with WireGuard encryption enabled
    • Further improves encryption performance with IPsec
    • Native support for Kubernetes services without needing kube-proxy, which:
      • Reduces first-packet latency for packets to services
      • Preserves external client source IP addresses all the way to the pod

    The VPP dataplane is entirely compatible with the other Calico dataplanes, meaning you can have a cluster with VPP-enabled nodes along with regular nodes. This makes it possible to migrate a cluster from Linux or eBPF networking to VPP networking.

    In addition, the VPP dataplane offers some specific features for network-intensive applications, such as providing userspace packet interfaces to the pods (instead of regular Linux network devices), or exposing the VPP Host Stack to run optimized L4+ applications in the pods.

    Trying out the beta will give you a taste of these benefits and an opportunity to give feedback to the VPP dataplane team.

    Features

    This how-to guide uses the following Calico features:

    • calico/node
    • VPP dataplane

    The Vector Packet Processor (VPP) is a high-performance, open-source userspace network dataplane written in C, developed under the fd.io umbrella. It supports many standard networking features (L2 switching, L3 routing, NAT, encapsulations), and is easily extensible using plugins. The VPP dataplane uses plugins to efficiently implement Kubernetes services load balancing and Calico policies.

    Operator based installation

    This guide uses the Tigera operator to install Calico. The operator provides lifecycle management for Calico, exposed through the Kubernetes API as a custom resource definition. While it is also technically possible to install Calico and configure it for VPP using manifests directly, only operator-based installations are supported at this stage.
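
    As a rough illustration of that model, the operator watches a cluster-scoped Installation custom resource. The sketch below only shows the general shape of that resource; the manifests referenced later in this guide contain the actual VPP-specific settings, and the CIDR here is just an example value.

      apiVersion: operator.tigera.io/v1
      kind: Installation
      metadata:
        name: default              # the operator acts on the resource named "default"
      spec:
        calicoNetwork:
          ipPools:
            - cidr: 192.168.0.0/16   # example pod network CIDR; adjust for your cluster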

    How to

    This guide details three ways to install Calico with the VPP dataplane:

    • On a managed EKS cluster. This is the option that requires the least configuration.
    • On a managed EKS cluster with the DPDK interface driver. This option is more complex to set up but provides better performance.
    • On any Kubernetes cluster.


    Install Calico with the VPP dataplane on an EKS cluster

    Requirements

    For these instructions, we will use eksctl to provision the cluster. However, you can use any of the methods in Getting Started with Amazon EKS.

    Before you get started, make sure you have downloaded and configured the necessary prerequisites.

    Provision the cluster

    1. First, create an Amazon EKS cluster without any nodes.
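
      eksctl create cluster --name my-calico-cluster --without-nodegroup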

    2. Since this cluster will use Calico for networking, you must delete the aws-node DaemonSet to disable the default AWS VPC networking for the pods.

      kubectl delete daemonset -n kube-system aws-node
    Install and configure Calico with the VPP dataplane

    1. Now that you have an empty cluster configured, you can install the Tigera operator.

      kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml

    2. Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for EKS. For more information on configuration options available in this manifest, see the installation reference.

      note

      Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.

      kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/calico/installation-eks.yaml

    3. Now it is time to install the VPP dataplane components.

      kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/generated/calico-vpp-eks.yaml

    4. Finally, add nodes to the cluster.

      eksctl create nodegroup --cluster my-calico-cluster --node-type t3.medium --node-ami auto --max-pods-per-node 50

        tip

        The --max-pods-per-node option above ensures that EKS does not limit the number of pods that can be scheduled on each node. For the full set of node group options, see eksctl create nodegroup --help.

      Install Calico with the VPP dataplane on an EKS cluster with the DPDK driver

      Requirements

      DPDK provides better performance compared to the standard install, but it requires some additional customizations (hugepages, for instance) on the EKS worker instances. We provide a bash script, init_eks.sh, which takes care of applying the required customizations, and we make use of the preBootstrapCommands property of the eksctl configuration file to execute the script during worker node creation. These instructions require the latest version of eksctl.

      Provision the cluster

      1. First, create an Amazon EKS cluster without any nodes.

        eksctl create cluster --name my-calico-cluster --without-nodegroup
      2. Since this cluster will use Calico for networking, you must delete the aws-node DaemonSet to disable the default AWS VPC networking for the pods.
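
        kubectl delete daemonset -n kube-system aws-node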

      Install and configure Calico with the VPP dataplane

      1. Now that you have an empty cluster configured, you can install the Tigera operator.

        kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
      2. Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for EKS. For more information on configuration options available in this manifest, see the installation reference.

        note

        Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.

        kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/calico/installation-eks.yaml
      3. Now it is time to install the VPP dataplane components.

        kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/generated/calico-vpp-eks-dpdk.yaml
      4. Finally, it is time to add nodes to the cluster. Since we need to customize the nodes for DPDK, we will use an eksctl config file with the preBootstrapCommands property to create the worker nodes. The following command will create a managed nodegroup with 2 t3.large worker nodes in the cluster:

        cat <<EOF | eksctl create nodegroup -f -
        apiVersion: eksctl.io/v1alpha5
        kind: ClusterConfig
        metadata:
          name: my-calico-cluster
          region: us-east-2
        managedNodeGroups:
        - name: my-calico-cluster-ng
          desiredCapacity: 2
          instanceType: t3.large   # matches the t3.large worker nodes described above
          labels: {role: worker}
          preBootstrapCommands:
            - sudo curl -o /tmp/init_eks.sh "https://raw.githubusercontent.com/projectcalico/vpp-dataplane/master/scripts/init_eks.sh"
            - sudo chmod +x /tmp/init_eks.sh
            - sudo /tmp/init_eks.sh
        EOF

        Edit the cluster name, region, and other fields as appropriate for your cluster. If you want to enable SSH access to the EKS worker instances, add the following to the above config file:

        ssh:
          publicKeyPath: <path to public key>

        For details on SSH access, refer to Amazon EC2 key pairs and Linux instances.

      Install Calico with the VPP dataplane on any Kubernetes cluster

      The VPP dataplane has the following requirements:

      Required

      • A blank Kubernetes cluster, where no CNI was ever configured.

      • note

        If you are using kubeadm to create the cluster, please make sure to specify the pod network CIDR using the --pod-network-cidr command-line argument, e.g., sudo kubeadm init --pod-network-cidr=192.168.0.0/16. If 192.168.0.0/16 is already in use within your network you must select a different pod network CIDR.

      Optional

      For some hardware, the following hugepages configuration may enable VPP to use more efficient drivers:

      • At least 512 x 2MB-hugepages are available (grep HugePages_Free /proc/meminfo)

      • The vfio-pci (vfio_pci on CentOS) or uio_pci_generic kernel module is loaded. For example:

        modprobe vfio-pci
        echo "vm.nr_hugepages = 512" >> /etc/sysctl.conf
        sysctl -p
        # restart kubelet to take the changes into account
        # you may need to use a different command depending on how kubelet was installed
        systemctl restart kubelet

      Install Calico and configure it for VPP

      1. Start by installing the Tigera operator on your cluster.
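
        kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml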

      2. Then, you need to configure the Calico installation for the VPP dataplane. The yaml in the link below contains a minimal viable configuration for VPP. For more information on configuration options available in this manifest, see the installation reference.

        note

        Before applying this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to specify the default IP pool CIDR to match your desired pod network CIDR.

        kubectl apply -f https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/calico/installation-default.yaml

      Install the VPP dataplane components

      Start by getting the appropriate yaml manifest for the VPP dataplane resources:

      # If you have configured hugepages on your machines
      curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/generated/calico-vpp.yaml
      # If not (no hugepages configured)
      curl -o calico-vpp.yaml https://raw.githubusercontent.com/projectcalico/vpp-dataplane/v3.24.0/yaml/generated/calico-vpp-nohuge.yaml

      Then locate the calico-vpp-config ConfigMap in this yaml manifest and configure it as follows.

      Required

      • vpp_dataplane_interface is the primary interface that VPP will use. It must be the name of a Linux interface, up and configured with an address. The address configured on this interface must be the node address in Kubernetes (kubectl get nodes -o wide); a quick way to check this is shown after this list.
      • service_prefix is the Kubernetes service CIDR. You can retrieve it by running:
      kubectl cluster-info dump | grep -m 1 service-cluster-ip-range

      If this command doesn’t return anything, you can leave the default value of 10.96.0.0/12.
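
      For example, you can sanity-check the vpp_dataplane_interface value by confirming that the address on that interface matches the node address reported by Kubernetes. This is only a minimal sketch and assumes the uplink interface is eth1; substitute your own interface name.

        # address configured on the uplink interface that VPP will take over
        ip -4 addr show dev eth1
        # should match the INTERNAL-IP column for this node
        kubectl get nodes -o wide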

      Optional

      • vpp_uplink_driver configures how VPP drives the physical interface. The supported values will depend on the interface type. Available values are:
        • "" : will automatically select and try drivers based on interface type and available resources, starting with the fastest
        • af_xdp : use an AF_XDP socket to drive the interface (requires kernel 5.4 or newer)
        • af_packet : use an AF_PACKET socket to drive the interface (not optimized but works everywhere)
        • avf : use the VPP native driver for Intel 700-Series and 800-Series interfaces (requires hugepages)
        • vmxnet3 : use the VPP native driver for VMware virtual interfaces (requires hugepages)
        • virtio : use the VPP native driver for Virtio virtual interfaces (requires hugepages)
        • rdma : use the VPP native driver for Mellanox CX-4 and CX-5 interfaces (requires hugepages)
        • dpdk : use the DPDK interface drivers with VPP (requires hugepages, works with most interfaces)
        • none : do not configure connectivity automatically. This can be used when configuring the interface manually

      Example

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: calico-config
        namespace: calico-vpp-dataplane
      data:
        service_prefix: 10.96.0.0/12
        vpp_dataplane_interface: eth1
        vpp_uplink_driver: ""
        ...

      Apply the configuration

      To apply the configuration, run:

      kubectl apply -f calico-vpp.yaml

      This will install all the resources required by the VPP dataplane in your cluster.
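
      To check that the installation is progressing, you can watch the pods come up. This is a minimal sketch: calico-vpp-dataplane is the namespace used in the example ConfigMap above, and calico-system is the namespace the Tigera operator normally installs Calico into.

        # VPP dataplane pods should become Running on every node
        kubectl get pods -n calico-vpp-dataplane
        # Calico components managed by the operator
        kubectl get pods -n calico-system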

      Next steps

      After installing Calico with the VPP dataplane, you can benefit from the features of the VPP dataplane, such as fast IPsec or WireGuard encryption.

      • See the VPP dataplane configuration documentation to configure and monitor your cluster.
