Getting Started

    Antrea relies on NodeIPAM for per-Node CIDR allocation. NodeIPAM can run within the Kubernetes kube-controller-manager, or within the Antrea Controller.

    NodeIPAM within kube-controller-manager

    When using kubeadm to create the Kubernetes cluster, passing --pod-network-cidr=<CIDR Range for Pods> to kubeadm init will enable NodeIpamController. Clusters created with kubeadm will always have CNI plugins enabled. Refer to the kubeadm documentation for more information about setting up a Kubernetes cluster with kubeadm.
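
    For example, a minimal sketch (the Pod CIDR below is purely illustrative; pick a range that does not overlap with your Node or Service networks):

      kubeadm init --pod-network-cidr=10.244.0.0/16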

    When the cluster is deployed by other means:

    • To enable NodeIpamController, kube-controller-manager should be started with the following flags:

      • --cluster-cidr=<CIDR Range for Pods>
      • --allocate-node-cidrs=true
    • To enable CNI network plugins, kubelet should be started with the --network-plugin=cni flag.

    • To enable masquerading of traffic for Service cluster IP via iptables, kube-proxy should be started with the --cluster-cidr=<CIDR Range for Pods> flag (see the illustrative example after this list).
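
    For illustration only, assuming 10.244.0.0/16 as the Pod CIDR (keep any other flags that your components already use; the values here are examples, not recommendations):

      kube-controller-manager: --cluster-cidr=10.244.0.0/16 --allocate-node-cidrs=true
      kubelet:                 --network-plugin=cni
      kube-proxy:              --cluster-cidr=10.244.0.0/16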

    NodeIPAM within Antrea Controller

    For further information about running NodeIPAM within the Antrea Controller, see the Antrea IPAM documentation.

    Open vSwitch

    As for OVS, when using the kernel module built into the Linux kernel, kernel version >= 4.6 is required. When building the kernel module from the OVS sources instead, OVS version >= 2.6.0 is required.

    Red Hat Enterprise Linux and CentOS 7.x use kernel 3.10, but as changes to the OVS kernel module are regularly backported to these kernels, they should work with Antrea starting with RHEL/CentOS 7.4.

    If a Node does not have a supported OVS kernel module installed, you can install it by following the Open vSwitch installation instructions. Please be aware that the vport-stt module is not in the Linux tree and needs to be built from the OVS sources; build and load it manually before enabling STT tunneling.

    Some experimental features that are disabled by default may have additional requirements; please refer to the Feature Gates documentation to determine whether this applies to you.

    Antrea will work out-of-the-box on most popular Operating Systems. Known issues encountered when running Antrea on specific OSes are documented in the OS-specific known issues document.

    To deploy a released version of Antrea, pick a deployment manifest from the list of Antrea releases. For any given release <TAG> (e.g. v0.1.0), you can deploy Antrea by applying the corresponding manifest, as shown below.
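
    A minimal sketch, assuming the manifest is published as a GitHub release asset named antrea.yml (check the assets of the specific release for the exact file name):

      kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml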

    To deploy the latest version of Antrea (built from the main branch), use the checked-in deployment yaml:

    1. kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml

    Antrea supports some experimental features that can be enabled or disabled; please refer to the Feature Gates documentation for more information.

    Windows support

    If you want to add Windows Nodes to your cluster, please refer to the Windows installation instructions.

    ARM support

    Starting with v1.0, Antrea supports arm64 and arm/v7 Nodes. The installation instructions do not change when some (or all) Linux Nodes in a cluster use an ARM architecture: the same deployment YAML can be used, as the antrea/antrea-ubuntu Docker image is actually a manifest list with support for the amd64, arm64 and arm/v7 architectures.

    Note that while we do run a subset of the Kubernetes conformance tests on both the arm/v7 and arm64 Docker images (using k3s as the Kubernetes distribution), our testing is not as thorough as for the amd64 image. However, we do not anticipate any issue.

    Install with Helm

    Starting with v1.8, Antrea can be installed and updated with Helm. Please refer to the Helm installation instructions.
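
    A minimal sketch, assuming the chart is published as antrea/antrea in the Helm repository at https://charts.antrea.io (verify the repository URL, chart name and target namespace in the Helm installation instructions):

      helm repo add antrea https://charts.antrea.io
      helm repo update
      helm install antrea antrea/antrea --namespace kube-system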

    Deploying Antrea on a Cluster with Existing CNI

    The instructions above only apply when deploying Antrea in a new cluster. If you need to migrate your existing cluster from another CNI plugin to Antrea, you will need to do the following:

    • Delete previous CNI, including all resources (K8s objects, iptables rules, interfaces, …) created by that CNI.
    • Deploy Antrea.
    • Restart all Pods in the CNI network in order for Antrea to set up networking for them. This does not apply to Pods which use the Node’s network namespace (i.e. Pods configured with hostNetwork: true). You may use kubectl drain to drain each Node, or reboot all your Nodes.

    While this is in progress, networking will be disrupted in your cluster. After deleting the previous CNI, existing Pods may not be reachable anymore.

    For example, when migrating from Flannel to Antrea, you will need to do the following:

    1. Delete Flannel with kubectl delete -f <path to your Flannel YAML manifest>.
    2. Delete the Flannel bridge and tunnel interface with ip link delete flannel.1 && ip link delete cni0 on each Node.
    3. Ensure that the requirements described above are satisfied.
    4. Deploy Antrea.
    5. Drain and uncordon Nodes one-by-one. For each Node, run kubectl drain --ignore-daemonsets <node name> && kubectl uncordon <node name>. The --ignore-daemonsets flag will ignore DaemonSet-managed Pods, including the Antrea Agent Pods. If you have any other DaemonSet-managed Pods (besides the Antrea ones and system ones such as kube-proxy), they will be ignored and will not be drained from the Node. Refer to the kubectl drain documentation for more information. Alternatively, you can also restart all the Pods yourself, or simply reboot your Nodes.

    To build the image locally, you can follow the instructions in the Contributor Guide.

    Deploying Antrea in Kind

    To deploy Antrea in a Kind cluster, please refer to the Kind installation guide.

    Deploying Antrea in Minikube

    To deploy Antrea in a Minikube cluster, please refer to the Minikube installation guide.

    Deploying Antrea in AKS, EKS, and GKE

    Antrea can also be deployed to cloud managed Kubernetes services such as AKS, EKS, and GKE clusters; please refer to the corresponding installation guides.

    Deploying Antrea with Custom Certificates

    By default, Antrea generates the certificates needed for itself to run. To provide your own certificates, please refer to Securing Control Plane.

    Antctl: Installation and Usage

    To use antctl, the Antrea command-line tool, please refer to this guide.

    Antrea Network Policy

    Besides Kubernetes NetworkPolicy, Antrea also implements its own Network Policy CRDs, which provide advanced features including policy priority, tiering, deny action, external entity, and policy statistics. For more information on the usage of Antrea Network Policies, refer to the Antrea Network Policy documentation.
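
    As a quick illustration of the deny action, tiering and priority fields, here is a minimal sketch of an Antrea-native NetworkPolicy. It assumes the crd.antrea.io/v1alpha1 API version and the static securityops Tier used by Antrea 1.x; names and labels are hypothetical, and the exact schema for your version should be taken from the Antrea Network Policy documentation:

      apiVersion: crd.antrea.io/v1alpha1
      kind: NetworkPolicy
      metadata:
        name: drop-ingress-from-test       # hypothetical policy name
        namespace: default
      spec:
        priority: 5                         # lower value = higher precedence within the Tier
        tier: securityops                   # one of the static Tiers
        appliedTo:
          - podSelector:
              matchLabels:
                app: web                    # hypothetical label
        ingress:
          - action: Drop                    # deny action, not available in K8s NetworkPolicy
            from:
              - podSelector:
                  matchLabels:
                    app: test               # hypothetical label
            name: drop-from-test            # optional rule name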

    Egress

    Antrea supports specifying which egress (SNAT) IP the traffic from the selected Pods to the external network should use and which Node the traffic should leave the cluster from. For more information, refer to the Egress documentation.

    Network Flow Visibility

    Antrea supports exporting network flow information using IPFIX, and provides a reference cookbook on how to visualize the exported network flows using Elastic Stack and Kibana dashboards. For more information, refer to the Network Flow Visibility documentation.

    NoEncap and Hybrid Traffic Modes

    Besides the default Encap mode, in which Pod traffic across Nodes will be encapsulated and sent over tunnels, Antrea also supports NoEncap and Hybrid traffic modes. In NoEncap mode, Antrea does not encapsulate Pod traffic, but relies on the Node network to route the traffic across Nodes. In Hybrid mode, Antrea encapsulates Pod traffic when the source Node and the destination Node are in different subnets, but does not encapsulate when the source and the destination Nodes are in the same subnet. Refer to the NoEncap and Hybrid traffic modes guide to learn how to configure Antrea with NoEncap or Hybrid mode.
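
    For illustration, the traffic mode is selected through the antrea-agent configuration, which is typically part of the antrea-config ConfigMap in the deployment manifest. A minimal excerpt, assuming the option is named trafficEncapMode as in recent Antrea releases (NoEncap may require additional routing or SNAT settings depending on your Node network):

      antrea-agent.conf: |
        trafficEncapMode: noEncap    # e.g. encap, noEncap, or hybrid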

    Octant UI

    Antrea ships with an Octant UI plugin which can show runtime information of Antrea components and perform Antrea Traceflow operations. Refer to the Octant plugin installation guide to learn how to install Octant and the Antrea plugin.

    OVS Hardware Offload

    Antrea can offload OVS flow processing to NICs that support OVS kernel hardware offload using TC. Hardware offload can improve OVS performance significantly. For more information on how to configure OVS offload, refer to the OVS hardware offload guide.

    Prometheus Metrics

    Antrea supports exporting metrics to Prometheus. For more information, refer to the Prometheus integration documentation.

    Support for Services of type LoadBalancer

    By leveraging Antrea’s Service external IP management feature or configuring MetalLB to work with Antrea, Services of type LoadBalancer can be supported without requiring an external LoadBalancer. For more information, please refer to the Service of type LoadBalancer documentation.

    Traceflow

    Traceflow is a very useful network diagnosis feature in Antrea. It can trace and report the forwarding path of a specified packet in the Antrea network. For usage of this feature, refer to the Traceflow user guide.

    Traffic Encryption

    Antrea supports encrypting traffic between Linux Nodes using IPsec or WireGuard. To deploy Antrea with traffic encryption enabled, please refer to the Traffic Encryption guide.

    Antrea Multi-cluster

    Antrea Multi-cluster implements the Multi-cluster Service API, which allows users to create multi-cluster Services that can be accessed across clusters in a ClusterSet. Antrea Multi-cluster also supports Antrea ClusterNetworkPolicy replication: Multi-cluster admins can define ClusterNetworkPolicies to be replicated across the entire ClusterSet and enforced in all member clusters. To learn more about Antrea Multi-cluster, please refer to the Antrea Multi-cluster documentation.