Service of type LoadBalancer

    Antrea supports two options for implementing Services of type LoadBalancer:

    1. Using Antrea’s built-in external IP management for Services of type LoadBalancer
    2. Leveraging MetalLB

    Since version 1.5, Antrea supports external IP management for Services of type LoadBalancer, which can work together with AntreaProxy or kube-proxy to implement Services of type LoadBalancer without requiring an external load balancer. With the external IP management feature, Antrea can allocate an external IP for a Service of type LoadBalancer from an ExternalIPPool, and select a Node based on the ExternalIPPool’s NodeSelector to host the external IP. Antrea configures the Service’s external IP on the selected Node, so Service requests to the external IP reach that Node, where they are handled by AntreaProxy or kube-proxy and distributed to the Service’s Endpoints. Antrea also implements a Node failover mechanism for Service external IPs: when Antrea detects that a Node hosting an external IP is down, it moves the external IP to another available Node of the ExternalIPPool.

    If you are using kube-proxy in IPVS mode, you need to make sure strictARP is enabled in the kube-proxy configuration. For more information about how to configure kube-proxy, please refer to the Interoperability with kube-proxy IPVS mode section.

    If you are using kube-proxy in iptables mode or AntreaProxy with proxyAll, no extra configuration change is needed.

    Configuration

    Enable Service external IP management feature

    At the moment, external IP management for Services is an alpha feature of Antrea. The ServiceExternalIP feature gate of antrea-agent and antrea-controller must be enabled for the feature to work. You can enable the ServiceExternalIP feature gate in the antrea-config ConfigMap in the Antrea deployment YAML:
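    A minimal sketch of the relevant ConfigMap entries, assuming the standard antrea-config ConfigMap in the kube-system Namespace with antrea-agent.conf and antrea-controller.conf data keys (check the ConfigMap in your own deployment manifest, as the layout may vary across Antrea versions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: antrea-config
      namespace: kube-system
    data:
      antrea-agent.conf: |
        featureGates:
          # Enable external IP management for Services of type LoadBalancer.
          ServiceExternalIP: true
      antrea-controller.conf: |
        featureGates:
          ServiceExternalIP: true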

    The feature works with both AntreaProxy and kube-proxy, including the following configurations (see the antrea-agent.conf sketch after this list):

    • AntreaProxy without proxyAll enabled - this is antrea-agent’s default configuration, in which kube-proxy serves the request traffic for Services of type LoadBalancer (while AntreaProxy handles Service requests from Pods).
    • AntreaProxy with proxyAll enabled - in this case, AntreaProxy handles all Service traffic, including Services of type LoadBalancer.
    • AntreaProxy disabled - kube-proxy handles all Service traffic, including Services of type LoadBalancer.
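    A minimal antrea-agent.conf sketch covering these options; the AntreaProxy feature gate and the antreaProxy.proxyAll option shown here are assumptions based on recent Antrea versions, so check the comments in the antrea-agent.conf of your own deployment YAML for the authoritative option names:

      antrea-agent.conf: |
        featureGates:
          # Set AntreaProxy to false to let kube-proxy handle all Service traffic.
          AntreaProxy: true
        antreaProxy:
          # Set proxyAll to true to let AntreaProxy handle all Service traffic,
          # including Services of type LoadBalancer.
          proxyAll: false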

    Create an ExternalIPPool custom resource

    Service external IPs are allocated from an ExternalIPPool, which defines a pool of external IPs and the set of Nodes to which the external IPs can be assigned. To learn more about ExternalIPPool, please refer to the Egress documentation. The example below defines an ExternalIPPool with the IP range “10.10.0.2 - 10.10.0.10”, and it selects the Nodes with the label “network-role: ingress-node” to host the external IPs:

    apiVersion: crd.antrea.io/v1alpha2
    kind: ExternalIPPool
    metadata:
      name: service-external-ip-pool
    spec:
      ipRanges:
      - start: 10.10.0.2
        end: 10.10.0.10
      nodeSelector:
        matchLabels:
          network-role: ingress-node
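    Assuming the manifest above is saved as external-ip-pool.yaml (a file name chosen here for illustration), you can create the resource and confirm it exists:

    kubectl apply -f external-ip-pool.yaml
    kubectl get externalippools.crd.antrea.io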

    Create a Service of type LoadBalancer

    For Antrea to manage the externalIP for a Service of type LoadBalancer, the Service should be annotated with service.antrea.io/external-ip-pool, set to the name of the ExternalIPPool from which the external IP should be allocated. For example:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        service.antrea.io/external-ip-pool: "service-external-ip-pool"
    spec:
      selector:
        app: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer

    You can also request a particular IP from an ExternalIPPool by setting the loadBalancerIP field in the Service spec to a specific IP that is available in the ExternalIPPool. Antrea will then allocate that IP from the ExternalIPPool for the Service. For example:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        service.antrea.io/external-ip-pool: "service-external-ip-pool"
    spec:
      selector:
        app: MyApp
      loadBalancerIP: "10.10.0.2"
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
      type: LoadBalancer
    Validate Service external IP

    You can validate that the Service is accessible from a client using <external IP>:<port> (10.10.0.2:80/TCP in the example above).
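    For instance, a quick check assuming the example Service above (the exact kubectl output columns vary by version, so they are not reproduced here):

    # The allocated external IP appears in the EXTERNAL-IP column:
    kubectl get service my-service
    # From a client machine on the Node network:
    curl http://10.10.0.2:80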

    As described above, Antrea’s Service externalIP management configures a Service’s external IP on a Node, so that the Node can receive Service requests. This requires the externalIP on the Node to be reachable through the Node network. When the Nodes are connected to a layer 2 subnet, the simplest way to achieve this is to reserve a range of IPs from the Node network subnet and define the Service ExternalIPPool with those reserved IPs. Alternatively, you can manually configure Node network routing (e.g. by adding a static route entry to the underlay router) to route the Service traffic to the Node that hosts the Service’s externalIP.
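    For illustration only, on a Linux-based underlay router the static route could look like the following, where the Node IP 192.168.1.10 is a hypothetical address chosen for this sketch:

    # Route the example Service external IP to the Node currently hosting it
    ip route add 10.10.0.2/32 via 192.168.1.10

    Note that with this approach the route has to be updated if the external IP fails over to a different Node.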

    As of now, Antrea supports Service externalIP management only on Linux Nodes. Windows Nodes are not supported yet.

    Using MetalLB

    MetalLB also implements external IP management for Services of type LoadBalancer, and it can be deployed to a Kubernetes cluster together with Antrea. MetalLB supports two modes - layer 2 mode and BGP mode - to advertise a Service external IP to the Node network. The layer 2 mode is similar to what Antrea external IP management implements, and it has the same limitation that the external IPs must be allocated from the Node network subnet. The BGP mode leverages BGP to advertise external IPs to the Node network router. It does not have the layer 2 subnet limitation, but it requires the Node network to support BGP.

    MetalLB automatically allocates external IPs for every Service of type LoadBalancer, and it sets the allocated IP in the loadBalancer.ingress field of the Service status. MetalLB also supports a user-specified loadBalancerIP in the Service spec. For more information, please refer to the MetalLB usage documentation.

    To learn more about MetalLB concepts and functionality, you can read the MetalLB concepts documentation.

    Install MetalLB

    You can run the following commands to install MetalLB using the YAML manifests:

    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

    The commands will deploy MetalLB version 0.11.0 into the metallb-system Namespace. You can also refer to the MetalLB installation guide for other ways of installing MetalLB.

    Similar to Antrea Service external IP management, MetalLB layer 2 mode also requires kube-proxy’s strictARP configuration to be enabled when you are using kube-proxy in IPVS mode. Please refer to the Interoperability with kube-proxy IPVS mode section for more information.

    Configure MetalLB with layer 2 mode

    MetalLB is configured through a ConfigMap. To configure MetalLB in layer 2 mode, you just need to provide the IP ranges from which external IPs are allocated. The IP ranges should be from the Node network subnet.

    For example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 10.10.0.2-10.10.0.10
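    Assuming the ConfigMap above is saved as metallb-config.yaml (a file name chosen here for illustration), apply it to the cluster:

    kubectl apply -f metallb-config.yaml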

    Configure MetalLB with BGP mode

    The BGP mode of MetalLB requires more configuration parameters to establish BGP peering with the router. The example below configures MetalLB with AS number 64500, connecting to peer router 10.0.0.1 which has AS number 64501:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 10.0.0.1
          peer-asn: 64501
          my-asn: 64500
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 10.10.0.2-10.10.0.10

    In addition to the basic layer 2 and BGP mode configurations described in this document, MetalLB supports a few more advanced BGP configurations and supports configuring multiple IP pools which can use different modes. For more information, please refer to the MetalLB configuration guide.

    Interoperability with kube-proxy IPVS mode

    Both Antrea Service external IP management and MetalLB layer 2 mode require kube-proxy’s strictARP configuration to be enabled in order to work with kube-proxy in IPVS mode. You can check and update the strictARP setting in the kube-proxy ConfigMap as described below.

    You can set strictARP to true by editing the kube-proxy ConfigMap:

    kubectl edit configmap -n kube-system kube-proxy

    Or, simply run the following command to set it:

    $ kubectl get configmap kube-proxy -n kube-system -o yaml | \
      sed -e "s/strictARP: false/strictARP: true/" | \
      kubectl apply -f - -n kube-system

    You can then verify that strictARP has been set to true:

    $ kubectl describe configmap -n kube-system kube-proxy | grep strictARP
      strictARP: true

    The implementation of Antrea Egress before v1.7.0 does not work with the strictARP configuration of kube-proxy, which means Antrea Egress cannot work together with Service external IP management or MetalLB layer 2 mode when kube-proxy IPVS mode is used. This issue was fixed in Antrea v1.7.0, so if you are using Antrea v1.7.0 or later, you can ignore it.