Antrea Network Policy CRDs

    Summary

    Starting with Antrea v1.0, Antrea-native policies are enabled by default, which means that no additional configuration is required in order to use the Antrea-native policy CRDs.

    Tier

    Antrea supports grouping Antrea-native policy CRDs together in a tiered fashion to provide a hierarchy of security policies. This is achieved by setting the field when defining an Antrea-native policy CRD (e.g. an Antrea ClusterNetworkPolicy object) to the appropriate Tier name. Each Tier has a priority associated with it, which determines its relative order among other Tiers.

    Note: K8s NetworkPolicies will be enforced once all policies in all Tiers (except for the baseline Tier) have been enforced. For more information, refer to the Static tiers section below.

    Creating Tiers as CRDs gives users the flexibility to create and delete Tiers as they prefer, i.e. they are not bound to the 5 static tiering options as was the case initially.

    An example Tier might look like this:
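    A minimal Tier definition is sketched below; the name, priority value and description are illustrative:

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: Tier
    metadata:
      name: mytier
    spec:
      priority: 10
      description: "my custom tier"
    ```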

    Tiers have the following characteristics:

    • Policies can associate themselves with an existing Tier by setting the tier field in an Antrea NetworkPolicy CRD spec to the Tier’s name.
    • A Tier must exist before an Antrea-native policy can reference it.
    • Policies associated with higher ordered (low priority value) Tiers are enforced first.
    • No two Tiers can be created with the same priority.
    • Updating the Tier’s priority field is unsupported.
    • Deleting a Tier with existing references from policies is not allowed.

    Static tiers

    Antrea release 0.9.x introduced support for 5 static tiers. These static tiers have been removed in favor of Tier CRDs as mentioned in the previous section. On startup, antrea-controller will create 5 static, read-only Tier CRD resources corresponding to the static tiers for default consumption, as well as a “baseline” Tier CRD object, which will be enforced after developer-created K8s NetworkPolicies. The details for these Tiers are shown below:

    1. Emergency -> Tier name "emergency" with priority "50"
    2. SecurityOps -> Tier name "securityops" with priority "100"
    3. NetworkOps -> Tier name "networkops" with priority "150"
    4. Platform -> Tier name "platform" with priority "200"
    5. Application -> Tier name "application" with priority "250"
    6. Baseline -> Tier name "baseline" with priority "253"

    Any Antrea-native policy CRD referencing a static tier in its spec will now internally reference the corresponding Tier resource, thus maintaining the order of enforcement.

    The static Tier CRD Resources are created as follows in the relative order of precedence compared to K8s NetworkPolicies:

    Emergency > SecurityOps > NetworkOps > Platform > Application > K8s NetworkPolicy > Baseline

    Thus, all Antrea-native Policy resources associated with the “emergency” Tier will be enforced before any Antrea-native Policy resource associated with any other Tier, until a match occurs, in which case the policy rule’s action will be applied. Any Antrea-native Policy resource without a tier name set in its spec will be associated with the “application” Tier. Policies associated with the first 5 static, read-only Tiers, as well as with all the custom Tiers created with a priority value lower than 250 (priority values greater than or equal to 250 are not allowed for custom Tiers), will be enforced before K8s NetworkPolicies.

    Policies created in the “baseline” Tier, on the other hand, will have lower precedence than developer-created K8s NetworkPolicies, which comes in handy when administrators want to enforce baseline policies like “default-deny inter-namespace traffic” for some specific Namespace, while still allowing individual developers to lift the restriction if needed using K8s NetworkPolicies.

    Note that baseline policies cannot counteract the isolated Pod behavior provided by K8s NetworkPolicies. To read more about this Pod isolation behavior, refer to the K8s NetworkPolicy documentation. If a Pod becomes isolated because a K8s NetworkPolicy is applied to it, and the policy does not explicitly allow communications with another Pod, this behavior cannot be changed by creating an Antrea-native policy with an “allow” action in the “baseline” Tier. For this reason, it generally does not make sense to create policies in the “baseline” Tier with the “allow” action.

    kubectl commands for Tier

    The following kubectl commands can be used to retrieve Tier resources:

    ```bash
    # Use long name
    kubectl get tiers
    # Use long name with API Group
    kubectl get tiers.crd.antrea.io
    # Use short name
    kubectl get tr
    # Use short name with API Group
    kubectl get tr.crd.antrea.io
    # Sort output by Tier priority
    kubectl get tiers --sort-by=.spec.priority
    ```

    All the above commands produce output similar to what is shown below:

    ```
    NAME          PRIORITY   AGE
    emergency     50         27h
    securityops   100        27h
    networkops    150        27h
    platform      200        27h
    application   250        27h
    ```

    Antrea ClusterNetworkPolicy

    Antrea ClusterNetworkPolicy (ACNP), one of the two Antrea-native policy CRDs introduced, is a specification of how workloads within a cluster communicate with each other and with other external endpoints. The ClusterNetworkPolicy is meant to aid cluster admins in configuring the security policy for the cluster, unlike K8s NetworkPolicy, which is aimed at developers securing their apps and affects Pods within the Namespace in which the K8s NetworkPolicy is created. Rules belonging to ClusterNetworkPolicies are enforced before any rule belonging to a K8s NetworkPolicy.

    The Antrea ClusterNetworkPolicy resource

    Example ClusterNetworkPolicies might look like these:

    ACNP with stand-alone selectors

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-with-stand-alone-selectors
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              role: db
        - namespaceSelector:
            matchLabels:
              env: prod
      ingress:
        - action: Allow
          from:
            - podSelector:
                matchLabels:
                  role: frontend
            - podSelector:
                matchLabels:
                  role: nondb
              namespaceSelector:
                matchLabels:
                  role: db
          ports:
            - protocol: TCP
              port: 8080
              endPort: 9000
            - protocol: TCP
              port: 6379
          name: AllowFromFrontend
          enableLogging: false
      egress:
        - action: Drop
          to:
            - ipBlock:
                cidr: 10.0.10.0/24
          ports:
            - protocol: TCP
              port: 5978
          name: DropToThirdParty
          enableLogging: true
    ```

    ACNP with ClusterGroup reference

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-with-cluster-groups
    spec:
      priority: 8
      tier: securityops
      appliedTo:
        - group: "test-cg-with-db-selector"  # defined separately with a ClusterGroup resource
      ingress:
        - action: Allow
          from:
            - group: "test-cg-with-frontend-selector"  # defined separately with a ClusterGroup resource
          ports:
            - protocol: TCP
              port: 8080
              endPort: 9000
            - protocol: TCP
              port: 6379
          name: AllowFromFrontend
          enableLogging: false
      egress:
        - action: Drop
          to:
            - group: "test-cg-with-ip-block"  # defined separately with a ClusterGroup resource
          ports:
            - protocol: TCP
              port: 5978
          name: DropToThirdParty
          enableLogging: true
    ```

    ACNP for complete Pod isolation in selected Namespaces

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: isolate-all-pods-in-namespace
    spec:
      priority: 1
      tier: securityops
      appliedTo:
        - namespaceSelector:
            matchLabels:
              app: no-network-access-required
      ingress:
        - action: Drop  # For all Pods in those Namespaces, drop and log all ingress traffic from anywhere
          name: drop-all-ingress
          enableLogging: true
      egress:
        - action: Drop  # For all Pods in those Namespaces, drop and log all egress traffic towards anywhere
          name: drop-all-egress
          enableLogging: true
    ```

    ACNP for strict Namespace isolation

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: strict-ns-isolation
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - namespaceSelector:  # Selects all non-system Namespaces in the cluster
            matchExpressions:
              - {key: kubernetes.io/metadata.name, operator: NotIn, values: [kube-system]}
      ingress:
        - action: Pass
          from:
            - namespaces:
                match: Self  # Skip ACNP evaluation for traffic from Pods in the same Namespace
          name: PassFromSameNS
          enableLogging: false
        - action: Drop
          from:
            - namespaceSelector: {}  # Drop from Pods from all other Namespaces
          name: DropFromAllOtherNS
          enableLogging: true
      egress:
        - action: Pass
          to:
            - namespaces:
                match: Self  # Skip ACNP evaluation for traffic to Pods in the same Namespace
          name: PassToSameNS
          enableLogging: false
        - action: Drop
          to:
            - namespaceSelector: {}  # Drop to Pods from all other Namespaces
          name: DropToAllOtherNS
          enableLogging: true
    ```

    ACNP for default zero-trust cluster security posture

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: default-cluster-deny
    spec:
      priority: 1
      tier: baseline
      appliedTo:
        - namespaceSelector: {}  # Selects all Namespaces in the cluster
      ingress:
        - action: Drop
      egress:
        - action: Drop
    ```

    ACNP for toServices rule

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-drop-to-services
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              role: client
          namespaceSelector:
            matchLabels:
              env: prod
      egress:
        - action: Drop
          toServices:
            - name: svcName
              namespace: svcNamespace
          name: DropToServices
          enableLogging: true
    ```

    ACNP for ICMP traffic

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-reject-ping-request
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              role: server
          namespaceSelector:
            matchLabels:
              env: prod
      egress:
        - action: Reject
          protocols:
            - icmp:
                icmpType: 8
                icmpCode: 0
          name: DropPingRequest
          enableLogging: true
    ```

    ACNP for IGMP traffic
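    No IGMP example survives in this section; the sketch below shows what such a policy could look like, reusing the igmpType values documented later in this page (the policy name, labels and group addresses are illustrative):

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-with-igmp-drop
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              app: mcjoin6
      ingress:
        - action: Drop
          protocols:
            - igmp:
                igmpType: 0x11        # IGMP query
                groupAddress: 224.0.0.1
          name: dropIGMPQuery
      egress:
        - action: Drop
          protocols:
            - igmp:
                igmpType: 0x12        # IGMPv1 Membership Report
                groupAddress: 225.1.2.3
          name: dropIGMPReport
    ```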

    ACNP for multicast egress traffic

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-with-multicast-traffic-drop
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              app: mcjoin6
      egress:
        - action: Drop
          to:
            - ipBlock:
                cidr: 225.1.2.3/32
          name: dropMcastUDPTraffic
    ```

    spec: The ClusterNetworkPolicy spec has all the information needed to define a cluster-wide security policy.

    appliedTo: The appliedTo field at the policy level specifies the grouping criteria of Pods to which the policy applies. Pods can be selected cluster-wide using podSelector. If set with a namespaceSelector, all Pods from Namespaces selected by the namespaceSelector will be selected. Specific Pods from specific Namespaces can be selected by providing both a podSelector and a namespaceSelector in the same appliedTo entry. The appliedTo field can also reference a ClusterGroup resource by setting the ClusterGroup’s name in the group field in place of the stand-alone selectors, or reference a Service by setting the Service’s name and Namespace in the service field in place of the stand-alone selectors. Only a NodePort Service can be referenced by this field; more details can be found in the ApplyToNodePortService section. IPBlock cannot be set in the appliedTo field; an IPBlock ClusterGroup referenced in an appliedTo field will be ignored, and the policy will have no effect. The appliedTo field at the policy level must not be set if appliedTo per rule is used.

    In the first example, the policy applies to Pods which either match the labels “role=db” in all Namespaces, or are from Namespaces which match the labels “env=prod”. The second example policy applies to all network endpoints selected by the “test-cg-with-db-selector” ClusterGroup. The third example policy applies to all Pods in the Namespaces that match the label “app=no-network-access-required”. appliedTo also supports ServiceAccount based selection, which allows users to select Pods by ServiceAccount. More details can be found in the ServiceAccountSelector section.

    priority: The priority field determines the relative priority of the policy among all ClusterNetworkPolicies in the given cluster. This field is mandatory. A lower priority value indicates higher precedence. Priority values can range from 1.0 to 10000.0. Note: policies with the same priority will be enforced in a non-deterministic order; users should therefore take care to use distinct priorities to ensure the behavior they expect.

    tier: The tier field associates an ACNP to an existing Tier. The tier field can be set with the name of the Tier CRD to which this policy must be associated with. If not set, the ACNP is associated with the lowest priority default tier i.e. the “application” Tier.

    action: Each ingress or egress rule of a ClusterNetworkPolicy must have the action field set. As of now, the available actions are [“Allow”, “Drop”, “Reject”, “Pass”]. When the rule action is “Allow” or “Drop”, Antrea will allow or drop traffic which matches the from/to, ports and protocols sections of that rule, given that the traffic does not match a higher precedence rule in the cluster (ACNP rules created in higher ordered Tiers, or policy instances in the same Tier with a lower priority number). If a “Reject” rule is matched, the client initiating the traffic will receive an ICMP “host administratively prohibited” code for ICMP, UDP and SCTP requests, or an explicit reject response for TCP requests, instead of a timeout. A “Pass” rule, on the other hand, skips this packet for further ACNP rule evaluations (all ACNP rules that have lower priority than the current “Pass” rule will be skipped, except for the Baseline Tier rules), and delegates the decision to developer-created namespaced NetworkPolicies. If no NetworkPolicy matches this traffic, the Baseline Tier rules will still be matched against. Note that the “Pass” action does not make sense when configured in Baseline Tier ACNP rules, and such configurations will be rejected by the admission controller. Note: “Pass” and “Reject” actions are not supported for rules applied to multicast traffic.

    ingress: Each ClusterNetworkPolicy may consist of zero or more ordered ingress rules. Under ports, the optional field endPort can only be set when a numerical port is set, to represent a range of ports from port to endPort inclusive. protocols defines additional protocols that are not supported by ports; currently only the ICMP and IGMP protocols can be set under protocols. For the ICMP protocol, icmpType and icmpCode can be used to specify the ICMP traffic that this rule matches. For the IGMP protocol, igmpType and groupAddress can be used to specify the IGMP traffic that this rule matches. Currently, only IGMP query is supported in ingress rules; other IGMP types and multicast data traffic are not supported for ingress rules. The valid igmpType is:

    message type   value
    IGMP query     0x11

    The group address in IGMP query packets can only be 224.0.0.1. Group-Specific IGMP query, which encodes the target group in the IGMP message, is not supported yet because OVS cannot recognize the address. The IGMP protocol cannot be used together with ICMP or with properties like from, to, ports and toServices.

    Also, each rule has an optional name field, which should be unique within the policy and describe the intention of the rule. If name is not provided for a rule, it will be auto-generated by Antrea. The auto-generated name has the format [ingress/egress]-[action]-[uid], e.g. ingress-allow-2f0ed6e, where [uid] is the first 7 characters of the SHA-1 hash value of the rule. If a policy contains duplicate rules, or if a rule name is the same as the auto-generated name of another rule in the same policy, it will cause a conflict, and the policy will be rejected. A ClusterGroup name can be set in the group field of an ingress from section in place of stand-alone selectors to allow traffic from workloads/ipBlocks set in the ClusterGroup.
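    The naming scheme can be illustrated with a short Python sketch. This is not Antrea's actual implementation; the hash input used here (a string representation of the rule) is a stand-in for the controller's internal rule representation:

    ```python
    import hashlib

    def auto_rule_name(direction: str, action: str, rule_repr: str) -> str:
        # Hash an (opaque) representation of the rule and keep the first
        # 7 hex characters, mirroring names like "ingress-allow-2f0ed6e".
        uid = hashlib.sha1(rule_repr.encode()).hexdigest()[:7]
        return f"{direction}-{action.lower()}-{uid}"

    # Produces a name shaped like "ingress-allow-<7 hex chars>"
    print(auto_rule_name("ingress", "Allow", "from=role:frontend,ports=8080"))
    ```
    
    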

    The first example policy contains a single rule, which allows matched traffic on a single port, from one of two sources: the first specified by a podSelector and the second specified by a combination of a podSelector and a namespaceSelector. The second example policy contains a single rule, which allows matched traffic on multiple TCP ports (8080 through 9000 inclusive, plus 6379) from all network endpoints selected by the “test-cg-with-frontend-selector” ClusterGroup. The third example policy contains a single rule, which drops all ingress traffic towards any Pod in Namespaces that have the label app set to no-network-access-required. Note that an empty from in an ingress rule means that the rule matches all ingress sources. The ingress from section also supports ServiceAccount based selection, which allows users to select Pods by ServiceAccount; more details can be found in the ServiceAccountSelector section. Note: the order in which the ingress rules are specified matters, i.e., rules will be enforced in the order in which they are written.

    egress: Each ClusterNetworkPolicy may consist of zero or more ordered egress rules. Each rule, depending on its action field, allows or drops traffic which matches the to, ports and protocols sections. Under ports, the optional field endPort can only be set when a numerical port is set, to represent a range of ports from port to endPort inclusive. protocols defines additional protocols that are not supported by ports; currently only the ICMP and IGMP protocols can be set under protocols. For the ICMP protocol, icmpType and icmpCode can be used to specify the ICMP traffic that this rule matches. For the IGMP protocol, igmpType and groupAddress can be used to specify the IGMP traffic that this rule matches; if igmpType is not set, all reports will be matched, and if groupAddress is empty, all multicast group addresses will be matched. Only IGMP reports are supported in egress rules. The IGMP protocol cannot be used together with ICMP or with properties like from, to, ports and toServices. Valid igmpType values are:

    message type               value
    IGMPv1 Membership Report   0x12
    IGMPv2 Membership Report   0x16
    IGMPv3 Membership Report   0x22

    Also, each rule has an optional name field, which should be unique within the policy and describe the intention of the rule. If name is not provided for a rule, it will be auto-generated by Antrea; the rule name auto-generation process is the same as for ingress rules. A ClusterGroup name can be set in the group field of an egress to section in place of stand-alone selectors to allow traffic to workloads/ipBlocks set in the ClusterGroup. The toServices field contains a list of combinations of Service Namespace and Service Name to match traffic to this Service.

    More details can be found in the toServices section. The first example policy contains a single rule, which drops matched traffic on a single port, to the 10.0.10.0/24 subnet specified by the ipBlock field. The second example policy contains a single rule, which drops matched traffic on TCP port 5978 to all network endpoints selected by the “test-cg-with-ip-block” ClusterGroup. The third example policy contains a single rule, which drops all egress traffic initiated by any Pod in Namespaces that have app set to no-network-access-required. The toServices example policy contains a single rule, which drops traffic from “role: client” labeled Pods from “env: prod” labeled Namespaces to the Service svcNamespace/svcName via ClusterIP. Note that an empty to plus an empty toServices in an egress rule means that the rule matches all egress destinations. The egress to section also supports FQDN based filtering, applied to exact FQDNs or wildcard expressions; more details can be found in the FQDN section. The egress to section also supports ServiceAccount based selection, which allows users to select Pods by ServiceAccount; more details can be found in the ServiceAccountSelector section. Note: the order in which the egress rules are specified matters, i.e., rules will be enforced in the order in which they are written.

    enableLogging: A ClusterNetworkPolicy ingress or egress rule can be audited by enabling its enableLogging field. When the enableLogging field is set to true, the first packet of any connection that matches this rule will be logged to a separate file (/var/log/antrea/networkpolicy/np.log) on the Node on which the rule is applied. These log files can then be retrieved for further analysis. By default, rules are not logged. The example policy logs all traffic that matches the “DropToThirdParty” egress rule, while the rule “AllowFromFrontend” is not logged. Specifically for drop and reject rules, deduplication is applied to reduce duplicate logs, and the deduplication buffer length is set to 1 second. The rules are logged in the following format:

    ```
    <yyyy/mm/dd> <time> <ovs-table-name> <antrea-native-policy-reference> <action> <openflow-priority> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length>

    Deduplication:
    <yyyy/mm/dd> <time> <ovs-table-name> <antrea-native-policy-reference> <action> <openflow-priority> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> [<num of packets> packets in <duplicate duration>]

    Examples:
    2020/11/02 22:21:21.148395 AntreaPolicyAppTierIngressRule AntreaNetworkPolicy:default/test-anp Allow 61800 10.10.1.65 35402 10.0.0.5 80 TCP 60
    2021/06/24 23:56:41.346165 AntreaPolicyEgressRule AntreaNetworkPolicy:default/test-anp Drop 44900 10.10.1.65 35402 10.0.0.5 80 TCP 60 [3 packets in 1.011379442s]
    ```

    Kubernetes NetworkPolicies can also be audited using Antrea logging to the same file (/var/log/antrea/networkpolicy/np.log). Add the Annotation networkpolicy.antrea.io/enable-logging: "true" on a Namespace to enable logging for all NetworkPolicies in the Namespace. Packets of any connection that match a NetworkPolicy rule will be logged with a reference to the NetworkPolicy name, but packets dropped by the implicit “default drop” (not allowed by any NetworkPolicy) will only be logged with the consistent name K8sNetworkPolicy for reference. The rules are logged in the following format:

    ```
    <yyyy/mm/dd> <time> <ovs-table-name> <k8s-network-policy-reference> Allow <openflow-priority> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length>

    Default dropped traffic:
    <yyyy/mm/dd> <time> <ovs-table-name> K8sNetworkPolicy Drop -1 <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> [<num of packets> packets in <duplicate duration>]

    Examples:
    2022/07/26 06:55:56.170456 IngressRule K8sNetworkPolicy:default/test-np-log Allow 190 10.10.1.82 49518 10.10.1.84 80 TCP 60
    2022/07/26 06:55:57.142206 IngressDefaultRule K8sNetworkPolicy Drop -1 10.10.1.83 38608 10.10.1.84 80 TCP 60
    ```

    Fluentd can be used to assist with collecting and analyzing the logs. Refer to the Fluentd cookbook for documentation.

    appliedTo per rule: A ClusterNetworkPolicy ingress or egress rule may optionally contain the appliedTo field. Semantically, the appliedTo field per rule is similar to the appliedTo field at the policy level, except that its scope is the rule itself, as opposed to all rules in the policy, as is the case for appliedTo in the policy spec. If used, the appliedTo field must be set for all the rules in the policy and cannot be set along with appliedTo at the policy level.

    Below is an example of appliedTo-per-rule ACNP usage:

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-appliedto-per-rule
    spec:
      priority: 1
      ingress:
        - action: Drop
          appliedTo:
            - podSelector:
                matchLabels:
                  app: db-restricted-west
          from:
            - podSelector:
                matchLabels:
                  app: client-east
        - action: Drop
          appliedTo:
            - podSelector:
                matchLabels:
                  app: db-restricted-east
          from:
            - podSelector:
                matchLabels:
                  app: client-west
    ```

    Note: In a given ClusterNetworkPolicy, all rules/appliedTo fields must either contain stand-alone selectors or references to ClusterGroup. Usage of ClusterGroups along with stand-alone selectors is not allowed.

    Behavior of to and from selectors

    The following selectors can be specified in an ingress from section or egress to section when defining networking peers for policy rules:

    podSelector: This selects particular Pods from all Namespaces as “sources”, if set in ingress section, or as “destinations”, if set in egress section.

    namespaceSelector: This selects particular Namespaces for which all Pods are grouped as ingress “sources” or egress “destinations”. Cannot be set with namespaces field.

    nodeSelector: This selects particular Nodes in the cluster. The selected Nodes’ IPs will be set as “sources” if nodeSelector is set in the ingress section, or as “destinations” if it is set in the egress section. For more information on its usage, refer to this section.

    namespaces: The namespaces field allows users to perform advanced matching on Namespaces which cannot be done via label selectors. Refer to this section for more details, and this sample yaml for usage.

    group: A group refers to a ClusterGroup to which an ingress/egress peer, or an appliedTo, must resolve. More information on ClusterGroups can be found in the ClusterGroup section.

    serviceAccount: This selects all the Pods which have been assigned a specific ServiceAccount. For more information on its usage, refer to this section.
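    A sketch of serviceAccount-based selection is shown below; the policy name, ServiceAccount names and Namespaces are illustrative:

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-service-account
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - serviceAccount:
            name: sa-1
            namespace: ns-1
      ingress:
        - action: Drop
          from:
            - serviceAccount:
                name: sa-2
                namespace: ns-2
          name: DropFromSA2
    ```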

    ipBlock: This selects particular IP CIDR ranges to allow as ingress “sources” or egress “destinations”. These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.

    fqdn: This selector is applicable only to the to section in an egress block. It is used to select Fully Qualified Domain Names (FQDNs), specified either by exact name or by wildcard expressions, when defining egress rules. For more information on its usage, refer to the FQDN section.
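    A sketch of FQDN-based egress filtering; the policy name, labels, port and domain are illustrative:

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-fqdn-allow
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              app: client
      egress:
        - action: Allow
          to:
            - fqdn: "*.example.com"  # wildcard expression; exact names also work
          ports:
            - protocol: TCP
              port: 443
          name: AllowToExampleCom
    ```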

    Key differences from K8s NetworkPolicy

    • ClusterNetworkPolicy is at the cluster scope, hence a podSelector without any namespaceSelector selects Pods from all Namespaces.
    • There is no automatic isolation of Pods on being selected in appliedTo.
    • Ingress/Egress rules in a ClusterNetworkPolicy have an action field which specifies whether the matched rule allows or drops the traffic.
    • The IPBlock field in ClusterNetworkPolicy rules does not have the except field. A higher priority rule can be written to deny a specific CIDR range to simulate the behavior of an IPBlock field with cidr and except set.
    • Rules assume the priority in which they are written, i.e., a rule set at the top takes precedence over a rule set below it.

    kubectl commands for Antrea ClusterNetworkPolicy

    The following kubectl commands can be used to retrieve ACNP resources:

    ```bash
    # Use long name
    kubectl get clusternetworkpolicies
    # Use long name with API Group
    kubectl get clusternetworkpolicies.crd.antrea.io
    # Use short name
    kubectl get acnp
    # Use short name with API Group
    kubectl get acnp.crd.antrea.io
    ```

    All the above commands produce output similar to what is shown below:

    ```
    NAME       TIER        PRIORITY   AGE
    test-cnp   emergency   5          54s
    ```

    Antrea NetworkPolicy

    Antrea NetworkPolicy (ANP) is another policy CRD, which is similar to the ClusterNetworkPolicy CRD, but its scope is limited to a Namespace. The purpose of introducing this CRD is to allow admins to take advantage of advanced NetworkPolicy features and apply them within a Namespace to complement the K8s NetworkPolicies. Similar to the ClusterNetworkPolicy resource, Antrea NetworkPolicy can also be associated with Tiers.

    The Antrea NetworkPolicy resource

    An example Antrea NetworkPolicy might look like this:

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: NetworkPolicy
    metadata:
      name: test-anp
      namespace: default
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - podSelector:
            matchLabels:
              role: db
      ingress:
        - action: Allow
          from:
            - podSelector:
                matchLabels:
                  role: frontend
            - podSelector:
                matchLabels:
                  role: nondb
              namespaceSelector:
                matchLabels:
                  role: db
          ports:
            - protocol: TCP
              port: 8080
              endPort: 9000
          name: AllowFromFrontend
          enableLogging: false
      egress:
        - action: Drop
          to:
            - ipBlock:
                cidr: 10.0.10.0/24
          ports:
            - protocol: TCP
              port: 5978
          name: DropToThirdParty
          enableLogging: true
    ```

    Antrea NetworkPolicy shares its spec with ClusterNetworkPolicy. The following are some of the key differences between the two Antrea policy CRDs:

    • Antrea NetworkPolicy is Namespaced while ClusterNetworkPolicy operates at cluster scope.
    • Unlike the appliedTo in a ClusterNetworkPolicy, setting a namespaceSelector in the appliedTo field is forbidden.
    • podSelector without a namespaceSelector, set within a NetworkPolicy Peer of any rule, selects Pods from the Namespace in which the Antrea NetworkPolicy is created. This behavior is similar to the K8s NetworkPolicy.
    • Antrea NetworkPolicy supports both stand-alone selectors and Group references.
    • Antrea NetworkPolicy does not support the namespaces field within a peer, as Antrea NetworkPolicies themselves are scoped to a single Namespace.

    Antrea NetworkPolicy with Group reference

    Groups can be referenced in appliedTo and to/from. Refer to the Group section for detailed information.

    The following example Antrea NetworkPolicy realizes the same network policy as the previous example. It refers to three separately defined Groups: “test-grp-with-db-selector”, which selects all Pods labeled “role: db”; “test-grp-with-frontend-selector”, which selects all Pods labeled “role: frontend” and Pods labeled “role: nondb” in Namespaces labeled “role: db”; and “test-grp-with-ip-block”, which selects the ipBlock “10.0.10.0/24”.

    ```yaml
    apiVersion: crd.antrea.io/v1alpha1
    kind: NetworkPolicy
    metadata:
      name: anp-with-groups
      namespace: default
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - group: "test-grp-with-db-selector"
      ingress:
        - action: Allow
          from:
            - group: "test-grp-with-frontend-selector"
          ports:
            - protocol: TCP
              port: 8080
              endPort: 9000
          name: AllowFromFrontend
          enableLogging: false
      egress:
        - action: Drop
          to:
            - group: "test-grp-with-ip-block"
          ports:
            - protocol: TCP
              port: 5978
          name: DropToThirdParty
          enableLogging: true
    ```

    kubectl commands for Antrea NetworkPolicy

    The following kubectl commands can be used to retrieve ANP resources:

    # Use long name with API Group
    kubectl get networkpolicies.crd.antrea.io
    # Use short name
    kubectl get anp
    # Use short name with API Group
    kubectl get anp.crd.antrea.io

    All the above commands produce output similar to what is shown below:

    NAME       TIER          PRIORITY   AGE
    test-anp   securityops   5          5s

    Antrea-native Policy ordering based on priorities

    Antrea-native policy CRDs are ordered based on priorities set at various levels.

    Ordering based on Tier priority

    With the introduction of Tiers, Antrea-native policies are first enforced based on the Tier to which they are associated, i.e. all policies belonging to the highest-precedence Tier are enforced first, followed by policies belonging to the next Tier, and so on, until the “application” Tier policies are enforced. K8s NetworkPolicies are enforced next, and “baseline” Tier policies will be enforced last.

    Ordering based on policy priority

    Within a Tier, Antrea-native policy CRDs are ordered by the priority set at the policy level. Thus, the policy with the highest precedence (the smallest numeric priority value) is enforced first. This ordering is performed solely based on the priority assigned, as opposed to the “Kind” of the resource, i.e. the relative ordering between a ClusterNetworkPolicy resource and an Antrea NetworkPolicy resource within a Tier depends only on the priority set in each of the two resources.

    Rule enforcement based on priorities

    Within a policy, rules are enforced in the order in which they are set. For example, consider the following:

    • ACNP1{tier: application, priority: 10, ingressRules: [ir1.1, ir1.2], egressRules: [er1.1, er1.2]}
    • ANP1{tier: application, priority: 15, ingressRules: [ir2.1, ir2.2], egressRules: [er2.1, er2.2]}
    • ACNP3{tier: emergency, priority: 20, ingressRules: [ir3.1, ir3.2], egressRules: [er3.1, er3.2]}

    This translates to the following order:

    • Ingress rules: ir3.1 -> ir3.2 -> ir1.1 -> ir1.2 -> ir2.1 -> ir2.2
    • Egress rules: er3.1 -> er3.2 -> er1.1 -> er1.2 -> er2.1 -> er2.2

    Once a rule is matched, it is executed based on the action set. If none of the policy rules match, the packet is then evaluated against rules created for K8s NetworkPolicies. If the packet still does not match any K8s NetworkPolicy rule, it will then be evaluated against policies created in the “baseline” Tier.

    The antctl get netpol command with the ‘--sort-by=effectivePriority’ flag can be used to check the order of policy enforcement. An example output will look like the following:

    antctl get netpol --sort-by=effectivePriority
    NAME                                   APPLIED-TO                             RULES   SOURCE                                TIER-PRIORITY   PRIORITY
    4c504456-9158-4838-bfab-f81665dfae12   85b88ddb-b474-5b44-93d3-c9192c09085e   1       AntreaClusterNetworkPolicy:acnp-1     250             1
    41e510e0-e430-4606-b4d9-261424184fba   e36f8beb-9b0b-5b49-b1b7-5c5307cddd83   1       AntreaClusterNetworkPolicy:acnp-2     250             2
    819b8482-ede5-4423-910c-014b731fdba6   bb6711a1-87c7-5a15-9a4a-71bf49a78056   2       AntreaNetworkPolicy:anp-10            250             10
    4d18e031-f05a-48f6-bd91-0197b556ccca   e216c104-770c-5731-bfd3-ff4ccbc38c39   2       K8sNetworkPolicy:default/test-1       <NONE>          <NONE>
    c547002a-d8c7-40f1-bdd1-8eb6d0217a67   e216c104-770c-5731-bfd3-ff4ccbc38c39   1       K8sNetworkPolicy:default/test-2       <NONE>          <NONE>
    aac8b8bc-f3bf-4c41-b6e0-2af1863204eb   bb6711a1-87c7-5a15-9a4a-71bf49a78056   3       AntreaClusterNetworkPolicy:baseline   253             10

    The ovs-pipeline doc contains more information on how policy rules are realized by OpenFlow, and how the priority of flows reflects the order in which they are enforced.

    Selecting Namespace by Name

    Kubernetes NetworkPolicies and Antrea-native policies allow selecting workloads from Namespaces with the use of a label selector (i.e. namespaceSelector). However, it is often desirable to be able to select Namespaces directly by their name as opposed to using the labels associated with the Namespaces.

    K8s clusters with version 1.21 and above

    Starting with K8s v1.21, all Namespaces are labeled with the kubernetes.io/metadata.name: <namespaceName> label provided that the NamespaceDefaultLabelName feature gate (enabled by default) is not disabled in K8s. K8s NetworkPolicy and Antrea-native policy users can take advantage of this reserved label to select Namespaces directly by their name in namespaceSelectors as follows:
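    A minimal sketch of a rule peer that uses this reserved label might look like the following (the Namespace name “foo” is illustrative; the selector can be used anywhere a namespaceSelector is accepted):

```yaml
# Selects all Pods in the Namespace named "foo" as this rule's peers.
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: foo
```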

    Note: NamespaceDefaultLabelName feature gate is scheduled to be removed in K8s v1.24, thereby ensuring that labeling Namespaces by their name cannot be disabled.

    K8s clusters with version 1.20 and below

    In order to select Namespaces by name, Antrea labels Namespaces with a reserved label antrea.io/metadata.name, whose value is set to the Namespace’s name. Users can then use this label in the namespaceSelector field, in both K8s NetworkPolicies and Antrea-native policies to select Namespaces by name. By default, Namespaces are not labeled with the reserved name label. In order for the Antrea controller to label the Namespaces, the labelsmutator.antrea.io MutatingWebhookConfiguration must be enabled. This can be done by applying the following webhook configuration YAML:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      # Do not edit this name.
      name: "labelsmutator.antrea.io"
    webhooks:
      - name: "namelabelmutator.antrea.io"
        clientConfig:
          service:
            name: "antrea"
            path: "/mutate/namespace"
        rules:
          - operations: ["CREATE", "UPDATE"]
            apiGroups: [""]
            apiVersions: ["v1"]
            resources: ["namespaces"]
            scope: "Cluster"
        admissionReviewVersions: ["v1", "v1beta1"]
        sideEffects: None
        timeoutSeconds: 5

    Note: antrea-controller Pod must be restarted after applying this YAML.

    Once the webhook is configured, Antrea will start labeling all new and updated Namespaces with the antrea.io/metadata.name: <namespaceName> label. Users may now use this reserved label to select Namespaces by name as follows:

    apiVersion: crd.antrea.io/v1alpha1
    kind: NetworkPolicy
    metadata:
      name: test-anp-by-name
      namespace: default
    spec:
      priority: 5
      tier: application
      appliedTo:
        - podSelector: {}
      egress:
        - action: Allow
          to:
            - podSelector:
                matchLabels:
                  k8s-app: kube-dns
              namespaceSelector:
                matchLabels:
                  antrea.io/metadata.name: kube-system
          ports:
            - protocol: TCP
              port: 53
            - protocol: UDP
              port: 53
          name: AllowToCoreDNS

    The above example allows all Pods from Namespace “default” to connect to all “kube-dns” Pods from Namespace “kube-system” on TCP port 53.

    Selecting Pods in the same Namespace with Self

    The namespaces field allows users to perform advanced matching on Namespace objects that cannot be done via label selectors. Currently, the namespaces field has only one matching strategy, Self. If set to Self, for each Pod targeted by the appliedTo of the policy/rule, this field will cause the rule to select endpoints in the same Namespace as that Pod. It enables policy writers to create per-Namespace rules within a single policy. This field is optional and cannot be set along with a namespaceSelector within the same peer.

    Consider a minimalistic cluster, where there are only three Namespaces labeled ns=x, ns=y and ns=z. Inside each of these Namespaces, there are three Pods labeled app=a, app=b and app=c.

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: allow-self-ns
    spec:
      priority: 1
      tier: platform
      appliedTo:
        - namespaceSelector: {}
      ingress:
        - action: Allow
          from:
            - namespaces:
                match: Self
        - action: Deny
      egress:
        - action: Allow
          to:
            - namespaces:
                match: Self
        - action: Deny

    The policy above ensures that x/a, x/b and x/c can communicate with each other, but nothing else (unless there are higher-precedence policies which say otherwise). The same applies to Namespaces y and z.

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: deny-self-ns-a-to-b
    spec:
      priority: 1
      tier: securityops
      appliedTo:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              app: b
      ingress:
        - action: Deny
          from:
            - namespaces:
                match: Self
              podSelector:
                matchLabels:
                  app: a

    The deny-self-ns-a-to-b policy ensures that traffic from x/a to x/b, from y/a to y/b and from z/a to z/b is denied. It can be used in conjunction with the allow-self-ns policy. If both policies are applied, the only other Pod that x/a can reach in the cluster will be Pod x/c.

    These two policies shown above are for demonstration purposes only. For more realistic usage of the namespaces field, refer to this YAML in the previous section.

    FQDN based filtering

    Antrea-native policies feature an fqdn field in egress rules to select Fully Qualified Domain Names (FQDNs), specified either by exact FQDN name or by wildcard expressions.

    The standard Allow, Drop and Reject actions apply to FQDN egress rules.

    An example policy using FQDN based filtering could look like this:

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-fqdn-all-foobar
    spec:
      priority: 1
      appliedTo:
        - podSelector:
            matchLabels:
              app: client
      egress:
        - action: Allow
          to:
            - fqdn: "*foobar.com"
          ports:
            - protocol: TCP
              port: 8080
        - action: Drop  # Drop all other egress traffic, in-cluster or out-of-cluster

    Note that for FQDN wildcard expressions, the * character can match multiple subdomains (i.e. *foobar.com will match foobar.com, www.foobar.com and test.uswest.foobar.com).

    Antrea will only program datapath rules for actual egress traffic towards these FQDNs, based on DNS results. It will not interfere with DNS packets, unless there is a separate policy dropping/rejecting communication between the DNS components and the Pods selected.

    Note that FQDN based policies do not work for Service DNS names (e.g. kubernetes.default.svc or antrea.kube-system.svc), except for headless Services. The reason is that Antrea uses the information included in A or AAAA DNS records to implement FQDN based policies. In the case of “normal” (not headless) Services, the DNS name resolves to the ClusterIP of the Service, but policy rules are enforced after AntreaProxy Service load-balancing, and at that stage the destination IP address has already been rewritten to the address of an Endpoint backing the Service. For headless Services, a ClusterIP is not allocated and, assuming the Service has a selector, the DNS server returns A / AAAA records that point directly to the Endpoints. In that case, FQDN based policies can be used successfully. For example, the following policy, which specifies an exact match on a DNS name, will drop all egress traffic destined to the headless Service svcA defined in the default Namespace:

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-fqdn-headless-service
    spec:
      priority: 1
      appliedTo:
        - podSelector:
            matchLabels:
              app: client
      egress:
        - action: Drop
          to:
            - fqdn: "svcA.default.svc.cluster.local"

    Node Selector

    NodeSelector selects certain Nodes which match the label selector. When used in the to field of an egress rule, it adds the Node IPs to the rule’s destination address group; when used in the from field of an ingress rule, it adds the Node IPs to the rule’s source address group.

    Notice that when a rule with a nodeSelector applies to a Node, it only restricts the traffic to/from certain IPs of the Node. The IPs include:

    1. The Node IP (the IP address in the Node API object)
    2. The Antrea gateway IP (the IP address of the interface antrea-agent will create and use for Node-to-Pod communication)
    3. The transport IP (the IP address of the interface used for tunneling or routing the traffic across Nodes) if it’s different from Node IP

    Traffic to/from other IPs of the Node will be ignored. Meanwhile, NodeSelector does not affect traffic from a Node to the Pods running on that Node. Such traffic is always allowed, to make sure that liveness and readiness probes keep working. For more information, see https://github.com/antrea-io/antrea/pull/104.

    For example, the following rule applies to Pods with the label app=antrea-test-app and will Drop egress traffic to TCP port 6443 on Nodes that have the label node-role.kubernetes.io/control-plane.

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: egress-control-plane
    spec:
      priority: 1
      appliedTo:
        - podSelector:
            matchLabels:
              app: antrea-test-app
      egress:
        - action: Drop
          to:
            - nodeSelector:
                matchLabels:
                  node-role.kubernetes.io/control-plane: ""
          ports:
            - protocol: TCP
              port: 6443

    toServices egress rules

    A combination of Service name and Service Namespace can be used in toServices in egress rules to refer to a K8s Service. toServices matches traffic based on the clusterIP, port and protocol of Services; as a result, headless Services are not supported by this field.

    Since toServices represents a combination of IP+port, it cannot be used with to or ports within the same egress rule. Also, since the matching process relies on the groupID assigned to Service by AntreaProxy, this field can only be used when AntreaProxy is enabled.
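    A minimal sketch of a toServices egress rule might look like the following (the policy, Service and Namespace names are illustrative):

```yaml
apiVersion: crd.antrea.io/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: acnp-drop-to-services  # illustrative policy name
spec:
  priority: 5
  tier: securityops
  appliedTo:
    - podSelector:
        matchLabels:
          app: client          # illustrative label
  egress:
    - action: Drop
      toServices:              # cannot be combined with `to` or `ports`
        - name: svc-1          # illustrative Service name
          namespace: ns-1      # illustrative Namespace
```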

    This clusterIP-based match has one caveat: direct access to the Endpoints of this Service is not affected by toServices rules. To restrict access towards backend Endpoints of a Service, define a ClusterGroup with ServiceReference and use the name of the ClusterGroup in the Antrea-native policy rule’s group field instead. ServiceReference of a ClusterGroup is equivalent to a podSelector of a ClusterGroup that selects all backend Pods of a Service, based on the Service spec’s matchLabels. Antrea will keep the Endpoint selection up-to-date in case the Service’s matchLabels change, or Endpoints are added/deleted for that Service. For more information on ServiceReference, refer to the serviceReference paragraph of the ClusterGroup CRD section.

    ServiceAccount based selection

    Antrea ClusterNetworkPolicy features a serviceAccount field to select all Pods that have been assigned the ServiceAccount referenced in this field. This field can be used in the appliedTo section, as well as in ingress from and egress to sections. Regardless of where it is used, the serviceAccount field cannot be combined with any other fields.

    serviceAccount uses namespace and name to select the ServiceAccount with a specific name in a specific Namespace.

    An example policy using serviceAccount could look like this:

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-service-account
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - serviceAccount:
            name: sa-1
            namespace: ns-1
      egress:
        - action: Drop
          to:
            - serviceAccount:
                name: sa-2
                namespace: ns-2
          name: ServiceAccountEgressRule
          enableLogging: false

    In this example, the policy will be applied to all Pods whose ServiceAccount is sa-1 in Namespace ns-1; let’s call those Pods “appliedToPods”. The egress to section selects all Pods whose ServiceAccount is sa-2 in Namespace ns-2; let’s call those Pods “egressPods”. After this policy is applied, traffic from “appliedToPods” to “egressPods” will be dropped.

    Note: Antrea uses a reserved label key for internal processing of serviceAccount. The reserved label looks like: internal.antrea.io/service-account:[ServiceAccountName]. Users should avoid using this label key on any entities, whether or not a policy with serviceAccount is applied in the cluster.

    Apply to NodePort Service

    Antrea ClusterNetworkPolicy features a service field in the appliedTo field to enforce ACNP rules on traffic from external clients to a NodePort Service.

    service uses namespace and name to select the Service with a specific name in a specific Namespace; only a NodePort Service can be referred to by the service field.

    There are a few restrictions on configuring a policy/rule that applies to NodePort Services:

    1. This feature can only work when Antrea proxyAll is enabled and kube-proxy is disabled.
    2. The service field cannot be used with any other fields in appliedTo.
    3. A policy or a rule cannot be applied to both a NodePort Service and other entities at the same time.
    4. If an appliedTo with service is used at the policy level, then the policy can only contain ingress rules.
    5. If an appliedTo with service is used at the rule level, then the rule can only be an ingress rule.
    6. If an ingress rule is applied to a NodePort Service, then the rule can only use ipBlock in its from field.

    An example policy using service in appliedTo could look like this:

    apiVersion: crd.antrea.io/v1alpha1
    kind: ClusterNetworkPolicy
    metadata:
      name: acnp-deny-external-client-nodeport-svc-access
    spec:
      priority: 5
      tier: securityops
      appliedTo:
        - service:
            name: svc-1
            namespace: ns-1
      ingress:
        - action: Drop
          from:
            - ipBlock:
                cidr: 1.1.1.0/24

    In this example, the policy will be applied to the NodePort Service svc-1 in Namespace ns-1, and drop all packets from CIDR 1.1.1.0/24.

    ClusterGroup

    A ClusterGroup (CG) CRD is a specification of how workloads are grouped together. It allows admins to group Pods using traditional label selectors, which can then be referenced in ACNP in place of stand-alone podSelector and/or namespaceSelector. In addition to podSelector and namespaceSelector, ClusterGroup also supports the following ways to select endpoints:

    • Pod grouping by serviceReference. ClusterGroup specified by serviceReference will contain the same Pod members that are currently selected by the Service’s selector.
    • ipBlock or ipBlocks to share IPBlocks between ACNPs.
    • childGroups to select other ClusterGroups by name.

    ClusterGroups allow admins to separate the concern of grouping of workloads from the security aspect of Antrea-native policies. It adds another level of indirection allowing users to update group membership without having to update individual policy rules.

    ClusterGroup CRD

    Below are some example ClusterGroup specs:

    apiVersion: crd.antrea.io/v1alpha3
    kind: ClusterGroup
    metadata:
      name: test-cg-sel
    spec:
      podSelector:
        matchLabels:
          role: db
      namespaceSelector:
        matchLabels:
          env: prod
    ---
    apiVersion: crd.antrea.io/v1alpha3
    kind: ClusterGroup
    metadata:
      name: test-cg-ip-block
    spec:
      # ipBlocks cannot be set along with podSelector, namespaceSelector or serviceReference.
      ipBlocks:
        - cidr: 10.0.10.0/24
    ---
    apiVersion: crd.antrea.io/v1alpha3
    kind: ClusterGroup
    metadata:
      name: test-cg-svc-ref
    spec:
      # serviceReference cannot be set along with podSelector, namespaceSelector or ipBlocks.
      serviceReference:
        name: test-service
        namespace: default
    ---
    apiVersion: crd.antrea.io/v1alpha3
    kind: ClusterGroup
    metadata:
      name: test-cg-nested
    spec:
      childGroups: [test-cg-sel, test-cg-ip-block, test-cg-svc-ref]

    There are a few restrictions on how ClusterGroups can be configured:

    • A ClusterGroup is a cluster-scoped resource and therefore can only be set in an Antrea ClusterNetworkPolicy’s appliedTo and to/from peers.
    • For the childGroup field, currently only one level of nesting is supported: If a ClusterGroup has childGroups, it cannot be selected as a childGroup by other ClusterGroups.
    • ClusterGroup must exist before another ClusterGroup can select it by name as its childGroup. A ClusterGroup cannot be deleted if it is referred to by other ClusterGroup as childGroup. This restriction may be lifted in future releases.
    • At most one of podSelector, serviceReference, ipBlock, ipBlocks or childGroups can be set for a ClusterGroup, i.e. a single ClusterGroup can either group workloads, represent IP CIDRs or select other ClusterGroups. A parent ClusterGroup can select different types of ClusterGroups (Pod/Service/CIDRs), but as mentioned above, it cannot select a ClusterGroup that has childGroups itself.

    spec: The ClusterGroup spec has all the information needed to define a cluster-wide group.

    • podSelector: Pods can be grouped cluster-wide using podSelector. If set with a namespaceSelector, all matching Pods from Namespaces selected by the namespaceSelector will be grouped.

    • namespaceSelector: All Pods from Namespaces selected by the namespaceSelector will be grouped. If set with a podSelector, all matching Pods from Namespaces selected by the namespaceSelector will be grouped.

    • ipBlock: This selects a particular IP CIDR range to allow as ingress “sources” or egress “destinations”. A ClusterGroup with ipBlock referenced in an ACNP’s appliedTo field will be ignored, and the policy will have no effect. For the same ClusterGroup, ipBlock and ipBlocks cannot be set concurrently. ipBlock will be deprecated in favor of ipBlocks in future versions of ClusterGroup.

    • ipBlocks: This selects a list of IP CIDR ranges to allow as ingress “sources” or egress “destinations”. A ClusterGroup with ipBlocks referenced in an ACNP’s appliedTo field will be ignored, and the policy will have no effect. For the same ClusterGroup, ipBlock and ipBlocks cannot be set concurrently.

    • serviceReference: Pods that serve as the backend for the specified Service will be grouped. Services without selectors are currently not supported, and will be ignored if referred by serviceReference in a ClusterGroup. When ClusterGroups with serviceReference are used in ACNPs as appliedTo or to/from peers, no Service port information will be automatically assumed for traffic enforcement. ServiceReference is merely a mechanism to group Pods and ensure that a ClusterGroup stays in sync with the set of Pods selected by a given Service.

    • childGroups: This selects existing ClusterGroups by name. The effective members of the “parent” ClusterGroup will be the union of all its childGroups’ members. See the section above for restrictions.

    status: The ClusterGroup status field determines the overall realization status of the group.

    • groupMembersComputed: The “GroupMembersComputed” condition is set to “True” when the controller has calculated all the corresponding workloads that match the selectors set in the group.
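    As a sketch, the status of a fully realized ClusterGroup might then look like the following (the condition follows the standard K8s condition convention of type and status fields; only the fields described above are shown):

```yaml
status:
  conditions:
    - type: GroupMembersComputed
      status: "True"
```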

    kubectl commands for ClusterGroup

    The following kubectl commands can be used to retrieve CG resources:

    # Use long name with API Group
    kubectl get clustergroups.crd.antrea.io
    # Use short name
    kubectl get cg
    # Use short name with API Group
    kubectl get cg.crd.antrea.io

    Group

    A Group CRD represents a different way of specifying how workloads are grouped together, and is conceptually similar to the ClusterGroup CRD. Users can refer to Groups in Antrea NetworkPolicy resources instead of specifying Pod and Namespace selectors every time.

    Group CRD

    Below are some example Group specs:
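    The specs below are a sketch, assuming the Group CRD shares the crd.antrea.io/v1alpha3 API version with ClusterGroup; the names and selectors are illustrative, except test-grp-with-namespace, which is the Group referenced in the restrictions described in this section:

```yaml
apiVersion: crd.antrea.io/v1alpha3
kind: Group
metadata:
  name: test-grp-sel                # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
---
apiVersion: crd.antrea.io/v1alpha3
kind: Group
metadata:
  name: test-grp-with-namespace    # includes a namespaceSelector, so it cannot be
  namespace: default               # used in an Antrea NetworkPolicy appliedTo
spec:
  podSelector:
    matchLabels:
      role: db
  namespaceSelector:
    matchLabels:
      env: prod
```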

    Group has a spec similar to ClusterGroup. However, there are key differences and restrictions.

    • A Group can be set in an Antrea NetworkPolicy’s appliedTo and to/from peers. When set in the appliedTo field, it cannot include namespaceSelector, since Antrea NetworkPolicy is Namespace scoped. For example, the test-grp-with-namespace Group in the sample cannot be used by Antrea NetworkPolicy appliedTo.
    • Antrea will not validate the referenced Group resources for the appliedTo convention; if the convention is violated in the Antrea NetworkPolicy’s appliedTo section or in any of the rules’ appliedTo, Antrea will report a Realizable=False condition in the NetworkPolicy status, with the NetworkPolicyAppliedToUnsupportedGroup reason and a detailed message.
    • childGroups only accepts strings, and they will be considered as names of the Groups and will be looked up in the policy’s own Namespace. For example, if child Group child-0 exists in ns-2, it should not be added as a child Group for ns-1/parentGroup-0.

    kubectl commands for Group

    The following kubectl commands can be used to retrieve Group resources:

    # Use long name with API Group
    kubectl get groups.crd.antrea.io
    # Use short name
    kubectl get grp
    # Use short name with API Group
    kubectl get grp.crd.antrea.io

    RBAC

    Antrea-native policy CRDs are meant for admins to manage the security of their cluster. Thus, access to manage these CRDs must be granted to subjects which have the authority to outline the security policies for the cluster and/or Namespaces. On cluster initialization, Antrea grants permission to edit these CRDs to the admin and edit ClusterRoles. In addition, Antrea grants permission to view these CRDs to the view ClusterRole. Cluster admins can therefore grant these ClusterRoles to any subject who may be responsible for managing the Antrea policy CRDs. The admins may also decide to grant the view ClusterRole to a wider range of subjects to allow them to read the policies that may affect their workloads. Similar RBAC is applied to the ClusterGroup resource.
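    For example, read-only access to the policy CRDs can be granted to a subject through a standard ClusterRoleBinding to the built-in view ClusterRole (the binding and subject names below are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: antrea-policy-viewer   # illustrative name
subjects:
  - kind: User
    name: jane                 # illustrative subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                   # built-in aggregated ClusterRole
  apiGroup: rbac.authorization.k8s.io
```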

    Notes and constraints

    • There is a soft limit of 20 on the number of supported Tier resources. For optimal performance, it is recommended that the number of Tiers in a cluster be 10 or fewer.
    • In order to reduce the churn in the agent, it is recommended to set the policy priority (acnp/anp.spec.priority) within the range 1.0 to 100.0.
    • The v1alpha1 policy CRDs support up to 10,000 unique priorities at policy level, and up to 50,000 unique priorities at rule level, across all Tiers except for the “baseline” Tier. For any two Antrea-native policy rules, their rule level priorities are only considered equal if their policy objects share the same Tier and have the same policy priority, plus the rules themselves are of the same rule priority (rule priority is the sequence number of the rule within the policy’s ingress or egress section).
    • For the “baseline” Tier, the max supported unique priorities (at rule level) is 150.
    • If multiple Antrea-native policy rules are created at the same rule-level priority (same policy Tier, policy priority and rule priority) and happen to select overlapping traffic patterns but have conflicting rule actions (e.g. Allow vs. Deny), the behavior of such traffic will be nondeterministic. In general, we recommend against creating rules with conflicting actions in policy resources at the same priority. For example, consider two Antrea NetworkPolicies created in the same Namespace and Tier with the same policy priority. The first policy applies to all app=web Pods in the Namespace and has only one ingress rule, which is to Deny all traffic from role=dev Pods. The other policy also applies to all app=web Pods in the Namespace and has only one ingress rule, which is to Allow all traffic from app=client Pods. Those two ingress rules might not always conflict, but if a Pod with both the app=client and role=dev labels initiates traffic towards the Pods in the Namespace, both rules will be matched at the same priority with conflicting actions. It is the policy writer’s responsibility to identify such ambiguities in rule definitions and avoid potentially nondeterministic rule enforcement results.