Deploying an egress router pod in redirect mode

    The egress router implementation uses the egress router Container Network Interface (CNI) plugin.

    Define the configuration for an egress router pod in an egress router custom resource. The following YAML describes the fields for the configuration of an egress router in redirect mode:

    Example egress router specification

    apiVersion: network.operator.openshift.io/v1
    kind: EgressRouter
    metadata:
      name: egress-router-redirect
    spec:
      networkInterface: {
        macvlan: {
          mode: "Bridge"
        }
      }
      addresses: [
        {
          ip: "192.168.12.99/24",
          gateway: "192.168.12.1"
        }
      ]
      mode: Redirect
      redirect: {
        redirectRules: [
          {
            destinationIP: "10.0.0.99",
            port: 80,
            protocol: UDP
          },
          {
            destinationIP: "203.0.113.26",
            port: 8080,
            targetPort: 80,
            protocol: TCP
          },
          {
            destinationIP: "203.0.113.27",
            port: 8443,
            targetPort: 443,
            protocol: TCP
          }
        ]
      }

    Deploying an egress router in redirect mode

    You can deploy an egress router to redirect traffic from its own reserved source IP address to one or more destination IP addresses.

    After you add an egress router, the client pods that need to use the reserved source IP address must be modified to connect to the egress router rather than connecting directly to the destination IP.
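
    For example, a client that previously connected directly to a destination such as 203.0.113.26 on port 8080 instead connects to the service that fronts the egress router. The following is a minimal sketch, assuming the egress-1 service that is created later in this procedure and a client pod that has curl available; the connection is then redirected to the destination and uses the reserved source IP address:

      $ curl http://egress-1:8080  # run from a client pod; assumes curl is available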

    Prerequisites

    • Install the OpenShift CLI (oc).

    • Log in as a user with cluster-admin privileges.

    Procedure

    1. Create an egress router definition.
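
      For example, if the EgressRouter specification shown earlier is saved in a file named egress-router-redirect.yaml (the file name here is only an assumption for this sketch), you can create the object with the OpenShift CLI:

      $ oc apply -f egress-router-redirect.yaml  # file name is an assumption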

    2. To ensure that other pods can find the IP address of the egress router pod, create a service that uses the egress router, as in the following example:

      apiVersion: v1
      kind: Service
      metadata:
        name: egress-1
      spec:
        ports:
        - name: web-app
          protocol: TCP
          port: 8080
        type: ClusterIP
        selector:
          app: egress-router-cni (1)
      1 Specify the label for the egress router. The value shown is added by the Cluster Network Operator and is not configurable.

    Verification

    To verify that the Cluster Network Operator started the egress router, complete the following procedure:

    1. View the network attachment definition that the Operator created for the egress router:

      $ oc get network-attachment-definition egress-router-cni-nad

      The name of the network attachment definition is not configurable.

      Example output

      NAME                    AGE
      egress-router-cni-nad   18m
    2. View the deployment for the egress router pod:

      $ oc get deployment egress-router-cni-deployment

      The name of the deployment is not configurable.

      Example output

      NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
      egress-router-cni-deployment   1/1     1            1           18m
    3. View the status of the egress router pod:

      $ oc get pods -l app=egress-router-cni

      Example output

      NAME                                            READY   STATUS    RESTARTS   AGE
      egress-router-cni-deployment-575465c75c-qkq6m   1/1     Running   0          18m
    4. Find the node that the egress router pod is running on, and then enter a debug session on that node. This step instantiates a debug pod called <node_name>-debug:

      $ POD_NODENAME=$(oc get pod -l app=egress-router-cni -o jsonpath="{.items[0].spec.nodeName}")

      $ oc debug node/$POD_NODENAME
    5. Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the host in /host within the pod. By changing the root directory to /host, you can run binaries from the executable paths of the host:

      # chroot /host
    6. From within the chroot environment console, display the egress router logs:

      # cat /tmp/egress-router-log

      Example output

      2021-04-26T12:27:20Z [debug] Called CNI ADD
      2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1
      2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24]
      2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443]
      2021-04-26T12:27:20Z [debug] Created macvlan interface
      2021-04-26T12:27:20Z [debug] Renamed macvlan to "net1"
      2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface
      2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254}
      2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1
      2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99
      2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80
      2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443
      2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99

      The logging file location and logging level are not configurable when you start the egress router by creating an EgressRouter object as described in this procedure.

    7. From within the chroot environment console, get the container ID of the egress router container:

      # crictl ps --name egress-router-cni-pod | awk '{print $1}'

      Example output

      CONTAINER
      bac9fae69ddb6
    8. Determine the process ID of the container. In this example, the container ID is bac9fae69ddb6:

      # crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print $2}'

      Example output

      68857
    9. Enter the network namespace of the container:

      # nsenter -n -t 68857
    10. Display the routing table:

      # ip route

      Example output

      default via 192.168.12.1 dev net1
      10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18
      192.168.12.1 dev net1
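
    Optionally, while you are still inside the container's network namespace, you can also list the NAT rules that the egress router CNI plugin installed. This check is not part of the documented procedure; it is a sketch, and the output depends on your redirect rules:

      # iptables -t nat -S  # lists the DNAT and SNAT rules for the current network namespace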