Identity-Aware and HTTP-Aware Policy Enforcement

    The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

    If you have not set up Cilium yet, pick any installation method described in the installation section to set up Cilium for your Kubernetes environment. If in doubt, pick Getting Started Using Minikube as the simplest way to set up a Kubernetes cluster with Cilium.
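    For reference, a Minikube-based setup looks roughly like this; the flags and manifest path below follow the v1.8 quick-install flow, but treat this as a sketch and check the installation guide for your version before relying on it:

    $ minikube start --network-plugin=cni --memory=4096
    $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/install/kubernetes/quick-install.yaml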

    Deploy the Demo Application

    Now that we have Cilium deployed and operating correctly, we can deploy our demo application.

    In our Star Wars-inspired example, there are three microservices applications: deathstar, tiefighter, and xwing. The deathstar runs an HTTP webservice on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire’s spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship and xwing represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to deathstar landing services.

    Application Topology for Cilium and Kubernetes

    The file http-sw-app.yaml contains a Kubernetes Deployment for the deathstar service and individual pods for tiefighter and xwing. Each workload is identified using the Kubernetes labels (org=empire, class=deathstar), (org=empire, class=tiefighter), and (org=alliance, class=xwing). It also includes a deathstar Service, which load-balances traffic to all pods with label (org=empire, class=deathstar).
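    A condensed sketch of what that file defines is below. Container images and probe details are omitted, the container name is hypothetical, and the original file may use an older Deployment API group (the create output below prints deployment.extensions); refer to http-sw-app.yaml itself for the full manifest:

    apiVersion: v1
    kind: Service
    metadata:
      name: deathstar
    spec:
      ports:
      - port: 80
      selector:
        org: empire
        class: deathstar
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deathstar
    spec:
      replicas: 2
      selector:
        matchLabels:
          org: empire
          class: deathstar
      template:
        metadata:
          labels:
            org: empire
            class: deathstar
        spec:
          containers:
          - name: deathstar
            image: ...          # see http-sw-app.yaml for the actual image
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: tiefighter
      labels:
        org: empire
        class: tiefighter
    spec:
      containers:
      - name: spaceship         # hypothetical container name
        image: ...              # see http-sw-app.yaml for the actual image
    ---
    # The xwing pod is analogous, with labels org=alliance, class=xwing.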

    $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/http-sw-app.yaml
    service/deathstar created
    deployment.extensions/deathstar created
    pod/tiefighter created
    pod/xwing created

    Kubernetes will deploy the pods and service in the background. Running kubectl get pods,svc will inform you about the progress of the operation. Each pod will go through several states until it reaches Running, at which point the pod is ready.

    $ kubectl get pods,svc
    NAME                             READY   STATUS    RESTARTS   AGE
    pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
    pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
    pod/tiefighter                   1/1     Running   0          107s
    pod/xwing                        1/1     Running   0          107s

    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
    service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
    service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3m53s
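    If you would rather block until everything is ready than poll, kubectl's generic wait command works here as well (this is standard kubectl, not Cilium-specific):

    $ kubectl wait --for=condition=Ready pod --all --timeout=120s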

    Each pod will be represented in Cilium as an endpoint. We can invoke the cilium tool inside the Cilium pod to list them:

    $ kubectl -n kube-system get pods -l k8s-app=cilium
    NAME           READY   STATUS    RESTARTS   AGE
    cilium-5ngzd   1/1     Running   0          3m19s
    $ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4         STATUS
               ENFORCEMENT        ENFORCEMENT
    232        Disabled           Disabled          16530      k8s:class=deathstar                                      10.0.0.147   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    726        Disabled           Disabled          1          reserved:host                                                         ready
    883        Disabled           Disabled          4          reserved:health                                          10.0.0.244   ready
    1634       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.118   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1673       Disabled           Disabled          31028      k8s:class=tiefighter                                     10.0.0.112   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    2811       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.47    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    2843       Disabled           Disabled          16530      k8s:class=deathstar                                      10.0.0.89    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3184       Disabled           Disabled          22654      k8s:class=xwing                                          10.0.0.30    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance

    Both ingress and egress policy enforcement are still disabled on all of these pods because no network policy has been imported yet that selects any of them.

    From the perspective of the deathstar service, only the ships with label org=empire are allowed to connect and request landing. Since we have no rules enforced, both xwing and tiefighter will be able to request landing. To test this, use the commands below.

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed
    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

    Apply an L3/L4 Policy

    When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies will be applied to the right pods based on their labels, irrespective of where or when they are running within the cluster.

    Note: Cilium performs stateful connection tracking, meaning that if policy allows the frontend to reach backend, it will automatically allow all required reply packets that are part of backend replying to frontend within the context of the same TCP/UDP connection.
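    If you want to see that state for yourself, the connection tracking table can be dumped from inside the Cilium pod (assuming the cilium-5ngzd pod name from the listing above):

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium bpf ct list global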

    L4 Policy with Cilium and Kubernetes

    We can restrict ingress to the deathstar to empire ships only with the following CiliumNetworkPolicy:

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    description: "L3-L4 policy to restrict deathstar access to empire ships only"
    metadata:
      name: "rule1"
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP

    CiliumNetworkPolicies match on pod labels using an “endpointSelector” to identify the sources and destinations to which the policy applies. The above policy whitelists traffic sent from any pods with label (org=empire) to deathstar pods with label (org=empire, class=deathstar) on TCP port 80.

    To apply this L3/L4 policy, create the rule from the examples directory; the sw_l3_l4_policy.yaml filename below matches the Cilium v1.8 getting-started examples, alongside the demo app manifest:
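    $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created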

    Now if we run the landing requests again, only the tiefighter pod (with label org=empire) will succeed. The xwing pod will be blocked!

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

    This works as expected. Now the same request run from an xwing pod will fail:

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

    This request will hang, so press Control-C to kill the curl request, or wait for it to time out.
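    If you want to see why the packets never arrive, you can watch drop notifications from inside the Cilium pod while re-running the xwing request in another terminal (pod name assumed from earlier); the drops should be attributed to policy denial:

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium monitor --type drop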

    If we run cilium endpoint list again we will see that the pods with the label org=empire and class=deathstar now have ingress policy enforcement enabled as per the policy above.

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4         STATUS
               ENFORCEMENT        ENFORCEMENT
    232        Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.147   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    726        Disabled           Disabled          1          reserved:host                                                         ready
    883        Disabled           Disabled          4          reserved:health                                          10.0.0.244   ready
    1634       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.118   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1673       Disabled           Disabled          31028      k8s:class=tiefighter                                     10.0.0.112   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    2811       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.47    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    2843       Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.89    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3184       Disabled           Disabled          22654      k8s:class=xwing                                          10.0.0.30    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance

    You can also inspect the policy details via kubectl:

    $ kubectl get cnp
    NAME    AGE
    rule1   2m
    $ kubectl describe cnp rule1
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  cilium.io/v2
    Description:  L3-L4 policy to restrict deathstar access to empire ships only
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2020-06-15T14:06:48Z
      Generation:          1
      Managed Fields:
        API Version:  cilium.io/v2
        Fields Type:  FieldsV1
        fieldsV1:
          f:description:
          f:spec:
            .:
            f:endpointSelector:
              .:
              f:matchLabels:
                .:
                f:class:
                f:org:
            f:ingress:
        Manager:         kubectl
        Operation:       Update
        Time:            2020-06-15T14:06:48Z
      Resource Version:  2914
      Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:               eb3a688b-b3aa-495c-b20a-d4f79e7c088d
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
    Events:  <none>

    Apply and Test HTTP-aware L7 Policy

    In the simple scenario above, it was sufficient to either give tiefighter / xwing full access to deathstar's API or no access at all. But to provide the strongest security (i.e., enforce least-privilege isolation) between microservices, each service that calls deathstar's API should be limited to making only the set of HTTP requests it requires for legitimate operation. For example, consider that the deathstar service exposes some maintenance APIs which should not be called by random empire ships:

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Panic: deathstar exploded

    goroutine 1 [running]:
    main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
            /code/src/github.com/empire/deathstar/temp/main.go:9 +0x64
    main.main()
            /code/src/github.com/empire/deathstar/temp/main.go:5 +0x85

    While this is an illustrative example, unauthorized access like this can have serious security repercussions.

    L7 Policy with Cilium and Kubernetes

    Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what URLs the tiefighter is allowed to reach. Here is an example policy file that extends our original policy by limiting tiefighter to making only a POST /v1/request-landing API call, but disallowing all other calls (including PUT /v1/exhaust-port).
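    A sketch of that policy is below, reconstructed to be consistent with the rule details shown by kubectl describe later in this section (the actual file, sw_l3_l4_l7_policy.yaml, is applied in the next step):

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    description: "L7 policy to restrict access to specific HTTP call"
    metadata:
      name: "rule1"
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: "POST"
              path: "/v1/request-landing"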

    Update the existing rule to apply L7-aware policy to protect deathstar using:

    $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/sw_l3_l4_l7_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 configured

    We can now re-run the same test as above, but we will see a different outcome:

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

    and

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Access denied

    As this rule builds on the identity-aware rule, traffic from pods without the label org=empire will continue to be dropped, causing the connection to time out:

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

    As you can see, with Cilium L7 security policies, we are able to permit tiefighter to access only the required API resources on deathstar, thereby implementing a “least privilege” security approach for communication between microservices.

    You can observe the L7 policy via kubectl:

    $ kubectl describe ciliumnetworkpolicies
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:
    API Version:  cilium.io/v2
    Description:  L7 policy to restrict access to specific HTTP call
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2020-06-15T14:06:48Z
      Generation:          2
      Managed Fields:
        API Version:  cilium.io/v2
        Fields Type:  FieldsV1
        fieldsV1:
          f:description:
          f:metadata:
            f:annotations:
              .:
              f:kubectl.kubernetes.io/last-applied-configuration:
          f:spec:
            .:
            f:endpointSelector:
              .:
              f:matchLabels:
                .:
                f:class:
                f:org:
            f:ingress:
        Manager:         kubectl
        Operation:       Update
        Time:            2020-06-15T14:10:46Z
      Resource Version:  3445
      Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:               eb3a688b-b3aa-495c-b20a-d4f79e7c088d
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
          Rules:
            Http:
              Method:  POST
              Path:    /v1/request-landing
    Events:  <none>

    and via the cilium CLI inside the Cilium pod:
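    The agent prints the imported policy as JSON; the output is lengthy and omitted here (pod name assumed from the listings above):

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium policy get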

    When you are done, clean up by deleting the demo application and the policy rule:

    $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/minikube/http-sw-app.yaml
    $ kubectl delete cnp rule1

    We hope you enjoyed the tutorial. Feel free to play more with the setup, read the rest of the documentation, and reach out to us on the Cilium Slack channel with any questions!