Restrict a Container’s Access to Resources with AppArmor

    AppArmor is a Linux kernel security module that supplements the standard Linux user- and group-based permissions to confine programs to a limited set of resources. AppArmor can be configured for any application to reduce its potential attack surface and provide greater defense in depth. It is configured through profiles tuned to allow the access needed by a specific program or container, such as Linux capabilities, network access, and file permissions. Each profile can be run in either enforcing mode, which blocks access to disallowed resources, or complain mode, which only reports violations.

    AppArmor can help you run a more secure deployment by restricting what containers are allowed to do, and can provide better auditing through system logs. However, keep in mind that AppArmor is not a silver bullet, and can only do so much to protect against exploits in your application code. It is important to provide good, restrictive profiles, and to harden your applications and cluster from other angles as well.

    • See an example of how to load a profile on a node
    • Learn how to enforce the profile on a Pod
    • Learn how to check that the profile is loaded
    • See what happens when a profile is violated
    • See what happens when a profile cannot be loaded

    Before you begin

    Make sure:

    1. Kubernetes version is at least v1.4 — Kubernetes support for AppArmor was added in v1.4. Kubernetes components older than v1.4 are not aware of the new AppArmor annotations, and will silently ignore any AppArmor settings that are provided. To ensure that your Pods are receiving the expected protections, it is important to verify the Kubelet version of your nodes:

      kubectl get nodes -o=jsonpath=$'{range .items[*]}{@.metadata.name}: {@.status.nodeInfo.kubeletVersion}\n{end}'
      gke-test-default-pool-239f5d02-gyn2: v1.4.0
      gke-test-default-pool-239f5d02-x1kf: v1.4.0
      gke-test-default-pool-239f5d02-xwux: v1.4.0
    2. AppArmor kernel module is enabled — For the Linux kernel to enforce an AppArmor profile, the AppArmor kernel module must be installed and enabled. Several distributions enable the module by default, such as Ubuntu and SUSE, and many others provide optional support. To check whether the module is enabled, check the /sys/module/apparmor/parameters/enabled file:

      cat /sys/module/apparmor/parameters/enabled
      Y

      If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor options if the kernel module is not enabled.

    Note: Ubuntu carries many AppArmor patches that have not been merged into the upstream Linux kernel, including patches that add additional hooks and features. Kubernetes has only been tested with the upstream version, and does not promise support for other features.

    3. Container runtime supports AppArmor — All common Kubernetes-supported container runtimes, such as Docker and containerd, support AppArmor. Refer to the corresponding runtime documentation and verify that the cluster fulfills the requirements to use AppArmor.

    4. Profile is loaded — AppArmor is applied to a Pod by specifying an AppArmor profile that each container should be run with. If any of the specified profiles is not already loaded in the kernel, the Kubelet (>= v1.4) will reject the Pod. You can view which profiles are loaded on a node by checking the /sys/kernel/security/apparmor/profiles file. For example:

      ssh gke-test-default-pool-239f5d02-gyn2 "sudo cat /sys/kernel/security/apparmor/profiles | sort"
      apparmor-test-deny-write (enforce)
      apparmor-test-audit-write (enforce)
      docker-default (enforce)
      k8s-nginx (enforce)

      For more details on loading profiles on nodes, see Setting up nodes with profiles below.

    As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor support on nodes by checking the node ready condition message (though this is likely to be removed in a later release):

    kubectl get nodes -o=jsonpath='{range .items[*]}{@.metadata.name}: {.status.conditions[?(@.reason=="KubeletReady")].message}{"\n"}{end}'
    gke-test-default-pool-239f5d02-gyn2: kubelet is posting ready status. AppArmor enabled
    gke-test-default-pool-239f5d02-x1kf: kubelet is posting ready status. AppArmor enabled
    gke-test-default-pool-239f5d02-xwux: kubelet is posting ready status. AppArmor enabled
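    If you are scripting these checks, the two kernel-side prerequisites above can be wrapped in small helper functions. This is only a sketch: the file paths are passed in as arguments (on a real node they would be /sys/module/apparmor/parameters/enabled and /sys/kernel/security/apparmor/profiles) so the helpers are easy to exercise locally.

```shell
#!/bin/sh
# Sketch: helpers for the AppArmor prerequisite checks. Paths are parameters
# so the functions can be tested against sample files; on a real node, pass
# the /sys paths shown in the usage comments.

# Succeeds if the AppArmor kernel module is enabled.
# Usage: apparmor_enabled /sys/module/apparmor/parameters/enabled
apparmor_enabled() {
  [ "$(cat "$1" 2>/dev/null)" = "Y" ]
}

# Succeeds if the named profile is loaded.
# Usage: profile_loaded /sys/kernel/security/apparmor/profiles k8s-nginx
profile_loaded() {
  grep -q "^$2 (" "$1"
}
```

    On a node, `apparmor_enabled /sys/module/apparmor/parameters/enabled && profile_loaded /sys/kernel/security/apparmor/profiles k8s-nginx` would confirm both prerequisites for the k8s-nginx profile.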

    Note: AppArmor is currently in beta, so options are specified as annotations. Once support graduates to general availability, the annotations will be replaced with first-class fields.

    container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>

    Where <container_name> is the name of the container to apply the profile to, and <profile_ref> specifies the profile to apply. The profile_ref can be one of:

    • runtime/default to apply the runtime’s default profile
    • localhost/<profile_name> to apply the profile loaded on the host with the name <profile_name>
    • unconfined to indicate that no profiles will be loaded

    See the API Reference for the full details on the annotation and profile name formats.
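    To make the accepted formats concrete, here is a small, illustrative validation function. It is a sketch only; Kubernetes performs its own validation of the annotation value, and this helper merely mirrors the three forms listed above.

```shell
#!/bin/sh
# Sketch: check whether a string is a valid AppArmor <profile_ref>
# as accepted by the beta annotation. Valid forms:
#   runtime/default, localhost/<profile_name>, unconfined
valid_profile_ref() {
  case "$1" in
    runtime/default) return 0 ;;
    localhost/?*)    return 0 ;;  # localhost/ must be followed by a profile name
    unconfined)      return 0 ;;
    *)               return 1 ;;
  esac
}

valid_profile_ref "localhost/k8s-apparmor-example-deny-write" && echo "valid"
# prints: valid
```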

    Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been met, and then forwarding the profile selection to the container runtime for enforcement. If the prerequisites have not been met, the Pod will be rejected, and will not run.

    To verify that the profile was applied, you can look for the AppArmor security option listed in the container created event:

    kubectl get events | grep Created

    You can also verify directly that the container’s root process is running with the correct profile by checking its proc attr:

    kubectl exec <pod_name> -- cat /proc/1/attr/current
    k8s-apparmor-example-deny-write (enforce)

    Example

    This example assumes you have already set up a cluster with AppArmor support.

    First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:

    #include <tunables/global>

    profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
      #include <abstractions/base>

      file,

      # Deny all file writes.
      deny /** w,
    }

    Since we don’t know where the Pod will be scheduled, we’ll need to load the profile on all our nodes. For this example we’ll use SSH to install the profiles, but other approaches are discussed in Setting up nodes with profiles.

    NODES=(
        # The SSH-accessible domain names of your nodes
        gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
        gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
        gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
    for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
    #include <tunables/global>

    profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
      #include <abstractions/base>

      file,

      # Deny all file writes.
      deny /** w,
    }
    EOF'
    done

    Next, we’ll run a simple “Hello AppArmor” pod with the deny-write profile:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-apparmor
      annotations:
        # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
        # Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
        container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]

    kubectl create -f ./hello-apparmor.yaml

    If we look at the pod events, we can see that the Pod container was created with the AppArmor profile “k8s-apparmor-example-deny-write”:

    kubectl get events | grep hello-apparmor
    14s 14s 1 hello-apparmor Pod Normal Scheduled {default-scheduler } Successfully assigned hello-apparmor to gke-test-default-pool-239f5d02-gyn2
    14s 14s 1 hello-apparmor Pod spec.containers{hello} Normal Pulling {kubelet gke-test-default-pool-239f5d02-gyn2} pulling image "busybox"
    13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Pulled {kubelet gke-test-default-pool-239f5d02-gyn2} Successfully pulled image "busybox"
    13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Created {kubelet gke-test-default-pool-239f5d02-gyn2} Created container with docker id 06b6cd1c0989; Security:[seccomp=unconfined apparmor=k8s-apparmor-example-deny-write]
    13s 13s 1 hello-apparmor Pod spec.containers{hello} Normal Started {kubelet gke-test-default-pool-239f5d02-gyn2} Started container with docker id 06b6cd1c0989

    We can verify that the container is actually running with that profile by checking its proc attr:

    kubectl exec hello-apparmor -- cat /proc/1/attr/current
    k8s-apparmor-example-deny-write (enforce)

    Finally, we can see what happens if we try to violate the profile by writing to a file:

    kubectl exec hello-apparmor -- touch /tmp/test
    touch: /tmp/test: Permission denied
    error: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1

    To wrap up, let’s look at what happens if we try to specify a profile that hasn’t been loaded:

    kubectl create -f /dev/stdin <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-apparmor-2
      annotations:
        container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-allow-write
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
    EOF
    pod/hello-apparmor-2 created
    kubectl describe pod hello-apparmor-2

    Name:          hello-apparmor-2
    Namespace:     default
    Node:          gke-test-default-pool-239f5d02-x1kf/
    Start Time:    Tue, 30 Aug 2016 17:58:56 -0700
    Status:        Pending
    Reason:        AppArmor
    Message:       Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded
    IP:
    Controllers:   <none>
    Containers:
      hello:
        Container ID:
        Image:       busybox
        Image ID:
        Port:
        Command:
          sh
          -c
          echo 'Hello AppArmor!' && sleep 1h
        State:          Waiting
          Reason:       Blocked
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-dnz7v (ro)
    Conditions:
      Type          Status
      Initialized   True
      Ready         False
      PodScheduled  True
    Volumes:
      default-token-dnz7v:
        Type:       Secret (a volume populated by a Secret)
        SecretName: default-token-dnz7v
        Optional:   false
    QoS Class:      BestEffort
    Node-Selectors: <none>
    Tolerations:    <none>
    Events:
      FirstSeen  LastSeen  Count  From                                       SubobjectPath  Type     Reason     Message
      ---------  --------  -----  ----                                       -------------  -------  ------     -------
      23s        23s       1      {default-scheduler }                                      Normal   Scheduled  Successfully assigned hello-apparmor-2 to e2e-test-stclair-node-pool-t1f5
      23s        23s       1      {kubelet e2e-test-stclair-node-pool-t1f5}                 Warning  AppArmor   Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded

    Note the pod status is Pending, with a helpful error message: Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded. An event was also recorded with the same message.

    Setting up nodes with profiles

    Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto nodes. However, there are many ways to set up the profiles, such as:

    • Through a DaemonSet that runs a Pod on each node to ensure the correct profiles are loaded.
    • At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or image.
    • By copying the profiles to each node and loading them through SSH, as demonstrated in the Example.
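    The DaemonSet approach might look roughly like the following sketch. Everything beyond the DaemonSet mechanics is an assumption: "example.com/apparmor-loader" is a placeholder for a custom image with apparmor_parser and your profiles baked in, and the apiVersion may differ on older clusters.

```yaml
# Sketch only. The image is a placeholder assumed to contain apparmor_parser
# plus the profiles to load.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: apparmor-loader
spec:
  selector:
    matchLabels:
      name: apparmor-loader
  template:
    metadata:
      labels:
        name: apparmor-loader
    spec:
      containers:
      - name: loader
        image: example.com/apparmor-loader:latest  # placeholder
        # Loading profiles into the kernel requires elevated privileges.
        securityContext:
          privileged: true
        volumeMounts:
        - name: sys
          mountPath: /sys
      volumes:
      - name: sys
        hostPath:
          path: /sys
```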

    The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles must be loaded onto every node. An alternative approach is to add a node label for each profile (or class of profiles) on the node, and use a node selector to ensure the Pod is run on a node with the required profile.
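    For example, under a hypothetical labeling scheme (the label key below is an assumption, not a Kubernetes convention), nodes carrying a profile could be labeled and the Pod pinned with a node selector:

```yaml
# Assumes nodes with the profile loaded were labeled first, e.g.:
#   kubectl label node <node> apparmor/k8s-apparmor-example-deny-write=loaded
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  # Only schedule onto nodes that advertise the profile.
  nodeSelector:
    apparmor/k8s-apparmor-example-deny-write: loaded
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
```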

    If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:

    --feature-gates=AppArmor=false

    When disabled, any Pod that includes an AppArmor profile will fail validation with a “Forbidden” error.

    Note: Even if the Kubernetes feature is disabled, runtimes may still enforce the default profile. The option to disable the AppArmor feature will be removed when AppArmor graduates to general availability (GA).

    Authoring Profiles

    Getting AppArmor profiles specified correctly can be a tricky business. Fortunately there are some tools to help with that:

    • aa-genprof and aa-logprof generate profile rules by monitoring an application’s activity and logs, and admitting the actions it takes. Further instructions are provided by the AppArmor documentation.
    • bane is an AppArmor profile generator for Docker that uses a simplified profile language.

    To debug problems with AppArmor, you can check the system logs to see what, specifically, was denied. AppArmor logs verbose messages to dmesg, and errors can usually be found in the system logs or through journalctl. More information is provided in AppArmor failures.
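    Denial messages in the kernel log look roughly like the sample below (the exact fields vary by kernel version). A small filter can pull out the profile, operation, and target from such lines; the sample log line here is illustrative, not output from this tutorial's cluster.

```shell
#!/bin/sh
# Sketch: summarize AppArmor denials from kernel log lines
# (e.g. piped from `dmesg` or `journalctl -k`).
parse_denials() {
  grep 'apparmor="DENIED"' |
    sed -n 's/.*operation="\([^"]*\)".*profile="\([^"]*\)".*name="\([^"]*\)".*/\2 denied \1 on \3/p'
}

# Illustrative sample line; real field layout varies by kernel version.
echo 'audit: type=1400 audit(1234.56:78): apparmor="DENIED" operation="open" profile="k8s-apparmor-example-deny-write" name="/tmp/test" pid=123 comm="touch" requested_mask="c"' | parse_denials
# prints: k8s-apparmor-example-deny-write denied open on /tmp/test
```

    On a node, `dmesg | parse_denials` would produce one summary line per denial.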

    API Reference

    Specifying the profile a container will run with:

    • key: container.apparmor.security.beta.kubernetes.io/<container_name>, where <container_name> matches the name of a container in the Pod. A separate profile can be specified for each container in the Pod.
    • value: a profile reference, which can be one of:
      • runtime/default: Refers to the default runtime profile.
      • localhost/<profile_name>: Refers to a profile loaded on the node (localhost) by name.
        • The possible profile names are detailed in the core policy reference.
      • unconfined: This effectively disables AppArmor on the container.

    Any other profile reference format is invalid.

    What’s next

    Additional resources: