What huge pages do and how they are consumed by applications

    A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because its defragmentation efforts can lock memory pages. For this reason, some applications may be designed to use (or may recommend using) pre-allocated huge pages instead of THP.

    In OKD, applications in a pod can allocate and consume pre-allocated huge pages.

    Huge pages must be pre-allocated on a node in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size.

    Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size>, where <size> is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi. Unlike CPU or memory, huge pages do not support over-commitment.
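
    For example, a pod on such a node might request the hugepages-2Mi resource as in the following minimal sketch (the pod name, image, and quantities are placeholders; note that huge page requests must equal the limits):

      apiVersion: v1
      kind: Pod
      metadata:
        name: hugepages-2mi-example
      spec:
        containers:
        - name: example
          image: registry.example.com/example:latest
          command: ["sleep", "inf"]
          volumeMounts:
          - mountPath: /dev/hugepages
            name: hugepage
          resources:
            limits:
              hugepages-2Mi: 100Mi
              memory: "1Gi"
              cpu: "1"
            requests:
              hugepages-2Mi: 100Mi
        volumes:
        - name: hugepage
          emptyDir:
            medium: HugePages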

    Allocating huge pages of a specific size

    Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size>. The <size> value must be specified in bytes with an optional scale suffix [kKmMgG]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter.
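
    For example, the following kernel command line (values chosen only for illustration) sets the default huge page size to 1G and reserves four 1G pages plus 512 2M pages; a hugepages=<count> parameter always applies to the most recently specified hugepagesz value:

      default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=512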

    Huge page requirements

    • Huge page requests must equal the limits. This is the default if limits are specified, but requests are not.

    • Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration.

    • EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request.

    • Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches /proc/sys/vm/hugetlb_shm_group (see the sketch below).
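
    A minimal sketch of the pod-level securityContext for this case, assuming the node's /proc/sys/vm/hugetlb_shm_group is set to GID 1001 (the GID, pod name, and image are placeholders):

      apiVersion: v1
      kind: Pod
      metadata:
        name: hugepages-shm-example
      spec:
        securityContext:
          # Must match the GID listed in /proc/sys/vm/hugetlb_shm_group on the node (1001 is assumed here)
          supplementalGroups: [1001]
        containers:
        - name: example
          image: registry.example.com/example:latest
          command: ["sleep", "inf"]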

    You can use the Downward API to inject information about the huge pages resources that are consumed by a container.

    Procedure

    1. Create a hugepages-volume-pod.yaml file that is similar to the following example:

      apiVersion: v1
      kind: Pod
      metadata:
        generateName: hugepages-volume-
        labels:
          app: hugepages-example
      spec:
        containers:
        - securityContext:
            capabilities:
              add: [ "IPC_LOCK" ]
          image: rhel7:latest
          command:
          - sleep
          - inf
          name: example
          volumeMounts:
          - mountPath: /dev/hugepages
            name: hugepage
          - mountPath: /etc/podinfo
            name: podinfo
          resources:
            limits:
              hugepages-1Gi: 2Gi
              memory: "1Gi"
              cpu: "1"
            requests:
              hugepages-1Gi: 2Gi
          env:
          - name: REQUESTS_HUGEPAGES_1GI (1)
            valueFrom:
              resourceFieldRef:
                containerName: example
                resource: requests.hugepages-1Gi
        volumes:
        - name: hugepage
          emptyDir:
            medium: HugePages
        - name: podinfo
          downwardAPI:
            items:
            - path: "hugepages_1G_request" (2)
              resourceFieldRef:
                containerName: example
                resource: requests.hugepages-1Gi
                divisor: 1Gi
    2. Create the pod from the hugepages-volume-pod.yaml file:

      $ oc create -f hugepages-volume-pod.yaml

    Verification

    1. Check the value of the REQUESTS_HUGEPAGES_1GI environment variable:

      $ oc exec -it $(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \
        -- env | grep REQUESTS_HUGEPAGES_1GI

      Example output

      REQUESTS_HUGEPAGES_1GI=2147483648
    2. Check the value of the /etc/podinfo/hugepages_1G_request file:

      $ oc exec -it $(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \
        -- cat /etc/podinfo/hugepages_1G_request

      Example output

      2

    Configuring huge pages at boot time

    Nodes must pre-allocate huge pages used in an OKD cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes.
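
    For reference, run-time reservation on a Linux host is typically done through the vm.nr_hugepages sysctl, which reserves pages of the default huge page size. The following is a sketch of that host-level mechanism, run directly on a node rather than through the operator:

      $ sysctl -w vm.nr_hugepages=50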

    Procedure

    To minimize node reboots, follow the order of the steps below:

    1. Label the node that needs huge pages:

      $ oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=
    2. Create a file with the following content and name it hugepages-tuned-boottime.yaml:

      apiVersion: tuned.openshift.io/v1
      kind: Tuned
      metadata:
        name: hugepages (1)
        namespace: openshift-cluster-node-tuning-operator
      spec:
        profile: (2)
        - data: |
            [main]
            summary=Boot time configuration for hugepages
            include=openshift-node
            [bootloader]
            cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 (3)
          name: openshift-node-hugepages
        recommend:
        - machineConfigLabels: (4)
            machineconfiguration.openshift.io/role: "worker-hp"
          priority: 30
          profile: openshift-node-hugepages
    3. Create the Tuned hugepages object:

      $ oc create -f hugepages-tuned-boottime.yaml
    4. Create a file with the following content and name it hugepages-mcp.yaml:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfigPool
      metadata:
        name: worker-hp
        labels:
          worker-hp: ""
      spec:
        machineConfigSelector:
          matchExpressions:
          - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]}
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/worker-hp: ""
    5. Create the machine config pool:

      $ oc create -f hugepages-mcp.yaml

    Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated.
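
    To confirm the allocation, you can, for example, inspect the huge page resources reported by a labeled node (a sketch; <node_using_hugepages> is a placeholder):

      $ oc describe node <node_using_hugepages> | grep hugepages-2Mi

    With 50 2Mi pages reserved, the Capacity and Allocatable entries should report 100Mi.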

    Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Because THP manages huge pages automatically, the result is not always optimal for all types of workloads, and THP can lead to performance regressions because many applications handle huge pages on their own. Therefore, consider disabling THP. The following steps describe how to disable THP using the Node Tuning Operator (NTO).

    Procedure

    1. Create a file with the following content and name it thp-disable-tuned.yaml:

      apiVersion: tuned.openshift.io/v1
      kind: Tuned
      metadata:
        name: thp-workers-profile
        namespace: openshift-cluster-node-tuning-operator
      spec:
        profile:
        - data: |
            [main]
            summary=Custom tuned profile for OpenShift to turn off THP on worker nodes
            include=openshift-node
            [vm]
            transparent_hugepages=never
          name: openshift-thp-never-worker
        recommend:
        - match:
          - label: node-role.kubernetes.io/worker
          priority: 25
          profile: openshift-thp-never-worker
    2. Create the Tuned object:

      $ oc create -f thp-disable-tuned.yaml
    3. Check the list of active profiles:

      $ oc get profile -n openshift-cluster-node-tuning-operator

    Verification

    • Log in to one of the nodes and check the current THP setting to verify that the profile was applied successfully:

        $ cat /sys/kernel/mm/transparent_hugepage/enabled

      Example output

        always madvise [never]