Compliance Operator scans

    You can run a scan using the Center for Internet Security (CIS) profiles. For convenience, the Compliance Operator creates a ScanSetting object with reasonable defaults on startup. This ScanSetting object is named default.

    You can view the definitions of the ScanSetting and ScanSettingBinding objects by running:

      $ oc explain scansettings

    or

      $ oc explain scansettingbindings

    Procedure

    1. Inspect the ScanSetting object by running:

      $ oc describe scansettings default -n openshift-compliance

      Example output

      Name:         default
      Namespace:    openshift-compliance
      Labels:       <none>
      Annotations:  <none>
      API Version:  compliance.openshift.io/v1alpha1
      Kind:         ScanSetting
      Metadata:
        Creation Timestamp:  2022-10-10T14:07:29Z
        Generation:          1
        Managed Fields:
          API Version:  compliance.openshift.io/v1alpha1
          Fields Type:  FieldsV1
          fieldsV1:
            f:rawResultStorage:
              .:
              f:nodeSelector:
                .:
                f:node-role.kubernetes.io/master:
              f:pvAccessModes:
              f:rotation:
              f:size:
              f:tolerations:
            f:roles:
            f:scanTolerations:
            f:schedule:
            f:showNotApplicable:
            f:strictNodeScan:
          Manager:         compliance-operator
          Operation:       Update
          Time:            2022-10-10T14:07:29Z
        Resource Version:  56111
        UID:               c21d1d14-3472-47d7-a450-b924287aec90
      Raw Result Storage:
        Node Selector:
          node-role.kubernetes.io/master:
        Pv Access Modes:
          ReadWriteOnce (1)
        Rotation:  3 (2)
        Size:      1Gi (3)
        Tolerations:
          Key:                 node-role.kubernetes.io/master
          Operator:            Exists
          Effect:              NoExecute
          Key:                 node.kubernetes.io/not-ready
          Operator:            Exists
          Toleration Seconds:  300
          Effect:              NoExecute
          Key:                 node.kubernetes.io/unreachable
          Toleration Seconds:  300
          Effect:              NoSchedule
          Key:                 node.kubernetes.io/memory-pressure
          Operator:            Exists
      Roles:
        master (4)
        worker (4)
      Scan Tolerations: (5)
        Operator:  Exists
      Schedule:              0 1 * * * (6)
      Show Not Applicable:   false
      Strict Node Scan:      true
      Events:                <none>
      (1) The Compliance Operator creates a persistent volume (PV) that contains the results of the scans. By default, the PV uses the ReadWriteOnce access mode because the Compliance Operator cannot make any assumptions about the storage classes configured on the cluster. Additionally, the ReadWriteOnce access mode is available on most clusters. If you need to fetch the scan results, you can do so by using a helper pod, which also binds the volume. Volumes that use the ReadWriteOnce access mode can be mounted by only one pod at a time, so it is important to remember to delete the helper pods; otherwise, the Compliance Operator will not be able to reuse the volume for subsequent scans. A sketch of such a helper pod follows this list.
      (2) The Compliance Operator keeps the results of three subsequent scans in the volume; older scans are rotated.
      (3) The Compliance Operator allocates one GB of storage for the scan results.
      (4) If the scan setting uses any profiles that scan cluster nodes, scan these node roles.
      (5) The default scan setting object scans all the nodes.
      (6) The default scan setting object runs scans at 01:00 each day.
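
      The following is a sketch of a helper pod that mounts the raw results volume. The claim name is an assumption that the PVC is named after the ocp4-cis scan created later in this procedure; adjust the name and namespace to match your scan.

      apiVersion: v1
      kind: Pod
      metadata:
        name: pv-extract
        namespace: openshift-compliance
      spec:
        containers:
        - name: pv-extract-pod
          image: registry.access.redhat.com/ubi9/ubi
          command: ["sleep", "3000"]
          volumeMounts:
          - name: cis-scan-vol
            mountPath: /scan-results
        volumes:
        - name: cis-scan-vol
          persistentVolumeClaim:
            claimName: ocp4-cis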

      As an alternative to the default scan setting, you can use default-auto-apply, which automatically applies remediations and keeps them updated.
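
      As a sketch, the settings that distinguish default-auto-apply from default are the auto-remediation flags; you can confirm them by running oc describe scansettings default-auto-apply -n openshift-compliance:

        autoApplyRemediations: true
        autoUpdateRemediations: true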

    2. Create a ScanSettingBinding object that binds to the default ScanSetting object and scans the cluster using the cis and cis-node profiles. For example:

      apiVersion: compliance.openshift.io/v1alpha1
      kind: ScanSettingBinding
      metadata:
        name: cis-compliance
        namespace: openshift-compliance
      profiles:
      - name: ocp4-cis-node
        kind: Profile
        apiGroup: compliance.openshift.io/v1alpha1
      - name: ocp4-cis
        kind: Profile
        apiGroup: compliance.openshift.io/v1alpha1
      settingsRef:
        name: default
        kind: ScanSetting
        apiGroup: compliance.openshift.io/v1alpha1
    3. Create the ScanSettingBinding object by running:

      $ oc create -f <file-name>.yaml -n openshift-compliance

      At this point in the process, the ScanSettingBinding object is reconciled and, based on the Binding and the Bound settings, the Compliance Operator creates a ComplianceSuite object and the associated ComplianceScan objects.
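
      For example, you can confirm that the suite exists; the Compliance Operator typically names the suite after the binding, cis-compliance in this example:

      $ oc get compliancesuites cis-compliance -n openshift-compliance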

    4. Follow the compliance scan progress by running:

      $ oc get compliancescan -w -n openshift-compliance
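
      While the scans are running, you should see output roughly like the following sketch; the scan names come from the profiles you bound, and the result shows NOT-AVAILABLE until each scan finishes:

      NAME                   PHASE     RESULT
      ocp4-cis               RUNNING   NOT-AVAILABLE
      ocp4-cis-node-master   RUNNING   NOT-AVAILABLE
      ocp4-cis-node-worker   RUNNING   NOT-AVAILABLE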

    The result server pod mounts the persistent volume (PV) that stores the raw Asset Reporting Format (ARF) scan results. The nodeSelector and tolerations attributes enable you to configure the location of the result server pod. This is helpful for environments where control plane nodes are not permitted to mount persistent volumes.

    Procedure

    • Create a ScanSetting custom resource (CR) for the Compliance Operator:

      1. Define the ScanSetting CR, and save the YAML file, for example, rs-workers.yaml:
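
        A minimal sketch of the CR, mirroring the values shown in the verification output later in this procedure:

        apiVersion: compliance.openshift.io/v1alpha1
        kind: ScanSetting
        metadata:
          name: rs-on-workers
          namespace: openshift-compliance
        rawResultStorage:
          nodeSelector:
            node-role.kubernetes.io/worker: "" (1)
          pvAccessModes:
          - ReadWriteOnce
          rotation: 3
          size: 1Gi
          tolerations:
          - operator: Exists (2)
        roles:
        - worker
        - master
        scanTolerations:
        - operator: Exists
        schedule: 0 1 * * *
        strictNodeScan: true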

        (1) The Compliance Operator uses this node to store scan results in ARF format.
        (2) The result server pod tolerates all taints.
      2. To create the ScanSetting CR, run the following command:

        $ oc create -f rs-workers.yaml

    Verification

    • To verify that the object is created, run the following command:

        $ oc get scansettings rs-on-workers -n openshift-compliance -o yaml

        Example output

        apiVersion: compliance.openshift.io/v1alpha1
        kind: ScanSetting
        metadata:
          creationTimestamp: "2021-11-19T19:36:36Z"
          generation: 1
          name: rs-on-workers
          namespace: openshift-compliance
          resourceVersion: "48305"
          uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e
        rawResultStorage:
          nodeSelector:
            node-role.kubernetes.io/worker: ""
          pvAccessModes:
          - ReadWriteOnce
          rotation: 3
          size: 1Gi
          tolerations:
          - operator: Exists
        roles:
        - worker
        - master
        scanTolerations:
        - operator: Exists
        schedule: 0 1 * * *
        strictNodeScan: true

      The ScanSetting custom resource now allows you to override the default CPU and memory limits of scanner pods through the scan limits attribute. The Compliance Operator uses defaults of 500Mi memory and 100m CPU for the scanner container, and 200Mi memory and 100m CPU for the api-resource-collector container. To set the memory limits of the Operator itself, modify the Subscription object if the Operator was installed through OLM, or modify the Operator deployment directly.
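
      For instance, when the Operator was installed through OLM, a memory limit can be set on the Subscription object. The following is a sketch only; the subscription name, channel, and limit value are assumptions that you should adjust to match your installation:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: compliance-operator
        namespace: openshift-compliance
      spec:
        channel: stable
        name: compliance-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
        config:
          resources:
            limits:
              memory: 500Mi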

      To increase the default CPU and memory limits of the Compliance Operator, see Increasing Compliance Operator resource limits.

      When the kubelet starts a container as part of a Pod, the kubelet passes that container’s requests and limits for memory and CPU to the container runtime. In Linux, the container runtime configures the kernel cgroups that apply and enforce the limits you defined.

      The CPU limit defines how much CPU time the container can use. During each scheduling interval, the Linux kernel checks to see if this limit is exceeded. If so, the kernel waits before allowing the cgroup to resume execution.

      If several different containers (cgroups) want to run on a contended system, workloads with larger CPU requests are allocated more CPU time than workloads with small requests. The memory request is used during Pod scheduling. On a node that uses cgroups v2, the container runtime might use the memory request as a hint to set memory.min and memory.low values.

      If a container attempts to allocate more memory than this limit, the Linux kernel out-of-memory subsystem activates and intervenes by stopping one of the processes in the container that tried to allocate memory. The memory limit for the Pod or container can also apply to pages in memory-backed volumes, such as an emptyDir.

      The kubelet tracks tmpfs emptyDir volumes as container memory being used, rather than as local ephemeral storage. If a container exceeds its memory request and the node that it runs on becomes short of memory overall, the Pod’s container might be evicted.
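
      As an illustration of that accounting, a Pod that mounts a memory-backed emptyDir volume has the tmpfs pages counted against the container’s memory limit. The names and values below are illustrative only:

      apiVersion: v1
      kind: Pod
      metadata:
        name: cache-example
      spec:
        containers:
        - name: app
          image: registry.example.com/app:latest
          resources:
            limits:
              memory: "256Mi"   # tmpfs pages written to /cache count toward this limit
          volumeMounts:
          - name: cache
            mountPath: /cache
        volumes:
        - name: cache
          emptyDir:
            medium: Memory      # memory-backed emptyDir (tmpfs)
            sizeLimit: 128Mi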

      A container might not be allowed to exceed its CPU limit for extended periods of time. Container runtimes do not stop Pods or containers for excessive CPU usage. To determine whether a container cannot be scheduled or is being killed due to resource limits, see Troubleshooting the Compliance Operator.

      When a Pod is created, the scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each resource type in terms of the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled containers is less than the capacity of the node.

      Although memory or CPU resource usage on a node might be very low, the scheduler still refuses to place a Pod on the node if the capacity check fails. This protects against a resource shortage on the node.

      For each container, you can specify the following resource limits and requests:

      spec.containers[].resources.limits.cpu
      spec.containers[].resources.limits.memory
      spec.containers[].resources.requests.cpu
      spec.containers[].resources.requests.memory

      Although you can specify requests and limits only for individual containers, it is also useful to consider the overall resource requests and limits for a Pod. For a particular resource, a Pod resource request or limit is the sum of the resource requests or limits of that type for each container in the Pod.

      Example container resource requests and limits

      apiVersion: v1
      kind: Pod
      metadata:
        name: frontend
      spec:
        containers:
        - name: app
          image: images.my-company.example/app:v4
          resources:
            requests: (1)
              memory: "64Mi"
              cpu: "250m"
            limits: (2)
              memory: "128Mi"
              cpu: "500m"
        - name: log-aggregator
          image: images.my-company.example/log-aggregator:v6
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"