Writing Custom Scorecard Tests

    The following steps explain how to create a custom test image that can be used with scorecard to run operator-specific tests. As an example, we start by creating a sample Go repository containing the test bundle data, the custom scorecard tests, and a Makefile to help build the test image.

    The sample test image repository has the following project structure:

    1. config/scorecard/ - Contains a kustomization for generating the scorecard config from a base and a set of overlays.
    2. bundle/ - Contains bundle manifests and metadata under test.
    3. bundle/tests/scorecard/config.yaml - Configuration file generated by make bundle from the config/scorecard kustomization.
    4. images/custom-scorecard-tests/main.go - Scorecard test binary.
    5. internal/tests/tests.go - Contains the implementation of custom tests specific to the operator.

    Writing custom test logic:

    Scorecard currently implements a few basic tests for the bundle image, custom resources, and custom resource definitions. Additional tests specific to the operator can also be included in the scorecard test suite.

    The tests.go file is where the custom tests are implemented in the sample test image project. These tests use the scapiv1alpha3.TestResult struct to populate the result, which is then converted to JSON for output. For example, a simple custom test can look like this:

```go
package tests

import (
	"github.com/operator-framework/operator-registry/pkg/registry"

	scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
)

const (
	CustomTest1Name = "customtest1"
)

// CustomTest1
func CustomTest1(bundle registry.Bundle) scapiv1alpha3.TestStatus {
	r := scapiv1alpha3.TestResult{}
	r.Name = CustomTest1Name
	r.Description = "Custom Test 1"
	r.State = scapiv1alpha3.PassState
	r.Errors = make([]string, 0)
	r.Suggestions = make([]string, 0)

	// Implement relevant custom test logic here

	return wrapResult(r)
}
```
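    The wrapResult helper used above is not shown in the sample. A minimal sketch of what it might look like, using local stand-in types in place of the real scapiv1alpha3 definitions:

```go
package main

import "fmt"

// Local stand-ins for the scorecard v1alpha3 API types; in a real
// project these come from github.com/operator-framework/api.
type TestResult struct {
	Name        string
	State       string
	Errors      []string
	Suggestions []string
}

type TestStatus struct {
	Results []TestResult
}

// wrapResult wraps a single TestResult into the TestStatus value that
// each test function returns to scorecard.
func wrapResult(r TestResult) TestStatus {
	return TestStatus{Results: []TestResult{r}}
}

func main() {
	st := wrapResult(TestResult{Name: "customtest1", State: "pass"})
	fmt.Println(st.Results[0].Name, st.Results[0].State)
}
```

    The stand-in struct fields mirror the result fields populated in CustomTest1 above; the real types carry additional fields such as Description and Log.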

    Scorecard Configuration file:

    The configuration file includes the test definitions and metadata needed to run the tests. This file is constructed using a kustomization under config/scorecard, with overlays for test sets.

    For the example CustomTest1 function, add the following to config/scorecard/patches/customtest1.config.yaml.

```yaml
- op: add
  path: /stages/0/tests/-
  value:
    image: quay.io/<username>/custom-scorecard-tests:latest
    entrypoint:
    - custom-scorecard-tests
    - customtest1
    labels:
      suite: custom
      test: customtest1
```

    The important fields to note here are:

    1. image - the name and tag of the test image, as specified in the Makefile.
    2. labels - the name of the test and the suite the test function belongs to. These can be passed to the operator-sdk scorecard command to select the desired test.

    Next, add a JSON 6902 patch to your config/scorecard/kustomization.yaml:

```yaml
patchesJson6902:
...
- path: patches/customtest1.config.yaml
  target:
    group: scorecard.operatorframework.io
    version: v1alpha3
    kind: Configuration
    name: config
```

    Once you run make bundle, the bundle/tests/scorecard/config.yaml will be (re)generated with your custom test.

    Note: The default location of config.yaml inside the bundle is <bundle directory>/tests/scorecard/config.yaml. It can be overridden using the --config flag. For more details regarding the configuration file, refer to the scorecard configuration documentation.

    Scorecard binary:

    The scorecard test image implementation requires the bundle under test to be present in the test image. The apimanifests.GetBundleFromDir() function reads the bundle mounted in the pod to fetch the manifests and the scorecard configuration from the desired path.

```go
cfg, err := apimanifests.GetBundleFromDir(scorecard.PodBundleRoot)
if err != nil {
	log.Fatal(err.Error())
}
```

    The scorecard binary uses the config.yaml file to locate tests and executes them in Pods that scorecard creates. Custom test images run in these Pods, and the bundle contents are passed to the test image container on a shared mount point. The specific custom test that is executed is determined by the entrypoint command and arguments in config.yaml.

    An example custom scorecard test implementation can be found here.

    The test names defined in config.yaml, which are passed as arguments to the scorecard command, must be handled in the binary's entrypoint dispatch:

```go
...
switch entrypoint[0] {
case tests.CustomTest1Name:
	result = tests.CustomTest1(cfg)
...
}
...
```

    The result of the custom tests, which is in scapiv1alpha3.TestStatus format, is converted to JSON for output:

```go
prettyJSON, err := json.MarshalIndent(result, "", "    ")
if err != nil {
	log.Fatal("Failed to generate json", err)
}
fmt.Printf("%s\n", string(prettyJSON))
```

    The names of the custom tests are also included in the function that lists the valid test names.

    The SDK project Makefile contains targets to build the sample custom test image. The current Makefile can be found here; you can use it as a reference for your own custom test image Makefile.
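    As a rough sketch only, such a target might look like the following; the target name matches the build command below, but the Dockerfile path and image tag are assumptions to adjust for your project:

```makefile
# Hypothetical target for building the custom scorecard test image;
# adjust the image name, tag, and Dockerfile path to match your project.
.PHONY: image/custom-scorecard-tests
image/custom-scorecard-tests:
	docker build -t quay.io/<username>/custom-scorecard-tests:latest \
		-f images/custom-scorecard-tests/Dockerfile .
```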

    To build the sample custom test image, run:

```sh
make image/custom-scorecard-tests
```

    Running the scorecard command

```console
$ operator-sdk scorecard <bundle_dir_or_image> --selector=suite=custom -o json --wait-time=32s --skip-cleanup=false
{
  "kind": "TestList",
  "apiVersion": "scorecard.operatorframework.io/v1alpha3",
  "items": [
    {
      "kind": "Test",
      "apiVersion": "scorecard.operatorframework.io/v1alpha3",
      "spec": {
        "image": "quay.io/operator-framework/scorecard-test:latest",
        "entrypoint": [
          "custom-scorecard-tests",
          "customtest1"
        ],
        "labels": {
          "suite": "custom",
          "test": "customtest1"
        }
      },
      "status": {
        "results": [
          {
            "name": "customtest1",
            "log": "an ISV custom test",
            "state": "pass"
          }
        ]
      }
    }
  ]
}
```

    Note: More details on the usage of the operator-sdk scorecard command and its flags can be found in the scorecard user documentation.

    Debugging scorecard custom tests

    The --skip-cleanup flag can be passed to the operator-sdk scorecard command to prevent scorecard from deleting the test pods it creates. This is useful when debugging or writing new tests, since you can inspect the test logs and pod manifests afterward.

    Scorecard inserts an initContainer into each test pod it creates. The initContainer uncompresses the operator bundle contents and mounts them onto a shared mount point accessible to the test images. The bundle contents are stored in a ConfigMap built uniquely for each scorecard test execution; upon scorecard completion, the ConfigMap is removed as part of normal cleanup, along with the test pods created by scorecard.

    Using Custom Service Accounts

    Scorecard does not deploy service accounts, RBAC resources, or namespaces for your tests; it considers these resources to be outside its scope. You can, however, create whatever service accounts your tests require and then specify that service account on the command line using the --service-account flag:

```console
$ operator-sdk scorecard <bundle_dir_or_image> --service-account=mycustomsa
```

    Also, you can run your tests within a non-default namespace by using the --namespace flag.

    If you do not specify either of these flags, the default namespace and service account will be used by the scorecard to run test pods.

    Returning Multiple Test Results

    Some custom tests might require, or be better implemented by, returning more than a single test result. For this case, scorecard's output API allows multiple test results to be defined for a single test.
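    As a sketch of this pattern, using local stand-in types in place of the real scapiv1alpha3 definitions (the test name and individual checks below are hypothetical), a test can report each check as its own entry in the Results slice:

```go
package main

import "fmt"

// Local stand-ins for the scorecard v1alpha3 output types; the real
// definitions live in github.com/operator-framework/api.
type TestResult struct {
	Name  string
	State string
}

type TestStatus struct {
	Results []TestResult
}

// CustomMultiCheck is a hypothetical test that performs two checks and
// reports each one as a separate result within a single TestStatus.
func CustomMultiCheck() TestStatus {
	return TestStatus{
		Results: []TestResult{
			{Name: "check-crd-fields", State: "pass"},
			{Name: "check-csv-annotations", State: "pass"},
		},
	}
}

func main() {
	st := CustomMultiCheck()
	fmt.Println(len(st.Results))
}
```

    When serialized, each entry appears as its own item in the status.results array of the scorecard output shown earlier.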

    Within your custom tests you might need to connect to the Kubernetes API.
    In Go, for example, you could use the API to check Kubernetes resources within your tests, or even create custom resources. Your custom test image executes within a Pod, so you can use an in-cluster connection to invoke the Kubernetes API.
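    A sketch of this pattern, assuming the standard client-go library (the "default" namespace below is an assumption; use the namespace your tests run in). This only works when executed inside a cluster, as is the case for a scorecard test pod:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// rest.InClusterConfig works because the test image runs in a Pod,
	// using the pod's mounted service account credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Example: list pods in a namespace ("default" is an assumption).
	pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d pods", len(pods.Items))
}
```

    Note that the service account used to run the test pods (see the --service-account flag above) must have RBAC permissions for whatever resources your test queries or creates.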