Scorecard

    The scorecard command, part of the operator-sdk, executes tests on your operator based upon a configuration file and test images.

    Tests are implemented within test images that are configured and constructed to be executed by scorecard.

    Scorecard assumes it is being executed with access to a configured Kubernetes cluster. Scorecard runs each test within a Pod, aggregates the pod logs, and sends the test results to the console.

    Scorecard has built-in basic and OLM tests, and it also provides a means to execute custom test definitions.

    Requirements

    The scorecard tests make no assumptions about the state of the operator being tested. Creating the operator and any custom resources it requires is outside the scope of scorecard itself.

    Scorecard tests can however create whatever resources they require if the tests are designed for resource creation.

    Running the Scorecard

    1. A default set of kustomize files should have been scaffolded by operator-sdk init. If that is not the case, run operator-sdk init as you would have to initialize your project and copy the scaffolded files.

    The default config generated by this kustomization can be immediately run against your operator. See the config file section for an explanation of the configuration file format.

    2. (Re)generate manifests and metadata for your Operator. make bundle will automatically add scorecard annotations to your bundle’s metadata, which are used by the scorecard command to run tests.
    3. Execute the scorecard command. See the Command Args section below for an overview of command invocation.

    The scorecard test execution is driven by a configuration file named config.yaml, generated by make bundle. Note that if you run make bundle, any changes you have made to config.yaml will be overwritten. To persist any changes to config.yaml, update the kustomize templates found in the config/scorecard directory. The configuration file is located at the following location within your bundle directory (bundle/ by default):

    $ tree ./bundle
    ./bundle
    ...
    └── tests
        └── scorecard
            └── config.yaml

    The following YAML spec is an example of the scorecard configuration file:

    kind: Configuration
    apiVersion: scorecard.operatorframework.io/v1alpha3
    metadata:
      name: config
    stages:
    - parallel: true
      tests:
      - image: quay.io/operator-framework/scorecard-test:latest
        entrypoint:
        - scorecard-test
        - basic-check-spec
        labels:
          suite: basic
          test: basic-check-spec-test
      - image: quay.io/operator-framework/scorecard-test:latest
        entrypoint:
        - scorecard-test
        - olm-bundle-validation
        labels:
          suite: olm
          test: olm-bundle-validation-test

    The configuration file defines the tests that scorecard executes. Tests are grouped into stages for fine-grained control of parallelism. Each test entry defines the test image to run (image), the command and arguments invoked within that image (entrypoint), and a set of labels used to select which tests to run (labels).

    Command Args

    The scorecard requires a positional argument that holds either the on-disk path to your operator bundle or the name of a bundle image. Note that the scorecard does not run your operator but merely uses the scorecard configuration within the bundle contents to know which tests to execute.

    For further information about the flags see the CLI documentation.

    Parallelism

    The configuration file allows operator developers to define separate stages for their tests. Stages run sequentially in the order they are defined in the configuration file. A stage contains a list of tests and a configurable parallel setting.

    By default (or when a stage explicitly sets parallel to false), tests in a stage are run sequentially in the order they are defined in the configuration file. Running tests one at a time is helpful to guarantee that no two tests interact and conflict with each other.

    However, if tests are designed to be fully isolated, they can be parallelized. To run a set of isolated tests in parallel, include them in the same stage and set parallel to true. All tests in a parallel stage are executed simultaneously, and scorecard waits for all of them to finish before proceeding to the next stage. This can make your tests run much faster.
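    As an illustration, a configuration fragment with two stages might look like the following sketch (the images and entrypoint arguments are placeholders, not real tests): the first stage runs its isolated tests simultaneously, while the second falls back to sequential execution.

```yaml
stages:
- parallel: true        # all tests in this stage run simultaneously
  tests:
  - image: quay.io/example/isolated-tests:latest   # placeholder image
    entrypoint: [scorecard-test, check-a]          # placeholder test
  - image: quay.io/example/isolated-tests:latest
    entrypoint: [scorecard-test, check-b]
- parallel: false       # the default; tests run one at a time, in order
  tests:
  - image: quay.io/example/stateful-tests:latest
    entrypoint: [scorecard-test, check-c]
```

    Scorecard finishes every test in the first stage before starting the second, so tests that share cluster state belong in a sequential stage.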

    Selecting Tests

    Tests are selected by setting the --selector CLI flag to a set of label strings. If a selector flag is not supplied, then all the tests within the scorecard configuration file are executed.

    Tests are executed serially, one after the other, with test results being aggregated by scorecard and written to stdout.

    To select a single test (basic-check-spec-test) you would enter the following:

    $ operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test

    To select a suite of tests, olm in this case, you would specify a label that is used by all the OLM tests:

    $ operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm

    To select multiple tests, you could specify them with a set-based label selector as follows:

    $ operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'

    Basic Test Suite

    Scorecard Output

    The --output flag specifies the scorecard results output format.

    JSON format

    See an example of the JSON format produced by a scorecard test:

    {
      "apiVersion": "scorecard.operatorframework.io/v1alpha3",
      "kind": "TestList",
      "items": [
        {
          "kind": "Test",
          "apiVersion": "scorecard.operatorframework.io/v1alpha3",
          "spec": {
            "image": "quay.io/operator-framework/scorecard-test:latest",
            "entrypoint": [
              "scorecard-test",
              "olm-bundle-validation"
            ],
            "labels": {
              "test": "olm-bundle-validation-test"
            }
          },
          "status": {
            "results": [
              {
                "name": "olm-bundle-validation",
                "log": "time=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Found metadata directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Found annotations file\" name=bundle-test\ntime=\"2020-06-10T19:02:49Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test\n",
                "state": "pass"
              }
            ]
          }
        }
      ]
    }
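    Because the JSON output follows the v1alpha3 TestList layout shown above, post-processing it requires only a JSON parser. The following sketch tallies result states with the Python standard library (the results.json file name is an assumption for illustration):

```python
import json

def summarize(testlist: dict) -> dict:
    """Count result states across all tests in a v1alpha3 TestList."""
    counts = {}
    for item in testlist.get("items", []):
        for result in item.get("status", {}).get("results", []):
            state = result.get("state", "unknown")
            counts[state] = counts.get(state, 0) + 1
    return counts

# Example usage (file name is an assumption):
#   operator-sdk scorecard <bundle_dir_or_image> -o json > results.json
#   with open("results.json") as f:
#       print(summarize(json.load(f)))
```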

    XML format

    See the example below for the results of a scorecard test formatted in XML. The scorecard tool formats the XML output for XUnit schema compatibility, which makes it easier to post-process the test results.

    <testsuites name="scorecard">
      <testsuite name="olm-bundle-validation-test" tests="1" skipped="0" failures="0" errors="0">
        <properties>
          <property name="spec.image" value="quay.io/operator-framework/scorecard-test:v1.19.0"></property>
          <property name="spec.entrypoint" value="scorecard-test olm-bundle-validation"></property>
          <property name="labels.test" value="olm-bundle-validation-test"></property>
        </properties>
        <testcase name="olm-bundle-validation" time="0001-01-01T00:00:00Z">
          <system-out>time=&#34;2022-04-12T19:21:52Z&#34; level=debug msg=&#34;Found manifests directory&#34; name=bundle-test&#xA;time=&#34;2022-04-12T19:21:52Z&#34; level=debug msg=&#34;Found metadata directory&#34; name=bundle-test&#xA;time=&#34;2022-04-12T19:21:52Z&#34; level=debug msg=&#34;Getting mediaType info from manifests directory&#34; name=bundle-test&#xA;time=&#34;2022-04-12T19:21:52Z&#34; level=debug msg=&#34;Found annotations file&#34; name=bundle-test&#xA;time=&#34;2022-04-12T19:21:52Z&#34; level=debug msg=&#34;Could not find optional dependencies file&#34; name=bundle-test&#xA;</system-out>
        </testcase>
      </testsuite>
      <!-- Some suites omitted for readability -->
    </testsuites>
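    Since the XML follows the XUnit layout, standard tooling can consume it directly. As a minimal sketch, the Python standard-library parser can extract the reported failure count per suite from a saved report:

```python
import xml.etree.ElementTree as ET

def suite_failures(xml_text: str) -> dict:
    """Map each testsuite name to its reported failure count."""
    root = ET.fromstring(xml_text)
    return {
        suite.get("name"): int(suite.get("failures", "0"))
        for suite in root.iter("testsuite")
    }

# Example usage: suite_failures(open("results.xml").read())
# (the results.xml file name is an assumption for illustration)
```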

    Text format

    The text format (the default) prints each test's image, entrypoint, labels, and results in a human-readable layout.

    NOTE The output format spec for each test matches the Test type layout.

    Exit Status

    The scorecard return code is 1 if any of the executed tests did not pass, and 0 if all selected tests passed.
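    When wrapping scorecard in scripts (for example in CI), the same rule is easy to reproduce from the JSON output. The function below is a sketch that mirrors the exit-status convention against a parsed v1alpha3 TestList:

```python
def exit_code(testlist: dict) -> int:
    """Return 1 if any result state is not "pass", mirroring scorecard's exit status."""
    for item in testlist.get("items", []):
        for result in item.get("status", {}).get("results", []):
            if result.get("state") != "pass":
                return 1
    return 0
```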

    Custom Tests

    Scorecard will execute custom tests if they follow these mandated conventions:

    • tests are implemented within a container image
    • tests accept an entrypoint which includes a command and arguments
    • tests produce v1alpha3 scorecard output in JSON format with no extraneous logging in the test output
    • tests can obtain the bundle contents at a shared mount point of /bundle
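    To make these conventions concrete, here is a minimal sketch of a custom test entrypoint in Python. The test name and the check it performs are invented for illustration, and the output envelope is an assumption modeled on the status block of the JSON example above; consult the canonical Go test image for the exact layout. It reads the bundle from the shared mount, performs one check, and prints a single JSON result with no extraneous logging:

```python
import json
import os
import sys

BUNDLE_DIR = "/bundle"  # shared mount point provided by scorecard

def run_check(bundle_dir: str) -> dict:
    """Hypothetical check: the bundle must contain a manifests directory."""
    ok = os.path.isdir(os.path.join(bundle_dir, "manifests"))
    return {
        "name": "custom-has-manifests",  # invented test name
        "state": "pass" if ok else "fail",
        "log": "manifests directory found" if ok else "manifests directory missing",
    }

def main() -> None:
    # Emit only the JSON result; any other output on stdout would
    # violate the "no extraneous logging" convention above.
    json.dump({"results": [run_check(BUNDLE_DIR)]}, sys.stdout)

if __name__ == "__main__":
    main()
```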

    See the example custom test image written in Go.

    Writing custom tests in other programming languages is possible if the test image follows the above guidelines.