Writing Custom Scorecard Tests

This guide outlines the steps to extend the existing scorecard tests and implement operator-specific custom tests.

Run scorecard with custom tests:

Building test image:

The following steps explain how to create a custom test image that can be used with scorecard to run operator-specific tests. As an example, we start by creating a sample Go repository containing the test bundle data, the custom scorecard tests, and a Makefile to help build a test image.

The sample test image repository present here has the following project structure:

$ tree .
.
...
├── config
|   ...
│   └── scorecard
│       ├── bases
│       │   └── config.yaml
│       ├── kustomization.yaml
│       └── patches
│           ├── basic.config.yaml
│           └── olm.config.yaml
├── Makefile
├── bundle
│   ├── manifests
│   │   ├── cache.example.com_memcached_crd.yaml
│   │   └── memcached-operator.clusterserviceversion.yaml
│   ├── metadata
│   │   └── annotations.yaml
│   └── tests
│       └── scorecard
│           └── config.yaml
├── go.mod
├── go.sum
├── images
│   └── custom-scorecard-tests
│       ├── Dockerfile
│       ├── bin
│       │   ├── entrypoint
│       │   └── user_setup
│       ├── cmd
│       │   └── test
│       │       └── main.go
│       └── custom-scorecard-tests
└── internal
    └── tests
        └── tests.go
  1. config/scorecard - Contains a kustomization for generating a config from a base and set of overlays.
  2. bundle/ - Contains bundle manifests and metadata under test.
  3. bundle/tests/scorecard/config.yaml - Configuration file generated by make bundle from the config/scorecard kustomization.
  4. images/custom-scorecard-tests/cmd/test/main.go - Scorecard test binary.
  5. internal/tests/tests.go - Contains the implementation of custom tests specific to the operator.
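As an illustration, the Dockerfile under images/custom-scorecard-tests/ could look something like the following. The base image and user-setup steps are hypothetical; adjust them to your own project:

```dockerfile
# Illustrative base image; any minimal image that can run the test binary works.
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

ENV HOME=/opt/custom-scorecard-tests \
    USER_NAME=custom-scorecard-tests \
    USER_UID=1001

# Copy the pre-built test binary into the image.
COPY custom-scorecard-tests /usr/local/bin/custom-scorecard-tests

# entrypoint and user_setup come from images/custom-scorecard-tests/bin/ in the tree above.
COPY bin /usr/local/bin
RUN /usr/local/bin/user_setup

ENTRYPOINT ["/usr/local/bin/entrypoint"]

USER ${USER_UID}
```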

Writing custom test logic:

Scorecard currently implements a set of basic and olm tests for the bundle, custom resources, and custom resource definitions. Additional tests specific to the operator can also be included in scorecard's test suite.

The tests.go file is where the custom tests are implemented in the sample test image project. These tests populate the scapiv1alpha3.TestResult struct with their results, which are then converted to JSON for output. For example, a simple custom test can look like the following:

package tests

import (
  apimanifests "github.com/operator-framework/api/pkg/manifests"
  scapiv1alpha3 "github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3"
)

const (
  CustomTest1Name = "customtest1"
)

// CustomTest1 implements a custom scorecard test.
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus {
  r := scapiv1alpha3.TestResult{}
  r.Name = CustomTest1Name
  r.Description = "Custom Test 1"
  r.State = scapiv1alpha3.PassState
  r.Errors = make([]string, 0)
  r.Suggestions = make([]string, 0)

  // Implement relevant custom test logic here.

  // wrapResult (a small helper defined in the sample project) wraps the
  // TestResult in a scapiv1alpha3.TestStatus.
  return wrapResult(r)
}
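wrapResult is not part of the scorecard API; the sample project defines it as a small helper that wraps a single TestResult into the TestStatus a test function returns. A self-contained sketch with stand-in types (the real ones come from scapiv1alpha3):

```go
package main

import "fmt"

// Stand-ins for scapiv1alpha3.TestResult and scapiv1alpha3.TestStatus,
// used here so the sketch is self-contained.
type TestResult struct {
	Name        string
	Description string
	State       string
	Errors      []string
	Suggestions []string
}

type TestStatus struct {
	Results []TestResult
}

// wrapResult wraps a single test result in the TestStatus that a
// scorecard test function returns.
func wrapResult(r TestResult) TestStatus {
	return TestStatus{Results: []TestResult{r}}
}

func main() {
	status := wrapResult(TestResult{Name: "customtest1", State: "pass"})
	fmt.Println(len(status.Results), status.Results[0].Name)
}
```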

Scorecard Configuration file:

The configuration file includes the test definitions and metadata needed to run the tests. This file is constructed using a kustomization under config/scorecard, with overlays for each set of tests.

For the example CustomTest1 function, add the following patch to config/scorecard/patches/customtest1.config.yaml:

- op: add
  path: /stages/0/tests/-
  value:
    image: quay.io/<username>/custom-scorecard-tests:latest
    entrypoint:
    - custom-scorecard-tests
    - customtest1
    labels:
      suite: custom
      test: customtest1

The important fields to note here are:

  1. image - the name and tag of the test image, as specified in the Makefile.
  2. labels - the name of the test and the suite it belongs to. These can be passed to the operator-sdk scorecard command to run the desired tests.

Next, add a JSON 6902 patch to your config/scorecard/kustomization.yaml:

patchesJson6902:
...
- path: patches/customtest1.config.yaml
  target:
    group: scorecard.operatorframework.io
    version: v1alpha3
    kind: Configuration
    name: config

Once you run make bundle, the bundle/tests/scorecard/config.yaml will be (re)generated with your custom test.
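For reference, the regenerated file might then contain a stage like the following (abbreviated and illustrative; the test entry mirrors the patch above):

```yaml
apiVersion: scorecard.operatorframework.io/v1alpha3
kind: Configuration
metadata:
  name: config
stages:
- parallel: true
  tests:
  - image: quay.io/<username>/custom-scorecard-tests:latest
    entrypoint:
    - custom-scorecard-tests
    - customtest1
    labels:
      suite: custom
      test: customtest1
```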

If generating a config file outside of the on-disk bundle, you can run:

$ kustomize build config/scorecard > path/to/config.yaml

Note: The default location of config.yaml inside the bundle is <bundle directory>/tests/scorecard/config.yaml. It can be overridden using the --config flag. For more details on the configuration file, refer to the user docs.

Scorecard binary:

The scorecard test image implementation requires the bundle under test to be present in the test image. The apimanifests.GetBundleFromDir() function reads the pod's bundle to fetch the manifests and scorecard configuration from the desired path.

cfg, err := apimanifests.GetBundleFromDir(scorecard.PodBundleRoot)
if err != nil {
  log.Fatal(err.Error())
}

The scorecard binary uses the config.yaml file to locate tests and executes them in pods that scorecard creates. Custom test images run inside these pods, with the bundle contents passed to the test image container on a shared mount point. Which custom test is executed is driven by the entrypoint command and arguments in config.yaml.

An example custom scorecard test implementation is present here.

The names by which the tests are identified in config.yaml, and passed in the scorecard command, are specified here:

...
switch entrypoint[0] {
case tests.CustomTest1Name:
  result = tests.CustomTest1(cfg)
  ...
}
...

The result of the custom test, which is in scapiv1alpha3.TestResult format, is converted to JSON for output.

prettyJSON, err := json.MarshalIndent(result, "", "    ")
if err != nil {
  log.Fatal("Failed to generate json", err)
}
fmt.Printf("%s\n", string(prettyJSON))

The names of the custom tests are also included in the printValidTests() function:

func printValidTests() (result scapiv1alpha3.TestStatus) {
  ...
  str := fmt.Sprintf("Valid tests for this image include: %s", tests.CustomTest1Name)
  result.Errors = append(result.Errors, str)
  ...
}

Building the project

The sample project's Makefile contains targets to build the sample custom test image; the current Makefile is found here. You can use this Makefile as a reference for your own custom test image Makefile.

To build the sample custom test image, run:

make image/custom-scorecard-tests

Running scorecard command

The operator-sdk scorecard command executes the scorecard tests, with the location of the test bundle specified in the command. The name or suite of the tests to execute can be specified with the --selector flag. The command creates scorecard pods with the image specified in config.yaml for each selected test. For example, the customtest1 test produces the following JSON output.

$ operator-sdk scorecard <bundle_dir_or_image> --selector=suite=custom -o json --wait-time=32s --skip-cleanup=false
{
  "kind": "TestList",
  "apiVersion": "scorecard.operatorframework.io/v1alpha3",
  "items": [
    {
      "kind": "Test",
      "apiVersion": "scorecard.operatorframework.io/v1alpha3",
      "spec": {
        "image": "quay.io/operator-framework/scorecard-test:latest",
        "entrypoint": [
          "custom-scorecard-tests",
          "customtest1"
        ],
        "labels": {
          "suite": "custom",
          "test": "customtest1"
        }
      },
      "status": {
        "results": [
          {
            "name": "customtest1",
            "log": "an ISV custom test",
            "state": "pass"
          }
        ]
      }
    }
  ]
}

Note: More details on the usage of the operator-sdk scorecard command and its flags can be found in the scorecard user documentation.

Debugging scorecard custom tests

The --skip-cleanup flag can be used when executing the operator-sdk scorecard command so that the test pods scorecard creates are not removed. This is useful when debugging or writing new tests, since you can then view the test logs and the pod manifests.

Storing scorecard test output

The --test-output flag can be used when executing the operator-sdk scorecard command, together with a config specifying output persistence, to store the output of the scorecard tests in a specific directory. Any persistent volume data is copied into the specified local directory upon completion of the scorecard tests.

$ operator-sdk scorecard ./bundle --test-output=/mytestoutput

Note: By default, the gathered test output will be stored in $(pwd)/test-output.

Scorecard initContainer

Scorecard inserts an initContainer into each test pod it creates. The initContainer uncompresses the operator bundle contents and mounts them at a shared mount point accessible to the test images. The bundle contents are stored in a ConfigMap, built uniquely for each scorecard test execution. Upon scorecard completion, the ConfigMap is removed as part of normal cleanup, along with the test pods created by scorecard.

Using Custom Service Accounts

Scorecard does not deploy service accounts, RBAC resources, or namespaces for your tests; it considers these resources to be outside its scope. You can, however, create whatever service account your tests require (for example, config/rbac/service_account.yaml in Go operator projects) and then specify that service account on the command line:

$ operator-sdk scorecard <bundle_dir_or_image> --service-account=my-project-controller-manager

Also, you can specify a non-default namespace that scorecard will run in:

$ operator-sdk scorecard <bundle_dir_or_image> --namespace=my-project-system

If you do not specify either of these flags, scorecard runs test pods in the default namespace with the default service account.
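For illustration, a minimal manifest for the service account used in the commands above could look like this (the names are hypothetical; grant additional RBAC as your tests require):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-project-controller-manager
  namespace: my-project-system
```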

Returning Multiple Test Results

Some custom tests need to return more than a single test result, or are better implemented that way. For this case, scorecard's output API allows multiple test results to be defined for a single test.
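A sketch of this pattern with stand-in types (the real scapiv1alpha3.TestStatus carries a slice of results), where one custom test reports two results:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-ins for scapiv1alpha3.TestResult and scapiv1alpha3.TestStatus.
type TestResult struct {
	Name  string `json:"name"`
	State string `json:"state"`
}

type TestStatus struct {
	Results []TestResult `json:"results"`
}

// multiResultTest returns two results from a single custom test;
// the test and result names are illustrative.
func multiResultTest() TestStatus {
	return TestStatus{Results: []TestResult{
		{Name: "customtest2-checkA", State: "pass"},
		{Name: "customtest2-checkB", State: "fail"},
	}}
}

func main() {
	out, _ := json.Marshal(multiResultTest())
	fmt.Println(string(out))
}
```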

Accessing the Kube API

Within your custom tests you might need to connect to the Kubernetes API. In Go, you could use the client-go library, for example, to inspect Kubernetes resources within your tests, or even to create custom resources. Your custom test image executes within a pod, so you can use an in-cluster connection to reach the Kubernetes API.
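A minimal sketch of an in-cluster client, assuming k8s.io/client-go is a dependency of your test module (this only runs inside a cluster, since it relies on the pod's mounted service account credentials; the namespace is illustrative):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads the service account token and CA certificate
	// mounted into every pod, so this works only when run in-cluster.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Example: list pods in a namespace your service account can read.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d pods", len(pods.Items))
}
```

Note that listing pods requires the test's service account to have the corresponding RBAC permissions, which ties back to the --service-account flag described earlier.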