Defining cluster service versions (CSVs)

The Operator SDK includes the CSV generator to generate a CSV for the current Operator project, customized using information contained in YAML manifests and Operator source files.

A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is designed so that its update system can be easily extended to handle new CSV features going forward.

Operator bundle manifests, which include cluster service versions (CSVs), describe how to display, create, and manage an application with Operator Lifecycle Manager (OLM). The CSV generator in the Operator SDK, called by the generate bundle subcommand, is the first step towards publishing your Operator to a catalog and deploying it with OLM. The subcommand requires certain input manifests to construct a CSV manifest; all inputs are read when the command is invoked, along with a CSV base, to idempotently generate or regenerate a CSV.

Typically, the generate kustomize manifests subcommand would be run first to generate the input Kustomize bases that are consumed by the generate bundle subcommand. However, the Operator SDK provides the make bundle command, which automates several tasks, including running the following subcommands in order:

  1. generate kustomize manifests

  2. generate bundle

  3. bundle validate

Additional resources

  • See for a full procedure that includes generating a bundle and CSV.

The make bundle command creates the following files and directories in your Operator project:

  • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion (CSV) object

  • A bundle metadata directory named bundle/metadata

  • All custom resource definitions (CRDs) in a config/crd directory

  • A Dockerfile bundle.Dockerfile

The following resources are typically included in a CSV:

Role

Defines Operator permissions within a namespace.

ClusterRole

Defines cluster-wide Operator permissions.

Deployment

Defines how an Operand of an Operator is run in pods.

CustomResourceDefinition (CRD)

Defines custom resources that your Operator reconciles.

Custom resource examples

Examples of resources adhering to the spec of a particular CRD.

Version management

The --version flag for the generate bundle subcommand supplies a semantic version for your bundle when creating one for the first time and when upgrading an existing one.

If you set the VERSION variable in your Makefile, the --version flag is automatically set to that value when the generate bundle subcommand is run by the make bundle command. The CSV version is the same as the Operator version, and a new CSV is generated when you upgrade Operator versions.

Manually-defined CSV fields

Many CSV fields cannot be populated using generated, generic manifests that are not specific to Operator SDK. These fields are mostly human-written metadata about the Operator and various custom resource definitions (CRDs).

Operator authors must directly modify their cluster service version (CSV) YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.

The following tables detail which manually-defined CSV fields are required and which are optional.

Table 2. Optional
Field / Description

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated.

Further details on what data each field above should hold are found in the CSV spec.
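For illustration only, these optional fields might be populated in the spec of a CSV as in the following sketch; all names, URLs, and values are placeholders:

  spec:
    replaces: example-operator.v0.1.0
    maturity: alpha
    links:
    - name: Documentation
      url: https://example.com/docs
    selector:
      matchLabels:
        name: example-operator
    icon:
    - base64data: <base64-encoded-image>
      mediatype: image/png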

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code.

Operator metadata annotations

Operator developers can manually define certain annotations in the metadata of a cluster service version (CSV) to enable features or highlight capabilities in user interfaces (UIs), such as OperatorHub.

The following table lists Operator metadata annotations that can be manually defined using metadata.annotations fields.

Table 3. Annotations
Field / Description

alm-examples

Provide custom resource definition (CRD) templates with a minimum set of configuration. Compatible UIs pre-fill this template for users to further customize.

operatorframework.io/initialization-resource

Specify a single required custom resource that must be created at the time that the Operator is installed. Must include a template that contains a complete YAML definition.

operatorframework.io/suggested-namespace

Set a suggested namespace where the Operator should be deployed.

operators.openshift.io/infrastructure-features

Infrastructure features supported by the Operator. Users can view and filter by these features when discovering Operators through OperatorHub in the web console. Valid, case-sensitive values:

  • disconnected: Operator supports being mirrored into disconnected catalogs, including all dependencies, and does not require internet access. All related images required for mirroring are listed by the Operator.

  • cnf: Operator provides a Cloud-native Network Functions (CNF) Kubernetes plug-in.

  • cni: Operator provides a Container Network Interface (CNI) Kubernetes plug-in.

  • csi: Operator provides a Container Storage Interface (CSI) Kubernetes plug-in.

  • fips: Operator accepts the FIPS mode of the underlying platform and works on nodes that are booted into FIPS mode.

The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OKD deployments on the x86_64 architecture.

  • proxy-aware: Operator supports running on a cluster behind a proxy. Operator accepts the standard proxy environment variables HTTP_PROXY and HTTPS_PROXY, which Operator Lifecycle Manager (OLM) provides to the Operator automatically when the cluster is configured to use a proxy. Required environment variables are passed down to Operands for managed workloads.

operators.openshift.io/valid-subscription

Free-form array for listing any specific subscriptions that are required to use the Operator. For example, '["3Scale Commercial License", "Red Hat Managed Integration"]'.

operators.operatorframework.io/internal-objects

Hides CRDs in the UI that are not meant for user manipulation.

Example use cases

Operator supports disconnected and proxy-aware

  operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'

Operator requires an OKD license

  operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'

Operator requires a 3scale license

  operators.openshift.io/valid-subscription: '["3Scale Commercial License", "Red Hat Managed Integration"]'

Operator supports disconnected and proxy-aware, and requires an OKD license

  operators.openshift.io/infrastructure-features: '["disconnected", "proxy-aware"]'
  operators.openshift.io/valid-subscription: '["OpenShift Container Platform"]'

Enabling your Operator for restricted network environments

As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.

Operator requirements for supporting disconnected mode

  • In the cluster service version (CSV) of your Operator:

List any related images, or other container images that your Operator might require to perform its functions.

    • Reference all specified images by a digest (SHA) and not by a tag.

  • All dependencies of your Operator must also support running in a disconnected mode.

  • Your Operator must not require any off-cluster resources.

For the CSV requirements, you can make the following changes as the Operator author.

Prerequisites

  • An Operator project with a CSV.

Procedure

  1. Use SHA references to related images in two places in the CSV for your Operator:

    1. Update spec.relatedImages:

      ...
      spec:
        relatedImages: (1)
        - name: etcd-operator (2)
          image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
        - name: etcd-image
          image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
      ...

      (1) Create a relatedImages section and set the list of related images.
      (2) Specify a unique identifier for the image.
      (3) Specify each image by a digest (SHA), not by an image tag.
    2. Update the env section in the deployment when declaring environment variables that inject the image that the Operator should use:

      spec:
        install:
          spec:
            deployments:
            - name: etcd-operator-v3.1.1
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    name: etcd-operator
                strategy:
                  type: Recreate
                template:
                  metadata:
                    labels:
                      name: etcd-operator
                  spec:
                    containers:
                    - args:
                      - /opt/etcd/bin/etcd_operator_run.sh
                      env:
                      - name: WATCH_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.annotations['olm.targetNamespaces']
                      - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE (1)
                        value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 (2)
                      - name: ETCD_LOG_LEVEL
                        value: INFO
                      image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 (3)
                      imagePullPolicy: IfNotPresent
                      livenessProbe:
                        httpGet:
                          path: /healthy
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      name: etcd-operator
                      readinessProbe:
                        httpGet:
                          path: /ready
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      resources: {}
                    serviceAccountName: etcd-operator
          strategy: deployment

      (1) Inject the images referenced by the Operator by using environment variables.
      (2) Specify each image by a digest (SHA), not by an image tag.
      (3) Also reference the Operator container image by a digest (SHA), not by an image tag.

      When configuring probes, the timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.
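      For example, a liveness probe that satisfies this constraint might be configured as in the following sketch, where a timeoutSeconds value of 5 stays below a periodSeconds value of 30; the values are illustrative only:

        livenessProbe:
          httpGet:
            path: /healthy
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 30
          timeoutSeconds: 5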

  2. Add the disconnected annotation, which indicates that the Operator works in a disconnected environment:

    metadata:
      annotations:
        operators.openshift.io/infrastructure-features: '["disconnected"]'

Enabling your Operator for multiple architectures and operating systems

Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OKD cluster.

If your Operator supports variants other than AMD64 and Linux, you can add labels to the cluster service version (CSV) that provides the Operator to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:

  operatorframework.io/arch.<arch>: supported (1)
  operatorframework.io/os.<os>: supported (2)

  (1) Set <arch> to a supported string.
  (2) Set <os> to a supported string.

If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:

  labels:
    operatorframework.io/os.linux: supported

If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:

  labels:
    operatorframework.io/arch.amd64: supported

If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.

Prerequisites

  • An Operator project with a CSV.

  • To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.

  • For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.

Procedure

  • Add a label in the metadata.labels of your CSV for each supported architecture and operating system that your Operator supports:

    labels:
      operatorframework.io/arch.s390x: supported
      operatorframework.io/os.zos: supported
      operatorframework.io/os.linux: supported (1)
      operatorframework.io/arch.amd64: supported (1)

    (1) After you add a new architecture or operating system, you must also include the default os.linux and arch.amd64 variants explicitly.

Additional resources

  • See the specification for more information on manifest lists.

Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OKD when labeling or filtering Operators that support multiple architectures and operating systems:

Table 4. Architectures supported on OKD

Architecture                     String
AMD64                            amd64
64-bit PowerPC little-endian     ppc64le
IBM Z                            s390x

Table 5. Operating systems supported on OKD

Operating system    String
Linux               linux
z/OS                zos

Different versions of OKD and other Kubernetes-based distributions might support a different set of architectures and operating systems.

Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, to work properly. If resolved from a subscription, Operator Lifecycle Manager (OLM) defaults the namespaced resources of an Operator to the namespace of its subscription.

As an Operator author, you can instead express a desired target namespace as part of your cluster service version (CSV) to maintain control over the final namespaces of the resources installed for your Operator. When a cluster administrator adds the Operator to a cluster by using OperatorHub, the web console then autopopulates the suggested namespace during the installation process.

Procedure

  • In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace: <namespace> (1)

    (1) Set your suggested namespace.

Enabling Operator conditions

Operator Lifecycle Manager (OLM) provides Operators with a channel to communicate complex states that influence OLM behavior while managing the Operator. By default, OLM creates an OperatorCondition custom resource definition (CRD) when it installs an Operator. Based on the conditions set in the OperatorCondition custom resource (CR), the behavior of OLM changes accordingly.

To support Operator conditions, an Operator must be able to read the OperatorCondition CR created by OLM and have the ability to:

  • Get the specific condition.

  • Set the status of a specific condition.

This can be accomplished by using the operator-lib library. An Operator author can provide a controller-runtime client in their Operator for the library to access the OperatorCondition CR owned by the Operator in the cluster.

The library provides a generic Conditions interface, which has the following methods to Get and Set a conditionType in the OperatorCondition CR:

Get

To get the specific condition, the library uses the client.Get function from controller-runtime, which requires an ObjectKey of type types.NamespacedName present in conditionAccessor.

Set

To update the status of the specific condition, the library uses the client.Update function from controller-runtime. An error occurs if the conditionType is not present in the CRD.

The Operator is allowed to modify only the status subresource of the CR. Operators can either delete or update the status.conditions array to include the condition. For more details on the format and description of the fields present in the conditions, see the upstream documentation.
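For reference, an OperatorCondition CR with the Upgradeable condition set in its status might look like the following sketch; the resource name, namespace, and condition values are illustrative, and the API version can vary between OLM releases:

  apiVersion: operators.coreos.com/v1
  kind: OperatorCondition
  metadata:
    name: my-operator
    namespace: operators
  status:
    conditions:
    - type: Upgradeable
      status: "False"
      reason: "MigrationInProgress"
      message: "The Operator is migrating data and cannot be upgraded now."
      lastTransitionTime: "2021-05-06T12:00:00Z"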

Operator SDK v1.8.0 supports operator-lib v0.3.0.

Prerequisites

  • An Operator project generated using the Operator SDK.

Procedure

To enable Operator conditions in your Operator project:

  1. In the go.mod file of your Operator project, add operator-framework/operator-lib as a required library:

    module github.com/example-inc/memcached-operator

    go 1.15

    require (
      k8s.io/apimachinery v0.19.2
      k8s.io/client-go v0.19.2
      sigs.k8s.io/controller-runtime v0.7.0
      github.com/operator-framework/operator-lib v0.3.0
    )
  2. Write your own constructor in your Operator logic that:

    • Accepts a controller-runtime client.

    • Accepts a conditionType.

    • Returns a Condition interface to update or add conditions.

    Because OLM currently supports the Upgradeable condition, you can create an interface that has methods to access the Upgradeable condition. For example:

    import (
      ...
      apiv1 "github.com/operator-framework/api/pkg/operators/v1"
    )

    func NewUpgradeable(cl client.Client) (Condition, error) {
      return NewCondition(cl, apiv1.OperatorUpgradeable)
    }

    cond, err := NewUpgradeable(cl)

    In this example, the NewUpgradeable constructor is further used to create a variable cond of type Condition. The cond variable would in turn have Get and Set methods, which can be used for handling the OLM Upgradeable condition.

Defining webhooks

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

The cluster service version (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:

  • Admission webhooks (validating and mutating)

  • Conversion webhooks

Procedure

  • Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook. The following example contains all three types of webhooks:

    CSV containing webhooks

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: webhook-operator.v0.0.1
    spec:
      customresourcedefinitions:
        owned:
        - kind: WebhookTest
          name: webhooktests.webhook.operators.coreos.io (1)
          version: v1
      install:
        spec:
          deployments:
          - name: webhook-operator-webhook
            ...
          ...
        ...
        strategy: deployment
      installModes:
      - supported: false
        type: OwnNamespace
      - supported: false
        type: SingleNamespace
      - supported: false
        type: MultiNamespace
      - supported: true
        type: AllNamespaces
      webhookdefinitions:
      - type: ValidatingAdmissionWebhook (2)
        admissionReviewVersions:
        - v1beta1
        - v1
        containerPort: 443
        targetPort: 4343
        deploymentName: webhook-operator-webhook
        failurePolicy: Fail
        generateName: vwebhooktest.kb.io
        rules:
        - apiGroups:
          - webhook.operators.coreos.io
          apiVersions:
          - v1
          operations:
          - CREATE
          - UPDATE
          resources:
          - webhooktests
        sideEffects: None
        webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
      - type: MutatingAdmissionWebhook (3)
        admissionReviewVersions:
        - v1beta1
        - v1
        containerPort: 443
        targetPort: 4343
        deploymentName: webhook-operator-webhook
        failurePolicy: Fail
        generateName: mwebhooktest.kb.io
        rules:
        - apiGroups:
          - webhook.operators.coreos.io
          apiVersions:
          - v1
          operations:
          - CREATE
          - UPDATE
          resources:
          - webhooktests
        sideEffects: None
        webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
      - type: ConversionWebhook (4)
        admissionReviewVersions:
        - v1beta1
        - v1
        containerPort: 443
        targetPort: 4343
        generateName: cwebhooktest.kb.io
        sideEffects: None
        webhookPath: /convert
        conversionCRDs:
        - webhooktests.webhook.operators.coreos.io (5)
    ...

    (1) The CRDs targeted by the conversion webhook must exist here.
    (2) A validating admission webhook.
    (3) A mutating admission webhook.
    (4) A conversion webhook.
    (5) The spec.PreserveUnknownFields property of each CRD must be set to false or nil.

Webhook considerations for OLM

When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:

  • The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.

  • The CSV must contain a deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.

When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the Operator group that the Operator is deployed in.

Certificate authority constraints

OLM is configured to provide each deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the deployment was originally used by the API service lifecycle logic. As a result:

  • The TLS certificate file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.crt.

  • The TLS key file is mounted to the deployment at /apiserver.local.config/certificates/apiserver.key.

Admission webhook rules constraints

To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:

  • Requests that target all groups

  • Requests that target the operators.coreos.com group

  • Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
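For example, a rules entry similar to the following sketch intercepts requests to all groups and resources, so OLM would place the CSV in the failed phase; the wildcard values illustrate the rejected pattern:

    rules:
    - apiGroups:
      - '*'
      apiVersions:
      - v1
      operations:
      - CREATE
      - UPDATE
      resources:
      - '*'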

Conversion webhook constraints

OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:

  • CSVs featuring a conversion webhook can only support the AllNamespaces install mode.

  • The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.

  • The conversion webhook defined in the CSV must target an owned CRD.

  • There can only be one conversion webhook on the entire cluster for a given CRD.

Understanding your custom resource definitions (CRDs)

There are two types of custom resource definitions (CRDs) that your Operator can use: ones that are owned by it and ones that it depends on, which are required.

Owned CRDs

The custom resource definitions (CRDs) owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of replica sets in another. Each one should be listed out in the CSV file.

Table 6. Owned CRD fields
Field / Description / Required or optional

Name

The full name of your CRD.

Required

Version

The version of that object API.

Required

Kind

The machine readable name of your CRD.

Required

DisplayName

A human readable version of your CRD name, for example MongoDB Standalone.

Required

Description

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Required

Group

The API group that this CRD belongs to, for example database.example.com.

Optional

Resources

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

These descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a secret or config map that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.

  • StatusDescriptors: A reference to fields in the status block of an object.

  • ActionDescriptors: A reference to actions that can be performed on an object.

All descriptors accept the following fields:

  • DisplayName: A human readable name for the Spec, Status, or Action.

  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.

  • Path: A dot-delimited path of the field on the object that this descriptor describes.

  • X-Descriptors: Used to determine which “capabilities” this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OKD.

Also see the openshift/console project for more information on descriptors in general.

Optional

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a secret and config map, and orchestrates services, stateful sets, pods and config maps:

Example owned CRD

  - displayName: MongoDB Standalone
    group: mongodb.com
    kind: MongoDbStandalone
    name: mongodbstandalones.mongodb.com
    resources:
    - kind: Service
      name: ''
      version: v1
    - kind: StatefulSet
      name: ''
      version: v1beta2
    - kind: Pod
      name: ''
      version: v1
    - kind: ConfigMap
      name: ''
      version: v1
    specDescriptors:
    - description: Credentials for Ops Manager or Cloud Manager.
      displayName: Credentials
      path: credentials
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
    - description: Project this deployment belongs to.
      displayName: Project
      path: project
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
    - description: MongoDB version to be installed.
      displayName: Version
      path: version
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:label'
    statusDescriptors:
    - description: The status of each of the pods for the MongoDB cluster.
      displayName: Pod Status
      path: pods
      x-descriptors:
      - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
    version: v1
    description: >-
      MongoDB Deployment consisting of only one host. No replication of
      data.
Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace, and a service account is created for each Operator to create, watch, and modify the Kubernetes resources required.

Example required CRD
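The following sketch shows how such a dependency might be declared in the required section of the customresourcedefinitions block of a CSV; the etcd names and description are illustrative, following the etcd example above:

  required:
  - name: etcdclusters.etcd.database.coreos.com
    version: v1beta2
    kind: EtcdCluster
    displayName: etcd Cluster
    description: Represents a cluster of etcd nodes.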

CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a single cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.

  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.

Adding a new CRD version

Procedure

To add a new version of a CRD to your Operator:

  1. Add a new entry in the CRD resource under the versions section of your CSV.

    For example, if the current CRD has a version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version, add a new entry for v1beta1:

    versions:
    - name: v1alpha1
      served: true
      storage: false
    - name: v1beta1 (1)
      served: true
      storage: true

    (1) New entry.
  2. Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:

    customresourcedefinitions:
      owned:
      - name: cluster.example.com
        version: v1beta1 (1)
        kind: cluster
        displayName: Cluster

    (1) Update the version.
  3. Push the updated CRD and CSV to your bundle.

Deprecating or removing a CRD version

Operator Lifecycle Manager (OLM) does not allow a serving version of a custom resource definition (CRD) to be removed right away. Instead, a deprecated version of the CRD must be first disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.

Procedure

To deprecate and remove a specific version of a CRD:

  1. Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

    versions:
    - name: v1alpha1
      served: false (1)
      storage: true

    (1) Set to false.
  2. Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:

    versions:
    - name: v1alpha1
      served: false
      storage: false (1)
    - name: v1beta1
      served: true
      storage: true (1)

    (1) Update the storage fields accordingly.

To remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions field in the status of the CRD. OLM attempts to do this for you if it detects that a stored version no longer exists in the new CRD.

  3. Upgrade the CRD with the above changes.

  4. In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:

    versions:
    - name: v1beta1
      served: true
      storage: true
  5. Ensure the referencing CRD version in the owned section of your CSV is updated accordingly if that version is removed from the CRD.

CRD templates

Users of your Operator must be made aware of which options are required versus optional. You can provide templates for each of your custom resource definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs pre-fill this template for users to further customize.

The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:

  metadata:
    annotations:
      alm-examples: >-
        [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

Hiding internal objects

It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.

As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the cluster service version (CSV) of your Operator.

Procedure

  1. Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.

  2. Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

    Internal object annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' (1)
    ...

    (1) Set any internal CRDs as an array of strings.

Initializing required custom resources

An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.

As an Operator developer, you can specify a single required custom resource that must be created at the time that the Operator is installed by adding the operatorframework.io/initialization-resource annotation to the cluster service version (CSV). The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.

If this annotation is defined, after installing the Operator from the OKD web console, the user is prompted to create the resource using the template provided in the CSV.

Procedure

  • Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:

    Initialization resource annotation
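    The following sketch suggests what such an annotation could look like; the StorageCluster definition is abbreviated, and its API group and fields are illustrative rather than a complete, authoritative example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operatorframework.io/initialization-resource: |-
          {
            "apiVersion": "ocs.openshift.io/v1",
            "kind": "StorageCluster",
            "metadata": {
              "name": "example-storagecluster"
            },
            "spec": {
              ...
            }
          }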

Understanding your API services

As with CRDs, there are two types of API services that your Operator may use: owned and required.

Owned API services

When a CSV owns an API service, it is responsible for describing the deployment of the extension api-server that backs it and the group/version/kind (GVK) it provides.

An API service is uniquely identified by the group/version it provides and can be listed multiple times to denote the different kinds it is expected to provide.

Table 8. Owned API service fields
Field / Description / Required or optional

Group

Group that the API service provides, for example database.example.com.

Required

Version

Version of the API service, for example v1alpha1.

Required

Kind

A kind that the API service is expected to provide.

Required

Name

The plural name for the API service provided.

Required

DeploymentName

Name of the deployment defined by your CSV that corresponds to your API service (required for owned API services). During the CSV pending phase, the OLM Operator searches the InstallStrategy of your CSV for a Deployment spec with a matching name, and if not found, does not transition the CSV to the “Install Ready” phase.

Required

DisplayName

A human readable version of your API service name, for example MongoDB Standalone.

Required

Description

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Required

Resources

Your API services own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the service or ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, do not list config maps that store internal state that are not meant to be modified by a user.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

Essentially the same as for owned CRDs.

Optional
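For illustration, an owned API service entry in the apiservicedefinitions section of a CSV might look like the following sketch; the group, kind, and deployment names are placeholders:

  apiservicedefinitions:
    owned:
    - group: database.example.com
      version: v1alpha1
      kind: MongoDbStandalone
      name: mongodbstandalones
      deploymentName: mongodb-api-server
      displayName: MongoDB Standalone
      description: An API service that provides the MongoDbStandalone kind.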

API service resource creation

Operator Lifecycle Manager (OLM) is responsible for creating or replacing the service and API service resources for each unique owned API service:

  • Service pod selectors are copied from the CSV deployment matching the DeploymentName field of the API service description.

  • A new CA key/certificate pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective API service resource.

API service serving certificates

OLM handles generating a serving key/certificate pair whenever an owned API service is being installed. The serving certificate has a common name (CN) containing the hostname of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding API service resource.

The certificate is stored as a type kubernetes.io/tls secret in the deployment namespace, and a volume named apiservice-cert is automatically appended to the volumes section of the deployment in the CSV matching the DeploymentName field of the API service description.

If one does not already exist, a volume mount with a matching name is also appended to all containers of that deployment. This allows users to define a volume mount with the expected name to accommodate any custom path requirements. The path of the generated volume mount defaults to /apiserver.local.config/certificates and any existing volume mounts with the same path are replaced.
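For example, a container in that deployment could declare the expected volume mount name to control where the certificate is placed, as in the following sketch; the container name and custom path are assumptions for illustration:

  containers:
  - name: my-api-server
    volumeMounts:
    - name: apiservice-cert    # matching name prevents OLM from appending a mount at the default path
      mountPath: /etc/my-api-server/certs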

Required API services

OLM ensures all required CSVs have an API service that is available and all expected GVKs are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by API services it does not own.

Table 9. Required API service fields
Field / Description / Required or optional

Group

Group that the API service provides, for example database.example.com.

Required

Version

Version of the API service, for example v1alpha1.

Required

Kind

A kind that the API service is expected to provide.

Required

DisplayName

A human readable version of your API service name, for example MongoDB Standalone.

Required

Description

A short description of how this API service is used by the Operator or a description of the functionality provided by the API service.

Required