Operator SDK tutorial for Java-based Operators

    Operator developers can take advantage of Java programming language support in the Operator SDK to build an example Java-based Operator for Memcached, a distributed key-value store, and manage its lifecycle.

    This process is accomplished using two centerpieces of the Operator Framework:

    Operator SDK

    The operator-sdk CLI tool and java-operator-sdk library API

    Operator Lifecycle Manager (OLM)

    Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster

    This tutorial goes into greater detail than Getting started with Operator SDK for Java-based Operators.

    Prerequisites

    • Operator SDK CLI installed

    • OpenShift CLI (oc) v4.13+ installed

    • Java v11+

    • Maven v3.6.3+

    • Logged into an OKD 4.13 cluster with oc with an account that has cluster-admin permissions

    • To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret


    Creating a project

    Use the Operator SDK CLI to create a project called memcached-operator.

    Procedure

    1. Create a directory for the project and change to it:

       ```
       $ mkdir -p $HOME/projects/memcached-operator
       $ cd $HOME/projects/memcached-operator
       ```
    2. Run the operator-sdk init command with the quarkus plugin to initialize the project:

       ```
       $ operator-sdk init \
           --plugins=quarkus \
           --domain=example.com \
           --project-name=memcached-operator
       ```

    Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands run from the project root, as well as their help output, read this file and are aware that the project type is Java. For example:

    ```
    domain: example.com
    layout:
    - quarkus.javaoperatorsdk.io/v1-alpha
    projectName: memcached-operator
    version: "3"
    ```

    Creating an API and controller

    Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller.

    Procedure

    1. Run the following command to create an API:

       ```
       $ operator-sdk create api \
           --plugins=quarkus \ (1)
           --group=cache \ (2)
           --version=v1 \ (3)
           --kind=Memcached (4)
       ```

       (1) Set the plugin flag to quarkus.
       (2) Set the group flag to cache.
       (3) Set the version flag to v1.
       (4) Set the kind flag to Memcached.

    Verification

    1. Run the tree command to view the file structure:

       ```
       $ tree
       ```

       Example output

       ```
       .
       ├── Makefile
       ├── PROJECT
       ├── pom.xml
       └── src
           └── main
               ├── java
               │   └── com
               │       └── example
               │           ├── Memcached.java
               │           ├── MemcachedReconciler.java
               │           ├── MemcachedSpec.java
               │           └── MemcachedStatus.java
               └── resources
                   └── application.properties

       6 directories, 8 files
       ```

    Defining the API

    Define the API for the Memcached custom resource (CR).

    Procedure

    • Edit the following files that were generated as part of the create api process:

      1. Update the following attributes in the MemcachedSpec.java file to define the desired state of the Memcached CR:

        ```
        public class MemcachedSpec {

            private Integer size;

            public Integer getSize() {
                return size;
            }

            public void setSize(Integer size) {
                this.size = size;
            }
        }
        ```
      2. Update the following attributes in the MemcachedStatus.java file to define the observed state of the Memcached CR:

        ```
        import java.util.ArrayList;
        import java.util.List;

        public class MemcachedStatus {

            // Add Status information here
            // Nodes are the names of the memcached pods
            private List<String> nodes;

            public List<String> getNodes() {
                if (nodes == null) {
                    nodes = new ArrayList<>();
                }
                return nodes;
            }

            public void setNodes(List<String> nodes) {
                this.nodes = nodes;
            }
        }
        ```
      3. Update the Memcached.java file to define the schema for the Memcached API, which ties together the MemcachedSpec.java and MemcachedStatus.java files:

        ```
        @Version("v1")
        @Group("cache.example.com")
        public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}
        ```
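    The annotations above are all the CRD generator needs: the group and version map directly to the manifest's apiVersion, and the kind, plural, and CRD name are derived from the class name. The following stand-alone sketch models those naming rules (simplified helpers written for illustration only; the real Fabric8 generator also honors custom plurals and other overrides):

    ```java
    public class GvkSketch {
        // @Group + @Version become the apiVersion of custom resources.
        static String apiVersion(String group, String version) {
            return group + "/" + version;
        }

        // The default plural is the lowercased kind with an "s" suffix.
        static String plural(String kind) {
            return kind.toLowerCase() + "s";
        }

        // The CRD object is named <plural>.<group>, which also determines
        // the generated file name, e.g. memcacheds.cache.example.com-v1.yml.
        static String crdName(String kind, String group) {
            return plural(kind) + "." + group;
        }

        public static void main(String[] args) {
            System.out.println(apiVersion("cache.example.com", "v1")); // cache.example.com/v1
            System.out.println(crdName("Memcached", "cache.example.com")); // memcacheds.cache.example.com
        }
    }
    ```

    This is why the manifest generated in the next section is written to target/kubernetes/memcacheds.cache.example.com-v1.yml.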

    Generating CRD manifests

    After the API is defined with MemcachedSpec and MemcachedStatus files, you can generate CRD manifests.

    Procedure

    • Run the following command from the memcached-operator directory to generate the CRD:

       ```
       $ mvn clean install
       ```

    Verification

    • Verify the contents of the CRD in the target/kubernetes/memcacheds.cache.example.com-v1.yml file as shown in the following example:

       ```
       $ cat target/kubernetes/memcacheds.cache.example.com-v1.yml
       ```

      Example output

      ```
      # Generated by Fabric8 CRDGenerator, manual edits might get overwritten!
      apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      metadata:
        name: memcacheds.cache.example.com
      spec:
        group: cache.example.com
        names:
          kind: Memcached
          plural: memcacheds
          singular: memcached
        scope: Namespaced
        versions:
        - name: v1
          schema:
            openAPIV3Schema:
              properties:
                spec:
                  properties:
                    size:
                      type: integer
                  type: object
                status:
                  properties:
                    nodes:
                      items:
                        type: string
                      type: array
                  type: object
              type: object
          served: true
          storage: true
          subresources:
            status: {}
      ```

    Creating the Custom Resource

    After generating the CRD manifests, you can create the custom resource (CR).

    Procedure

    • Create a file named memcached-sample.yaml that defines a Memcached CR, as shown in the following example:

      ```
      apiVersion: cache.example.com/v1
      kind: Memcached
      metadata:
        name: memcached-sample
      spec:
        # Add spec fields here
        size: 1
      ```

    Implementing the controller

    After creating a new API and controller, you can implement the controller logic.

    Procedure

    1. Append the following dependency to the pom.xml file:

       ```
       <dependency>
         <groupId>commons-collections</groupId>
         <artifactId>commons-collections</artifactId>
         <version>3.2.2</version>
       </dependency>
       ```
    2. For this example, replace the generated controller file MemcachedReconciler.java with the following example implementation:

      Example MemcachedReconciler.java

      ```
      package com.example;

      import io.fabric8.kubernetes.client.KubernetesClient;
      import io.javaoperatorsdk.operator.api.reconciler.Context;
      import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
      import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;
      import io.fabric8.kubernetes.api.model.ContainerBuilder;
      import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
      import io.fabric8.kubernetes.api.model.LabelSelectorBuilder;
      import io.fabric8.kubernetes.api.model.ObjectMetaBuilder;
      import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder;
      import io.fabric8.kubernetes.api.model.Pod;
      import io.fabric8.kubernetes.api.model.PodSpecBuilder;
      import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder;
      import io.fabric8.kubernetes.api.model.apps.Deployment;
      import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
      import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder;
      import org.apache.commons.collections.CollectionUtils;

      import java.util.HashMap;
      import java.util.List;
      import java.util.Map;
      import java.util.stream.Collectors;

      public class MemcachedReconciler implements Reconciler<Memcached> {
          private final KubernetesClient client;

          public MemcachedReconciler(KubernetesClient client) {
              this.client = client;
          }

          @Override
          public UpdateControl<Memcached> reconcile(
                  Memcached resource, Context context) {
              Deployment deployment = client.apps()
                      .deployments()
                      .inNamespace(resource.getMetadata().getNamespace())
                      .withName(resource.getMetadata().getName())
                      .get();

              if (deployment == null) {
                  Deployment newDeployment = createMemcachedDeployment(resource);
                  client.apps().deployments().create(newDeployment);
                  return UpdateControl.noUpdate();
              }

              int currentReplicas = deployment.getSpec().getReplicas();
              int requiredReplicas = resource.getSpec().getSize();
              if (currentReplicas != requiredReplicas) {
                  deployment.getSpec().setReplicas(requiredReplicas);
                  client.apps().deployments().createOrReplace(deployment);
                  return UpdateControl.noUpdate();
              }

              List<Pod> pods = client.pods()
                      .inNamespace(resource.getMetadata().getNamespace())
                      .withLabels(labelsForMemcached(resource))
                      .list()
                      .getItems();

              List<String> podNames =
                      pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());

              if (resource.getStatus() == null
                      || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
                  if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
                  resource.getStatus().setNodes(podNames);
                  return UpdateControl.updateResource(resource);
              }

              return UpdateControl.noUpdate();
          }

          private Map<String, String> labelsForMemcached(Memcached m) {
              Map<String, String> labels = new HashMap<>();
              labels.put("app", "memcached");
              labels.put("memcached_cr", m.getMetadata().getName());
              return labels;
          }

          private Deployment createMemcachedDeployment(Memcached m) {
              Deployment deployment = new DeploymentBuilder()
                  .withMetadata(
                      new ObjectMetaBuilder()
                          .withName(m.getMetadata().getName())
                          .withNamespace(m.getMetadata().getNamespace())
                          .build())
                  .withSpec(
                      new DeploymentSpecBuilder()
                          .withReplicas(m.getSpec().getSize())
                          .withSelector(
                              new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
                          .withTemplate(
                              new PodTemplateSpecBuilder()
                                  .withMetadata(
                                      new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
                                  .withSpec(
                                      new PodSpecBuilder()
                                          .withContainers(
                                              new ContainerBuilder()
                                                  .withImage("memcached:1.4.36-alpine")
                                                  .withName("memcached")
                                                  .withCommand("memcached", "-m=64", "-o", "modern", "-v")
                                                  .withPorts(
                                                      new ContainerPortBuilder()
                                                          .withContainerPort(11211)
                                                          .withName("memcached")
                                                          .build())
                                                  .build())
                                          .build())
                                  .build())
                          .build())
                  .build();
              deployment.addOwnerReference(m);
              return deployment;
          }
      }
      ```

      The example controller runs the following reconciliation logic for each Memcached custom resource (CR):

      • Creates a Memcached deployment if it does not exist.

      • Ensures that the deployment size matches the size specified by the Memcached CR spec.

      • Updates the Memcached CR status with the names of the memcached pods.

    The next subsections explain how the controller in the example implementation watches resources and how the reconcile loop is triggered. If you prefer, you can skip these subsections and continue with running and deploying the Operator.

    Reconcile loop

    1. Every controller has a reconciler object with a reconcile() method that implements the reconcile loop. The reconcile loop in this example begins by retrieving the Deployment that is named after the Memcached resource.

    2. As shown in the following example, if the Deployment is null, the deployment must be created. After you create the Deployment, determine whether reconciliation is necessary. If reconciliation is not needed, return the value of UpdateControl.noUpdate(); otherwise, return the value of UpdateControl.updateResource(resource):

       ```
       if (deployment == null) {
           Deployment newDeployment = createMemcachedDeployment(resource);
           client.apps().deployments().create(newDeployment);
           return UpdateControl.noUpdate();
       }
       ```
    3. After getting the Deployment, get the current and required replicas, as shown in the following example:

       ```
       int currentReplicas = deployment.getSpec().getReplicas();
       int requiredReplicas = resource.getSpec().getSize();
       ```
    4. If currentReplicas does not match the requiredReplicas, you must update the Deployment, as shown in the following example:

       ```
       if (currentReplicas != requiredReplicas) {
           deployment.getSpec().setReplicas(requiredReplicas);
           client.apps().deployments().createOrReplace(deployment);
           return UpdateControl.noUpdate();
       }
       ```
    5. The following example shows how to obtain the list of pods and their names:

       ```
       List<Pod> pods = client.pods()
               .inNamespace(resource.getMetadata().getNamespace())
               .withLabels(labelsForMemcached(resource))
               .list()
               .getItems();

       List<String> podNames =
               pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());
       ```
    6. Check whether the status has been created and whether the pod names match the node names recorded in the Memcached resource status. If either condition does not hold, update the status as shown in the following example:

       ```
       if (resource.getStatus() == null
               || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) {
           if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus());
           resource.getStatus().setNodes(podNames);
           return UpdateControl.updateResource(resource);
       }
       ```
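    Taken together, the branches above reduce to a small decision table: create the deployment, scale it, update the status, or do nothing. The following stand-alone sketch models that flow (hypothetical names, written for illustration only; a HashSet comparison stands in for CollectionUtils.isEqualCollection and ignores duplicate pod names):

    ```java
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;

    public class ReconcileDecision {
        enum Action { CREATE_DEPLOYMENT, SCALE_DEPLOYMENT, UPDATE_STATUS, NO_UPDATE }

        // Order-insensitive comparison of pod-name lists, standing in for
        // CollectionUtils.isEqualCollection in the controller.
        static boolean sameNodes(List<String> a, List<String> b) {
            return new HashSet<>(a).equals(new HashSet<>(b));
        }

        static Action decide(boolean deploymentExists, int currentReplicas,
                             int requiredReplicas, List<String> podNames,
                             List<String> statusNodes) {
            if (!deploymentExists) return Action.CREATE_DEPLOYMENT;
            if (currentReplicas != requiredReplicas) return Action.SCALE_DEPLOYMENT;
            if (statusNodes == null || !sameNodes(podNames, statusNodes))
                return Action.UPDATE_STATUS;
            return Action.NO_UPDATE;
        }

        public static void main(String[] args) {
            List<String> pods = Arrays.asList("memcached-a", "memcached-b");
            System.out.println(decide(false, 0, 2, pods, null));  // CREATE_DEPLOYMENT
            System.out.println(decide(true, 1, 2, pods, null));   // SCALE_DEPLOYMENT
            System.out.println(decide(true, 2, 2, pods, null));   // UPDATE_STATUS
            System.out.println(decide(true, 2, 2, pods,
                    Arrays.asList("memcached-b", "memcached-a"))); // NO_UPDATE
        }
    }
    ```

    Note that each reconcile invocation handles at most one of these actions and then returns; the next reconcile pass picks up where the previous one left off, which is the usual level-triggered operator pattern.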

    Defining labelsForMemcached

    The labelsForMemcached method is a utility that returns the map of labels to attach to the resources:

    ```
    private Map<String, String> labelsForMemcached(Memcached m) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", "memcached");
        labels.put("memcached_cr", m.getMetadata().getName());
        return labels;
    }
    ```
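    These labels do double duty: the deployment's matchLabels selector and the reconciler's withLabels(...) pod query both use the same map. Kubernetes equality-based selection matches a pod when every selector entry is present in the pod's labels, extra labels notwithstanding. A minimal stand-alone sketch of that matching rule (plain maps, no Kubernetes client; written for illustration only):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class LabelMatch {
        // A pod matches when it carries every key/value pair in the selector;
        // extra labels on the pod are ignored. This mirrors equality-based
        // label selection as used by withLabels and matchLabels.
        static boolean matches(Map<String, String> selector, Map<String, String> podLabels) {
            return podLabels.entrySet().containsAll(selector.entrySet());
        }

        public static void main(String[] args) {
            Map<String, String> selector = new HashMap<>();
            selector.put("app", "memcached");
            selector.put("memcached_cr", "memcached-sample");

            Map<String, String> podLabels = new HashMap<>(selector);
            podLabels.put("pod-template-hash", "7db86ccf58"); // extra labels are fine

            System.out.println(matches(selector, podLabels)); // true
            podLabels.put("app", "other");
            System.out.println(matches(selector, podLabels)); // false
        }
    }
    ```

    Because the same map feeds both sides, the pods listed by the reconciler are exactly the pods the deployment manages for this CR.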

    Defining createMemcachedDeployment

    The createMemcachedDeployment method uses the DeploymentBuilder class to construct the Deployment:

    ```
    private Deployment createMemcachedDeployment(Memcached m) {
        Deployment deployment = new DeploymentBuilder()
            .withMetadata(
                new ObjectMetaBuilder()
                    .withName(m.getMetadata().getName())
                    .withNamespace(m.getMetadata().getNamespace())
                    .build())
            .withSpec(
                new DeploymentSpecBuilder()
                    .withReplicas(m.getSpec().getSize())
                    .withSelector(
                        new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build())
                    .withTemplate(
                        new PodTemplateSpecBuilder()
                            .withMetadata(
                                new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build())
                            .withSpec(
                                new PodSpecBuilder()
                                    .withContainers(
                                        new ContainerBuilder()
                                            .withImage("memcached:1.4.36-alpine")
                                            .withName("memcached")
                                            .withCommand("memcached", "-m=64", "-o", "modern", "-v")
                                            .withPorts(
                                                new ContainerPortBuilder()
                                                    .withContainerPort(11211)
                                                    .withName("memcached")
                                                    .build())
                                            .build())
                                    .build())
                            .build())
                    .build())
            .build();
        deployment.addOwnerReference(m);
        return deployment;
    }
    ```

    Running the Operator

    There are three ways you can use the Operator SDK CLI to build and run your Operator:

    • Run locally outside the cluster as a Java program.

    • Run as a deployment on the cluster.

    • Bundle your Operator and use Operator Lifecycle Manager (OLM) to deploy on the cluster.

    Running locally outside the cluster

    You can run your Operator project as a Java program outside of the cluster. This is useful for development purposes to speed up deployment and testing.

    Procedure

    1. Run the following command to compile the Operator:

       ```
       $ mvn clean install
       ```

      Example output

       ```
       [INFO] ------------------------------------------------------------------------
       [INFO] BUILD SUCCESS
       [INFO] ------------------------------------------------------------------------
       [INFO] Total time: 11.193 s
       [INFO] Finished at: 2021-05-26T12:16:54-04:00
       [INFO] ------------------------------------------------------------------------
       ```
    2. Run the following command to install the CRD to the default namespace:

       ```
       $ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
       ```

       Example output

       ```
       customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
       ```
    3. Create a file called rbac.yaml as shown in the following example:

       ```
       apiVersion: rbac.authorization.k8s.io/v1
       kind: ClusterRoleBinding
       metadata:
         name: memcached-operator-admin
       subjects:
       - kind: ServiceAccount
         name: memcached-quarkus-operator-operator
         namespace: default
       roleRef:
         kind: ClusterRole
         name: cluster-admin
         apiGroup: ""
       ```
    4. Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file:

       ```
       $ oc apply -f rbac.yaml
       ```

    5. Enter the following command to run the Operator:

       ```
       $ java -jar target/quarkus-app/quarkus-run.jar
       ```

       The java command runs the Operator in the foreground until you end the process. You need another terminal to complete the rest of these commands.

    6. Apply the memcached-sample.yaml file with the following command:

       ```
       $ oc apply -f memcached-sample.yaml
       ```

       Example output

       ```
       memcached.cache.example.com/memcached-sample created
       ```

    Verification

    • Run the following command to confirm that the pod has started:

       ```
       $ oc get all
       ```

      Example output

    Running as a deployment on the cluster

    You can run your Operator project as a deployment on your cluster.

    Procedure

    1. Run the following make commands to build and push the Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

      1. Build the image:

         ```
         $ make docker-build IMG=<registry>/<user>/<image_name>:<tag>
         ```

         The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

      2. Push the image to a repository:

         ```
         $ make docker-push IMG=<registry>/<user>/<image_name>:<tag>
         ```
    2. Run the following command to install the CRD to the default namespace:

       ```
       $ oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml
       ```

       Example output

       ```
       customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created
       ```
    3. Create a file called rbac.yaml as shown in the following example:

       ```
       apiVersion: rbac.authorization.k8s.io/v1
       kind: ClusterRoleBinding
       metadata:
         name: memcached-operator-admin
       subjects:
       - kind: ServiceAccount
         name: memcached-quarkus-operator-operator
         namespace: default
       roleRef:
         kind: ClusterRole
         name: cluster-admin
         apiGroup: ""
       ```

      The rbac.yaml file will be applied at a later step.

    4. Run the following command to deploy the Operator:

       ```
       $ make deploy IMG=<registry>/<user>/<image_name>:<tag>
       ```
    5. Run the following command to grant cluster-admin privileges to the memcached-quarkus-operator-operator service account by applying the rbac.yaml file created in a previous step:

       ```
       $ oc apply -f rbac.yaml
       ```
    6. Run the following command to verify that the Operator is running:

       ```
       $ oc get all -n default
       ```

       Example output

       ```
       NAME                                                       READY   STATUS    RESTARTS   AGE
       pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm   0/1     Running   0          18s
       ```
    7. Run the following command to apply the memcached-sample.yaml file and create the memcached-sample pod:

       ```
       $ oc apply -f memcached-sample.yaml
       ```

       Example output

       ```
       memcached.cache.example.com/memcached-sample created
       ```

    Verification

    • Run the following command to confirm that the pods have started:

       ```
       $ oc get all
       ```

       Example output

       ```
       NAME                                                       READY   STATUS    RESTARTS   AGE
       pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt   1/1     Running   1          79s
       pod/memcached-sample-6c765df685-mfqnz                      1/1     Running   0          18s
       ```

    Bundling an Operator

    The Operator bundle format is the default packaging method for Operator SDK and Operator Lifecycle Manager (OLM). You can get your Operator ready for use on OLM by using the Operator SDK to build and push your Operator project as a bundle image.

    Prerequisites

    • Operator SDK CLI installed on a development workstation

    • OpenShift CLI (oc) v4.13+ installed

    • Operator project initialized by using the Operator SDK

    Procedure

    1. Run the following make commands in your Operator project directory to build and push your Operator image. Modify the IMG argument in the following steps to reference a repository that you have access to. You can obtain an account for storing containers at repository sites such as Quay.io.

      1. Build the image:

         ```
         $ make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>
         ```

         The Dockerfile generated by the SDK for the Operator explicitly references GOARCH=amd64 for go build. This can be amended to GOARCH=$TARGETARCH for non-AMD64 architectures. Docker automatically sets the environment variable to the value specified by --platform. With Buildah, the --build-arg option must be used for this purpose. For more information, see Multiple Architectures.

      2. Push the image to a repository:

         ```
         $ make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>
         ```
    2. Create your Operator bundle manifest by running the make bundle command, which invokes several commands, including the Operator SDK generate bundle and bundle validate subcommands:

       ```
       $ make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>
       ```

      Bundle manifests for an Operator describe how to display, create, and manage an application. The make bundle command creates the following files and directories in your Operator project:

      • A bundle manifests directory named bundle/manifests that contains a ClusterServiceVersion object

      • A bundle metadata directory named bundle/metadata

      • All custom resource definitions (CRDs) in a config/crd directory

      • A Dockerfile bundle.Dockerfile

      These files are then automatically validated by using operator-sdk bundle validate to ensure the on-disk bundle representation is correct.

    3. Build and push your bundle image by running the following commands. OLM consumes Operator bundles by using an index image, which references one or more bundle images.

      1. Build the bundle image. Set BUNDLE_IMG with the details for the registry, user namespace, and image tag where you intend to push the image:

         ```
         $ make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>
         ```
      2. Push the bundle image by running the make bundle-push command with the same BUNDLE_IMG argument.
    Deploying an Operator with Operator Lifecycle Manager

    Operator Lifecycle Manager (OLM) helps you to install, update, and manage the lifecycle of Operators and their associated services on a Kubernetes cluster. OLM is installed by default on OKD and runs as a Kubernetes extension so that you can use the web console and the OpenShift CLI (oc) for all Operator lifecycle management functions without any additional tools.

    The Operator bundle format is the default packaging method for Operator SDK and OLM. You can use the Operator SDK to quickly run a bundle image on OLM to ensure that it runs properly.

    Prerequisites

    • Operator SDK CLI installed on a development workstation

    • Operator bundle image built and pushed to a registry

    • OLM installed on a Kubernetes-based cluster (v1.16.0 or later if you use apiextensions.k8s.io/v1 CRDs, for example OKD 4.13)

    • Logged in to the cluster with oc using an account with cluster-admin permissions

    Procedure

    1. Enter the following command to run the Operator on the cluster:

       ```
       $ operator-sdk run bundle \
           -n <namespace> \
           <registry>/<user>/<bundle_image_name>:<tag>
       ```

      As of OKD 4.11, the run bundle command supports the file-based catalog format for Operator catalogs by default. The deprecated SQLite database format for Operator catalogs continues to be supported; however, it will be removed in a future release. It is recommended that Operator authors migrate their workflows to the file-based catalog format.

      This command performs the following actions:

      • Creates an index image that references your bundle image. The index image is opaque and ephemeral, but accurately reflects how a bundle would be added to a catalog in production.

      • Creates a catalog source that points to your new index image, which enables OperatorHub to discover your Operator.

      • Deploys your Operator to your cluster by creating an OperatorGroup, Subscription, InstallPlan, and all other required resources, including RBAC.

    Additional resources

    • See Operator Framework packaging format to learn about the directory structures created by the Operator SDK.