Migrating Helm-based Legacy Projects

    The motivation for the new layout is to give users more flexibility and to move forward with the integration of Kubebuilder and Operator SDK.

    • The deploy directory was replaced with the config directory, which contains a new layout of Kubernetes manifest files:

      • CRD manifests in deploy/crds/ are now in config/crd/bases
      • Controller manifest deploy/operator.yaml is now in config/manager/manager.yaml
      • RBAC manifests in deploy are now in config/rbac/
    • build/Dockerfile was moved to Dockerfile in the project root directory

    What is new

    Projects are now scaffolded using:

    • kustomize to manage Kubernetes resources needed to deploy your operator
    • A Makefile with helpful targets for build, test, and deployment, and to give you flexibility to tailor things to your project’s needs (see the usage sketch after this list)
    • Updated metrics configuration using kube-rbac-proxy, a --metrics-addr flag, and kustomize-based deployment of a Kubernetes Service and Prometheus Operator ServiceMonitor
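
    For instance, a typical workflow with the scaffolded Makefile looks like the sketch below; the image name is a placeholder, so substitute your own registry and tag:

    ```sh
    # Build and push the operator image (IMG is a placeholder)
    make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1

    # Install the CRDs into the cluster and deploy the operator with kustomize
    make install
    make deploy IMG=quay.io/example/memcached-operator:v0.0.1
    ```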

    The easiest migration path is to create a new project from scratch, let the tool scaffold the files properly, and then replace the defaults with your customizations and implementations. The following walks through an example.

    In Kubebuilder-style projects, CRD groups are defined using two different flags (--group and --domain).

    When we initialize a new project, we need to specify the domain that all APIs in our project will share. So, before creating the new project, we need to determine the domain used by the APIs in our existing project.

    To determine the domain, look at the spec.group field in your CRDs in the deploy/crds directory; everything after the first dot-separated segment is the domain (for cache.example.com, the domain is example.com).
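
    For illustration, a legacy CRD for a hypothetical Memcached API might look like this; only metadata.name, spec.group, spec.names, and spec.versions matter for this step:

    ```yaml
    # deploy/crds/cache.example.com_memcacheds_crd.yaml (illustrative legacy CRD)
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: memcacheds.cache.example.com
    spec:
      group: cache.example.com   # group "cache" + domain "example.com"
      names:
        kind: Memcached
        listKind: MemcachedList
        plural: memcacheds
        singular: memcached
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
    ```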

    So let’s create a new project with the same domain (example.com):
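
    A sketch of the commands, assuming an illustrative project directory named memcached-operator:

    ```sh
    mkdir memcached-operator
    cd memcached-operator
    operator-sdk init --plugins=helm --domain=example.com
    ```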

    Now that we have our new project initialized, we need to re-create each of our APIs. Using our API example from earlier (cache.example.com), we’ll use cache for the --group flag.

    For --version and --kind, we use spec.versions[0].name and spec.names.kind, respectively.

    For each API in the existing project, run:
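
    For the Memcached example, that would look something like the following; the --helm-chart flag (the path here is a placeholder) imports your existing chart instead of scaffolding a boilerplate one:

    ```sh
    operator-sdk create api \
        --group=cache \
        --version=v1 \
        --kind=Memcached \
        --helm-chart=<path_to_existing_project>/helm-charts/memcached
    ```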

    Migrating your Custom Resource samples

    Update the CR manifests in config/samples with the values of the CRs in your existing project, which live in deploy/crds/<group>_<version>_<kind>_cr.yaml.
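
    For the Memcached example, the migrated sample might look like the sketch below; the spec values are placeholders to be copied from your existing CR:

    ```yaml
    # config/samples/cache_v1_memcached.yaml
    apiVersion: cache.example.com/v1
    kind: Memcached
    metadata:
      name: memcached-sample
    spec:
      # Copy the spec from your legacy CR; replicaCount is just a
      # placeholder chart value.
      replicaCount: 3
    ```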

    Check if you have custom options in the watches.yaml file of your existing project. If so, update the new watches.yaml file to match. In our example, it will look like:
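
    A sketch of the migrated watches.yaml, assuming the custom option being carried over is an overrideValues entry (the repository value is illustrative):

    ```yaml
    # Use the 'create api' subcommand to add watches to this file.
    - group: cache.example.com
      version: v1
      kind: Memcached
      chart: helm-charts/memcached
      overrideValues:
        image.repository: quay.io/mycustomrepo
    # +kubebuilder:scaffold:watch
    ```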

    NOTE: Do not remove the +kubebuilder:scaffold:watch marker. It allows the tool to update the watches file when new APIs are created.

    Checking the Permissions (RBAC)

    New projects are configured to watch all namespaces by default, so they need a ClusterRole to have the necessary permissions. Ensure that config/rbac/role.yaml remains a ClusterRole if you want to retain the default behavior of the new project conventions. For further information, refer to the operator scope documentation.

    The following rules were used in earlier versions of helm-operator to automatically create and manage services and servicemonitors for metrics collection. If your operator’s charts don’t require these rules, they can safely be left out of the new config/rbac/role.yaml file:
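
    A sketch of what those rules typically looked like in a legacy deploy/role.yaml; the resourceNames entry matched your old operator's Deployment name, so yours may differ:

    ```yaml
    - apiGroups:
      - ""
      resources:
      - services
      - services/finalizers
      verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
    - apiGroups:
      - monitoring.coreos.com
      resources:
      - servicemonitors
      verbs:
      - get
      - create
    - apiGroups:
      - apps
      resourceNames:
      - memcached-operator
      resources:
      - deployments/finalizers
      verbs:
      - update
    ```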

    If your existing project has customizations in deploy/operator.yaml, they need to be ported to config/manager/manager.yaml. If you are passing custom arguments in your deployment, make sure to also update config/default/auth_proxy_patch.yaml.

    Note that the following environment variables are no longer used:

    • POD_NAME was used to enable a particular pod to hold the leader election lock when the Helm operator used the leader for life mechanism. Helm operator now uses controller-runtime’s leader with lease mechanism, and POD_NAME is no longer necessary.

    If you are using metrics and would like to keep them exported, you will need to configure them in config/default/kustomization.yaml. See the metrics documentation for how to perform this setup.
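
    For instance, the scaffolded config/default/kustomization.yaml ships with the Prometheus resources commented out; under the default layout, enabling the ServiceMonitor is a matter of uncommenting the prometheus base:

    ```yaml
    # config/default/kustomization.yaml (excerpt)
    bases:
    - ../crd
    - ../rbac
    - ../manager
    # [PROMETHEUS] Uncommented to enable the prometheus ServiceMonitor:
    - ../prometheus
    ```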

    The default port that the metrics endpoint binds to was changed from :8383 to :8080. To continue using port 8383, specify --metrics-addr=:8383 when you start the operator.
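
    In the deployment manifest, that flag would be passed as a container argument, along the lines of this excerpt (the surrounding structure is assumed from the default scaffold):

    ```yaml
    # excerpt from the manager Deployment spec
    spec:
      template:
        spec:
          containers:
          - name: manager
            args:
            - "--metrics-addr=:8383"
    ```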

    Finally, follow the steps in the Run the Operator section of the tutorial to verify your project is running.