FAQ

    KubeVela is a platform builder tool for creating easy-to-use yet extensible app delivery/management systems on Kubernetes. KubeVela relies on Helm as a templating engine and package format for apps, but Helm is not the only templating option KubeVela supports: another first-class supported approach is CUE.

    Also, KubeVela is by design a Kubernetes controller (i.e. it works on the server side); even for its Helm part, a Helm operator is installed.

    Error: unable to create new content in namespace cert-manager because it is being terminated

    Occasionally you might hit the issue below. It happens when the deletion of the previous KubeVela release has not completed yet.

    Take a break and try again in a few seconds.
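    Before retrying, you can check whether the previous release and its namespace have actually finished terminating, for example:

        # Wait until the namespace is gone or no longer in Terminating state
        $ kubectl get namespace cert-manager
        # Confirm whether an old KubeVela release is still listed
        $ helm list -n vela-system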

        - Installing Vela Core Chart:
        Vela system along with OAM runtime already exist.
        Automatically discover capabilities successfully Add(0) Update(0) Delete(8)

        TYPE         CATEGORY    DESCRIPTION
        -task        workload    One-off task to run a piece of code or script to completion
        -webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
        -worker      workload    Long-running scalable backend worker without network endpoint
        -autoscale   trait       Automatically scale the app following certain triggers or metrics
        -metrics     trait       Configure metrics targets to be monitored for the app
        -rollout     trait       Configure canary deployment strategy to release the app
        -route       trait       Configure route policy to the app
        -scaler      trait       Manually scale the app
        - Finished successfully.

    Then manually apply all WorkloadDefinition and TraitDefinition manifests to bring all capabilities back.

        $ kubectl apply -f charts/vela-core/templates/defwithtemplate
        traitdefinition.core.oam.dev/autoscale created
        traitdefinition.core.oam.dev/scaler created
        traitdefinition.core.oam.dev/metrics created
        traitdefinition.core.oam.dev/rollout created
        traitdefinition.core.oam.dev/route created
        workloaddefinition.core.oam.dev/task created
        workloaddefinition.core.oam.dev/webservice created
        workloaddefinition.core.oam.dev/worker created

        $ vela workloads
        Automatically discover capabilities successfully Add(8) Update(0) Delete(0)

        TYPE         CATEGORY    DESCRIPTION
        +task        workload    One-off task to run a piece of code or script to completion
        +webservice  workload    Long-running scalable service with stable endpoint to receive external traffic
        +worker      workload    Long-running scalable backend worker without network endpoint
        +autoscale   trait       Automatically scale the app following certain triggers or metrics
        +metrics     trait       Configure metrics targets to be monitored for the app
        +rollout     trait       Configure canary deployment strategy to release the app
        +route       trait       Configure route policy to the app
        +scaler      trait       Manually scale the app

        NAME         DESCRIPTION
        task         One-off task to run a piece of code or script to completion
        webservice   Long-running scalable service with stable endpoint to receive external traffic
        worker       Long-running scalable backend worker without network endpoint

    Error: ScopeDefinition "healthscopes.core.oam.dev" exists and cannot be imported into the current release

    Occasionally you might hit the issue below. It happens when an old OAM Kubernetes Runtime release exists, or you have applied a ScopeDefinition before.

        $ vela install
        - Installing Vela Core Chart:
        install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
        Failed to install the chart with error: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"
        rendered manifests contain a resource that already exists. Unable to continue with install
        helm.sh/helm/v3/pkg/action.(*Install).Run
                /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
        ...
        Error: rendered manifests contain a resource that already exists. Unable to continue with install: ScopeDefinition "healthscopes.core.oam.dev" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "kubevela": current value is "oam"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "vela-system": current value is "oam-system"

    Delete ScopeDefinition “healthscopes.core.oam.dev” and try again.
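    For example (a sketch; the ScopeDefinition is cluster-scoped, so it is addressed here by its full resource name scopedefinitions.core.oam.dev):

        $ kubectl delete scopedefinitions.core.oam.dev healthscopes.core.oam.dev
        $ vela install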

    You have reached your pull rate limit

    When you look into the Pod kubevela-vela-core and its logs, you may find an issue like the one below.

        $ kubectl get pod -n vela-system -l app.kubernetes.io/name=vela-core
        NAME                                 READY   STATUS   RESTARTS   AGE
        kubevela-vela-core-f8b987775-wjg25   0/1     -        0          35m

    You can use the GitHub Container Registry instead.

        $ docker pull ghcr.io/oam-dev/kubevela/vela-core:latest
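    If your cluster nodes can pull from ghcr.io, you can also point the running controller Deployment at that image directly (a sketch; kubevela-vela-core is the Deployment name matching the Pod shown above, and '*' targets every container in it):

        $ kubectl set image -n vela-system deployment/kubevela-vela-core '*=ghcr.io/oam-dev/kubevela/vela-core:latest'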

    Error: Namespace "cert-manager" exists and cannot be imported into the current release

    If you hit the issue below, a cert-manager release might already exist whose namespace and RBAC-related resources conflict with KubeVela.

        $ vela install
        - Installing Vela Core Chart:
        install chart vela-core, version 0.1.0, desc : A Helm chart for Kube Vela core, contains 35 file
        Failed to install the chart with error: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
        rendered manifests contain a resource that already exists. Unable to continue with install
        helm.sh/helm/v3/pkg/action.(*Install).Run
                /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
        ...
                /opt/hostedtoolcache/go/1.14.12/x64/src/runtime/asm_amd64.s:1373
        Error: rendered manifests contain a resource that already exists. Unable to continue with install: Namespace "cert-manager" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"

    Try these steps to fix the problem; the commands are sketched after the list.

    • Delete release cert-manager
    • Delete namespace cert-manager
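
    A minimal sketch of those two steps, assuming the conflicting Helm release is also named cert-manager and was installed into the cert-manager namespace:

        $ helm uninstall cert-manager -n cert-manager
        $ kubectl delete namespace cert-manager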

    How to fix issue: MutatingWebhookConfiguration mutating-webhook-configuration exists?

    If you have deployed other services that apply a MutatingWebhookConfiguration named mutating-webhook-configuration, installing KubeVela will hit the issue below.

        - Installing Vela Core Chart:
        install chart vela-core, version v0.2.1, desc : A Helm chart for Kube Vela core, contains 36 file
        Failed to install the chart with error: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"
        rendered manifests contain a resource that already exists. Unable to continue with install
                /home/runner/go/pkg/mod/helm.sh/helm/v3@v3.2.4/pkg/action/install.go:274
        github.com/oam-dev/kubevela/pkg/commands.InstallOamRuntime
                /home/runner/work/kubevela/kubevela/pkg/commands/system.go:259
        github.com/oam-dev/kubevela/pkg/commands.(*initCmd).run
                /home/runner/work/kubevela/kubevela/pkg/commands/system.go:162
        github.com/oam-dev/kubevela/pkg/commands.NewInstallCommand.func2
                /home/runner/work/kubevela/kubevela/pkg/commands/system.go:119
        github.com/spf13/cobra.(*Command).execute
                /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:850
        github.com/spf13/cobra.(*Command).ExecuteC
                /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:958
        github.com/spf13/cobra.(*Command).Execute
                /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.1.1/command.go:895
        main.main
                /home/runner/work/kubevela/kubevela/references/cmd/cli/main.go:16
        runtime.main
                /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/proc.go:203
        runtime.goexit
                /opt/hostedtoolcache/go/1.14.13/x64/src/runtime/asm_amd64.s:1373
        Error: rendered manifests contain a resource that already exists. Unable to continue with install: MutatingWebhookConfiguration "mutating-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubevela"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "vela-system"

    To fix this issue, please upgrade the KubeVela CLI (vela) to a version higher than v0.2.2.
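    You can confirm which version you are currently running with the CLI itself:

        $ vela version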

    How to enable metrics server in various Kubernetes clusters?

    Operating Autoscale depends on the metrics server, so it has to be enabled in your cluster. Please check whether the metrics server is enabled with the command kubectl top nodes or kubectl top pods.

    If the output is similar to the one below, the metrics server is enabled.

        $ kubectl top nodes
        NAME                     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
        cn-hongkong.10.0.1.237   288m         7%     5378Mi          78%
        cn-hongkong.10.0.1.238   351m         8%     5113Mi          74%

        $ kubectl top pods
        NAME                          CPU(cores)   MEMORY(bytes)
        php-apache-65f444bf84-cjbs5   0m           1Mi
        wordpress-55c59ccdd5-lf59d    1m           66Mi

    Otherwise, you have to enable the metrics server in your Kubernetes cluster manually.

    • ACK (Alibaba Cloud Container Service for Kubernetes)
    • ASK (Alibaba Cloud Serverless Kubernetes)

    The metrics server has to be enabled in the Operations/Add-ons section of the Alibaba Cloud console.

    Please refer to the relevant documentation if you hit more issues.

    • Kind

    Install the metrics server as below, or install the latest version.

        $ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

    Also adjust the metrics-server container under .spec.template.spec.containers in the YAML opened by kubectl edit deploy -n kube-system metrics-server, as sketched below.
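    The exact flags depend on your metrics-server version; a commonly used form of this workaround passes --kubelet-insecure-tls to the metrics-server container. As a sketch, the same change can also be applied as a patch (assuming metrics-server is the first container in the Deployment and already defines an args list):

        $ kubectl patch deploy metrics-server -n kube-system --type=json \
            -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'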

    Note: this is just a workaround, not intended for production use.

    • MiniKube

    Enable it with the following command.

        $ minikube addons enable metrics-server

    Have fun setting autoscale policies on your application.