kOps addons

    kOps supports two types of addons:

    • Managed addons, which are configurable through the cluster spec
    • Static addons, which are manifest files that are applied as-is

    The following addons are managed by kOps: they are upgraded following the kOps and Kubernetes lifecycle and configured based on your cluster spec. Where applicable, kOps takes into account both the configuration of the addon itself and other relevant settings in your cluster spec.

    AWS Load Balancer Controller

    AWS Load Balancer Controller offers additional functionality for provisioning ELBs.
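
    A minimal sketch of enabling the addon through the cluster spec; the certManager block is included here on the assumption that the controller's webhook needs a serving certificate, so check the requirements for your kOps version:

    spec:
      # cert-manager issues the serving certificate for the controller's webhook (assumption)
      certManager:
        enabled: true
      awsLoadBalancerController:
        enabled: true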

    Read more in the official documentation.

    Cluster autoscaler

    Introduced: kOps 1.19

    Cluster autoscaler can be enabled to automatically adjust the size of the kubernetes cluster.

    spec:
      clusterAutoscaler:
        enabled: true
        expander: least-waste
        balanceSimilarNodeGroups: false
        awsUseStaticInstanceList: false
        scaleDownUtilizationThreshold: 0.5
        skipNodesWithLocalStorage: true
        skipNodesWithSystemPods: true
        newPodScaleUpDelay: 0s
        scaleDownDelayAfterAdd: 10m0s
        image: <the latest supported image for the specified kubernetes version>
        cpuRequest: "100m"
        memoryRequest: "300Mi"

    Read more about cluster autoscaler in the official documentation.

    Expander strategies

    Cluster autoscaler supports several different expander strategies.

    Note that the priority expander requires additional configuration through a ConfigMap, as described in the cluster autoscaler documentation; you will need to create this ConfigMap in your cluster before selecting this expander.
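
    As an illustration, the upstream cluster autoscaler expects a ConfigMap shaped roughly like the one below; the name cluster-autoscaler-priority-expander and the priorities key come from the upstream project, and the regular expressions are placeholders for your own instance group names:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      # the autoscaler looks for this exact name in kube-system
      name: cluster-autoscaler-priority-expander
      namespace: kube-system
    data:
      # higher priority wins; values are regexes matched against node group names
      priorities: |-
        10:
          - .*on-demand.*
        50:
          - .*spot.*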

    Disabling cluster autoscaler for a given instance group

    Introduced: kOps 1.20

    You can disable the autoscaler for a given instance group by adding the following to the instance group spec.

    spec:
      autoscale: false

    Cert-manager

    Introduced: kOps 1.20
    Minimum Kubernetes version: 1.16

    Cert-manager handles x509 certificates for your cluster.

    spec:
      certManager:
        enabled: true
        defaultIssuer: yourDefaultIssuer

    Warning: cert-manager only supports one installation per cluster. If you are already running cert-manager, you need to either remove that installation prior to enabling this addon, or mark cert-manager as not being managed by kOps (see below). As long as you are using v1 versions of the cert-manager resources, it is safe to remove existing installs and replace them with this addon.
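
    For reference, defaultIssuer names an issuer resource that you create yourself. A purely illustrative self-signed ClusterIssuer matching the example above might look like the following; real clusters usually use an ACME or CA issuer instead:

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      # matches the defaultIssuer value in the cluster spec above (illustrative name)
      name: yourDefaultIssuer
    spec:
      # self-signed keeps the example minimal
      selfSigned: {}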

    Self-provisioned cert-manager

    The following cert-manager configuration allows provisioning cert-manager externally and allows all dependent plugins to be deployed. Please note that addons might run into errors until cert-manager is deployed.

    spec:
      certManager:
        enabled: true
        managed: false

    Read more about cert-manager in the official documentation.

    Karpenter

    Introduced: kOps 1.24

    spec:
      karpenter:
        enabled: true

    See more details on how to configure Karpenter in the kOps Karpenter documentation and the official Karpenter documentation.
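
    As a sketch, an instance group handed over to Karpenter sets manager in its spec; the cluster name, instance group name, machine type, and subnet below are placeholders:

    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: my.cluster.example.com
      name: nodes-karpenter
    spec:
      # let Karpenter, rather than an ASG, manage this group's capacity
      manager: Karpenter
      machineType: t3.medium
      minSize: 1
      maxSize: 10
      role: Node
      subnets:
      - us-east-1a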

    Metrics server

    Introduced: kOps 1.19

    Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
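
    A minimal sketch of enabling the addon in the cluster spec (the same metricsServer block is extended in the Secure TLS example below):

    spec:
      metricsServer:
        enabled: true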

    Read more about Metrics Server in the official documentation.

    Secure TLS

    Introduced: kOps 1.20

    By default, the API server will not verify the metrics server TLS certificate. To enable TLS verification, set the following in the cluster spec:

    spec:
      certManager:
        enabled: true
      metricsServer:
        enabled: true
        insecure: false

    This requires that cert-manager is installed in the cluster.

    Node local DNS cache

    NodeLocal DNSCache can be enabled if you are using CoreDNS. It is used to improve the Cluster DNS performance by running a dns caching agent on cluster nodes as a DaemonSet.

    memoryRequest and cpuRequest for the node-local-dns pods can also be configured. If not set, they default to 5Mi and 25m respectively.

    If forwardToKubeDNS is enabled, kubedns will be used as a default upstream.

    spec:
      kubeDNS:
        provider: CoreDNS
        nodeLocalDNS:
          enabled: true
          memoryRequest: 5Mi
          cpuRequest: 25m

    Node termination handler

    Introduced: kOps 1.19

    Node Termination Handler ensures that the Kubernetes control plane responds appropriately to events that can cause your EC2 instance to become unavailable, such as EC2 maintenance events, EC2 Spot interruptions, and EC2 instance rebalance recommendations. If not handled, your application code may not stop gracefully, take longer to recover full availability, or accidentally schedule work to nodes that are going down.

    spec:
      nodeTerminationHandler:
        cpuRequest: 200m
        enabled: true
        enableSQSTerminationDraining: true
        managedASGTag: "aws-node-termination-handler/managed"

    Queue Processor Mode

    Introduced: kOps 1.21

    If enableSQSTerminationDraining is true Node Termination Handler will operate in Queue Processor mode. In addition to the events mentioned above, Queue Processor mode allows Node Termination Handler to take care of ASG Scale-In, AZ-Rebalance, Unhealthy Instances, EC2 Instance Termination via the API or Console, and more. kOps will provision the necessary infrastructure: an SQS queue, EventBridge rules, and ASG Lifecycle hooks. managedASGTag can be configured with Queue Processor mode to distinguish resource ownership between multiple clusters.

    The kOps CLI requires additional IAM permissions to manage the requisite EventBridge rules and SQS queue:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "events:DeleteRule",
            "events:ListRules",
            "events:ListTargetsByRule",
            "events:ListTagsForResource",
            "events:PutEvents",
            "events:PutRule",
            "events:PutTargets",
            "events:RemoveTargets",
            "events:TagResource",
            "sqs:CreateQueue",
            "sqs:DeleteQueue",
            "sqs:ListQueues",
            "sqs:ListQueueTags"
          ],
          "Resource": "*"
        }
      ]
    }

    Warning: If you switch between the two operating modes on an existing cluster, the old resources have to be deleted manually. When moving from IMDS to Queue Processor mode, this means deleting the Kubernetes NTH DaemonSet. When moving from Queue Processor to IMDS mode, this means deleting the Kubernetes NTH Deployment and the AWS resources: the SQS queue, EventBridge rules, and ASG Lifecycle hooks.

    Node Problem Detector

    Introduced: kOps 1.22

    spec:
      nodeProblemDetector:
        enabled: true
        memoryRequest: 32Mi
        cpuRequest: 10m

    Pod Identity Webhook

    When using IAM Roles for Service Accounts (IRSA), Pods require an additional token to authenticate with the AWS API. In addition, the AWS SDKs require specific environment variables to be set in order to make use of these tokens. This addon mutates Pods configured to use IRSA so that users do not need to do this themselves.

    All ServiceAccounts configured with AWS privileges in the Cluster spec will automatically be mutated to assume the configured role.

    The EKS annotations on ServiceAccounts are typically not necessary, as kOps configures the webhook with all ServiceAccount-to-role mappings defined in the Cluster spec. If you need specific configuration, you may annotate the ServiceAccount, overriding the kOps configuration.
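
    A minimal sketch of enabling the webhook; the certManager block is an assumption here, included because the webhook's admission endpoint needs a serving certificate:

    spec:
      certManager:
        enabled: true
      podIdentityWebhook:
        enabled: true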

    Read more about Pod Identity Webhook in the official documentation.

    Snapshot controller

    Introduced: kOps 1.21
    Minimum Kubernetes version: 1.20

    Snapshot controller implements the volume snapshot feature of the Container Storage Interface (CSI).

    You can enable the snapshot controller by adding the following to the cluster spec:

    spec:
      snapshotController:
        enabled: true

    Note that the in-tree volume drivers do not support this feature. If you are running a cluster on AWS, you can enable the EBS CSI driver by adding the following:

    spec:
      cloudConfig:
        awsEBSCSIDriver:
          enabled: true
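
    Once the snapshot controller and a CSI driver are running, snapshots are requested through the snapshot.storage.k8s.io API. The resources below are illustrative only; the class name, snapshot name, and referenced PVC are placeholders:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: ebs-csi
    # the AWS EBS CSI driver enabled in the example above
    driver: ebs.csi.aws.com
    deletionPolicy: Delete
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: data-snapshot
    spec:
      volumeSnapshotClassName: ebs-csi
      source:
        # an existing PersistentVolumeClaim backed by the CSI driver
        persistentVolumeClaimName: data-pvc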

    Custom addons

    The command kops create cluster does not support specifying addons to be added to the cluster when it is created. Instead, they can be added after cluster creation using kubectl. Alternatively, when creating a cluster from a YAML manifest, addons can be specified using spec.addons.

    spec:
      addons:
      - manifest: s3://my-kops-addons/addon.yaml

    The docs about addon management describe in more detail how to define an addon resource with regards to versioning. Here is a minimal example of an addon manifest that would install two different addons.

    kind: Addons
    metadata:
      name: example
    spec:
      addons:
      - name: foo.addons.org.io
        version: 0.0.1
        selector:
          k8s-addon: foo.addons.org.io
        manifest: foo.addons.org.io/v0.0.1.yaml
      - name: bar.addons.org.io
        version: 0.0.1
        selector:
          k8s-addon: bar.addons.org.io
        manifest: bar.addons.org.io/v0.0.1.yaml

    In this example the folder structure should look like this:

    addon.yaml
    foo.addons.org.io/
      v0.0.1.yaml
    bar.addons.org.io/
      v0.0.1.yaml

    The YAML files in the foo/bar folders can be any Kubernetes resource. Typically this file structure would be pushed to S3 or another of the supported backends and then referenced in spec.addons as shown above. In order for the master nodes to be able to access the S3 bucket containing the addon manifests, one might have to add additional IAM policies to the masters using spec.additionalPolicies, like so:
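
    An illustrative policy granting read access to the bucket used in the example above; adjust the bucket name to your own:

    spec:
      additionalPolicies:
        master: |
          [
            {
              "Effect": "Allow",
              "Action": ["s3:GetObject"],
              "Resource": ["arn:aws:s3:::my-kops-addons/*"]
            }
          ]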

    The masters will poll for changes in the bucket and keep the addons up to date.