Autoscale the DNS Service in a Cluster

    • You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using a tool such as minikube, or you can use one of the Kubernetes playgrounds.

      To check the version, enter kubectl version.

    • This guide assumes your nodes use the AMD64 or Intel 64 CPU architecture.

    • Make sure Kubernetes DNS is enabled.

    Determine whether DNS horizontal autoscaling is already enabled

    List the Deployments in your cluster in the kube-system namespace:

    kubectl get deployment --namespace=kube-system

    The output is similar to this:

    NAME             READY   UP-TO-DATE   AVAILABLE   AGE
    ...
    dns-autoscaler   1/1     1            1           ...
    ...

    If you see “dns-autoscaler” in the output, DNS horizontal autoscaling is already enabled, and you can skip to Tuning autoscaling parameters.

    List the DNS deployments in your cluster in the kube-system namespace:

    kubectl get deployment -l k8s-app=kube-dns --namespace=kube-system

    The output is similar to this:

    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    ...
    coredns   2/2     2            2           ...
    ...

    If you don’t see a Deployment for DNS services, you can also look for it by name:

    kubectl get deployment --namespace=kube-system

    and look for a deployment named coredns or kube-dns.

    Your scale target is

    Deployment/<your-deployment-name>

    where <your-deployment-name> is the name of your DNS Deployment. For example, if the name of your Deployment for DNS is coredns, your scale target is Deployment/coredns.
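    The selection above can be sketched in a few lines. This is an illustrative snippet, not part of the tutorial: the deployments list simulates what kubectl get deployment would report in kube-system, and the function picks the DNS Deployment by its k8s-app=kube-dns label and builds the scale target string.

    ```python
    # Simulated Deployment listing for kube-system; in a real cluster this
    # would come from the Kubernetes API or "kubectl get deployment".
    deployments = [
        {"name": "coredns", "labels": {"k8s-app": "kube-dns"}},
        {"name": "kube-proxy", "labels": {"k8s-app": "kube-proxy"}},
    ]

    def scale_target(deployments):
        """Return Deployment/<name> for the Deployment labeled k8s-app=kube-dns."""
        for d in deployments:
            if d["labels"].get("k8s-app") == "kube-dns":
                return f"Deployment/{d['name']}"
        return None

    print(scale_target(deployments))  # Deployment/coredns
    ```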

    Note: CoreDNS is the default DNS service for Kubernetes. CoreDNS sets the label k8s-app=kube-dns so that it can work in clusters that originally used kube-dns.

    Enable DNS horizontal autoscaling

    Create a file named dns-horizontal-autoscaler.yaml with this content:

    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:kube-dns-autoscaler
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["list", "watch"]
      - apiGroups: [""]
        resources: ["replicationcontrollers/scale"]
        verbs: ["get", "update"]
      - apiGroups: ["apps"]
        resources: ["deployments/scale", "replicasets/scale"]
        verbs: ["get", "update"]
      # Remove the configmaps rule once below issue is fixed:
      # kubernetes-incubator/cluster-proportional-autoscaler#16
      - apiGroups: [""]
        resources: ["configmaps"]
        verbs: ["get"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: system:kube-dns-autoscaler
    subjects:
      - kind: ServiceAccount
        name: kube-dns-autoscaler
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: system:kube-dns-autoscaler
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: kube-dns-autoscaler
      namespace: kube-system
      labels:
        k8s-app: kube-dns-autoscaler
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        matchLabels:
          k8s-app: kube-dns-autoscaler
      template:
        metadata:
          labels:
            k8s-app: kube-dns-autoscaler
        spec:
          priorityClassName: system-cluster-critical
          securityContext:
            seccompProfile:
              type: RuntimeDefault
            supplementalGroups: [ 65534 ]
            fsGroup: 65534
          nodeSelector:
            kubernetes.io/os: linux
          containers:
            - name: autoscaler
              image: registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4
              resources:
                requests:
                  cpu: "20m"
                  memory: "10Mi"
              command:
                - /cluster-proportional-autoscaler
                - --namespace=kube-system
                - --configmap=kube-dns-autoscaler
                # Should keep target in sync with cluster/addons/dns/kube-dns.yaml.base
                - --target=<SCALE_TARGET>
                # When the cluster uses large nodes (with more cores), "coresPerReplica" should dominate.
                # If using small nodes, "nodesPerReplica" should dominate.
                - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true,"includeUnschedulableNodes":true}}
                - --logtostderr=true
                - --v=2
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          serviceAccountName: kube-dns-autoscaler

    In the file, replace <SCALE_TARGET> with your scale target.

    Go to the directory that contains your configuration file, and enter this command to create the Deployment:

    kubectl apply -f dns-horizontal-autoscaler.yaml

    The output of a successful command is:

    deployment.apps/dns-autoscaler created

    DNS horizontal autoscaling is now enabled.

    Tune DNS autoscaling parameters

    Verify that the dns-autoscaler ConfigMap exists:

    kubectl get configmap --namespace=kube-system

    The output is similar to this:

    NAME             DATA   AGE
    ...
    dns-autoscaler   1      ...
    ...

    Modify the data in the ConfigMap:

    kubectl edit configmap dns-autoscaler --namespace=kube-system

    Look for this line:

    linear: '{"coresPerReplica":256,"min":1,"nodesPerReplica":16}'

    Modify the fields according to your needs. The “min” field indicates the minimum number of DNS backends. The actual number of backends is calculated using this equation:

    replicas = max( ceil( cores × 1/coresPerReplica ) , ceil( nodes × 1/nodesPerReplica ) )

    Note that the values of both coresPerReplica and nodesPerReplica are floats.

    The idea is that when a cluster is using nodes that have many cores, coresPerReplica dominates. When a cluster is using nodes that have fewer cores, nodesPerReplica dominates.
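    The equation above can be made concrete with a short sketch. This is an illustration of the page's linear formula plus the “min” clamp, not the autoscaler's actual source code; the default parameter values match the manifest's --default-params.

    ```python
    import math

    def linear_replicas(cores, nodes, cores_per_replica=256.0,
                        nodes_per_replica=16.0, min_replicas=1):
        """replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica)),
        clamped below by the ConfigMap's "min" field."""
        replicas = max(math.ceil(cores / cores_per_replica),
                       math.ceil(nodes / nodes_per_replica))
        return max(replicas, min_replicas)

    # 10 small nodes with 4 cores each (40 cores total):
    # max(ceil(40/256), ceil(10/16)) = max(1, 1) = 1 replica.
    print(linear_replicas(cores=40, nodes=10))      # 1
    # 100 large nodes with 32 cores each (3200 cores total):
    # max(ceil(3200/256), ceil(100/16)) = max(13, 7) = 13, so
    # coresPerReplica dominates, as described above.
    print(linear_replicas(cores=3200, nodes=100))   # 13
    ```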

    There are other supported scaling patterns. For details, see cluster-proportional-autoscaler.
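    One of those other patterns is “ladder” mode, which replaces the linear formula with a step lookup table. The sketch below is a hedged illustration of that idea; the table values are made up for the example, and the exact ConfigMap format is documented in the cluster-proportional-autoscaler project.

    ```python
    # Illustrative step tables: [threshold, replicas] pairs.
    # These numbers are examples, not recommendations.
    cores_to_replicas = [[1, 1], [64, 3], [512, 5]]
    nodes_to_replicas = [[1, 1], [2, 2]]

    def ladder_lookup(value, steps):
        """Return the replica count for the largest threshold <= value."""
        replicas = 0
        for threshold, count in sorted(steps):
            if value >= threshold:
                replicas = count
        return replicas

    def ladder_replicas(cores, nodes):
        # Take whichever dimension demands more replicas, as in linear mode.
        return max(ladder_lookup(cores, cores_to_replicas),
                   ladder_lookup(nodes, nodes_to_replicas))

    print(ladder_replicas(cores=100, nodes=4))  # 3: 100 cores falls in the [64, 3] step
    ```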

    Disable DNS horizontal autoscaling

    This option works for all situations. Enter this command:

    kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system

    The output is:

    deployment.apps/dns-autoscaler scaled

    Verify that the replica count is zero:

    kubectl get rs --namespace=kube-system

    The output displays 0 in the DESIRED and CURRENT columns:

    NAME                        DESIRED   CURRENT   READY   AGE
    ...
    dns-autoscaler-6b59789fc8   0         0         0       ...
    ...

    This option works if dns-autoscaler is under your own control, which means no one will re-create it:

    kubectl delete deployment dns-autoscaler --namespace=kube-system

    The output is:

    deployment.apps "dns-autoscaler" deleted

    This option works if dns-autoscaler is under control of the (deprecated) Addon Manager, and you have write access to the master node.

    Sign in to the master node and delete the corresponding manifest file. The common path for this dns-autoscaler is:

    /etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml

    After the manifest file is deleted, the Addon Manager will delete the dns-autoscaler Deployment.

    Understand how DNS horizontal autoscaling works

    • The cluster-proportional-autoscaler application is deployed separately from the DNS service.

    • An autoscaler Pod runs a client that polls the Kubernetes API server for the number of nodes and cores in the cluster.

    • A desired replica count is calculated and applied to the DNS backends based on the current schedulable nodes and cores and the given scaling parameters.

    • The scaling parameters and data points are provided via a ConfigMap to the autoscaler, and it refreshes its parameters table every poll interval to be up to date with the latest desired scaling parameters.

    • Changes to the scaling parameters are allowed without rebuilding or restarting the autoscaler Pod.
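    The bullets above describe a simple control loop. The following is a schematic sketch of one iteration of that loop, not the real implementation: the fake_* helpers stand in for calls to the Kubernetes API server and the ConfigMap, and re-reading the parameters each iteration is what makes ConfigMap edits take effect without a restart.

    ```python
    import math

    def fake_list_nodes():
        # Stand-in for polling the API server for nodes: 10 schedulable
        # nodes with 4 cores each.
        return [{"cores": 4, "schedulable": True} for _ in range(10)]

    def fake_read_configmap():
        # Stand-in for reading the autoscaler's ConfigMap parameters.
        return {"coresPerReplica": 256.0, "nodesPerReplica": 16.0, "min": 1}

    def desired_replicas(nodes, params):
        # Count only schedulable nodes/cores, then apply the linear formula.
        usable = [n for n in nodes if n["schedulable"]]
        cores = sum(n["cores"] for n in usable)
        want = max(math.ceil(cores / params["coresPerReplica"]),
                   math.ceil(len(usable) / params["nodesPerReplica"]))
        return max(want, params["min"])

    def poll_once(apply):
        # One poll interval: refresh parameters, poll cluster size, apply.
        params = fake_read_configmap()
        nodes = fake_list_nodes()
        apply(desired_replicas(nodes, params))

    poll_once(lambda r: print(f"scaling DNS backends to {r} replica(s)"))
    ```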

    What’s next