Using A Manifest to Manage kOps Clusters

    Background

    kOps includes an API which allows users to utilize YAML or JSON manifests for managing their kops-created Kubernetes installations. In the same way that you can use a YAML manifest to deploy a Job, you can deploy and manage a kops Kubernetes instance with a manifest. All of these values are also accessible via the interactive editor with kops edit.

    You can see all of the options that are currently supported in the kOps API documentation.

    The following is a list of the benefits of using a file to manage instances.

    • Capability to access API values that are not accessible via the command line, such as setting the max price for spot instances.
    • Create, replace, update, and delete clusters without entering an interactive editor. This feature is helpful when automating cluster creation.
    • Ability to check files into source control that represent an installation.
    • Run commands such as kops delete -f mycluster.yaml.

    At this time, you must run kops create cluster and then export the YAML from the state store; we plan to add the capability to generate kOps YAML directly from the command line in the future. The following is an example of creating a cluster and exporting the YAML.
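    A minimal sketch of that workflow, assuming a cluster named k8s.example.com, an S3 state store bucket named example-state-store, and us-east-2 availability zones (all placeholders):

    ```shell
    export NAME=k8s.example.com
    export KOPS_STATE_STORE=s3://example-state-store
    kops create cluster --zones "us-east-2b,us-east-2c,us-east-2d" $NAME
    kops get $NAME -o yaml > $NAME.yaml
    ```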

    The above command exports a YAML document which contains the definition of the cluster, kind: Cluster, and the definitions of the instance groups, kind: InstanceGroup.

    NOTE: If you run kops get cluster $NAME -o yaml > $NAME.yaml, you will only get the cluster spec. Use the command above (kops get $NAME ...) for both the cluster spec and all instance groups.

    ```yaml
    apiVersion: kops.k8s.io/v1alpha2
    kind: Cluster
    metadata:
      name: k8s.example.com
    spec:
      api:
        loadBalancer:
          type: Public
      authorization:
        alwaysAllow: {}
      channel: stable
      cloudProvider: aws
      configBase: s3://example-state-store/k8s.example.com
      etcdClusters:
      - etcdMembers:
        - instanceGroup: master-us-east-2d
          name: a
        - instanceGroup: master-us-east-2b
          name: b
        - instanceGroup: master-us-east-2c
          name: c
        name: main
      - etcdMembers:
        - instanceGroup: master-us-east-2d
          name: a
        - instanceGroup: master-us-east-2b
          name: b
        - instanceGroup: master-us-east-2c
          name: c
        name: events
      kubernetesApiAccess:
      - 0.0.0.0/0
      kubernetesVersion: 1.6.6
      masterPublicName: api.k8s.example.com
      networkCIDR: 172.20.0.0/16
      networkID: vpc-6335dd1a
      networking:
        weave: {}
      nonMasqueradeCIDR: 100.64.0.0/10
      sshAccess:
      - 0.0.0.0/0
      subnets:
      - cidr: 172.20.32.0/19
        name: us-east-2d
        type: Private
        zone: us-east-2d
      - cidr: 172.20.64.0/19
        name: us-east-2b
        type: Private
        zone: us-east-2b
      - cidr: 172.20.96.0/19
        name: us-east-2c
        type: Private
        zone: us-east-2c
      - cidr: 172.20.0.0/22
        name: utility-us-east-2d
        type: Utility
        zone: us-east-2d
      - cidr: 172.20.4.0/22
        name: utility-us-east-2b
        type: Utility
        zone: us-east-2b
      - cidr: 172.20.8.0/22
        name: utility-us-east-2c
        type: Utility
        zone: us-east-2c
      topology:
        bastion:
          bastionPublicName: bastion.k8s.example.com
        dns:
          type: Public
        masters: private
        nodes: private
    ---
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: bastions
    spec:
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: t2.micro
      maxSize: 1
      minSize: 1
      role: Bastion
      subnets:
      - utility-us-east-2d
      - utility-us-east-2b
      - utility-us-east-2c
    ---
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: master-us-east-2d
    spec:
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: m4.large
      maxSize: 1
      minSize: 1
      role: Master
      subnets:
      - us-east-2d
    ---
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: master-us-east-2b
    spec:
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: m4.large
      maxSize: 1
      minSize: 1
      role: Master
      subnets:
      - us-east-2b
    ---
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: master-us-east-2c
    spec:
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: m4.large
      maxSize: 1
      minSize: 1
      role: Master
      subnets:
      - us-east-2c
    ---
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: nodes
    spec:
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: m4.xlarge
      maxSize: 3
      minSize: 3
      role: Node
      subnets:
      - us-east-2d
      - us-east-2b
      - us-east-2c
    ```

    YAML Examples

    With the above YAML file as a starting point, a user can add configuration that is not available via the command line. For instance, you can add a maxPrice value to a new instance group to use spot instances, as well as node and cloud labels for the new instance group.
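    A sketch of such an instance group (the group name, label keys and values, and max price are illustrative placeholders):

    ```yaml
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      labels:
        kops.k8s.io/cluster: k8s.example.com
      name: my-spot-nodes
    spec:
      cloudLabels:
        team: me
        project: ion
      image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
      machineType: m4.10xlarge
      maxPrice: "1.07"
      maxSize: 42
      minSize: 42
      nodeLabels:
        spot: "true"
      role: Node
      subnets:
      - us-east-2d
      - us-east-2b
      - us-east-2c
    ```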

    This configuration will create an Auto Scaling group of 42 m4.10xlarge nodes running as spot instances with custom labels.

    To create the cluster, execute:

    ```shell
    kops create -f $NAME.yaml
    kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
    kops update cluster $NAME --yes
    kops rolling-update cluster $NAME --yes
    ```

    Please refer to the rolling-update documentation.

    To update the cluster, modify the cluster spec YAML file and run:
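    A sketch of the update sequence, reusing the $NAME variable from the commands above:

    ```shell
    kops replace -f $NAME.yaml
    kops update cluster $NAME --yes
    kops rolling-update cluster $NAME --yes
    ```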

    Please refer to the rolling-update documentation.

    kops implements a full API that defines the various elements in the YAML file exported above. Two top-level components exist: ClusterSpec and InstanceGroup.

    ```yaml
    apiVersion: kops.k8s.io/v1alpha2
    kind: Cluster
    metadata:
      name: k8s.example.com
    spec:
      api:
    ```

    The ClusterSpec allows a user to set configuration such as the Docker log driver, the Kubernetes API server log level, the VPC to reuse (networkID), and the Kubernetes version.
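    A fragment sketching a few of these fields (the values shown are illustrative):

    ```yaml
    spec:
      docker:
        logDriver: json-file
      kubeAPIServer:
        logLevel: 2
      kubernetesVersion: 1.6.6
      networkID: vpc-6335dd1a
    ```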

    More information about some of the elements in the ClusterSpec is available in the following:

    • Cluster Spec document which outlines some of the values in the Cluster Specification.
    • GPU setup
    • IAM roles - adding additional instance IAM roles.
    • Labels

    To access the full configuration that a kops installation is running, execute:
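    Assuming the $NAME variable from the earlier examples:

    ```shell
    kops get cluster $NAME --full -o yaml
    ```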

    This command prints the entire YAML configuration, including values that kOps has defaulted for you. Do not use the full document as the basis for your manifest; you may experience strange and unwanted behaviors.

    Instance Groups

    ```yaml
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      name: foo
    spec:
    ```

    Full documentation is accessible via the kOps API reference.

    Instance Groups map to Auto Scaling Groups in AWS and to Instance Groups in GCE. They are an API-level description of a group of compute instances used as Masters or Nodes.

    More documentation is available in the Instance Group document.

    Closing Thoughts

    • If you do not need to define or customize a value, let kOps set that value. Setting too many values prevents kOps from doing its job of setting up the cluster, and you may end up with strange bugs.
    • If you end up with strange bugs, try letting kOps do more.

    If you need to run a custom version of the Kubernetes Controller Manager, set kubeControllerManager.image in the cluster spec and update your cluster. This is the beauty of using a manifest for your cluster!
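    For example, a fragment along these lines (the image reference is an illustrative placeholder) swaps in a custom controller-manager image:

    ```yaml
    spec:
      kubeControllerManager:
        image: registry.example.com/kube-controller-manager:v1.6.6-custom
    ```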