Creating infrastructure machine sets

    You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

    In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

    The following infrastructure workloads do not incur OKD worker subscriptions:

    • Kubernetes and OKD control plane services that run on masters

    • The default router

    • The integrated container image registry

    • The HAProxy-based Ingress Controller

    • The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects

    • Cluster aggregated logging

    • Service brokers

    • Red Hat Quay

    • Red Hat OpenShift Data Foundation

    • Red Hat Advanced Cluster Management for Kubernetes

    • Red Hat Advanced Cluster Security for Kubernetes

    • Red Hat OpenShift GitOps

    • Red Hat OpenShift Pipelines

    Any node that runs any other container, pod, or component is a worker node that your subscription must cover.

    For information about infrastructure nodes and which components can run on infrastructure nodes, see the “Red Hat OpenShift control plane and infrastructure nodes” section in the document.

    To create an infrastructure node, you can use a machine set, label the node, or use a machine config pool.

    Use the sample compute machine set for your cloud.

    Sample YAML for a compute machine set custom resource on Alibaba Cloud

    This sample YAML defines a compute machine set that runs in a specified Alibaba Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-<infra>-<zone> (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: 1
    12. selector:
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
    16. template:
    17. metadata:
    18. labels:
    19. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    20. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    21. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    22. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
    23. spec:
    24. metadata:
    25. labels:
    26. node-role.kubernetes.io/infra: ""
    27. providerSpec:
    28. value:
    29. apiVersion: machine.openshift.io/v1
    30. credentialsSecret:
    31. name: alibabacloud-credentials
    32. imageId: <image_id> (4)
    33. instanceType: <instance_type> (5)
    34. kind: AlibabaCloudMachineProviderConfig
    35. ramRoleName: <infrastructure_id>-role-worker (6)
    36. regionId: <region> (7)
    37. resourceGroup: (8)
    38. id: <resource_group_id>
    39. type: ID
    40. securityGroups:
    41. - tags: (9)
    42. - Key: Name
    43. Value: <infrastructure_id>-sg-<role>
    44. type: Tags
    45. systemDisk: (10)
    46. category: cloud_essd
    47. size: <disk_size>
    48. tag: (9)
    49. - Key: kubernetes.io/cluster/<infrastructure_id>
    50. Value: owned
    51. userDataSecret:
    52. name: <user_data_secret> (11)
    53. vSwitch:
    54. tags: (9)
    55. - Key: Name
    56. Value: <infrastructure_id>-vswitch-<zone>
    57. type: Tags
    58. vpcId: ""
    59. zoneId: <zone> (12)
    60. taints: (13)
    61. - key: node-role.kubernetes.io/infra
    62. effect: NoSchedule
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2Specify the <infra> node label.
    3Specify the infrastructure ID, <infra> node label, and zone.
    4Specify the image to use. Use an image from an existing default compute machine set for the cluster.
    5Specify the instance type you want to use for the compute machine set.
    6Specify the name of the RAM role to use for the compute machine set. Use the value that the installer populates in the default compute machine set.
    7Specify the region to place machines on.
    8Specify the resource group and type for the cluster. You can use the value that the installer populates in the default compute machine set, or specify a different one.
    9Specify the tags to use for the compute machine set. Minimally, you must include the tags shown in this example, with appropriate values for your cluster. You can include additional tags, including the tags that the installer populates in the default compute machine set it creates, as needed.
    10Specify the type and size of the root disk. Use the category value that the installer populates in the default compute machine set it creates. If required, specify a different value in gigabytes for size.
    11Specify the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installer populates in the default compute machine set.
    12Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
    13Specify a taint to prevent user workloads from being scheduled on infra nodes.

    After you add the NoSchedule taint on the infrastructure node, existing DNS pods that run on that node are marked as misscheduled. You must either delete the misscheduled DNS pods or add a toleration to them.
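
    One way to add that toleration is through the DNS Operator node placement settings. The following is a minimal sketch, assuming that your cluster version supports the spec.nodePlacement field in the dns.operator/default custom resource; verify the field against your cluster, for example with oc edit dns.operator/default, before applying it:

    1. spec:
    2.   nodePlacement:
    3.     tolerations:
    4.     # Allow DNS pods to remain on nodes that carry the infra taint
    5.     - key: node-role.kubernetes.io/infra
    6.       operator: Exists
    7.       effect: NoSchedule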

    Machine set parameters for Alibaba Cloud usage statistics

    The default compute machine sets that the installer creates for Alibaba Cloud clusters include nonessential tag values that Alibaba Cloud uses internally to track usage statistics. These tags are populated in the securityGroups, tag, and vSwitch parameters of the spec.template.spec.providerSpec.value list.

    When creating compute machine sets to deploy additional machines, you must include the required Kubernetes tags. The usage statistics tags are applied by default, even if they are not specified in the compute machine sets you create. You can also include additional tags as needed.

    The following YAML snippets indicate which tags in the default compute machine sets are optional and which are required.

    Tags in spec.template.spec.providerSpec.value.securityGroups

    1. spec:
    2. template:
    3. spec:
    4. providerSpec:
    5. value:
    6. securityGroups:
    7. - tags:
    8. - Key: kubernetes.io/cluster/<infrastructure_id> (1)
    9. Value: owned
    10. - Key: GISV
    11. Value: ocp
    12. - Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
    13. Value: ocp
    14. - Key: Name
    15. Value: <infrastructure_id>-sg-<role> (2)
    16. type: Tags
    1Optional: This tag is applied even when not specified in the compute machine set.
    2Required.

    where:

    • <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.

    • <role> is the node label to add.

    Tags in spec.template.spec.providerSpec.value.tag

    1. spec:
    2. template:
    3. spec:
    4. providerSpec:
    5. value:
    6. tag:
    7. - Key: kubernetes.io/cluster/<infrastructure_id> (2)
    8. Value: owned
    9. - Key: GISV (1)
    10. Value: ocp
    11. - Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
    12. Value: ocp
    1Optional: This tag is applied even when not specified in the compute machine set.
    2Required.

    where <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.

    Tags in spec.template.spec.providerSpec.value.vSwitch

    1. spec:
    2. template:
    3. spec:
    4. providerSpec:
    5. value:
    6. vSwitch:
    7. tags:
    8. - Key: kubernetes.io/cluster/<infrastructure_id> (1)
    9. Value: owned
    10. - Key: GISV (1)
    11. Value: ocp
    12. - Key: sigs.k8s.io/cloud-provider-alibaba/origin (1)
    13. Value: ocp
    14. - Key: Name
    15. Value: <infrastructure_id>-vswitch-<zone> (2)
    16. type: Tags
    1Optional: This tag is applied even when not specified in the compute machine set.
    2Required.

    where:

    • <infrastructure_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.

    • <zone> is the zone within your region to place machines on.

    Sample YAML for a compute machine set custom resource on AWS

    This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. name: <infrastructure_id>-infra-<zone> (2)
    7. namespace: openshift-machine-api
    8. spec:
    9. replicas: 1
    10. selector:
    11. matchLabels:
    12. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    13. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
    14. template:
    15. metadata:
    16. labels:
    17. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    18. machine.openshift.io/cluster-api-machine-role: infra (3)
    19. machine.openshift.io/cluster-api-machine-type: infra (3)
    20. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
    21. spec:
    22. metadata:
    23. labels:
    24. node-role.kubernetes.io/infra: "" (3)
    25. providerSpec:
    26. value:
    27. ami:
    28. id: ami-046fe691f52a953f9 (4)
    29. apiVersion: awsproviderconfig.openshift.io/v1beta1
    30. blockDevices:
    31. - ebs:
    32. iops: 0
    33. volumeSize: 120
    34. volumeType: gp2
    35. credentialsSecret:
    36. name: aws-cloud-credentials
    37. deviceIndex: 0
    38. iamInstanceProfile:
    39. id: <infrastructure_id>-worker-profile (1)
    40. instanceType: m6i.large
    41. kind: AWSMachineProviderConfig
    42. placement:
    43. availabilityZone: <zone> (6)
    44. region: <region> (7)
    45. securityGroups:
    46. - filters:
    47. - name: tag:Name
    48. values:
    49. - <infrastructure_id>-worker-sg (1)
    50. subnet:
    51. filters:
    52. - name: tag:Name
    53. values:
    54. - <infrastructure_id>-private-<zone> (8)
    55. tags:
    56. - name: kubernetes.io/cluster/<infrastructure_id> (1)
    57. value: owned
    58. - name: <custom_tag_name> (5)
    59. value: <custom_tag_value> (5)
    60. userDataSecret:
    61. name: worker-user-data
    62. taints: (9)
    63. - key: node-role.kubernetes.io/infra
    64. effect: NoSchedule
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2Specify the infrastructure ID, infra role node label, and zone.
    3Specify the infra role node label.
    4Specify a valid Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region. You can check the AMI ID that an existing compute machine set uses by running the following command:
    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
    3. get machineset/<infrastructure_id>-<role>-<zone>
    5Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.

    Custom tags can also be specified during installation in the install-config.yaml file. If the install-config.yaml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yaml file.

    6Specify the zone, for example, us-east-1a.
    7Specify the region, for example, us-east-1.
    8Specify the infrastructure ID and zone.
    9Specify a taint to prevent user workloads from being scheduled on infra nodes.

    Machine sets running on AWS support non-guaranteed Spot Instances. You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. You can configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file.
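
    The following minimal sketch shows where the field goes in the compute machine set; an empty spotMarketOptions object requests Spot Instances with the default options, and the surrounding keys are abbreviated here:

    1. providerSpec:
    2.   value:
    3.     # Request Spot Instances instead of On-Demand Instances
    4.     spotMarketOptions: {}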

    Sample YAML for a compute machine set custom resource on Azure

    This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-infra-<region> (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: 1
    12. selector:
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    16. template:
    17. metadata:
    18. creationTimestamp: null
    19. labels:
    20. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    21. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    22. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    23. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    24. spec:
    25. metadata:
    26. creationTimestamp: null
    27. labels:
    28. machine.openshift.io/cluster-api-machineset: <machineset_name> (4)
    29. node-role.kubernetes.io/infra: "" (2)
    30. providerSpec:
    31. value:
    32. apiVersion: azureproviderconfig.openshift.io/v1beta1
    33. credentialsSecret:
    34. name: azure-cloud-credentials
    35. namespace: openshift-machine-api
    36. image: (5)
    37. offer: ""
    38. publisher: ""
    39. resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (6)
    40. sku: ""
    41. version: ""
    42. internalLoadBalancer: ""
    43. kind: AzureMachineProviderSpec
    44. location: <region> (7)
    45. managedIdentity: <infrastructure_id>-identity (1)
    46. metadata:
    47. creationTimestamp: null
    48. natRule: null
    49. networkResourceGroup: ""
    50. osDisk:
    51. diskSizeGB: 128
    52. managedDisk:
    53. storageAccountType: Premium_LRS
    54. osType: Linux
    55. publicIP: false
    56. publicLoadBalancer: ""
    57. resourceGroup: <infrastructure_id>-rg (1)
    58. sshPrivateKey: ""
    59. sshPublicKey: ""
    60. tags:
    61. - name: <custom_tag_name> (9)
    62. value: <custom_tag_value> (9)
    63. subnet: <infrastructure_id>-<role>-subnet (1) (2)
    64. userDataSecret:
    65. name: worker-user-data (2)
    66. vmSize: Standard_D4s_v3
    67. vnet: <infrastructure_id>-vnet (1)
    68. zone: "1" (8)
    69. taints: (10)
    70. - key: node-role.kubernetes.io/infra
    71. effect: NoSchedule
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

    You can obtain the subnet by running the following command:

    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
    3. get machineset/<infrastructure_id>-worker-centralus1

    You can obtain the vnet by running the following command:

    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
    3. get machineset/<infrastructure_id>-worker-centralus1
    2Specify the <infra> node label.
    3Specify the infrastructure ID, <infra> node label, and region.
    4Optional: Specify the compute machine set name to enable the use of availability sets. This setting only applies to new compute machines.
    5Specify the image details for your compute machine set. If you want to use an Azure Marketplace image, see “Selecting an Azure Marketplace image”.
    6Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
    7Specify the region to place machines on.
    8Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
    9Optional: Specify custom tags in your machine set. Provide the tag name in <custom_tag_name> field and the corresponding tag value in <custom_tag_value> field.
    10Specify a taint to prevent user workloads from being scheduled on infra nodes.

    Machine sets running on Azure support non-guaranteed Spot VMs. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file.
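
    The following minimal sketch shows where the field goes in the compute machine set; an empty spotVMOptions object requests Spot VMs with the default options, and the surrounding keys are abbreviated here:

    1. providerSpec:
    2.   value:
    3.     # Request Azure Spot VMs instead of standard VMs
    4.     spotVMOptions: {}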

    Sample YAML for a compute machine set custom resource on Azure Stack Hub

    This sample YAML defines a compute machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-infra-<region> (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: 1
    12. selector:
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    16. template:
    17. metadata:
    18. creationTimestamp: null
    19. labels:
    20. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    21. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    22. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    23. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    24. spec:
    25. metadata:
    26. creationTimestamp: null
    27. labels:
    28. node-role.kubernetes.io/infra: "" (2)
    29. taints: (4)
    30. - key: node-role.kubernetes.io/infra
    31. effect: NoSchedule
    32. providerSpec:
    33. value:
    34. apiVersion: machine.openshift.io/v1beta1
    35. availabilitySet: <availability_set> (6)
    36. credentialsSecret:
    37. name: azure-cloud-credentials
    38. namespace: openshift-machine-api
    39. image:
    40. offer: ""
    41. publisher: ""
    42. resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (1)
    43. sku: ""
    44. version: ""
    45. internalLoadBalancer: ""
    46. kind: AzureMachineProviderSpec
    47. location: <region> (5)
    48. managedIdentity: <infrastructure_id>-identity (1)
    49. metadata:
    50. creationTimestamp: null
    51. natRule: null
    52. networkResourceGroup: ""
    53. osDisk:
    54. diskSizeGB: 128
    55. managedDisk:
    56. storageAccountType: Premium_LRS
    57. osType: Linux
    58. publicIP: false
    59. publicLoadBalancer: ""
    60. resourceGroup: <infrastructure_id>-rg (1)
    61. sshPrivateKey: ""
    62. sshPublicKey: ""
    63. subnet: <infrastructure_id>-<role>-subnet (1) (2)
    64. userDataSecret:
    65. name: worker-user-data (2)
    66. vmSize: Standard_DS4_v2
    67. vnet: <infrastructure_id>-vnet (1)
    68. zone: "1" (7)
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

    You can obtain the subnet by running the following command:

    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
    3. get machineset/<infrastructure_id>-worker-centralus1

    You can obtain the vnet by running the following command:

    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
    3. get machineset/<infrastructure_id>-worker-centralus1
    2Specify the <infra> node label.
    3Specify the infrastructure ID, <infra> node label, and region.
    4Specify a taint to prevent user workloads from being scheduled on infra nodes.
    5Specify the region to place machines on.
    6Specify the availability set for the cluster.
    7Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.

    Machine sets running on Azure Stack Hub do not support non-guaranteed Spot VMs.

    Sample YAML for a compute machine set custom resource on IBM Cloud

    This sample YAML defines a compute machine set that runs in a specified IBM Cloud zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-<infra>-<region> (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: 1
    12. selector:
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
    16. template:
    17. metadata:
    18. labels:
    19. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    20. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    21. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    22. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> (3)
    23. spec:
    24. metadata:
    25. labels:
    26. node-role.kubernetes.io/infra: ""
    27. providerSpec:
    28. value:
    29. apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1
    30. credentialsSecret:
    31. name: ibmcloud-credentials
    32. image: <infrastructure_id>-rhcos (4)
    33. kind: IBMCloudMachineProviderSpec
    34. primaryNetworkInterface:
    35. securityGroups:
    36. - <infrastructure_id>-sg-cluster-wide
    37. - <infrastructure_id>-sg-openshift-net
    38. subnet: <infrastructure_id>-subnet-compute-<zone> (5)
    39. profile: <instance_profile> (6)
    40. region: <region> (7)
    41. resourceGroup: <resource_group> (8)
    42. userDataSecret:
    43. name: <role>-user-data (2)
    44. vpc: <vpc_name> (9)
    45. zone: <zone> (10)
    46. taints: (11)
    47. - key: node-role.kubernetes.io/infra
    48. effect: NoSchedule
    1The infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2The <infra> node label.
    3The infrastructure ID, <infra> node label, and region.
    4The custom Fedora CoreOS (FCOS) image that was used for cluster installation.
    5The infrastructure ID and zone within your region to place machines on. Be sure that your region supports the zone that you specify.
    6Specify the IBM Cloud instance profile.
    7Specify the region to place machines on.
    8The resource group that machine resources are placed in. This is either an existing resource group specified at installation time, or an installer-created resource group named based on the infrastructure ID.
    9The VPC name.
    10Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.
    11The taint to prevent user workloads from being scheduled on infra nodes.

    Sample YAML for a compute machine set custom resource on GCP

    This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "", where infra is the node label to add.

    Values obtained by using the OpenShift CLI

    In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

    Infrastructure ID

    The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

    Image path

    The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

    1. $ oc -n openshift-machine-api \
    2. -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
    3. get machineset/<infrastructure_id>-worker-a

    Sample GCP MachineSet values

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. name: <infrastructure_id>-w-a
    7. namespace: openshift-machine-api
    8. spec:
    9. replicas: 1
    10. selector:
    11. matchLabels:
    12. machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    13. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    14. template:
    15. metadata:
    16. creationTimestamp: null
    17. labels:
    18. machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    19. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    20. machine.openshift.io/cluster-api-machine-type: <infra>
    21. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    22. spec:
    23. metadata:
    24. labels:
    25. node-role.kubernetes.io/infra: ""
    26. providerSpec:
    27. value:
    28. apiVersion: gcpprovider.openshift.io/v1beta1
    29. canIPForward: false
    30. credentialsSecret:
    31. name: gcp-cloud-credentials
    32. deletionProtection: false
    33. disks:
    34. - autoDelete: true
    35. boot: true
    36. image: <path_to_image> (3)
    37. labels: null
    38. sizeGb: 128
    39. type: pd-ssd
    40. gcpMetadata: (4)
    41. - key: <custom_metadata_key>
    42. value: <custom_metadata_value>
    43. kind: GCPMachineProviderSpec
    44. machineType: n1-standard-4
    45. metadata:
    46. creationTimestamp: null
    47. networkInterfaces:
    48. - network: <infrastructure_id>-network
    49. subnetwork: <infrastructure_id>-worker-subnet
    50. projectID: <project_name> (5)
    51. region: us-central1
    52. serviceAccounts:
    53. - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
    54. scopes:
    55. - https://www.googleapis.com/auth/cloud-platform
    56. tags:
    57. - <infrastructure_id>-worker
    58. userDataSecret:
    59. name: worker-user-data
    60. zone: us-central1-a
    61. taints: (6)
    62. - key: node-role.kubernetes.io/infra
    63. effect: NoSchedule
    1For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
    2For <infra>, specify the <infra> node label.
    3Specify the path to the image that is used in current compute machine sets.

    To use a GCP Marketplace image, specify the offer to use:

    4Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
    5For <project_name>, specify the name of the GCP project that you use for your cluster.
    6Specify a taint to prevent user workloads from being scheduled on infra nodes.

    Machine sets running on GCP support non-guaranteed preemptible VM instances. You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file.
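
    The following minimal sketch shows where the field goes in the compute machine set; the surrounding keys are abbreviated here:

    1. providerSpec:
    2.   value:
    3.     # Request preemptible VM instances instead of normal instances
    4.     preemptible: true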

    Sample YAML for a compute machine set custom resource on Nutanix

    This sample YAML defines a compute machine set that runs on Nutanix and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-<infra>-<zone> (3)
    9. namespace: openshift-machine-api
    10. annotations: (4)
    11. machine.openshift.io/memoryMb: "16384"
    12. machine.openshift.io/vCPU: "4"
    13. spec:
    14. replicas: 3
    15. selector:
    16. matchLabels:
    17. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    18. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
    19. template:
    20. metadata:
    21. labels:
    22. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    23. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    24. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    25. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> (3)
    26. spec:
    27. metadata:
    28. labels:
    29. node-role.kubernetes.io/infra: ""
    30. providerSpec:
    31. value:
    32. apiVersion: machine.openshift.io/v1
    33. cluster:
    34. type: uuid
    35. uuid: <cluster_uuid>
    36. credentialsSecret:
    37. name: nutanix-creds-secret
    38. image:
    39. name: <infrastructure_id>-rhcos (5)
    40. type: name
    41. kind: NutanixMachineProviderConfig
    42. memorySize: 16Gi (6)
    43. subnets:
    44. - type: uuid
    45. uuid: <subnet_uuid>
    46. systemDiskSize: 120Gi (7)
    47. userDataSecret:
    48. name: <user_data_secret> (8)
    49. vcpuSockets: 4 (9)
    50. vcpusPerSocket: 1 (10)
    51. taints: (11)
    52. - key: node-role.kubernetes.io/infra
    53. effect: NoSchedule

    Sample YAML for a compute machine set custom resource on OpenStack

    This sample YAML defines a compute machine set that runs on OpenStack and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    7. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    8. name: <infrastructure_id>-infra (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: <number_of_replicas>
    12. selector:
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
    16. template:
    17. metadata:
    18. labels:
    19. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    20. machine.openshift.io/cluster-api-machine-role: <infra> (2)
    21. machine.openshift.io/cluster-api-machine-type: <infra> (2)
    22. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
    23. spec:
    24. metadata:
    25. creationTimestamp: null
    26. labels:
    27. node-role.kubernetes.io/infra: ""
    28. taints: (4)
    29. - key: node-role.kubernetes.io/infra
    30. effect: NoSchedule
    31. providerSpec:
    32. value:
    33. apiVersion: openstackproviderconfig.openshift.io/v1alpha1
    34. cloudName: openstack
    35. cloudsSecret:
    36. name: openstack-cloud-credentials
    37. namespace: openshift-machine-api
    38. flavor: <nova_flavor>
    39. image: <glance_image_name_or_location>
    40. serverGroupID: <optional_UUID_of_server_group> (5)
    41. kind: OpenstackProviderSpec
    42. networks: (6)
    43. - filter: {}
    44. subnets:
    45. - filter:
    46. name: <subnet_name>
    47. tags: openshiftClusterID=<infrastructure_id> (1)
    48. primarySubnet: <rhosp_subnet_UUID> (7)
    49. securityGroups:
    50. - filter: {}
    51. name: <infrastructure_id>-worker (1)
    52. serverMetadata:
    53. Name: <infrastructure_id>-worker (1)
    54. openshiftClusterID: <infrastructure_id> (1)
    55. tags:
    56. - openshiftClusterID=<infrastructure_id> (1)
    57. trunk: true
    58. userDataSecret:
    59. name: worker-user-data (2)
    60. availabilityZone: <optional_openstack_availability_zone>
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2Specify the <infra> node label.
    3Specify the infrastructure ID and <infra> node label.
    4Specify a taint to prevent user workloads from being scheduled on infra nodes.
    5To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended.
    6Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value.
    7Specify the OpenStack subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.

    Sample YAML for a compute machine set custom resource on oVirt

    This sample YAML defines a compute machine set that runs on oVirt and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. labels:
    5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    6. machine.openshift.io/cluster-api-machine-role: <role> (2)
    7. machine.openshift.io/cluster-api-machine-type: <role> (2)
    8. name: <infrastructure_id>-<role> (3)
    9. namespace: openshift-machine-api
    10. spec:
    11. replicas: <number_of_replicas> (4)
    12. selector: (5)
    13. matchLabels:
    14. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    15. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
    16. template:
    17. metadata:
    18. labels:
    19. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    20. machine.openshift.io/cluster-api-machine-role: <role> (2)
    21. machine.openshift.io/cluster-api-machine-type: <role> (2)
    22. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
    23. spec:
    24. metadata:
    25. labels:
    26. node-role.kubernetes.io/<role>: "" (2)
    27. providerSpec:
    28. value:
    29. apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
    30. cluster_id: <ovirt_cluster_id> (6)
    31. template_name: <ovirt_template_name> (7)
    32. sparse: <boolean_value> (8)
    33. format: <raw_or_cow> (9)
    34. cpu: (10)
    35. sockets: <number_of_sockets> (11)
    36. cores: <number_of_cores> (12)
    37. threads: <number_of_threads> (13)
    38. memory_mb: <memory_size> (14)
    39. guaranteed_memory_mb: <memory_size> (15)
    40. os_disk: (16)
    41. size_gb: <disk_size> (17)
    42. storage_domain_id: <storage_domain_UUID> (18)
    43. network_interfaces: (19)
    44. vnic_profile_id: <vnic_profile_id> (20)
    45. credentialsSecret:
    46. name: ovirt-credentials (21)
    47. kind: OvirtMachineProviderSpec
    48. type: <workload_type> (22)
    49. auto_pinning_policy: <auto_pinning_policy> (23)
    50. hugepages: <hugepages> (24)
    51. affinityGroupsNames:
    52. - compute (25)
    53. userDataSecret:
    54. name: worker-user-data
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2Specify the node label to add.
    3Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters.
    4Specify the number of machines to create.
    5Selector for the machines.
    6Specify the UUID for the oVirt cluster to which this VM instance belongs.
    7Specify the oVirt VM template to use to create the machine.
    8Setting this option to false enables preallocation of disks. The default is true. Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk.
    9Can be set to cow or raw. The default is cow. The cow format is optimized for virtual machines.

    Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage.

    10Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads.
    11Optional: Specify the number of sockets for a VM.
    12Optional: Specify the number of cores per socket.
    13Optional: Specify the number of threads per core.
    14Optional: Specify the size of a VM’s memory in MiB.
    15Optional: Specify the size of a virtual machine’s guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained.

    If you are using a version earlier than oVirt 4.4.8, see .

    16Optional: Root disk of the node.
    17Optional: Specify the size of the bootable disk in GiB.
    18Optional: Specify the UUID of the storage domain for the compute node’s disks. If none is provided, the compute node is created on the same storage domain as the control nodes by default.
    19Optional: List of the network interfaces of the VM. If you include this parameter, OKD discards all network interfaces from the template and creates new ones.
    20Optional: Specify the vNIC profile ID.
    21Specify the name of the secret object that holds the oVirt credentials.
    22Optional: Specify the workload type for which the instance is optimized. This value affects the oVirt VM parameter. Supported values: desktop, server (default), high_performance. high_performance improves performance on the VM. Limitations exist, for example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide.
    23Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none, resize_and_pin. For more information, see the Virtual Machine Management Guide.
    24Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576. For more information, see Configuring Huge Pages in the Virtual Machine Management Guide.
    25Optional: A list of affinity group names to be applied to the VMs. The affinity groups must exist in oVirt.

    Because oVirt uses a template when creating a VM, if you do not specify a value for an optional parameter, oVirt uses the value for that parameter that is specified in the template.

    Sample YAML for a compute machine set custom resource on vSphere

    This sample YAML defines a compute machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

    In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

    1. apiVersion: machine.openshift.io/v1beta1
    2. kind: MachineSet
    3. metadata:
    4. creationTimestamp: null
    5. labels:
    6. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    7. name: <infrastructure_id>-infra (2)
    8. namespace: openshift-machine-api
    9. spec:
    10. replicas: 1
    11. selector:
    12. matchLabels:
    13. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    14. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
    15. template:
    16. metadata:
    17. creationTimestamp: null
    18. labels:
    19. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    20. machine.openshift.io/cluster-api-machine-role: <infra> (3)
    21. machine.openshift.io/cluster-api-machine-type: <infra> (3)
    22. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
    23. spec:
    24. metadata:
    25. creationTimestamp: null
    26. labels:
    27. node-role.kubernetes.io/infra: "" (3)
    28. taints: (4)
    29. - key: node-role.kubernetes.io/infra
    30. effect: NoSchedule
    31. providerSpec:
    32. value:
    33. apiVersion: vsphereprovider.openshift.io/v1beta1
    34. credentialsSecret:
    35. name: vsphere-cloud-credentials
    36. diskGiB: 120
    37. kind: VSphereMachineProviderSpec
    38. memoryMiB: 8192
    39. metadata:
    40. creationTimestamp: null
    41. network:
    42. devices:
    43. - networkName: "<vm_network_name>" (5)
    44. numCPUs: 4
    45. numCoresPerSocket: 1
    46. snapshot: ""
    47. template: <vm_template_name> (6)
    48. userDataSecret:
    49. name: worker-user-data
    50. workspace:
    51. datacenter: <vcenter_datacenter_name> (7)
    52. datastore: <vcenter_datastore_name> (8)
    53. folder: <vcenter_vm_folder_path> (9)
    54. resourcepool: <vsphere_resource_pool> (10)
    55. server: <vcenter_server_ip> (11)
    1Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
    1. $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
    2Specify the infrastructure ID and <infra> node label.
    3Specify the <infra> node label.
    4Specify a taint to prevent user workloads from being scheduled on infra nodes.
    5Specify the vSphere VM network to deploy the compute machine set to. This VM network must be where other compute machines reside in the cluster.
    6Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
    7Specify the vCenter Datacenter to deploy the compute machine set on.
    8Specify the vCenter Datastore to deploy the compute machine set on.
    9Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
    10Specify the vSphere resource pool for your VMs.
    11Specify the vCenter server IP or fully qualified domain name.

    Creating a compute machine set

    In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

    Prerequisites

    • Deploy an OKD cluster.

    • Install the OpenShift CLI (oc).

    • Log in to oc as a user with cluster-admin permission.

    Procedure

    1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

      Ensure that you set the <clusterID> and <role> parameter values.

    2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

      1. To list the compute machine sets in your cluster, run the following command:

        1. $ oc get machinesets -n openshift-machine-api

        Example output

      2. To view values of a specific compute machine set custom resource (CR), run the following command:

        1. $ oc get machineset <machineset_name> \
        2. -n openshift-machine-api -o yaml

        Example output

        1. apiVersion: machine.openshift.io/v1beta1
        2. kind: MachineSet
        3. metadata:
        4. labels:
        5. machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        6. name: <infrastructure_id>-<role> (2)
        7. namespace: openshift-machine-api
        8. spec:
        9. replicas: 1
        10. selector:
        11. matchLabels:
        12. machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        13. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        14. template:
        15. metadata:
        16. labels:
        17. machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        18. machine.openshift.io/cluster-api-machine-role: <role>
        19. machine.openshift.io/cluster-api-machine-type: <role>
        20. machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        21. spec:
        22. providerSpec: (3)
        23. ...
        1The cluster infrastructure ID.
        2A default node label.

        For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

        3The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
    3. Create a MachineSet CR by running the following command:

      1. $ oc create -f <file_name>.yaml

    Verification

    • View the list of compute machine sets by running the following command:

      1. $ oc get machineset -n openshift-machine-api

      Example output

      1. NAME DESIRED CURRENT READY AVAILABLE AGE
      2. agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
      3. agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
      4. agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
      5. agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
      6. agl030519-vplxk-worker-us-east-1d 0 0 55m
      7. agl030519-vplxk-worker-us-east-1e 0 0 55m
      8. agl030519-vplxk-worker-us-east-1f 0 0 55m

      When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
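
      You can also confirm that the machines backing the new compute machine set were created. This supplementary check is not part of the original procedure:

      1. $ oc get machines -n openshift-machine-api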

    Creating an infrastructure node

    See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.

    Cluster requirements dictate that infrastructure nodes, also called infra nodes, be provisioned. The installation program provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application nodes, also called app nodes, through labeling.

    Procedure

    1. Add a label to the worker node that you want to act as application node:

      1. $ oc label node <node-name> node-role.kubernetes.io/app=""
    2. Add a label to the worker nodes that you want to act as infrastructure nodes:

      1. $ oc label node <node-name> node-role.kubernetes.io/infra=""
    3. Check to see whether the applicable nodes now have the infra and app roles:

      1. $ oc get nodes
    4. Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.

      If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.

      However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod’s label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.

      You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts, as shown in the example after this procedure.

      1. Edit the Scheduler object:

        1. $ oc edit scheduler cluster
      2. Add the defaultNodeSelector field with the appropriate node selector:

        1. apiVersion: config.openshift.io/v1
        2. kind: Scheduler
        3. metadata:
        4. name: cluster
        5. ...
        6. spec:
        7. defaultNodeSelector: topology.kubernetes.io/region=us-east-1 (1)
        8. ...
        1This example node selector deploys pods on nodes in the us-east-1 region by default.
      3. Save the file to apply the changes.

    You can now move infrastructure resources to the newly labeled infra nodes.
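
    As an alternative to the cluster-wide default node selector described in step 4 of the preceding procedure, you can set a project node selector on an individual namespace by using the openshift.io/node-selector annotation. The namespace name in the following example is a placeholder:

    1. $ oc annotate namespace <namespace> openshift.io/node-selector="node-role.kubernetes.io/infra="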

    Creating a machine config pool for infrastructure machines

    If you need infrastructure machines to have dedicated configurations, you must create an infra pool.

    Procedure

    1. Add a label to the node you want to assign as the infra node with a specific label:

      1. $ oc label node <node_name> <label>

      For example:

      1. $ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
    2. Create a machine config pool that contains both the worker role and your custom role as machine config selector:

      1. $ cat infra.mcp.yaml

      Example output

      1. apiVersion: machineconfiguration.openshift.io/v1
      2. kind: MachineConfigPool
      3. metadata:
      4. name: infra
      5. spec:
      6. machineConfigSelector:
      7. matchExpressions:
      8. - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
      9. nodeSelector:
      10. matchLabels:
      11. node-role.kubernetes.io/infra: "" (2)
      1Add the worker role and your custom role.
      2Add the label you added to the node as a nodeSelector.
    3. Check the machine configs to ensure that the infrastructure configuration rendered successfully:

      1. $ oc get machineconfig

      Example output

      1. NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED
      2. 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      3. 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      4. 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      5. 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      6. 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      7. 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      8. 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      9. 99-master-ssh 3.2.0 31d
      10. 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d
      11. 99-worker-ssh 3.2.0 31d
      12. rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m
      13. rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
      14. rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
      15. rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
      16. rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d
      17. rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
      18. rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
      19. rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
      20. rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d
      21. rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h
      22. rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d
      23. rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d
      24. rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d

      You should see a new machine config, with the rendered-infra-* prefix.

    4. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.

      After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.

      1. Create a machine config:

        1. $ cat infra.mc.yaml

        Example output

        1. apiVersion: machineconfiguration.openshift.io/v1
        2. kind: MachineConfig
        3. metadata:
        4. name: 51-infra
        5. labels:
        6. machineconfiguration.openshift.io/role: infra (1)
        7. spec:
        8. config:
        9. ignition:
        10. version: 3.2.0
        11. storage:
        12. files:
        13. - path: /etc/infratest
        14. mode: 0644
        15. contents:
        16. source: data:,infra
        1Add the custom role label, such as infra, that matches the machineConfigSelector of your custom machine config pool so that the MCO applies this machine config to nodes in that pool.
      2. Apply the machine config to the infra-labeled nodes:

        1. $ oc create -f infra.mc.yaml
    5. Confirm that your new machine config pool is available:

      1. $ oc get mcp

      Example output

      1. NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
      2. infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s
      3. master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m
      4. worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m

      In this example, a worker node was changed to an infra node.
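    While the infra pool rolls out its new rendered configuration, the affected nodes are updated and rebooted one at a time. If you want to follow the progress, you can watch the pool with the standard --watch flag, for example:

    1. $ oc get mcp infra --watch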

    Assigning machine set resources to infrastructure nodes

    After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.

    However, because an infra node also has the worker role assigned, user workloads can be inadvertently scheduled on it. To avoid this, apply a taint to the infra node and add tolerations to the pods that you want to run on it.
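    For example, before you add the taint, you can list the nodes that currently carry the infra label:

    1. $ oc get nodes -l node-role.kubernetes.io/infra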

    Binding infrastructure node workloads using taints and tolerations

    If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.

    It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage the nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that the node does not count toward the total number of subscriptions.
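    As a quick check, you can display the label that the default worker pool selects. This assumes the default pool configuration and uses a standard jsonpath query:

    1. $ oc get mcp worker -o jsonpath='{.spec.nodeSelector}{"\n"}'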

    Prerequisites

    • Configure additional MachineSet objects in your OKD cluster.
    Procedure

    1. Add a taint to the infra node to prevent scheduling user workloads on it:

      1. Determine if the node has the taint:

        1. $ oc describe nodes <node_name>

        Example output

        1. oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
        2. Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
        3. Roles: worker
        4. ...
        5. Taints: node-role.kubernetes.io/infra:NoSchedule
        6. ...

        This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.

      2. If the node does not already have a taint, add one to prevent user workloads from being scheduled on it:

        1. $ oc adm taint nodes <node_name> <key>=<value>:<effect>

        For example:

        1. $ oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute

        You can alternatively apply the following YAML to add the taint:

        1. kind: Node
        2. apiVersion: v1
        3. metadata:
        4. name: <node_name>
        5. labels:
        6. spec:
        7. taints:
        8. - key: node-role.kubernetes.io/infra
        9. effect: NoExecute
        10. value: reserved

        This example places a taint on node1 that has the key node-role.kubernetes.io/infra and the taint effect NoExecute, which matches the preceding command. Nodes with the NoExecute effect schedule only pods that tolerate the taint and evict existing pods that do not tolerate it.

        If a descheduler is used, pods violating node taints could be evicted from the cluster.

    2. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:

      1. tolerations:
      2. - effect: NoExecute (1)
      3. key: node-role.kubernetes.io/infra (2)
      4. operator: Equal (3)
      5. value: reserved (4)
      1Specify the effect that you added to the node.
      2Specify the key that you added to the node.
      3Specify the Equal Operator to require the taint on the node to match both the key and the value in this toleration. The Kubernetes API rejects a toleration that combines the Exists Operator with a value.
      4Specify the value of the key-value pair taint that you added to the node.

      This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.

      Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.

    3. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
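    The following is a minimal sketch of such a pod. The pod name, container name, and image are hypothetical examples, not required values; the nodeSelector and tolerations fields are the parts that matter:

    1. apiVersion: v1
    2. kind: Pod
    3. metadata:
    4.   name: infra-toleration-test
    5. spec:
    6.   nodeSelector:
    7.     node-role.kubernetes.io/infra: ""
    8.   tolerations:
    9.   - effect: NoExecute
    10.     key: node-role.kubernetes.io/infra
    11.     operator: Equal
    12.     value: reserved
    13.   containers:
    14.   - name: test
    15.     image: registry.access.redhat.com/ubi9/ubi-minimal  # any small image that your cluster can pull
    16.     command: ["sleep", "3600"]

    If a pod like this stays in the Pending state, it usually means that no node carries both the infra label and a taint that this toleration matches.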

    Moving resources to infrastructure machine sets

    Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown:

    1. spec:
    2. nodePlacement: (1)
    3. nodeSelector:
    4. matchLabels:
    5. node-role.kubernetes.io/infra: ""
    6. tolerations:
    7. - effect: NoSchedule
    8. key: node-role.kubernetes.io/infra
    9. value: reserved
    10. - effect: NoExecute
    11. key: node-role.kubernetes.io/infra
    12. value: reserved
    1Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.

    Applying a specific node selector to all infrastructure components causes OKD to schedule those workloads on nodes with that label.
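    After you move components, you can list the pods that are running on a given infra node; <node_name> is a placeholder for one of your infra nodes:

    1. $ oc get pods --all-namespaces -o wide --field-selector spec.nodeName=<node_name>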

    Moving the router

    You can deploy the router pod to a different compute machine set. By default, the pod is deployed to a worker node.

    Prerequisites

    • Configure additional compute machine sets in your OKD cluster.

    Procedure

    1. View the IngressController custom resource for the router Operator:

      1. $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

      Review the output to check the current spec.nodePlacement setting before you change it.

    2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

      1. $ oc edit ingresscontroller default -n openshift-ingress-operator
      1. spec:
      2. nodePlacement:
      3. nodeSelector: (1)
      4. matchLabels:
      5. node-role.kubernetes.io/infra: ""
      6. tolerations:
      7. - effect: NoSchedule
      8. key: node-role.kubernetes.io/infra
      9. value: reserved
      10. - effect: NoExecute
      11. key: node-role.kubernetes.io/infra
      12. value: reserved
      1Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
    3. Confirm that the router pod is running on the infra node.

      1. View the list of router pods and note the node name of the running pod:

        1. $ oc get pod -n openshift-ingress -o wide

        Example output

        1. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
        2. router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none>
        3. router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>

        In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

      2. View the node status of the running pod:

        1. $ oc get node <node_name> (1)
        1Specify the <node_name> that you obtained from the pod list.

        Example output

        1. NAME STATUS ROLES AGE VERSION
        2. ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.26.0

        Because the role list includes infra, the pod is running on the correct node.
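    As an alternative to oc edit, you can apply the same node placement non-interactively with a merge patch. The following is a sketch; adjust the tolerations to match the taint that you applied:

    1. $ oc patch ingresscontroller default -n openshift-ingress-operator --type=merge -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},{"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}}'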

    Moving the default registry

    You configure the registry Operator to deploy its pods to different nodes.

    Prerequisites

    • Configure additional compute machine sets in your OKD cluster.

    Procedure

    1. View the config/instance object:

      1. $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

      Example output

      1. apiVersion: imageregistry.operator.openshift.io/v1
      2. kind: Config
      3. metadata:
      4. creationTimestamp: 2019-02-05T13:52:05Z
      5. finalizers:
      6. - imageregistry.operator.openshift.io/finalizer
      7. generation: 1
      8. name: cluster
      9. resourceVersion: "56174"
      10. selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
      11. uid: 36fd3724-294d-11e9-a524-12ffeee2931b
      12. spec:
      13. httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
      14. logging: 2
      15. managementState: Managed
      16. proxy: {}
      17. replicas: 1
      18. requests:
      19. read: {}
      20. write: {}
      21. storage:
      22. s3:
      23. bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
      24. region: us-east-1
      25. status:
      26. ...
    2. Edit the config/instance object:

      1. $ oc edit configs.imageregistry.operator.openshift.io/cluster
      1. spec:
      2. affinity:
      3. podAntiAffinity:
      4. preferredDuringSchedulingIgnoredDuringExecution:
      5. - podAffinityTerm:
      6. namespaces:
      7. - openshift-image-registry
      8. topologyKey: kubernetes.io/hostname
      9. weight: 100
      10. logLevel: Normal
      11. managementState: Managed
      12. nodeSelector: (1)
      13. node-role.kubernetes.io/infra: ""
      14. tolerations:
      15. - effect: NoSchedule
      16. key: node-role.kubernetes.io/infra
      17. value: reserved
      18. - effect: NoExecute
      19. key: node-role.kubernetes.io/infra
      20. value: reserved
      1Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
    3. Verify the registry pod has been moved to the infrastructure node.

      1. Run the following command to identify the node where the registry pod is located:

        1. $ oc get pods -o wide -n openshift-image-registry
      2. Confirm the node has the label you specified:

        1. $ oc describe node <node_name>

        Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.
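    Optionally, you can wait for the registry rollout to finish before you check the pod placement. This assumes the default deployment name, image-registry:

    1. $ oc rollout status deployment/image-registry -n openshift-image-registry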

    Moving the monitoring solution

    The monitoring stack includes multiple components, including Prometheus, Thanos Querier, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map.
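    If the cluster-monitoring-config config map does not exist in your cluster, the oc edit command in the following procedure fails. In that case, you can create a minimal config map first and then extend it as shown in the procedure. The file name cluster-monitoring-config.yaml is only an example:

    1. apiVersion: v1
    2. kind: ConfigMap
    3. metadata:
    4.   name: cluster-monitoring-config
    5.   namespace: openshift-monitoring
    6. data:
    7.   config.yaml: |
    8.     alertmanagerMain:
    9.       nodeSelector:
    10.         node-role.kubernetes.io/infra: ""

    Apply the file:

    1. $ oc apply -f cluster-monitoring-config.yaml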

    Procedure

    1. Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label:

      1. $ oc edit configmap cluster-monitoring-config -n openshift-monitoring
      1. apiVersion: v1
      2. kind: ConfigMap
      3. metadata:
      4. name: cluster-monitoring-config
      5. namespace: openshift-monitoring
      6. data:
      7. config.yaml: |+
      8. alertmanagerMain:
      9. nodeSelector: (1)
      10. node-role.kubernetes.io/infra: ""
      11. tolerations:
      12. - key: node-role.kubernetes.io/infra
      13. value: reserved
      14. effect: NoSchedule
      15. - key: node-role.kubernetes.io/infra
      16. value: reserved
      17. effect: NoExecute
      18. prometheusK8s:
      19. nodeSelector:
      20. node-role.kubernetes.io/infra: ""
      21. tolerations:
      22. - key: node-role.kubernetes.io/infra
      23. value: reserved
      24. effect: NoSchedule
      25. - key: node-role.kubernetes.io/infra
      26. value: reserved
      27. effect: NoExecute
      28. prometheusOperator:
      29. nodeSelector:
      30. node-role.kubernetes.io/infra: ""
      31. tolerations:
      32. - key: node-role.kubernetes.io/infra
      33. value: reserved
      34. effect: NoSchedule
      35. - key: node-role.kubernetes.io/infra
      36. value: reserved
      37. effect: NoExecute
      38. k8sPrometheusAdapter:
      39. nodeSelector:
      40. node-role.kubernetes.io/infra: ""
      41. tolerations:
      42. - key: node-role.kubernetes.io/infra
      43. value: reserved
      44. effect: NoSchedule
      45. - key: node-role.kubernetes.io/infra
      46. value: reserved
      47. effect: NoExecute
      48. kubeStateMetrics:
      49. nodeSelector:
      50. node-role.kubernetes.io/infra: ""
      51. tolerations:
      52. - key: node-role.kubernetes.io/infra
      53. value: reserved
      54. effect: NoSchedule
      55. - key: node-role.kubernetes.io/infra
      56. value: reserved
      57. effect: NoExecute
      58. telemeterClient:
      59. nodeSelector:
      60. node-role.kubernetes.io/infra: ""
      61. tolerations:
      62. - key: node-role.kubernetes.io/infra
      63. value: reserved
      64. effect: NoSchedule
      65. - key: node-role.kubernetes.io/infra
      66. value: reserved
      67. effect: NoExecute
      68. openshiftStateMetrics:
      69. nodeSelector:
      70. node-role.kubernetes.io/infra: ""
      71. tolerations:
      72. - key: node-role.kubernetes.io/infra
      73. value: reserved
      74. effect: NoSchedule
      75. - key: node-role.kubernetes.io/infra
      76. value: reserved
      77. effect: NoExecute
      78. thanosQuerier:
      79. nodeSelector:
      80. node-role.kubernetes.io/infra: ""
      81. tolerations:
      82. - key: node-role.kubernetes.io/infra
      83. value: reserved
      84. effect: NoSchedule
      85. - key: node-role.kubernetes.io/infra
      86. value: reserved
      87. effect: NoExecute
      1Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration.
    2. Watch the monitoring pods move to the new machines:

      1. $ watch 'oc get pod -n openshift-monitoring -o wide'
    3. If a component has not moved to the infra node, delete the pod with this component:

      1. $ oc delete pod -n openshift-monitoring <pod>

      The component from the deleted pod is re-created on the infra node.

    Moving OpenShift Logging resources

    You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

    For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

    Prerequisites

    • The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default.

    Procedure

    1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

      1. $ oc edit ClusterLogging instance
      1. apiVersion: logging.openshift.io/v1
      2. kind: ClusterLogging
      3. ...
      4. spec:
      5. collection:
      6. logs:
      7. fluentd:
      8. resources: null
      9. type: fluentd
      10. logStore:
      11. elasticsearch:
      12. nodeCount: 3
      13. nodeSelector: (1)
      14. node-role.kubernetes.io/infra: ''
      15. tolerations:
      16. - effect: NoSchedule
      17. key: node-role.kubernetes.io/infra
      18. value: reserved
      19. - effect: NoExecute
      20. key: node-role.kubernetes.io/infra
      21. value: reserved
      22. redundancyPolicy: SingleRedundancy
      23. resources:
      24. limits:
      25. cpu: 500m
      26. memory: 16Gi
      27. requests:
      28. cpu: 500m
      29. memory: 16Gi
      30. storage: {}
      31. type: elasticsearch
      32. managementState: Managed
      33. visualization:
      34. kibana:
      35. nodeSelector: (1)
      36. node-role.kubernetes.io/infra: ''
      37. tolerations:
      38. - effect: NoSchedule
      39. key: node-role.kubernetes.io/infra
      40. value: reserved
      41. - effect: NoExecute
      42. key: node-role.kubernetes.io/infra
      43. value: reserved
      44. proxy:
      45. resources: null
      46. replicas: 1
      47. resources: null
      48. type: kibana
      49. ...
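    After you save the CR, you can watch the logging pods reschedule onto the infra nodes. The -w flag watches for changes:

    1. $ oc get pods -n openshift-logging -o wide -w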

    Verification

    To verify that a component has moved, you can use the oc get pod -o wide command.

    For example:

    • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

      1. $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

      Example output

      1. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
      2. kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>
    • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

      1. $ oc get nodes

      Example output

      1. NAME STATUS ROLES AGE VERSION
      2. ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.26.0
      3. ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.26.0
      4. ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.26.0
      5. ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.26.0
      6. ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.26.0
      7. ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.26.0
      8. ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.26.0

      Note that the node has a node-role.kubernetes.io/infra: '' label:

      1. $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

      Example output

      1. kind: Node
      2. apiVersion: v1
      3. metadata:
      4. name: ip-10-0-139-48.us-east-2.compute.internal
      5. selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      6. uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      7. resourceVersion: '39083'
      8. creationTimestamp: '2020-04-13T19:07:55Z'
      9. labels:
      10. node-role.kubernetes.io/infra: ''
      11. ...
    • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

      1. apiVersion: logging.openshift.io/v1
      2. kind: ClusterLogging
      3. ...
      4. spec:
      5. ...
      6. visualization:
      7. kibana:
      8. nodeSelector: (1)
      9. node-role.kubernetes.io/infra: ''
      10. proxy:
      11. resources: null
      12. replicas: 1
      13. resources: null
      14. type: kibana
      1Add a node selector to match the label in the node specification.
    • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

      1. $ oc get pods

      Example output

      1. NAME READY STATUS RESTARTS AGE
      2. cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m
      3. elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m
      4. elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m
      5. elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m
      6. fluentd-42dzz 1/1 Running 0 28m
      7. fluentd-d74rq 1/1 Running 0 28m
      8. fluentd-m5vr9 1/1 Running 0 28m
      9. fluentd-nkxl7 1/1 Running 0 28m
      10. fluentd-pdvqb 1/1 Running 0 28m
      11. fluentd-tflh6 1/1 Running 0 28m
      12. kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s
      13. kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s
    • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

      1. $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

      Example output

      1. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    • After a few moments, the original Kibana pod is removed.

      1. $ oc get pods

      Example output