Control plane machine set configuration
The base of the CR is structured the same way for all platforms.
Sample ControlPlaneMachineSet CR YAML file
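The base CR sample is not reproduced here; the following sketch shows the shared structure under stated assumptions: the <platform_provider_spec> and <platform_failure_domains> placeholders stand in for the provider-specific sections described below, and the remaining field values reflect a typical installer-provisioned cluster rather than an authoritative sample.

```yaml
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  replicas: 3
  state: Active
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster_id>
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      failureDomains:
        platform: <platform>
        <platform_failure_domains>
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <cluster_id>
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
      spec:
        <platform_provider_spec>
```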
Additional resources
The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set resources are provider-specific. Refer to the example YAML for your cluster:
Sample YAML snippets for configuring Amazon Web Services clusters
Sample YAML snippets for configuring Google Cloud Platform clusters
Sample YAML snippets for configuring Microsoft Azure clusters
Some sections of the control plane machine set CR are provider-specific. The example YAML in this section shows provider specification and failure domain configurations for an Amazon Web Services (AWS) cluster.
Sample AWS provider specification
When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.
In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Sample AWS providerSpec values
providerSpec:
  value:
    ami:
      id: ami-<ami_id_string> (1)
    apiVersion: machine.openshift.io/v1beta1
    blockDevices:
    - ebs: (2)
        encrypted: true
        iops: 0
        kmsKey:
          arn: ""
        volumeSize: 120
        volumeType: gp3
    credentialsSecret:
      name: aws-cloud-credentials (3)
    deviceIndex: 0
    iamInstanceProfile:
      id: <cluster_id>-master-profile (4)
    instanceType: m6i.xlarge (5)
    kind: AWSMachineProviderConfig (6)
    loadBalancers: (7)
    - name: <cluster_id>-int
      type: network
    - name: <cluster_id>-ext
      type: network
    metadata:
      creationTimestamp: null
    placement: (8)
      region: <region> (9)
    securityGroups:
    - filters:
      - name: tag:Name
        values:
        - <cluster_id>-master-sg (10)
    subnet: {} (11)
    userDataSecret:
      name: master-user-data (12)
The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.
When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.
Sample AWS failure domain values
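The original sample for this section is not reproduced here; the following sketch shows a failureDomains stanza consistent with the callout descriptions below. The <aws_zone_a> and <aws_zone_b> zone names, the subnet names, and the exact placement of callouts (5) and (6) are assumptions, not authoritative values.

```yaml
failureDomains:
  aws:
  - placement:
      availabilityZone: <aws_zone_a> (1)
    subnet: (2)
      filters:
      - name: tag:Name
        values:
        - <cluster_id>-private-<aws_zone_a> (3)
      type: Filters (4)
  - placement:
      availabilityZone: <aws_zone_b>
    subnet: (5)
      filters:
      - name: tag:Name
        values:
        - <cluster_id>-private-<aws_zone_b> (6)
      type: Filters
  platform: AWS (7)
```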
1 | Specifies an AWS availability zone for the first failure domain. |
2 | Specifies a subnet configuration. In this example, the subnet type is Filters , so there is a filters stanza. |
3 | Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone. |
4 | Specifies the subnet type. The allowed values are ARN, Filters, and ID. The default value is Filters. |
5 | Specifies the subnet name for an additional failure domain, using the infrastructure ID and the AWS availability zone. |
6 | Specifies the cluster’s infrastructure ID and the AWS availability zone for the additional failure domain. |
7 | Specifies the cloud provider platform name. Do not change this value. |
Some sections of the control plane machine set CR are provider-specific. The example YAML in this section shows provider specification and failure domain configurations for a Google Cloud Platform (GCP) cluster.
Sample GCP provider specification
When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.
Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
Infrastructure ID
The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Image path
The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
$ oc -n openshift-machine-api \
-o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \
get ControlPlaneMachineSet/cluster
Sample GCP providerSpec values
providerSpec:
  value:
    apiVersion: machine.openshift.io/v1beta1
    canIPForward: false
    credentialsSecret:
      name: gcp-cloud-credentials (1)
    deletionProtection: false
    disks:
    - autoDelete: true
      boot: true
      image: <path_to_image> (2)
      labels: null
      sizeGb: 200
      type: pd-ssd
    kind: GCPMachineProviderSpec (3)
    machineType: e2-standard-4
    metadata:
      creationTimestamp: null
    metadataServiceOptions: {}
    networkInterfaces:
    - network: <cluster_id>-network
      subnetwork: <cluster_id>-master-subnet
    projectID: <project_name> (4)
    region: <region> (5)
    serviceAccounts:
    - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com
      scopes:
      - https://www.googleapis.com/auth/cloud-platform
    shieldedInstanceConfig: {}
    tags:
    - <cluster_id>-master
    targetPools:
    - <cluster_id>-api
    userDataSecret:
      name: master-user-data (6)
    zone: "" (7)
1 | Specifies the secret name for the cluster. Do not change this value. |
2 | Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use. |
3 | Specifies the cloud provider platform type. Do not change this value. |
4 | Specifies the name of the GCP project that you use for your cluster. |
5 | Specifies the GCP region for the cluster. |
6 | Specifies the control plane user data secret. Do not change this value. |
7 | This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. |
The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.
When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use.
Sample GCP failure domain values
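The original sample for this section is not reproduced here; the following sketch shows the shape a GCP failureDomains stanza takes when only zone names are specified. The <gcp_zone_a>, <gcp_zone_b>, and <gcp_zone_c> zone names are illustrative placeholders.

```yaml
failureDomains:
  gcp:
  - zone: <gcp_zone_a>
  - zone: <gcp_zone_b>
  - zone: <gcp_zone_c>
  platform: GCP
```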
Some sections of the control plane machine set CR are provider-specific. The example YAML in this section shows provider specification and failure domain configurations for a Microsoft Azure cluster.
Sample Azure provider specification
When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.
In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Sample Azure providerSpec values
providerSpec:
  value:
    acceleratedNetworking: true
    apiVersion: machine.openshift.io/v1beta1
    credentialsSecret:
      name: azure-cloud-credentials (1)
      namespace: openshift-machine-api
    diagnostics: {}
    image: (2)
      offer: ""
      publisher: ""
      resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 (3)
      sku: ""
      version: ""
    internalLoadBalancer: <cluster_id>-internal (4)
    kind: AzureMachineProviderSpec (5)
    location: <region> (6)
    managedIdentity: <cluster_id>-identity
    metadata:
      creationTimestamp: null
      name: <cluster_id>
    networkResourceGroup: <cluster_id>-rg
    osDisk: (7)
      diskSettings: {}
      diskSizeGB: 1024
      managedDisk:
        storageAccountType: Premium_LRS
    osType: Linux
    publicIP: false
    publicLoadBalancer: <cluster_id> (8)
    resourceGroup: <cluster_id>-rg
    subnet: <cluster_id>-master-subnet (9)
    userDataSecret:
      name: master-user-data (10)
    vmSize: Standard_D8s_v3
    vnet: <cluster_id>-vnet
    zone: "" (11)
1 | Specifies the secret name for the cluster. Do not change this value. |
2 | Specifies the image details for your control plane machine set. |
3 | Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. |
4 | Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs. |
5 | Specifies the cloud provider platform type. Do not change this value. |
6 | Specifies the region in which to place control plane machines. |
7 | Specifies the disk configuration for the control plane. |
8 | Specifies the public load balancer for the control plane. |
9 | Specifies the subnet for the control plane. |
10 | Specifies the control plane user data secret. Do not change this value. |
11 | This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. |
The control plane machine set concept of a failure domain is analogous to the existing Azure concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.
When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name.
Sample Azure failure domain values
failureDomains:
  azure: (1)
  - zone: "1"
  - zone: "2"
  - zone: "3"
  platform: Azure (2)
1 | Each instance of zone specifies an Azure availability zone for a failure domain. |
2 | Specifies the cloud provider platform name. Do not change this value. |
Some sections of the control plane machine set CR are provider-specific. The example YAML in this section shows a provider specification configuration for a VMware vSphere cluster.
Sample vSphere provider specification
Sample vSphere providerSpec values