Installing a cluster on AWS into a government or secret region

    AWS government and secret regions

    OKD supports deploying a cluster to AWS GovCloud (US) regions and the AWS C2S Secret Region. These regions are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.

    These regions do not have published Fedora CoreOS (FCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.

    The following AWS GovCloud partitions are supported:

    • us-gov-west-1

    • us-gov-east-1

    The following AWS Secret Region partition is supported:

    • us-iso-east-1

    The AWS government or secret region, and accompanying custom AMI, must be manually configured in the install-config.yaml file since FCOS AMIs are not provided by Red Hat for those regions.

    If you are deploying to the C2S Secret Region, you must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file.
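
    As a minimal sketch of those two options (the bundle path and profile name here are placeholders, not values from this document), you can export the bundle location before you run the installation program:

    1. $ export AWS_CA_BUNDLE=/path/to/ca-bundle.pem

    Or you can set it in the AWS config file, typically ~/.aws/config:

    1. [profile govcloud]
    2. ca_bundle = /path/to/ca-bundle.pem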

    Private clusters

    You can deploy a private OKD cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.

    Public zones are not supported in Route 53 in AWS GovCloud or Secret Regions. Therefore, clusters must be private if they are deployed to an AWS government or secret region.

    By default, OKD is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.

    To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.

    Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, to the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.

    To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.

    The cluster still requires access to the internet to access the AWS APIs.

    The following items are not required or created when you install a private cluster:

    • Public subnets

    • Public load balancers, which support public ingress

    • A public Route 53 zone that matches the baseDomain for the cluster

    The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.

    Limitations

    The ability to add public functionality to a private cluster is limited.

    • You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).

    • If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
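
    As a sketch of these additional actions with the AWS CLI (the subnet ID, security group ID, and infrastructure ID below are placeholders), you can tag a public subnet for Service load balancers and open the Kubernetes API port on a control plane security group:

    1. $ aws ec2 create-tags --resources subnet-0123456789abcdef0 \
    2.     --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared
    3. $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    4.     --protocol tcp --port 6443 --cidr 0.0.0.0/0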

    About using a custom VPC

    In OKD 4.8, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OKD into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.

    Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster into.

    Requirements for using your VPC

    The installation program no longer creates the following components:

    • Internet gateways

    • NAT gateways

    • Subnets

    • Route tables

    • VPCs

    • VPC DHCP options

    • VPC endpoints

    If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster.

    Your VPC must meet the following characteristics:

    • The VPC’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.

    • The VPC must not use the kubernetes.io/cluster/.*: owned tag.

    • You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster’s internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone by using the platform.aws.hostedZone field in the install-config.yaml file.
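
    For example, a minimal sketch of enabling both DNS attributes with the AWS CLI (the VPC ID is a placeholder) looks like the following; each attribute must be modified in a separate call:

    1. $ aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value":true}'
    2. $ aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value":true}'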

    If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses.

    The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. Review the current Tag Restrictions in the AWS documentation to ensure that the installation program can add a tag to each subnet that you specify.

    If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets that the cluster uses; an example command follows this list. The endpoints should be named as follows:

    • ec2.<region>.amazonaws.com

    • elasticloadbalancing.<region>.amazonaws.com

    • s3.<region>.amazonaws.com
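
    As a sketch, you might create the EC2 interface endpoint with a command like the following; the VPC ID, subnet ID, security group ID, and region are placeholders. The ELB endpoint is created the same way with its service name, and the S3 endpoint is typically created as a Gateway endpoint that is attached to the route tables instead.

    1. $ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 \
    2.     --vpc-endpoint-type Interface \
    3.     --service-name com.amazonaws.us-gov-west-1.ec2 \
    4.     --subnet-ids subnet-0123456789abcdef0 \
    5.     --security-group-ids sg-0123456789abcdef0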

    Required VPC components

    You must provide a suitable VPC and subnets that allow communication to your machines.

    Component | AWS type | Description

    VPC

    • AWS::EC2::VPC

    • AWS::EC2::VPCEndpoint

    You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3.

    Public subnets

    • AWS::EC2::Subnet

    • AWS::EC2::SubnetNetworkAclAssociation

    Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules.

    Internet gateway

    • AWS::EC2::InternetGateway

    • AWS::EC2::VPCGatewayAttachment

    • AWS::EC2::RouteTable

    • AWS::EC2::Route

    • AWS::EC2::SubnetRouteTableAssociation

    • AWS::EC2::NatGateway

    • AWS::EC2::EIP

    You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios.

    Network access control

    • AWS::EC2::NetworkAcl

    • AWS::EC2::NetworkAclEntry

    You must allow the VPC to access the following ports:

    Port            Reason
    80              Inbound HTTP traffic
    443             Inbound HTTPS traffic
    22              Inbound SSH traffic
    1024 - 65535    Inbound ephemeral traffic
    0 - 65535       Outbound ephemeral traffic

    Private subnets

    • AWS::EC2::Subnet

    • AWS::EC2::RouteTable

    • AWS::EC2::SubnetRouteTableAssociation

    Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them.

    VPC validation

    To ensure that the subnets that you provide are suitable, the installation program confirms the following data:

    • All the subnets that you specify exist.

    • You provide private subnets.

    • The subnet CIDRs belong to the machine CIDR that you specified.

    • You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.

    • You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.

    If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OKD cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.

    Division of permissions

    Starting with OKD 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.

    The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.

    If you deploy OKD to an existing network, the isolation of cluster services is reduced in the following ways:

    • You can install multiple OKD clusters in the same VPC.

    • ICMP ingress is allowed from the entire network.

    • TCP 22 ingress (SSH) is allowed to the entire network.

    • Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.

    During an OKD installation, you can provide an SSH public key to the installation program. The key is passed to the Fedora CoreOS (FCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

    After the key is passed to the nodes, you can use the key pair to SSH in to the FCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

    If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

    Do not skip this procedure in production environments, where disaster recovery and debugging is required.

    You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

    On clusters running Fedora CoreOS (FCOS), the SSH keys specified in the Ignition config files are written to the /home/core/.ssh/authorized_keys.d/core file. However, the Machine Config Operator manages SSH keys in the /home/core/.ssh/authorized_keys file and configures sshd to ignore the /home/core/.ssh/authorized_keys.d/core file. As a result, newly provisioned OKD nodes are not accessible using SSH until the Machine Config Operator reconciles the machine configs with the authorized_keys file. After you can access the nodes using SSH, you can delete the /home/core/.ssh/authorized_keys.d/core file.

    Procedure

    1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

      1. $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> (1)
      1Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

      If you plan to install an OKD cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

    2. View the public SSH key:

      1. $ cat <path>/<file_name>.pub

      For example, run the following to view the ~/.ssh/id_rsa.pub public key:

      1. $ cat ~/.ssh/id_rsa.pub
    3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

      On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

      1. If the ssh-agent process is not already running for your local user, start it as a background task:

        1. $ eval "$(ssh-agent -s)"

        Example output

        1. Agent pid 31874

        If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

    4. Add your SSH private key to the ssh-agent:

      1. $ ssh-add <path>/<file_name> (1)
      1Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

      Example output

      1. Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

    Next steps

    • When you install OKD, provide the SSH public key to the installation program.

    Obtaining the installation program

    Before you install OKD, download the installation file on a local computer.

    Prerequisites

    • You have a computer that runs Linux or macOS, with 500 MB of local disk space.

    Procedure

    1. Download the installation program from https://github.com/openshift/okd/releases.

      The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

      Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OKD uninstallation procedures for your specific cloud provider.

    2. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

      1. $ tar xvf openshift-install-linux.tar.gz
    3. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components.

      Using a pull secret from the Red Hat OpenShift Cluster Manager site is not required. You can use a pull secret for another private registry. Or, if you do not need the cluster to pull images from a private registry, you can use {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}} as the pull secret when prompted during the installation.

      If you do not use the pull secret from the Red Hat OpenShift Cluster Manager site:

      • Red Hat Operators are not available.

      • The Telemetry and Insights operators do not send data to Red Hat.

      • Content from the registry, such as image streams and Operators, is not available.

    Manually creating the installation configuration file

    When installing OKD on Amazon Web Services (AWS) into a region requiring a custom Fedora CoreOS (FCOS) AMI, you must manually generate your installation configuration file.

    Prerequisites

    • You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.

    • You have obtained the OKD installation program and the pull secret for your cluster.

    Procedure

    1. Create an installation directory to store your required installation assets in:

      1. $ mkdir <installation_directory>

      You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OKD version.

    2. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

      You must name this configuration file install-config.yaml.

      For some platform types, you can alternatively run ./openshift-install create install-config --dir=<installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

    3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

      The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.

    Installation configuration parameters

    Before you deploy an OKD cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.

    The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.

    Required configuration parameters

    Required installation configuration parameters are described in the following table:

    Table 1. Required parameters
    Parameter | Description | Values

    apiVersion

    The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.

    String

    baseDomain

    The base domain of your cloud provider. The base domain is used to create routes to your OKD cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.

    A fully-qualified domain or subdomain name, such as example.com.

    metadata

    Kubernetes resource ObjectMeta, from which only the name parameter is consumed.

    Object

    metadata.name

    The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.

    String of lowercase letters, hyphens (-), and periods (.), such as dev.

    platform

    The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere, or {}. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.

    Object

    Network configuration parameters

    You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.

    Only IPv4 addresses are supported.

    Table 2. Network parameters
    Parameter | Description | Values

    networking

    The configuration for the cluster network.

    Object

    You cannot modify parameters specified by the networking object after installation.

    networking.networkType

    The cluster network provider Container Network Interface (CNI) plug-in to install.

    Either OpenShiftSDN or OVNKubernetes. The default value is OVNKubernetes.

    networking.clusterNetwork

    The IP address blocks for pods.

    The default value is 10.128.0.0/14 with a host prefix of /23.

    If you specify multiple IP address blocks, the blocks must not overlap.

    An array of objects. For example:

    1. networking:
    2.   clusterNetwork:
    3.   - cidr: 10.128.0.0/14
    4.     hostPrefix: 23

    networking.clusterNetwork.cidr

    Required if you use networking.clusterNetwork. An IP address block.

    An IPv4 network.

    An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32.

    networking.clusterNetwork.hostPrefix

    The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses.

    A subnet prefix.

    The default value is 23.

    networking.serviceNetwork

    The IP address block for services. The default value is 172.30.0.0/16.

    The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network.

    An array with an IP address block in CIDR format. For example:

    1. networking:
    2.   serviceNetwork:
    3.   - 172.30.0.0/16

    networking.machineNetwork

    The IP address blocks for machines.

    If you specify multiple IP address blocks, the blocks must not overlap.

    An array of objects. For example:

    1. networking:
    2.   machineNetwork:
    3.   - cidr: 10.0.0.0/16

    networking.machineNetwork.cidr

    Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24.

    An IP network block in CIDR notation.

    For example, 10.0.0.0/16.

    Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

    Optional configuration parameters

    Optional installation configuration parameters are described in the following table:

    Table 3. Optional parameters
    Parameter | Description | Values

    additionalTrustBundle

    A PEM-encoded X.509 certificate bundle that is added to the nodes’ trusted certificate store. This trust bundle may also be used when a proxy has been configured.

    compute

    The configuration for the machines that comprise the compute nodes.

    Array of MachinePool objects. For details, see the following “Machine-pool” table.

    compute.architecture

    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

    String

    compute.hyperthreading

    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores.

    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Enabled or Disabled

    compute.name

    Required if you use compute. The name of the machine pool.

    worker

    compute.platform

    Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.

    aws, azure, gcp, openstack, ovirt, vsphere, or {}

    compute.replicas

    The number of compute machines, which are also known as worker machines, to provision.

    A positive integer greater than or equal to 2. The default value is 3.

    controlPlane

    The configuration for the machines that comprise the control plane.

    Array of MachinePool objects. For details, see the following “Machine-pool” table.

    controlPlane.architecture

    Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).

    String

    controlPlane.hyperthreading

    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores.

    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

    Enabled or Disabled

    controlPlane.name

    Required if you use controlPlane. The name of the machine pool.

    master

    controlPlane.platform

    Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.

    aws, azure, gcp, openstack, ovirt, vsphere, or {}

    controlPlane.replicas

    The number of control plane machines to provision.

    The only supported value is 3, which is the default value.

    credentialsMode

    The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported.

    Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.

    Mint, Passthrough, Manual, or an empty string (“”).

    imageContentSources

    Sources and repositories for the release-image content.

    Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

    imageContentSources.source

    Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.

    String

    imageContentSources.mirrors

    Specify one or more repositories that may also contain the same images.

    Array of strings

    publish

    How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.

    Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

    sshKey

    The SSH key or keys to authenticate access to your cluster machines.

    For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    One or more keys. For example:

    1. sshKey:
    2.   <key1>
    3.   <key2>
    4.   <key3>

    Optional AWS configuration parameters

    Optional AWS configuration parameters are described in the following table:

    Table 4. Optional AWS parameters
    Parameter | Description | Values

    compute.platform.aws.amiID

    The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom FCOS AMI.

    Any published or custom FCOS AMI that belongs to the set AWS region.

    compute.platform.aws.iamRole

    A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role.

    The name of a valid AWS IAM role.

    compute.platform.aws.rootVolume.iops

    The Input/Output Operations Per Second (IOPS) that is reserved for the root volume.

    Integer, for example 4000.

    compute.platform.aws.rootVolume.size

    The size in GiB of the root volume.

    Integer, for example 500.

    compute.platform.aws.rootVolume.type

    The type of the root volume.

    Valid AWS EBS volume type, such as io1.

    compute.platform.aws.type

    The EC2 instance type for the compute machines.

    Valid AWS instance type, such as m4.2xlarge. See the Instance types for machines table that follows.

    compute.platform.aws.zones

    The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone.

    A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

    compute.aws.region

    The AWS region that the installation program creates compute resources in.

    Any valid AWS region, such as us-east-1.

    controlPlane.platform.aws.amiID

    The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom FCOS AMI.

    Any published or custom FCOS AMI that belongs to the set AWS region.

    controlPlane.platform.aws.iamRole

    A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role.

    The name of a valid AWS IAM role.

    controlPlane.platform.aws.type

    The EC2 instance type for the control plane machines.

    Valid AWS instance type, such as m5.xlarge. See the Instance types for machines table that follows.

    controlPlane.platform.aws.zones

    The availability zones where the installation program creates machines for the control plane machine pool.

    A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence.

    controlPlane.aws.region

    The AWS region that the installation program creates control plane resources in.

    Valid AWS region, such as us-east-1.

    platform.aws.amiID

    The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom FCOS AMI.

    Any published or custom FCOS AMI that belongs to the set AWS region.

    platform.aws.hostedZone

    An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone.

    String, for example Z3URY6TWQ91KVV.

    platform.aws.serviceEndpoints.name

    The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services.

    Valid name.

    platform.aws.serviceEndpoints.url

    The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate.

    Valid AWS service endpoint URL.

    platform.aws.userTags

    A map of keys and values that the installation program adds as tags to all resources that it creates.

    Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation.

    platform.aws.subnets

    If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone.

    Valid subnet IDs.

    Supported AWS machine types

    The following Amazon Web Services (AWS) instance types are supported with OKD.

    Instance types for machines

    Instance type | Bootstrap | Control plane | Compute

    i3.large

    x

    m4.large

    x

    m4.xlarge

    x

    x

    m4.2xlarge

    x

    x

    m4.4xlarge

    x

    x

    m4.10xlarge

    x

    x

    m4.16xlarge

    x

    x

    m5.large

    x

    m5.xlarge

    x

    x

    m5.2xlarge

    x

    x

    m5.4xlarge

    x

    x

    m5.8xlarge

    x

    x

    m5.12xlarge

    x

    x

    m5.16xlarge

    x

    x

    m5a.large

    x

    m5a.xlarge

    x

    x

    m5a.2xlarge

    x

    x

    m5a.4xlarge

    x

    x

    m5a.8xlarge

    x

    x

    m5a.10xlarge

    x

    x

    m5a.16xlarge

    x

    x

    c4.large

    x

    c4.xlarge

    x

    c4.2xlarge

    x

    x

    c4.4xlarge

    x

    x

    c4.8xlarge

    x

    x

    c5.large

    x

    c5.xlarge

    x

    c5.2xlarge

    x

    x

    c5.4xlarge

    x

    x

    c5.9xlarge

    x

    x

    c5.12xlarge

    x

    x

    x

    x

    c5.24xlarge

    x

    x

    c5a.large

    x

    c5a.xlarge

    x

    c5a.2xlarge

    x

    x

    c5a.4xlarge

    x

    x

    c5a.8xlarge

    x

    x

    c5a.12xlarge

    x

    x

    c5a.16xlarge

    x

    x

    c5a.24xlarge

    x

    x

    r4.large

    x

    r4.xlarge

    x

    x

    r4.2xlarge

    x

    x

    r4.4xlarge

    x

    x

    r4.8xlarge

    x

    x

    r4.16xlarge

    x

    x

    r5.large

    x

    r5.xlarge

    x

    x

    r5.2xlarge

    x

    x

    r5.4xlarge

    x

    x

    r5.8xlarge

    x

    x

    r5.12xlarge

    x

    x

    r5.16xlarge

    x

    x

    r5.24xlarge

    x

    x

    r5a.large

    x

    r5a.xlarge

    x

    x

    r5a.2xlarge

    x

    x

    r5a.4xlarge

    x

    x

    r5a.8xlarge

    x

    x

    r5a.12xlarge

    x

    x

    r5a.16xlarge

    x

    x

    r5a.24xlarge

    x

    x

    t3.large

    x

    t3.xlarge

    x

    t3.2xlarge

    x

    t3a.large

    x

    t3a.xlarge

    x

    t3a.2xlarge

    x

    Sample customized install-config.yaml file for AWS

    You can customize the install-config.yaml file to specify more details about your OKD cluster’s platform or modify the values of the required parameters.

    This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
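
    Because the original sample is not reproduced here, the following is an illustrative sketch only, assembled to match the numbered callouts that follow. The region, zones, subnet IDs, AMI ID, endpoint URL, hosted zone ID, keys, pull secret, and certificate are placeholders, and the callout positions are approximate; generate and edit your own install-config.yaml file by using the installation program as described above.

    1. apiVersion: v1
    2. baseDomain: example.com (1)
    3. credentialsMode: Mint (2)
    4. controlPlane: (3) (4)
    5.   hyperthreading: Enabled (5)
    6.   name: master
    7.   platform:
    8.     aws:
    9.       zones:
    10.       - us-gov-west-1a
    11.       - us-gov-west-1b
    12.       rootVolume:
    13.         iops: 4000
    14.         size: 500
    15.         type: io1 (6)
    16.       type: m5.xlarge
    17.   replicas: 3
    18. compute: (3) (4)
    19. - hyperthreading: Enabled (5)
    20.   name: worker
    21.   platform:
    22.     aws:
    23.       rootVolume:
    24.         iops: 2000
    25.         size: 500
    26.         type: io1 (6)
    27.       type: c5.4xlarge
    28.       zones:
    29.       - us-gov-west-1c
    30.   replicas: 3
    31. metadata:
    32.   name: test-cluster (1)
    33. networking: (3)
    34.   clusterNetwork:
    35.   - cidr: 10.128.0.0/14
    36.     hostPrefix: 23
    37.   machineNetwork:
    38.   - cidr: 10.0.0.0/16
    39.   networkType: OVNKubernetes
    40.   serviceNetwork:
    41.   - 172.30.0.0/16
    42. platform:
    43.   aws:
    44.     region: us-gov-west-1 (1)
    45.     subnets: (7)
    46.     - subnet-1
    47.     - subnet-2
    48.     - subnet-3
    49.     amiID: ami-96c6f8f7 (8)
    50.     serviceEndpoints: (9)
    51.     - name: ec2
    52.       url: https://vpce-id.ec2.us-gov-west-1.vpce.amazonaws.com
    53.     hostedZone: Z3URY6TWQ91KVV (10)
    54. sshKey: ssh-ed25519 AAAA... (11)
    55. pullSecret: '{"auths": ...}' (1)
    56. publish: Internal (12)
    57. additionalTrustBundle: | (13)
    58.   -----BEGIN CERTIFICATE-----
    59.   <MY_TRUSTED_CA_CERT>
    60.   -----END CERTIFICATE-----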

    1Required.
    2Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    3If you do not provide these parameters and values, the installation program provides the default value.
    4The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OKD will support defining multiple compute pools during installation. Only one control plane pool is used.
    5Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines’ cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

    If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.

    6To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
    7If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
    8The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
    9The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
    10The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
    11You can optionally provide the sshKey value that you use to access the machines in your cluster.

    For production OKD clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    12How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
    13The custom CA certificate. This is required when deploying to the AWS C2S Secret Region because the AWS API requires a custom CA trust bundle.

    AWS regions without a published FCOS AMI

    You can deploy an OKD cluster to Amazon Web Services (AWS) regions without native support for a Fedora CoreOS (FCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government or secret region. AWS government and secret regions are supported by the AWS SDK.

    If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published FCOS AMIs.

    A region without native support for an FCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.

    Uploading a custom FCOS AMI in AWS

    If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Fedora CoreOS (FCOS) Amazon Machine Image (AMI) that belongs to that region.

    Prerequisites

    • You configured an AWS account.

    • You created an Amazon S3 bucket with the required IAM service role.

    • You uploaded your FCOS VMDK file to Amazon S3.

    • You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.

    Procedure

    1. Export your AWS profile as an environment variable:

      1. $ export AWS_PROFILE=<aws_profile> (1)
      1The AWS profile name that holds your AWS credentials, like govcloud.
    2. Export the region to associate with your custom AMI as an environment variable:

      1. $ export AWS_DEFAULT_REGION=<aws_region> (1)
      1The AWS region, like us-gov-west-1.
    3. Export the version of FCOS you uploaded to Amazon S3 as an environment variable:

      1. $ export RHCOS_VERSION=<version> (1)
      1The FCOS VMDK version, like 4.8.0.
    4. Export the Amazon S3 bucket name as an environment variable:

      1. $ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
    5. Create the containers.json file and define your FCOS VMDK file:

      1. $ cat <<EOF > containers.json
      2. {
      3.    "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
      4.    "Format": "vmdk",
      5.    "UserBucket": {
      6.       "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      7.       "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
      8.    }
      9. }
      10. EOF
    6. Import the FCOS disk as an Amazon EBS snapshot:

      1. $ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
      2. --description "<description>" \ (1)
      3. --disk-container "file://<file_path>/containers.json" (2)
      1The description of your FCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
      2The file path to the JSON file describing your FCOS disk. The JSON file should contain your Amazon S3 bucket name and key.
    7. Check the status of the image import:

      1. $ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}

      Example output

      1. {
      2.     "ImportSnapshotTasks": [
      3.         {
      4.             "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
      5.             "ImportTaskId": "import-snap-fh6i8uil",
      6.             "SnapshotTaskDetail": {
      7.                 "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
      8.                 "DiskImageSize": 819056640.0,
      9.                 "Format": "VMDK",
      10.                 "SnapshotId": "snap-06331325870076318",
      11.                 "Status": "completed",
      12.                 "UserBucket": {
      13.                     "S3Bucket": "external-images"
      14.                 }
      15.             }
      16.         }
      17.     ]
      18. }

      Copy the SnapshotId to register the image.
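
      As an optional convenience (not part of the original procedure), you can print only the snapshot IDs by adding a --query filter:

      1. $ aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION} \
      2.     --query 'ImportSnapshotTasks[].SnapshotTaskDetail.SnapshotId' --output text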

    8. Create a custom FCOS AMI from the FCOS snapshot:

      1. $ aws ec2 register-image \
      2. --region ${AWS_DEFAULT_REGION} \
      3. --architecture x86_64 \ (1)
      4. --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ (2)
      5. --ena-support \
      6. --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ (3)
      7. --virtualization-type hvm \
      8. --root-device-name '/dev/xvda' \
      9. --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' (4)
      1The FCOS VMDK architecture type, like x86_64, s390x, or ppc64le.
      2The Description from the imported snapshot.
      3The name of the FCOS AMI.
      4The SnapshotID from the imported snapshot.

    To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.

    Configuring the cluster-wide proxy during installation

    Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OKD cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.

    Prerequisites

    • You have an existing install-config.yaml file.

    • You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

      The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

      For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

    • If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.

    Procedure

    1. Edit your install-config.yaml file and add the proxy settings. For example:

      1. apiVersion: v1
      2. baseDomain: my.domain.com
      3. proxy:
      4.   httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
      5.   httpsProxy: https://<username>:<pswd>@<ip>:<port> (2)
      6.   noProxy: example.com (3)
      7. additionalTrustBundle: | (4)
      8.   -----BEGIN CERTIFICATE-----
      9.   <MY_TRUSTED_CA_CERT>
      10.   -----END CERTIFICATE-----
      11. ...
      1A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.
      2A proxy URL to use for creating HTTPS connections outside the cluster. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.
      3A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
      4If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Fedora CoreOS (FCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the FCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.

      The installation program does not support the proxy readinessEndpoints field.

    2. Save the file and reference it when installing OKD.

    The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

    Only the Proxy object named cluster is supported, and no additional proxies can be created.
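
    After installation, you can review the resulting configuration. For example, the following command prints the cluster-wide proxy settings:

    1. $ oc get proxy cluster -o yaml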

    Deploying the cluster

    You can install OKD on a compatible cloud platform.

    You can run the create cluster command of the installation program only once, during initial installation.

    Prerequisites

    • Configure an account with the cloud platform that hosts your cluster.

    • Obtain the OKD installation program and the pull secret for your cluster.

    Procedure

    1. Change to the directory that contains the installation program and initialize the cluster deployment:

      1. $ ./openshift-install create cluster --dir=<installation_directory> \ (1)
      2. --log-level=info (2)
      1For <installation_directory>, specify the location of your customized ./install-config.yaml file.
      2To view different installation details, specify warn, debug, or error instead of info.

      If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

      When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

      Example output

      1. ...
      2. INFO Install complete!
      3. INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
      4. INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
      5. INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
      6. INFO Time elapsed: 36m22s

      The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

      The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

      You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

    2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

      The elevated permissions provided by the AdministratorAccess policy are required only during installation.

    You can install the OpenShift CLI (oc) to interact with OKD from a command-line interface. You can install oc on Linux, Windows, or macOS.

    If you installed an earlier version of oc, you cannot use it to complete all of the commands in OKD 4.8. Download and install the new version of oc.

    Installing the OpenShift CLI on Linux

    You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

    Procedure

    1. Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.

    2. Download oc.tar.gz.

    3. Unpack the archive:

      1. $ tar xvf oc.tar.gz

    4. Place the oc binary in a directory that is on your PATH.

      To check your PATH, execute the following command:

      1. $ echo $PATH
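
      For example, on Linux you might move the binary to /usr/local/bin; the destination directory is an assumption, so choose any directory on your PATH (sudo might be required):

      1. $ sudo mv oc /usr/local/bin/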

    After you install the OpenShift CLI, it is available using the oc command:

    1. $ oc <command>

    Installing the OpenShift CLI on Windows

    You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

    Procedure

    1. Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.

    2. Download oc.zip.

    3. Unzip the archive with a ZIP program.

    4. Move the oc binary to a directory that is on your PATH.

      To check your PATH, open the command prompt and execute the following command:

      1. C:\> path

    After you install the OpenShift CLI, it is available using the oc command:

    1. C:\> oc <command>

    Installing the OpenShift CLI on macOS

    You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

    Procedure

    1. Navigate to https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/ and choose the folder for your operating system and architecture.

    2. Download oc.tar.gz.

    3. Unpack and unzip the archive.

    4. Move the oc binary to a directory on your PATH.

      To check your PATH, open a terminal and execute the following command:

      1. $ echo $PATH

    After you install the OpenShift CLI, it is available using the oc command:

    1. $ oc <command>

    Logging in to the cluster by using the CLI

    You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OKD installation.

    Prerequisites

    • You deployed an OKD cluster.

    • You installed the oc CLI.

    Procedure

    1. Export the kubeadmin credentials:

      1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig (1)
      1For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2. Verify you can run oc commands successfully using the exported configuration:

      1. $ oc whoami

      Example output

      1. system:admin

    Logging in to the cluster by using the web console

    The kubeadmin user exists by default after an OKD installation. You can log into your cluster as the kubeadmin user by using the OKD web console.

    Prerequisites

    • You have access to the installation host.

    • You completed a cluster installation and all cluster Operators are available.

    Procedure

    1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

      1. $ cat <installation_directory>/auth/kubeadmin-password

      Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

    2. List the OKD web console route:

      1. $ oc get routes -n openshift-console | grep 'console-openshift'

      Example output

      1. console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
    3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

    Additional resources

    • See About remote health monitoring for more information about the Telemetry service.

    Next steps