Setting up the environment for an OpenShift installation
Preparing the provisioner node for OKD installation
Perform the following steps to prepare the environment.
Procedure
Log in to the provisioner node via ssh.
Create a non-root user (kni) and provide that user with sudo privileges:
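The original commands for this step are not shown here; the following is a minimal sketch of one way to create the user and grant passwordless sudo, assuming the kni user name used throughout this guide (adjust the password handling and sudoers policy to your environment):
# useradd kni
# passwd kni
# echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni
# chmod 0440 /etc/sudoers.d/kni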
Create an ssh key for the new user:
# su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''"
Log in as the new user on the provisioner node:
# su - kni
$
Use Red Hat Subscription Manager to register the provisioner node:
$ sudo subscription-manager register --username=<user> --password=<pass> --auto-attach
$ sudo subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms
Install the following packages:
$ sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool
Add the newly created user to the libvirt group:
$ sudo usermod --append --groups libvirt <user>
Start firewalld and enable the http service:
$ sudo systemctl start firewalld
$ sudo firewall-cmd --zone=public --add-service=http --permanent
$ sudo firewall-cmd --reload
Start and enable the libvirtd service:
$ sudo systemctl enable libvirtd --now
Create the default storage pool and start it:
$ sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
$ sudo virsh pool-start default
$ sudo virsh pool-autostart default
Configure networking.
You can also configure networking from the web console.
Export the baremetal network NIC name:
$ export PUB_CONN=<baremetal_nic_name>
Configure the baremetal network:
$ sudo nohup bash -c "
nmcli con down \"$PUB_CONN\"
nmcli con delete \"$PUB_CONN\"
# RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists
nmcli con down \"System $PUB_CONN\"
nmcli con delete \"System $PUB_CONN\"
nmcli connection add ifname baremetal type bridge con-name baremetal
nmcli con add type bridge-slave ifname \"$PUB_CONN\" master baremetal
pkill dhclient;dhclient baremetal
"
If you are deploying with a provisioning network, export the provisioning network NIC name:
$ export PROV_CONN=<prov_nic_name>
If you are deploying with a provisioning network, configure the provisioning network:
$ sudo nohup bash -c "
nmcli con down \"$PROV_CONN\"
nmcli con delete \"$PROV_CONN\"
nmcli connection add ifname provisioning type bridge con-name provisioning
nmcli con add type bridge-slave ifname \"$PROV_CONN\" master provisioning
nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual
nmcli con down provisioning
nmcli con up provisioning
"
The ssh connection might disconnect after executing these steps.
The IPv6 address can be any address as long as it is not routable via the baremetal network.
Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing.
ssh back into the provisioner node (if required):
# ssh kni@provisioner.<cluster-name>.<domain>
Verify the connection bridges have been properly created.
$ sudo nmcli con show
NAME UUID TYPE DEVICE
baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal
provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning
virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0
bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1
bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2
Create a pull-secret.txt file:
$ vim pull-secret.txt
In a web browser, navigate to the page that provides your pull secret, and scroll down to the Downloads section. Click Copy pull secret. Paste the contents into the pull-secret.txt file and save it in the kni user's home directory.
Retrieving the OKD installer
Use the latest-4.x
version of the installer to deploy the latest generally available version of OKD:
$ export VERSION=latest-4.8
$ export RELEASE_IMAGE=$(curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print $3}')
Extracting the OKD installer
After retrieving the installer, the next step is to extract it.
Procedure
Set the environment variables:
$ export cmd=openshift-baremetal-install
$ export pullsecret_file=~/pull-secret.txt
$ export extract_dir=$(pwd)
Get the oc binary:
$ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-linux.tar.gz | tar zxvf - oc
Extract the installer:
$ sudo cp oc /usr/local/bin
$ oc adm release extract --registry-config "${pullsecret_file}" --command=$cmd --to "${extract_dir}" ${RELEASE_IMAGE}
$ sudo cp openshift-baremetal-install /usr/local/bin
To employ image caching, you must download two images: the Fedora CoreOS (FCOS) image used by the bootstrap VM and the FCOS image used by the installer to provision the different nodes. Image caching is optional, but especially useful when running the installer on a network with limited bandwidth.
If you are running the installer on a network with limited bandwidth and the FCOS image download takes more than 15 to 20 minutes, the installer will time out. Caching images on a web server helps in such scenarios.
Install a container that contains the images.
Procedure
Install podman:
$ sudo dnf install -y podman
Open firewall port 8080 to be used for FCOS image caching:
$ sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
Create a directory to store the bootstrapOSImage and clusterOSImage:
$ mkdir /home/kni/rhcos_image_cache
Set the appropriate SELinux context for the newly created directory:
$ sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?"
$ sudo restorecon -Rv rhcos_image_cache/
Get the commit ID from the installer:
$ export COMMIT_ID=$(/usr/local/bin/openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}')
The ID determines which images the installer needs to download.
Get the URI for the FCOS image that the installer will deploy on the nodes:
$ export RHCOS_OPENSTACK_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.openstack.path | sed 's/"//g')
Get the URI for the FCOS image that the installer will deploy on the bootstrap VM:
$ export RHCOS_QEMU_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.qemu.path | sed 's/"//g')
Get the path where the images are published:
$ export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g')
Get the SHA hash for the FCOS image that will be deployed on the bootstrap VM:
$ export RHCOS_QEMU_SHA_UNCOMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq -r '.images.qemu["uncompressed-sha256"]')
Get the SHA hash for the FCOS image that will be deployed on the nodes:
$ export RHCOS_OPENSTACK_SHA_COMPRESSED=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq -r '.images.openstack.sha256')
Download the images and place them in the /home/kni/rhcos_image_cache directory:
$ curl -L ${RHCOS_PATH}${RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI}
$ curl -L ${RHCOS_PATH}${RHCOS_OPENSTACK_URI} -o /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI}
Confirm the SELinux type is httpd_sys_content_t for the newly created files:
$ ls -Z /home/kni/rhcos_image_cache
Create the pod:
$ podman run -d --name rhcos_image_cache \
-v /home/kni/rhcos_image_cache:/var/www/html \
-p 8080:8080/tcp \
registry.centos.org/centos/httpd-24-centos7:latest
The above command creates a caching web server with the name rhcos_image_cache, which serves the images for deployment. The first image, ${RHCOS_PATH}${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA_UNCOMPRESSED}, is the bootstrapOSImage and the second image, ${RHCOS_PATH}${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA_COMPRESSED}, is the clusterOSImage in the install-config.yaml file.
Generate the bootstrapOSImage and clusterOSImage configuration:
$ export RHCOS_OPENSTACK_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_OPENSTACK_URI} | sha256sum | awk '{print $1}')
$ export RHCOS_QEMU_SHA256=$(zcat /home/kni/rhcos_image_cache/${RHCOS_QEMU_URI} | sha256sum | awk '{print $1}')
$ export CLUSTER_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_OPENSTACK_URI}?sha256=${RHCOS_OPENSTACK_SHA256}"
$ export BOOTSTRAP_OS_IMAGE="http://${BAREMETAL_IP}:8080/${RHCOS_QEMU_URI}?sha256=${RHCOS_QEMU_SHA256}"
$ echo "${RHCOS_OPENSTACK_SHA256} ${RHCOS_OPENSTACK_URI}" > /home/kni/rhcos_image_cache/rhcos-ootpa-latest.qcow2.md5sum
$ echo " bootstrapOSImage=${BOOTSTRAP_OS_IMAGE}"
$ echo " clusterOSImage=${CLUSTER_OS_IMAGE}"
Add the required configuration to the install-config.yaml file under platform.baremetal:
platform:
baremetal:
bootstrapOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_QEMU_URI>?sha256=<RHCOS_QEMU_SHA256>
clusterOSImage: http://<BAREMETAL_IP>:8080/<RHCOS_OPENSTACK_URI>?sha256=<RHCOS_OPENSTACK_SHA256>
See the “Configuration files” section for additional details.
Configuration files
The install-config.yaml file requires some additional details. Most of the information teaches the installer and the resulting cluster enough about the available hardware to fully manage it.
Configure install-config.yaml. Change the appropriate variables to match the environment, including pullSecret and sshKey.
apiVersion: v1
baseDomain: <domain>
metadata:
name: <cluster-name>
networking:
machineCIDR: <public-cidr>
networkType: OVNKubernetes
compute:
- name: worker
replicas: 2 (1)
controlPlane:
name: master
replicas: 3
platform:
baremetal: {}
platform:
baremetal:
apiVIP: <api-ip>
ingressVIP: <wildcard-ip>
provisioningNetworkCIDR: <CIDR>
hosts:
- name: openshift-master-0
role: master
bmc:
address: ipmi://<out-of-band-ip> (2)
username: <user>
password: <password>
bootMACAddress: <NIC1-mac-address>
rootDeviceHints:
deviceName: "/dev/sda"
- name: <openshift-master-1>
role: master
bmc:
address: ipmi://<out-of-band-ip> (2)
username: <user>
password: <password>
bootMACAddress: <NIC1-mac-address>
rootDeviceHints:
deviceName: "/dev/sda"
- name: <openshift-master-2>
role: master
bmc:
address: ipmi://<out-of-band-ip> (2)
username: <user>
password: <password>
bootMACAddress: <NIC1-mac-address>
rootDeviceHints:
deviceName: "/dev/sda"
- name: <openshift-worker-0>
role: worker
bmc:
address: ipmi://<out-of-band-ip> (2)
username: <user>
password: <password>
bootMACAddress: <NIC1-mac-address>
- name: <openshift-worker-1>
role: worker
bmc:
address: ipmi://<out-of-band-ip>
username: <user>
password: <password>
bootMACAddress: <NIC1-mac-address>
rootDeviceHints:
deviceName: "/dev/sda"
pullSecret: '<pull_secret>'
sshKey: '<ssh_pub_key>'
1 Scale the worker machines based on the number of worker nodes that are part of the OKD cluster.
2 See the BMC addressing sections for more options.
Create a directory to store cluster configs:
$ mkdir ~/clusterconfigs
$ cp install-config.yaml ~/clusterconfigs
Ensure all bare metal nodes are powered off prior to installing the OKD cluster.
$ ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off
Remove old bootstrap resources if any are left over from a previous deployment attempt.
for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
do
sudo virsh destroy $i;
sudo virsh undefine $i;
sudo virsh vol-delete $i.ign --pool $i;
sudo virsh pool-destroy $i;
sudo virsh pool-undefine $i;
done
Setting proxy settings within the install-config.yaml file (optional)
To deploy an OKD cluster using a proxy, make the following changes to the install-config.yaml file.
apiVersion: v1
baseDomain: <domain>
proxy:
httpProxy: http://USERNAME:PASSWORD@proxy.example.com:PORT
httpsProxy: https://USERNAME:PASSWORD@proxy.example.com:PORT
noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>
The following is an example of noProxy
with values.
noProxy: .example.com,172.22.0.0/24,10.10.0.0/24
With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair.
Key considerations:
If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http://.
If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail.
Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY, HTTPS_PROXY, and NO_PROXY, as shown in the sketch below.
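A minimal sketch of exporting those variables on the provisioner node, reusing the placeholder values from the proxy example above (assumed values; adapt to your environment):
$ export HTTP_PROXY=http://USERNAME:PASSWORD@proxy.example.com:PORT
$ export HTTPS_PROXY=https://USERNAME:PASSWORD@proxy.example.com:PORT
$ export NO_PROXY=.example.com,172.22.0.0/24,10.10.0.0/24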
Modifying the install-config.yaml file for no provisioning network (optional)
To deploy an OKD cluster without a provisioning
network, make the following changes to the install-config.yaml
file.
platform:
baremetal:
apiVIP: <apiVIP>
ingressVIP: <ingress/wildcard VIP>
provisioningNetwork: "Disabled" (1)
1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled.
Modifying the install-config.yaml file for dual-stack network (optional)
To deploy an OKD cluster with dual-stack networking, edit the machineNetwork
, clusterNetwork
, and serviceNetwork
configuration settings in the install-config.yaml
file. Each setting must have two CIDR entries. Ensure the first CIDR entry is the IPv4 setting and the second CIDR entry is the IPv6 setting.
machineNetwork:
- cidr: {{ extcidrnet }}
- cidr: {{ extcidrnet6 }}
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
- cidr: fd02::/48
hostPrefix: 64
serviceNetwork:
- 172.30.0.0/16
- fd03::/112
The API VIP IP address and the Ingress VIP address must be of the primary IP address family when using dual-stack networking. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. However, Red Hat does support dual-stack networking with IPv4 as the primary IP address family. Therefore, the IPv4 entries must go before the IPv6 entries.
Configuring managed Secure Boot in the install-config.yaml file (optional)
You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish
, redfish-virtualmedia
, or idrac-virtualmedia
. To enable managed Secure Boot, add the bootMode
configuration setting to each node:
Example
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out_of_band_ip> (1)
username: <user>
password: <password>
bootMACAddress: <NIC1_mac_address>
rootDeviceHints:
deviceName: "/dev/sda"
bootMode: UEFISecureBoot (2)
1 Ensure the bmc.address setting uses redfish, redfish-virtualmedia, or idrac-virtualmedia as the protocol. See “BMC addressing for HPE iLO” or “BMC addressing for Dell iDRAC” for additional details.
2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot.
See “Configuring nodes” in the “Prerequisites” to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see “Configuring nodes for Secure Boot manually” in the “Configuring nodes” section. Configuring Secure Boot manually requires Redfish virtual media.
Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities.
Additional install-config parameters
See the following tables for the required parameters, the hosts
parameter, and the bmc
parameter for the install-config.yaml
file.
Parameters | Default | Description |
---|---|---|
The domain name for the cluster. For example, | ||
|
| The boot mode for a node. Options are |
The | ||
| The | |
| The name to be given to the OKD cluster. For example, | |
| The public CIDR (Classless Inter-Domain Routing) of the external network. For example, | |
| The OKD cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. | |
| Replicas sets the number of worker (or compute) nodes in the OKD cluster. | |
| The OKD cluster requires a name for control plane (master) nodes. | |
| Replicas sets the number of control plane (master) nodes included as part of the OKD cluster. | |
| The name of the network interface on nodes connected to the | |
| The default configuration used for machine pools without a platform configuration. | |
| The VIP to use for internal API communication. This setting must either be provided or pre-configured in the DNS so that the default name resolves correctly. | |
|
|
|
|
| The VIP to use for ingress traffic. |
Parameters | Default | Description |
---|---|---|
| Defines the IP range for nodes on the | |
|
| The CIDR for the network to use for provisioning. This option is required when not using the default address range on the |
| The third IP address of the | The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the |
| The second IP address of the | The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the |
|
| The name of the |
|
| The name of the |
| The default configuration used for machine pools without a platform configuration. | |
| A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: | |
| A URL to override the default operating system for cluster nodes. The URL must include a SHA-256 hash of the image. For example, | |
| The
| |
| Set this parameter to the appropriate HTTP proxy used within your environment. | |
| Set this parameter to the appropriate HTTPS proxy used within your environment. | |
| Set this parameter to the appropriate list of exclusions for proxy usage within your environment. |
Hosts
The hosts
parameter is a list of separate bare metal assets used to build the cluster.
Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.
IPMI
Hosts using IPMI use the ipmi://<out-of-band-ip>:<port>
address format, which defaults to port 623
if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: ipmi://<out-of-band-ip>
username: <user>
password: <password>
Redfish network boot
To enable Redfish, use redfish://
or redfish+http://
to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True
in the bmc
configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True
configuration parameter within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
disableCertificateVerification: True
BMC addressing for Dell iDRAC
The address
field for each bmc
entry is a URL for connecting to the OKD cluster nodes, including the type of controller in the URL scheme and its location on the network.
platform:
baremetal:
hosts:
- name: <hostname>
role: <master | worker>
bmc:
address: <address> (1)
username: <user>
password: <password>
1 The address configuration setting specifies the protocol.
For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI.
Protocol | Address Format |
---|---|
iDRAC virtual media |
|
Redfish network boot |
|
IPMI |
|
See the following sections for additional details.
Redfish virtual media for Dell iDRAC
For Redfish virtual media on Dell servers, use idrac-virtualmedia://
in the address
setting. Using redfish-virtualmedia://
will not work.
The following example demonstrates using iDRAC virtual media within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
username: <user>
password: <password>
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True
in the bmc
configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True
configuration parameter within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
username: <user>
password: <password>
disableCertificateVerification: True
Currently, Redfish is only supported on Dell hardware with certain iDRAC firmware versions. Ensure the OKD cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.
Redfish network boot for iDRAC
To enable Redfish, use redfish://
or redfish+http://
to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
username: <user>
password: <password>
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True
in the bmc
configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True
configuration parameter within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1
username: <user>
password: <password>
disableCertificateVerification: True
Currently, Redfish is only supported on Dell hardware with certain iDRAC firmware versions. Ensure the OKD cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration → Virtual Media → Attach Mode → AutoAttach.
BMC addressing for HPE iLO
The address
field for each bmc
entry is a URL for connecting to the OKD cluster nodes, including the type of controller in the URL scheme and its location on the network.
platform:
baremetal:
hosts:
- name: <hostname>
role: <master | worker>
bmc:
address: <address> (1)
username: <user>
password: <password>
1 The address configuration setting specifies the protocol.
For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI.
Protocol | Address Format |
---|---|
Redfish virtual media |
|
Redfish network boot |
|
IPMI |
|
See the following sections for additional details.
Redfish virtual media for HPE iLO
To enable Redfish virtual media for HPE servers, use redfish-virtualmedia://
in the address
setting. The following example demonstrates using Redfish virtual media within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True
in the bmc
configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True
configuration parameter within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
disableCertificateVerification: True
Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media.
Redfish network boot for HPE iLO
To enable Redfish, use redfish://
or redfish+http://
to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True
in the bmc
configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True
configuration parameter within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: redfish://<out-of-band-ip>/redfish/v1/Systems/1
username: <user>
password: <password>
disableCertificateVerification: True
BMC addressing for Fujitsu iRMC
The address
field for each bmc
entry is a URL for connecting to the OKD cluster nodes, including the type of controller in the URL scheme and its location on the network.
1 The address configuration setting specifies the protocol.
For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI.
Protocol | Address Format |
---|---|
iRMC |
|
IPMI |
|
iRMC
Fujitsu nodes can use irmc://<out-of-band-ip>, which defaults to port 623. The following example demonstrates an iRMC configuration within the install-config.yaml
file.
platform:
baremetal:
hosts:
- name: openshift-master-0
role: master
bmc:
address: irmc://<out-of-band-ip>
username: <user>
password: <password>
Root device hints
The rootDeviceHints
parameter enables the installer to provision the Fedora CoreOS (FCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it.
Subfield | Description |
---|---|
| A string containing a Linux device name like |
| A string containing a SCSI bus address like |
| A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. |
A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. | |
| A string containing the device serial number. The hint must match the actual value exactly. |
| An integer representing the minimum size of the device in gigabytes. |
A string containing the unique storage identifier. The hint must match the actual value exactly. | |
| A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. |
| A string containing the unique vendor storage identifier. The hint must match the actual value exactly. |
| A boolean indicating whether the device should be a rotating disk (true) or not (false). |
Example usage
- name: master-0
role: master
bmc:
address: ipmi://10.10.0.3:6203
username: admin
password: redhat
bootMACAddress: de:ad:be:ef:00:40
rootDeviceHints:
deviceName: "/dev/sda"
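Because a device must match every hint provided, hints can be combined to narrow the selection. The following sketch uses hypothetical values and the minSizeGigabytes and rotational subfields described in the table above to select a non-rotating disk of at least 120 GB:
rootDeviceHints:
  minSizeGigabytes: 120
  rotational: false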
Creating the OKD manifests
Create the OKD manifests.
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated
OKD installs the chrony
Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes before deployment.
OKD nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server.
Procedure
Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.
See “Creating machine configs with Butane” for information about Butane.
Butane config example
variant: openshift
version: 4.8.0
metadata:
name: 99-master-chrony-conf-override
labels:
machineconfiguration.openshift.io/role: master
storage:
files:
- path: /etc/chrony.conf
mode: 0644
overwrite: true
contents:
inline: |
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (https://www.pool.ntp.org/join.html).
# The Machine Config Operator manages this file
server openshift-master-0.<cluster-name>.<domain> iburst (1)
server openshift-master-1.<cluster-name>.<domain> iburst
server openshift-master-2.<cluster-name>.<domain> iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony
# Configure the control plane nodes to serve as local NTP servers
# for all worker nodes, even if they are not in sync with an
# upstream NTP server.
# Allow NTP client access from the local network.
allow all
# Serve time even if not synchronized to a time source.
local stratum 3 orphan
1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:
$ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.
Butane config example
variant: openshift
version: 4.8.0
metadata:
name: 99-worker-chrony-conf-override
labels:
machineconfiguration.openshift.io/role: worker
storage:
files:
- path: /etc/chrony.conf
mode: 0644
overwrite: true
contents:
inline: |
# The Machine Config Operator manages this file.
server openshift-master-0.<cluster-name>.<domain> iburst (1)
server openshift-master-1.<cluster-name>.<domain> iburst
server openshift-master-2.<cluster-name>.<domain> iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony
1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:
$ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
Copy the 99-master-chrony-conf-override.yaml file to the ~/clusterconfigs/manifests directory:
$ cp 99-master-chrony-conf-override.yaml ~/clusterconfigs/manifests
Copy the 99-worker-chrony-conf-override.yaml file to the ~/clusterconfigs/manifests directory:
$ cp 99-worker-chrony-conf-override.yaml ~/clusterconfigs/manifests
Configure network components to run on the control plane
Configure networking components to run exclusively on the control plane nodes. By default, OKD allows any node in the machine config pool to host the apiVIP
and ingressVIP
virtual IP addresses. However, many environments deploy worker nodes in separate subnets from the control plane nodes. Consequently, you must place the apiVIP
and ingressVIP
virtual IP addresses exclusively with the control plane nodes.
Procedure
Change to the directory storing the install-config.yaml file:
$ cd ~/clusterconfigs
Switch to the manifests subdirectory:
$ cd manifests
Create a file named cluster-network-avoid-workers-99-config.yaml:
$ touch cluster-network-avoid-workers-99-config.yaml
Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: 50-worker-fix-ipi-rwn
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- name: nodeip-configuration.service
enabled: true
contents: |
[Unit]
Description=Writes IP address configuration so that kubelet and crio services select a valid node IP
Wants=network-online.target
After=network-online.target ignition-firstboot-complete.service
Before=kubelet.service crio.service
[Service]
Type=oneshot
ExecStart=/bin/bash -c "exit 0 "
[Install]
WantedBy=multi-user.target
storage:
files:
- path: /etc/kubernetes/manifests/keepalived.yaml
mode: 0644
contents:
source: data:,
- path: /etc/kubernetes/manifests/mdns-publisher.yaml
mode: 0644
contents:
source: data:,
- path: /etc/kubernetes/manifests/coredns.yaml
mode: 0644
contents:
source: data:,
This manifest places the apiVIP and ingressVIP virtual IP addresses on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only:
openshift-ingress-operator
keepalived
Save the cluster-network-avoid-workers-99-config.yaml file.
Create a manifests/cluster-ingress-default-ingresscontroller.yaml file:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
name: default
namespace: openshift-ingress-operator
spec:
nodePlacement:
nodeSelector:
matchLabels:
node-role.kubernetes.io/master: ""
Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster.
Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true. Control plane nodes are not schedulable by default. For example:
$ sed -i "s;mastersSchedulable: false;mastersSchedulable: true;g" clusterconfigs/manifests/cluster-scheduler-02-config.yml
If control plane nodes are not schedulable, deploying the cluster will fail.
Before deploying the cluster, ensure that the api.<cluster-name>.<domain> domain name is resolvable in the external DNS server. When you configure network components to run exclusively on the control plane, the internal DNS resolution no longer works for worker nodes, which is an expected outcome.
Failure to create a DNS record for the api.<cluster-name>.<domain> domain name in the external DNS server precludes worker nodes from joining the cluster.
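As a quick check before deploying (a hedged example; dig is assumed to be available on the provisioner node), confirm that the record resolves against the external DNS server:
$ dig +short api.<cluster-name>.<domain> @<external-dns-server-ip>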
Creating a disconnected registry (optional)
In some cases, you might want to install an OKD cluster using a local copy of the installation registry. This can improve network efficiency, for example when the cluster nodes are on a network that does not have access to the internet.
A local, or mirrored, copy of the registry requires the following:
A certificate for the registry node. This can be a self-signed certificate.
A web server, served by a container running on a system.
An updated pull secret that contains the certificate and local repository information.
Creating a disconnected registry on a registry node is optional. The subsequent sub-sections labeled “(optional)” describe steps that you need to execute only when creating a disconnected registry on a registry node; execute all of them if you are creating one.
Preparing the registry node to host the mirrored registry (optional)
Make the following changes to the registry node.
Procedure
Open the firewall port on the registry node.
$ sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent
$ sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
Install the required packages for the registry node.
$ sudo yum -y install python3 podman httpd httpd-tools jq
Create the directory structure where the repository information will be held.
$ sudo mkdir -p /opt/registry/{auth,certs,data}
Generating the self-signed certificate (optional)
Generate a self-signed certificate for the registry node and put it in the /opt/registry/certs
directory.
Procedure
Adjust the certificate information as appropriate.
$ host_fqdn=$( hostname --long )
$ cert_c="<Country Name>" # Country Name (C, 2 letter code)
$ cert_s="<State>" # Certificate State (S)
$ cert_l="<Locality>" # Certificate Locality (L)
$ cert_o="<Organization>" # Certificate Organization (O)
$ cert_ou="<Org Unit>" # Certificate Organizational Unit (OU)
$ cert_cn="${host_fqdn}" # Certificate Common Name (CN)
$ openssl req \
-newkey rsa:4096 \
-nodes \
-sha256 \
-keyout /opt/registry/certs/domain.key \
-x509 \
-days 365 \
-out /opt/registry/certs/domain.crt \
-addext "subjectAltName = DNS:${host_fqdn}" \
-subj "/C=${cert_c}/ST=${cert_s}/L=${cert_l}/O=${cert_o}/OU=${cert_ou}/CN=${cert_cn}"
When replacing <Country Name>, ensure that it only contains two letters. For example, US.
Update the registry node's ca-trust with the new certificate:
$ sudo cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract
Creating the registry podman container (optional)
The registry container uses the /opt/registry
directory for certificates, authentication files, and to store its data files.
The registry container uses httpd
and needs an htpasswd
file for authentication.
Procedure
Create an htpasswd file in /opt/registry/auth for the container to use:
$ htpasswd -bBc /opt/registry/auth/htpasswd <user> <passwd>
Replace <user> with the user name and <passwd> with the password.
Create and start the registry container:
$ podman create \
--name ocpdiscon-registry \
-p 5000:5000 \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry" \
-e "REGISTRY_HTTP_SECRET=ALongRandomSecretForRegistry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/domain.key" \
-e "REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true" \
-v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z \
-v /opt/registry/certs:/certs:z \
docker.io/library/registry:2
$ podman start ocpdiscon-registry
Copy and update the pull-secret (optional)
Copy the pull secret file from the provisioner node to the registry node and modify it to include the authentication information for the new registry node.
Procedure
Copy the pull-secret.txt file:
$ scp kni@provisioner:/home/kni/pull-secret.txt pull-secret.txt
Update the host_fqdn environment variable with the fully qualified domain name of the registry node:
$ host_fqdn=$( hostname --long )
Update the b64auth environment variable with the base64 encoding of the http credentials used to create the htpasswd file:
$ b64auth=$( echo -n '<username>:<passwd>' | openssl base64 )
Replace <username> with the user name and <passwd> with the password.
Set the AUTHSTRING environment variable to use the base64 authorization string. The $USER variable is an environment variable containing the name of the current user:
$ AUTHSTRING="{\"$host_fqdn:5000\": {\"auth\": \"$b64auth\",\"email\": \"$USER@redhat.com\"}}"
Update the pull-secret.txt file:
$ jq ".auths += $AUTHSTRING" < pull-secret.txt > pull-secret-update.txt
Procedure
Copy the oc binary from the provisioner node to the registry node:
$ sudo scp kni@provisioner:/usr/local/bin/oc /usr/local/bin
Mirror the remote install images to the local repository.
$ /usr/local/bin/oc adm release mirror \
-a pull-secret-update.txt \
--from=$UPSTREAM_REPO \
--to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
--to=$LOCAL_REG/$LOCAL_REPO
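The mirror command above assumes that the UPSTREAM_REPO, LOCAL_REG, and LOCAL_REPO environment variables have already been exported on the registry node; they are not set earlier in this procedure. One possible set of values (shown as an assumption for illustration):
$ export UPSTREAM_REPO=${RELEASE_IMAGE}
$ export LOCAL_REG=<registry_host_name>:<registry_port>
$ export LOCAL_REPO='ocp4/openshift4'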
Modify the install-config.yaml file to use the disconnected registry (optional)
On the provisioner node, the install-config.yaml
file should use the newly created pull-secret from the pull-secret-update.txt
file. The install-config.yaml
file must also contain the disconnected registry node’s certificate and registry information.
Procedure
Add the disconnected registry node's certificate to the install-config.yaml file. The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces:
$ echo "additionalTrustBundle: |" >> install-config.yaml
$ sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml
Add the mirror information for the registry to the install-config.yaml file:
$ echo "imageContentSources:" >> install-config.yaml
$ echo "- mirrors:" >> install-config.yaml
$ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
$ echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml
$ echo "- mirrors:" >> install-config.yaml
$ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml
$ echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml
Replace registry.example.com
with the registry’s fully qualified domain name.
Deploying routers on worker nodes
During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If the initial cluster has only one worker node, or if a deployed cluster requires additional routers to handle external traffic loads destined for services within the OKD cluster, you can create a yaml
file to set an appropriate number of router replicas.
By default, the installer deploys two routers. If the cluster has at least two worker nodes, you can skip this section.
If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default, and you can also skip this section.
Procedure
Create a router-replicas.yaml file:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
name: default
namespace: openshift-ingress-operator
spec:
replicas: <num-of-router-pods>
endpointPublishingStrategy:
type: HostNetwork
nodePlacement:
nodeSelector:
matchLabels:
node-role.kubernetes.io/worker: ""
Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory:
$ cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml
OKD installer has been retrieved.
OKD installer has been extracted.
Required parameters for the install-config.yaml have been configured.
The hosts parameter for the install-config.yaml has been configured.
The bmc parameter for the install-config.yaml has been configured.
Conventions for the values configured in the bmc address field have been applied.
(optional) Created a disconnected registry.
(optional) Validated disconnected registry settings if in use.
(optional) Deployed routers on worker nodes.
Deploying the cluster via the OKD installer
Run the OKD installer:
$ ./openshift-baremetal-install --dir ~/clusterconfigs --log-level debug create cluster
Following the installation
During the deployment process, you can check the installation's overall status by running the tail command on the .openshift_install.log log file in the install directory, as shown in the example below.
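For example, assuming the ~/clusterconfigs install directory used earlier in this guide:
$ tail -f ~/clusterconfigs/.openshift_install.log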
Verifying static IP address configuration
If the DHCP reservation for a cluster node specifies an infinite lease, after the installer successfully provisions the node, the dispatcher script checks the node’s network configuration. If the script determines that the network configuration contains an infinite DHCP lease, it creates a new connection using the IP address of the DHCP lease as a static IP address.
The dispatcher script might run on successfully provisioned nodes while the provisioning of other nodes in the cluster is ongoing. |
Verify the network configuration is working properly.
Procedure
Check the network interface configuration on the node (see the example command below).
Turn off the DHCP server, reboot the OKD node, and ensure that the network configuration works properly.
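One way to perform the first check (a hedged example; it assumes you can ssh to the node as the core user and that NetworkManager's nmcli is available) is to list the connections and confirm that a static connection is active instead of a DHCP lease:
$ ssh core@<node-ip> sudo nmcli connection show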
- See OKD upgrade channels and releases for an explanation of the different release channels.