Installer-provisioned post-installation configuration

Configuring NTP for disconnected clusters

    OKD installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and to configure worker nodes as NTP clients of the control plane nodes after a successful deployment.

    OKD nodes must agree on a date and time to run properly. Configuring worker nodes to retrieve the date and time from the NTP servers on the control plane nodes enables the installation and operation of clusters that are not connected to a routable network, and that therefore do not have access to a higher-stratum NTP server.

    Procedure

    1. Create a Butane config, 99-master-chrony-conf-override.bu, including the contents of the chrony.conf file for the control plane nodes.

      Butane config example
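
      The control plane Butane config is not included in this extract. A minimal sketch, modeled on the worker example in step 3, might look like the following; the allow and local stratum directives are assumptions that let the control plane nodes keep serving time to the cluster even when no upstream NTP source is reachable.

      variant: openshift
      version: 4.13.0
      metadata:
        name: 99-master-chrony-conf-override
        labels:
          machineconfiguration.openshift.io/role: master
      storage:
        files:
          - path: /etc/chrony.conf
            mode: 0644
            overwrite: true
            contents:
              inline: |
                # The Machine Config Operator manages this file.
                server openshift-master-0.<cluster-name>.<domain> iburst (1)
                server openshift-master-1.<cluster-name>.<domain> iburst
                server openshift-master-2.<cluster-name>.<domain> iburst
                stratumweight 0
                driftfile /var/lib/chrony/drift
                rtcsync
                makestep 10 3
                bindcmdaddress 127.0.0.1
                bindcmdaddress ::1
                keyfile /etc/chrony.keys
                commandkey 1
                generatecommandkey
                noclientlog
                logchange 0.5
                logdir /var/log/chrony
                # Assumption: allow NTP client access from the worker nodes.
                allow all
                # Assumption: serve time even when not synchronized to an upstream source.
                local stratum 3 orphan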

      (1) You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
    2. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml, containing the configuration to be delivered to the control plane nodes:

      $ butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml
    3. Create a Butane config, 99-worker-chrony-conf-override.bu, including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes.

      Butane config example

      variant: openshift
      version: 4.13.0
      metadata:
        name: 99-worker-chrony-conf-override
        labels:
          machineconfiguration.openshift.io/role: worker
      storage:
        files:
          - path: /etc/chrony.conf
            mode: 0644
            overwrite: true
            contents:
              inline: |
                # The Machine Config Operator manages this file.
                server openshift-master-0.<cluster-name>.<domain> iburst (1)
                server openshift-master-1.<cluster-name>.<domain> iburst
                server openshift-master-2.<cluster-name>.<domain> iburst
                stratumweight 0
                driftfile /var/lib/chrony/drift
                rtcsync
                makestep 10 3
                bindcmdaddress 127.0.0.1
                bindcmdaddress ::1
                keyfile /etc/chrony.keys
                commandkey 1
                generatecommandkey
                noclientlog
                logchange 0.5
                logdir /var/log/chrony

      (1) You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name.
    4. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml, containing the configuration to be delivered to the worker nodes:

      $ butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml
    5. Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes:

      $ oc apply -f 99-worker-chrony-conf-override.yaml

      Example output

      machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created
    6. Check the status of the applied NTP settings.

      $ oc describe machineconfigpool
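
      Optionally, to confirm that a worker node is synchronizing against the control plane NTP servers, you can inspect chrony directly on a node. This check is not part of the documented procedure; <worker_node> is a placeholder for one of your worker node names, and the control plane host names should appear in the output as time sources.

      $ oc debug node/<worker_node> -- chroot /host chronyc sources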

Enabling a provisioning network after installation

    The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node’s baseboard management controller is routable via the network. Use the following procedure to enable a provisioning network after installation.

    Prerequisites

    • A dedicated physical network must exist, connected to all worker and control plane nodes.

    • You must isolate the native, untagged physical network.

    • The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed.

    • You can omit the provisioningInterface setting in OKD 4.10 to use the bootMACAddress configuration setting.

    Procedure

    1. If you set the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes, for example eth0 or eno1.

    2. Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes.

    3. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file:

      $ oc get provisioning -o yaml > enable-provisioning-nw.yaml
    4. Modify the provisioning CR file:

      $ vim ~/enable-provisioning-nw.yaml

      Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed. Then, add the provisioningIP, provisioningNetworkCIDR, provisioningDHCPRange, provisioningInterface, and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting.

      apiVersion: v1
      items:
      - apiVersion: metal3.io/v1alpha1
        kind: Provisioning
        metadata:
          name: provisioning-configuration
        spec:
          provisioningNetwork: (1)
          provisioningIP: (2)
          provisioningNetworkCIDR: (3)
          provisioningDHCPRange: (4)
          provisioningInterface: (5)
          watchAllNameSpaces: (6)

      (1) The provisioningNetwork setting is one of Managed, Unmanaged, or Disabled. When set to Managed, Metal3 manages the provisioning network and the Cluster Baremetal Operator (CBO) deploys the Metal3 pod with a configured DHCP server. When set to Unmanaged, the system administrator configures the DHCP server manually.
      (2) The provisioningIP setting is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled. The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server.
      (3) The Classless Inter-Domain Routing (CIDR) address of the provisioning network. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled. For example: 192.168.0.1/24.
      (4) The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled. For example: 192.168.0.64, 192.168.0.253.
      (5) The NIC name for the provisioning interface on the cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit this configuration setting if the provisioning network is Disabled, or omit it to use the bootMACAddress configuration setting instead.
      (6) Set watchAllNameSpaces to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false.
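
      For illustration only, a populated spec might look like the following. The 172.22.0.0/24 network, the specific addresses, and the eno1 interface name are placeholder assumptions, not recommendations; note that the provisioningIP sits inside the provisioning subnet but outside of the DHCP range.

      spec:
        provisioningNetwork: Managed
        provisioningIP: 172.22.0.3
        provisioningNetworkCIDR: 172.22.0.0/24
        provisioningDHCPRange: 172.22.0.10,172.22.0.100
        provisioningInterface: eno1
        watchAllNameSpaces: false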
    5. Save the changes to the provisioning CR file.

    6. Apply the provisioning CR file to the cluster:

      $ oc apply -f enable-provisioning-nw.yaml

Configuring an external load balancer

    You can configure an OKD cluster to use an external load balancer in place of the default load balancer.

    You can also configure an OKD cluster to use an external load balancer that supports multiple subnets. If you use multiple subnets, you can explicitly list all the IP addresses in any networks that are used by your load balancer targets. This configuration can reduce maintenance overhead because you can create and destroy nodes within those networks without reconfiguring the load balancer targets.

    If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28, you can simplify your load balancer targets.

    • On your load balancer, TCP over ports 6443, 443, and 80 must be reachable by all users of your system that are located outside the cluster.

    • Load balance the application ports, 443 and 80, across all of the compute nodes.

    • Load balance the API port, 6443, across the control plane nodes.

    • On your load balancer, do not expose port 22623, which is used to serve Ignition startup configurations to nodes, outside of the cluster.

    • Your load balancer must be able to access the required ports on each node in your cluster. You can ensure this level of access by meeting the following conditions; a reachability check is sketched after this list:

      • The API load balancer can access ports 22623 and 6443 on the control plane nodes.

      • The ingress load balancer can access ports 443 and 80 on the nodes where the ingress pods are located.

    External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes.
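
    As a quick way to confirm this reachability from the load balancer host, you can probe each required port with a TCP connection test. The node IP address placeholders below are assumptions for illustration, and the nc (netcat) utility is assumed to be installed on the load balancer host.

      $ nc -zv <control_plane_node_ip> 6443
      $ nc -zv <control_plane_node_ip> 22623
      $ nc -zv <ingress_node_ip> 443
      $ nc -zv <ingress_node_ip> 80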

    Procedure

    1. Enable access to the cluster from your load balancer on ports 6443, 443, and 80.

      As an example, note this HAProxy configuration:

      A section of a sample HAProxy configuration

      listen my-cluster-api-6443
        bind 0.0.0.0:6443
        mode tcp
        balance roundrobin
        server my-cluster-master-2 192.0.2.2:6443 check
        server my-cluster-master-0 192.0.2.3:6443 check
        server my-cluster-master-1 192.0.2.1:6443 check

      listen my-cluster-apps-443
        bind 0.0.0.0:443
        mode tcp
        balance roundrobin
        server my-cluster-worker-0 192.0.2.6:443 check
        server my-cluster-worker-1 192.0.2.5:443 check
        server my-cluster-worker-2 192.0.2.4:443 check

      listen my-cluster-apps-80
        bind 0.0.0.0:80
        mode tcp
        balance roundrobin
        server my-cluster-worker-0 192.0.2.7:80 check
        server my-cluster-worker-1 192.0.2.9:80 check
        server my-cluster-worker-2 192.0.2.8:80 check
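
      If you use HAProxy, you can validate the configuration file and then reload the service before sending traffic through it. This assumes the configuration lives at /etc/haproxy/haproxy.cfg and that HAProxy runs as a systemd service; adjust the path and service name for your environment.

      $ haproxy -c -f /etc/haproxy/haproxy.cfg
      $ sudo systemctl restart haproxy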
    2. Add records to your DNS server for the cluster API and apps over the load balancer. For example:

      <load_balancer_ip_address> api.<cluster_name>.<base_domain>
      <load_balancer_ip_address> apps.<cluster_name>.<base_domain>
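
      Optionally, before running the curl checks in the next step, you can confirm that the new records resolve to the load balancer. This check is an addition to the documented procedure and assumes the dig utility is available:

      $ dig +short api.<cluster_name>.<base_domain>
      $ dig +short apps.<cluster_name>.<base_domain>

      Both commands should return <load_balancer_ip_address>.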
    3. From a command line, use curl to verify that the external load balancer and DNS configuration are operational.

      1. Verify that the cluster API is accessible:

        $ curl https://<load_balancer_ip_address>:6443/version --insecure

        If the configuration is correct, you receive a JSON object in response:

        {
          "major": "1",
          "minor": "11+",
          "gitVersion": "v1.11.0+ad103ed",
          "gitCommit": "ad103ed",
          "gitTreeState": "clean",
          "buildDate": "2019-01-09T06:44:10Z",
          "goVersion": "go1.10.3",
          "compiler": "gc",
          "platform": "linux/amd64"
        }
      2. Verify that cluster applications are accessible:

          If the configuration is correct, you receive an HTTP response with a success status code, such as 200 OK; a hypothetical check is sketched below.
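
          The curl command for this check is not included in this extract. A hypothetical example, which assumes that the default console route console-openshift-console exists under apps.<cluster_name>.<base_domain>, is:

          $ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

          The -I flag requests only the response headers and -L follows any redirect issued by the load balancer or route.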