OKD 4.13 Documentation

    To navigate the OKD 4.13 documentation, you can use one of the following methods:

    • Use the left navigation bar to browse the documentation.

    • Select the task that interests you from the contents of this Welcome page.

    Start with Architecture and Security and compliance.

    Explore the following OKD installation tasks:

    • OKD installation overview: You can install OKD on installer-provisioned or user-provisioned infrastructure. The OKD installation program provides the flexibility to deploy OKD on a range of different platforms.

    • Install a cluster on Alibaba: You can install OKD on Alibaba Cloud on installer-provisioned infrastructure. This is currently a Technology Preview feature only.

    • Install a cluster on AWS: You have many installation options when you deploy a cluster on Amazon Web Services (AWS). You can deploy clusters with default settings or custom AWS settings. You can also deploy a cluster on AWS infrastructure that you provisioned yourself. You can modify the provided AWS CloudFormation templates to meet your needs.

    • Install a cluster on Azure: You can deploy clusters with default settings, custom Azure settings, or custom networking settings in Microsoft Azure. You can also provision OKD into an Azure Virtual Network or use Azure Resource Manager Templates to provision your own infrastructure.

    • Install a cluster on Azure Stack Hub: You can install OKD on Azure Stack Hub on installer-provisioned infrastructure.

    • Install a cluster on GCP: You can deploy clusters with default settings or custom GCP settings on Google Cloud Platform (GCP). You can also perform a GCP installation where you provision your own infrastructure.

    • Install a cluster on vSphere: You can install OKD on supported versions of vSphere.

    • Install a cluster on VMware Cloud: You can install OKD on supported versions of VMware Cloud (VMC) on AWS.

    • Install an installer-provisioned cluster on bare metal: You can install OKD on bare metal with an installer-provisioned architecture.

    • Install a user-provisioned cluster on bare metal: If none of the available platform and cloud provider deployment options meet your needs, you can install OKD on user-provisioned, bare-metal infrastructure.

    • Install a cluster on OpenStack: You can install a cluster on OpenStack with customizations, with network customizations, or on installer-provisioned infrastructure.

      You can install a cluster on OpenStack with customizations or on user-provisioned infrastructure.

    • Install a cluster on oVirt: You can deploy clusters on oVirt with a quick install or an install with customizations.

    • Install a cluster in a restricted network: If your cluster that uses user-provisioned infrastructure on AWS, GCP, or bare metal does not have full access to the internet, mirror the OKD installation images and install a cluster in a restricted network.

    • Install a private cluster: If your cluster does not require external internet access, you can install a private cluster on AWS, Azure, GCP, or IBM Cloud. Internet access is still required to access the cloud APIs and installation media.

    • Access OKD: Use credentials output at the end of the installation process to log in to the OKD cluster from the command line or web console.

    • Install Red Hat OpenShift Data Foundation: You can install Red Hat OpenShift Data Foundation as an Operator to provide highly integrated and simplified persistent storage management for containers.

    • Install a cluster on Nutanix: You can install a cluster on your Nutanix instance that uses installer-provisioned infrastructure. With this type of installation, you can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains.

    • RHCOS image layering provides a way for you to add new images on top of the base RHCOS image. This layering does not modify the base RHCOS image. Instead, the layering creates a custom layered image that includes all RHCOS functionality and adds functionality to specific nodes in the cluster.

    Develop and deploy containerized applications with OKD. The OKD documentation helps you:

    • Understand containerized applications: Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators.

    • Work with projects: Create projects from the OKD web console or OpenShift CLI (oc) to organize and share the software you develop.

    • Use the Developer perspective in the OKD web console to create and deploy applications.

    • Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base.

    • Bind workloads to backing services: With the Service Binding Operator, an application developer can bind workloads with Operator-managed backing services by automatically collecting and sharing binding data with the workloads. The Service Binding Operator improves the development lifecycle with a consistent and declarative service binding method that prevents discrepancies in cluster environments.

    • Create CI/CD Pipelines: Pipelines are serverless, cloud-native, continuous integration and continuous deployment systems that run in isolated containers. Pipelines use standard Tekton custom resources to automate deployments and are designed for decentralized teams that work on microservice-based architecture.

    • Use GitOps: GitOps is a declarative way to implement continuous deployment for cloud native applications. GitOps defines infrastructure and application definitions as code. GitOps uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. GitOps also handles and automates complex deployments at a fast pace, which saves time during deployment and release cycles.

    • Deploy Helm charts: Helm is a software package manager that simplifies deployment of applications and services to OKD clusters. Helm uses a packaging format called charts. A Helm chart is a collection of files that describes the OKD resources.

    • Understand image builds: Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials, such as Git repositories, local binary inputs, and external artifacts. You can follow examples of build types from basic builds to advanced builds.

    • Work with images and image streams: A container image is the most basic building block in OKD and Kubernetes applications. By defining image streams, you can gather multiple versions of an image in one place as you continue to develop the image stream. With S2I containers, you can insert your source code into a base container. The base container is configured to run code of a particular type, such as Ruby, Node.js, or Python.

    • Create deployments: Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments by using the Workloads page or OpenShift CLI (oc). Learn rolling, recreate, and custom deployment strategies. A minimal programmatic sketch appears at the end of this page.

    • Use templates: Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports, and other content that defines how an application can be run or built.

    • Understand Operators: Operators are the preferred method for creating on-cluster applications for OKD 4.13. Learn about the Operator Framework and how to deploy applications by using installed Operators into your projects.

    • Develop Operators: Operators are the preferred method for creating on-cluster applications for OKD 4.13. Learn the workflow for building, testing, and deploying Operators. You can then create your own Operators based on Ansible or Helm, or configure built-in Prometheus monitoring by using the Operator SDK.

    • Understand OKD management: Learn about components of the OKD 4.13 control plane. See how OKD control plane and worker nodes are managed and updated through the Machine API and Cluster Version Operators.

    • Enable cluster capabilities that were disabled prior to installation: Cluster administrators can enable cluster capabilities that were disabled prior to installation. For more information, see Enabling cluster capabilities.

    • Manage machines: Manage machines in your cluster on AWS, Azure, or GCP by deploying health checks and applying autoscaling to machines.

    • Manage container registries: Each OKD cluster includes a built-in container registry for storing its images. You can also configure a separate registry to use with OKD. The Quay.io website provides a public container registry that stores OKD containers and Operators.

    • Manage users and groups: Add users and groups with different levels of permissions to use or modify clusters.

    • Manage authentication: Learn how user, group, and API authentication works in OKD. OKD supports multiple identity providers.

    • Manage ingress, API server, and service certificates: OKD creates certificates by default for the Ingress Operator, the API server, and for services needed by complex middleware applications that require encryption. You might need to change, add, or rotate these certificates.

    • Manage networking: The cluster network in OKD is managed by the Cluster Network Operator (CNO). The CNO uses rules to direct traffic between nodes and the pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. By using network policy features, you can isolate your pods or permit selected traffic. A minimal network policy sketch appears at the end of this page.

    • Manage storage: With OKD, a cluster administrator can configure persistent storage by using Red Hat OpenShift Data Foundation, AWS Elastic Block Store, NFS, iSCSI, and more. You can expand persistent volumes, configure dynamic provisioning, and use the Container Storage Interface (CSI) to configure, clone, and use snapshots of persistent storage.

    • Manage Operators: Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters. After you install them, you can run, upgrade, back up, or otherwise manage the Operator on your cluster.

    • Understand Windows container workloads. You can use the Red Hat OpenShift support for Windows Containers feature to run Windows compute nodes in an OKD cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes.

    • Use custom resource definitions (CRDs) to modify the cluster: Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs.

    • Set resource quotas: Choose from CPU, memory, and other system resources to set quotas. A minimal quota sketch appears at the end of this page.

    • Prune and reclaim resources: Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs.

    • Scale and tune clusters: Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment.

    • Update a cluster: Use the Cluster Version Operator (CVO) to upgrade your OKD cluster. If an update is available from the OpenShift Update Service (OSUS), you apply that cluster update from either the OKD web console or the OpenShift CLI (oc).

    • Understand the OpenShift Update Service: Learn about installing and managing a local OpenShift Update Service for recommending OKD updates in disconnected environments.

    • Improve cluster stability in high latency environments by using worker latency profiles: If your network has latency issues, you can use one of three worker latency profiles to help ensure that your control plane does not accidentally evict pods when it cannot reach a worker node. You can configure or modify the profile at any time during the life of the cluster.

    • Work with OpenShift Logging: Learn about OpenShift Logging and configure different OpenShift Logging components, such as Elasticsearch, Fluentd, and Kibana.

    • Monitor clusters: Learn to configure the monitoring stack. After configuring monitoring, use the web console to access monitoring dashboards. In addition to infrastructure metrics, you can also scrape and view metrics for your own services.

    • Remote health monitoring: OKD collects anonymized, aggregated information about your cluster. Red Hat receives this data through Telemetry and the Insights Operator and uses it to improve OKD. You can view the data that is collected by remote health monitoring.
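
    The following sketches expand on a few of the entries above. They are minimal sketches only, written with the community Python kubernetes client against the current kubeconfig context; the project name my-project, the object names, and the image reference are hypothetical, and the same objects are more commonly created from the web console, with the OpenShift CLI (oc), or from manifests. This first sketch relates to the Create deployments entry and creates a Deployment that uses the rolling update strategy.

        # Minimal sketch: create a Deployment with a rolling update strategy.
        # Assumes the Python "kubernetes" client and a kubeconfig for your cluster.
        # "my-project", "hello-app", and the image reference are hypothetical.
        from kubernetes import client, config

        config.load_kube_config()          # use the current kubeconfig context
        apps = client.AppsV1Api()

        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="hello-app"),
            spec=client.V1DeploymentSpec(
                replicas=2,
                selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
                strategy=client.V1DeploymentStrategy(type="RollingUpdate"),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
                    spec=client.V1PodSpec(containers=[
                        client.V1Container(
                            name="hello-app",
                            image="quay.io/example/hello-app:latest",
                        ),
                    ]),
                ),
            ),
        )

        apps.create_namespaced_deployment(namespace="my-project", body=deployment)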
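
    This sketch relates to the Manage networking entry. Under the same assumptions, it creates a NetworkPolicy that permits ingress to the hello-app pods only from pods in the same project that carry the role=frontend label; the labels are illustrative.

        # Minimal sketch: isolate "hello-app" pods with a NetworkPolicy.
        # Labels and names are illustrative only.
        from kubernetes import client, config

        config.load_kube_config()
        net = client.NetworkingV1Api()

        policy = client.V1NetworkPolicy(
            api_version="networking.k8s.io/v1",
            kind="NetworkPolicy",
            metadata=client.V1ObjectMeta(name="allow-frontend-only"),
            spec=client.V1NetworkPolicySpec(
                pod_selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
                policy_types=["Ingress"],
                ingress=[client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"role": "frontend"},
                        ),
                    )],
                )],
            ),
        )

        net.create_namespaced_network_policy(namespace="my-project", body=policy)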
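
    This sketch relates to the Set resource quotas entry. Under the same assumptions, it creates a ResourceQuota with illustrative CPU, memory, and pod limits for a single project.

        # Minimal sketch: set a per-project ResourceQuota (illustrative values).
        from kubernetes import client, config

        config.load_kube_config()
        core = client.CoreV1Api()

        quota = client.V1ResourceQuota(
            api_version="v1",
            kind="ResourceQuota",
            metadata=client.V1ObjectMeta(name="compute-quota"),
            spec=client.V1ResourceQuotaSpec(hard={
                "requests.cpu": "4",
                "requests.memory": "8Gi",
                "limits.cpu": "8",
                "limits.memory": "16Gi",
                "pods": "20",
            }),
        )

        core.create_namespaced_resource_quota(namespace="my-project", body=quota)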