Post-installation network configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the config.openshift.io API group, and these fields cannot be changed:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
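For illustration, a CNO configuration CR that contains these inherited fields looks similar to the following sketch. The CIDR values shown are common installation defaults, not values read from your cluster:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:       # inherited pod IP address pools
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:       # inherited service IP address pool
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes # inherited cluster network provider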
Enabling the cluster-wide proxy
The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated, but it has a nil spec. For example:
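A minimal sketch of such a generated object follows; the exact contents can vary by cluster:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  trustedCA:
    name: ""
status: {}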
A cluster administrator can configure the proxy for OKD by modifying this cluster
Proxy object.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
Prerequisites
Cluster administrator permissions
OKD oc CLI tool installed
Procedure
Create a ConfigMap that contains any additional CA certificates required for proxying HTTPS connections.
You can skip this step if the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates:
apiVersion: v1
data:
  ca-bundle.crt: | (1)
    <MY_PEM_ENCODED_CERTS> (2)
kind: ConfigMap
metadata:
  name: user-ca-bundle (3)
  namespace: openshift-config (4)
1 This data key must be named ca-bundle.crt.
2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate.
3 The ConfigMap name that will be referenced from the Proxy object.
4 The ConfigMap must be in the openshift-config namespace.
Create the ConfigMap from this file:
$ oc create -f user-ca-bundle.yaml
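If the command succeeds, oc prints a confirmation similar to the following:
configmap/user-ca-bundle created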
Use the oc edit command to modify the Proxy object:
$ oc edit proxy/cluster
Configure the necessary fields for the proxy:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
name: cluster
spec:
httpProxy: http://<username>:<pswd>@<ip>:<port> (1)
httpsProxy: http://<username>:<pswd>@<ip>:<port> (2)
noProxy: example.com (3)
readinessEndpoints:
- http://www.google.com (4)
- https://www.google.com
trustedCA:
name: user-ca-bundle (5)
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set.
4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status.
5 A reference to the ConfigMap in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the ConfigMap must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Save the file to apply the changes.
The URL scheme must be http. The https scheme is currently not supported.
Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: "2019-10-25T18:27:09Z"
generation: 2
name: cluster
resourceVersion: "37966"
selfLink: /apis/config.openshift.io/v1/dnses/cluster
uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
baseDomain: <base_domain>
privateZone:
tags:
Name: <infrastructure_id>-int
kubernetes.io/cluster/<infrastructure_id>: owned
publicZone:
id: Z2XXXXXXXXXXA4
status: {}
Note that the spec section contains both a private and a public zone.
Patch the DNS custom resource to remove the public zone:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Example output
dns.config.openshift.io/cluster patched
Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created.
DNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
creationTimestamp: "2019-10-25T18:27:09Z"
generation: 2
name: cluster
resourceVersion: "37966"
selfLink: /apis/config.openshift.io/v1/dnses/cluster
uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
baseDomain: <base_domain>
privateZone:
tags:
Name: <infrastructure_id>-int
kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned
status: {}
OKD provides the following methods for communicating from outside the cluster with services running in the cluster:
If you have HTTP/HTTPS, use an Ingress Controller.
If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller.
Otherwise, use a load balancer, an external IP, or a node port.
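For example, the following is a minimal sketch of a Service that exposes an application through a node port. The service name, selector, and port values are hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport-service
spec:
  type: NodePort
  selector:
    app: example        # pods with this label receive the traffic
  ports:
  - protocol: TCP
    port: 8080          # port exposed by the service inside the cluster
    targetPort: 8080    # container port on the selected pods
    nodePort: 30080     # port opened on every node; must fall within the node port range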
Configuring the node port service range
As a cluster administrator, you can expand the available node port range. If your cluster uses a large number of node ports, you might need to increase the number of available ports.
The default port range is 30000-32767. You can never reduce the port range, even if you first expand it beyond the default range.
- Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900, the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration.
Expanding the node port range
You can expand the node port range for the cluster.
Prerequisites
Install the OpenShift CLI (oc).
Log in to the cluster with a user with cluster-admin privileges.
Procedure
To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range.
$ oc patch network.config.openshift.io cluster --type=merge -p \
'{
"spec":
{ "serviceNodePortRange": "30000-<port>" }
}'
You can alternatively apply the following YAML to update the node port range:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
name: cluster
spec:
  serviceNodePortRange: "30000-<port>"
Example output
network.config.openshift.io/cluster patched
To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply.
$ oc get configmaps -n openshift-kube-apiserver config \
-o jsonpath="{.data['config\.yaml']}" | \
grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]'
Example output
"service-node-port-range":["30000-33000"]
Configuring network policy
As a cluster administrator or project administrator, you can configure network policies for a project.
About network policy
In a cluster using a Kubernetes Container Network Interface (CNI) plug-in that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy
objects. In OKD 4.8, OpenShift SDN supports using network policy in its default network isolation mode.
When using the OpenShift SDN cluster network provider, the following limitations apply regarding network policies:
Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules.
By default, all pods in a project are accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy
objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy
objects within their own project.
If a pod is matched by selectors in one or more NetworkPolicy
objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy
objects. A pod that is not selected by any NetworkPolicy
objects is fully accessible.
The following example NetworkPolicy
objects demonstrate supporting different scenarios:
Deny all traffic:
To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic:
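A minimal sketch, mirroring the deny-by-default example shown later in the network policy creation procedure:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector:
  ingress: []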
Only allow connections from the OKD Ingress Controller:
To make a project allow only connections from the OKD Ingress Controller, add the following NetworkPolicy object:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
Only accept connections from pods within a project:
To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
Only allow HTTP and HTTPS traffic based on pod labels:
To enable only HTTP and HTTPS access to the pods with a specific label (role=frontend in the following example), add a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-http-and-https
spec:
podSelector:
matchLabels:
role: frontend
ingress:
- ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
Accept connections by using both namespace and pod selectors:
To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-pod-and-namespace-both
spec:
podSelector:
matchLabels:
name: test-pods
ingress:
- from:
- namespaceSelector:
matchLabels:
project: project_name
podSelector:
matchLabels:
name: test-pods
NetworkPolicy
objects are additive, which means you can combine multiple NetworkPolicy
objects together to satisfy complex network requirements.
For example, for the NetworkPolicy objects defined in the previous samples, you can define both the allow-same-namespace and allow-http-and-https policies within the same project. The pods with the label role=frontend then accept any connection allowed by either policy: connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace.
Example NetworkPolicy object
The following annotates an example NetworkPolicy object:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-27107 (1)
spec:
podSelector: (2)
matchLabels:
app: mongodb
ingress:
- from:
- podSelector: (3)
matchLabels:
app: app
ports: (4)
- protocol: TCP
port: 27017
1 | The name of the NetworkPolicy object. |
2 | A selector describing the pods to which the policy applies. The policy object can only select pods in the project in which the NetworkPolicy object is defined. |
3 | A selector matching the pods that the policy object allows ingress traffic from. The selector matches pods in the same project as the NetworkPolicy object. |
4 | A list of one or more destination ports to accept traffic on. |
Creating a network policy
If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster.
Prerequisites
Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You are logged in to the cluster with a user with admin privileges.
You are working in the namespace that the network policy applies to.
Procedure
Create a policy rule:
Create a <policy_name>.yaml file:
$ touch <policy_name>.yaml
where:
<policy_name>
Specifies the network policy file name.
Define a network policy in the file that you just created, such as in the following examples:
Deny ingress from all pods in all namespaces
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-by-default
spec:
podSelector:
ingress: []
Allow ingress from all pods in the same namespace
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
To create the network policy object, enter the following command:
$ oc apply -f <policy_name>.yaml -n <namespace>
where:
<policy_name>
Specifies the network policy file name.
<namespace>
Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace.
Example output
networkpolicy.networking.k8s.io/default-deny created
Configuring multitenant isolation by using network policy
You can configure your project to isolate it from pods and services in other project namespaces.
Prerequisites
Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You are logged in to the cluster with a user with admin privileges.
Procedure
Create the following NetworkPolicy objects:
A policy named allow-from-openshift-ingress:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: ingress
podSelector: {}
policyTypes:
- Ingress
EOF
A policy named allow-from-openshift-monitoring:
$ cat << EOF| oc create -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-monitoring
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: monitoring
  podSelector: {}
  policyTypes:
  - Ingress
EOF
A policy named allow-same-namespace:
$ cat << EOF| oc create -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
EOF
Optional: To confirm that the network policies exist in your current project, enter the following command:
$ oc describe networkpolicy
Example output
```
Name:         allow-from-openshift-ingress
Namespace:    example1
Created on:   2020-06-09 00:28:17 -0400 EDT
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
  Not affecting egress traffic
  Policy Types: Ingress
Name: allow-from-openshift-monitoring
Namespace: example1
Created on: 2020-06-09 00:29:57 -0400 EDT
Labels: <none>
Annotations: <none>
Spec:
PodSelector: <none> (Allowing the specific traffic to all pods in this namespace)
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
NamespaceSelector: network.openshift.io/policy-group: monitoring
Not affecting egress traffic
Policy Types: Ingress
```
As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy
objects when you create a new project.
Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Procedure
Log in as a user with cluster-admin privileges.
Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
Edit the project configuration resource using the web console or CLI.
Using the web console:
Navigate to the Administration → Cluster Settings page.
Click Global Configuration to view all configuration resources.
Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
Project configuration resource with custom project template
apiVersion: config.openshift.io/v1
kind: Project
metadata:
...
spec:
projectRequestTemplate:
name: <template_name>
After you save your changes, create a new project to verify that your changes were successfully applied.
Adding network policies to the new project template
As a cluster administrator, you can add network policies to the default template for new projects. OKD will automatically create all the NetworkPolicy
objects specified in the template in the project.
Prerequisites
Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN.
You installed the OpenShift CLI (oc).
You must log in to the cluster with a user with cluster-admin privileges.
You must have created a custom default project template for new projects.
Procedure
Edit the default template for a new project by running the following command:
$ oc edit template <project_template> -n openshift-config
Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request.
In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects.
In the following example, the objects parameter collection includes several NetworkPolicy objects.
For the OVN-Kubernetes network provider plug-in, when the Ingress Controller is configured to use the HostNetwork endpoint publishing strategy, there is no supported way to apply network policy so that ingress traffic is allowed and all other traffic is denied.
objects:
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-same-namespace
spec:
podSelector:
ingress:
- from:
- podSelector: {}
- apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-openshift-ingress
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
network.openshift.io/policy-group: ingress
podSelector: {}
policyTypes:
- Ingress
...
Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands:
Create a new project:
$ oc new-project <project> (1)
1 Replace <project> with the name for the project that you are creating.
Confirm that the network policy objects in the new project template exist in the new project:
$ oc get networkpolicy
NAME POD-SELECTOR AGE
allow-from-openshift-ingress <none> 7s
allow-from-same-namespace <none> 7s
The following configurations are supported for the current release of Red Hat OpenShift Service Mesh:
OKD version 4.x.
Red Hat OpenShift Dedicated version 4.
Azure Red Hat OpenShift version 4.
Red Hat OpenShift Online is not supported for Red Hat OpenShift Service Mesh.
This release of Red Hat OpenShift Service Mesh is only available on OKD x86_64, IBM Z, and IBM Power Systems.
IBM Z is only supported on OKD 4.6 and later.
IBM Power Systems is only supported on OKD 4.6 and later.
Configurations where all Service Mesh components are contained within a single OKD cluster. Red Hat OpenShift Service Mesh does not support management of microservices that reside outside of the cluster within which Service Mesh is running.
Configurations that do not integrate external services such as virtual machines.
For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the .
Supported network configurations
Red Hat OpenShift Service Mesh supports the following network configurations.
OVN-Kubernetes is supported as a technology preview in OKD version 4.7.
Supported configurations for Kiali
- The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers.
Supported configurations for Distributed Tracing
- Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated.
This release only supports the following Mixer adapter:
- 3scale Istio Adapter
Operator overview
Red Hat OpenShift Service Mesh requires the following four Operators:
OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with Jaeger. It is based on the open source Elasticsearch project.
Jaeger - Provides tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project.
Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project.
Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the
ServiceMeshControlPlane
resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project.
Next steps
- Install Red Hat OpenShift Service Mesh in your OKD environment.
Optimizing routing
The OKD HAProxy router scales to optimize performance.
Baseline Ingress Controller (router) performance
The OKD Ingress Controller, or router, is the Ingress point for all external traffic destined for OKD services.
When evaluating a single HAProxy router performance in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
HTTP keep-alive/close mode
Route type
TLS session resumption client support
Number of concurrent connections per target route
Number of target routes
Back end server page size
Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
Encryption | LoadBalancerService | HostNetwork |
---|---|---|
none | 21515 | 29622 |
edge | 16743 | 22913 |
passthrough | 36786 | 53295 |
re-encrypt | 21583 | 25198 |
In HTTP close (no keep-alive) scenarios:
Encryption | LoadBalancerService | HostNetwork |
---|---|---|
none | 5719 | 8273 |
edge | 2729 | 4069 |
passthrough | 4121 | 5344 |
re-encrypt | 2320 | 2941 |
Default Ingress Controller configuration with ROUTER_THREADS=4
was used and two different endpoint publishing strategies (LoadBalancerService/HostNetwork) were tested. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating 1 Gbit NIC at page sizes as small as 8 kB.
When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
Number of applications | Application type |
---|---|
5-10 | static file/web server or caching proxy |
100-1000 | applications generating dynamic content |
In general, HAProxy can support routes for 5 to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content.
Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier.
Ingress Controller (router) performance optimizations
OKD no longer supports modifying Ingress Controller deployments by setting environment variables such as ROUTER_THREADS
, ROUTER_DEFAULT_TUNNEL_TIMEOUT
, ROUTER_DEFAULT_CLIENT_TIMEOUT
, ROUTER_DEFAULT_SERVER_TIMEOUT
, and RELOAD_INTERVAL
.
You can modify the Ingress Controller deployment, but if the Ingress Operator is enabled, the configuration is overwritten.
Post-installation RHOSP network configuration
You can configure some aspects of an OKD on Red Hat OpenStack Platform (RHOSP) cluster after installation.
Configuring application access with floating IP addresses
After you install OKD, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic.
You do not need to perform this procedure if you provided values for the floating IP addresses during installation.
Prerequisites
An OKD cluster is installed.
Floating IP addresses are enabled as described in the OKD on RHOSP installation documentation.
Procedure
After you install the OKD cluster, attach a floating IP address to the ingress port:
Show the port:
$ openstack port show <cluster_name>-<cluster_ID>-ingress-port
Attach the port to the IP address:
$ openstack floating ip set --port <ingress_port_ID> <apps_FIP>
Add a wildcard A record for *.apps.<cluster_name>.<base_domain> to your DNS file:
*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>
If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts:
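A sketch of such entries follows. The console and OAuth host names are common defaults; add similar lines for any other application routes you use:
<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>
<apps_FIP> <app_name>.apps.<cluster_name>.<base_domain>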
Kuryr ports pools
A Kuryr ports pool maintains a number of ports on standby for pod creation.
Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted.
The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OKD cluster nodes.
Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair.
Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml
manifest file to configure ports pool behavior:
The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false.
The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1.
The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted.
The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3.
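For example, the following sketch of a cluster-network-03-config.yml manifest sets these parameters. The values are illustrative, and the kuryrConfig stanza is assumed to sit under defaultNetwork in the CNO configuration:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: true   # prepopulate pools for new hosts and namespaces
      poolMinPorts: 5                      # keep at least 5 free ports in each pool
      poolBatchPorts: 10                   # create up to 10 Neutron ports at a time
      poolMaxPorts: 0                      # 0 disables the upper bound on free ports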
Adjusting Kuryr ports pool settings in active deployments on RHOSP
You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster.
Procedure
From a command line, open the Cluster Network Operator (CNO) CR for editing:
$ oc edit networks.operator.openshift.io cluster
Edit the settings to meet your requirements. The following file is provided as an example:
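A sketch of what the edited CR might look like; only the kuryrConfig stanza is relevant to ports pool behavior, and the values are illustrative:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      enablePortPoolsPrepopulation: false
      poolMinPorts: 1
      poolBatchPorts: 3
      poolMaxPorts: 5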
Save your changes and quit the text editor to commit your changes.