Customize Calico configuration
Concepts
Calico is installed by an operator which manages the installation, upgrade, and general lifecycle of a Calico cluster. The operator is installed directly on the cluster as a Deployment, and is configured through one or more custom Kubernetes API resources.
Calico manifests
Calico can also be installed using raw manifests as an alternative to the operator. The manifests contain the necessary resources for installing Calico on each node in your Kubernetes cluster. Using manifests is not recommended because they cannot automatically manage the lifecycle of Calico as the operator does. However, manifests may be useful for clusters that require highly specific modifications to the underlying Kubernetes resources.
How to
- Operator
- Manifest
Operator installations read their configuration from a specific set of Kubernetes APIs. These APIs are installed on the cluster as part of the operator installation, in the operator.tigera.io/v1 API group.
- Installation: a singleton resource with name “default” that configures common installation parameters for a Calico cluster.
- APIServer: a singleton resource with name “default” that configures installation of the Calico API server extension.
Configure the pod IP range
For many environments, Calico will auto-detect the correct pod IP range to use, or select an unused range on the cluster.
You can select a specific pod IP range by modifying the spec.calicoNetwork.ipPools
array in the Installation API resource.
note
The `ipPools` array can take at most one IPv4 and one IPv6 CIDR, and it only takes effect when installing Calico for the first time on a given cluster. To add additional pools, see the IPPool resource reference.
You can enable VXLAN in a cluster by setting `encapsulation: VXLAN` on your IPv4 pool. You can also disable BGP via the `spec.calicoNetwork.bgp` field.
```yaml
kind: Installation
apiVersion: operator.tigera.io/v1
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Disabled
    ipPools:
      - cidr: 198.51.100.0/24
        encapsulation: VXLAN
```
We provide a number of manifests to make deploying Calico easy. You can optionally modify the manifests before applying them, or modify a manifest and reapply it later to change settings as needed.
About customizing Calico manifests
Each manifest contains all the necessary resources for installing Calico on each node in your Kubernetes cluster.
It installs the following Kubernetes resources:
- Installs the `calico/node` container on each host using a DaemonSet.
- Installs the Calico CNI binaries and network config on each host using a DaemonSet.
- Runs `calico/kube-controllers` as a Deployment.
- Creates the `calico-etcd-secrets` Secret, which optionally allows for providing etcd TLS assets.
The sections that follow discuss the configurable parameters in greater depth.
Calico IPAM assigns IP addresses from IP pools.
To change the default IP range used for pods, modify the `CALICO_IPV4POOL_CIDR` section of the `calico.yaml` manifest. For more information, see Configuring calico/node.
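In the calico-node DaemonSet’s environment section, the setting looks like the following sketch (the CIDR shown is the common default; substitute a range that does not overlap your host or service networks):

```yaml
# env entry in the calico-node container of calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: '192.168.0.0/16'
```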
Users may wish to disable IP-in-IP encapsulation when, for example:
- Their cluster is running in a properly configured AWS VPC.
- All their Kubernetes nodes are connected to the same layer 2 network.
- They intend to use BGP peering to make their underlying infrastructure aware of pod IP addresses.
To disable IP-in-IP encapsulation, modify the `CALICO_IPV4POOL_IPIP` section of the manifest. For more information, see Configuring calico/node.
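For example, a minimal sketch of the relevant env entry with IP-in-IP turned off (valid values are `Always`, `CrossSubnet`, and `Never`):

```yaml
# env entry in the calico-node container; "Never" disables IP-in-IP
- name: CALICO_IPV4POOL_IPIP
  value: 'Never'
```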
Switching from IP-in-IP to VXLAN
By default, the Calico manifests enable IP-in-IP encapsulation. If you are on a network that blocks IP-in-IP, such as Azure, you may wish to switch to VXLAN. To do this at install time (so that Calico creates the default IP pool with VXLAN and no IP-in-IP configuration has to be undone):
- Start with one of the Calico for policy and networking manifests.
- Replace the environment variable name `CALICO_IPV4POOL_IPIP` with `CALICO_IPV4POOL_VXLAN`. Leave the value of the new variable as “Always”.
- Optionally, to save some resources if you’re running a VXLAN-only cluster, completely disable Calico’s BGP-based networking:
  - Replace `calico_backend: "bird"` with `calico_backend: "vxlan"`. This disables BIRD.
  - Comment out the lines `- -bird-ready` and `- -bird-live` in the calico/node readiness/liveness checks (otherwise disabling BIRD will cause the checks to fail on every node):
Replace:

```yaml
livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-live
      - -bird-live
readinessProbe:
  exec:
    command:
      - /bin/calico-node
      - -bird-ready
      - -felix-ready
```

with:

```yaml
livenessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-live
readinessProbe:
  exec:
    command:
      - /bin/calico-node
      - -felix-ready
```
For more information on calico/node’s configuration variables, including additional VXLAN settings, see Configuring calico/node.
note
The `CALICO_IPV4POOL_VXLAN` environment variable only takes effect when the first calico/node to start creates the default IP pool. It has no effect after the pool has already been created. To switch to VXLAN mode after installation time, use calicoctl to modify the IPPool resource.
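For example, assuming the default pool is named `default-ipv4-ippool` (the name used in typical installs; verify with `calicoctl get ippool`), a sketch of the modified resource to apply with `calicoctl apply -f`:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool  # assumed pool name; check your cluster
spec:
  cidr: 192.168.0.0/16       # keep your pool's existing CIDR
  ipipMode: Never            # turn off IP-in-IP...
  vxlanMode: Always          # ...and switch to VXLAN
  natOutgoing: true
```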
Configuring etcd
By default, these manifests do not configure secure access to etcd and assume an etcd proxy is running on each host. The following configuration options let you specify custom etcd cluster endpoints as well as TLS.
The following table outlines the supported options for etcd:
To use these manifests with a TLS-enabled etcd cluster you must do the following:
Download the v3.24 manifest that corresponds to your installation method.
Calico for policy and networking
```shell
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-etcd.yaml -O
```
Calico for policy and flannel for networking
Within the `ConfigMap` section, uncomment the `etcd_ca`, `etcd_key`, and `etcd_cert` lines so that they look as follows:

```yaml
etcd_ca: '/calico-secrets/etcd-ca'
etcd_cert: '/calico-secrets/etcd-cert'
etcd_key: '/calico-secrets/etcd-key'
```
Ensure that you have three files: one containing the `etcd_ca` value, another containing the `etcd_key` value, and a third containing the `etcd_cert` value. Use a command like the following to strip the newlines from each file and base64-encode its contents:

```shell
cat <file> | base64 -w 0
```
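As a quick sanity check of the output format (assuming GNU coreutils `base64`, where `-w 0` disables line wrapping), encoding a short placeholder string produces a single unwrapped line:

```shell
# Placeholder input; in practice <file> is your PEM-encoded key, cert, or CA.
printf 'etcd-secret-data' | base64 -w 0
# Prints: ZXRjZC1zZWNyZXQtZGF0YQ==
```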
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
data:
  # Populate the following files with etcd TLS configuration if desired,
  # but leave blank if not using TLS for etcd.
  # This self-hosted install expects three files with the following names.
  # The values should be base64 encoded strings of the entire contents of each file.
  etcd-key: LS0tLS1CRUdJTiB...VZBVEUgS0VZLS0tLS0=
  etcd-cert: LS0tLS1...ElGSUNBVEUtLS0tLQ==
  etcd-ca: LS0tLS1CRUdJTiBD...JRklDQVRFLS0tLS0=
```
Apply the manifest.
Calico for policy and networking

```shell
kubectl apply -f calico-etcd.yaml
```

Calico for policy and flannel for networking

```shell
kubectl apply -f canal.yaml
```
Calico’s manifests assign its components one of two service accounts. Depending on your cluster’s authorization mode, you’ll want to back these service accounts with the necessary permissions.
Other configuration options
The following table outlines the remaining supported `ConfigMap` options.
CNI network configuration template
The `cni_network_config` configuration option supports the following template fields, which will be filled in automatically by the `calico/cni` container:
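For illustration, here is a representative snippet from a typical etcd-backed manifest (the exact plugin fields shown are illustrative), showing how template fields such as `__ETCD_ENDPOINTS__` and `__KUBECONFIG_FILEPATH__` appear inside `cni_network_config`:

```yaml
cni_network_config: |-
  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "mtu": __CNI_MTU__,
        "ipam": { "type": "calico-ipam" },
        "policy": { "type": "k8s" },
        "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" }
      }
    ]
  }
```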
Instead of installing from our pre-modified Istio manifests, you may wish to customize your Istio install or use a different Istio version. This section walks you through the necessary changes to a generic Istio install manifest to allow application layer policy to operate.
The standard Istio manifests for the sidecar injector include a ConfigMap that contains the template used when adding pods to the cluster. The template adds an init container and the Envoy sidecar. Application layer policy requires an additional lightweight sidecar called Dikastes which receives Calico policy from Felix and applies it to incoming connections and requests.
If you haven’t already done so, download an Istio release and untar it to a working directory.
Open the `install/kubernetes/istio-demo-auth.yaml` file in an editor, and locate the `istio-sidecar-injector` ConfigMap. In the existing `istio-proxy` container, add a new `volumeMount`:

```yaml
- mountPath: /var/run/dikastes
  name: dikastes-sock
```
Add a new container to the template:

```yaml
- name: dikastes
  image: calico/dikastes:v3.24.5
  args: ["server", "-l", "/var/run/dikastes/dikastes.sock", "-d", "/var/run/felix/nodeagent/socket"]
  securityContext:
    allowPrivilegeEscalation: false
  livenessProbe:
    exec:
      command:
        - /healthz
        - liveness
    initialDelaySeconds: 3
    periodSeconds: 3
  readinessProbe:
    exec:
      command:
        - /healthz
        - readiness
    initialDelaySeconds: 3
    periodSeconds: 3
  volumeMounts:
    - mountPath: /var/run/dikastes
      name: dikastes-sock
    - mountPath: /var/run/felix
      name: felix-sync  # shared with Felix; declared in the next step
```
Add two new volumes.
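A sketch of those two volumes, matching Calico's pre-modified manifests (the volume names correspond to the `volumeMounts` above; the `flexVolume` driver name `nodeagent/uds` is the one Calico's manifests use for the Felix socket):

```yaml
- name: dikastes-sock
  emptyDir:
    medium: Memory
- name: felix-sync
  flexVolume:
    driver: nodeagent/uds
```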
The volumes you added are used to create Unix domain sockets that allow communication between Envoy and Dikastes and between Dikastes and Felix. Once created, a Unix domain socket is an in-memory communications channel. The volumes are not used for any kind of stateful storage on disk.
Refer to the pre-modified Istio manifests for an example with the above changes.