Upgrade Calico on Kubernetes
If you are using Calico in etcd mode on a Kubernetes cluster, we recommend upgrading to the Kubernetes API datastore.
If you have installed Calico using the manifest, we recommend upgrading to the Calico operator, as discussed here.
note
Do not use older versions of calicoctl after the upgrade. This may result in unexpected behavior and data corruption.
Host Endpoints
caution
If your cluster has host endpoints with interfaceName: *, you must prepare your cluster before upgrading. Failure to do so will result in an outage.
In versions of Calico prior to v3.14, all-interfaces host endpoints (host endpoints with interfaceName: *) only supported pre-DNAT policy. The default behavior of all-interfaces host endpoints, in the absence of any policy, was to allow all traffic.
Beginning from v3.14, all-interfaces host endpoints support normal policy in addition to pre-DNAT policy. The support for normal policy includes a change in default behavior for all-interfaces host endpoints: in the absence of policy the default behavior is to drop traffic. This default behavior is consistent with “named” host endpoints (which specify a named interface such as “eth0”); named host endpoints drop traffic in the absence of policy.
Before upgrading to v3.24, you must ensure that global network policies are in place that select existing all-interfaces host endpoints and explicitly allow existing traffic flows. As a starting point, you can create an allow-all policy that selects existing all-interfaces host endpoints. First, we’ll add a label to the existing host endpoints. Get a list of the nodes that have an all-interfaces host endpoint:
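A sketch of one way to list them, assuming calicoctl is configured locally (mirroring the labeling command below):

calicoctl get hep -owide | grep '*'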
With the names of the all-interfaces host endpoints, we can label each host endpoint with a new label (for example, host-endpoint-upgrade: ""):
calicoctl get hep -owide | grep '*' | awk '{print $1}' | xargs -I {} kubectl exec -i -n kube-system calicoctl -- /calicoctl label hostendpoint {} host-endpoint-upgrade=
Now that the nodes with an all-interfaces host endpoint are labeled with host-endpoint-upgrade, we can create a policy to log and allow all traffic going into or out of the host endpoints temporarily:
cat > allow-all-upgrade.yaml <<EOF
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-all-upgrade
spec:
  selector: has(host-endpoint-upgrade)
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log
    - action: Allow
  egress:
    - action: Log
    - action: Allow
EOF
Apply the policy:
calicoctl apply -f - < allow-all-upgrade.yaml
After applying this policy, all-interfaces host endpoints will log and allow all traffic through them. This policy will allow all traffic not accounted for by other policies. After upgrading, please review syslog logs for traffic going through the host endpoints and update the policy as needed to secure traffic to the host endpoints.
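To review those logs, one approach is to grep the node's syslog for Felix's log prefix; a sketch assuming the default prefix calico-packet and kernel messages routed to /var/log/syslog:

sudo grep 'calico-packet' /var/log/syslog | tail -n 50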
Upgrading an installation that uses Helm
Prior to release v3.23, the Calico helm chart itself deployed the tigera-operator namespace and required that the helm release be installed in the default namespace. Newer releases defer creation of the tigera-operator namespace to the user and allow the chart to be installed into the tigera-operator namespace.
When upgrading from Calico v3.22 or earlier to Calico v3.23 or later, you must complete the following steps to migrate ownership of the helm resources to the new chart location.
Patch existing resources so that the new chart can assume ownership.
kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrolebinding tigera-operator tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
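To spot-check that the patches took effect, you can inspect one of the resources for the new annotation (a simple grep-based sketch):

kubectl get installation default -o yaml | grep release-namespace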
Install the helm chart in the tigera-operator namespace.

helm install calico projectcalico/tigera-operator --version v3.24.5 --namespace tigera-operator
Once the install has succeeded, you can delete any old releases in the default namespace.

kubectl delete secret -n default -l name=calico,owner=helm --dry-run
The above command uses --dry-run to avoid making changes to your cluster. We recommend reviewing the output and then re-running the command without --dry-run to commit the changes.
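Once you have confirmed the output lists only the old release's secrets, the same command without the flag performs the deletion:

kubectl delete secret -n default -l name=calico,owner=helm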
All other upgrades
Run the helm upgrade:
helm upgrade calico projectcalico/tigera-operator
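To confirm the result, you can list releases in the target namespace (assuming the chart was installed into tigera-operator as described above):

helm list -n tigera-operator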
Upgrading an installation that uses the operator
Download the v3.24 operator manifest.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml -O
Use the following command to initiate an upgrade.
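A typical invocation, assuming the operator manifest downloaded above and that it was originally applied with kubectl (server-side apply also sidesteps the annotation size limit that this large manifest can hit with client-side apply):

kubectl apply --server-side --force-conflicts -f tigera-operator.yaml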
Upgrading an installation that uses manifests and the Kubernetes API datastore
Download the v3.24 manifest that corresponds to your original installation method.
Calico for policy and networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml -O
Calico for policy and flannel for networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/canal.yaml -O
Calico for policy (advanced)
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-policy-only.yaml -O
note
If you manually modified the manifest, you must manually apply the same changes to the downloaded manifest.
Use the following command to initiate a rolling update, after replacing <manifest-file-name> with the file name of your v3.24 manifest.

kubectl apply -f <manifest-file-name>
Watch the status of the upgrade as follows.
watch kubectl get pods -n kube-system
Verify that the status of all Calico pods is Running.

calico-node-hvvg8 2/2 Running 0 3m
calico-node-vm8kh 2/2 Running 0 3m
calico-node-w92wk 2/2 Running 0 3m
Use the following command to check the Calico version number.
calicoctl version
It should return a Cluster Version of v3.24.x.
If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this if you are not using Istio with Calico.
If you were upgrading from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and delete the temporary network policy allow-all-upgrade.
Congratulations! You have upgraded to Calico v3.24.
Upgrading an installation that uses an etcd datastore
Download the v3.24 manifest that corresponds to your original installation method.
Calico for policy and networking
curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico-etcd.yaml -O
Calico for policy and flannel for networking
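The corresponding download presumably follows the pattern of the other manifests; the canal-etcd.yaml filename here is an assumption:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/canal-etcd.yaml -O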
You must manually apply the changes you made to the manifest during installation to the downloaded v3.24 manifest. At a minimum, you must set the etcd_endpoints value.
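For reference, etcd_endpoints is set in the calico-config ConfigMap inside the manifest; a sketch of the stanza to edit (the endpoint address is a placeholder):

kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"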
Use the following command to initiate a rolling update, after replacing <manifest-file-name> with the file name of your v3.24 manifest.

kubectl apply -f <manifest-file-name>
Watch the status of the upgrade as follows.
watch kubectl get pods -n kube-system
Verify that the status of all Calico pods is Running.

calico-kube-controllers-6d4b9d6b5b-wlkfj 1/1 Running 0 3m
calico-node-hvvg8 1/2 Running 0 3m
calico-node-vm8kh 1/2 Running 0 3m
calico-node-w92wk 1/2 Running 0 3m
tip
The calico-node pods will report 1/2 in the READY column, as shown.
Remove any existing calicoctl instances, install the new calicoctl, and configure it to connect to your datastore.
Use the following command to check the Calico version number.
calicoctl version
It should return a Cluster Version of v3.24.x.
If you have enabled application layer policy, follow the instructions below to complete your upgrade. Skip this if you are not using Istio with Calico.
If you were upgrading from a version of Calico prior to v3.14 and followed the pre-upgrade steps for host endpoints above, review traffic logs from the temporary policy, add any global network policies needed to allow traffic, and delete the temporary network policy allow-all-upgrade.
Congratulations! You have upgraded to Calico v3.24.
Upgrading if you have application layer policy enabled
Dikastes is versioned the same as the rest of Calico, but an upgraded calico-node will still be able to work with a downlevel Dikastes, so you will not lose data plane connectivity during the upgrade. Once calico-node is upgraded, you can begin redeploying your service pods with the updated version of Dikastes.
If you have enabled application layer policy, take the following steps to upgrade the Dikastes sidecars running in your application pods. Skip these steps if you are not using Istio with Calico.
Update the Istio sidecar injector template to use the new version of Dikastes. Replace <your Istio version> below with the full version string of your Istio install, for example 1.4.2.

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/alp/istio-inject-configmap-<your Istio version>.yaml
Once the new template is in place, newly created pods use the upgraded version of Dikastes. Perform a rolling update of each of your service deployments to get them on the new version of Dikastes.
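One way to trigger such a rolling update, assuming kubectl v1.15 or later and a hypothetical deployment named my-service:

kubectl rollout restart deployment/my-service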
Migrating to auto host endpoints
caution
Auto host endpoints have an allow-all profile attached, which allows all traffic in the absence of network policy. Ensure appropriate network policies are in place before migrating; otherwise traffic you expect to be denied may be allowed.
In order to migrate existing all-interfaces host endpoints to Calico-managed auto host endpoints:
Add any labels on existing all-interfaces host endpoints to their corresponding Kubernetes nodes. Calico manages labels on automatic host endpoints by syncing labels from their nodes. For example, if your existing all-interfaces host endpoint for node node1 has the label environment: dev, then you must add that same label to its node:
kubectl label node node1 environment=dev
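To confirm the label was applied:

kubectl get node node1 --show-labels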
Enable auto host endpoints by patching the default KubeControllersConfiguration resource, as shown below. Note that automatic host endpoints are created with a profile attached that allows all traffic in the absence of network policy.
calicoctl patch kubecontrollersconfiguration default --patch '{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
Delete the old all-interfaces host endpoints. You can distinguish host endpoints managed by Calico from others in several ways. First, automatic host endpoints have the label projectcalico.org/created-by: calico-kube-controllers. Second, automatic host endpoints' names have the suffix -auto-hep.
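For example, to list only the manually created host endpoints that remain, you could filter out the automatic ones by name suffix (a sketch using the conventions above):

calicoctl get hostendpoint -owide | grep -v -- '-auto-hep'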
calicoctl delete hostendpoint <old_hostendpoint_name>