Cilium

    To use Cilium, specify the following in the cluster spec:

        networking:
          cilium: {}

    The following command sets up a cluster using Cilium.

        kops create cluster \
          --zones $ZONES \
          --networking cilium \
          --yes \
          --name cilium.example.com

    This feature is in beta state as of kOps 1.18.

    By default, Cilium will use CRDs for synchronizing agent state. This can cause performance problems on larger clusters. As of kOps 1.18, kOps can manage a dedicated etcd cluster, using etcd-manager, for Cilium agent state sync. The Cilium etcd documentation contains recommendations for when this should be enabled.

    For new clusters you can use the cilium-etcd networking provider:

        export ZONES=mylistofzones
        kops create cluster \
          --zones $ZONES \
          --networking cilium-etcd \
          --yes \
          --name cilium.example.com
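    Before running `kops create cluster`, it can help to sanity-check the `$ZONES` variable. The following is a minimal sketch (the `zones_ok` helper and the zone names are illustrative, not part of kOps) that loosely validates a comma-separated list of AWS availability zone names:

```shell
# Sketch: loosely validate that ZONES looks like a comma-separated list of
# AWS availability zones (e.g. "eu-central-1a,eu-central-1b").
# zones_ok is a hypothetical helper, not a kOps command.
zones_ok() {
  echo "$1" | grep -Eq '^[a-z]+-[a-z]+-[0-9][a-z](,[a-z]+-[a-z]+-[0-9][a-z])*$'
}

ZONES="eu-central-1a,eu-central-1b"
zones_ok "$ZONES" && echo "ZONES format looks valid"
```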

    For existing clusters, add the following to spec.etcdClusters. Make sure the instanceGroup values match those of the other etcd clusters. You should also enable auto compaction:

        - etcdMembers:
          - instanceGroup: master-az-1a
            name: a
          - instanceGroup: master-az-1b
            name: b
          - instanceGroup: master-az-1c
            name: c
          manager:
            env:
            - name: ETCD_AUTO_COMPACTION_MODE
              value: revision
            - name: ETCD_AUTO_COMPACTION_RETENTION
              value: "2500"
          name: cilium

    It is important that you perform a rolling update on the entire cluster so that all the nodes can connect to the new etcd cluster.

        kops update cluster
        kops update cluster --yes
        kops rolling-update cluster --force --yes

    Then enable etcd as the kvstore:

        networking:
          cilium:
            etcdManaged: true

    Kube-proxy Replacement

    In this mode, the cluster is fully functional without kube-proxy, with Cilium replacing kube-proxy's NodePort implementation using BPF. Read more about this in the Cilium kube-proxy replacement documentation.

    Be aware that you need to use an AMI with at least Linux 4.19.57 for this feature to work.
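    A quick way to check this on a node is to compare the running kernel against the minimum. The sketch below (the `kernel_ok` helper is illustrative, not part of kOps or Cilium) uses version-sort ordering to do the comparison:

```shell
# Sketch: pre-flight check that a node kernel meets the minimum of 4.19.57
# required for Cilium's BPF NodePort implementation.
# kernel_ok VERSION MINIMUM succeeds when VERSION >= MINIMUM under
# version-sort ordering (GNU sort -V).
kernel_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# On a node, strip any distro suffix from uname -r before comparing:
kver="$(uname -r | cut -d- -f1)"
if kernel_ok "$kver" "4.19.57"; then
  echo "kernel $kver is recent enough for BPF NodePort"
else
  echo "kernel $kver is too old (need >= 4.19.57)" >&2
fi
```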

    Also be aware that while enabling this on an existing cluster is safe, disabling it is disruptive and requires you to run kops rolling-update cluster --cloudonly.

        kubeProxy:
          enabled: false
        networking:
          cilium:
            enableNodePort: true
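    Once the agents are running, `cilium status` reports which replacement mode is active. The following sketch shows one way to extract that line; the parsing helper is illustrative, and the `kubectl exec` target assumes kOps' default `cilium` DaemonSet in `kube-system`:

```shell
# Sketch: extract the KubeProxyReplacement mode from `cilium status` output.
# On a live cluster you would pipe real output into the helper, e.g.:
#   kubectl -n kube-system exec ds/cilium -- cilium status | kpr_mode
kpr_mode() {
  grep '^KubeProxyReplacement' | sed 's/^[^:]*:[[:space:]]*//' | awk '{print $1}'
}

# Illustrative sample line, standing in for real agent output:
printf 'KubeProxyReplacement:    Strict   [eth0]\n' | kpr_mode
```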

    If you are migrating an existing cluster, you need to manually roll the cilium DaemonSet before rolling the cluster:

        kops update cluster
        kops update cluster --yes
        kubectl rollout restart ds/cilium -n kube-system

    ENI IPAM

    This feature is in beta state.

    You can have Cilium provision AWS-managed addresses and attach them directly to Pods, much like the AWS VPC networking provider. See the Cilium ENI IPAM docs for more information.

        networking:
          cilium:
            ipam: eni

    In kOps versions before 1.22, when using ENI IPAM you need to explicitly disable masquerading in Cilium as well.

        networking:
          cilium:
            disableMasquerade: true
            ipam: eni

    Note that since the Cilium Operator is the entity that interacts with the EC2 API to provision and attach ENIs, it is forced to run on the master nodes when this IPAM is used.

    Enabling Encryption in Cilium

    ipsec

    As of kOps 1.19, it is possible to enable encryption for the Cilium agent. In order to enable encryption, you must first generate the pre-shared key using this command:

        cat <<EOF | kops create secret ciliumpassword -f -
        keys: $(echo "3 rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128")
        EOF

    The above command creates a dedicated secret for Cilium and stores it in the kOps secret store. Once the secret has been created, encryption can be enabled by setting the enableEncryption option in spec.networking.cilium to true:

        networking:
          cilium:
            enableEncryption: true
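    For reference, the IPsec pre-shared key Cilium consumes is a string of the form `<SPI> rfc4106(gcm(aes)) <hex key material> <ICV bits>`. The sketch below generates one such string; the `rand_hex` helper is illustrative (the kOps secret itself is managed via `kops create secret ciliumpassword`):

```shell
# Sketch: build an IPsec key string in the format Cilium's encryption uses:
# "<SPI> rfc4106(gcm(aes)) <40-hex-char key> 128".
# 20 random bytes -> 40 hex characters; 128 is the GCM ICV length in bits.
# rand_hex is a hypothetical helper, not a kOps or Cilium command.
rand_hex() {
  dd if=/dev/urandom bs=1 count="$1" 2>/dev/null | od -An -v -tx1 | tr -d ' \n'
}

key="3 rfc4106(gcm(aes)) $(rand_hex 20) 128"
echo "$key"
```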
    wireguard

    Cilium can make use of the WireGuard protocol for transparent encryption. Take care to familiarise yourself with the limitations described in the Cilium WireGuard documentation.

        networking:
          cilium:
            enableEncryption: true
            enableL7Proxy: false
            encryptionType: wireguard

    Resources in Cilium

    As of kOps 1.20, it is possible to choose your own resource values for the Cilium agent and operator. Example:

        networking:
          cilium:
            cpuRequest: "25m"
            memoryRequest: "128Mi"

    Hubble

    Hubble is the observability layer of Cilium and can be used to obtain cluster-wide visibility into the network and security layers of your Kubernetes cluster. See the Hubble documentation for more information.

    Hubble can be enabled by adding the following to the spec:

        networking:
          cilium:
            hubble:
              enabled: true

    This will enable Hubble in the Cilium agent as well as install hubble-relay. kOps will also configure mTLS between the Cilium agent and relay. Note that since the Hubble UI does not support TLS, the relay is not configured to listen on a secure port.

    The Hubble UI has to be installed separately.
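    As a sketch only: if you deploy the Hubble UI from the upstream Cilium Helm chart (an assumption about your setup, since kOps does not manage the UI), the relevant chart value is `hubble.ui.enabled`:

```yaml
# Hypothetical Helm values fragment for the upstream Cilium chart; this is
# not part of the kOps cluster spec. Check the chart's documentation for
# the values matching your Cilium version.
hubble:
  ui:
    enabled: true
```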

    For support with Cilium Network Policies, you can reach out on Slack or GitHub.