Kubernetes policy, demo

    To create a Kubernetes cluster that supports the Kubernetes network policy API, follow one of the getting started guides.

    Running the stars example
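
    The demo consists of a management UI plus frontend, backend, and client Services, each installed from a manifest published with the tutorial. As a rough sketch of that step (the file names below are hypothetical stand-ins for the published manifests):

        kubectl create -f namespace.yaml       # hypothetical: creates the demo Namespaces
        kubectl create -f management-ui.yaml   # hypothetical: management UI (NodePort Service)
        kubectl create -f backend.yaml         # hypothetical: backend Service
        kubectl create -f frontend.yaml        # hypothetical: frontend Service
        kubectl create -f client.yaml          # hypothetical: client Service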

    Wait for all the pods to enter Running state.

    1. kubectl get pods --all-namespaces --watch

    The management UI runs as a NodePort Service on Kubernetes, and shows the connectivity of the Services in this example.
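
    To find where to point your browser, look up the NodePort assigned to the UI's Service (the Namespace and Service names here are assumptions based on the published demo):

        kubectl get svc -n management-ui       # note the NodePort in the PORT(S) column
        # then browse to http://<any-node-ip>:<node-port>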

    Once all the pods are started, they should have full connectivity. You can see this by visiting the UI. Each service is represented by a single node in the graph.

    • backend -> Node “B”
    • frontend -> Node “A”
    • client -> Node “C”

    2) Enable isolation

    Running the following commands will prevent all access to the frontend, backend, and client Services.
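
    The commands apply a default-deny ingress policy in each demo Namespace. As a sketch of what that looks like (the Namespace names stars and client, and the file name default-deny.yaml, are assumptions based on the published demo):

        # default-deny.yaml: selects every pod in the Namespace and allows no ingress
        kind: NetworkPolicy
        apiVersion: networking.k8s.io/v1
        metadata:
          name: default-deny
        spec:
          podSelector:
            matchLabels: {}

        kubectl create -n stars -f default-deny.yaml
        kubectl create -n client -f default-deny.yaml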

    Confirm isolation

    Refresh the management UI (it may take up to 10 seconds for changes to be reflected in the UI). Now that we’ve enabled isolation, the UI can no longer access the pods, and so they will no longer show up in the UI.
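
    You can also confirm that the policies were created (Namespace names assumed as above):

        kubectl get networkpolicy -n stars
        kubectl get networkpolicy -n client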

    3) Allow the UI to access the Services

    Apply the following YAMLs to allow access from the management UI.

    1. kubectl create -f https://docs.tigera.io/files/allow-ui.yaml
    2. kubectl create -f https://docs.tigera.io/files/allow-ui-client.yaml
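
    For reference, a sketch of the shape of these policies: each admits ingress from the management UI's Namespace to every pod in the target Namespace (the namespaceSelector label role: management-ui is an assumption based on the published demo; see the linked files for the real definitions):

        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: allow-ui
          namespace: stars
        spec:
          podSelector:
            matchLabels: {}
          ingress:
            - from:
                - namespaceSelector:
                    matchLabels:
                      role: management-ui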

    4) Create the backend-policy.yaml file to allow traffic from the frontend to the backend
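
    A sketch of what backend-policy.yaml needs to contain: a policy that selects the backend pods and admits ingress from the frontend pods on TCP port 6379 only (the role labels and the stars Namespace are assumptions based on the published demo):

        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: backend-policy
          namespace: stars
        spec:
          podSelector:
            matchLabels:
              role: backend
          ingress:
            - from:
                - podSelector:
                    matchLabels:
                      role: frontend
              ports:
                - protocol: TCP
                  port: 6379

        kubectl create -f backend-policy.yaml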

    Refresh the UI. You should see the following:

    • The frontend can now access the backend (on TCP port 6379 only).
    • The backend cannot access the frontend at all.

    5) Allow traffic from the client to the frontend

    Apply one more policy to admit ingress to the frontend from the client Namespace, then refresh the UI. The client can now access the frontend, but not the backend. Neither the frontend nor the backend can initiate connections to the client. The frontend can still access the backend.
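
    As a sketch of that policy (the published demo ships it as frontend-policy.yaml; the names and labels here are assumptions):

        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: frontend-policy
          namespace: stars
        spec:
          podSelector:
            matchLabels:
              role: frontend
          ingress:
            - from:
                - namespaceSelector:
                    matchLabels:
                      role: client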

    To use Calico to enforce egress policy on Kubernetes pods, see the advanced network policy tutorial in the Calico documentation.

    6) (Optional) Clean up the demo environment

    You can clean up the demo by deleting the demo Namespaces:
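
    A sketch of the cleanup, assuming the Namespace names used by the published demo (stars, client, and management-ui):

        kubectl delete ns client stars management-ui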