Communicate Between Containers in the Same Pod Using a Shared Volume

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds available online.

    To check the version, enter kubectl version.

    In this exercise, you create a Pod that runs two Containers. The two containers share a Volume that they can use to communicate. Here is the configuration file for the Pod:

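    (The exact file is pods/two-container-pod.yaml, applied with kubectl later on this page. The sketch below is reconstructed from the description that follows, so the emptyDir volume type, restartPolicy: Never, and the precise command/args wording are assumptions.)

        apiVersion: v1
        kind: Pod
        metadata:
          name: two-containers
        spec:
          restartPolicy: Never      # assumed, so the debian container can exit without being restarted
          volumes:
          - name: shared-data
            emptyDir: {}            # assumed volume type, shared by both containers
          containers:
          - name: nginx-container
            image: nginx
            volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
          - name: debian-container
            image: debian
            volumeMounts:
            - name: shared-data
              mountPath: /pod-data
            command: ["/bin/sh"]
            args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
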
    In the configuration file, you can see that the Pod has a Volume named shared-data.

    The first container listed in the configuration file runs an nginx server. The mount path for the shared Volume is /usr/share/nginx/html. The second container is based on the debian image, and has a mount path of /pod-data. The second container runs the following command and then terminates.

        echo Hello from the debian container > /pod-data/index.html

    Notice that the second container writes the index.html file in the root directory of the nginx server.

    Create the Pod and the two containers:

        kubectl apply -f https://k8s.io/examples/pods/two-container-pod.yaml
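
    If the Pod is created successfully, the output is similar to this:

        pod/two-containers created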

    View information about the Pod and the Containers:
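
    For example (this particular command is an assumption; it prints the whole Pod object as YAML, which is the form of the excerpt below):

        kubectl get pod two-containers --output=yaml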

    Here is a portion of the output:

        apiVersion: v1
        kind: Pod
        metadata:
          ...
          name: two-containers
          namespace: default
        spec:
          ...
          containerStatuses:
          - containerID: docker://c1d8abd1 ...
            image: debian
            ...
            lastState:
              terminated:
                ...
            name: debian-container
            ...
          - containerID: docker://96c1ff2c5bb ...
            ...
            name: nginx-container
            ...
            state:
              running:
                ...

    You can see that the debian Container has terminated, and the nginx Container is still running.

    Get a shell to the nginx Container:

        kubectl exec -it two-containers -c nginx-container -- /bin/bash

    In your shell, verify that nginx is running:
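
    One way to do this (an assumption; ps comes from the procps package and may need to be installed first, for example with apt-get update && apt-get install -y procps) is to list the running processes:

        root@two-containers:/# ps aux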

    The output is similar to this:

        USER  PID  ...  STAT  START  TIME  COMMAND
        root    1  ...  Ss    21:12  0:00  nginx: master process nginx -g daemon off;

    Recall that the debian Container created the index.html file in the nginx root directory. Use curl to send a GET request to the nginx server:

        root@two-containers:/# curl localhost
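
    The output is the text that the debian container wrote to index.html in the shared Volume:

        Hello from the debian container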

    The primary reason that Pods can have multiple containers is to support helper applications that assist a primary application. Typical examples of helper applications are data pullers, data pushers, and proxies. Helper and primary applications often need to communicate with each other. Typically this is done through a shared filesystem, as shown in this exercise, or through the loopback network interface, localhost. An example of this pattern is a web server along with a helper program that polls a Git repository for new updates.
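
    As a minimal sketch of the localhost variant of this pattern (the Pod name, images, and polling loop here are illustrative assumptions, not part of this exercise), a helper container can reach a web server in the same Pod over the loopback interface:

        apiVersion: v1
        kind: Pod
        metadata:
          name: localhost-example        # hypothetical Pod, not used in this exercise
        spec:
          containers:
          - name: web
            image: nginx                 # listens on port 80 inside the Pod
          - name: helper
            image: curlimages/curl       # assumed image that provides curl and a shell
            command: ["/bin/sh", "-c"]
            # The helper polls the web container over the Pod's loopback interface.
            args: ["while true; do curl -s http://localhost:80/ >/dev/null; sleep 10; done"]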

    The Volume in this exercise provides a way for Containers to communicate during the life of the Pod. If the Pod is deleted and recreated, any data stored in the shared Volume is lost.