Example: Deploying PHP Guestbook application with Redis

    • A single-instance Redis to store guestbook entries
    • Multiple web frontend instances
    • Start up a Redis leader.
    • Start up two Redis followers.
    • Start up the guestbook frontend.
    • Expose and view the Frontend Service.
    • Clean up.

    Before you begin

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.

    Your Kubernetes server must be at or later than version v1.14. To check the version, enter kubectl version.

    The guestbook application uses Redis to store its data.

    The manifest file, included below, specifies a Deployment controller that runs a single-replica Redis Pod.

    application/guestbook/redis-leader-deployment.yaml

    1. Launch a terminal window in the directory where you downloaded the manifest files.

    2. Apply the Redis Deployment from the redis-leader-deployment.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
    3. Query the list of Pods to verify that the Redis Pod is running:

      1. kubectl get pods

      The response should be similar to this:

      1. NAME READY STATUS RESTARTS AGE
      2. redis-leader-fb76b4755-xjr2n 1/1 Running 0 13s
    4. Run the following command to view the logs from the Redis leader Pod:

      1. kubectl logs -f deployment/redis-leader

    Creating the Redis leader Service

    The guestbook application needs to communicate with the Redis leader to write its data. You need to apply a Service to proxy the traffic to the Redis Pod. A Service defines a policy to access a set of Pods.

    application/guestbook/redis-leader-service.yaml

    # SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-leader
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      ports:
      - port: 6379
        targetPort: 6379
      selector:
        app: redis
        role: leader
        tier: backend
    1. Apply the Redis Service from the following redis-leader-service.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
    2. Query the list of Services to verify that the Redis Service is running:

      1. kubectl get service

      The response should be similar to this:

      1. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      2. kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 1m
      3. redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 16s

    Note: This manifest file creates a Service named redis-leader with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis Pod.
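    Because the Service selects Pods purely by labels, a quick way to confirm the match worked is to list the Service's endpoints; a sketch, assuming the leader Pod from the steps above is running:

    ```shell
    # Show which Pod IPs the redis-leader Service currently routes to.
    # An empty ENDPOINTS column would mean the label selector matched no Pods.
    kubectl get endpoints redis-leader
    ```

    If the ENDPOINTS column is empty, compare the Service's selector with the Pod's labels using kubectl describe.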

    Although the Redis leader is a single Pod, you can make it highly available and meet traffic demands by adding a few Redis followers, or replicas.

    application/guestbook/redis-follower-deployment.yaml

    # SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis-follower
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: redis
      template:
        metadata:
          labels:
            app: redis
            role: follower
            tier: backend
        spec:
          containers:
          - name: follower
            image: gcr.io/google_samples/gb-redis-follower:v2
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
            ports:
            - containerPort: 6379
    1. Apply the Redis Deployment from the following redis-follower-deployment.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
    2. Query the list of Pods to verify that the Redis follower replicas are running:

      1. kubectl get pods

      The response should be similar to this:

      1. NAME READY STATUS RESTARTS AGE
      2. redis-follower-dddfbdcc9-82sfr 1/1 Running 0 37s
      3. redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 38s
      4. redis-leader-fb76b4755-xjr2n 1/1 Running 0 11m

    Creating the Redis follower service

    The guestbook application needs to communicate with the Redis followers to read data. To make the Redis followers discoverable, you must set up another Service.

    application/guestbook/redis-follower-service.yaml

    1. Apply the Redis Service from the following redis-follower-service.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
    2. Query the list of Services to verify that the Redis Service is running:

      1. kubectl get service

      The response should be similar to this:

      1. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      2. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h
      3. redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 9s
      4. redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 6m10s

    Note: This manifest file creates a Service named redis-follower with a set of labels that match the labels previously defined, so the Service routes network traffic to the Redis follower Pods.

    Set up and Expose the Guestbook Frontend

    Now that you have the Redis storage of your guestbook up and running, start the guestbook web servers. Like the Redis followers, the frontend is deployed using a Kubernetes Deployment.

    The guestbook app uses a PHP frontend. It is configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface, and serves a jQuery-Ajax-based UX.

    application/guestbook/frontend-deployment.yaml

    # SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: guestbook
          tier: frontend
      template:
        metadata:
          labels:
            app: guestbook
            tier: frontend
        spec:
          containers:
          - name: php-redis
            image: gcr.io/google_samples/gb-frontend:v5
            env:
            - name: GET_HOSTS_FROM
              value: "dns"
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
            ports:
            - containerPort: 80
    1. Apply the frontend Deployment from the frontend-deployment.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
    2. Query the list of Pods to verify that the three frontend replicas are running:

      1. kubectl get pods -l app=guestbook -l tier=frontend

      The response should be similar to this:

      1. NAME READY STATUS RESTARTS AGE
      2. frontend-85595f5bf9-5tqhb 1/1 Running 0 47s
      3. frontend-85595f5bf9-qbzwm 1/1 Running 0 47s
      4. frontend-85595f5bf9-zchwc 1/1 Running 0 47s
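    The frontend Pods locate the Redis leader and follower Services by DNS name (the GET_HOSTS_FROM=dns environment variable tells the PHP app to look up the hostnames redis-leader and redis-follower). A hedged way to verify that in-cluster DNS resolves those names, using a throwaway busybox Pod (the image tag is an assumption):

    ```shell
    # Resolve the redis-leader Service name from inside the cluster.
    # The Pod is deleted automatically after the command exits (--rm).
    kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never \
      -- nslookup redis-leader
    ```

    The answer should contain the same cluster IP that kubectl get service reported for redis-leader.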

    Creating the Frontend Service

    The Redis Services you applied are only accessible within the Kubernetes cluster because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.

    If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, a Kubernetes user can use kubectl port-forward to access the Service even though it uses a ClusterIP.

    Note: Some cloud providers, like Google Compute Engine or Google Kubernetes Engine, support external load balancers. If your cloud provider supports load balancers and you want to use it, uncomment type: LoadBalancer.

    application/guestbook/frontend-service.yaml

    # SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
      labels:
        app: guestbook
        tier: frontend
    spec:
      # if your cluster supports it, uncomment the following to automatically create
      # an external load-balanced IP for the frontend service.
      # type: LoadBalancer
      ports:
        # the port that this service should serve on
      - port: 80
      selector:
        app: guestbook
        tier: frontend
    1. Apply the frontend Service from the frontend-service.yaml file:

      1. kubectl apply -f https://k8s.io/examples/application/guestbook/frontend-service.yaml
    2. Query the list of Services to verify that the frontend Service is running:

      1. kubectl get services

      The response should be similar to this:

      1. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      2. frontend ClusterIP 10.97.28.230 <none> 80/TCP 19s
      3. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d19h
      4. redis-follower ClusterIP 10.110.162.42 <none> 6379/TCP 5m48s
      5. redis-leader ClusterIP 10.103.78.24 <none> 6379/TCP 11m
    1. Run the following command to forward port 8080 on your local machine to port 80 on the service.

      1. kubectl port-forward svc/frontend 8080:80

      The response should be similar to this:

      1. Forwarding from 127.0.0.1:8080 -> 80
      2. Forwarding from [::1]:8080 -> 80

    2. Load the page http://localhost:8080 in your browser to view your guestbook.
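    While the port-forward is running, you can also exercise the frontend from a second terminal; a quick sketch:

    ```shell
    # Fetch the guestbook page through the forwarded port.
    # -f makes curl fail loudly on HTTP errors; -s silences the progress bar.
    curl -fs http://localhost:8080/ | head -n 5
    ```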

    Viewing the Frontend Service via LoadBalancer

    If you deployed the frontend-service.yaml manifest with type: LoadBalancer, you need to find the IP address to view your guestbook.

    1. Run the following command to get the IP address for the frontend Service.

      1. kubectl get service frontend

      The response should be similar to this:

      1. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
      2. frontend LoadBalancer 10.51.242.136 109.197.92.229 80:32372/TCP 1m
    2. Copy the external IP address, and load the page in your browser to view your guestbook.

    Note: Try adding some guestbook entries by typing in a message and clicking Submit. The message you typed appears in the frontend. This indicates that the data was successfully added to Redis through the Services you created earlier.
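    The JSON interface mentioned earlier can also be exercised directly from the command line. The guestbook.php query parameters below come from the gb-frontend sample and may differ between image versions, so treat this as a sketch:

    ```shell
    # Read the stored messages through the frontend's JSON endpoint
    # (assumes a kubectl port-forward is still forwarding localhost:8080,
    # or substitute the LoadBalancer's external IP for localhost:8080).
    curl -s 'http://localhost:8080/guestbook.php?cmd=get&key=messages'
    ```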

    Scale the Web Frontend

    You can scale up or down as needed because your servers are defined as a Service that uses a Deployment controller.

    1. Run the following command to scale up the number of frontend Pods:

      1. kubectl scale deployment frontend --replicas=5
    2. Query the list of Pods to verify the number of frontend Pods running:

      1. kubectl get pods

      The response should look similar to this:

      1. NAME READY STATUS RESTARTS AGE
      2. frontend-85595f5bf9-5df5m 1/1 Running 0 83s
      3. frontend-85595f5bf9-7zmg5 1/1 Running 0 83s
      4. frontend-85595f5bf9-cpskg 1/1 Running 0 15m
      5. frontend-85595f5bf9-l2l54 1/1 Running 0 14m
      6. frontend-85595f5bf9-l9c8z 1/1 Running 0 14m
      7. redis-follower-dddfbdcc9-82sfr 1/1 Running 0 97m
      8. redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 97m
      9. redis-leader-fb76b4755-xjr2n 1/1 Running 0 108m
    3. Run the following command to scale down the number of frontend Pods:

      1. kubectl scale deployment frontend --replicas=2
    4. Query the list of Pods to verify the number of frontend Pods running:

      1. kubectl get pods

      The response should look similar to this:

      1. NAME READY STATUS RESTARTS AGE
      2. frontend-85595f5bf9-cpskg 1/1 Running 0 16m
      3. frontend-85595f5bf9-l9c8z 1/1 Running 0 15m
      4. redis-follower-dddfbdcc9-82sfr 1/1 Running 0 98m
      5. redis-follower-dddfbdcc9-qrt5k 1/1 Running 0 98m
      6. redis-leader-fb76b4755-xjr2n 1/1 Running 0 109m
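    Rather than counting Pods by hand after each scale command, you can read the Deployment's replica counts directly; a sketch using kubectl's jsonpath output:

    ```shell
    # .spec.replicas is the desired count; .status.readyReplicas is how many
    # replicas are currently serving traffic.
    kubectl get deployment frontend \
      -o jsonpath='{.spec.replicas}{" desired, "}{.status.readyReplicas}{" ready"}{"\n"}'
    ```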

    Cleaning up

    Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
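    Before deleting anything, you can preview exactly which resources a label selector will match; a sketch:

    ```shell
    # List the Deployments and Services carrying the app=redis label,
    # i.e. the resources the label-based delete commands below will remove.
    kubectl get deployments,services -l app=redis
    ```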

    1. Run the following commands to delete all Pods, Deployments, and Services.

      1. kubectl delete deployment -l app=redis
      2. kubectl delete service -l app=redis
      3. kubectl delete deployment frontend
      4. kubectl delete service frontend

      The response should look similar to this:

      1. deployment.apps "redis-follower" deleted
      2. deployment.apps "redis-leader" deleted
      3. deployment.apps "frontend" deleted
      4. service "frontend" deleted
    2. Query the list of Pods to verify that no Pods are running:

      1. kubectl get pods

      The response should look similar to this:

      1. No resources found in default namespace.