Connect a Frontend to a Backend Using Services

    • Create and run a sample hello backend microservice using a Deployment object.
    • Use a Service object to send traffic to the backend microservice’s multiple replicas.
    • Create and run an nginx frontend microservice, also using a Deployment object.
    • Configure the frontend microservice to send traffic to the backend microservice.
    • Use a Service object of type=LoadBalancer to expose the frontend microservice outside the cluster.

    Before you begin

    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using a tool such as minikube, or you can use one of the Kubernetes playgrounds available online.

    To check the version, enter kubectl version.

    This task uses Services with external load balancers, which require a supported environment. If your environment does not support this, you can use a Service of type NodePort instead.
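
    As a rough illustration of that alternative, a NodePort version of the frontend Service created later in this tutorial might look like the sketch below. This is an adaptation for illustration, not a manifest shipped with the tutorial, and the nodePort value is an arbitrary example:

        apiVersion: v1
        kind: Service
        metadata:
          name: frontend
        spec:
          type: NodePort              # expose the Service on a port of every node
          selector:
            app: hello
            tier: frontend
          ports:
          - protocol: TCP
            port: 80
            targetPort: 80
            nodePort: 30080           # illustrative; omit to let Kubernetes choose a port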

    Creating the backend using a Deployment

    The backend is a simple hello greeter microservice. Here is the configuration file for the backend Deployment:
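
    Here is a sketch of that manifest, reconstructed from the kubectl describe output shown below; the published file at https://k8s.io/examples/service/access/backend-deployment.yaml is the authoritative version and may differ in details:

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: backend
        spec:
          selector:
            matchLabels:
              app: hello
              tier: backend
              track: stable
          replicas: 3
          template:
            metadata:
              labels:
                app: hello
                tier: backend
                track: stable
            spec:
              containers:
              - name: hello
                image: "gcr.io/google-samples/hello-go-gke:1.0"
                ports:
                - name: http
                  containerPort: 80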

    Create the backend Deployment:

        kubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml

    View information about the backend Deployment:

        kubectl describe deployment backend

    The output is similar to this:

        Name:                   backend
        Namespace:              default
        CreationTimestamp:      Mon, 24 Oct 2016 14:21:02 -0700
        Labels:                 app=hello
                                tier=backend
                                track=stable
        Annotations:            deployment.kubernetes.io/revision=1
        Selector:               app=hello,tier=backend,track=stable
        Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
        StrategyType:           RollingUpdate
        MinReadySeconds:        0
        RollingUpdateStrategy:  1 max unavailable, 1 max surge
        Pod Template:
          Labels:       app=hello
                        tier=backend
                        track=stable
          Containers:
           hello:
            Image:              "gcr.io/google-samples/hello-go-gke:1.0"
            Port:               80/TCP
            Environment:        <none>
          Volumes:              <none>
        Conditions:
          Type          Status  Reason
          ----          ------  ------
          Available     True    MinimumReplicasAvailable
          Progressing   True    NewReplicaSetAvailable
        OldReplicaSets: <none>
        NewReplicaSet:  hello-3621623197 (3/3 replicas created)
        Events:
        ...

    The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selectors to find the Pods that it routes traffic to.

    First, explore the Service configuration file:

        apiVersion: v1
        kind: Service
        metadata:
          name: hello
        spec:
          selector:
            app: hello
            tier: backend
          ports:
          - protocol: TCP
            port: 80
            targetPort: http
        ...

    In the configuration file, you can see that the Service, named hello, routes traffic to Pods that have the labels app: hello and tier: backend.
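
    If you want to see which Pods that selector matches, one way (not a step in the original tutorial) is to query by the same labels:

        # list the backend Pods that the "hello" Service will route traffic to
        kubectl get pods -l app=hello,tier=backend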

    Create the backend Service:

        kubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml

    At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor resolvable outside the cluster.
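
    To confirm from inside the cluster that the Service exists and has picked up the three backend Pods as endpoints, you could run something like the following (an illustrative check, not part of the original tutorial):

        # the Service gets a cluster IP, and its endpoints are the backend Pod addresses
        kubectl get service hello
        kubectl get endpoints hello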

    Creating the frontend

    Now that you have your backend running, you can create a frontend that is accessible outside the cluster and that connects to the backend by proxying requests to it.

    The frontend sends requests to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is hello, which is the value of the name field in the examples/service/access/backend-service.yaml configuration file.
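
    Because the Service lives in the default namespace, the short name hello resolves inside the cluster (the fully qualified form is typically hello.default.svc.cluster.local). If you want to check that resolution yourself, one way is to run a throwaway Pod and look the name up; the Pod name and image here are illustrative:

        # run a temporary busybox Pod and resolve the backend Service name
        kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup hello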

    The Pods in the frontend Deployment run an nginx image that is configured to proxy requests to the hello backend Service. Here is the nginx configuration file:

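    The configuration itself is not reproduced here. A minimal sketch of an nginx configuration that proxies every request to the hello Service would look like the following; the directives in the published frontend example may differ:

        # forward all traffic to the backend Service, reachable by its DNS name "hello"
        upstream hello {
            server hello;
        }

        server {
            listen 80;

            location / {
                proxy_pass http://hello;
            }
        }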

    Similar to the backend, the frontend has a Deployment and a Service. An important difference to notice between the backend and frontend Services is that the configuration for the frontend Service has type: LoadBalancer, which means that the Service uses a load balancer provisioned by your cloud provider and will be accessible from outside the cluster.

    service/access/frontend-service.yaml

        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: frontend
        spec:
          selector:
            app: hello
            tier: frontend
          ports:
          - protocol: "TCP"
            port: 80
            targetPort: 80
          type: LoadBalancer

    service/access/frontend-deployment.yaml

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: frontend
        spec:
          selector:
            matchLabels:
              app: hello
              tier: frontend
              track: stable
          replicas: 1
          template:
            metadata:
              labels:
                app: hello
                tier: frontend
                track: stable
            spec:
              containers:
              - name: nginx
                image: "gcr.io/google-samples/hello-frontend:1.0"
                lifecycle:
                  preStop:
                    exec:
                      command: ["/usr/sbin/nginx", "-s", "quit"]
        ...
    Create the frontend Deployment and Service:

        kubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml
        kubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml

    The output verifies that both resources were created:

        deployment.apps/frontend created
        service/frontend created

    Note: The nginx configuration is baked into the container image. A better way to do this would be to use a ConfigMap, so that you can change the configuration more easily.
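
    A rough sketch of that approach: store the nginx configuration in a ConfigMap and mount it into the frontend Pods. The ConfigMap name and key below are illustrative assumptions, not part of the published example:

        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: frontend-nginx-conf        # hypothetical name
        data:
          frontend.conf: |
            upstream hello {
                server hello;
            }
            server {
                listen 80;
                location / {
                    proxy_pass http://hello;
                }
            }

    In the frontend Deployment's Pod template you would then mount this ConfigMap as a volume over the image's configuration directory (for example /etc/nginx/conf.d), so changing the proxy configuration only requires updating the ConfigMap and restarting the Pods rather than rebuilding the image.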

    Interact with the frontend Service

    Once you’ve created a Service of type LoadBalancer, you can use this command to find the external IP:

        kubectl get service frontend --watch

    This displays the configuration for the frontend Service and watches for changes. Initially, the external IP is listed as <pending> (the addresses and timings shown here are illustrative):
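
        NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
        frontend   LoadBalancer   10.51.252.116   <pending>     80/TCP    10s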

    As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the EXTERNAL-IP heading:

        NAME       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)   AGE
        frontend   LoadBalancer   10.51.252.116   XXX.XXX.XXX.XXX   80/TCP    1m

    That IP can now be used to interact with the frontend service from outside the cluster.
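
    If you prefer to capture that address in a shell variable for the curl command below, one way (assuming your provider reports an IP address rather than a hostname) is:

        # read the external IP straight from the Service status
        EXTERNAL_IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')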

    The frontend and backend are now connected. You can hit the endpoint by using the curl command on the external IP of your frontend Service.

        curl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier

    The output shows the message generated by the backend:

    1. {"message":"Hello"}

    Cleaning up

    To delete the Services, enter this command:
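
        # the backend Service is named "hello" in this tutorial's manifests
        kubectl delete services frontend hello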

    To delete the Deployments, the ReplicaSets and the Pods that are running the backend and frontend applications, enter this command:

        kubectl delete deployment frontend backend

    What’s next

    • Learn more about Services
    • Learn more about ConfigMaps