Example: Add logging and metrics to the PHP / Redis Guestbook example

    • Start up the PHP Guestbook with Redis.
    • Install kube-state-metrics.
    • Create a Kubernetes secret.
    • View dashboards of your logs and metrics.

    Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of the Kubernetes playgrounds.

To check the version, enter kubectl version.

Additionally, you need a running Elasticsearch and Kibana deployment: either the managed Elasticsearch Service in Elastic Cloud or a self managed deployment.

    Start up the PHP Guestbook with Redis

    This tutorial builds on the PHP Guestbook with Redis tutorial. If you have the guestbook application running, then you can monitor that. If you do not have it running then follow the instructions to deploy the guestbook and do not perform the Cleanup steps. Come back to this page when you have the guestbook running.

    Add a Cluster role binding

    Create a cluster level role binding so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system).
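As a minimal sketch of such a binding (the user name here is a hypothetical placeholder; substitute the account you use to authenticate to your cluster):

```shell
# Grant the cluster-admin role to your own user so that the
# kube-state-metrics and Beats manifests can be deployed in kube-system.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=<your-cluster-admin-user>
```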

    Install kube-state-metrics

    Kubernetes kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.

Check to see if kube-state-metrics is already running:

kubectl get pods --namespace=kube-system | grep kube-state

    Install kube-state-metrics if needed

git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/examples/standard
kubectl get pods --namespace=kube-system | grep kube-state-metrics

    Verify that kube-state-metrics is running and ready

kubectl get pods -n kube-system -l app.kubernetes.io/name=kube-state-metrics

    Output:

NAME                                 READY   STATUS    RESTARTS   AGE
kube-state-metrics-89d656bf8-vdthm   2/2     Running   0          21s
Clone the Elastic examples GitHub repo:

git clone https://github.com/elastic/examples.git

The rest of the commands reference files in the examples/beats-k8s-send-anywhere directory, so change directory there:

cd examples/beats-k8s-send-anywhere

    Create a Kubernetes Secret

A Kubernetes Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.

    Self managed

    Switch to the Managed service tab if you are connecting to Elasticsearch Service in Elastic Cloud.

    Set the credentials

    There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud). The files are:

    • ELASTICSEARCH_HOSTS
    • ELASTICSEARCH_PASSWORD
    • ELASTICSEARCH_USERNAME
• KIBANA_HOST

Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples:

    ELASTICSEARCH_HOSTS

• A nodeGroup from the Elastic Elasticsearch Helm Chart:
["http://elasticsearch-master.default.svc.cluster.local:9200"]
• A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:
["http://host.docker.internal:9200"]
• Two Elasticsearch nodes running in VMs or on physical hardware:
["http://host1.example.com:9200", "http://host2.example.com:9200"]

Edit ELASTICSEARCH_HOSTS

vi ELASTICSEARCH_HOSTS

    ELASTICSEARCH_PASSWORD

    Just the password; no whitespace, quotes, or <>:

<yoursecretpassword>

    Edit ELASTICSEARCH_PASSWORD

vi ELASTICSEARCH_PASSWORD

    ELASTICSEARCH_USERNAME

Just the username; no whitespace, quotes, or <>:

<yourusername>

    Edit ELASTICSEARCH_USERNAME

vi ELASTICSEARCH_USERNAME

    KIBANA_HOST

    • The Kibana instance from the Elastic Kibana Helm Chart. The subdomain default refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:
"kibana-kibana.default.svc.cluster.local:5601"
• A Kibana instance running on a Mac where your Beats are running in Docker for Mac:
"host.docker.internal:5601"
• A Kibana instance running in a VM or on physical hardware:
"host1.example.com:5601"

    Edit KIBANA_HOST

vi KIBANA_HOST

    This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

kubectl create secret generic dynamic-logging \
  --from-file=./ELASTICSEARCH_HOSTS \
  --from-file=./ELASTICSEARCH_PASSWORD \
  --from-file=./ELASTICSEARCH_USERNAME \
  --from-file=./KIBANA_HOST \
  --namespace=kube-system
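To confirm the secret was created with the expected keys (a quick check, not part of the original page, assuming the kube-system namespace used above):

```shell
# Describe the secret; the Data section should list the four file names
# (ELASTICSEARCH_HOSTS, ELASTICSEARCH_PASSWORD, ELASTICSEARCH_USERNAME,
# KIBANA_HOST) without printing their values.
kubectl describe secret dynamic-logging -n kube-system
```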

    Managed service

This tab is for Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with Deploy the Beats.

    Set the credentials

    There are two files to edit to create a k8s secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:

    • ELASTIC_CLOUD_AUTH
• ELASTIC_CLOUD_ID

Set these with the information provided to you from the Elasticsearch Service console when you created the deployment. Here are some examples:

    ELASTIC_CLOUD_ID

devk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==

    ELASTIC_CLOUD_AUTH

Just the username, a colon (:), and the password; no whitespace or quotes:

elastic:VFxJJf9Tjwer90wnfTghsn8w

    Edit the required files:

vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH

    Create a Kubernetes secret

    This command creates a secret in the Kubernetes system level namespace (kube-system) based on the files you just edited:

kubectl create secret generic dynamic-logging \
  --from-file=./ELASTIC_CLOUD_ID \
  --from-file=./ELASTIC_CLOUD_AUTH \
  --namespace=kube-system

    Deploy the Beats

    Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.

Filebeat will collect logs from the Kubernetes nodes and the containers running in each pod on those nodes. Filebeat is deployed as a DaemonSet, which ensures that a copy of the Pod is running across a set of nodes in the cluster. Filebeat can autodiscover applications running in your Kubernetes cluster. At startup Filebeat scans existing containers and launches the proper configurations for them, then watches for new start/stop events.

    Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application. This configuration is in the file filebeat-kubernetes.yaml:

- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]

    This configures Filebeat to apply the Filebeat module redis when a container is detected with a label app containing the string redis. The redis module has the ability to collect the log stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module has the ability to collect Redis slowlog entries by connecting to the proper pod host and port, which is provided in the container metadata.

    Deploy Filebeat:

kubectl create -f filebeat-kubernetes.yaml

    Verify
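The verification command did not survive extraction here. Following the pattern of the Packetbeat check later on this page, and assuming the DaemonSet pods carry a k8s-app=filebeat-dynamic label, a plausible check is:

```shell
# List the Filebeat DaemonSet pods; one pod per node should be Running.
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
```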

About Metricbeat

Metricbeat, like Filebeat, supports autodiscover. Here is the autodiscover configuration that enables Metricbeat to collect metrics from the Redis containers deployed with the guestbook application. This configuration is in the file metricbeat-kubernetes.yaml:

- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s
      # Redis hosts
      hosts: ["${data.host}:${data.port}"]

    This configures Metricbeat to apply the Metricbeat module redis when a container is detected with a label tier equal to the string backend. The redis module has the ability to collect the info and keyspace metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.

Deploy Metricbeat:

kubectl create -f metricbeat-kubernetes.yaml

    Verify
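The verification command did not survive extraction here either. A plausible check, assuming the Metricbeat DaemonSet pods are labeled k8s-app=metricbeat (the label value is an assumption taken from the manifest's naming pattern, not from this page):

```shell
# List the Metricbeat DaemonSet pods; one pod per node should be Running.
kubectl get pods -n kube-system -l k8s-app=metricbeat
```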

Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.

Note: If you are running a service on a non-standard port, add that port number to the appropriate protocol type in packetbeat-kubernetes.yaml and delete / create the Packetbeat DaemonSet.
packetbeat.interfaces.device: any
packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true
- type: http
  ports: [80, 8000, 8080, 9200]
- type: mysql
  ports: [3306]
- type: redis
  ports: [6379]
packetbeat.flows:
  timeout: 30s
  period: 10s

Deploy Packetbeat:

kubectl create -f packetbeat-kubernetes.yaml

    Verify

kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic

    View in Kibana

Open Kibana in your browser and then open the Dashboard application. In the search bar type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your Nodes, Deployments, and so on.

    Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.

    Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank. Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.

    To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap including a mod-status configuration file and re-deploy the guestbook.
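As a sketch of that idea, a ConfigMap carrying an Apache mod_status snippet might look like the following. The ConfigMap name, file name, and server-status location here are hypothetical illustrations, not taken from this tutorial; the ConfigMap would then be mounted into the Apache container's configuration directory and the guestbook re-deployed.

```yaml
# Hypothetical example: a ConfigMap holding a mod_status config snippet.
# Name and key are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-status-config
data:
  status.conf: |
    <Location "/server-status">
        SetHandler server-status
    </Location>
```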

    List the existing deployments:

kubectl get deployments

    The output:

NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend       3/3     3            3           3h27m
redis-master   1/1     1            1           3h27m
redis-slave    2/2     2            2           3h27m

    Scale the frontend down to two pods:

kubectl scale --replicas=2 deployment/frontend

    The output:

deployment.extensions/frontend scaled
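To confirm the scale-down took effect (a simple check, not part of the original page):

```shell
# The frontend deployment should now report 2/2 ready replicas.
kubectl get deployment frontend
```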

    View the changes in Kibana

In the Kibana Discover view, filter on the frontend deployment and add the relevant columns to the view. You can see the ScalingReplicaSet entry; following from there to the top of the list of events shows the image being pulled, the volumes mounted, the pod starting, and so on.

    Cleaning up

    Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.

    • Run the following commands to delete all Pods, Deployments, and Services.
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
kubectl delete -f filebeat-kubernetes.yaml
kubectl delete -f metricbeat-kubernetes.yaml
kubectl delete -f packetbeat-kubernetes.yaml
kubectl delete secret dynamic-logging -n kube-system
• Query the list of Pods to verify that no Pods are running:

kubectl get pods

The response should be this:

No resources found.

