Intro

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

Developers normally don't set up a full Kubernetes cluster locally. There is a lot of networking and operational detail to learn, which can be overwhelming if you want to stay productive and focused. Instead, Google built Minikube, which runs a single-node Kubernetes cluster inside a VM. So you'll need virtualisation infrastructure in your local dev environment. I personally use KVM on Linux, which allows me to preserve the same IP for my VM between restarts.

This is a really good guide on how to set up minikube:

Steps

The default docker image is based on Alpine Linux, which causes issues with the DNS addon in Minikube (see ). To get around this problem, I rebuilt the docker image from "scratch".

  • Modified the Dockerfile in the docker folder
  • Added a new bash script to build the image using the docker file
  • Added 2 yaml files to create master and volume "deployments" (which create PODs and containers)
  • Added an additional yaml file to expose both volume and master "deployments" through a "service"

The new docker file uses the scratch base image and contains nothing but the application. You can always create different docker files for different needs; for instance, for troubleshooting purposes you could build development images that contain additional tools alongside the application.
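
As a rough sketch, and not the exact file from the repository, a scratch-based docker file for the statically built weed binary could look like this (the copy path and exposed ports are assumptions):

  # Empty base image: nothing but the statically linked binary.
  FROM scratch

  # The weed binary produced by the build script below, taken from the build context.
  COPY weed /weed

  # Master (9333) and volume (8080) HTTP ports used by the deployments.
  EXPOSE 9333 8080

  ENTRYPOINT ["/weed"]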

The script below uses the docker file to build my image. Note the following:

  • I'm building the application with cgo disabled which produces a static binary. This is why I can create the image from scratch without worrying about missing libraries on the operating system.
  • The image is built using the latest development code. You might want to pull the latest stable tag instead.
  • 192.168.42.23 is the IP of a private docker registry that I've set up, from which the image is pulled and deployed to the minikube cluster (later steps).
  #!/bin/sh

  # Fetch the SeaweedFS sources and build a static binary (cgo disabled).
  go get github.com/chrislusf/seaweedfs/weed/...
  CGO_ENABLED=0 GOOS=linux go build github.com/chrislusf/seaweedfs/weed

  # Build the scratch-based image and push it to the private registry.
  docker build -t weed:latest -f ./Dockerfile .
  docker tag weed:latest 192.168.42.23:80/weed:latest
  docker push 192.168.42.23:80/weed:latest

  # Remove dangling images left behind by the build.
  docker rmi $(docker images -qa -f 'dangling=true') 2>/dev/null

  exit 0

In order to deploy the docker image onto minikube in the form of "pods" and "services", we need yaml files. Quickly explaining what a POD and a Service are:

  • A POD is a group of containers that are deployed together on the same host. By default a POD has 1 container. In this case, I have 2 PODs, one for the master container and one for the volume container (each created through a "deployment"), and each POD has 1 container.
  • A service is a grouping of PODs running on the cluster. In this case, I'm creating one service that covers both the master and volume PODs.

Create master POD
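
A rough sketch of what the master deployment yaml could look like, assuming the master listens on port 9333, logs to /var/containerdata/logs and keeps its metadata under /var/containerdata/haystack/master (the folders created on the minikube host further down); names such as weedmasterdeployment and weedmaster are placeholders:

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: weedmasterdeployment      # placeholder name
  spec:
    template:
      metadata:
        labels:
          app: haystack
      spec:
        containers:
        - name: weedmaster          # placeholder name
          image: 192.168.42.23:80/weed:latest
          args: ["-log_dir", "/var/containerdata/logs", "master", "-port", "9333", "-mdir", "/var/containerdata/haystack/master"]
          ports:
          - containerPort: 9333
          volumeMounts:
          - mountPath: /var/containerdata
            name: vlm
        volumes:
        - name: vlm
          hostPath:
            path: /data/vlm         # host folder created in the minikube VM (see below)

Create volume POD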

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: weedvolumedeployment
  spec:
    template:
      metadata:
        labels:
          app: haystack
      spec:
        containers:
        - name: weedvol
          image: 192.168.42.23:80/weed:latest
          args: ["-log_dir", "/var/containerdata/logs", "volume", "-port", "8080", "-mserver", "haystackservice:9333", "-dir", "/var/containerdata/haystack/volume", "-ip", "haystackservice"]
          ports:
          - containerPort: 18080
          volumeMounts:
          - mountPath: /var/containerdata
            name: vlm
        volumes:
        - name: vlm
          hostPath:
            path: /data/vlm         # host folder created in the minikube VM (see below)

Create a service exposing PODs
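
A minimal sketch of what such a service could look like, assuming it is named haystackservice (the name the deployments reference) and exposes the master on node port 30069 and the volume server on node port 30070, matching the ports used in the examples further down; the exact values are assumptions:

  apiVersion: v1
  kind: Service
  metadata:
    name: haystackservice
  spec:
    type: NodePort
    selector:
      app: haystack        # matches the label used by both deployments
    ports:
    - name: master
      port: 9333
      targetPort: 9333
      nodePort: 30069
    - name: volume
      port: 8080
      targetPort: 8080
      nodePort: 30070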

Please note:

  • You'll need to slightly modify the yaml files that create the deployments (the first 2 yaml files) if you intend to run them against an actual Kubernetes cluster. Essentially, in the volumes section of the yaml file, instead of using a hostPath you'd need to specify a persistent volume (a persistent volume claim, to be exact); see the sketch after the folder-creation commands below. For more information about PVs and PVCs see https://kubernetes.io/docs/concepts/storage/persistent-volumes/.
  • If, however, you're deploying the application to minikube, you'll have to make sure the hostPath exists in the cluster prior to deployment. SSH into minikube and create the folders:
  $ minikube ssh
  $ sudo mkdir -p /data/vlm/logs && \
    sudo mkdir -p /data/vlm/haystack/master && \
    sudo mkdir /data/vlm/haystack/volume && \
    sudo chown -R docker:root /mnt/sda1/data/

/data is a soft link to /mnt/sda1/data, hence the use of the full path in the last command.
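
For an actual cluster, the volumes section of each deployment would reference a claim instead of a hostPath. A minimal sketch, assuming a hypothetical claim named vlmclaim and a cluster that can satisfy a 10Gi request:

  # A claim the deployments can mount instead of the hostPath.
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: vlmclaim           # hypothetical name
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

  # ...and in the deployment's pod spec, replace the hostPath volume with:
  #   volumes:
  #   - name: vlm
  #     persistentVolumeClaim:
  #       claimName: vlmclaim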

You can then deploy everything using the kubectl CLI tool (installed when you install minikube).
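
For example, assuming the yaml files are saved as master-deployment.yaml, volume-deployment.yaml and service.yaml (the file names are hypothetical), the deployment would look roughly like this:

  # Create the two deployments and the service.
  kubectl apply -f master-deployment.yaml
  kubectl apply -f volume-deployment.yaml
  kubectl apply -f service.yaml

  # Check that the PODs and the service came up.
  kubectl get pods
  kubectl get services

Once everything is running, you can exercise the cluster with requests like the following: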

  http://minikubecluster:30069/dir/assign
  curl -F file=@/home/amir/Downloads/hardhat.png http://minikubecluster:30070/2,0333d4fea4
  http://minikubecluster:30070/2,0333d4fea4
  http://minikubecluster:30070/ui/index.html

minikubecluster in my environment resolves to the IP address of the minikube VM, which you can get with the minikube ip command. The port numbers in these commands are the node ports defined as part of the service spec; they map to the internal container ports and forward all requests to them. For more information about Node Ports see .
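
If you want the same name resolution locally, one option (an assumption about your setup, not something minikube configures for you) is to add an /etc/hosts entry pointing at the VM's IP:

  # Map the minikube VM's IP to the hostname used in the requests above.
  echo "$(minikube ip) minikubecluster" | sudo tee -a /etc/hosts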