We are going to install a Rancher RKE custom cluster with a fixed number of nodes with the etcd and controlplane roles, and a variable number of nodes with the worker role, managed by cluster-autoscaler.
Prerequisites
These elements are required to follow this guide:
- The Rancher server is up and running
- You have an AWS EC2 user with proper permissions to create virtual machines, auto scaling groups, and IAM profiles and roles
1. Create a Custom Cluster
On the Rancher server, we should create a custom k8s cluster v1.18.x. Be sure that the cloud_provider name is set to amazonec2. Once the cluster is created, we need to get:
- clusterID: c-xxxxx, used in the kubernetes.io/cluster/<clusterID> EC2 instance tag
- clusterName: used in the k8s.io/cluster-autoscaler/<clusterName> EC2 instance tag
- nodeCommand: added to the EC2 instance user_data so that new nodes join the cluster
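These values are shown in the Rancher UI; they can also be read from the Rancher v3 API, as in this minimal sketch (the bearer token and jq filters are illustrative; nodeCommand lives on the cluster registration token):

curl -sk -H "Authorization: Bearer token-xxxxx:<secret>" "https://<RANCHER_URL>/v3/clusters/<clusterID>" | jq -r '.name'
curl -sk -H "Authorization: Bearer token-xxxxx:<secret>" "https://<RANCHER_URL>/v3/clusterregistrationtokens?clusterId=<clusterID>" | jq -r '.data[0].nodeCommand'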
2. Configure the Cloud Provider
On AWS EC2, we should create a few objects to configure our system. We’ve defined three distinct groups and IAM profiles to configure on AWS.
1. Autoscaling group: Nodes that will be part of the EC2 Auto Scaling Group (ASG). The ASG will be used by cluster-autoscaler to scale up and down.
- IAM profile: Required by the k8s nodes where cluster-autoscaler will be running; it is recommended to run it on the Kubernetes master nodes. This profile is called K8sAutoscalerProfile.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "autoscaling:DescribeTags",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
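For reference, the JSON above can be turned into a managed policy with the AWS CLI; a minimal sketch, assuming the document is saved as K8sAutoscalerProfile.json (the file and policy names are illustrative):

# Create the managed policy backing the K8sAutoscalerProfile instance profile
aws iam create-policy --policy-name K8sAutoscalerPolicy --policy-document file://K8sAutoscalerProfile.json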
2. Master group: Nodes that will be part of the Kubernetes etcd and/or control planes. This group lives outside the ASG.
- IAM profile: Required by the Kubernetes cloud_provider integration. Optionally, AWS_ACCESS_KEY and AWS_SECRET_KEY credentials can be used instead of the profile. This profile is called K8sMasterProfile.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
- IAM role: K8sMasterRole: [K8sMasterProfile, K8sAutoscalerProfile] (see the CLI sketch after this list)
- Security group: K8sMasterSg. More info at RKE ports (custom nodes tab).
- Tags: kubernetes.io/cluster/<clusterID>: owned
- User data: K8sMasterUserData. Ubuntu 18.04 (ami-0e11cbb34015ff725); installs Docker and adds an etcd+controlplane node to the k8s cluster. The script is the same as the worker user data shown below, with K8S_ROLES set to "--etcd --controlplane".
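If you are scripting the setup, the role and instance profile can be wired together with the AWS CLI; a minimal sketch, assuming the two policies above were created as K8sMasterPolicy and K8sAutoscalerPolicy and an EC2 trust policy is saved as ec2-trust.json (all names are illustrative):

# Role assumable by EC2 instances, with both policies attached
aws iam create-role --role-name K8sMasterRole --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name K8sMasterRole --policy-arn arn:aws:iam::<account-id>:policy/K8sMasterPolicy
aws iam attach-role-policy --role-name K8sMasterRole --policy-arn arn:aws:iam::<account-id>:policy/K8sAutoscalerPolicy
# Instance profile attached to the master EC2 instances
aws iam create-instance-profile --instance-profile-name K8sMasterProfile
aws iam add-role-to-instance-profile --instance-profile-name K8sMasterProfile --role-name K8sMasterRole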
3. Worker group: Nodes that will be part of the Kubernetes worker plane, scaled up and down by cluster-autoscaler through the ASG.
- IAM profile: Provides cloud_provider worker integration. This profile is called K8sWorkerProfile.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
- IAM role: K8sWorkerRole: [K8sWorkerProfile]
- Security group: K8sWorkerSg. More info at RKE ports (custom nodes tab).
- Tags:
  kubernetes.io/cluster/<clusterID>: owned
  k8s.io/cluster-autoscaler/<clusterName>: true
  k8s.io/cluster-autoscaler/enabled: true
- User data: K8sWorkerUserData. Ubuntu 18.04 (ami-0e11cbb34015ff725); installs Docker and adds a worker node to the k8s cluster:
#!/bin/bash -x

# Kernel settings required by the kubelet
cat <<EOF > /etc/sysctl.d/90-kubelet.conf
vm.overcommit_memory = 1
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.keys.root_maxkeys = 1000000
kernel.keys.root_maxbytes = 25000000
EOF
sysctl -p /etc/sysctl.d/90-kubelet.conf

# Install Docker and let the ubuntu user run it
curl -sL https://releases.rancher.com/install-docker/19.03.sh | sh
sudo usermod -aG docker ubuntu

# Get the instance IPs from the EC2 metadata service (IMDSv2)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
PRIVATE_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/local-ipv4)
PUBLIC_IP=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Register the node with the Rancher server as a worker
K8S_ROLES="--worker"
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:<RANCHER_VERSION> --server https://<RANCHER_URL> --token <RANCHER_TOKEN> --ca-checksum <RANCHER_CA_CHECKSUM> --address ${PUBLIC_IP} --internal-address ${PRIVATE_IP} ${K8S_ROLES}
More info is in the cluster-autoscaler documentation and Cluster Autoscaler on AWS.
3. Deploy Nodes
Once we’ve configured AWS, let’s create VMs to bootstrap our cluster:
master (etcd+controlplane): Depending on your needs, deploy three master instances of the proper size. More info is in the Rancher documentation.
- IAM role: K8sMasterRole
- Security group: K8sMasterSg
- Tags: kubernetes.io/cluster/<clusterID>: owned
- User data: K8sMasterUserData
worker: Define an ASG on EC2 with the following settings (a CLI sketch follows the list):
- Name: K8sWorkerAsg
- IAM role: K8sWorkerRole
- Security group: K8sWorkerSg
- Tags:
  kubernetes.io/cluster/<clusterID>: owned
  k8s.io/cluster-autoscaler/<clusterName>: true
  k8s.io/cluster-autoscaler/enabled: true
- User data: K8sWorkerUserData
- Instances:
  - minimum: 2
  - desired: 2
  - maximum: 10
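As referenced above, a minimal AWS CLI sketch for creating the ASG, assuming a launch template K8sWorkerLt already exists carrying the worker IAM role, security group and user data (the template name and subnet are illustrative):

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name K8sWorkerAsg \
  --launch-template LaunchTemplateName=K8sWorkerLt \
  --min-size 2 --desired-capacity 2 --max-size 10 \
  --vpc-zone-identifier subnet-xxxxx \
  --tags Key=kubernetes.io/cluster/<clusterID>,Value=owned,PropagateAtLaunch=true \
         Key=k8s.io/cluster-autoscaler/<clusterName>,Value=true,PropagateAtLaunch=true \
         Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true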
Once the VMs are deployed, you should have a Rancher custom cluster up and running with three master and two worker nodes.
4. Install Cluster-autoscaler
At this point, we should have a Rancher cluster up and running. We are going to install cluster-autoscaler on the master nodes, in the kube-system namespace, following the cluster-autoscaler recommendations.
Parameters
cluster-autoscaler supports a number of parameters for fine tuning. The most relevant ones for this setup are the scan and scale-down knobs, such as --scan-interval, --scale-down-enabled, --scale-down-delay-after-add, --scale-down-unneeded-time and --scale-down-utilization-threshold; see the cluster-autoscaler FAQ for the full list.
Deployment
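As a reference, here is a minimal sketch of what cluster-autoscaler-deployment.yaml can look like, using ASG auto-discovery by the tags defined earlier. The service account (whose RBAC is not shown), image tag, region, and scheduling constraints are assumptions to adapt to your cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      # Assumed to exist, with RBAC allowing node/pod reads and ASG access
      serviceAccountName: cluster-autoscaler
      # Run on the controlplane nodes, as recommended
      nodeSelector:
        node-role.kubernetes.io/controlplane: "true"
      tolerations:
      - key: node-role.kubernetes.io/controlplane
        value: "true"
        effect: NoSchedule
      containers:
      - name: cluster-autoscaler
        # Illustrative tag; match your Kubernetes minor version (v1.18.x here)
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.18.3
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        # Discover ASGs by the tags set on the worker group
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<clusterName>
        env:
        - name: AWS_REGION
          value: <AWS_REGION>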
Once the manifest file is prepared, deploy it in the Kubernetes cluster (Rancher UI can be used instead):
kubectl -n kube-system apply -f cluster-autoscaler-deployment.yaml
Note: The cluster-autoscaler deployment can also be set up using the cluster-autoscaler Helm chart.
Testing
At this point, we should have cluster-autoscaler up and running in our Rancher custom cluster. cluster-autoscaler should scale the K8sWorkerAsg ASG up and down between 2 and 10 nodes when one of the following conditions is true:
- There are pods that failed to run in the cluster due to insufficient resources. In this case, the cluster is scaled up.
- There are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes. In this case, the cluster is scaled down.
We’ve prepared a test-deployment.yaml just to generate load on the Kubernetes cluster and check that cluster-autoscaler is working properly. The test deployment requests 1000m CPU and 1024Mi memory per replica, across three replicas. Adjust the requested resources and/or the replica count to be sure you exhaust the Kubernetes cluster resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: rancher/hello-world
        imagePullPolicy: Always
        name: hello-world
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
Once the test deployment is prepared, deploy it in the Kubernetes cluster default namespace (Rancher UI can be used instead):
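kubectl -n default apply -f test-deployment.yaml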
Checking Scale
Once the Kubernetes cluster resources are exhausted, cluster-autoscaler should scale up the worker nodes until all pending pods can be scheduled. You should see the new nodes appear in the ASG and in the Kubernetes cluster. Check the logs on the cluster-autoscaler pod.
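To watch the process, something like the following can be used (the app=cluster-autoscaler label assumes the deployment sketch above):

kubectl get nodes -w
kubectl -n kube-system logs -l app=cluster-autoscaler -f
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names K8sWorkerAsg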
Once scale up is verified, let's check scale down. To do it, reduce the replica count on the test deployment until you release enough Kubernetes cluster resources to scale down. You should see nodes disappear from the ASG and from the Kubernetes cluster. Check the logs on the cluster-autoscaler pod in the kube-system namespace.