Run kOps in an existing VPC

    1. Use kops create cluster with the --vpc argument for your existing VPC:
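
    The commands typically look something like this (a sketch mirroring the shared-subnet example later on this page; replace the placeholder values with your own):

        export KOPS_STATE_STORE=s3://<somes3bucket>
        export CLUSTER_NAME=<sharedvpc.mydomain.com>
        export VPC_ID=vpc-12345678 # replace with your VPC id
        export NETWORK_CIDR=10.100.0.0/16 # replace with the cidr for the VPC ${VPC_ID}

        kops create cluster --zones=us-east-1b --name=${CLUSTER_NAME} --vpc=${VPC_ID}
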
    2. Then kops edit cluster ${CLUSTER_NAME} will show you something like:

        metadata:
          name: ${CLUSTER_NAME}
        spec:
          cloudProvider: aws
          networkCIDR: ${NETWORK_CIDR}
          networkID: ${VPC_ID}
          nonMasqueradeCIDR: 100.64.0.0/10
          subnets:
          - cidr: 172.20.32.0/19
            name: us-east-1b
            type: Public
            zone: us-east-1b

    Verify that networkCIDR and networkID match your VPC CIDR and ID. You probably need to set the CIDR on each of the Zones, as subnets in a VPC cannot overlap.

    3. You can then run kops update cluster in preview mode (without --yes). You don’t need any arguments because they’re all in the cluster spec:

        kops update cluster ${CLUSTER_NAME}

    Review the changes to make sure they are OK—the Kubernetes settings might not be ones you want on a shared VPC (in which case, open an issue!)

    Note also that Kubernetes VPCs (currently) require EnableDNSHostnames=true. kOps will detect the required change, but will refuse to make it automatically because the VPC is shared. Please review the implications and make the change to the VPC manually.
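
    One way to make that change, assuming you use the AWS CLI and the ${VPC_ID} variable from above:

        aws ec2 modify-vpc-attribute --vpc-id ${VPC_ID} --enable-dns-hostnames '{"Value": true}'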

    4. Once you’re happy, you can create the cluster using:

        kops update cluster ${CLUSTER_NAME} --yes

    This will add an additional tag to your AWS VPC resource. This tag will be removed automatically if you delete your kOps cluster.

    AWS allows you to add more CIDRs to a VPC. The parameter additionalNetworkCIDRs allows you to specify any additional CIDRs added to the VPC.

        metadata:
          name: ${CLUSTER_NAME}
        spec:
          cloudProvider: aws
          networkCIDR: 10.1.0.0/16
          additionalNetworkCIDRs:
          - 10.2.0.0/16
          networkID: vpc-00aa5577
          subnets:
          - cidr: 10.1.0.0/19
            name: us-east-1b
            type: Public
            zone: us-east-1b
            id: subnet-1234567
          - cidr: 10.2.0.0/19
            name: us-east-1b
            type: Public
            zone: us-east-1b
            id: subnet-1234568

    Advanced Options for Creating Clusters in Existing VPCs

    Shared Subnets

    kOps can create a cluster in shared subnets in both public and private network topologies.

    1. Use kops create cluster with the --subnets argument for your existing subnets:

        export KOPS_STATE_STORE=s3://<somes3bucket>
        export CLUSTER_NAME=<sharedvpc.mydomain.com>
        export VPC_ID=vpc-12345678 # replace with your VPC id
        export NETWORK_CIDR=10.100.0.0/16 # replace with the cidr for the VPC ${VPC_ID}
        export SUBNET_ID=subnet-12345678 # replace with your subnet id
        export SUBNET_CIDR=10.100.0.0/24 # replace with your subnet CIDR
        export SUBNET_IDS=${SUBNET_ID} # replace with your comma-separated subnet ids

        kops create cluster --zones=us-east-1b --name=${CLUSTER_NAME} --subnets=${SUBNET_IDS}

    2. Then kops edit cluster ${CLUSTER_NAME} will show you something like:

        metadata:
          name: ${CLUSTER_NAME}
        spec:
          cloudProvider: aws
          networkCIDR: ${NETWORK_CIDR}
          networkID: ${VPC_ID}
          nonMasqueradeCIDR: 100.64.0.0/10
          subnets:
          - cidr: ${SUBNET_CIDR}
            id: ${SUBNET_ID}
            type: Public
            zone: us-east-1b

    3. Once you’re happy, you can create the cluster using:
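
        kops update cluster ${CLUSTER_NAME} --yes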

    By default, kOps will tag your existing subnets with the standard tags:

    Public/Utility Subnets:

        "kubernetes.io/cluster/<cluster-name>" = "shared"
        "kubernetes.io/role/elb" = "1"
        "SubnetType" = "Utility"

    Private Subnets:

        "kubernetes.io/cluster/<cluster-name>" = "shared"
        "kubernetes.io/role/internal-elb" = "1"
        "SubnetType" = "Private"

    These tags are important; for example, your services will be unable to create public or private Elastic Load Balancers (ELBs) if the respective elb or internal-elb tags are missing.

    If you would like to manage these tags externally then specify --disable-subnet-tags during your cluster creation. This will prevent kOps from tagging existing subnets and allow some custom control, such as separate subnets for internal ELBs.
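
    For example, a sketch reusing the variables from the shared-subnet commands above:

        kops create cluster --zones=us-east-1b --name=${CLUSTER_NAME} --subnets=${SUBNET_IDS} --disable-subnet-tags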

    Shared NAT Egress

    On AWS, in private topology, kOps creates one NAT Gateway (NGW) per AZ. If your shared VPC is already set up with an NGW in the subnet that kOps deploys private resources to, it is possible to specify its ID and have kOps/Kubernetes use it.

    After creating a basic cluster spec, edit your cluster to specify NGW:

        kops edit cluster ${CLUSTER_NAME}

        spec:
          subnets:
          - cidr: 10.20.64.0/21
            name: us-east-1a
            egress: nat-987654321
            type: Private
            zone: us-east-1a
          - cidr: 10.20.96.0/21
            name: us-east-1b
            egress: i-987654321
            type: Private
            zone: us-east-1b
          - cidr: 10.20.32.0/21
            name: utility-us-east-1a
            type: Utility
            zone: us-east-1a

    Please note:

    • You must specify pre-created subnets for either all of the subnets or none of them.
    • kOps won’t alter your existing subnets. They must be correctly set up with route tables, etc. The Public or Utility subnets should have public IPs and an Internet Gateway configured as their default route in their route table. Private subnets should not have public IPs and will typically have a NAT Gateway configured as their default route (a quick way to check this is sketched after this list).
    • kOps won’t create a route-table at all if it’s not creating subnets.
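
    To see what routes a subnet's route table contains before handing it to kOps, something like the following works with the AWS CLI (the subnet ID is a placeholder):

        aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=subnet-12345678" --query "RouteTables[].Routes[]"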

    If you are using an unsupported egress configuration in your VPC, kOps can be told to ignore egress by using a configuration such as:
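
    A minimal sketch (assuming the External egress value, which marks a subnet's egress as externally managed; verify the exact keyword against your kOps version):

        spec:
          subnets:
          - cidr: 10.20.64.0/21
            name: us-east-1a
            egress: External
            type: Private
            zone: us-east-1a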

    This tells kOps that egress is managed externally. This is preferable when using virtual private gateways (currently unsupported) or using other configurations to handle egress routing.

    Proxy VPC Egress

    See HTTP Forward Proxy Support