Using deployment strategies

    Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig object features or routing features. Strategies that focus on the DeploymentConfig object affect all routes that use the application. Strategies that use router features target individual routes.

    Many deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. Deployment strategies are discussed in this section.

    Choosing a deployment strategy

    Consider the following when choosing a deployment strategy:

    • Long-running connections must be handled gracefully.

    • Database conversions can be complex and must be done and rolled back along with the application.

    • If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition.

    • You must have the infrastructure to support your chosen strategy.

    • If you have a non-isolated test environment, you can break both new and old versions.

    A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries running the pod until it times out. The default timeout is 10m, set by timeoutSeconds in dc.spec.strategy.*params.
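
    For example, the timeout can be adjusted in the strategy parameters; a minimal sketch for the rolling strategy (recreateParams accepts the same field):

      spec:
        strategy:
          type: Rolling
          rollingParams:
            timeoutSeconds: 1200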

    Rolling strategy

    A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object.

    A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.

    When to use a rolling deployment:

    • When you want to take no downtime during an application update.

    • When your application supports having old code and new code running at the same time.

    A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility.

    Example rolling strategy definition
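
    A minimal sketch of a rolling strategy stanza (the timing values, surge limits, and empty hooks shown are illustrative):

      strategy:
        type: Rolling
        rollingParams:
          updatePeriodSeconds: 1
          intervalSeconds: 1
          timeoutSeconds: 120
          maxSurge: "20%"
          maxUnavailable: "10%"
          pre: {}
          post: {}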

    The rolling strategy:

    1. Executes any pre lifecycle hook.

    2. Scales up the new replication controller based on the surge count.

    3. Scales down the old replication controller based on the max unavailable count.

    4. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero.

    5. Executes any post lifecycle hook.

    When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure.

    The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.

    These parameters allow the deployment to be tuned for availability and speed. For example:

    • maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.

    • maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).

    • maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.

    Generally, if you want fast rollouts, use maxSurge. If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable.
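
    For example, the first combination above corresponds to a stanza like this (a sketch of the relevant fields only):

      strategy:
        type: Rolling
        rollingParams:
          maxUnavailable: 0
          maxSurge: "20%"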

    All rolling deployments in OKD are canary deployments; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back.

    The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy.
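
    A readiness check is defined as a probe on the pod template. A minimal sketch, assuming the application serves a health endpoint at /healthz on port 8080 (both hypothetical; only the relevant fields are shown):

      spec:
        template:
          spec:
            containers:
            - name: helloworld
              readinessProbe:
                httpGet:
                  path: /healthz
                  port: 8080
                initialDelaySeconds: 5
                periodSeconds: 10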

    Creating a rolling deployment

    Rolling deployments are the default type in OKD. You can create a rolling deployment using the CLI.

    Procedure

    1. Create an application based on the example deployment images found in Quay.io:

      $ oc new-app quay.io/openshifttest/deployment-example:latest
    2. If you have the router installed, make the application available via a route or use the service IP directly.

      $ oc expose svc/deployment-example
    3. Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image.

    4. Scale the DeploymentConfig object up to three replicas:

      $ oc scale dc/deployment-example --replicas=3
    5. Trigger a new deployment automatically by tagging a new version of the example as the latest tag:

      $ oc tag deployment-example:v2 deployment-example:latest

    6. In your browser, refresh the page until you see the v2 image.

    7. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1:

      $ oc describe dc deployment-example

    During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues.

    If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version.

    Starting a rolling deployment using the Developer perspective

    Prerequisites

    • Ensure that you are in the Developer perspective of the web console.

    • Ensure that you have created an application using the Add view and see it deployed in the Topology view.

    Procedure

    To start a rolling deployment to upgrade an application:

    1. In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy.

    Recreate strategy

    The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.

    Example recreate strategy definition

      strategy:
        type: Recreate
        recreateParams: (1)
          pre: {} (2)
          mid: {}
          post: {}

    (1) recreateParams are optional.
    (2) pre, mid, and post are lifecycle hooks.

    The recreate strategy:

    1. Executes any pre lifecycle hook.

    2. Scales down the previous deployment to zero.

    3. Executes any mid lifecycle hook.

    4. Scales up the new deployment.

    5. Executes any post lifecycle hook.

    During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.

    When to use a recreate deployment:

    • When you must run migrations or other data transformations before your new code starts.

    • When you do not support having new and old versions of your application code running at the same time.

    • When you want to use a ReadWriteOnce (RWO) volume, which cannot be shared between multiple replicas.

    A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.

    Starting a recreate deployment using the Developer perspective

    You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console.

    Prerequisites

    • Ensure that you are in the Developer perspective of the web console.

    • Ensure that you have created an application using the Add view and see it deployed in the Topology view.

    Procedure

    To switch to a recreate update strategy and to upgrade an application:

    1. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application.

    2. In the YAML editor, change the spec.strategy.type to Recreate and click Save.

    3. Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version.

      Figure 2. Recreate update

    Custom strategy

    The custom strategy allows you to provide your own deployment behavior.

    Example custom strategy definition

      strategy:
        type: Custom
        customParams:
          image: organization/strategy
          command: [ "command", "arg1" ]
          environment:
            - name: ENV_1
              value: VALUE_1

    In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.

    Additionally, OKD provides the following environment variables to the deployment process:

    • OPENSHIFT_DEPLOYMENT_NAME: The name of the new deployment, a replication controller.

    • OPENSHIFT_DEPLOYMENT_NAMESPACE: The namespace of the new deployment.

    The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
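
    A hypothetical deployer entrypoint might use these variables to activate the new deployment; a sketch, assuming the strategy image contains the oc client and that the pod's service account is allowed to scale replication controllers:

      #!/bin/sh
      # Scale the new replication controller up from its initial count of zero.
      set -e
      oc scale rc/"$OPENSHIFT_DEPLOYMENT_NAME" \
        --namespace="$OPENSHIFT_DEPLOYMENT_NAMESPACE" \
        --replicas=2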

    Alternatively, use the customParams object to inject custom deployment logic into the existing deployment strategies. Provide custom shell script logic and call the openshift-deploy binary. Users do not have to supply their own custom deployer container image; in this case, the default OKD deployer image is used instead:
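
    For example, a customParams stanza along these lines produces the rollout shown below (a sketch; the openshift-deploy --until flag pauses the built-in deployment logic at the given checkpoint):

      strategy:
        type: Rolling
        customParams:
          command:
          - /bin/sh
          - -c
          - |
            set -e
            openshift-deploy --until=50%
            echo Halfway there
            openshift-deploy
            echo Complete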

    This results in the following deployment:

      Started deployment #2
      --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
          Scaling custom-deployment-2 up to 1
      --> Reached 50% (currently 50%)
      Halfway there
      --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
          Scaling custom-deployment-1 down to 1
          Scaling custom-deployment-1 down to 0
      --> Success
      Complete

    If the custom deployment strategy process requires access to the OKD API or the Kubernetes API, the container that executes the strategy can use the service account token available inside the container for authentication.
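
    For example, the strategy container can read the token from the standard service account mount (a sketch; the pod listing request is only an illustration):

      # Authenticate to the API server from inside the strategy container.
      SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
      TOKEN=$(cat $SA_DIR/token)
      NAMESPACE=$(cat $SA_DIR/namespace)
      curl --cacert $SA_DIR/ca.crt \
        -H "Authorization: Bearer $TOKEN" \
        "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods"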

    Lifecycle hooks

    The rolling and recreate strategies support lifecycle hooks, or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy:

    Example pre lifecycle hook

      pre:
        failurePolicy: Abort
        execNewPod: {} (1)

    (1) execNewPod is a pod-based lifecycle hook.

    Every hook has a failure policy, which defines the action the strategy should take when a hook failure is encountered:

    • Abort: The deployment process is considered a failure if the hook fails.

    • Retry: The hook execution is retried until it succeeds.

    • Ignore: Any hook failure is ignored and the deployment proceeds.

    Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field.

    Pod-based lifecycle hook

    Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object.

    The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity:

      kind: DeploymentConfig
      apiVersion: apps.openshift.io/v1
      metadata:
        name: frontend
      spec:
        template:
          metadata:
            labels:
              name: frontend
          spec:
            containers:
            - name: helloworld
              image: openshift/origin-ruby-sample
        replicas: 5
        selector:
          name: frontend
        strategy:
          type: Rolling
          rollingParams:
            pre:
              failurePolicy: Abort
              execNewPod:
                containerName: helloworld (1)
                command: [ "/usr/bin/command", "arg1", "arg2" ] (2)
                env: (3)
                - name: CUSTOM_VAR1
                  value: custom_value1
                volumes:
                - data (4)

    (1) The helloworld name refers to spec.template.spec.containers[0].name.
    (2) This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
    (3) env is an optional set of environment variables for the hook container.
    (4) volumes is an optional set of volume references for the hook container.

    In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties:

    • The hook command is /usr/bin/command arg1 arg2.

    • The hook container has the CUSTOM_VAR1=custom_value1 environment variable.

    • The hook failure policy is Abort, meaning the deployment process fails if the hook fails.

    • The hook pod inherits the data volume from the DeploymentConfig object pod.

    Setting lifecycle hooks

    You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI.

    Procedure
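
    1. For example, the pre hook from the earlier YAML example could be set with the oc set deployment-hook command (a sketch; adjust the object name, container, environment, and command to your application):

      $ oc set deployment-hook dc/frontend \
          --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
          --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2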