The DevOps engineer's handbook

7 Kubernetes deployment strategies: Pros, cons, and how to choose

What is a Kubernetes deployment strategy?

Kubernetes deployment strategies determine how changes to applications are rolled out while aiming to minimize disruption to existing systems. These strategies define the process for updating applications, distributing traffic, and, in some cases, managing rollback to avoid downtime or service disruption.

Deployment strategies built into Kubernetes include rolling update, ramped slow rollout, and recreate deployment. More advanced deployment strategies, such as blue/green and canary deployment, are not built into Kubernetes but can be implemented with the help of additional tools.

Deployment strategies in Kubernetes are essential for effective application lifecycle management. They enable developers to push updates to applications in a controlled manner, ensuring that potential issues can be quickly mitigated. Properly executed deployment strategies lead to smoother transitions between application versions, better resource utilization, and more stable production environments.

Deployment strategies built into Kubernetes

1. Rolling update

A Rolling Update is a Kubernetes deployment strategy where updates to an application are applied to pods gradually, while other pods continue to run a previous version. Instead of taking down the entire application, a few pods are updated at a time. This ensures that some instances of the application remain available during the update, reducing downtime and maintaining service continuity. As each pod is updated, Kubernetes waits for it to become healthy before proceeding to update the next pod. This method is particularly useful for applications that require high availability.

Pros:

  • Ensures high availability by updating a few pods at a time.
  • Reduces downtime, maintaining service continuity.
  • Provides a gradual rollback option if issues occur during the update.

Cons:

  • Can be complex to configure and manage, requiring careful planning.
  • Longer update process compared to other methods.
  • Potential for transient errors during the update process as old and new versions run simultaneously.

Rolling update example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 6
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2
      maxSurge: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about this manifest:

  • maxUnavailable: 2: Allows up to two pods to be unavailable during the update, so at least four of the six replicas remain available at all times.
  • maxSurge: 2: Allows up to two additional pods to be created above the desired number of replicas during the update.
  • This configuration updates a few pods at a time (bounded by maxSurge and maxUnavailable), keeping disruption minimal. You can watch and, if needed, reverse the rollout with the kubectl commands sketched below.
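
To monitor the rollout and roll it back if the new version misbehaves, you can use the standard kubectl rollout commands. The commands below are a minimal sketch; my-app-deployment.yaml is a placeholder filename for the manifest above.

# Apply the updated manifest
kubectl apply -f my-app-deployment.yaml

# Watch the rollout progress as pods are replaced
kubectl rollout status deployment/my-app

# Inspect previous revisions of the Deployment
kubectl rollout history deployment/my-app

# Roll back to the previous revision if issues appear
kubectl rollout undo deployment/my-app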

2. Ramped slow rollout

Ramped slow rollout is essentially a rolling update with both the maxUnavailable and maxSurge parameters set to 1. It is a gradual deployment strategy where the new version is rolled out one pod at a time, allowing careful monitoring and validation at each step so that any issues can be identified and addressed early.

Pros:

  • Gradual rollout allows for thorough monitoring and validation.
  • Minimizes risk by slowly introducing the new version.
  • Easier to identify and address issues incrementally.

Cons:

  • Longer deployment time compared to more aggressive strategies.
  • Requires careful configuration to balance rollout speed and stability.
  • Potential for temporary inconsistencies as both versions run concurrently.

Ramped slow rollout example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about this manifest and how to implement a ramped slow rollout:

  • Similar to a rolling update, but tuned for a slower, more deliberate rollout.
  • With maxUnavailable and maxSurge both set to 1, pods are replaced one at a time, keeping the rollout gradual and easy to monitor. Pairing this with a readiness probe, as sketched below, gives Kubernetes a reliable health signal before it moves on to the next pod.
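
A rolling update only proceeds once new pods report ready, so a ramped slow rollout works best when the pod template defines a readiness probe. The fragment below is a minimal sketch of the containers section of the pod template; the /healthz path and port 8080 are assumed placeholders to adjust for your application.

containers:
- name: my-container
  image: my-app:v2
  readinessProbe:
    httpGet:
      path: /healthz   # assumed health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10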

3. Recreate deployment

Recreate deployment is a strategy where all existing pods are terminated before new pods are created. This method is straightforward and ensures that there is no overlap between old and new versions, but it can cause downtime since the service is temporarily unavailable during the update process.

Pros:

  • Simple and straightforward deployment process.
  • No version overlap, ensuring a clean state for the new version.
  • Easier to implement for small-scale applications.
  • Suitable for edge deployments, where downtime is acceptable.
  • Useful when running multiple clusters, because it is predictable and consistent.

Cons:

  • Causes downtime as all existing pods are terminated before new ones are created.
  • Not suitable for applications requiring high availability.
  • User disruption due to service unavailability during the update.

Recreate deployment example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about this manifest:

  • strategy.type: Recreate: All existing pods are terminated before new pods are created.
  • This ensures a clean update but results in downtime, as no pods are available during the transition. You can observe this behavior with the commands sketched below.
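
To see the recreate behavior in practice, watch the pods while applying the manifest; the old pods terminate before any new ones are scheduled. The commands below are a minimal sketch, with my-app-deployment.yaml as a placeholder filename.

# In one terminal, watch pod lifecycle events for the app
kubectl get pods -l app=my-app --watch

# In another terminal, apply the updated manifest
kubectl apply -f my-app-deployment.yaml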

Advanced Kubernetes deployment strategies

The following strategies are not built into Kubernetes. You can implement them with the help of additional tools such as load balancers, a service mesh, or dedicated progressive deployment tools. For example, Octopus Deploy allows you to model complex deployment strategies for applications deployed to Kubernetes or hybrid platforms, such as a Kubernetes application with a database.

4. Blue/green deployment

Blue/Green Deployment maintains two separate environments, Blue (current stable version) and Green (new version). The Green environment is updated and tested independently. Once the new version is verified, traffic is switched from Blue to Green. This strategy ensures zero downtime and provides a predictable rollback option by switching back to the Blue environment if issues arise.

Pros:

  • Zero downtime, as the switch between Blue and Green environments is instantaneous.
  • Simplified rollback by switching back to the previous environment.
  • Independent testing of the new version before production release.

Cons:

  • Requires double the infrastructure, as both environments must be maintained simultaneously.
  • More complex network configuration to switch traffic between environments.
  • Higher operational cost due to resource duplication.
  • Harder to configure, especially if Kubernetes is integrated with additional systems.
  • Code in both the blue and green versions must be backward compatible with the state of external systems.

Blue/green deployment example

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app-blue
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
  labels:
    app: my-app-green
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app-green
  template:
    metadata:
      labels:
        app: my-app-green
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about these manifests and how to implement a blue/green pattern:

  • The Service initially directs traffic to the Blue environment (app: my-app-blue).
  • The Green deployment (my-app-green) is updated and tested independently.
  • Once the Green environment is validated, the Service selector can be updated to app: my-app-green to switch traffic, as shown in the command sketch below.
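
The traffic switch itself is a single change to the Service selector. One way to make it is with kubectl patch; the command below is a sketch that repoints the Service above from the Blue pods to the Green pods. Applying the same patch with app: my-app-blue switches traffic back.

# Repoint the Service from Blue to Green
kubectl patch service my-app \
  -p '{"spec":{"selector":{"app":"my-app-green"}}}'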

5. Canary deployment

A canary deployment gradually introduces a new application version by deploying it to a small subset of users first (this subset is known as the ‘canary’). This strategy allows early detection of issues by exposing the new version to a limited audience. If no problems are found, the new version is incrementally rolled out to the rest of the users.

Pros:

  • Early detection of issues by exposing the new version to a small subset of users.
  • Incremental rollout minimizes risk and allows for controlled monitoring.
  • Easier rollback compared to full-scale deployments.

Cons:

  • Requires high application traffic to be effective.
  • Complex traffic routing to manage the subset of users.
  • Prolonged deployment process due to incremental updates.
  • Risk of inconsistent user experience if canary and main versions differ significantly.

Canary deployment example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-container
        image: my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about these manifests and how to implement a canary pattern:

  • The main deployment (my-app) runs version 1 with 10 replicas.
  • A canary deployment (my-app-canary) runs version 2 with 2 replicas.
  • Traffic is gradually shifted to the canary deployment for testing before a full rollout. This can be done with a load balancer, service mesh, or dedicated deployment tool, as sketched below.
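
As an illustration of the traffic-shifting step, the manifest below is a sketch using an Istio VirtualService to send roughly 90% of requests to the stable pods and 10% to the canary pods. It assumes Istio is installed and that two Services named my-app-v1 and my-app-v2 (selecting the stable and canary pods by their version labels) already exist; other service meshes and ingress controllers offer similar weighted routing.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app-v1   # assumed Service for the stable version
      weight: 90
    - destination:
        host: my-app-v2   # assumed Service for the canary version
      weight: 10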

6. A/B testing

A/B Testing involves running two or more versions of an application simultaneously to compare their performance, usability, or other metrics. This strategy is useful for experimenting with new features and collecting user feedback to determine which version performs better.

Pros:

  • Allows comparison of different versions to determine the best performing one.
  • Collects user feedback and performance data for informed decision-making.
  • Facilitates experimentation with new features and optimizations.

Cons:

  • Requires sophisticated routing logic to split traffic between versions.
  • Potential for user confusion or inconsistent experience.
  • Increased resource consumption by running multiple versions simultaneously.

A/B testing example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-container
        image: my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about these manifests and how to implement an A/B testing pattern:

  • Two separate deployments (my-app-v1 and my-app-v2) run simultaneously.
  • Traffic can be split between these versions using a service mesh or custom routing logic to gather comparative data, as sketched below.
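
Header- or cookie-based routing is a common way to split A/B traffic deterministically. The manifest below is a sketch using an Istio VirtualService that routes requests carrying an x-experiment-group: b header (an assumed header name) to version 2 and everything else to version 1. It assumes Istio is installed and that per-version Services named my-app-v1 and my-app-v2 exist.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-ab-test
spec:
  hosts:
  - my-app
  http:
  - match:
    - headers:
        x-experiment-group:   # assumed experiment header
          exact: "b"
    route:
    - destination:
        host: my-app-v2       # assumed Service for version 2
  - route:
    - destination:
        host: my-app-v1       # assumed Service for version 1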

7. Shadow deployment

Shadow Deployment runs the new version alongside the existing version without impacting live traffic. The new version receives a copy of the live traffic for testing purposes. This allows performance and functionality validation in a production-like environment without affecting users.

Pros:

  • Does not impact live traffic, ensuring user experience remains unaffected.
  • Provides real-world performance and functionality validation.
  • Easy to identify and fix issues before a full rollout.

Cons:

  • Additional complexity in setting up traffic mirroring or duplication.
  • Requires extra resources to run the shadow version.
  • No direct feedback from end-users since they are not interacting with the shadow deployment.

Shadow deployment example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-shadow
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app-shadow
  template:
    metadata:
      labels:
        app: my-app-shadow
    spec:
      containers:
      - name: my-container
        image: my-app:v2

Here are important points about this manifest and how to implement a shadow deployment:

  • The shadow deployment (my-app-shadow) runs the new version and receives mirrored traffic.
  • Traffic mirroring tools or a service mesh can be used to duplicate incoming traffic to the shadow deployment for testing, as sketched below.
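
Traffic mirroring is typically configured in the mesh or proxy layer rather than in the Deployment itself. The manifest below is a sketch using Istio's mirroring support to copy 100% of live traffic to the shadow Service while responses continue to come only from the primary version. It assumes Istio is installed and that Services named my-app (primary) and my-app-shadow (shadow) exist.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-shadow-mirror
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app          # live traffic is served by the primary Service
    mirror:
      host: my-app-shadow     # a copy of each request goes to the shadow Service
    mirrorPercentage:
      value: 100.0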

Related content: Read our guide to Kubernetes deployment YAML

Which Kubernetes deployment strategy to choose?

Key considerations when choosing a Kubernetes deployment strategy:

  1. Application downtime tolerance: Assess the level of downtime your application can tolerate. Critical applications require strategies like Blue/Green or Canary Deployments to ensure zero or minimal downtime.
  2. Rollback mechanism: Evaluate the ease of rolling back to a previous version in case of issues. Strategies like Blue/Green Deployment offer straightforward rollback options.
  3. Traffic management: Consider how traffic is handled during the deployment. Strategies like A/B Testing and Canary Deployment provide granular control over traffic distribution.
  4. Resource utilization: Examine the impact on resources, including memory and CPU usage. Rolling Updates and Ramped Slow Rollouts can help in balancing resource consumption.
  5. Testing and validation: Determine the extent of testing required before fully rolling out a new version. Shadow Deployment and Canary Deployment allow testing in a production-like environment without impacting end-users.
  6. Complexity and maintenance: Factor in the complexity of implementing and maintaining the deployment strategy. Some strategies, like Recreate Deployment, are simpler but may not be suitable for all use cases.
  7. Monitoring and observability: Ensure robust monitoring and observability are in place to detect and address issues promptly during the deployment process. Strategies like Ramped Slow Rollout facilitate careful monitoring at each step.

Discover the hierarchy of considerations for choosing the right deployment strategy.

Not all applications can or should implement a zero-downtime progressive-style canary deployment. This whitepaper explores the various strategies available today and the considerations for picking one.

Download the white paper

Kubernetes progressive deployment with Octopus

Octopus Deploy is a Continuous Delivery platform that enables you to start with simple deployments and then gradually model more complex rollout scenarios.

For Kubernetes-only deployments, Octopus has a built-in blue/green option, along with support for Rolling Updates and Recreate Deployment. As you progress, you can enhance your deployments by adding custom tests, manual interventions, and infrastructure provisioning steps to make them more robust.

As a highly customizable platform, Octopus also lets you model rollouts for multi-component applications or applications running on hybrid infrastructure. For these advanced scenarios, you’ll need to use Octopus environments. Depending on your use case, steps for infrastructure management (such as Terraform) and scripts for network configuration and tests may also be required. For instance, you can create a Canary scenario for two applications running on Kubernetes or set up a blue-green deployment for an application consisting of a database and an API microservice.

Learn more about rollout strategies with Octopus Deploy.
