The DevOps engineer's handbook

Kubernetes deployments

Kubernetes, often shortened to ‘K8s’, makes many elements of container management easier. Deployments, however, are a different matter.

In this glossary page, we explore:

  • Kubernetes deployment basics
  • Challenges and best practices
  • Tools that can help you manage Kubernetes deployments

Kubernetes deployments at their most basic

How you interact with Kubernetes during deployment is not that different from deploying containers normally.

Everything before deployment is the same. You still commit your changes as normal. Your build process runs and, if it succeeds, pushes your container image to a registry. From there, you can still deploy your software to its intended destination.

The only major difference is that the destination is a Kubernetes cluster, which runs and manages the containers for you. When deploying to Kubernetes, you tell it the state you want your containers to run in, and it does the rest.

But how do you tell Kubernetes what you need? There are a few ways.

Kubernetes manifests

Kubernetes manifests define how a cluster operates. You use manifests to set the desired state for software to run on a cluster. For example, you can create a manifest that asks Kubernetes to run a container, or create a certificate or disk.

Manifests that deploy software often include:

  • App and service names
  • Metadata
  • Networking settings like protocol type and port numbers
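
For example, a minimal Deployment manifest might look like the sketch below. The app name, image, and port are placeholders, not part of any real project:

  # A minimal Deployment manifest (illustrative; names, image, and port are placeholders)
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-app
    labels:
      app: web-app                 # metadata used to group and select objects
  spec:
    replicas: 3                    # desired state: run three copies of the container
    selector:
      matchLabels:
        app: web-app
    template:
      metadata:
        labels:
          app: web-app
      spec:
        containers:
          - name: web-app
            image: registry.example.com/web-app:1.0.0   # the image your build pushed
            ports:
              - containerPort: 8080                     # networking settings: port number
                protocol: TCP                           # and protocol type

Apply it with kubectl apply -f deployment.yaml, and Kubernetes works to keep three replicas running.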

Helm

Helm is a package manager for Kubernetes that’s helpful if you deploy the same containers many times.

Helm uses ‘charts’ to deploy containerized applications. Charts are collections of YAML files (templates plus configuration values) that describe everything Kubernetes needs to run the software in the desired state.

When you install or update software with a Helm chart, you do so with a single command. Helm then renders the chart’s templates with your configuration values and deploys the result to the cluster.
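
As a rough sketch, a chart’s values file and the single command that installs it might look like this. The chart, release, and image names are placeholders, and the available keys depend entirely on the chart you use:

  # values.yaml - configuration Helm feeds into the chart's templates
  replicaCount: 3
  image:
    repository: registry.example.com/web-app
    tag: "1.0.0"
  service:
    type: ClusterIP
    port: 8080

  # Install or upgrade the release with one command, for example:
  #   helm upgrade --install web-app ./web-app-chart -f values.yaml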

Kustomize

Kustomize is a YAML-based command line interface (CLI) tool that’s built into kubectl. It lets you customize your applications without templating by layering overlays (small patches) on top of shared base manifests.
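
For instance, a production overlay might reuse shared base manifests and patch only what differs. This is a sketch with placeholder paths and names:

  # kustomization.yaml - a production overlay built on shared base manifests
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
    - ../../base                  # the shared Deployment, Service, and other manifests
  namePrefix: prod-               # prefix object names for this environment
  patches:
    - path: replica-count.yaml    # a small patch that overrides replicas for production
      target:
        kind: Deployment
        name: web-app

  # Apply with: kubectl apply -k overlays/production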

Challenges and concerns with Kubernetes deployments

Scale

Though Kubernetes takes some of the pain out of managing containerized applications at scale, deployments at scale are a different prospect.

Deploying to one or two clusters is fairly easy, but as your cluster count grows, so does the complexity. Deploying often to tens, hundreds, or thousands of clusters makes your software hard to track, manage securely, and maintain.

Modern software architectures, like multi-tenancy or microservices, complicate things further, as customers or applications may need isolated instances or their own customizations.

If you’re a software provider managing releases with GitOps, you could also suffer from ‘YAML sprawl’, where you waste valuable time editing files across your organization’s Git repositories just to progress releases.

Observability

DevOps practices emphasize getting important information to the people who need it as quickly as possible.

As Kubernetes is a command-line first solution, it can be hard to quickly understand what’s deployed where unless you’re a Kubernetes expert. Of course, in modern software development, a lot of non-experts need to understand your application’s status too.

Kubernetes does offer an optional dashboard of sorts, but it’s not particularly useful if you have a lot of clusters in different locations or use hybrid environments.

Best practices for Kubernetes deployments

Use labeling and annotations

Labeling and annotations are different metadata options in Kubernetes.

Labels help you give a cluster’s objects meaningful names or descriptions. You could use them to categorize pods for informational or process purposes, to distinguish between back-end and front-end services, or to highlight pods with sensitive data.

As with their text editing counterparts, Kubernetes annotations exist to give more information to others. You could use them to add context or warnings to objects, or to add clarity for why something works the way it works.

The easiest way to think about it is that labels identify, annotations explain.
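
In a manifest, the difference looks something like this (the label values and annotation text are purely illustrative):

  metadata:
    name: payments-api
    labels:                        # labels identify: used to select, group, and filter objects
      tier: back-end
      data-sensitivity: high
    annotations:                   # annotations explain: free-form notes for humans and tooling
      example.com/owner: "payments-team"
      example.com/notes: "Handles card tokenization; check the runbook before restarting."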

Resource requests and limits on containers

This one’s pretty simple. When you deploy a container, the Kubernetes control plane schedules it onto a node with the resources it needs to run. The container asks; the control plane provides.

Setting requests and limits helps ensure containers only get what they need and won’t impact your overall system. It’s especially useful for fast-scaling applications.
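
In a container spec, requests are what the scheduler reserves for the container, and limits cap what it can consume. A sketch with placeholder values:

  # Part of a Pod or Deployment container spec (values are placeholders)
  containers:
    - name: web-app
      image: registry.example.com/web-app:1.0.0
      resources:
        requests:                  # what the scheduler reserves on the node
          cpu: "250m"
          memory: "256Mi"
        limits:                    # the most the container is allowed to use
          cpu: "500m"
          memory: "512Mi"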

Security

There are many security recommendations for those running their software in Kubernetes. Some decisions will depend on your tech stack or your software’s structure.

Here’s what we think works best for most Kubernetes use cases:

  • Control access to your clusters - Use security protocols or services like Lightweight Directory Access Protocol (LDAP), OpenID Connect (OIDC), or Role-Based Access Control (RBAC). There’s an RBAC sketch after this list.
  • Give your infrastructure as little access to other systems as possible - Known as the Principle of Least Privilege (PoLP).
  • Use your tooling’s security features - Development tools often come with built-in security options - you should use them where it makes sense.
  • Add logging tools to your pipeline - Tracking actions taken in your clusters (and across your infrastructure) will help you see:
    • What caused problems
    • Points of weakness
    • Areas for process or system improvement
  • Use image scanning - Check your images for known security problems before deployment.
  • Track network traffic across your infrastructure - Unexpected traffic could be a sign of a security breach.
  • Keep your tools up-to-date - Software and service providers often update their offerings to fix known vulnerabilities.
  • Follow cyber security news - Being aware of what’s happening in cyber security can alert you to major vulnerabilities.
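
As an example of the first point, RBAC lets you grant a group read-only access to a single namespace. This is a minimal sketch; the group and namespace names are placeholders:

  # Give the 'release-viewers' group read-only access to pods in the 'production' namespace
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: production
    name: pod-reader
  rules:
    - apiGroups: [""]              # "" is the core API group
      resources: ["pods", "pods/log"]
      verbs: ["get", "list", "watch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    namespace: production
    name: read-pods
  subjects:
    - kind: Group
      name: release-viewers
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io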

CI/CD pipeline integration

The most important thing you can do to improve your K8s deployment experience is to embed your Kubernetes deployments into your CI/CD pipeline. Doing so reduces manual tasks and gives you fewer things to think about.

More importantly, it ensures you get regular, valuable feedback. Regular feedback helps guide process improvements and results in more reliable, stable software delivery.
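
What this looks like depends on your tooling. As one hedged sketch, a GitHub Actions workflow could build and push the image, then apply the manifests; the registry, paths, and cluster authentication are placeholders you’d replace with your own setup:

  # .github/workflows/deploy.yml - illustrative only; adapt to your CI/CD tool and cluster access
  name: build-and-deploy
  on:
    push:
      branches: [main]
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Build and push the container image
          run: |
            docker build -t registry.example.com/web-app:${{ github.sha }} .
            docker push registry.example.com/web-app:${{ github.sha }}
        - name: Deploy the manifests to the cluster
          run: kubectl apply -f k8s/   # assumes the runner is already authenticated to the cluster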

Tools to help with Kubernetes deployments

Octopus Deploy

Octopus Deploy helps simplify Kubernetes deployments (and all complex deployments) at scale, with:

  • Consistent deployments with environment progression
  • Easy-to-understand but thorough observability
  • Routine maintenance features

By using Octopus’s variables to customize different actions in similar deployments, you can easily cut down on YAML sprawl.

Codefresh

Codefresh is a complete CI/CD platform that offers a full deployment pipeline. It’s useful for those practicing GitOps and deploying to Kubernetes thanks to features built on Argo.

Octopus acquired Codefresh in March 2024, so no matter your strategy preferences or needs, we have a tool to help you deploy.

Argo CD

Argo CD is an open-source Kubernetes deployment tool. It’s focused heavily on GitOps, meaning you manage all your pipeline’s components in version control.

Other deployment tools

  • GitLab - An end-to-end CI/CD tool for broad use.
  • Spinnaker - An open-source CI/CD solution created by Netflix. Includes integrations for common pipeline tooling, but also their ‘chaos engineering’ testing tool, Chaos Monkey.
  • Flux - A GitOps-focused Kubernetes Continuous Deployment tool.
