Kubernetes - also known as ‘K8s’ - is an open-source container orchestration tool that simplifies deploying, maintaining, and scaling containerized software.
Kubernetes automates many of the trickier elements of container deployments, like:
- Allocating and scaling resources
- Updating software
- Restoring services
The basic concepts of Kubernetes
Containers versus Kubernetes
Newcomers to containerization sometimes wonder what the difference between containers and Kubernetes is. The answer is that they’re not comparable because they’re technologies that work together.
Containers are lightweight, portable virtual environments you can use to serve software to a deployment target. Kubernetes is a tool designed to solve the problems of managing containers and their resources on a deployment target.
To that end, you can deploy containers without Kubernetes, but without containers there’s no need to consider Kubernetes.
When you should consider Kubernetes
You should consider Kubernetes if you:
- Need to scale software alongside demand
- Manage different versions of the same software for many customers, as with multi-tenancy
- Need infrastructure flexibility to save costs
- Develop your software using a microservices architecture
- Want your software to have high availability
Kubernetes structure (clusters, nodes, and pods explained)
Each Kubernetes instance is a ‘cluster,’ so called because it groups together two kinds of components: a ‘control plane’ and ‘nodes.’
The control plane (formerly called the ‘master node’) hosts the Kubernetes API server and the components that orchestrate everything in the cluster. It schedules your containers onto nodes and proactively tells nodes when to act by talking to each node’s ‘kubelet’ - an agent that runs on the node and manages communication to and from the control plane.
Standard nodes, also known as workers, contain ‘pods.’ Pods are where deployed containers live and run. Nodes spin up and tear down pods when told to do so by the control plane.
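To make this concrete, here’s a hedged sketch of the smallest unit you’d deploy - a single Pod manifest. The name and image are illustrative, not taken from any real project:

```yaml
# Minimal Pod manifest; name and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      image: nginx:1.25   # any container image works here
```

Applied with `kubectl apply -f`, the control plane schedules this Pod onto a node, whose kubelet then starts the container.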
Kubernetes nodes also run a networking component called a ‘kube-proxy’. The kube-proxy manages network traffic to and from pods and their containers.
Other elements that help make up a cluster are container engines and runtimes, which handle pulling and running containers. There are different types of engines and runtimes, but your hosting service usually dictates which you get, so most people don’t need to know them in detail.
In fact, you rarely need to manage any of this directly. Though it’s helpful to understand the structure and workings of a cluster, Kubernetes’ purpose is to save you from fretting about how to manage or automate these small details.
When working with Kubernetes, you only really need to tell your cluster what you want to deploy and the state it needs to run in.
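That declaration usually takes the form of a Deployment manifest: you state what to run and how many copies, and the control plane continuously works to match that state. A hedged sketch, with an illustrative app name and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 3             # desired state: keep three copies running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0  # illustrative image
```

You never tell Kubernetes *how* to reach three replicas - if one dies, the control plane notices the gap between actual and desired state and creates a replacement.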
Cost-effective scalability and high availability
Kubernetes can detect cluster activity and automatically change the number of nodes or resources needed to keep running.
Let’s say the node running your containerized software suddenly gets more traffic than expected. Your software could slow down or, worse, fall over. You can set up Kubernetes to help manage this by automatically spinning up extra pods and nodes when it detects high traffic, and have it tear them down when no longer needed.
This can help cut infrastructure costs, especially if hosting with major cloud providers.
Self-healing and resilience
When you have problems unrelated to traffic - containers becoming corrupt, for example - Kubernetes can detect and automatically take action.
If a container running your software stops working, Kubernetes can tear it down and replace it with a fresh container as soon as it’s detected.
That means you can spend less time monitoring and troubleshooting your software.
You can run Kubernetes anywhere
Wherever you can run containers, you can use Kubernetes to manage them, including:
- Windows (using VMs)
- Self-hosted hardware
- All major cloud service providers
Steep learning curve
Kubernetes solves many problems for those delivering software with containers but it can be difficult to understand.
Those used to interacting with their tools via UIs could struggle. The most common way to interact with Kubernetes is through a command line interface (CLI) called kubectl, run from a terminal (though you can install a web-based dashboard, you’ll still need the CLI to get up and running).
Even if you’re used to working with CLIs, Kubernetes can be… intricate. It’s an opinionated solution that splits containerization management into layers not easily or obviously understandable without experience.
Smaller talent pools
Due to its swift emergence and learning curve, people specializing in Kubernetes can be hard to find. As Kubernetes gets more widely adopted, that should get easier over time.
If adding Kubernetes to your stack, expect to spend significant time and resources on learning or hiring.
Harder to track deployments at scale
If you deploy one app to a single Kubernetes cluster, you’ll likely know exactly what version is live.
If you deploy apps to many Kubernetes clusters, it’s harder to get a quick, clear picture of what’s deployed where. The problem only worsens as your software scales. Without solid tooling, you might spend a lot of time trawling through text logs.
- Kubernetes deployments
- Set up a local Kubernetes deployment pipeline
- How Octopus can help with container deployments