3 Kubernetes Pod YAML Examples: From Basic to Advanced

What is a Kubernetes YAML?

A Kubernetes YAML file is a configuration document written in YAML (YAML Ain’t Markup Language) that describes resources managed by the Kubernetes container orchestration system. These YAML files allow users to define the desired state of objects such as pods, deployments, and services.

The primary advantage of using YAML in Kubernetes is its human-readability, enabling both operators and developers to create and maintain resource definitions efficiently. Each YAML file outlines the specifications Kubernetes uses to create and manage resources in the cluster.

In the context of Kubernetes pods, a YAML file can declare container images and their versions, port mappings, environment variables, resource requests and limits, and other operational settings (replica counts are declared on controllers such as Deployments rather than on individual pods). Kubernetes uses the information in these files to reconcile the actual pod state with the user’s declarative instructions.

What are Kubernetes pods?

In Kubernetes, a pod is the smallest deployable unit and acts as a wrapper for one or more containers. A pod encapsulates containers that share the same network IP, storage, and set of instructions for how to run the application. While most pods contain a single container, they can include additional containers that work closely together, supporting features like sidecar patterns for logging or proxying. Pods are ephemeral in nature.

Once a pod is created, it runs on a node until it is terminated or evicted. If a pod dies, it is not resurrected; instead, a controller such as a Deployment can create a replacement pod with a new identifier. This model supports rolling updates and high availability while abstracting many operational burdens from application teams. By grouping closely related containers in a pod, Kubernetes simplifies networking, storage sharing, and resource scheduling.
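
To make the sidecar pattern mentioned above concrete, here is a minimal sketch of a two-container pod: an nginx container writes logs to a shared emptyDir volume, and a helper container tails them. The pod name, image tags, and log path are illustrative assumptions, not taken from the examples later in this article:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.21
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer          # sidecar: follows the web server's access log
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx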

This is part of a series of articles about Kubernetes deployment

Key components of a pod YAML file

Here is an example of a pod YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: default
  labels:
    app: my-app
    tier: frontend
  annotations:
    description: "This pod runs a simple web application"
spec:
  containers:
    - name: web-container
      image: nginx:1.21
      ports:
        - containerPort: 80
      env:
        - name: ENVIRONMENT
          value: production
      resources:
        limits:
          memory: "256Mi"
          cpu: "500m"
        requests:
          memory: "128Mi"
          cpu: "250m"
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: shared-data
      emptyDir: {}
  restartPolicy: Always
  nodeSelector:
    disktype: ssd
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: zone
                operator: In
                values:
                  - us-east1-a
                  - us-east1-b

The above YAML file contains several key sections that define how a pod should be configured and deployed in Kubernetes:

  1. apiVersion: Specifies the version of the Kubernetes API that the resource conforms to. For pods, this is usually v1.
  2. kind: Identifies the type of resource being defined, which, in this case, is a pod.
  3. metadata: Provides information about the pod, such as its name, namespace, labels, and annotations. This helps Kubernetes identify and manage the pod.
  4. spec: Defines the desired state of the pod. Within the spec section, the key fields include:
     • containers: A list of containers that will run in the pod. Each container has its own configuration, such as the image to use, ports to expose, environment variables, resource limits, etc.
     • volumes: Defines storage resources that are shared across containers in the pod. Volumes can be used to persist data or share information between containers.
     • restartPolicy: Specifies the behavior when a container in the pod fails. Common values are Always, OnFailure, and Never.
     • nodeSelector: Optional field that restricts the pod to run only on nodes that match the specified labels.
     • affinity: Provides finer-grained control over pod placement based on node or pod selection rules.

These components ensure that Kubernetes understands the structure, configuration, and operational expectations of the pod within the cluster.
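
If you want to explore or validate these fields before deploying anything, kubectl can help. For example, assuming the manifest above is saved as example-pod.yaml (a filename chosen here for illustration):

# Show the schema for the containers section directly from the cluster
kubectl explain pod.spec.containers

# Validate the manifest locally without creating the pod
kubectl apply --dry-run=client -f example-pod.yaml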

Related content: Read our guide to Kubernetes deployment YAML

Kubernetes pod YAML examples

The examples below are adapted from the Kubernetes documentation.

1. Creating a YAML manifest for a simple Kubernetes pod

To deploy a pod in Kubernetes, you first need to create a YAML file that defines the configuration for the pod. This YAML file specifies the essential details about the pod, such as the containers to run, the image to use, and other configurations. Here’s a basic example of a YAML file that deploys a simple web application:

apiVersion: v1
kind: Pod
metadata:
  name: my-first-app
spec:
  containers:
    - name: my-first-app
      image: richardchesterwood/k8s-fleetman-webapp-angular:release0

Let’s save the above file as p1.yaml. We can apply this configuration using the following command:

kubectl apply -f p1.yaml

In this example:

  1. apiVersion: v1: Specifies the version of the Kubernetes API to use. Version v1 is commonly used for basic resources like pods.
  2. kind: Pod: Defines the type of Kubernetes object being created—in this case, a pod.
  3. metadata: Contains essential information such as the pod’s name, my-first-app, which uniquely identifies it within the cluster.
  4. spec: Specifies the desired configuration for the pod, including the containers that will run inside the pod.
  5. containers: Defines the container that will be deployed inside the pod. In this example, it specifies the container name (my-first-app) and the image (richardchesterwood/k8s-fleetman-webapp-angular:release0), which pulls the application image from Docker Hub.

After creating this file, you can apply it to your cluster using the kubectl apply -f <filename>.yaml command, which will instruct Kubernetes to create the resources as defined in the YAML file.
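
Once applied, you can confirm that the pod reached the Running state; the pod name below comes from the manifest above:

kubectl get pod my-first-app
kubectl describe pod my-first-app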

2. Create static pods in Kubernetes

Static pods are special types of pods that are managed directly by the kubelet on a given node, without the Kubernetes API server’s involvement in their lifecycle. These pods are bound to one particular node and are not controlled by Kubernetes’ control plane, unlike regular pods managed by deployments or other controllers. The kubelet continuously monitors these pods and restarts them if they fail.

There are two ways to create a static pod: using a filesystem-hosted configuration or a web-hosted configuration.

Filesystem-hosted static pod manifest

  1. Choose a node: Select the node where you want to run the static pod. For example, use my-node1. If you are using minikube, you will need to SSH into the minikube VM using the following command:

    minikube ssh
  2. Create a manifest directory: On the node, create a directory where the static pod YAML file will be placed. This directory is periodically scanned by the kubelet for changes.

    mkdir -p /etc/kubernetes/manifests/
  3. Create the static pod definition: Create a YAML file that defines the static pod. For example, to run an NGINX web server:

    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
      labels:
        role: my-role
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP

  4. Place the manifest file: Save the YAML file in the /etc/kubernetes/manifests/ directory.

    cat <<EOF >/etc/kubernetes/manifests/static-web.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
      labels:
        role: my-role
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
              protocol: TCP
    EOF
  5. Configure the kubelet: Ensure that the kubelet is set to scan the correct directory for static pods by updating the kubelet’s configuration file. In the kubelet configuration, set the staticPodPath to /etc/kubernetes/manifests/ (see the example snippet after this list).

  6. Restart the kubelet: After configuring the kubelet, restart it to pick up the changes:

    systemctl restart kubelet
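
For reference, here is a minimal sketch of the relevant part of a kubelet configuration file. The path /var/lib/kubelet/config.yaml is a common default (for example on kubeadm-based clusters), but your distribution may keep it elsewhere:

# /var/lib/kubelet/config.yaml (location may vary by distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests/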

Web-hosted static pod manifest

  1. Create a web-hosted YAML file: Place the static pod definition on a web server, making it accessible via a URL.

  2. Configure the kubelet: Set the --manifest-url flag in the kubelet configuration to point to the URL of the web-hosted manifest file.

    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --manifest-url=<manifest-url>"
  3. Restart the kubelet: After configuring the URL, restart the kubelet to apply the changes.

Observing static pod behavior

Once the kubelet restarts, it automatically starts all static pods defined in the specified directory. You can check the status of the static pod by running the following command on the node:

crictl ps

This command will display information about running containers, including static pods.

You can also view the mirror pod on the Kubernetes API server:

kubectl get pods

If you attempt to delete the mirror pod using kubectl, Kubernetes will not remove the static pod. The kubelet continues to run the static pod on the node.
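
For example, assuming the node is named my-node1, the mirror pod appears as static-web-my-node1. Deleting it through the API only prompts the kubelet to recreate the mirror entry, while the underlying container keeps running:

kubectl delete pod static-web-my-node1
kubectl get pods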

Dynamic addition and removal of static pods

The kubelet periodically scans the directory where the static pod manifests are stored. If a file is added or removed, the kubelet automatically creates or deletes the corresponding static pod.

For example:

  1. Move the static pod definition file to another location:

    mv /etc/kubernetes/manifests/static-web.yaml /tmp
    
    sleep 20
    
    crictl ps

  2. Move the file back to the original directory:

    mv /tmp/static-web.yaml /etc/kubernetes/manifests/
    
    sleep 20
    
    crictl ps

This shows the static pod being stopped when its manifest is removed and running again once the manifest is restored.

3. Configure a pod to use a PersistentVolume for storage

In Kubernetes, PersistentVolumes (PVs) provide durable storage that persists beyond the lifecycle of individual pods. You can configure a pod to use a PersistentVolume by creating a PersistentVolumeClaim (PVC) that binds to the PV and then referencing that claim in the pod configuration. Here’s how to set up a pod that uses a PersistentVolume for storage.

1. Create an Index File on the Node

Before creating the PersistentVolume and PersistentVolumeClaim, you’ll need to create a file on the node that will be used as part of the PersistentVolume storage.

  1. SSH into the Node: Open a shell to the Node where you’ll configure the PersistentVolume (e.g., using minikube ssh if using Minikube).

  2. Create the directory:

    sudo mkdir /mnt/data
  3. Create an index file:

    sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
  4. Verify the file:

    cat /mnt/data/index.html

The output should display: Hello from Kubernetes storage.

2. Create a PersistentVolume (PV)

Now, create a PersistentVolume that references the file or directory you created on the node.

  1. PV configuration: Here’s an example of a hostPath PersistentVolume configuration in YAML (save it as pv-volume.yaml):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume
      labels:
        type: local
    spec:
      storageClassName: manual
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"

This configuration:

  • Uses a hostPath volume pointing to /mnt/data on the node.
  • Specifies 10Gi of storage capacity and ReadWriteOnce access mode (for use by one node at a time).
  2. Create the PV:

    kubectl apply -f pv-volume.yaml

  3. Check the PV status:

    kubectl get pv task-pv-volume

The STATUS should show as Available, indicating the PV is available but not yet bound.

3. Create a PersistentVolumeClaim (PVC)

Next, create a PersistentVolumeClaim to request a defined amount of storage that can be bound to a PersistentVolume.

  1. PVC configuration: Example YAML for a PersistentVolumeClaim (save it as pv-claim.yaml):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: task-pv-claim
    spec:
      storageClassName: manual
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi

This claim requests 3Gi of storage, which is less than the 10Gi available in the PV.

  2. Create the PVC:

    kubectl apply -f pv-claim.yaml
  3. Check the status of the PV and PVC:

    kubectl get pv task-pv-volume
    
    kubectl get pvc task-pv-claim

The STATUS of the PV should now show as Bound, and the PVC should be Bound to the PV.

4. Create a Pod That Uses the PVC

Now, create a pod that mounts the PersistentVolumeClaim to access the storage.

  1. Pod configuration: Here is an example pod configuration that mounts the PVC as a volume (save it as pv-pod.yaml):

    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
    This configuration mounts the PVC (task-pv-claim) to /usr/share/nginx/html in the container running the NGINX web server.

  2. Create the pod:

    kubectl apply -f pv-pod.yaml
  3. Verify the pod is running:

    kubectl get pod task-pv-pod

5. Verify Storage Access in the Pod

After the pod is running, you can verify that NGINX is serving the file from the PersistentVolume.

  1. Access the pod shell:

    kubectl exec -it task-pv-pod -- /bin/bash
  2. Install curl (if necessary) and test:

    apt update
    apt install curl
    curl http://localhost/

The output should be:

Hello from Kubernetes storage

This confirms that NGINX is serving the index.html file from the PersistentVolume.
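
When you’re finished experimenting, one way to clean up is to delete the resources created in this example (and, on the node, remove /mnt/data if you no longer need it):

kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume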

Best practices for writing and managing Kubernetes pod YAML files

Developers should be familiar with these practices when working with YAML files for Kubernetes pods.

1. Organize YAML files clearly

Organizing YAML files contributes to maintainability and reduces errors. One principle is to break complex setups into multiple files, grouping related resources—such as pods, services, and ConfigMaps—into their own YAML manifests. Using descriptive filenames and logical folder structures makes it easier for operators and developers to locate and update configuration segments.

Clear separation also aids version control, where changes to resources can be isolated and tracked independently. For multi-environment workflows, maintaining environment-specific directories or using overlays allows configuration to be customized without duplicating entire files.
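
As an illustration, a layout along these lines keeps related manifests together while isolating environment-specific configuration. The names are hypothetical and assume a Kustomize-style base/overlay structure:

k8s/
├── base/
│   ├── pod.yaml
│   ├── service.yaml
│   └── configmap.yaml
└── overlays/
    ├── staging/
    │   └── kustomization.yaml
    └── production/
        └── kustomization.yaml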

2. Versioning and compatibility

Maintaining API version compatibility is essential as Kubernetes evolves. YAML files should specify the correct apiVersion for each resource type, aligning with the cluster’s Kubernetes version. As deprecated APIs are removed in newer releases, outdated YAML manifests may fail to deploy or function as intended. Regular reviews and upgrades of YAML files, using official Kubernetes changelogs and migration guides, help prevent unexpected disruptions.

Using version control systems, such as Git, is a best practice for managing YAML files. Commit history provides traceability for each change, aiding rollback and troubleshooting when needed. Tagging or branching files per environment or release cycle also helps teams manage compatibility with various clusters, enabling smooth upgrades and maintenance processes.
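
Before upgrading a cluster or a manifest, you can check which API groups and versions the cluster currently serves, for example:

# List the API versions served by the cluster
kubectl api-versions

# List resource kinds, their API group/version, and supported verbs
kubectl api-resources -o wide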

3. Use labels and annotations wisely

Labels in Kubernetes act as key-value pairs for resource grouping, selection, or filtering. Well-chosen labels enable efficient querying and management of resources through selectors, impacting scaling, maintenance, and monitoring workflows. For example, labeling pods by application, environment, or release version supports granular operations using Kubernetes commands or tools.

Annotations are meant for attaching non-identifying metadata to resources—such as build numbers, contact details, or links to documentation. Unlike labels, annotations are not used for selection but can carry extended information beneficial for automation or troubleshooting. Consistently applying labels and annotations across YAML manifests improves discoverability.
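
For instance, with metadata along these lines (the label and annotation values are illustrative), pods can then be selected by label from the command line:

metadata:
  labels:
    app: my-app
    environment: production
    release: v1.4.2
  annotations:
    contact: "platform-team@example.com"
    docs: "https://wiki.example.com/my-app"

Selecting by label then becomes a one-liner:

kubectl get pods -l app=my-app,environment=production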

4. Define resource limits

Specifying resource limits and requests in pod YAML files is crucial for efficient cluster operation. The resources field within a container specification allows you to set CPU and memory thresholds, which the scheduler uses to allocate resources and prevent overcommitment. Setting reasonable minimum (requests) and maximum (limits) values helps avoid performance degradation and pod evictions under heavy load.

Leaving out resource definitions can result in unpredictable cluster behavior, as pods may consume more resources than intended, potentially starving other workloads or impacting system components. Reviewing and tuning these limits based on profiling data or recommendations ensures a stable, fair, and economic use of cluster resources.
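
At the container level, such a snippet might look like this; the values mirror the earlier example and should be tuned from profiling data rather than copied verbatim:

resources:
  requests:
    memory: "128Mi"
    cpu: "250m"
  limits:
    memory: "256Mi"
    cpu: "500m"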

5. Use health checks

Health checks, defined as liveness and readiness probes in YAML, are vital for reliable workload management in Kubernetes. Liveness probes detect and restart unhealthy containers, helping eliminate manual intervention and minimizing downtime. Readiness probes signal when a container is prepared to receive traffic, preventing premature routing of requests to services that are not fully initialized or operational.

Configuring these probes typically involves HTTP endpoints, TCP checks, or command execution within the container. Adjusting parameters such as initial delays, intervals, and failure thresholds tailors the health checks to the application’s startup characteristics and recovery patterns.
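
A minimal sketch of both probe types for an HTTP service is shown below; the endpoint paths, port, and timing values are illustrative and should be adjusted to the application’s startup and recovery behavior:

containers:
  - name: web
    image: nginx:1.21
    ports:
      - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5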

Automating Kubernetes deployment with Octopus

Simplified Kubernetes deployment management

Octopus Deploy addresses the complexity that emerges when scaling Kubernetes deployments across hundreds or thousands of applications. While Kubernetes starts simple with basic scripts and YAML files, it quickly evolves into unwieldy “YAML sprawl” and DIY tooling that becomes difficult to manage and troubleshoot. Octopus provides a unified platform that models all the complexity of delivering software and AI workloads to Kubernetes at scale, eliminating the need for developers to bounce between multiple tools when debugging deployments.

Streamlined environment management and promotion

One of Octopus’s key strengths is its built-in environment modeling, which many Kubernetes tools lack. Instead of manually updating manifest files and managing risky, inconsistent release promotions, Octopus allows teams to define their deployment process once and reuse it across all environments. This approach reduces custom scripting requirements and enables confident production deployments, as the same process has already been validated in other environments like staging and testing.

Enterprise-grade visibility and compliance

Octopus provides comprehensive visibility into Kubernetes applications through a single dashboard that displays live status, deployment history, logs, and manifests across all clusters and environments. The platform includes enterprise-ready compliance features such as built-in ITSM integrations, role-based access control (RBAC), and complete audit trails. Teams can use runbooks to control kubectl permissions while maintaining security, as deployment tasks can run directly on clusters without exposing cluster APIs to unauthorized users.
