Kubernetes YAML Kind: 22 types of Kubernetes objects explained

What is Kubernetes YAML?

Kubernetes YAML is the format for defining and managing resources within a Kubernetes cluster. It uses YAML (YAML Ain’t Markup Language), a human-readable data serialization standard, to describe applications and their dependencies. YAML files, also called manifests, communicate the desired state of objects like Pods, Services, and Deployments.

These definitions are declarative, meaning developers specify what they want, and Kubernetes works to make the cluster match that state. A typical Kubernetes YAML file contains fields such as apiVersion, kind, metadata, and spec. The kind field identifies the type of resource being defined, such as a Pod, Service, or ConfigMap.

For example, when a YAML file starts with the code below, it means that the object being described is a Deployment object:

apiVersion: apps/v1
kind: Deployment

In this article we’ll explain all the common YAML kind values, or object types, and provide some examples of common Kubernetes objects.

This is part of a series of articles about Kubernetes deployment

Common Kubernetes YAML ‘kind’ types

Workload resources

Workload resources describe how containers are deployed and managed in a cluster:

  1. Pod: The basic execution unit in Kubernetes. A pod encapsulates one or more containers, shared storage, network, and a specification for how to run the containers. Pods are ephemeral and typically managed by higher-level controllers like deployments.
  2. Deployment: Provides declarative updates for stateless applications. It maintains the desired number of replicas, enables rolling updates, and allows rollback to previous versions if needed. Deployments are commonly used for web services and APIs.
  3. StatefulSet: Manages stateful applications requiring stable network identities and persistent storage. Unlike deployments, StatefulSets assign each pod a unique, consistent identity across restarts. Suitable for databases and distributed systems.
  4. DaemonSet: Ensures a pod runs on all (or selected) nodes. Common for system-level services like log collectors, monitoring agents, and network proxies.
  5. Job: Runs a finite task to completion. Once the task is completed successfully, the pod terminates. Useful for batch processing or data migrations.
  6. CronJob: Schedules jobs to run periodically using cron syntax. Commonly used for recurring tasks such as backups or report generation.
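As a concrete illustration of the last two kinds, a CronJob wraps a Job template together with a cron-format schedule. This is a minimal sketch; the name, schedule, and image placeholder are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
    name: nightly-backup            # illustrative name
spec:
    schedule: "0 2 * * *"           # run daily at 02:00
    jobTemplate:
        spec:
            template:
                spec:
                    containers:
                        - name: backup
                          image: <Image>    # placeholder, as in the examples below
                    restartPolicy: OnFailure
```

The `restartPolicy` must be `OnFailure` or `Never` for Job pods, since a Job is expected to run to completion rather than restart indefinitely.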

Service and networking resources

These resources manage connectivity and service discovery:

  1. Service: Provides a stable IP and DNS name to access a set of pods. It automatically performs load balancing across healthy pods. Types include ClusterIP (internal), NodePort (external via node IPs), and LoadBalancer (cloud-based external IP).
  2. Ingress: Acts as an HTTP reverse proxy, routing external requests to internal services based on rules like hostname or path. It supports TLS termination and integrates with ingress controllers for traffic management.
  3. IngressClass: Specifies the controller responsible for implementing an ingress. It allows different ingress controllers to coexist in the same cluster, each handling different ingress resources.
  4. NetworkPolicy: Defines rules to control traffic flow between pods or namespaces. It enables access control, restricting which pods or services can communicate with each other over the network.
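For illustration, a minimal Ingress manifest routes requests for a hostname to an internal service. The hostname, service name, and ingressClassName here are placeholders, and the manifest assumes a matching ingress controller is installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: example-ingress
spec:
    ingressClassName: nginx         # assumes an NGINX ingress controller exists
    rules:
        - host: app.example.com     # illustrative hostname
          http:
              paths:
                  - path: /
                    pathType: Prefix
                    backend:
                        service:
                            name: example-app
                            port:
                                number: 80
```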

Configuration and secret management

These resources separate configuration from container images:

  1. ConfigMap: Stores configuration data in plain text, such as environment variables, command-line arguments, or configuration files. Pods can reference ConfigMaps at runtime, allowing changes without rebuilding images.
  2. Secret: Stores sensitive data like passwords, OAuth tokens, and SSH keys. Secret values are base64-encoded (encoded, not encrypted, by default) and can be mounted as volumes or exposed as environment variables. They keep credentials out of application code and container images.
  3. ResourceQuota: Sets limits on resource usage (CPU, memory, storage, number of objects) within a namespace. Helps prevent resource exhaustion and enforces fairness in multi-tenant clusters.
  4. LimitRange: Defines default, minimum, and maximum resource limits for pods or containers in a namespace. Ensures consistent resource allocation and prevents excessive usage.
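For illustration, a ResourceQuota manifest caps aggregate usage within a namespace. The namespace name and the specific limits below are example values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
    name: team-quota                # illustrative name
    namespace: team-a               # illustrative namespace
spec:
    hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "20"
```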

Storage resources

Storage objects provide persistent data handling across pod restarts:

  1. PersistentVolume (PV): Represents a piece of storage in the cluster, provisioned by an admin or created dynamically. It abstracts the underlying storage technology (e.g., NFS, iSCSI, cloud volumes).
  2. PersistentVolumeClaim (PVC): A user’s request for storage, specifying size and access mode. Kubernetes binds the claim to an available PV that matches the request.
  3. StorageClass: Defines the provisioner and parameters for dynamically creating PVs. Different classes can represent performance tiers or backup policies, allowing flexible storage provisioning.
  4. Volume: Defined within the pod spec to mount storage. Volumes can be ephemeral (like emptyDir) or persistent (like those backed by PVCs). They provide shared storage between containers in a pod.
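A PersistentVolumeClaim ties these pieces together: it requests storage of a given size and access mode, optionally naming a StorageClass. The claim name, size, and class name below are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: data-claim                # illustrative name
spec:
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 10Gi
    storageClassName: standard      # assumes a class named "standard" exists
```

Once bound, the claim can be referenced from a pod spec as a volume, giving the pod persistent storage that survives restarts.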

Access control resources

These manage permissions for users and workloads:

  1. ServiceAccount: An identity for pods to interact with the Kubernetes API. By default, every namespace has a default service account, but custom accounts can be created for more controlled access.
  2. Role and ClusterRole: Define sets of permissions using rules that specify allowed operations on resources. Role applies to a specific namespace, while ClusterRole applies across all namespaces or cluster-wide resources.
  3. RoleBinding and ClusterRoleBinding: Grant the permissions defined in a Role or ClusterRole to users, groups, or service accounts. Bindings control who can do what within or across namespaces.
  4. PodSecurityPolicy (deprecated): Previously used to enforce security settings on pod specifications, such as allowed volume types, privilege escalation, or host networking. It has been replaced by PodSecurity Admission in newer Kubernetes versions.
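For example, a Role granting read-only access to pods can be bound to a service account with a RoleBinding. The names used here (pod-reader, read-pods, example-sa) are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
    name: pod-reader
    namespace: default
rules:
    - apiGroups: [""]               # "" refers to the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
    name: read-pods
    namespace: default
subjects:
    - kind: ServiceAccount
      name: example-sa              # illustrative service account
      namespace: default
roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io
```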

Learn more in our detailed guide to Kubernetes deployment YAML

Kubernetes YAML ‘kind’ examples

Below are basic YAML templates for common Kubernetes resource kinds. These examples highlight the consistent use of apiVersion, kind, and metadata, which appear in all manifest files, and provide a starting point for customization. Each of these examples can be modified to fit a particular environment. Using them as templates helps reduce syntax errors and supports consistent deployment practices.

Deployment object example

apiVersion: apps/v1
kind: Deployment
metadata:
    name: example-app
spec:
    selector:
        matchLabels:
            app: example-app
    template:
        metadata:
            labels:
                app: example-app
        spec:
            containers:
                - name: example-app
                  image: <Image>
                  resources:
                      limits:
                          memory: "128Mi"
                          cpu: "500m"
                  ports:
                      - containerPort: <Port>

This defines a deployment for a stateless application. It ensures pods are created and maintained with specified compute resources, container image, and exposed port.

Service object example

apiVersion: v1
kind: Service
metadata:
    name: example-app
spec:
    selector:
        app: example-app
    ports:
    - port: <Port>
      targetPort: <Target Port>

This manifest sets up a service that routes traffic to pods with the label app: example-app. The port field is the port the service exposes, and targetPort is the container port that traffic is forwarded to.

ConfigMap object example

apiVersion: v1
kind: ConfigMap
metadata:
    name: example-app
data:
    key: value

This file creates a ConfigMap named example-app with a key-value pair. The values can be injected into pods as environment variables or mounted as files.

Secret object example

apiVersion: v1
kind: Secret
metadata:
    name: example-secret
type: Opaque
data:
    password: <Password>

This manifest defines a secret used for storing sensitive data. Values must be base64-encoded. These can be consumed by pods as environment variables or mounted files.

Best practices for working with Kubernetes object YAML manifests

Here are some useful practices to consider when using various Kubernetes YAML kind types.

1. Use stable API versions

Kubernetes evolves quickly, and with each version, certain APIs may be deprecated or removed. Relying on beta or alpha APIs introduces risk because these versions can change without warning, affecting compatibility and breaking deployments. Stable APIs, designated without a beta or alpha suffix (e.g., apps/v1), are tested and supported across multiple Kubernetes versions.

For example, deployments should use apps/v1, not older APIs like extensions/v1beta1, which were deprecated and eventually removed. Using stable APIs ensures manifests remain functional across upgrades. Developers should routinely check the Kubernetes API reference and release notes to verify the version being used is current and supported.

2. Define resource requests and limits

Kubernetes schedules pods based on the requested amount of resources, not their actual usage. If no requests are defined, the scheduler may place multiple pods on the same node assuming minimal usage, potentially leading to resource contention. Limits prevent any single container from monopolizing CPU or memory. For example, a pod with requests: 200Mi and limits: 512Mi will always get at least 200Mi of memory, but not exceed 512Mi.

Defining these values also helps Kubernetes make better decisions under load or during autoscaling. In production clusters, it’s essential to benchmark application performance and set realistic requests and limits based on observed usage. These settings can also trigger Kubernetes mechanisms like eviction or throttling when a container exceeds its defined limit.
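The request and limit values mentioned above map to the resources block of a container spec. A sketch using the memory figures from the example (the CPU figures are illustrative additions):

```yaml
resources:
    requests:
        memory: "200Mi"     # guaranteed minimum used for scheduling
        cpu: "250m"         # illustrative value
    limits:
        memory: "512Mi"     # exceeding this can trigger eviction
        cpu: "500m"         # exceeding this triggers throttling
```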

3. Implement semantic labels and annotations

Labels are critical for organizing and selecting Kubernetes resources. They are used by components like services, deployments, and monitoring tools to associate and manage resources logically. Semantic labeling (e.g., app=nginx, tier=frontend, env=prod) enables consistent naming schemes and makes querying and filtering through kubectl or other tools straightforward. Labels also influence service selectors, network policies, and auto-scaling rules.

Annotations, while not used for selection, add useful metadata that helps with auditability, observability, and automation. For instance, including a Git commit hash or build version as an annotation can help trace a deployment to its source. Many tools use annotations to store configuration or operational state (e.g., ingress controllers or service mesh configurations).
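In a manifest, labels and annotations both live under metadata. A sketch using the label examples above (the annotation keys and values are illustrative):

```yaml
metadata:
    name: example-app
    labels:
        app: nginx
        tier: frontend
        env: prod
    annotations:
        example.com/git-commit: "a1b2c3d"   # illustrative; traces a deployment to its source
        example.com/build-version: "1.4.2"  # illustrative
```

Prefixing annotation keys with a domain, as shown, is the conventional way to avoid collisions with keys used by other tools.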

4. Validate manifests before deployment

Manifest validation is a safeguard against syntax errors and misconfigurations. Applying a malformed YAML file can result in silent failures, or worse, a partial deployment that’s difficult to debug. Running kubectl apply --dry-run=client checks manifests locally without making changes to the cluster, catching many errors early.

Another option is to use kubectl explain to understand the structure and accepted fields for any resource type. Beyond kubectl, static analysis tools like kubeval, kube-linter, and Conftest validate against Kubernetes schemas and enforce policy compliance. These tools catch subtle issues like missing fields, deprecated API usage, or insecure configurations.

5. Avoid using naked pods

Naked pods—pods created without a controller like a deployment or job—do not benefit from Kubernetes’ self-healing capabilities. If a naked pod crashes or is deleted, it won’t be recreated automatically. This makes them unsuitable for production workloads where reliability and fault tolerance are critical.

Instead, use deployments, statefulsets, or daemonsets to manage pods, as these controllers monitor the pod lifecycle and ensure the desired number of replicas are always running. Using controllers also unlocks features such as rolling updates, automated rollbacks, and pod distribution strategies.

For example, a deployment allows zero-downtime updates and maintains application availability during upgrades. It also simplifies day-two operations like scaling, configuration updates, and canary deployments.

Automating Kubernetes deployment with Octopus

Reducing Kubernetes complexity at scale

Octopus Deploy tackles the mounting complexity that organizations face when managing Kubernetes deployments across hundreds or thousands of applications. What begins as straightforward scripts and a handful of YAML files inevitably grows into an unmanageable mess of “YAML sprawl” and custom tooling that’s difficult to maintain. Octopus consolidates this complexity into a single platform designed to handle software and AI workload delivery to Kubernetes at enterprise scale, freeing developers from the frustration of switching between multiple tools during troubleshooting.

Consistent deployments across environments

A major advantage of Octopus is its native environment modeling capability, which is notably absent from many Kubernetes tools. Rather than manually editing manifest files and managing error-prone release promotions, teams can define their deployment process once and apply it consistently across all environments. This approach minimizes the need for custom scripts and builds confidence in production releases, since the identical process has already been tested and proven in development and staging environments.

Complete visibility with enterprise security

Octopus delivers comprehensive oversight of Kubernetes applications through a centralized dashboard that shows real-time status, deployment history, logs, and manifests across every cluster and environment. The platform comes equipped with enterprise-level compliance capabilities including integrated ITSM tools, role-based access controls, and detailed audit logging. Teams can leverage runbooks to manage kubectl access permissions securely, with deployment tasks executing directly within clusters without requiring broader API exposure.
