What is Docker?
Docker is an open-source platform to automate the deployment, scaling, and management of applications. It uses containerization to package software and its dependencies, ensuring that an application runs consistently across different computing environments.
Each container is an isolated unit, providing everything needed to run the application, including the code, runtime, libraries, and system tools. Containers are lightweight and efficient, sharing the operating system kernel while maintaining isolation from one another.
This approach reduces system resource consumption compared to traditional virtual machines. Docker simplifies development workflows by allowing developers to build, test, and deploy applications in a controlled environment, promoting a more efficient process.
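To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile; the base image and JAR name are illustrative, not from any specific project:

```dockerfile
# Hypothetical example: package a pre-built Java application with its runtime.
# The base image tag and JAR path are illustrative assumptions.
FROM eclipse-temurin:17-jre

# Copy the already-built application into the image.
COPY target/app.jar /app/app.jar

# Run the application when the container starts.
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

Everything the application needs at runtime travels inside the image, so the same artifact behaves identically on a laptop and on a production host.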
This is part of a series of articles about CI/CD.
How does Docker support CI/CD processes?
Docker improves continuous integration and continuous deployment (CI/CD) pipelines by providing a consistent environment throughout the development lifecycle. Containers eliminate the “it works on my machine” problem by ensuring that applications run the same in development, testing, and production environments.
Docker’s agility allows for quick creation and teardown of instances, which is advantageous in CI/CD workflows. Automated tests can be run in isolated containers, ensuring that code changes do not introduce new bugs. This leads to faster release cycles and improved software quality. Docker also integrates well with various CI/CD tools.
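As a sketch of running tests in an isolated container, a CI step might look like the following; the GitHub Actions syntax, image tag, and Maven project layout are assumptions for illustration:

```yaml
# Illustrative CI step: run the test suite in a disposable container.
# The maven image tag and mounted project layout are assumptions.
- name: Run tests in an isolated container
  run: >
    docker run --rm
    -v "$PWD":/workspace -w /workspace
    maven:3.9-eclipse-temurin-17
    mvn test
```

Because the container is removed after the run (`--rm`), every test execution starts from a clean, reproducible environment.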
Tutorial: How to create a CI/CD pipeline with Docker
This tutorial explains how to containerize a Java application with Docker and set up a CI/CD pipeline for it. These instructions are adapted from the Docker documentation.
Containerize a Java application
Containerizing a Java application involves packaging the application and its dependencies into a Docker container, ensuring consistency across different environments. Here, we use the Spring PetClinic sample application to demonstrate the process. Before you begin, install Docker Desktop and a Git client on your machine.
Step 1: Clone the sample application
First, clone the Spring PetClinic repository to your local development machine:
git clone https://github.com/spring-projects/spring-petclinic.git
The repository contains a Spring Boot application built with Maven. For detailed information, refer to the readme.md file in the repository.
Step 2: Initialize Docker assets
With the application cloned, you need to create the necessary Docker assets. Docker Desktop offers a docker init feature that simplifies this process. Alternatively, you can create these assets manually. For this tutorial, we’ll use docker init.
- Inside the spring-petclinic directory, run the following command:

docker init

This command walks you through creating the following files with sensible defaults for your project:

.dockerignore
Dockerfile
compose.yaml
README.Docker.md
- You will be prompted to provide details about your application. Use the following answers for the prompts:
  - Do you want to overwrite them? Yes
  - What application platform does your project use? Java
  - What’s the relative directory (with a leading .) for your app? ./src
  - What version of Java do you want to use? 17
  - What port does your server listen on? 8080
- You should now have the following files in your spring-petclinic directory:

Dockerfile
.dockerignore
compose.yaml
README.Docker.md
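For a Maven project, the generated Dockerfile is typically a multi-stage build along the following lines; treat this as a sketch, since the exact content varies between Docker Desktop versions:

```dockerfile
# Sketch of the kind of multi-stage Dockerfile docker init generates for a
# Maven project; details differ between docker init versions.
FROM eclipse-temurin:17-jdk-jammy AS build
WORKDIR /app
COPY .mvn/ .mvn/
COPY mvnw pom.xml ./
RUN ./mvnw dependency:resolve
COPY src ./src
RUN ./mvnw package -DskipTests

FROM eclipse-temurin:17-jre-jammy AS final
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Separating the build stage (JDK plus Maven) from the final stage (JRE only) keeps the runtime image small.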
Step 3: Run the application
To build and run the application, navigate to the spring-petclinic directory and execute:
docker compose up --build
The first time you run this command, Docker will download dependencies and build the application. This may take several minutes depending on your network connection. Once completed, open a browser and visit http://localhost:8080 to see the application.
To stop the application, press Ctrl + C in the terminal.
Step 4: Run the application in the background
To run the application detached from the terminal, use the -d option:
docker compose up --build -d
Again, open a browser and go to http://localhost:8080 to view the application. To stop the application running in the background, execute:
docker compose down
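The compose.yaml that docker init generates for this setup is small; a representative sketch (not the exact generated file) looks like this:

```yaml
# Representative compose.yaml sketch; the generated file may differ.
services:
  server:
    build:
      context: .        # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"     # expose the Spring Boot port on the host
```

docker compose up --build reads this file, builds the image, and starts the service with the declared port mapping.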
Set Up CI/CD for your Java application
Configuring continuous integration and continuous deployment for a Java application using Docker involves setting up automated workflows to build, test, and deploy your Docker images. This tutorial will guide you through setting up CI/CD using GitHub Actions to push your Docker image to Docker Hub.
Prerequisites
You must have a containerized Java application. Ensure you have a GitHub account and a Docker account.
Step One: Create the repository
First, create a GitHub repository, configure Docker Hub credentials, and push your source code.
- Open GitHub and create a new repository.
- Go to your repository’s Settings, navigate to Secrets and variables, and then click on Actions.
- Create a new repository variable named DOCKER_USERNAME and set its value to your Docker ID.
- Generate a personal access token (PAT) for Docker Hub, name it docker-example, and ensure it has read and write permissions.
- Add the PAT as a repository secret named DOCKERHUB_TOKEN.
- In the local repository, run the following commands to push your code to the new GitHub repository:
git remote set-url origin https://github.com/your-username/your-repository.git
git add -A
git commit -m "Initial commit"
git push -u origin main
Step Two: Set up the workflow
Set up your GitHub Actions workflow to automate building, testing, and pushing your Docker image to Docker Hub:
- In your GitHub repository, click on the Actions tab.
- Disable any existing Maven build workflow if it is not required.
- Click New workflow and select the set up a workflow yourself option.
- Copy and paste the following YAML configuration into the editor:
name: CI

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Docker Hub Login
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Docker Build Setup
        uses: docker/setup-buildx-action@v3

      - name: Build and test
        uses: docker/build-push-action@v6
        with:
          target: test
          load: true

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          target: final
          tags: ${{ vars.DOCKER_USERNAME }}/example-image-name:latest
This configuration logs into Docker Hub, sets up Docker Buildx, builds and tests the Docker image, and then pushes the final image to Docker Hub.
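Note that the target: test and target: final inputs in the workflow refer to named build stages that must exist in your Dockerfile. A sketch of the expected stage layout (the stage contents and JAR path are illustrative assumptions):

```dockerfile
# Sketch: the workflow's build targets map to named Dockerfile stages.
# Stage contents and the JAR path are illustrative assumptions.
FROM eclipse-temurin:17-jdk-jammy AS base
WORKDIR /app
COPY . .

# "test" stage: building this target runs the test suite,
# so the CI "Build and test" step fails if any test fails.
FROM base AS test
RUN ./mvnw test

# "final" stage: the image that gets pushed to Docker Hub.
FROM base AS final
RUN ./mvnw package -DskipTests
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "target/spring-petclinic.jar"]
```

If your Dockerfile uses different stage names, adjust the target values in the workflow to match.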
Step Three: Run your workflow
To run the new job:

- Save the workflow file and commit the changes:

git add .github/workflows/main.yml
git commit -m "Add CI workflow"
git push

- Navigate to the Actions tab in your GitHub repository.
- Observe the CI workflow running automatically after the push.
- Check the workflow logs for any errors and ensure all steps complete successfully.
- After the workflow completes, check your Docker Hub repository. You should see the new Docker image tagged with latest.
Best practices for CI/CD with Docker
Here are some of the ways to ensure your Docker-driven CI/CD pipelines are effective.
Create a consistent environment with Docker containers
Docker ensures that the application environment is consistent by encapsulating all dependencies and libraries within the Docker container. This isolation reduces discrepancies between developers’ machines and production, leading to fewer runtime errors. Using Docker ensures that each stage of the pipeline, from development to deployment, mirrors the production environment.
Effectively managing Docker images is also essential for maintaining consistency. By versioning Docker images and using tags, teams can recreate environments reliably. This practice is especially useful during rollbacks or when auditing changes to the deployment environment.
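For example, tagging each build with an immutable, commit-based version alongside a moving latest tag makes rollbacks and audits straightforward; the image name below is a placeholder:

```yaml
# Illustrative workflow step: tag the image with a commit-based version
# as well as "latest", so any past build can be pulled again by its tag.
- name: Build and tag image
  run: |
    docker build -t example-org/example-app:${{ github.sha }} \
                 -t example-org/example-app:latest .
```

The versioned tag never changes once pushed, so redeploying an older release is just a matter of referencing its tag.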
Implement version control for Dockerfiles
Version control is essential for tracking changes and ensuring consistency across different application versions. By storing Dockerfiles in a version control system like Git, developers can track modifications, roll back to previous versions if necessary, and maintain a history of changes.
Using branches and pull requests for Dockerfile changes allows teams to review updates before integrating them into the main branch. This review process helps catch potential issues early and ensures that changes are deliberate and well-understood.
Create a process to remove unused images
Over time, as new images are built and deployed, older ones can accumulate and take up valuable disk space. Docker does not provide an automatic image retention policy out of the box, so teams need to configure their own cleanup procedures.
A common approach is to automate the removal of unused images using Docker’s built-in commands. For example, running docker image prune -a removes all images that are not associated with at least one container. This can be integrated into the CI/CD process to run at regular intervals, ensuring that unused images are cleaned up systematically.
Additionally, it’s useful to apply filters to prune based on criteria such as age. For example, the following command removes stopped containers created more than 24 hours ago:
docker container prune --filter "until=24h"
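One way to run such cleanup on a schedule is a dedicated CI job; this sketch assumes a self-hosted runner whose local Docker daemon is the one accumulating images, which will not apply to ephemeral hosted runners:

```yaml
# Illustrative scheduled cleanup job. Assumes a self-hosted runner whose
# local Docker daemon accumulates images and containers between builds.
name: docker-cleanup
on:
  schedule:
    - cron: "0 3 * * *"   # every day at 03:00 UTC
jobs:
  prune:
    runs-on: self-hosted
    steps:
      - name: Remove old stopped containers and unused images
        run: |
          docker container prune --force --filter "until=24h"
          docker image prune --all --force
```

The --force flag skips the interactive confirmation prompt, which is required for unattended runs.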
Automate the build process
By defining build steps in a Dockerfile and using CI/CD tools to automate these steps, developers can ensure that builds are consistent and reproducible. Automation reduces manual intervention, decreasing the chance of human error and speeding up the development pipeline.
CI/CD tools can be configured to trigger builds automatically when code is pushed to the repository. This ensures that every code change goes through a standardized build process, including compilation, testing, and packaging into Docker images.
Create optimized images
This involves creating lean Docker images to improve efficiency and performance. Start by using minimal base images and only including necessary dependencies and binaries. This reduces the image size, leading to faster deployment times and lower storage requirements.
Multi-stage builds can further optimize images by separating the build environment from the runtime environment. Regularly cleaning up unused images and containers also contributes to optimal Docker image management. Tools like Docker’s image pruning and lifecycle management features can automate this cleanup process.
Establish monitoring and logging for Docker containers
Implementing monitoring and logging for Docker containers ensures that applications are running smoothly and helps in diagnosing issues quickly. Use tools like Prometheus, Grafana, and Docker’s built-in logging drivers to collect and analyze performance metrics and logs. These tools provide visibility into the container’s behavior, enabling proactive resolution of issues.
Centralized logging is particularly beneficial, as it aggregates logs from various containers, making it easier to search and correlate events. Monitoring the health and performance of containers ensures that they are functioning correctly and helps maintain the reliability of the CI/CD pipeline.
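As a small example, the json-file logging driver can be capped per service in compose.yaml so container logs do not grow without bound; the service name here is a placeholder:

```yaml
# Illustrative logging configuration: rotate container logs via the
# json-file driver's options. The service name is a placeholder.
services:
  app:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate when a log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```

For centralized logging, the driver can instead be pointed at an aggregator (for example, a syslog or fluentd endpoint) using the same logging block.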
Related content: Read our guide to CI/CD tools
CI/CD in containerized environments with Octopus
Octopus Deploy provides a simple but powerful way to manage environment promotion for Docker. Octopus natively supports deployments to Docker by treating Docker images as immutable build artifacts. These artifacts are moved through each stage of deployment by running them as containers with deploy-time specific configuration.
Octopus enables you to run multiple replicas of your applications in an environment and minimize service downtime with a progressive update during deployment.
With Octopus runbooks, you can also automate maintenance tasks like container retention or Docker updates.
Learn more about deploying to Docker with Octopus