The DevOps engineer's handbook

Software delivery principles, methods, and 7 tips for success

What is software delivery?

Software delivery is the process of creating and updating software and getting it to users. It involves discovering user needs, creating helpful features, and ensuring they make the intended difference. Your software delivery performance can be measured using the DORA metrics, which define good performance in terms of throughput and stability.

Software delivery is collaborative, with developers, operations, business representatives, and other specialists contributing to the outcome. Within the software delivery process, you’ll find activities like discovery, planning, coding, building, testing, deploying, and monitoring. These stages are often sequential, with a feature or fix progressing through them before being delivered. This requires the free flow of information and high levels of collaboration to encourage flow and to find and work on bottlenecks.

The goal is to deliver high-quality software swiftly so you can gather feedback to inform the discovery and planning steps. Working in small batches makes it easier to adapt to what you learn, making it possible to change direction instead of continuing to execute a bad plan. In the modern business environment, mastering software delivery is crucial for maintaining competitiveness and customer satisfaction.

This is part of a series of articles about software deployment.

3 principles of effective software delivery

1. Frequent and reliable deployments

Frequent and reliable deployments are crucial for successful software delivery outcomes. By breaking down work into small and manageable increments, you can address user feedback rapidly and often discover that some of the work you planned isn’t necessary. Working in smaller batches also lowers the effort spent on code integration and reduces the risk of introducing significant issues.

To deploy more often, you’ll automate the process, which makes it more reliable, too. This reduces deployment anxiety and changes stakeholders’ perception of the technical team. Delivering frequent small batches also reduces the pressure to ensure a feature or fix joins the release train.

2. Collaboration and transparency

Agile encouraged cross-functional teams, and DevOps told us to extend this beyond development to operations, security, and other specialized roles. It’s often necessary to revisit goals and align them where there’s a conflict to encourage high levels of collaboration. This is essential as information flow predicts performance, and traditional hand-offs create queues that delay software delivery, causing teams to work in larger batches that are more likely to introduce faults.

When all roles involved in software delivery and operations work together, they can streamline the needs of individual specialists. For example, they can integrate security checks in the deployment pipeline or produce a database migration report to ease the approval task for database administrators. In return, change approvals can be lighter and more effective.

3. Continuous improvement and feedback

Creating a high trust and low blame culture makes it easier for everyone involved in software delivery to find ways to improve the process, tools, and outcomes. Using retrospectives or the improvement kata model, teams can learn from metrics, past experiences, and near misses and make changes that improve work throughput and software reliability. This benefits everyone as users are more satisfied, and the software delivery team has higher morale.

While continuous improvement focuses on improving software delivery, feedback loops help teams deliver the right features. Feedback should be baked into your discovery process to ensure you approach features with the users in mind. You should validate the features produce the expected outcomes. User-centric products are more successful and long-lasting, even with fewer features than competitors.

Software delivery models

Phased models, such as Waterfall

Phased software delivery models are traditional approaches that deliver a software project through sequential stages. Each stage has a quality gate that must be passed before the work can proceed to the next stage. Examples of phased software delivery include waterfall and the Lincoln Labs model.

While many in the software industry believe phased models are appropriate for managing large-scale software delivery, authors of influential papers describing phased models, such as Winston Royce and Herbert Benington, agreed that creating a smaller working system and evolving it was a more successful approach.

Agile methods

Lightweight methods emerged during the 1990s and coalesced into the Agile movement, emphasizing adaptability over following a fixed plan. Instead of handing work between specialized teams, Agile encouraged creating collaborative cross-functional teams who could be responsible for the whole software delivery process. By delivering working software often, the team could learn from the users and change direction quickly to increase the usefulness of the software they created.

Agile methods accept that the software you plan may not be what people want. The ability to quickly change direction reduces the waste of time and effort refining specifications for features that turn out to be unwanted. Within Agile, deadlines are managed by adjusting the scope, and changing requirements are welcomed as they reduce the waste of creating features nobody wants.

DevOps culture and practices

DevOps took Agile’s collaborative aspects to the next level, encouraging goal alignment across development, operations, and other specialized roles like security and compliance. Traditionally, organizations wanted throughput from developers and stability from operations, which put the teams in conflict. By making both of these a shared concern, it’s possible to increase throughput and stability simultaneously, often assisted by automation.

DevOps combines cultural practices, such as Lean product management and generative culture, with technical practices, like deployment automation and monitoring. Each of these capabilities contributes to improvements in software delivery performance, and the cultural aspects are fundamental to success.

Stages of the software delivery pipeline

1. Discovery

Working closely with customers and users to deeply understand their needs is the most effective way to create high-performing software products. Teams that are highly user-centric create products that outperform competitors who focus on throughput, as more of their time is spent on features users value.

User-centricity doesn’t mean giving users exactly what they ask for. Instead, you develop strong opinions on how their problems can be solved, which are valuable in their own right and through their implementation in the software. The discovery process should be continuous, including user feedback and usage metrics to guide your roadmap.

2. Feature development and Continuous Integration

Once you know what to build, the feature development process can happen. Developers apply precision and creativity as they design the implementation, with version control systems like Git providing a quick way to revert to a known good state if they take a wrong turn.

Instead of working with long-lived branches, developers should frequently commit changes to the main branch, using techniques like keystoning or feature flags to hide in-progress work from users. This prevents significant changes from being hidden from other developers for long periods of time, which is the common cause of large merge conflicts.
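Hiding in-progress work behind a feature flag can be as simple as a conditional around the new code path. The sketch below is a minimal Python illustration; the flag store and function names are hypothetical, not a specific library:

```python
# Minimal sketch of hiding in-progress work behind a feature flag.
# The flag store and function names are illustrative.
flags = {"new-checkout": False}  # merged to main, but off for users

def legacy_checkout(cart):
    return f"legacy checkout: {len(cart)} items"

def new_checkout(cart):
    return f"new checkout: {len(cart)} items"

def render_checkout(cart):
    if flags.get("new-checkout", False):
        return new_checkout(cart)   # in-progress code, committed but hidden
    return legacy_checkout(cart)    # behavior users currently see
```

Because the new path is merged but dormant, other developers see the change immediately while users are unaffected until the flag is switched on.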

3. Build and test

While the feature development stage is a creative endeavor, everything after the developer commits a change should be like a production line. The build and test stage transforms source code into executable software, compiling, linking, and packaging the output as appropriate.

Once built, a set of fast-running tests should validate that the software version is good enough to continue through the deployment pipeline. These tests should alert developers within 5 minutes if there’s a problem. When an issue is detected, developers can spend 10 minutes trying to fix it and revert their changes if they are unsuccessful. This keeps the software ready to deploy at all times.

4. Acceptance

The acceptance stage encompasses everything you need to do to confidently deploy the software to production. This varies by organization, with some automating this process end-to-end while others use manual checks to increase their confidence in deployment. This may include security scans, end-to-end tests, visual checks, and other mechanisms that ensure the software version is good.

Improving your deployment pipeline through automation and eliminating hand-offs and wait times significantly improves software delivery performance. If any stage of your pipeline delays software delivery, developers add more changes to the batch, starting the downward spiral of large batches and their side effects.

5. Deployment and release

Deployment moves the software version through each environment, eventually placing it in the production environment where users will access it. The deployment process configures the software for each environment by replacing configuration variables. New features may be released as part of the deployment or controlled by switching on a feature toggle, which decouples release from deployment.
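Configuration variable replacement can be sketched as a simple template substitution. The `#{Variable}` syntax, environment names, and values below are illustrative:

```python
# Sketch of configuring a software version for each environment by
# replacing configuration variables. Names and values are illustrative.
TEMPLATE = "ConnectionString=Server=#{DatabaseHost};Db=app"

ENVIRONMENTS = {
    "test": {"DatabaseHost": "db.test.internal"},
    "production": {"DatabaseHost": "db.prod.internal"},
}

def configure(template: str, environment: str) -> str:
    """Produce environment-specific configuration from one template."""
    config = template
    for name, value in ENVIRONMENTS[environment].items():
        config = config.replace("#{" + name + "}", value)
    return config
```

The same built artifact moves through every environment unchanged; only the substituted values differ.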

You can use progressive delivery techniques to minimize or even eliminate deployment downtime. Strategies such as blue/green deployments or canary releases provide ways to prepare environments offline and gradually expose users to new versions. Used alongside real-time monitoring, progressive delivery significantly reduces deployment risk while minimizing disruptions.
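The canary pattern can be reduced to a loop that shifts traffic in steps and rolls back when health degrades. This is a sketch under simplified assumptions; `healthy` stands in for real monitoring signals:

```python
# Sketch of a canary release: route traffic to the new version in
# increasing steps, rolling back if the health check fails.
def canary_rollout(healthy, steps=(5, 25, 50, 100)):
    """Return the final traffic percentage routed to the new version."""
    for percent in steps:
        if not healthy(percent):  # real systems watch error rates, latency
            return 0              # roll back: all traffic to old version
    return steps[-1]
```

In practice the health check would query monitoring data between steps, which is why progressive delivery works best alongside real-time monitoring.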

6. Monitoring and maintenance

Monitoring provides an essential view into the health of your software and systems. A robust alerting strategy helps you spot problems and resolve them before they impact services. You should design alarms to sound for all real issues, with minimal false alarms.
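One common way to reduce false alarms is to require several consecutive threshold breaches before an alert fires, so a brief spike doesn't page anyone. A minimal sketch, with illustrative thresholds:

```python
# Sketch of an alert rule that fires only after several consecutive
# threshold breaches, filtering out brief spikes. Values are illustrative.
def should_alert(samples, threshold, consecutive=3):
    """Return True if `threshold` is breached `consecutive` times in a row."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return True
    return False
```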

Ongoing maintenance battles entropy in your software system, returning the software to the desired levels of performance and security or adjusting features to handle industry or regulatory changes. Performing maintenance tasks more regularly reduces the issues caused by infrequent large leaps. When you don’t regularly take care of maintenance tasks, you end up chasing compatibility issues in situations where maintenance has become urgent, either because support has ended or a security issue has been found.

Tools and technologies in software delivery

Version control systems

Version control systems (VCS) are the backbone of software delivery. They are vital for coordinating changes across many developers and tracking activity history. When a bad change is introduced, they allow reverting to a known good version.

Committing changes to a central shared branch supports Continuous Integration practices, which reduce merge conflicts and errors introduced when different developers make semantically incompatible changes.

Automation and orchestration tools

Continuous Delivery pipelines automate practically everything. Build tools like Jenkins or GitHub Actions create and test software artifacts, while deployment automation platforms like Octopus provide reliable and repeatable automation for deployments and operations tasks. High-performing teams create a toolchain of best-in-class tools as these outperform general-purpose DevOps platforms.

Automation is crucial for increasing software delivery performance. It enables short feedback cycles and allows for operations at scale or with high complexity. Orchestration tools provide visibility across the delivery process, increasing collaboration and highlighting improvement opportunities.

Containerization technologies

Containerization technologies like Docker revolutionize software delivery by encapsulating applications and their dependencies into isolated, portable containers. By abstracting platform-specific variations, containers ensure consistency across different environments, from development to production, improving reliability and portability.

Containers support microservice architectures, allowing distinct application components to be independently deployed, scaled, and maintained. Container orchestration platforms, such as Kubernetes, automate deployment, scaling, and management of containerized applications, improving efficiency and flexibility. Containerization enables quicker iterations and innovation.

Monitoring and analytics tools

Monitoring tools provide real-time insights into application behavior and user interactions. You can use this information to respond quickly to incidents, plan capacity, and assess successful feature adoption. Tools like Prometheus and Grafana combine information from many sources and display it on a single dashboard. This can help with early detection of issues and identifying the causes.

Without dependable monitoring, teams often fail to identify changes that would make their service more reliable, as the incident recovers naturally before they can work out the cause. With the right tools, you can investigate historic causes and use the insights to identify optimization opportunities and inform your product strategy.

Best practices for successful software delivery

These best practices help organizations ensure the success of their software delivery pipelines.

1. Improve collaboration through organization design

If you follow a change through your software delivery process, you’ll discover all the places work must cross between silos to reach production. Pay attention to these handoffs and decide whether to form people into a cross-functional team, design the collaboration modes, or monitor the inevitable queues. Team Topologies can provide a helpful view of organizational design.

You may have conflicting goals that mean each silo works against the others. Aligning the goals of each team will reduce friction and encourage collaboration. When different specialists collaborate, they often find ways to improve the end-to-end process by simplifying or automating each other’s tasks.

2. Focus on confidence

The stages in a deployment pipeline are all about confidence. While stakeholders care about different system qualities, such as functionality, performance, security, or compliance, they all share the need for confidence that a deployment will succeed. You can automate most of the acceptance tests for these stakeholders, ensuring the checks occur every time.

Using an appropriate type of test for each acceptance criterion will help balance fast feedback with adequate coverage. Developer feedback should take no more than 5 minutes, so you want to prioritize the most likely issues in the initial tests, with less likely and longer-running tests happening in further stages. If you still have manual checks, deferring them until after the automated tests succeed prevents people from wasting time testing a bad software version.
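Ordering checks so fast, high-signal tests run first and expensive checks only run once earlier stages pass can be sketched as a short pipeline loop. Stage names and durations below are illustrative:

```python
# Sketch of staged checks: cheap, likely-to-fail tests first, expensive
# checks later, stopping at the first failure. Stages are illustrative.
STAGES = [
    ("unit tests", 3),           # minutes; catches most issues quickly
    ("integration tests", 12),
    ("end-to-end tests", 40),
]

def run_pipeline(run_stage):
    """Run stages in order; stop at the first failure."""
    for name, _minutes in STAGES:
        if not run_stage(name):
            return f"failed at: {name}"
    return "ready to deploy"
```

Stopping at the first failure means a broken commit gets feedback in minutes rather than waiting for the slowest suite to finish.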

3. Prioritize security throughout the pipeline

Security has traditionally been handled out-of-band with software delivery, often through scheduled testing that uncovers many non-conformances. A more effective strategy is to integrate security practices into the deployment pipeline. You can automate secure coding standard checks, static and dynamic analysis, vulnerability assessments, and security testing.

When you embed security into the deployment pipeline, you reduce the risk of breaches and compliance issues. It also shrinks the size of remedial work, as you’ll be notified of issues in real-time instead of being presented with a large batch of problems that all need to be fixed urgently.

4. Use feature flags to decouple release from deployment

Feature flags let you deploy a software version without exposing new features. You can decide when to toggle on a new feature separately and quickly switch it off if you discover a problem. You can also use feature flags to control the availability of resource-intensive features. For example, you can turn off a feature with a heavy database load while dealing with a traffic spike.
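Used as a kill switch, a flag lets operators shed load at runtime without a redeployment. A minimal sketch, with a hypothetical feature and flag store:

```python
# Sketch of a kill-switch flag: a resource-heavy feature is disabled at
# runtime during a traffic spike. Names are illustrative.
class FeatureFlags:
    def __init__(self):
        self._flags = {"product-recommendations": True}

    def is_enabled(self, name):
        return self._flags.get(name, False)

    def disable(self, name):
        self._flags[name] = False  # operators flip this during a spike

def product_page(flags):
    sections = ["details", "reviews"]
    if flags.is_enabled("product-recommendations"):
        sections.append("recommendations")  # heavy database queries
    return sections
```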

With feature flags, you can encourage developers to commit changes early and often rather than holding them back in a long-lived branch. This reduces batch size, which increases software delivery performance and reduces time wasted on integration work, merge conflicts, and bugs caused by incompatible changes.

5. Manage technical debt

Technical debt was traditionally seen as a way to deliver a feature earlier. Some shortcuts would be taken to speed delivery, and a note would be taken to clear it up later. The reality of technical debt is that teams who can’t afford to do the work properly today never have time to fix it later. As a result, teams accumulating technical debt see their ability to deliver software diminishing over time.

A more disciplined approach is to avoid technical debt whenever possible. If you must take on technical debt, there should be an agreement that you get time to resolve it before starting the next feature. If you discover technical debt as part of a development task, you should fix it as part of the work. Mercilessly eliminating technical debt is the only way to work with sustainable speed over time.

6. Monitor and respond to metrics

You can monitor software delivery performance using deployment frequency and lead time for changes to represent throughput, and failure rate and recovery times for stability. High-performing teams increase throughput while improving stability. You can use these metrics to generate ideas for improvement. DORA’s Core Model has ideas for techniques and practices to improve one or more of these metrics.
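The throughput metrics can be computed directly from deployment records. This sketch assumes an illustrative record shape with commit and deployment timestamps:

```python
# Sketch of computing two DORA throughput metrics from deployment
# records. The record shape is illustrative.
from datetime import datetime, timedelta

def deployment_frequency(deployments, days):
    """Average deployments per day over the period."""
    return len(deployments) / days

def median_lead_time(deployments):
    """Median time from commit to deployment in production."""
    leads = sorted(d["deployed"] - d["committed"] for d in deployments)
    return leads[len(leads) // 2]
```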

You can also collect metrics more relevant to the business, such as transaction rates or values, for reporting and to correlate with changes in software delivery performance. If software delivery is a constraint on the business, improving it should improve these business metrics. If you track business metrics in your monitoring tools, you can assess the impact of new software versions on them in the same way you track infrastructure metrics.

7. Deliver iteratively and incrementally

No matter how much time you spend designing a feature, the only way to find out if it works is to give it to users. You can iterate toward a high-quality feature by delivering a working feature and revisiting it to improve it based on user feedback. In some cases, your first version works perfectly, so you’ve saved considerable time. In other cases, you’ll find you’re off the mark and gain a deeper understanding from the feedback.

Incremental delivery seeks to break work down into small batches, which helps reduce overhead, increase the number of feedback cycles, and reduce conflicts caused by running concurrent initiatives. Combining incremental delivery (small batches) with an iterative approach (showing users your work to improve the final version) enhances the usefulness and value of software.

Accelerating software delivery with Octopus

Developers want to solve problems and are happier when they can be productive. They need frequent, high-quality feedback loops, manageable cognitive load, and substantial focus time to achieve a flow state. When developers experience friction, it’s a sign that they can’t deliver as much value as they feel they could.

The CD pipeline is the element of their workflow that has the most significant influence on these factors. Deployment automation improves the four key DORA metrics and impacts all three DevEx dimensions of feedback, flow, and cognitive load.

Octopus significantly impacts software delivery performance, increasing deployment frequency while reducing deployment-related failures. You can also provide self-service deployments and runbooks for developers so they don’t have to raise tickets for deployments or day-2 operations tasks, further boosting productivity and performance.

Find out more or start a trial to see how it works.
