Prior to joining Octopus Deploy, I was the lead developer on the pilot team that automated database deployments, taking deployments from 2 to 4 hours down to 10 minutes. When I started, I thought we’d automate the steps in our existing process, and I started implementing the tooling to do just that. Little did I know, the entire process was going to change.
This article continues the previous article, How to design an automated database deployment process. If you like, you can jump ahead to the next article, where I provide a step-by-step guide to implementing a database deployment process with Octopus Deploy.
All of our database deployment articles can be found here.
Quick overview of designing a database deployment process
In the event you skipped the previous article, here is a TL;DR:
- Create a small team or work group to define the process. Include representatives from each stage of deployment (developers, DBAs, etc.). The work group should be no more than 4 to 6 people.
- Identify the pilot team or application to include in the work group.
- Kick off the work group with a 1 to 2 day meeting.
  - Write down the existing process, identify key people, pain points, and what needs to change.
  - Draft the ideal deployment process.
  - Research the tooling.
  - By the end of kick-off, the pilot team should know what needs to be implemented and what tools to use.
- The pilot team implements the new process.
  - Deploy all the way to production.
  - Iterate on the process.
- After the process has been successful for a period of time, see if anyone would like to be an early adopter.
  - Iterate on the process with early adopter teams.
- General adoption.
  - Focus on building trust with the process.
  - Roll out to multiple teams.
  - Iterate when pain points are found.
Forming the work group
One of the DBAs in the company summed it up best when they said, “Our database deployment process is the wild west.” We had more developers joining the company every month, new teams were constantly forming, more code was being deployed every day, and the issues with database deployments had to stop.
It was time to form a work group.
The DBAs and database architect all liked using Redgate’s tooling, so they reached out to Redgate for help. Redgate agreed to help out in exchange for a case study. They would fly out two people to meet with our work group for a couple of days. As I write this, I am 99% sure they don’t do that all the time. We might have caught them at the right moment in time.
The DBAs identified the DBA who would represent them in the work group. The final decision was the pilot team. This is where I got involved, as my team was chosen to be the pilot team.
Being naive, I expected Redgate to help us implement their tooling within our existing process, so I set about doing that. However, the work group threw away about 75% of my work. That wasn’t a big deal; it was a good learning experience as it gave me a better fundamental understanding of how database deployment tools work, which helped me contribute to the kick-off meeting.
The database developer from my team, a DBA, a database architect, two Redgaters, and I met for two days. The first day was focused on designing our ideal process. The end goal was to have a process to deploy to development, test, staging, and all the way into production.
First, we walked through the existing process, which looked like this:
- A developer makes a change in test. All developers have sysadmin rights in test. They should make their changes in development, but test has all the data to verify their changes, so that’s the server their code points to.
- The developer changes the connection in SSMS and makes a change to development. All developers have sysadmin rights in development.
- The database developer or lead developer runs Redgate SQL Compare to generate a delta script between test and staging. Any complex database changes (move columns, combine columns, etc.) are removed and manually scripted. Scripts are saved to a shared folder. Everyone except DBAs has read-only rights to staging. The DBAs have to run the scripts.
- DBAs are notified via email to run scripts in the shared folder on staging. They run the scripts and send the output to the requester.
- Multiple changes can be pushed to staging prior to going to production. Because of that, a new Redgate SQL Compare delta script between staging and production is generated by the database developer or lead developer. Just like before, any complex database changes (move columns, combine columns, etc.) are removed and manually scripted. Scripts are saved to a shared folder. Everyone except DBAs has read-only rights to production.
- DBAs are notified via a change request to run a set of scripts in production. They run the scripts, save the results to the change request system, which emails the requester.
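Those delta-script steps revolved around Redgate SQL Compare’s command line. As a rough illustration of what the database developer or lead developer ran by hand, here is a minimal sketch that shells out to sqlcompare.exe; the install path, switch names, server names, and share are my assumptions for illustration, not taken from our actual scripts:

```python
import subprocess
from datetime import date
from pathlib import Path

# Assumed install path and switch names; verify against Redgate's
# SQL Compare command-line documentation before relying on them.
SQL_COMPARE = r"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe"

def generate_delta_script(source_server: str, target_server: str,
                          database: str, shared_folder: str) -> Path:
    """Diff the same database across two servers and save the delta script."""
    script = Path(shared_folder) / f"{database}-{date.today()}.sql"
    subprocess.run(
        [
            SQL_COMPARE,
            f"/Server1:{source_server}", f"/Database1:{database}",
            f"/Server2:{target_server}", f"/Database2:{database}",
            f"/ScriptFile:{script}",  # the delta script the DBAs will run
        ],
        check=True,  # fail loudly if the comparison errors out
    )
    return script

# Example: the test-to-staging delta saved to the shared folder.
generate_delta_script("TestSQL01", "StagingSQL01", "MyAppDB", r"\\share\delta-scripts")
```

Complex changes (moving or combining columns) were then cut out of that file and hand-scripted before the DBAs were notified.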
We tackled these questions next:
- Who are the people involved in the process? Answer: Developers, database developer, lead developer, and DBAs.
- What permissions do they have? Answer: Developers, database developers, and lead developers all have sysadmin rights for development and test. DBAs have sysadmin rights for staging and production.
- Why are they involved? Answer: Developers, lead developers, and database developers make the changes to test. Database developers and lead developers create the delta scripts using Redgate SQL Compare. DBAs deploy the delta scripts to staging and production.
- Which environments have different processes? Answer: There are three processes: one for deployments to test, while staging and production have similar but slightly different processes. Production involves a change request system.
- Why are they different? Answer: Permissions and auditing; staging is refreshed from production periodically and is used for staging and final verification, so staging needs to be as close to production as possible to help eliminate surprises. Any change to production requires a change request because that is what the auditors check.
- What happens when the script fails to run? Answer: In test, the person who wrote the script runs the script, and they make the necessary adjustments and re-run it. In staging and production, the DBA notifies the requester of the failure. The requester debugs the script and makes the necessary tweaks. They then ask the DBA to run the script again.
- Why do scripts typically fail? Answer: Failures happen because each environment has different delta scripts. A schema change or migration script is missed.
- Who reviews the scripts and when? Answer: The database developer or lead developer reviews the changes prior to going to staging. Because of the different delta scripts, the DBAs review the scripts prior to going to staging as well as production.
- Who needs to be involved with each deployment? Answer: Deployments to test only involve the person making the change. Deployments to staging involve the requester, a database developer or lead developer, and the DBA. Deployments to production need everyone because each environment has a unique delta script, and any issues require immediate fixing.
- What isn’t working, and what needs to change? Answer: See below.
What needed to change
The astute reader will notice a recurring theme in those answers.
- Three different processes.
- Unique delta scripts per environment.
- Unique delta scripts meant it was difficult or nearly impossible to test.
- Shared development environment.
- Reviews didn’t happen until it was time to go to staging.
- “All hands on deck” during production deployments.
- No history of who made what changes, when they were made, or why they were made.
- Changes were manually tracked via a document.
- No auditing in staging. A little bit of auditing for production.
The application my team was responsible for had 700 tables. All database access was done via stored procedures, including CRUD operations. The database had roughly 5000 objects (tables, stored procedures, functions, etc.). All development was done on the same database. When we did a release, some changes were pushed, while other changes were excluded. We kept track of which changes to include on a piece of paper. This wasn’t 20 years ago; this was happening in 2014.
Those issues resulted in 2 to 4 hour production deployments, and we didn’t trust the process, which led to an intensive verification process. The actual deployment could be done in 30 minutes, but the remaining 90 minutes were spent verifying the deployment. This included QA, the business owner, business analyst, developers, lead developer, and a manager running various scenarios. Even with all that effort, we still missed some random thing that only 0.5% of our users encountered.
Roughly 60% of the time, we had to do an emergency fix the next day due to a missed schema change.
Draft of the ideal process
Drafting the ideal process took quite a bit of time. This was caused by a lack of knowledge of what the tooling could provide. At the time, we knew about source control, build servers, and Redgate tooling, but we were all unfamiliar with what deployment tooling could provide. Thankfully, Redgate was there to help educate us.
First, we listed the various tools and the functionality they provide:
- Source control
  - It stores all the SQL scripts.
  - It is the truth center of changes.
  - It includes the ability to create branches for new features.
  - It provides the ability to review changes prior to merging.
- Database tooling
  - It provides a way to run scripts stored in source control on the destination database.
  - It includes some sort of preview functionality, which can generate a file for people to review during deployments. The file could be the actual delta script or an HTML file summarizing the changes.
- Build server
  - It takes SQL scripts from source control and packages them.
  - It pushes packages to the deployment server.
  - It can monitor multiple branches and build for each branch.
- Deployment tool
  - It invokes database tooling to deploy database changes.
  - It uses database tooling’s preview functionality to create a file to be reviewed during deployments.
  - It provides auditing and approvals.
  - It is used to deploy to all environments.
  - It has security features to allow for scenarios such as only allowing DBAs to deploy to production.
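To tie the build server and deployment tool responsibilities together, here is a minimal sketch of the packaging-and-push step, assuming a deployment server with an HTTP package feed. The URL, API key, package name, and the Octopus-style /api/packages/raw endpoint are assumptions for illustration:

```python
import shutil
import requests

def package_and_push(build_number: int) -> None:
    """Zip the SQL scripts pulled from source control and push the package."""
    # The package name carries the version so every build is uniquely identifiable.
    package = shutil.make_archive(
        f"MyApp.Database.1.0.{build_number}",  # hypothetical package id
        "zip",
        root_dir="db/scripts",  # SQL scripts checked out from source control
    )
    # Push the package to the deployment server's built-in package feed.
    with open(package, "rb") as f:
        response = requests.post(
            "https://octopus.example.com/api/packages/raw",  # assumed endpoint
            headers={"X-Octopus-ApiKey": "API-PLACEHOLDER"},
            files={"file": f},
        )
    response.raise_for_status()
```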
With the tooling responsibilities out of the way, we spent a great deal of time discussing a shared database model vs. a dedicated database model. A dedicated database model means each developer runs the database server on their own machine. Using a shared database model, we found:
- Database changes were made, but the code changes that used them took anywhere from an hour to several days to follow. In the meantime, that sometimes stopped other developers and QA from using a specific feature or area of the application.
- There were two truth centers: source control, and the shared database where all changes were made before being saved to source control. If there was a conflict between source control and the shared database, which won?
- We were unable to leverage branches effectively. There can be 1 to N branches, but only one database.
- Changes were made to a central database prior to review, but when should the review occur?
- Everyone used the same test data. One person changing data affected multiple people on the team.
We decided to switch to dedicated databases. In our case, it made a lot of sense. We knew how the tooling worked and where changes would be made.
It was time to outline the ideal process:
- A developer/database developer/lead developer creates a branch.
- All database changes and code changes are made on that branch.
- Changes are completed and checked into the branch.
- A merge request is created, which kicks off a build. The build verifies the changes are valid SQL.
- The database developer or lead developer reviews database changes in the merge request and provides feedback for fixes.
- The branch is approved and merged.
- The build server kicks off a build, verifies the changes are valid SQL, and, if they are, packages them and pushes the package to the deployment server. The build server tells the deployment server to deploy to development.
- The deployment server deploys to development.
- A developer/database developer/lead developer tells the deployment server to deploy to test.
- The deployment server deploys to test.
- Changes are verified in test.
- A developer/database developer/lead developer tells the deployment server to deploy to staging. The deployment server uses database tooling to generate the review file for the DBA to use for approval.
- The deployment server notifies the DBA of the deployment request to staging. They review the changes and provide feedback for fixes.
- DBAs approve the changes to staging.
- The deployment server finishes the deployment to staging.
- Changes are verified in staging by the developer/database developer/lead developer.
- A change request is submitted to the DBAs to promote a specific package in the deployment server to production.
- After hours, DBAs tell the deployment server to deploy to production. The deployment server uses database tooling to generate the review file for the DBA to review.
- DBAs review the file as a final sanity check.
- The deployment server finishes the deployment to production.
- The team responsible for the application verifies the changes in production.
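Expressed as data, that outline boils down to a strict promotion order with a different trigger and approval gate per environment. The following sketch is just my restatement of the list above; the structure and names are illustrative, not from our process documents:

```python
from typing import Optional

# Each environment: who triggers the deployment and what gates it.
PIPELINE = [
    {"env": "development", "trigger": "build server", "gate": None},
    {"env": "test", "trigger": "developer", "gate": None},
    {"env": "staging", "trigger": "developer", "gate": "DBA reviews generated delta file"},
    {"env": "production", "trigger": "DBA, after hours", "gate": "change request + DBA sanity check"},
]

def next_environment(current: str) -> Optional[str]:
    """Packages promote strictly in order; no environment is skipped."""
    envs = [stage["env"] for stage in PIPELINE]
    index = envs.index(current)
    return envs[index + 1] if index + 1 < len(envs) else None
```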
When coming up with that process, we purposely avoided tooling. Tooling wasn’t discussed until after we had a draft of the process. First up was the tooling already being used:
- Build server: TeamCity was being piloted as a replacement for TFS 2012.
- Source control: Teams piloting TeamCity had moved to Git.
- Database deployments: Redgate’s SQL Change Automation (as it was called then).
- Deployment server: None, although at the time, I thought the build server = deployment server.
TFS 2012 was on its way out, so it didn’t make sense to continue using it. My team was part of the group who switched over to Git and TeamCity.
Redgate had flown out to help us, and we had a prototype working with their tooling. We discussed SSDT and RoundhousE, but both fell short for roughly the same reason: 95% of the people making database changes did so in SQL Server Management Studio, and too many people forgot to migrate those changes over to SSDT or RoundhousE. Our discussion, along with the process we designed, led to the following key requirements for the tooling:
- Can save database changes from SSMS.
- Automatic detection of database changes.
- Majority of changes handled by the tool (add a column, delete a column, change stored procedure) with the ability to have complex changes manually written and stored (move column to a new table).
RoundhousE met 1 of the 3 requirements (it can save database changes from SSMS), and SSDT met 1 of the 3 (the majority of changes are handled by the tool). It made sense for us to continue using Redgate for the pilot.
We didn’t have a deployment server, but after the Redgate folks explained the benefits and features of Octopus Deploy, we did a quick POC using TeamCity, Redgate, and Octopus Deploy. The POC took about an hour to put together and showed a great deal of promise, so we decided to use Octopus Deploy for the pilot.
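For a sense of how little glue that POC needed, here is a rough approximation in Python against the Octopus REST API: create a release, then deploy it to the first environment. The server URL, API key, and IDs are placeholders, and the exact request bodies should be checked against the Octopus API documentation:

```python
import requests

OCTOPUS = "https://octopus.example.com"            # placeholder server
HEADERS = {"X-Octopus-ApiKey": "API-PLACEHOLDER"}  # placeholder key

# Create a release for the database project from the latest package.
release = requests.post(
    f"{OCTOPUS}/api/releases",
    headers=HEADERS,
    json={"ProjectId": "Projects-1", "Version": "1.0.1"},
)
release.raise_for_status()

# Deploy that release to the first environment; TeamCity invoked the
# same API at the end of every successful build.
deployment = requests.post(
    f"{OCTOPUS}/api/deployments",
    headers=HEADERS,
    json={"ReleaseId": release.json()["Id"], "EnvironmentId": "Environments-1"},
)
deployment.raise_for_status()
```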
Implementing the process
The agreed-upon process was put into place using the tooling. Getting it going in staging took very little time. We did run into a couple of hiccups we didn’t anticipate in the kick-off meeting:
- Permissions: What can the automated process do vs. what can’t it do? We landed on preventing the process from creating new users and adding them to roles. This way, someone couldn’t give themselves db_owner permissions in production (a sketch of one possible guardrail follows this list).
- Resolving the delta between all environments: There were schema changes in production that were not in development. The first time we tried to run the process in production, we almost wiped them out. We quickly added those changes into source control, rebuilt the package, and pushed it through the environments to production.
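For the permissions hiccup, the guardrail can be as simple as failing the deployment when the generated delta script touches users or role membership. Here is a minimal sketch of that idea; it is not our exact check, and Redgate’s tooling also has comparison options to exclude users and permissions outright:

```python
import re
import sys

# Statements the automated process is never allowed to run.
FORBIDDEN_PATTERNS = [
    r"\bCREATE\s+USER\b",
    r"\bALTER\s+ROLE\b[\s\S]*?\bADD\s+MEMBER\b",
    r"\bsp_addrolemember\b",
]

def check_delta_script(path: str) -> None:
    """Abort the deployment if the delta script tries to change security."""
    with open(path, encoding="utf-8") as f:
        sql = f.read()
    for pattern in FORBIDDEN_PATTERNS:
        if re.search(pattern, sql, re.IGNORECASE):
            sys.exit(f"Blocked: delta script matches forbidden pattern {pattern!r}")

check_delta_script("delta.sql")
```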
After resolving those minor hiccups, we were able to get the process deploying all the way to production.
Speeding up deployments
Immediately, the number of emergency fixes due to a missing schema change dropped to zero. That alone was a massive win.
Because of that, the amount of time we spent verifying started dropping.
- 30-minute deployment, 90-minute verification.
- 25-minute deployment, 80-minute verification.
- 20-minute deployment, 70-minute verification.
- 15-minute deployment, 60-minute verification.
- 10-minute deployment, 50-minute verification.
- 5-8 minute deployment, 40-minute verification.
- 5-8 minute deployment, 30-minute verification.
That, in turn, made us want to deploy more often. Frequent deployments meant smaller changes. Smaller changes meant less verification. Less verification meant faster deployments. Faster deployments meant we wanted to deploy more often. The cycle continued until verifications took 5 to 8 minutes as well.
Early adopters and iterations
A couple of other teams (out of 9 remaining teams) saw what we were doing, and they came on board as early adopters. Having three teams use the process identified pain points and what needed to improve:
- Not deploying what was approved. I wasn’t using the Redgate tooling correctly. I used a command to generate a preview for approval, then ignored that preview and used a different command to deploy the package directly. About 1 out of every 30 deployments included an unexpected change. With a bit of research, the correct commands were found and implemented.
- Saving the preview file to a file share. I didn’t know Octopus had the ability to save files as artifacts, which the approver can download directly from Octopus. That was better than hunting down the right file in the file share. A small tweak to the script fixed that.
- DBAs being the bottleneck. The three teams were deploying to staging so often the DBAs couldn’t keep up. They also got tired of having to be online during deployments to approve the deployment to production. We changed the process to generate two files when deploying to staging: one for staging and one for production. The deployment would occur on staging, and the DBAs approved everything after the fact. We added a step to page them when a deployment failed in staging.
- Developers not using their dedicated database. There was no testing data in the developer’s database, so they pointed to test to test their changes. We solved this by creating a backup of the database in test after each deployment. Developers could restore that database to their local instance and have all the data they needed, as sketched below.
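The backup step in that last fix was ordinary T-SQL run after each test deployment. Here is a sketch of what it looks like, assuming pyodbc and made-up server and share names:

```python
import pyodbc

# BACKUP DATABASE cannot run inside a transaction, hence autocommit=True.
connection = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=TestSQL01;DATABASE=master;Trusted_Connection=yes;",  # assumed server
    autocommit=True,
)
cursor = connection.cursor()
cursor.execute(
    r"BACKUP DATABASE MyAppDB TO DISK = N'\\share\backups\MyAppDB.bak' "
    r"WITH INIT, COPY_ONLY"  # COPY_ONLY leaves the regular backup chain untouched
)
# Drain the informational result sets so the backup runs to completion.
while cursor.nextset():
    pass
```

Developers then restored that .bak file to their local instance to get a full copy of the test data.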
No process is 100% perfect from the start. Expect to iterate multiple times.
Eventually, the time came for general adoption. I was very surprised by the pushback, specifically from the database developers. So much of their time was spent generating deployment scripts that they believed it was how they brought value to the team. The fear was that this process would automate them out of a job. That wasn’t the case. The process was designed to eliminate all that wasted time generating and tweaking deployment scripts, which freed them up to focus on important things like database structure, performance, reviewing changes, and working on complex changes.
Hindsight being what it is, I should have:
- Scheduled a meeting with the database developer and lead developer on the team to walk through the process.
- Scheduled another meeting for the two of them to get an application’s database into the process.
- Scheduled a final meeting for them to teach their team how to use the process, with me in the background to answer questions.
But I didn’t do that. I combined all of that into one big meeting. Live and learn.
We had to change quite a bit to automate database deployments. In the end, it was worth it. Deployments to production became a non-event. The last production deployment I did with that company involved me, the business owner, and my manager. We were online for 30 minutes, and 25 of those minutes were spent telling funny stories and bad jokes. The deployment went through with minimal fuss.
In this article, I focused on how we designed our automated database deployment process rather than on the implementation details. I’ll cover those in the next article.
Until next time, Happy Deployments!
If you enjoyed this article, great news: we have a whole series on automated database deployments.