June 24, 2024

Automating database builds for CI/CD pipelines

This is one section of our complete 12-part guide to CI/CD for databases - get your copy.

The classic CI cycle is a setup in which, whenever new code is added to a source repository, a tool automatically performs an integration build of the codebase. If the build is successful, it is added to the stack of ‘testable’ builds. If the build is not successful, the tool notifies everyone that the codebase is ‘broken’, and the team has to correct that break before work can proceed.

So, no changes can get into the testing pipeline without successfully passing the CI process, and everyone knows it. That is a powerful paradigm, and it creates tremendous leverage around the definition of a ‘successful build’.

The core CI cycle can therefore be broken down into three basic steps.

  • Get the code
  • Process the code
  • Deliver the code

Retrieving the batch from source control

In most application code CI scenarios, retrieving the batch of changes is usually an exercise in the source control system: the simplest answer is to take the newest versions of the files in a given branch. That is fine for application code, which can be recompiled at will. In fact, you can have many branches of application code, each with its own build, and not worry about one breaking another until you try to merge the code.

Unfortunately, batch retrieval for database changes is significantly more rigid. As discussed in the previous section, you need to have a mechanism for isolating individual changes and automatically determining whether or not they have been executed against a specific database instance. 

This behavior is one of the main reasons why teams use database change management automation tools rather than just running raw SQL files. Without the ability to automatically determine whether or not to run (or re-run) a specific change, you have to resort to manual inspection and potentially error-prone human opinion.

Therefore, if you intend to automate change handling, you will likely not be able to just use the latest versions of files in the branch. You will also need additional scripted intelligence and structure to define, manage, and enforce your batches. There are two fundamental reasons for this: the limitations of branching and the challenges of defining batch dividing lines.
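
To make that more concrete, here is a minimal sketch of what such scripted intelligence can look like, using Liquibase-style SQL-formatted changelog syntax (the author names, IDs, and table are hypothetical). Each changeset is an isolated, individually identifiable change:

  --liquibase formatted sql

  --changeset dana:orders-001
  -- Structural change: create the new table
  CREATE TABLE orders (
      id          BIGINT PRIMARY KEY,
      customer_id BIGINT NOT NULL,
      created_at  TIMESTAMP NOT NULL
  );

  --changeset dana:orders-002
  -- Follow-on change, tracked separately from orders-001
  ALTER TABLE orders ADD COLUMN notes VARCHAR(255);

When a changelog like this is applied, the tool records each executed changeset (by file, ID, and author, along with a checksum) in its tracking table and skips anything already applied on the next run, so the batch is defined by the changelog rather than by whatever happens to be the newest file in the branch.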

Branching limitations

Database code is a lot less forgiving when it comes to branching in an SCM system. Because database changes are always cumulative over time, it is very difficult to create and test changes in a branch without impacting the work of others. It can be done if the team has the resources to dedicate a wholly segregated database to the branch, but that is not always feasible.

The cumulative nature of database changes also creates problems when it is time to merge changes from one branch into the mainline. Each change from the branch has to be evaluated for sequencing relative to the changes in the mainline, checked for potential change collisions, properly interleaved with inbound changes from any other branches, and so on.

Even in resource-rich environments, branching should be undertaken carefully. Just like application code, database code in a long-lived branch will become stale very quickly. This increases the chance of rework and fitment problems when it finally comes time to merge it into the mainline, which can eliminate the benefit of having the branch in the first place.

Batch dividing lines

The dividing lines in batches are very important and need to be strongly enforced if database change automation is going to flow quickly and easily. While an individual batch is being processed, the system must NOT pick up anything from a subsequent batch. 

That does not mean the system can’t process more than one batch during an automated job, but due to the nature of database changes, the batches must be handled serially. There is some inefficiency here; some would argue, for example, that changes to different objects could be run in parallel. However, determining whether that is safe for each possible operation would likely require human judgment, and that manual intervention would introduce far more wait time and delay than a computer spends unnecessarily serializing a few steps.

Best stated, the principle here is: Never ingest a partial batch. Changes and batches are serially cumulative and order matters.

Expert insight:

This need to serialize changes is why database change management tools all have a specific ordering structure, whether it is a simple naming convention or a more sophisticated construct like Liquibase’s ChangeLog, which supports adding metadata for grouping changes. Each approach provides a clean sequence as well as a means for programmatically identifying where a batch begins and ends.
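
As a rough illustration of such an ordering structure, the sketch below uses Liquibase’s SQL-formatted changelog syntax; the label values and objects are hypothetical, but the pattern holds: changelog order plus per-changeset identifiers give a deterministic sequence, and label metadata marks which changesets belong to which batch.

  --liquibase formatted sql

  --changeset maria:20240601-01 labels:batch-42
  CREATE INDEX idx_orders_customer ON orders (customer_id);

  --changeset maria:20240601-02 labels:batch-42
  ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP;

  --changeset raj:20240608-01 labels:batch-43
  CREATE TABLE shipments (id BIGINT PRIMARY KEY, order_id BIGINT NOT NULL);

An automated job can then process exactly one batch at a time, for example by filtering on its label during an update, which is one programmatic way to enforce the ‘never ingest a partial batch’ rule.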

Processing database changes

Processing batches of database changes involves running through each included change in each batch in the prescribed order and performing some level of verification on each one to certify it as good enough to test for production readiness.

The simplest way to process them is to run them, in their designated order, into the first database in the pipeline. This run is the first rehearsal for deploying those changes in production. If that deployment works and that database is nominally production-representative, you know the changes should work equally well in Production. This is about as close an equivalent to ‘make sure it compiles’ as exists in the database world.

At first, this sounds like a simple, iterative process, but databases have more going on than simple structural changes. In addition to the structural changes, databases also have programmed logic (stored procedures, functions, triggers, etc.) and the data that gives them their name. 

These three broad categories of changes must be processed in order and then each individual change within each category must be processed in a designated order.

Assuming the type and order of the changes were specified during batch creation, the automation process should be able to follow those instructions and run through the batch relatively easily. Such specifications can come in the form of best practices, file system folder structures, explicitly including files in a designated order, or anything else that fits the team’s work style while enforcing the type ordering the database requires.
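
Here is a minimal sketch of that kind of enforced type ordering, again in Liquibase-style SQL-formatted changelog form with hypothetical object names: structural change first, then programmed logic, then data.

  --liquibase formatted sql

  --changeset dana:130-add-status-column
  -- 1. Structural change
  ALTER TABLE orders ADD COLUMN status VARCHAR(20);

  --changeset dana:131-recreate-open-orders-view runOnChange:true
  -- 2. Programmed logic that depends on the new structure
  --    (a view here; procedures, functions, and triggers follow the same pattern)
  CREATE OR REPLACE VIEW open_orders AS
      SELECT id, customer_id, status FROM orders WHERE status <> 'CLOSED';

  --changeset dana:132-backfill-status
  -- 3. Data change that depends on the structure above
  UPDATE orders SET status = 'NEW' WHERE status IS NULL;

Because the changesets appear in this order in the changelog, the automation applies them in the same order on every database in the pipeline, with no human deciding the sequence at deployment time.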

Database batch problems

The last thing the Processing phase must handle is when there is a problem with the batch. 

For example, what if one of the changes in a batch fails with an error when applied to the database? There must be a way to reset the database to its state before the Processing stage. This is crucial: in order for batch deployments in lower environments to be rehearsals for the eventual deployment to Production, they have to be redone from the starting point of the batch. That implies the test database in the lower environment must be at the “before” state prior to the batch run, because Production will be in the “before” state. Handling situations like this is another example of why smaller batches are preferable.

Reset, rollback, and fix forward

There are three basic strategies for dealing with a partially deployed batch of database changes – reset the database to a snapshot, roll back the individual changes, or fix it forward to the desired state. Each has its own pros and cons. It is also important to consider that, in a complex environment, they might be used in combination.

Whichever strategy or strategies you choose, the most important thing is that the corrective process be tested as much as the deployment process. You would not, for example, want the first time you tried your rollback steps to be just after you had a problem with your production deployment. This is yet another example of the CI/CD principle of using work in the lower environments as a “rehearsal for production”. It is also sometimes referred to as “testing the process just as much as the changes”.

Reset to Snapshot

This strategy uses some underlying system or cloud technology to reset the database storage to a prior point in time. 

  • Pros: 
    • Deterministic – guaranteed to get to prior state
    • Typically fast
  • Cons:
    • Limited to environments where this is technically possible
    • “All or nothing” nature limits flexibility
    • Potentially expensive to maintain a large number of snapshots
    • Invalid strategy for use in Production environments, so not testable

Rollback

This strategy works by pre-defining a reversing change for each new change in a batch, which can be applied in case of problems (see the sketch after the list below).

  • Pros:
    • Testable in lower environments for as-needed use in production
    • Explicitly definable and inspectable by developers or other change creators
  • Cons:
    • Not always possible for all changes
    • Still represent changes to the database that are just the opposite of other recent changes
    • The state of a database after a problem cannot always be predicted, so the rollback is not guaranteed to work
    • Stepping back does not accomplish the goal of getting new features to users
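
As referenced above, here is a minimal sketch of pre-defined reversing changes in Liquibase-style SQL-formatted changelog form (object names are hypothetical). Each changeset carries its own explicit rollback statement, which is what makes the reversal inspectable by its author and testable in lower environments:

  --liquibase formatted sql

  --changeset dana:210-add-loyalty-points
  ALTER TABLE customers ADD COLUMN loyalty_points INT DEFAULT 0;
  --rollback ALTER TABLE customers DROP COLUMN loyalty_points;

  --changeset dana:211-create-rewards
  CREATE TABLE rewards (id BIGINT PRIMARY KEY, customer_id BIGINT NOT NULL);
  --rollback DROP TABLE rewards;

If a batch fails partway through, the applied changesets can be walked back in reverse order using these statements (for example, by rolling back a set number of changesets or rolling back to a tag captured before the batch started), and that same procedure can be rehearsed in every lower environment first.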

Fix forward

With this strategy, problems are resolved by submitting corrective changes through some subset of the pipeline to fix the problem and push forward with getting the new application features to users.

  • Pros:
    • Testable in lower environments for as-needed use in production
    • Explicitly definable and inspectable by developers or other change creators
    • Prioritizes feature delivery
  • Cons:
    • Not always possible for all changes
    • Not always aligned with legacy management doctrine

Dive deeper → Database rollbacks: The DevOps approach to rolling back and fixing forward

At the end of this guide, we’ll explain how these strategies come to life in an automated database CI/CD pipeline. 

Delivering the post-processing results

The last step in a CI workflow is to publish the productized batch or batches from the processing run to a location where they can be consumed in downstream environments for testing and, eventually, production deployment.

This is best done by creating a single artifact containing the full batch or batches in the executable form for your deployment tool and then putting the artifact in a repository where it can be guaranteed to be immutably versioned over time just like an application binary. Both parts of managing the post-CI artifact are crucial. 

First, keeping the artifact in an executable form is crucial to making your deployments repeatable in whatever environment you target, which is again an important part of rehearsing for production. It also lets you gain the benefit of specialized tools. Bespoke database change automation tools, for example, allow you to:

  • Have a specific snapshot point of your changes
  • Ensure that batches are not skipped on any target databases
  • Reconcile change records in any given target database with the expected list from your batches (see the sketch below)
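
To make that reconciliation concrete, here is a hedged sketch of the kind of query a tool, or a curious engineer, can run against Liquibase’s DATABASECHANGELOG tracking table in a target database; comparing its rows against the changesets in the versioned artifact shows which expected changes are present, missing, or unexpected.

  -- Which changes have actually been applied to this target, and in what order?
  SELECT filename, id, author, orderexecuted, dateexecuted, deployment_id
  FROM DATABASECHANGELOG
  ORDER BY orderexecuted;

In day-to-day use, a status-style command run against the artifact performs this comparison for you, but the underlying record is simply this table.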

Second, the immutable storage of the artifact is important to ensure traceability and to simplify subsequent deployment processes by avoiding the need to re-process the batch. This is often accomplished by simply using the same artifact repository you are using for your application binaries. By respecting these two techniques, you can ensure that your database changes respect the core CI/CD principle of “build once/deploy many”.

By accommodating the specific nature of database code, exploiting good batch definition, and changing a few paradigms, a CI process for database changes can reliably deliver deployable groups of database changes that we can control and track through our pipelines just like we do our application code changes. 

However, just as application code developers learned, CI is a lot more than getting working packages of code (which are binaries in their case). It also provides an opportunity to improve the overall quality of the changes — getting them right the first time — which reduces rework and improves the productivity of everyone involved.

Ready to dive into all 12 parts of the complete CI/CD for databases guide? It covers:

  1. The goals & challenges of designing CI/CD pipelines
  2. Batching database changes
  3. Automating database “builds”
  4. Ensuring quality database changes
  5. Reliable and predictable database deployments
  6. Trustworthy database environments
  7. Sandbox databases
  8. Database governance for consistency, quality, and security
  9. Observability for database CI/CD
  10. Measuring database CI/CD pipeline performance
  11. Handling and avoiding problems with database changes
  12. Bring CI/CD to the database with Liquibase
