May 4, 2016

Do You Really Need Rollbacks for Your Database Changes?

Every Valentine’s Day, I check the batteries in my smoke detectors because I love my family. Now, the last thing I want to hear is a smoke detector alarm, but I would be a really bad parent and spouse if I didn’t make sure they were in good working order.

The same goes for all my other safety equipment. From my kid’s car seat to fire extinguishers in the kitchen to my zealot-like focus on safety when I fry a turkey, I surround my family with things that I should never have to use. The same should be done for database changes.

Nearly every application deployment includes database schema changes to support the new code. You know this. You also know that sometimes the application isn’t quite right and needs to be rolled back. That’s easy for the application but hard for the database. That’s why we have rollback scripts.
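For a concrete picture, here’s a minimal sketch of what a paired change and rollback might look like, written in Liquibase’s formatted SQL changelog style (the author, table, and column names are made up for illustration):

```sql
--liquibase formatted sql

--changeset jdoe:add-customer-loyalty-tier
-- Forward change: add a column the new application release needs.
ALTER TABLE customer ADD loyalty_tier VARCHAR(20) DEFAULT 'STANDARD';
--rollback ALTER TABLE customer DROP COLUMN loyalty_tier;
```

The rollback statement rides along with the change itself, so if the application release is pulled back, there is a documented way to undo the schema change that shipped with it.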

Just like application changes, database changes should have a lower chance of being rolled back as they get closer to production. With each stage of testing, the proposed application and database changes should become less and less buggy. (After all, that’s why they were pushed to QA in the first place!)

Unfortunately, we see our customers (prior to using Datical DB) rolling back as many as half of their deployments in production. HALF! That means they have a coin flip’s chance of success. That’s a good success rate for a baseball hitter, but a horrible one for IT.

There are two primary reasons for this. The first is that configuration drift over time has left each database environment, from dev to production, out of sync. That means the QA database is not representative of the production database, so when database changes are tested in QA, there is a good chance that effort is wasted. What’s the point of testing if the environments are not in sync? With Datical DB, our customers rely less on individual heroics and opaque change scripts. Datical DB automation brings consistency to deployments and eliminates configuration drift.
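To make “out of sync” concrete: one quick way to spot drift is to dump the schema metadata from each environment and diff the results. A minimal sketch against any database that exposes information_schema (the 'app' schema name is a placeholder):

```sql
-- Run this in both QA and production, then diff the two outputs.
-- Any row that appears in one environment but not the other is drift.
SELECT table_name,
       column_name,
       data_type,
       is_nullable
FROM   information_schema.columns
WHERE  table_schema = 'app'
ORDER  BY table_name, column_name;
```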

The second reason pre-Datical DB customers rely too heavily on rollbacks is that they don’t understand the full impact of a change. Most change scripts are executed late at night and in large batches, so DBAs are unable to completely review and understand each one. It’s not until later that the DBAs are notified of performance issues or damage caused by the changes. At that point, they will seek to roll back the change. The challenge is that data has been collected since the change went in, so a simple rollback isn’t going to work. The DBA will need to peel apart the change and figure out a way to protect the data gathered since it was deployed. That translates to a lot of unexpected work for the DBAs, and other tasks will inevitably be delayed.
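As an illustration of why the simple rollback fails once data has accumulated, take the hypothetical loyalty_tier column sketched earlier. The naive rollback drops it, and every value written since the deployment goes with it; a data-preserving rollback has to set that data aside first:

```sql
-- Naive rollback: undoes the schema change but destroys every
-- loyalty_tier value written since the deployment.
ALTER TABLE customer DROP COLUMN loyalty_tier;

-- Data-preserving rollback: copy the post-change data aside
-- before removing the column, so it can be restored or audited later.
CREATE TABLE customer_loyalty_tier_backup AS
    SELECT customer_id, loyalty_tier
    FROM   customer;

ALTER TABLE customer DROP COLUMN loyalty_tier;
```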

Datical DB provides a “Forecast” function that lets you review proposed changes well before they are pushed to production. Furthermore, the Datical DB Rules Engine lets you block risky, non-standard changes based on your own standards. Both eliminate the need for a rollback long after a change has persisted to the database.

I would argue that rollbacks are inherently a good thing to have in your back pocket, but if you are using them a lot, you have deeper issues. As discussed, the two biggest ones are out-of-sync environments and not fully understanding change impact. Luckily, Datical DB can help you with both.

If you would like to learn more about Datical and the top five database change rules every company should enforce to avoid too many costly, complicated database rollbacks, check out this whitepaper.
