Using Liquibase in Kubernetes

October 29, 2020

Embedding Liquibase into the startup process of an application is a very common pattern for good reason. Once you set up Liquibase to deploy on app startup, your database state will always match what your code expects. Liquibase even ships with built-in support for this with the Spring Boot and Servlet integrations.

Database Change Lock

Another way Liquibase supports this pattern is to include a locking mechanism. This lock ensures that if multiple servers start simultaneously, they won’t run into problems as they both try to apply the same database changes. 

The locking system uses a DATABASECHANGELOGLOCK table as the synchronization point, with the service setting the LOCKED column to 1 when it takes the lock and setting it to 0 when it finishes. 

As long as the process that set the LOCKED column to 1 isn't killed before it has a chance to set it back to 0, this works fine. When that doesn't happen, all of the other Liquibase processes (including a newly restarted process on the same machine) will wait for a 0 value that will never come. The only way to recover from this scenario is to run the Liquibase releaseLocks command or update the DATABASECHANGELOGLOCK table manually.
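As a sketch of the manual option, a stuck lock can be cleared with SQL along these lines (the column names match the standard DATABASECHANGELOGLOCK schema; adjust syntax for your database):

```sql
-- Manually release a stuck Liquibase lock.
-- The lock table always contains a single row with ID = 1.
UPDATE DATABASECHANGELOGLOCK
SET LOCKED = 0,
    LOCKGRANTED = NULL,
    LOCKEDBY = NULL
WHERE ID = 1;
```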

Kubernetes: “When in doubt, kill the process”

Historically, stuck locks have not been much of a problem because the Liquibase process was rarely killed, and when it was, it was killed manually, which made the stuck lock easier to notice and recover from. Tools like Kubernetes, however, take a "when in doubt, kill the process" philosophy, and this causes problems.

While we are working on better handling killed processes, Kubernetes provides additional options to better fit Liquibase into the application startup process.

At the heart of this issue is Kubernetes' expectation that pods start quickly. Rightly so: you WANT to catch slow-startup issues quickly. However, you also need to give your pods the time they need for potentially time-consuming initialization work, such as migrating your database.

The Solution: Init Containers

Fortunately, since this is a common problem for many Kubernetes users, Kubernetes provides direct support for handling it with init containers. Init containers can be added to pods and are exactly like regular containers except that they always run to completion before the application containers start. Because they must run to completion, they don't have the liveness/readiness/startup probes and timeouts that cause Kubernetes to kill your regular containers when startup takes too long.

For Liquibase, this means that you should pull the Liquibase execution out of your main application and move it to a separate init container. How you choose to do so will depend on how you want to manage your source code, artifacts, and containers.

For many people, the easiest way to create an init container is to use the standard Liquibase Docker image.

In your build process, create an image based on our Docker image:

FROM liquibase/liquibase

ADD path/to/your/changelog/source /liquibase/changelog

CMD ["sh", "-c", "liquibase --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=relative/changelog.xml update"]

Add this image as an init container in your pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container
      image: main-artifact:1.28
  initContainers:
    - name: liquibase
      image: liquibase-container:1.28

Notice that the URL, username, and password in the above Dockerfile pull from environment variables, so you can configure your init container to set those variables directly, pull them from secrets, or whatever else you would like.
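For example, assuming a Kubernetes Secret named db-credentials exists (a hypothetical name, as is the JDBC URL below), the init container entry could populate those variables like this:

```yaml
    - name: liquibase
      image: liquibase-container:1.28
      env:
        - name: URL
          value: "jdbc:postgresql://db:5432/app"  # placeholder JDBC URL
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials  # hypothetical Secret holding DB credentials
              key: username
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```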

Don’t forget to remove the Liquibase update operation from your main application startup.
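If your main application uses the Spring Boot integration, for example, this can be as simple as one line in application.properties (the property name shown is for Spring Boot 2.x):

```properties
# Prevent the application itself from running Liquibase on startup;
# the init container handles migrations instead.
spring.liquibase.enabled=false
```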

If this setup doesn’t work for you, there are many other ways to create and configure the init container.
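For instance, rather than building a custom image, one sketch is to run the liquibase/liquibase image directly as the init container and supply everything at the pod level (the volume name, JDBC URL, and credential wiring below are placeholders you would adapt to your setup):

```yaml
  initContainers:
    - name: liquibase
      image: liquibase/liquibase
      args:
        - --url=jdbc:postgresql://db:5432/app  # placeholder
        - --username=$(USERNAME)               # expanded from env below
        - --password=$(PASSWORD)
        - --classpath=/liquibase/changelog
        - --changeLogFile=changelog.xml
        - update
      env:
        - name: USERNAME
          value: app_user   # placeholder; typically a secretKeyRef
        - name: PASSWORD
          value: app_pass   # placeholder; typically a secretKeyRef
      volumeMounts:
        - name: changelog   # volume containing your changelog files
          mountPath: /liquibase/changelog
```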


By taking advantage of Kubernetes’ init phase, you can sidestep the cause of stuck Liquibase locks AND better fit into the infrastructure Kubernetes provides. Win-win. Give it a try and let us know what you think or if we should add any other details that will be helpful for other users.

Article author
Nathan Voxland, Project Founder