
Best Practices: Managing Data Engineering DDL Changes

Managing DDL Changes in ETL Jobs: Beyond "Can We?" to the Critical Question of "Should We?"

July 10, 2025


We frequently encounter the question of whether to use an ETL/ELT tool to make DDL changes as part of data automation jobs. While technically feasible, the more crucial question, to paraphrase Jeff Goldblum in Jurassic Park, isn't "whether you could do something," but "whether you should." This distinction lies at the heart of a fundamental pain in data engineering: the problem of shared resources and visibility.

A core lesson from early software engineering is the necessity of high visibility for changes to common resources—those upon which many individuals' work depends. In API development, for example, changes necessitate a clear contract outlining support for older versions and a parallel transition period.

The data engineering landscape, particularly with ETL/ELT tools, mirrors an earlier era of corporate application systems where integration occurred at the database level, lacking formal APIs. These systems shared a common database as modules of a larger application. Data engineering similarly operates with a database-centric or data store-centric view, with numerous teams relying on its structure, expecting data in specific locations and formats.

This reliance highlights the critical problem: How do downstream consumers learn about DDL changes, and how can these changes be made visible to them to allow for necessary adjustments and prevent disruption? Ignoring this vital communication leads to significant issues in data engineering: rework, delays, unproductive meetings, ill feelings, and burnout as teams scramble to react to unforeseen changes.

There isn't a single "right" way to manage DDL changes, but many "wrong" ways exist. Convenience for an individual or team, while seemingly productive in isolation, can be detrimental to the broader organization. The organization must collectively ask: "How do we ensure everyone dependent on a shared resource, like a data store, is aware of changes?"

Lessons Learned and Best Practices

Software engineering offers valuable lessons here. Software engineers grappled with similar challenges during database-level integration and continue to do so with API-level integration. Their solution lies in:

  • Breaking out concerns: Separating different types of changes.
  • Publishing changes: Making changes transparent through practices like Continuous Integration (CI). In CI/CD, the CI component makes changes visible as they are batched and applied.
  • Defining areas of impact: For instance, infrastructure code is often managed in a separate repository from mainline code, making changes to it immediately obvious within its own distinct area rather than being buried deep within a larger codebase (see the sketch after this list).
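
To make "breaking out concerns" concrete, here is a minimal sketch of a DDL change captured as a Liquibase changeset in its own dedicated changelog file. It is illustrative only: the file path, table, column, and author names are assumptions, not details from this post.

    # db/changelog/2025/07-add-loyalty-tier.yaml (path and names are assumptions)
    databaseChangeLog:
      - changeSet:
          id: add-loyalty-tier-to-customer
          author: data-eng-team
          comment: Nullable column; downstream consumers are unaffected until they opt in.
          changes:
            - addColumn:
                tableName: customer
                columns:
                  - column:
                      name: loyalty_tier
                      type: varchar(32)
          rollback:
            - dropColumn:
                tableName: customer
                columnName: loyalty_tier

Because the change lives in its own file and directory, a pull request touching it is immediately recognizable as a DDL change instead of being buried in a larger diff, and the rollback is reviewed alongside the change itself.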

Our customers working with application databases apply the same strategy: they place database changes in their own dedicated space. This makes DDL changes a top-level concern, highly visible to all affected teams, and easy to trace to when and where they occurred.

The Power of Communication and Specialized Tools

As organizations mature, having dedicated spaces for changes to shared resources allows for the creation of alerts, further enhancing awareness and reducing stress. Ultimately, this all boils down to a fundamental human factor: communication. The ability to communicate that a change is being made to a shared resource, enabling downstream teams to react proactively and effectively, is paramount.
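
As one hedged illustration of such an alert, a CI trigger can watch the dedicated changelog path and announce any change to downstream consumers. The workflow below is a hypothetical GitHub Actions sketch; the path, webhook secret, and message are assumptions.

    # .github/workflows/ddl-change-alert.yaml (hypothetical; names are assumptions)
    name: ddl-change-alert
    on:
      push:
        paths:
          - "db/changelog/**"   # fire only when the shared data store's changelog moves
    jobs:
      announce:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Notify downstream consumers
            # SLACK_WEBHOOK_URL is a placeholder secret for whatever channel your teams watch.
            run: |
              curl -X POST -H 'Content-Type: application/json' \
                -d '{"text":"DDL change pushed to db/changelog on ${{ github.ref_name }}"}' \
                "${{ secrets.SLACK_WEBHOOK_URL }}"

The mechanism matters less than the habit: any change to the shared resource produces a signal that downstream teams can subscribe to.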

Parochial views—where teams operate in silos, solely focused on their immediate tasks and tools—are detrimental to organizational health and hinder scalability, agility, and profitability. In a rapidly evolving world, particularly with AI, understanding and utilizing data quickly is crucial. This necessitates looking beyond individual productivity to foster seamless team collaboration and communication.

Avoiding a ‘One Tool Fits All’ Mentality for Database DevOps

This is precisely where tools like Liquibase come into play. Originating from the DevOps world, where software engineering teams manage changes to shared resources (large or small) with visibility and control, these tools enable synchronized teamwork and maximize productivity. These same principles are directly applicable to data engineering today. As data engineering teams strive for scalability and acceleration, adopting agile and DevOps practices, including those facilitated by tools like Liquibase, will bring these critical issues to the forefront.

Therefore, when a data engineering team over-focuses on the "one tool does it all" mentality, it's vital for managers of data engineers, data stores, or model delivery to ask:

  • How do people stay aware of changes to shared resources?
  • How are these changes documented and published?
  • How can we prevent individual productivity from inadvertently breaking things for a wider group downstream?

Answering these questions sooner rather than later will lead to significant productivity gains. Contact us to learn how Liquibase can assist your organization in addressing these challenges. We have extensive experience working with companies implementing precisely these types of solutions. 

Frequently Asked Questions

Q: What best practices does Liquibase recommend for multiple teams using database change management tools?

A: Liquibase can effectively manage database changes across multiple teams through various strategies. These strategies focus on organizing changelogs, structuring repositories, and automating deployments to ensure consistency and collaboration. Key approaches include using a single, shared changelog with pull requests for changes or employing multiple schemas with dedicated directories for each team. Visit this resource for a list of best practices.
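
As a minimal sketch of the "multiple schemas with dedicated directories" approach, a root changelog can include each team's changesets from its own directory. The team names and paths below are assumptions for illustration.

    # db/changelog/changelog-root.yaml (illustrative; team names are assumptions)
    databaseChangeLog:
      # Each team owns its own directory; files run in the order resolved here.
      - includeAll:
          path: db/changelog/team-ingestion/
      - includeAll:
          path: db/changelog/team-analytics/
      # A shared file can hold cross-team changes that need joint review.
      - include:
          file: db/changelog/shared/cross-team-changes.yaml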

Q: How does Liquibase Pro support logs and auditing of database changes?

A: Liquibase Pro uses Structured Logging to make Liquibase operation data easily available and machine-readable. You can use monitoring and analysis tools to read, query, and act upon this data in automated workflows. Liquibase not only does the tricky work of database schema versioning and management, it also helps you understand the data around these operations and how they fit into your overall DevOps and CI/CD performance.

Q: What tools can I use with Liquibase structured logging?

A: Tools you can use with Liquibase Structured Logging include AWS CloudWatch, Grafana, OpenSearch, Sematext, Splunk, Elasticsearch, and other analysis instruments. With Structured Logging, monitoring and analysis tools can easily determine and act upon both real-time and long-term trend data for Liquibase usage. Structured logs can also capture performance, error, and security data, tracking for auditability and outcomes, and even standard DORA DevOps metrics.

Liquibase uses a Mapped Diagnostic Context (MDC) to generate structured logs in a JSON format.
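
For example, structured logging can be switched on per run with the --log-format global argument. The pipeline step below is a sketch that assumes the Liquibase CLI is installed on the runner, connection details come from the standard LIQUIBASE_COMMAND_* environment variables, and the changelog path is hypothetical.

    # Hypothetical CI step (assumes Liquibase Pro for JSON-formatted structured logs)
    - name: Update with structured JSON logs
      run: liquibase --log-format=JSON update --changelog-file=db/changelog/changelog-root.yaml

The resulting JSON records can then be shipped to the monitoring tools above.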

Q: Why is Database DevOps important?

A: Database DevOps extends DevOps principles, tools, and workflows to database management, keeping updates to data stores synchronized with application delivery. It connects database management to the collaborative DevOps model, aligning database updates with software releases for a more consistent and efficient lifecycle.

The key shift is treating database code like application code—using version control, automated testing, and CI/CD pipelines—to enable a repeatable, structured approach to change management.
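
As a rough sketch of that shift, database changes can ride the same CI/CD flow as application code: validate on every push, apply to a test environment, then promote the identical changesets to production. This is a hypothetical GitHub Actions pipeline; job names, environments, and the changelog path are assumptions, and it presumes the Liquibase CLI plus per-environment LIQUIBASE_COMMAND_URL, _USERNAME, and _PASSWORD secrets.

    # .github/workflows/db-cicd.yaml (hypothetical; names are assumptions)
    name: db-cicd
    on:
      push:
        branches: [main]
        paths: ["db/changelog/**"]
    jobs:
      validate:
        runs-on: ubuntu-latest
        environment: test   # validate also needs a database connection
        steps:
          - uses: actions/checkout@v4
          - name: Lint the changelog before anything deploys
            run: liquibase validate --changelog-file=db/changelog/changelog-root.yaml
      deploy-test:
        needs: validate
        runs-on: ubuntu-latest
        environment: test
        steps:
          - uses: actions/checkout@v4
          - name: Apply changesets to the test database first
            run: liquibase update --changelog-file=db/changelog/changelog-root.yaml
      deploy-prod:
        needs: deploy-test
        runs-on: ubuntu-latest
        environment: production
        steps:
          - uses: actions/checkout@v4
          - name: Promote the same changesets to production
            run: liquibase update --changelog-file=db/changelog/changelog-root.yaml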

Dan Zentgraf
Director of Solutions Architecture
