
Liquibase Pro and MongoDB: A Match Made for AI Acceleration, Part 1

June 18, 2025

See Liquibase in Action

Accelerate database changes, reduce failures, and enforce governance across your pipelines.

Watch a Demo


Organizations face governance challenges in today's AI-driven landscape. The rapid increase in enterprise data offers significant innovation potential, but many struggle to leverage it effectively due to weak data governance. A strong governance framework is essential for future readiness and staying competitive.

Effective data governance is built upon four key principles:

  • Clarity: Providing clarity on available data assets to support informed decision-making.
  • Control: Strategically balancing data accessibility with robust security measures.
  • Quality assurance: Guaranteeing data reliability for dependable analytics and insights.
  • Ownership: Cultivating leadership engagement and fostering organization-wide adoption.

By focusing on these pillars, organizations can build trust in data, effectively utilize it, and ensure its protection, ultimately contributing to a stronger competitive edge. Liquibase Pro helps support these pillars by making governance possible at scale when working with flexible database architectures like the one provided by MongoDB.

The combination of Liquibase Pro's enterprise-grade schema evolution control with MongoDB's flexible document database architecture creates a unique solution that directly addresses AI's fundamental tension between rapid experimentation and production stability. 

Where traditional approaches force organizations to choose between governance and agility, this pairing delivers both simultaneously through a cohesive framework built around four core capabilities:

  • Accelerated Development Without Compromise - MongoDB's flexible document model and APIs naturally accommodate the diverse, evolving data structures that AI applications require, while Liquibase Pro's automated change management pipelines eliminate the traditional database bottlenecks that slow AI iteration cycles. Teams can rapidly experiment with new data schemas, feature sets, and model inputs without waiting for lengthy approval processes or manual deployment procedures.

  • Enterprise Reliability at AI Scale - The combination leverages MongoDB's built-in replication and sharding capabilities alongside Liquibase Pro's controlled change management to eliminate the delivery delays and outages that plague AI deployments. This ensures that as AI models scale from experimental prototypes to production systems handling massive datasets, the underlying data infrastructure remains stable and performant.

  • Comprehensive Governance and Compliance - Liquibase Pro brings automated policy enforcement, structured logging, and comprehensive audit trails to MongoDB's flexible environment, creating the deep observability and regulatory confidence that enterprises require. This means organizations can enforce consistent naming conventions and data quality standards across all MongoDB collections while maintaining complete visibility into every change made to support AI applications.

  • Future-Ready Architecture for AI Evolution - Perhaps most critically, this pairing provides seamless data evolution and schema management that's specifically optimized for AI-era applications. As AI models evolve, require new data sources, or need different data structures, the combined platform ensures these changes can be implemented safely, consistently, and with full governance oversight across the entire organizational ecosystem.
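In practice, these governed changes are expressed as versioned Liquibase changesets rather than ad hoc scripts. The sketch below is illustrative only, assuming the Liquibase MongoDB extension is installed; the collection name, changeset id, author, and validator rules are hypothetical:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:mongodb="http://www.liquibase.org/xml/ns/mongodb"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">

    <!-- Hypothetical changeset: create a customer_profiles collection with a
         JSON Schema validator so new documents meet a minimum quality bar. -->
    <changeSet id="create-customer-profiles" author="data-platform-team">
        <mongodb:createCollection collectionName="customer_profiles">
            <mongodb:options>
                {
                  validator: {
                    $jsonSchema: {
                      bsonType: "object",
                      required: ["customerId", "segment"],
                      properties: {
                        customerId: { bsonType: "string" },
                        segment:    { enum: ["retail", "wholesale", "online"] }
                      }
                    }
                  },
                  validationAction: "error"
                }
            </mongodb:options>
        </mongodb:createCollection>
    </changeSet>
</databaseChangeLog>
```

Running `liquibase update` against the MongoDB connection applies the changeset once, records it in Liquibase's tracking collections, and leaves an auditable deployment history - the same mechanism that Liquibase Pro's policy checks and structured logging build on.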

Why Data Governance is Critical for Artificial Intelligence

The quality and consistency of the data fed into AI models directly correlate with the accuracy and value of the outcomes those models produce, making it critical to understand not just what the data represents but also its complete lineage across the organizational ecosystem. When poor data governance allows biased or inaccurate demographic information to flow into AI systems, the financial consequences can cascade through every customer touchpoint - from fundamentally flawed targeting decisions that alienate profitable segments to massive marketing budget misallocations that waste resources on low-converting audiences.

According to McKinsey's latest research, published in March as "The state of AI: How organizations are rewiring to capture value," the adoption of generative AI has accelerated greatly: 71 percent of respondents say their organizations regularly use gen AI in at least one business function, up from 65 percent in early 2024. Gartner predicts that "by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously." This explosive growth has exposed and underscored how critical data governance is to reaping the rewards of these AI applications.

Consider how this plays out in practice: An AI model tasked with customer segmentation for a retail company draws from product data, customer profiles, transaction histories, and demographic information that spans multiple systems and database platforms. If the underlying demographic data carries historical biases - perhaps from legacy systems that systematically under-represented certain customer groups or from data collection practices that favored specific channels - the AI model doesn't just inherit these biases, it amplifies them across every decision it makes. The model might consistently recommend premium products only to certain demographic segments while steering others toward lower-margin offerings, not because of actual purchasing patterns, but because the training data reflected outdated or incorrect assumptions about customer preferences.

The compounding effect becomes particularly damaging when these AI-driven decisions influence resource allocation across different business units. Marketing budgets get systematically misdirected, customer acquisition strategies favor less lucrative segments, and product development priorities shift based on skewed insights. What makes this especially insidious is that unlike traditional reporting where bias might be contained to a single dashboard or analysis, AI models actively apply their learned biases to every piece of data they encounter, creating a feedback loop that perpetuates and magnifies the original data quality problems across the entire customer ecosystem.


Why Traditional Data Governance Fails in the AI Era

This cascade of bias and amplification happens because traditional data management and governance approaches, designed for predictable business intelligence and transactional systems, fundamentally break down when confronted with AI workloads that demand flexibility and scale. These legacy approaches were typically applied in a very siloed fashion to specific data domains or granular datasets, focused on ensuring a particular data delivery artifact was "good" - often by applying targeted data repair scripts or pipelines before rendering that data in its final static form, like a dashboard or report.

But AI models operate fundamentally differently from these traditional use cases. Whether agentic or generative in nature, the model actively applies its learning to all data it can "see" across the organizational ecosystem. For AI to deliver on its intended value proposition, these models often need access to live, uncurated datasets that span multiple business domains. Consider a retail company selling diverse products through different channels - you have product catalogs, customer profiles, distributor networks, manufacturing data, and transactional records for purchases and returns. The product data attributes are likely common and shared to some degree across all these domains, but they traverse many different systems throughout the enterprise, each running on different database platforms with their own data structures and conventions.

(This blog is the first in a series of posts discussing governance challenges, schema management, and the dynamic structures required to support AI. Look for our next post where we share real examples of these considerations in action.)

Next Action

Liquibase Pro helps organizations speed development and respond to the rapid pace of AI by delivering database change faster with the right controls in place.

Book a live demo today.

Frequently Asked Questions

What does data governance in an AI era require?

Effective data governance to support AI is built upon four key principles:

  • Clarity: Providing clarity on available data assets to support informed decision-making.
  • Control: Strategically balancing data accessibility with robust security measures.
  • Quality assurance: Guaranteeing data reliability for dependable analytics and insights.
  • Ownership: Cultivating leadership engagement and fostering organization-wide adoption.

When combined, these principles serve as the cornerstone to supporting AI and developer efficiency.

Why is traditional data governance so hard in the AI Era?

Traditional data management and governance approaches are designed for predictable business intelligence and transactional systems. These systems fundamentally break down when confronted with AI workloads that demand flexibility and scale. AI models operate fundamentally differently from these traditional use cases. Whether agentic or generative in nature, the model actively applies its learning to all data it can "see" across the organizational ecosystem. For AI to deliver on its intended value proposition, these models often need access to live, uncurated datasets that span multiple business domains. As a result, these models require a different approach - similar to the one supported by the Liquibase Pro and MongoDB use case.

What is needed to support flexible governance models built for today's emerging AI needs?

Where traditional approaches force organizations to choose between governance and agility, the Liquibase Pro and MongoDB pairing delivers both simultaneously through a cohesive framework built around four core capabilities:

  • Accelerated Development Without Compromise - To support the diverse, evolving data structures that AI applications require, Liquibase Pro's automated change management pipelines eliminate the traditional database bottlenecks that slow AI iteration cycles. Teams can rapidly experiment with new data schemas, feature sets, and model inputs without waiting for lengthy approval processes or manual deployment procedures.

  • Enterprise Reliability at AI Scale - Liquibase Pro's controlled change management eliminates the delivery delays and outages that plague AI deployments. This ensures that as AI models scale from experimental prototypes to production systems handling massive datasets, the underlying data infrastructure remains stable and performant.

  • Comprehensive Governance and Compliance - Liquibase Pro brings automated policy enforcement, structured logging, and comprehensive audit trails to complex environments, creating the deep observability and regulatory confidence that enterprises require. This means organizations can enforce consistent naming conventions and data quality standards while maintaining complete visibility into every change made to support AI applications.

  • Future-Ready Architecture for AI Evolution - By providing seamless data evolution and schema management specifically optimized for AI-era applications, Liquibase Pro can help support your needs now and in the future. As AI models evolve, require new data sources, or need different data structures, the combined platform ensures these changes can be implemented safely, consistently, and with full governance oversight across the entire organizational ecosystem.
Jennifer Lewis
Senior Technical Sales Engineer, Liquibase
