Database Observability



What’s happening to the databases and data pipelines your business relies on? What is their current state? Which changes have been deployed?

How can we use these insights to improve the database change pipeline?

Database observability uncovers these insights by analyzing the logs created throughout workflows, presenting them in your DevOps observability dashboards, and enabling continuous optimization of the database change process.

After all, you can’t fix what you can’t see. This guide dives into the concept of observability for databases: what it is, how it helps your teams and business, and what it means for your DevOps performance. 

What is database observability?

Database observability (also known as data, data store, or data pipeline observability) is the ability to measure the current state of a system based on its output, or the data it generates. That output is sent to a team’s observability or analytics platforms to track key workflow performance indicators. 

Database observability insights provide the feedback teams need to improve CI/CD pipeline performance, consistency, efficiency, and agility as databases change over time. Observability can also:

  • Improve security and compliance
  • Enable trend analysis
  • Reduce remediation time
  • Anticipate issues before they occur
  • Identify drifts in database state 

Visibility into database change management gives teams – whether primary application development teams or data science/business intelligence teams – the ability to better gauge reliability, security, and efficiency in their deployment workflows. Teams can identify issues, such as policy violations, and run root cause analyses with specific context that ensures quick remediation and reduces downtime. Observability can also give insight into database performance, including which apps are using it and how. 

Observability also helps teams go after the weak links in their CI/CD processes. Database environments tend to include slow, manual processes for schema migration: in an otherwise fast, automated pipeline, database evolution often remains the manual bottleneck. 

In database DevOps, observability branches into two categories: change operation monitoring and pipeline analytics. Change operation monitoring tracks database changes, detailing the what, when, why, and who of each change and assessing its effect on the database's overall state. This level of observability ensures that teams can trace every modification back to its origin, providing a clear audit trail and enabling swift responses to any issues that arise.

Database pipeline analytics measures workflow performance across the database change management process. This includes the application of DORA metrics to database operations. By evaluating these metrics, teams can gain insights into the efficiency and effectiveness of their database change pipelines, identifying areas for improvement and driving toward optimal performance. Together, change operation monitoring and pipeline analytics offer a comprehensive view to inform decision-making, enhance operational efficiency, and support continuous improvement in database DevOps practices.
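To make the first category concrete, here is a minimal sketch (in Python, with all names and records purely illustrative) of the kind of change record that change operation monitoring relies on – the what, when, why, and who of each change – and how an audit trail can trace a change back to its origin:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical change record capturing the what/when/why/who of each change.
@dataclass
class ChangeRecord:
    change_id: str         # what: identifier of the schema change
    author: str            # who made the change
    deployed_at: datetime  # when it was deployed
    reason: str            # why: ticket or description
    target_db: str         # which database was affected

def audit_trail(records, change_id):
    """Trace every deployment of a given change back to its origin."""
    return [r for r in records if r.change_id == change_id]

records = [
    ChangeRecord("add-index-users", "dana", datetime(2024, 5, 1), "JIRA-101", "orders-db"),
    ChangeRecord("drop-col-legacy", "sam", datetime(2024, 5, 2), "JIRA-102", "orders-db"),
    ChangeRecord("add-index-users", "dana", datetime(2024, 5, 3), "JIRA-101", "reports-db"),
]

trail = audit_trail(records, "add-index-users")
print(len(trail))  # 2 deployments of this change across databases
```

A real audit trail would be assembled from structured logs rather than in-memory objects, but the shape of the record – change, author, timestamp, reason, target – is the same idea.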

Database observability is crucial to closing the velocity gap, improving security, and extending DevOps to the database

Data observability vs database observability

As with many infrastructure terms, there’s nuance between data and database (or data store). 

Data observability is the ability to understand the components, structure, and integrity of the data kept within its storage systems. How the data itself is structured is a core part of data observability (along with quality, freshness, volume, and lineage). 

Both data and database observability can speak to the quality, performance, reliability, and state of the data pipeline. Data observability will inform insights related to data integrity, while database observability will inform insights related to the infrastructure. 

Database monitoring vs database observability

Monitoring provides the ability to know the current state of your systems or a one-time event, with real-time alerts and time-bound reports. It can tell you the “what” for an isolated event but not the “who, where, why, how” – those questions are for observability. Observability is not a constant monitor; it is a time-bound package of reports (logs) that is presented and leveraged as business intelligence.  

Database monitoring is an external capability that looks inwards on your database environments to gauge query performance, resourcing, and end-user errors or outages. 

Database observability looks at the database schema change process to understand structural evolutions over time. Instead of data management, it is concerned with the DevOps workflows that impact the application and data pipelines. 

Database observability metrics: what to observe

Providing visibility into the state and change of data stores and pipelines, database observability taps log data to feed DevOps dashboards with metrics on change management and workflow performance. 

Available metrics are more a matter of technical capability – what is being logged – and team/DevOps priorities. To start, think about “state of the database” change operation monitoring metrics like:

  • Deployment count
  • Application count
  • Database endpoint count
  • Teams

Then, most DevOps teams will include at least some of the four DORA metrics, and perhaps its fifth reliability-focused metric, to serve as pipeline (workflow) analytics:

  1. Deployment Frequency
    How often an organization successfully releases to production
  2. Lead Time for Changes
    The amount of time it takes a commit to get into production
  3. Change Failure Rate
    The percentage of deployments causing a failure in production
  4. Time to Restore Service
    How long it takes an organization to recover from a failure in production
  5. Reliability
    The ability of an organization to keep its services operational and accessible to users

The first priority when a dashboard is created is to take baseline measurements of these KPIs, so they can be improved upon and measured continuously. 
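As an illustration of how these KPIs fall out of workflow data, the first three DORA metrics can be computed from a handful of deployment log records. This is a sketch with made-up data, not a production implementation:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (committed_at, deployed_at, succeeded)
deployments = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 12), True),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 2, 16), False),
    (datetime(2024, 5, 3, 8), datetime(2024, 5, 3, 11), True),
    (datetime(2024, 5, 4, 9), datetime(2024, 5, 4, 13), True),
]

# Deployment frequency: deployments per day over the observed window.
days = (deployments[-1][1].date() - deployments[0][1].date()).days + 1
deployment_frequency = len(deployments) / days

# Lead time for changes: mean commit-to-production time.
lead_times = [deployed - committed for committed, deployed, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that failed in production.
change_failure_rate = sum(1 for *_, ok in deployments if not ok) / len(deployments)

print(deployment_frequency)  # 1.0 per day
print(mean_lead_time)        # 4:00:00
print(change_failure_rate)   # 0.25
```

Time to restore service would be computed the same way from incident-open and incident-close timestamps, once those events are captured in the logs.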

Types of questions answered by observability insights

DORA metrics often serve as the basis of database observability and can be expanded or refined to suit the needs of the DevOps team. The team can ask itself, “Which metrics are critical to ensure the health and improvement of database change management and deployment automation?” Or, they might ask specific questions about their workflows and look to observability insights to solve them. Examples of these kinds of questions include:

  • Why does it cost more to deploy database A than database B?
  • Why does database team C deploy 5 times faster than teams A and B? (And how can we level up A and B?)
  • How do changes in the database schema affect application performance?
  • What is the frequency and impact of database rollbacks on production stability?
  • How does the change failure rate for database updates compare across different environments or teams?
  • Which database(s) are drifting from the norm or from the expected state?
  • Which users or system accounts have been granted access to the database?

Metrics might also be filtered by development stage, team, database type, application connections, and other elements. Essentially, this depends on what kind of categorical and identifying tags the DevOps team can add to its structured logs. Example database observability metrics beyond the standard DORA set could include:

  • Regulatory compliance-related errors
  • Rollbacks
  • Success/failure rate
  • Schema change count
  • Critical and high policy violation count
  • Backup and recovery times
  • Data growth rate
  • Audit log volume
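Here is a minimal sketch of the tag-based filtering described above, assuming hypothetical structured log entries tagged with team, database type, and pipeline stage (all field names and values are illustrative):

```python
# Hypothetical structured log entries with categorical tags.
logs = [
    {"event": "deploy",   "team": "payments",  "db_type": "postgres", "stage": "prod", "success": True},
    {"event": "deploy",   "team": "payments",  "db_type": "postgres", "stage": "dev",  "success": False},
    {"event": "rollback", "team": "reporting", "db_type": "oracle",   "stage": "prod", "success": True},
    {"event": "deploy",   "team": "reporting", "db_type": "oracle",   "stage": "prod", "success": True},
]

def metric(entries, **tags):
    """Count log entries matching every supplied tag filter."""
    return sum(1 for e in entries if all(e.get(k) == v for k, v in tags.items()))

print(metric(logs, event="deploy", stage="prod"))     # 2
print(metric(logs, event="rollback"))                 # 1
print(metric(logs, team="payments", success=False))   # 1
```

The richer the tags a team attaches to its logs, the more ways the same event stream can be sliced into metrics.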

Observing database drift

Database drift is less a metric and more of a detailed analysis. Is the database in the state it should be, or has the current state somehow veered from what it’s supposed to be? In complex data store environments with numerous teams and business units crossing paths, the possibility of unauthorized or unintentional database change persists. 

Drift can be a symptom of process breakdowns, breaches, technical issues, and more. Observability helps teams detect, understand, rectify, and prevent drift. 
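A simplified illustration of drift detection: diff the expected schema (as defined by the managed change process) against the actual schema observed in the database. Tables are mapped to column sets here, and all table and column names are made up:

```python
# Expected state, as defined by the managed change process.
expected = {
    "users":  {"id", "email", "created_at"},
    "orders": {"id", "user_id", "total"},
}

# Actual state observed in the database.
actual = {
    "users":       {"id", "email", "created_at", "debug_flag"},  # unplanned column
    "orders":      {"id", "user_id", "total"},
    "temp_export": {"id"},                                       # unplanned table
}

def detect_drift(expected, actual):
    """Report tables whose actual columns differ from the expected state."""
    drift = {}
    for table in set(expected) | set(actual):
        added = actual.get(table, set()) - expected.get(table, set())
        missing = expected.get(table, set()) - actual.get(table, set())
        if added or missing:
            drift[table] = {"unexpected": sorted(added), "missing": sorted(missing)}
    return drift

print(sorted(detect_drift(expected, actual)))  # ['temp_export', 'users']
```

Real drift detection compares far more than columns (indexes, constraints, grants), but the principle is the same: a structural diff between the declared state and the observed state.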

Observability metrics for database security (DevSecOps)

Database DevSecOps observability measurements can bring database security and compliance into focus, raising the bar on data security. Depending on technical and logging capability, a database observability dashboard may have security-related metrics such as:

  • Vulnerability discovery time
  • Time to remediate vulnerabilities
  • Open security issue count
  • Frequency of security scan
  • Incident response time
  • Mean time to recovery for security incidents
  • Failed security builds
  • High-risk vulnerabilities outstanding
  • Automated security test coverage (%)

Database observability can improve security and threat detection, so DevSecOps teams can quickly identify and respond to potential security breaches.

Observability metrics for database performance

The above metrics are workflow-focused. Yet database observability extends to the performance of the database itself, too. Performance, utilization, and user-experience metrics could include:

  • Query response time
  • Database connection times
  • CPU utilization
  • Memory usage
  • Disk I/O throughput
  • Query errors
  • Connection errors
  • Transaction throughput
  • End-user latency

These performance metrics can be combined with DevOps metrics above to provide a complete picture of database CI/CD and complete the feedback loop necessary for constant improvement and innovation. 
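For example, a deployment event can be correlated with the performance metrics around it. The toy sketch below (with invented timestamps and values) compares average query response time before and after a schema change, which is one way to answer the earlier question about how schema changes affect application performance:

```python
from datetime import datetime

# Illustrative data: a deployment timestamp and query response-time samples.
deploy_time = datetime(2024, 5, 2, 12, 0)
response_times_ms = [
    (datetime(2024, 5, 2, 11, 0), 42),
    (datetime(2024, 5, 2, 11, 30), 40),
    (datetime(2024, 5, 2, 12, 30), 95),  # slower after the schema change
    (datetime(2024, 5, 2, 13, 0), 97),
]

before = [ms for t, ms in response_times_ms if t < deploy_time]
after = [ms for t, ms in response_times_ms if t >= deploy_time]

print(sum(before) / len(before))  # 41.0 ms average before the deployment
print(sum(after) / len(after))    # 96.0 ms average after the deployment
```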

Benefits of database observability

Database observability is critical to ensuring smooth operations, but these insights can be even more valuable in optimizing technology and workflows, as well as maximizing the business value of data and infrastructure. DevOps teams find that enabling and leveraging database observability leads to:

  1. Improvements in efficiency for more productive pipelines
  2. Trend analysis to uncover broader issues
  3. Proactive mitigation and forecasting capabilities
  4. Stronger, yet simplified database compliance
  5. More robust data integrity throughout the pipeline

Explore these benefits in detail: 5 benefits of database observability that help you unlock the future, faster

Challenges to database observability

The biggest challenge to database observability is actually having the capability to tap into database change logs and transform that data into actionable insights. And without people and processes in place to analyze and act on observability insights, finding the value in observability is even more challenging. 

Capturing the right data

What’s happening throughout your database change management pipeline? That, of course, is the question at the core of observability. But if workflows in the database change process are manual or outside of the CI/CD pipeline, how are they being documented and measured?

Asking teams and tools to put out the kind of data that’s needed to fuel database observability is typically the first and most cumbersome challenge. 

Integrating observability into complex database workflows

The more complex an organization’s database change process, the more difficult it can be to bring observability into the mix. With multiple types of databases, multitudes of individual environments, and even more corresponding application connections – spread across teams and business units – the change process itself can be suboptimal and slow. That doesn’t bode well for observability or quality insights. 

Enabling end-to-end pipeline observability means uniting all of these teams and technologies into a streamlined, automated workflow that captures the right data from the right tools and teams.

Managing large volumes of observability data

Whichever tools and processes provide the data that fuels database observability, that data has to live somewhere. In enterprise settings, the amount and frequency of database change happening can lead to a massive amount of observability data that needs to be properly managed before it becomes unwieldy or its integrity gets put at risk. 

Storage and analysis challenges can hinder progress on observability initiatives, then lead to inefficient programs later on. Establishing efficient observability data management strategies is key to pulling this off without overwhelming resources. 

Protecting database security and compliance

Just like the rest of the application and data pipelines, database observability practices must adhere to regulations and compliance standards. Collecting and analyzing workflow data introduces another element to consider as part of broader data security strategies. 

Logs and metrics can also expose internal processes, which could put an organization’s competitive advantage at risk. While observability brings benefits to security and innovation, it must also be handled carefully to protect those same elements. 

Leveling up DevOps skills and culture 

Who’s going to manage, monitor, and draw insights from database observability dashboards? How will these learnings be applied, measured, expanded, and continuously optimized? Achieving database observability means bringing DevOps all the way through to the database, specifically change management. That can challenge the skills and culture within teams, requiring training and persuasion to shore up alignment and readiness. 

Fostering advanced database DevOps skill sets and empowering pervasive, progressive DevOps culture is crucial to uniting development, operations, database, and other teams to prioritize observability. Without communicating the business value of database DevOps and observability to stakeholders up and down the data pipeline, getting the cultural buy-in can prove tricky.  

Resource allocation and cost efficiency

If a database observability initiative requires large investments in tools, teams, training, and integrations to make it a reality, it can be a hard sell – or at least a slow one. Another way to look at this challenge is to consider extracting database observability from the tools already in teams’ arsenal. That circles back to the challenge of “capturing the right data.” But, if observability can come from the data collected throughout the existing pipeline, concerns about devoting resources are diminished. 

While challenges can stress any team’s push for database observability, these hurdles are hardly insurmountable. In fact, advancements in database change management automation can put observability at a team’s fingertips while actually simplifying and accelerating their workflows. 

Enable observability with database change automation

Liquibase brings DevOps to the database by automating schema change management. Instead of manual reviews and deployments, Liquibase enables configurable Quality Checks and other commands to enforce your organization’s data store policies and deploy changes to database targets. As these checks and deployments are logged, those logs can be transformed into actionable insights. 

Liquibase’s Structured Logging gives it the standout ability to make database change and state data easily available and machine-readable – and that’s how automating database change management enables database observability. 

Structured Logging

Whenever someone wants to push a change to their database, they create a Changeset in Liquibase. Multiple Changesets are packaged in Changelogs. The Liquibase user can then run a Quality Check against that Changelog to validate it and then choose to deploy it. 

That Changelog contains information about this specific change. As these Changelogs stack up with deployments, they represent workflow data over time – which can be analyzed to identify trends and deduce insights. 


In Liquibase, the user must turn on Structured Logging – which turns Liquibase’s standard, human-readable logs into formatted, machine-readable logs (see the before and after image above) – by setting the log format property to JSON. These Structured Logs can then be ingested into an observability platform, such as ElasticSearch, AWS CloudWatch, Splunk, or Datadog. 
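As a rough illustration, JSON-formatted structured logs are typically emitted one object per line and parsed before ingestion. The field names below are hypothetical, not Liquibase’s actual log schema:

```python
import json

# Two hypothetical JSON log lines, one object per line.
raw = """\
{"timestamp": "2024-05-01T12:00:00Z", "level": "INFO", "message": "Update command completed", "deploymentOutcome": "success"}
{"timestamp": "2024-05-02T09:30:00Z", "level": "ERROR", "message": "Update command failed", "deploymentOutcome": "fail"}
"""

# Parse each line into a record an observability platform could ingest.
events = [json.loads(line) for line in raw.splitlines()]
outcomes = [e["deploymentOutcome"] for e in events]
print(outcomes)  # ['success', 'fail']
```

Machine-readable structure is the whole point: once each log line is a parseable object, any downstream platform can index, filter, and chart it without brittle text scraping.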

The screenshot below shows a sample database deployment dashboard built with ElasticSearch.

This deployment dashboard is fed by Liquibase’s observability data/logs, combined with the organization’s custom tagging to identify applications, targets, and teams. It showcases the state of the database CI/CD pipeline during the set timeframe with:

  • Deployment frequency total and by stage
  • Application count total and by category
  • Database endpoint count total and by database type
  • Team segments

From these observability inputs, this dashboard can represent DORA and other DevOps metrics to inform optimization insights. In this example, these include:

  • Deployment frequency 
  • Change failure rate (success/failure rate)
  • Lead time to change

The metrics included on any team’s database observability dashboard would align with their workflows and DevOps goals. 

Drift and Operation Reports

Even without a dashboard connected, one of Liquibase’s out-of-the-box observability capabilities is its Drift Reports feature. Drift Reports are generated by comparing the current state of a database schema against a predefined baseline or the expected state defined in the Liquibase Changelogs. This feature enables teams to detect "drift" — unauthorized, unexpected, or unplanned changes to the database schema that have occurred outside of the managed process.

This report highlights discrepancies between the expected and actual schema configurations, lending database observability insights into why, when, and how the drift occurred. 

Drift reports also:

  • Identify unauthorized changes, such as those due to errors, improper access, workflow breakdowns, or security breaches
  • Facilitate easier compliance and faster auditing 
  • Improve security by providing an automated method of catching suspicious activity
  • Allow teams to get a pulse on the success of their deployments and the overall health of the data infrastructure

Liquibase integrates with various CI/CD tools, allowing Drift Reports to be generated automatically as part of the deployment pipeline. 

Similarly, the other Liquibase Operation Reports enable teams to automatically understand and share the latest updates to their databases. They include a summary of state plus a deep look at execution details for a concise report of database changes without the need to analyze logs. For example, Update Reports visualize the latest changes made to your database and Checks Run Reports show which Quality Checks have run, including detailed results. 

Powerful database observability capabilities

In addition to the Drift and Operation Reports enabled by Liquibase’s Structured Logging, database observability enables teams to optimize and streamline compliance, governance, and auditing. By automatically collecting log data throughout the automated pipeline, Liquibase makes it easier to get the visibility database DevOps teams need.

Liquibase also helps teams improve reliability by accelerating root cause analysis and reducing time to remediate errors. More broadly, it gives the insights needed to optimize database operations from end to end, maximizing the value of automation and continuously improving workflows. 

Teams can access Liquibase database observability data through observability and analysis platforms they’re already using, like ElasticSearch, CloudWatch, Splunk, and others. 

For more database observability wisdom, watch the on-demand webinar: Bringing Observability to Database DevSecOps 

Get started on your database observability initiative by learning more about How Liquibase Works or requesting a demo.