AI Agents as Contractors: What the Offshoring Revolution Teaches Us About AI Governance
December 4, 2025

Everyone's racing to deploy AI agents across their organizations, granting these autonomous systems unprecedented access to corporate databases and critical infrastructure. If this feels familiar, it should. We've been here before.
The offshoring revolution of the 1990s and 2000s created similar governance challenges that caught most organizations unprepared. The companies that succeeded weren't the ones who moved fastest. They were the ones who implemented proper governance frameworks from the start.
AI agents should be treated like external contractors brought into your organization. They possess basic knowledge and task completion capabilities, but they require explicit guidance, carefully defined boundaries, and ongoing oversight. Just as no responsible organization would grant a contractor unrestricted access to all systems and data on day one, AI agents need governance frameworks that enable appropriate autonomy while preventing catastrophic mistakes.
What AI agents need from governance frameworks:
- Restricted access that matches their role and responsibilities
- Clear policies defining what operations they can and cannot perform
- Approval gates for high-risk activities like production database changes
- Audit trails capturing every action for accountability and compliance
- Rollback capabilities when mistakes inevitably occur
The Offshoring Parallel: A Cautionary Tale
When companies began moving operations overseas in the late 1990s and early 2000s, the promise was irresistible. Lower costs, access to global talent, and the ability to scale operations quickly. Many organizations rushed in, eager to capture these benefits before their competitors.
What they encountered were unexpected governance challenges that turned potential advantages into expensive lessons. Organizations had to develop entirely new processes for managing distributed operations. Access control across geographically separated teams became complex. Quality assurance with remote workers required new frameworks. Audit trails for compliance across different jurisdictions demanded careful planning.
Critical governance gaps organizations discovered too late:
- Access control systems weren't designed for distributed teams across time zones
- Quality standards that worked locally failed when applied to remote operations
- Audit requirements multiplied when operations crossed jurisdictional boundaries
- Security protocols designed for on-premises operations couldn't protect data in transit
- Standardized workflows broke down when applied to varied operational contexts
- Oversight mechanisms provided insufficient visibility into remote activities
The companies that rushed into offshoring without proper governance frameworks faced serious consequences. In 2008, Heartland Payment Systems experienced a massive data breach that compromised as many as 100 million debit and credit cards. The breach occurred through a SQL injection attack that exploited vulnerabilities in its payment processing systems. Heartland lost its PCI DSS compliance status for four months, shed hundreds of customers, and absorbed total losses exceeding $200 million. The company's stock price fell 50% within days of announcing the breach and eventually sank more than 77% in the following months.
What makes this particularly relevant to AI governance is how the breach happened and how long it went undetected. The compromise began with a SQL injection attack on the company's website. Heartland discovered the intrusion almost immediately and believed it had eradicated the malware. Roughly six months later, in mid-May 2008, the malware made the leap from the corporate network to the payment processing network.
The attackers spent almost six months moving through Heartland's systems, installing sniffer software that captured payment card data in transit. Perhaps most striking: two weeks before the payment system was compromised, Heartland had been certified as PCI compliant by its Qualified Security Assessor.
Key failures in Heartland's governance approach:
- Compliance frameworks provided a false sense of security without comprehensive controls
- Insufficient monitoring allowed attackers to move laterally across systems undetected
- No effective drift detection to identify unauthorized changes to the environment
- Weak segmentation between corporate and payment processing networks
- Inadequate visibility into what was actually happening across distributed infrastructure
- Response protocols failed to verify complete malware eradication
The company believed it had adequate controls in place. It had passed compliance audits. But the governance framework wasn't comprehensive enough to prevent unauthorized access from spreading across interconnected systems.
The organizations that succeeded with offshoring took a different approach. They implemented robust governance early, treating offshore teams as valuable contributors who required clear guidelines, appropriate access controls, and continuous oversight. These companies invested in proper authentication systems, established change management protocols, created detailed audit trails, and maintained visibility into all operations across their distributed infrastructure.
Common Governance Failures and Their Consequences
The research on outsourcing and offshoring risks reveals patterns that apply directly to AI governance. Risks associated with outsourcing typically fall into four general categories: loss of control, loss of innovation, loss of organizational trust, and higher-than-expected transaction costs. Each of these manifests when AI agents operate without proper governance.
Loss of control occurs when tasks previously performed by company personnel move to external parties over whom the organization has limited oversight. When contracts specify the work inadequately or incorrectly, outsourcers may be tempted to behave opportunistically, using subcontractors or charging unforeseen price increases.
How loss of control manifests with AI agents:
- Agents granted broad database access make schema changes without proper review
- Modifications to critical data occur outside established change management processes
- Destructive operations execute because boundary definitions were never established
- Multiple agents make conflicting changes without coordination mechanisms
- Dependencies on AI-generated code create vendor lock-in similar to outsourcing relationships
With AI agents, loss of control looks different but feels the same. An agent granted broad database access can make schema changes, modify critical data, or execute destructive operations without proper review simply because the governance framework never defined appropriate boundaries.
Compliance risks in vendor outsourcing revolve around the potential non-adherence to legal and regulatory requirements. Failure to comply with industry standards and governmental regulations can result in legal repercussions and reputational damage. AI agents accessing databases containing personally identifiable information, payment card data, or protected health information create immediate compliance exposure if governance controls don't enforce data protection standards.
The hidden costs of inadequate governance compound over time. In 2024, the Financial Conduct Authority fined three banks £42 million for failures with IT services managed by third parties, leaving customers unable to access their bank accounts. These weren't breaches in the traditional sense. They were failures of governance and oversight that cascaded into operational disruptions affecting millions of customers.
Technical Implementation for AI Governance
The specific technical requirements for governing AI agents at the database level mirror what successful offshoring operations learned to implement. Context-aware access controls adapt based on what the agent is attempting to do. An agent might have read access to certain tables for answering customer questions but require explicit approval to modify schemas. Another agent could propose database changes but lack permission to deploy them directly to production without human review.
Policy enforcement must happen before execution, not after damage occurs. This means blocking destructive statements in production before they run, validating that AI-generated queries comply with data access policies before they touch live systems, and requiring rollback strategies for any schema changes regardless of whether they originate from human developers or autonomous agents.
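To make this concrete, here is a minimal sketch of what a pre-execution gate can look like. The rule set, function name, and environment handling are illustrative assumptions, not Liquibase Secure's implementation; the point is simply that validation runs before any statement touches a live system, and the same hook is where naming conventions, PII rules, and role-specific permissions would be evaluated.
```python
import re

# Illustrative pre-execution gate (not Liquibase Secure's implementation):
# destructive or non-compliant SQL is rejected before it reaches production.
FORBIDDEN_IN_PRODUCTION = [
    r"^\s*DROP\s+(TABLE|SCHEMA|DATABASE)\b",    # destructive DDL
    r"^\s*TRUNCATE\b",                          # bulk data loss
    r"^\s*DELETE\s+FROM\s+\w+\s*;?\s*$",        # DELETE with no WHERE clause
    r"^\s*GRANT\s+ALL\b",                       # privilege escalation
]

def validate_change(sql: str, environment: str) -> tuple[bool, str]:
    """Return (allowed, reason); block risky statements before execution."""
    if environment == "production":
        for pattern in FORBIDDEN_IN_PRODUCTION:
            if re.search(pattern, sql, flags=re.IGNORECASE | re.MULTILINE):
                return False, f"blocked by policy: {pattern}"
    return True, "passed policy checks"

print(validate_change("DROP TABLE customers;", "production"))
# (False, 'blocked by policy: ...')
```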
Essential technical capabilities for database-level AI governance:
- Context-aware access controls that adapt permissions based on attempted operations
- Pre-execution policy validation that blocks non-compliant changes before they run
- Tamper-evident audit logs capturing who, what, when, and which SQL statements executed
- Integration with existing SIEM tools for anomaly detection and alerting
- Surgical rollback capabilities that reverse specific changes without full restores
- Scheduled drift detection comparing actual state against approved baselines
Observability transforms from optional to mandatory. Every database interaction from an AI system should generate a structured, tamper-evident log capturing what happened, why it happened, and which agent or system initiated the action. These audit trails need to integrate with existing security information and event management tools so security teams can identify anomalies in real time. When something goes wrong, and eventually it will, precise rollback capabilities can surgically reverse problematic changes without affecting everything else that happened in the same timeframe.
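As a rough illustration of what "tamper-evident" can mean in practice, the sketch below hash-chains each log entry to the previous one, so any retroactive edit breaks the chain and becomes detectable. The field names and the `AuditLog` class are assumptions made for the example; a real deployment would stream these records to a SIEM rather than hold them in memory.
```python
import datetime
import hashlib
import json

# Illustrative hash-chained audit log: each entry commits to the previous one,
# so a retroactive edit breaks the chain and is detectable.
class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, agent: str, action: str, sql: str) -> dict:
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent,      # which agent or system initiated the action
            "action": action,    # what happened
            "sql": sql,          # the exact statement executed
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry  # ship this to the SIEM as structured JSON

log = AuditLog()
log.record("reporting-agent", "schema_change_proposed",
           "ALTER TABLE orders ADD COLUMN region TEXT")
```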
Drift detection must run continuously. AI agents excel at making changes that seem reasonable in isolation but introduce inconsistencies when viewed across the entire system. Automated drift checks compare production databases against the last approved baseline and flag unauthorized differences before they cascade into larger problems. This provides an opportunity to reconcile discrepancies when they're still manageable instead of discovering them during a compliance audit or major incident.
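At its simplest, a drift check is a diff between the live schema and the approved baseline. The snapshot format below is an assumption made for illustration; in practice the comparison would come from tooling such as Liquibase's diff capabilities rather than hand-built dictionaries.
```python
# Illustrative drift check: compare the live schema snapshot against the last
# approved baseline and report anything that changed outside the pipeline.
def detect_drift(baseline: dict[str, list[str]],
                 live: dict[str, list[str]]) -> list[str]:
    findings = []
    for table, columns in live.items():
        if table not in baseline:
            findings.append(f"unexpected table: {table}")
        elif set(columns) != set(baseline[table]):
            changed = sorted(set(columns) ^ set(baseline[table]))
            findings.append(f"schema drift in {table}: {changed}")
    for table in baseline:
        if table not in live:
            findings.append(f"missing table: {table}")
    return findings

baseline = {"orders": ["id", "customer_id", "total"]}
live = {"orders": ["id", "customer_id", "total", "discount"],
        "tmp_agent_scratch": ["id"]}
for finding in detect_drift(baseline, live):
    print(finding)  # alert, and block downstream deployments until reconciled
```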
The architecture must support rapid iteration while maintaining control. AI agents operate at speeds that make manual review of every action impractical. The governance framework needs intelligence built into the approval gates. Routine operations that comply with all policies can proceed automatically. Operations that touch sensitive data, modify critical schemas, or violate established patterns trigger human review before execution. This approach maintains velocity for safe operations while adding appropriate friction for risky ones.
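The gate logic itself can stay simple. The sketch below assumes changes arrive as structured proposals and uses made-up table and operation lists to decide which path a change takes; the real classification rules would come from your own policies.
```python
# Illustrative risk-based routing: routine, policy-compliant operations proceed
# automatically, while sensitive or destructive ones wait for human review.
SENSITIVE_TABLES = {"payments", "patients", "users_pii"}   # example values
HIGH_RISK_OPERATIONS = {"DROP", "TRUNCATE", "ALTER"}

def route_change(operation: str, table: str, environment: str) -> str:
    if environment != "production":
        return "auto-approve"
    if operation.upper() in HIGH_RISK_OPERATIONS or table in SENSITIVE_TABLES:
        return "human-review"  # add friction only where the risk warrants it
    return "auto-approve"

print(route_change("INSERT", "audit_events", "production"))  # auto-approve
print(route_change("ALTER", "payments", "production"))       # human-review
```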
Why Proactive Governance Matters More for AI
The offshoring revolution unfolded over years, giving organizations time to recognize problems and implement corrections. AI adoption is happening faster. Agents can make thousands of database interactions per hour, and the consequences of ungoverned access compound at machine speed.
A single misconfigured agent with excessive privileges could execute more destructive operations in minutes than a team of offshore contractors could accomplish in months.
Why AI governance requires immediate action:
- AI agents operate at machine speed, making thousands of decisions per hour
- Consequences of mistakes compound faster than human oversight can catch them
- Agents operate with less supervision by design, reducing opportunities for intervention
- Regulatory frameworks now impose specific requirements for AI system accountability
- Breach remediation costs and reputational damage often exceed the cost of governance
- Retrofitting controls after incidents costs significantly more than proactive implementation
The stakes are also higher because AI agents operate with less human supervision by design. That's the entire point of autonomous systems. An offshore team member who encounters an unfamiliar situation typically asks for clarification. An AI agent encountering the same situation might make its best guess and proceed, potentially making dozens of related decisions based on that initial assumption before anyone realizes something went wrong.
Regulatory scrutiny creates additional pressure. Compliance frameworks like the EU AI Act, NIST's AI Risk Management Framework, SOX, HIPAA, GDPR, and PCI DSS all impose specific requirements around data governance, audit trails, and accountability. Organizations deploying AI agents without database-level governance controls will struggle to demonstrate compliance when auditors start asking detailed questions about who or what accessed sensitive data, when those accesses occurred, and whether appropriate controls were enforced.
The good news is that we don't need to learn these lessons the hard way again. The offshoring revolution already taught us what happens when organizations prioritize speed over governance. Some companies lost millions in breach remediation costs. Others faced regulatory penalties. Many damaged customer relationships that took years to rebuild.
The pattern is clear: implementing governance proactively costs less and delivers better outcomes than attempting to retrofit controls after incidents occur.
Liquibase Secure and AI Governance
Database governance for AI workloads requires purpose-built tooling designed for both the velocity of modern development and the unique risks that autonomous systems introduce. Liquibase Secure makes database change fast, governed, and recoverable even when AI is in the loop.
Policy checks run automatically on every changeset before deployment, blocking forbidden operations and enforcing naming conventions, data validation rules, and security standards consistently across the entire database estate. When an AI agent attempts a destructive change, Liquibase Secure stops it before it reaches production. When a model proposes a schema modification that violates PII policies, the system catches it at build time, not during the next audit.
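In a CI/CD pipeline, that gate can be as simple as refusing to deploy when checks fail. The wrapper below is a sketch of one possible pipeline step, not an official integration; it assumes database connection settings live in liquibase.properties, and the exact CLI flags may differ across Liquibase versions.
```python
import subprocess
import sys

# Sketch of a CI gate (one possible pipeline step, not an official integration):
# run policy checks on the proposed changelog, and only deploy if they pass.
# Database connection settings are assumed to come from liquibase.properties.
CHANGELOG = "changelog.xml"

def run(cmd: list[str]) -> int:
    print("+", " ".join(cmd))
    return subprocess.call(cmd)

if run(["liquibase", "checks", "run", f"--changelog-file={CHANGELOG}"]) != 0:
    sys.exit("Policy checks failed: change blocked before production")

run(["liquibase", "update", f"--changelog-file={CHANGELOG}"])
```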
How Liquibase Secure enables safe AI agent operations:
- Automated policy enforcement blocks destructive operations before they reach production
- Integration with CI/CD pipelines and access control systems lets agents propose changes while requiring appropriate human approvals before deployment
- Scheduled drift detection catches unauthorized modifications across all environments
- Tamper-evident audit trails provide compliance evidence for SOX, HIPAA, GDPR, and AI regulations
- Targeted rollback capabilities reverse specific changes without full environment restores
- Support for 60+ database platforms ensures consistent governance across the entire data estate
Role-based access control integrates with CI/CD pipelines alongside Liquibase Secure so AI agents can propose changes but cannot deploy them without appropriate approvals. Production database alterations are treated like controlled substances, requiring explicit sign-off from the right people with the right context. This doesn't slow down legitimate development work because the approval gates are automated and intelligent, but it does prevent ungoverned changes from sneaking through when no one is looking.
Drift detection runs on a regular schedule across all environments, comparing what actually exists in databases against what should be there. When Liquibase Secure finds unauthorized changes from an overeager AI agent, a manual hotfix, or a configuration error, it alerts teams immediately and blocks downstream deployments until the inconsistency gets resolved. This scheduled visibility means catching problems while they're still manageable instead of discovering them after they've propagated across environments.
The audit trail captures critical details about every change. Who made the change, what exactly changed, when it happened, and the specific SQL that was executed. That evidence streams directly into existing monitoring tools and provides the documentation needed for SOX compliance, HIPAA audits, GDPR verification, and emerging AI regulations. When auditors ask how you govern AI interactions with sensitive data, you have detailed, tamper-evident logs showing your controls are working.
Perhaps most critically, Liquibase Secure provides targeted rollback capabilities that can surgically reverse problematic changes without affecting everything else. When an AI agent makes a mistake, and eventually it will, you can freeze further writes, identify exactly what changed, roll back only the destructive operations, and then re-promote corrected changes through the normal approval process. This turns what could be a multi-hour outage with massive data loss into a contained incident resolved in minutes.
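The containment sequence itself might look like the sketch below. The workflow is an assumption about how a team could respond, not a prescribed Liquibase procedure; the commented-out helpers are hypothetical, and connection and changelog settings are again assumed to live in liquibase.properties.
```python
import subprocess

# Assumed incident-containment workflow, not a prescribed Liquibase procedure.
def lb(*args: str) -> None:
    subprocess.check_call(["liquibase", *args])

# 1. Freeze further writes from the agent (mechanism is environment-specific).
# pause_agent("reporting-agent")                      # hypothetical helper

# 2. Identify exactly what was deployed and when.
lb("history")

# 3. Surgically reverse only the most recent, problematic changeset.
lb("rollback-count", "--count=1")

# 4. Re-promote the corrected change through the normal approval process.
# open_change_request("fix/reporting-agent-rollback") # hypothetical helper
```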
The Path Forward
Organizations that learn from the offshoring revolution and implement governance proactively will reap AI's benefits. Those that grant AI agents unrestricted database access and hope for the best will face the same painful lessons that early offshoring adopters experienced, just faster and with higher stakes.
The question isn't whether to implement AI governance at the database layer. The question is whether you'll implement it before the first major incident or after. History suggests that proactive governance costs less, delivers better outcomes, and avoids the reputational damage that comes from explaining to customers, regulators, and shareholders how an ungoverned AI agent caused a breach or outage.
Critical steps for implementing AI governance:
- Audit current AI agent access to database systems and identify excessive privileges
- Implement context-aware access controls that match agent capabilities to actual needs
- Deploy automated policy enforcement that validates changes before execution
- Establish drift detection to catch unauthorized modifications in real time
- Create comprehensive audit trails that capture all AI database interactions
- Build rollback capabilities that can reverse specific changes without full restores
Treat AI agents like the contractors they are. Give them the access they need to be productive, but not a bit more. Implement approval gates that maintain velocity while preventing catastrophic mistakes. Create audit trails that demonstrate compliance and provide accountability. Build rollback capabilities that turn potential disasters into minor incidents.
See Liquibase Secure in Action
The technology to govern AI agents at the database layer exists today. The only question is whether you'll deploy it before or after learning why you needed it.
See how Liquibase Secure provides the database governance controls that make AI adoption safe, compliant, and sustainable at enterprise scale. Our team can show you how to prevent AI-generated database changes from breaking production, enforce policies automatically, and maintain complete audit trails across all your database platforms.
Request a demo to see how Liquibase Secure protects your databases from ungoverned AI agent access.
References
- Proofpoint. "Lessons from the 2008 Heartland Data Breach." April 21, 2023. https://www.proofpoint.com/us/blog/insider-threat-management/throwback-thursday-lessons-learned-2008-heartland-breach
- Twingate. "Heartland Payment Systems Data Breach: What & How It Happened?" https://www.twingate.com/blog/tips/Heartland%20Payment%20Systems-data-breach
- SmarterMSP. "The 3 worst data breaches of all time (and the lessons learned)." July 13, 2020. https://smartermsp.com/3-worst-data-breaches-time-learned/
- Barclay Simpson. "The benefits and risks of outsourcing." November 4, 2023. https://www.barclaysimpson.com/the-benefits-and-risks-of-outsourcing/
- Helpware. "10 Most Critical Risks of Outsourcing in 2025." https://helpware.com/blog/top-risks-of-outsourcing
- Pirani Risk. "Risks in the outsourcing of services." February 17, 2025. https://www.piranirisk.com/blog/risks-in-the-outsourcing-of-services
- Saylor Academy. "Risks Associated With Outsourcing." https://saylordotorg.github.io/text_fundamentals-of-global-strategy/s10-04-risks-associated-with-outsourc.html
- Corporate Compliance Insights. "Greatest Compliance Risks Surrounding Third-Party Outsourcing." January 29, 2014. https://www.corporatecomplianceinsights.com/greatest-compliance-risks-surrounding-third-party-outsourcing/