CI/CD pipelines are the backbone of modern software development, but they often have a hidden bottleneck: the database. Flaky tests, slow pipeline runs, and the dreaded "it worked on my machine" can often be traced back to a shared, overloaded staging database. State pollution from one test run bleeds into the next, developers wait in line for their turn, and schema migrations become a high-stakes gamble.
What if you could give every single test run, every pull request, a brand new, perfectly clean, and completely isolated database? This is the power of ephemeral databases: temporary databases created on-demand for a specific task and destroyed immediately after.
This pattern isn't new, but implementing it has historically been a chore involving complex shell scripts, messy Docker Compose files, and juggling cloud credentials.
Enter databases.do. We're going to show you how to use the .do Agentic Workflow Platform to implement a robust, fast, and cost-effective ephemeral database strategy in your CI/CD pipeline with just a few lines of code.
Before we dive into the code, let's appreciate the "why." Shifting from a shared staging database to an ephemeral model is a paradigm shift with massive benefits: every run gets a perfectly clean database with zero state pollution from previous runs, parallel pipelines never queue or collide over a shared environment, "it worked on my machine" failures become far easier to reproduce, and schema migrations are exercised against a fresh instance on every run instead of being a high-stakes gamble on staging.
Traditionally, you might script this with the AWS CLI or gcloud. With databases.do, you're not just running commands; you're interacting with a high-level agent that understands the concept of a "fleet" of databases. You treat your entire collection of data infrastructure as a single, programmable service.
Here's how to build a CI/CD pipeline that provisions, uses, and destroys a database on the fly.
First, ensure you have the SDK in your project. In your CI testing script (e.g., run-e2e-tests.js), you'll initialize the .do client and access the databases.do agent. This agent is your control plane for all database operations.
import { Do } from '@do-sdk/core';
// Initialize the .do client (it securely uses credentials from your environment)
const ado = new Do();
// Access the agent designed to manage database fleets
const myDatabases = ado.use('databases.do');
console.log('Successfully connected to the databases.do agent.');
Inside your CI script, the first step is to create a new database instance. We'll use a CI environment variable like the commit SHA to ensure a unique name, which is crucial for parallel runs.
// Inside an async function for your test runner
// Use a unique identifier from your CI environment
const uniqueId = process.env.CI_COMMIT_SHORT_SHA || `local-test-${Date.now()}`;
const dbName = `test-db-${uniqueId}`;
console.log(`Provisioning new database: ${dbName}`);
const newDb = await myDatabases.create({
  type: 'postgres',
  version: '15',
  provider: 'aws',
  plan: 'micro', // Use a small, cost-effective plan for testing
  name: dbName,
});
console.log(`Database ${newDb.name} is ready!`);
The create() call abstracts away all the complexity of dealing with a specific cloud provider. You just declare what you want, and the agent handles the how. The returned newDb object contains everything you need, including the all-important connection string.
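For reference, here is roughly the shape of that object as this tutorial uses it. The fields below are inferred from the examples (newDb.name and newDb.credentials.connectionString) rather than taken from an official type definition, so treat this as a sketch:
// Sketch only: fields inferred from how newDb is used in this tutorial,
// not an exhaustive or official type definition.
interface ProvisionedDatabase {
  name: string; // e.g. 'test-db-<commit-sha>'
  credentials: {
    connectionString: string; // e.g. 'postgres://user:pass@host:5432/test-db-<commit-sha>'
  };
}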
Now that your database exists, you can use the connection string to prepare it for your tests. This is where you'd integrate your existing ORM or migration tool (like Prisma, TypeORM, Knex, etc.).
// The `create` method returns the connection string directly
const { connectionString } = newDb.credentials;
// 1. Run your database schema migrations
await runMigrations(connectionString);
console.log('Migrations applied successfully.');
// 2. Seed the database with necessary test data
await seedTestData(connectionString);
console.log('Test data seeded.');
// 3. Set the connection string as an environment variable for your test runner
process.env.DATABASE_URL = connectionString;
Your test suite can now run against a perfectly clean, isolated, and correctly seeded database.
// Execute your test command (e.g., jest, mocha, vitest)
await runTestRunner();
console.log('All tests passed!');
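The runMigrations, seedTestData, and runTestRunner helpers above are placeholders for whatever tooling your project already uses. As a minimal sketch, assuming a Prisma schema whose datasource reads env("DATABASE_URL"), a seed script at prisma/seed.js, and Jest as the test runner (all of these are assumptions, not requirements of databases.do), they could simply shell out to those tools with DATABASE_URL pointed at the ephemeral instance:
import { execSync } from 'node:child_process';

// Run a CLI command with DATABASE_URL pointing at the ephemeral database.
function runWithDatabase(command, connectionString) {
  execSync(command, {
    stdio: 'inherit',
    env: { ...process.env, DATABASE_URL: connectionString },
  });
}

async function runMigrations(connectionString) {
  // Prisma applies any pending migrations using the DATABASE_URL in the environment.
  runWithDatabase('npx prisma migrate deploy', connectionString);
}

async function seedTestData(connectionString) {
  // Hypothetical seed script path; adjust to your project layout.
  runWithDatabase('node prisma/seed.js', connectionString);
}

async function runTestRunner() {
  // Jest (or mocha/vitest) inherits the DATABASE_URL set earlier on process.env.
  execSync('npx jest --runInBand', { stdio: 'inherit', env: process.env });
}
Running Jest with --runInBand keeps test files from competing for connections to the same small instance, which tends to be the safer default for database-backed suites.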
This is the most critical step for both hygiene and cost savings. To ensure the database is always destroyed—even if the tests fail—we wrap our logic in a try...finally block. The finally block guarantees that the destroy() call is executed no matter what.
// Putting it all together in a robust block
const uniqueId = process.env.CI_COMMIT_SHORT_SHA || `local-test-${Date.now()}`;
const dbName = `test-db-${uniqueId}`;
try {
  // Step 2: Provision
  const newDb = await myDatabases.create({ name: dbName, /* ...other options */ });
  const { connectionString } = newDb.credentials;

  // Step 3: Migrate & Seed
  await runMigrations(connectionString);
  await seedTestData(connectionString);

  // Step 4: Run tests
  process.env.DATABASE_URL = connectionString;
  await runTestRunner();
} catch (error) {
  console.error('CI pipeline failed:', error);
  process.exit(1); // Exit with a non-zero code to fail the pipeline
} finally {
  // Step 5: Always destroy the database
  console.log(`Tearing down database: ${dbName}`);
  await myDatabases.destroy({ name: dbName });
  console.log('Cleanup complete.');
}
This try...finally pattern is the heart of a resilient ephemeral database strategy. You get all the benefits of isolation without the risk of leaving expensive, orphaned resources behind.
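One refinement worth considering: if create() itself throws, the finally block above still attempts a destroy() for a database that was never provisioned, and a cleanup error could obscure the original failure. A small guard, sketched here with the same create() and destroy() calls from the tutorial, avoids both problems:
let newDb;
try {
  newDb = await myDatabases.create({ name: dbName, /* ...other options */ });
  // ...migrate, seed, and run tests exactly as before...
} finally {
  // Only tear down what was actually created, and never let a cleanup
  // failure hide the error that made the run fail in the first place.
  if (newDb) {
    try {
      await myDatabases.destroy({ name: dbName });
    } catch (cleanupError) {
      console.error('Cleanup failed, database may need manual removal:', cleanupError);
    }
  }
}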
You've just seen how databases.do transforms a complex infrastructure challenge into a simple, reliable, and powerful agentic workflow. Ephemeral databases are just one of many patterns you can implement when you start managing your data collections as code.
Stop wrestling with staging databases. Start building faster, more reliable CI/CD pipelines.
Get Started with Databases.do and Turn Your Fleet into Code
Q: What is the difference between database.do and databases.do?
A: database.do is designed to manage a single database instance, handling its schema, queries, and specific operations. databases.do is a higher-level agent that orchestrates a collection or 'fleet' of multiple database instances, perfect for managing data across microservices, multi-tenant systems, or different environments (dev, staging, prod) like in this CI/CD example.
Q: Can I manage different types of databases across multiple clouds?
A: Yes. The databases.do agent acts as a control plane, allowing you to provision, configure, and manage diverse database types (e.g., PostgreSQL, Redis, MongoDB) from different cloud providers (AWS, GCP, Azure) through one unified API.
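For illustration, a mixed fleet managed through that one API might look like the sketch below, reusing the create() call from the tutorial. The specific type, provider, and plan values shown ('redis', 'gcp', and so on) are assumptions based on this answer, not confirmed option names.
// Sketch only: option values are assumptions, not confirmed identifiers.
const analyticsDb = await myDatabases.create({
  type: 'postgres',
  version: '15',
  provider: 'gcp',
  plan: 'micro',
  name: 'analytics-reporting',
});

const sessionCache = await myDatabases.create({
  type: 'redis',
  provider: 'aws',
  plan: 'micro',
  name: 'session-cache',
});

console.log(`Provisioned ${analyticsDb.name} and ${sessionCache.name} from one control plane.`);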
Q: How do I provision a new database using the databases.do agent?
A: As shown in the tutorial, you can programmatically provision a new database instance by calling create() on the agent, specifying the desired type, configuration, and provider. This turns complex infrastructure setup into a single, repeatable line of code.
Q: How is security handled when managing multiple databases?
A: All interactions are secured through the .do platform's centralized authentication and authorization. You can define fine-grained policies to control which users or services have permission to list, create, or destroy database instances within your fleet.