- From the left-hand side menu, click on Account
- From the top navigation bar, select Projects
- Click the desired project; everything will be scoped to that project
- As part of this lab, we will switch between modules several times
To switch modules:
- Click the nine-dot menu icon
- Select the module relevant to the step
- The lab begins in the code repository
- No friction for developers
- Automated changes
In this lab, the user sets up a pipeline integrating a schema source, a target database, change application steps, and built-in change management.
The user then pushes a new database changelog to Git (e.g., adding a column). This triggers the pipeline and creates a pull request review step for the database change, just like any other code change.
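The changelogs in this lab follow the Liquibase YAML format. As a hedged sketch of what such a file might contain before the lab's changes (the file path, table, and column names are illustrative, not prescribed by the lab), an initial changelog could look like:

```yaml
# db/changelog.yaml -- illustrative path and contents
databaseChangeLog:
  - changeSet:
      id: create-users-table
      author: harness-lab
      changes:
        - createTable:
            tableName: users
            columns:
              - column:
                  name: id
                  type: int
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: email
                  type: varchar(255)
```

Pushing a new changeSet to a file like this is what triggers the pipeline in the steps below.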
- In the Harness UI, navigate to the Database DevOps module
- From the left menu, select Pipelines.
- Click Create a Pipeline and enter a name.
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Setup | Inline | |
- Click Start.
- Click Add Stage and choose Custom Stage
| Input | Value | Notes |
|---|---|---|
| Stage Name | | |
- Click Add Step and select Add Step Group
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Enable | Containerized Execution | |
| Kubernetes Cluster | k8s-prod | |
- Inside the step group, click Add Step.
- From the available out-of-the-box steps, select Apply Schema
- Configure accordingly
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Select DB Schema | DB | |
| Database Instance | DB1 | |
- Click Apply Changes.
- Click Save to finalize the pipeline.
- Click Run to manually execute it.
Verify that:
- The schema changes are applied to the target database.
- The pipeline completes successfully.
- From the left menu, select Overview
- The changes should be visible for DB1
- While in the pipeline studio, use the top right navigation bar
- Click Triggers, then New Trigger.
- Choose Harness as the trigger type.
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Repository | db_changes | |
| Event | Push | |
- Click Continue
- Under Conditions
| Input | Operator | Value |
|---|---|---|
| Branch name(s) | Equals | |
| Changed File | Equals | |
- Click Continue, then Create Trigger.
- Navigate to Code Repository Module.
- Click into repo db_changes.
- Click the Edit button.
In your configured Git repo, add a changeSet after the existing one to introduce a new change:

```yaml
- changeSet:
    id: add-second-email-column
    author: harness-lab
    changes:
      - addColumn:
          tableName: users
          columns:
            - column:
                name: second_email
                type: varchar(255)
                constraints:
                  nullable: true
```

The pipeline will automatically trigger, and the schema change will be applied to the target database.
| Aspect | Description |
|---|---|
| Familiar Workflow | Developers stay in Git. |
| Single Source of Truth | All changes are versioned together. |
| Lower Learning Curve | GitOps-aligned workflows. |
| Accelerated Velocity | No manual DB scripts or tickets. |
| Compliance by Design | Automated, policy-driven changes. |
- Safe failure handling
- Pre-validated rollback plans
- Confidence in change velocity
In this lab, the user intentionally deploys a database changelog that introduces a breaking or invalid change (for example, dropping a required column). The pipeline detects the failure during deployment to a non-production environment and automatically triggers a rollback using a predefined backout script.
This simulates a real-world scenario where a change fails validation or breaks application behavior, and highlights how Harness enables fast recovery without manual intervention or firefighting.
- Navigate to your pipeline and locate the step group containing your Schema Apply step.
- Hover to the right of the Schema Apply step and click the ➕ icon to add a new step to the right of the existing step.
- From the list of steps find and select the Rollback Schema step.
- Configure accordingly:
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Select DB Schema | DB | |
| Select DB Instance | DB1 | |
| Rollback Count | | |
Add Conditional Execution
- Within the same step configuration, scroll to the top
- From the navigation bar, select Advanced
- Click Conditional Execution.
- Choose If the previous step fails.
- Click Apply Changes, then Save the pipeline.
In your configured Git repository, add a changeSet that attempts to make an invalid change:

```yaml
- changeSet:
    id: add-duplicate-column
    author: harness-lab
    changes:
      - addColumn:
          tableName: users
          columns:
            - column:
                name: id
                type: int
    rollback:
      - sql:
          comment: This is a no-op rollback for the invalid column addition.
          sql: SELECT 1;
```

Commit and push the change to the main branch.
The pipeline will automatically trigger:
- The breaking change will attempt to apply and fail. Can you identify why?
- The rollback plan will be executed automatically.
- The environment will be restored to a stable state.
- Navigate to Overview under the Database DevOps module and review the changeset versions and status.
| Aspect | Description |
|---|---|
| Automatic Rollbacks | Backout plans are pre-validated and ready to run, reducing mean time to recovery (MTTR). |
| No Manual Intervention | Developers do not need to SSH into environments or locate old scripts; rollback is built into the pipeline. |
| Resilience as Default | Pipelines are designed to fail gracefully, keeping environments stable and deploy-ready. |
- Targeted policy-as-code enforcement
- Standardized review and approval gates
- Risk mitigation before deployment
In this lab, the user attempts to push a database changelog that drops a table—an action disallowed in this environment. The pipeline evaluates the change using integrated policy-as-code (OPA) rules and immediately blocks execution.
The user reviews the policy failure, understands the violation, and updates the changelog to meet organizational standards before resubmitting.
- In the left-hand panel, go to Project Settings.
- Select Policies, then click New Policy.
You may have to click the X in the upper right to dismiss a pop-up.
| Input | Value | Notes |
|---|---|---|
| Name | Block Destructive SQL | |
- Let's block dropping data.
- Copy the Rego code below into the policy editor.
```rego
package db_sql

rules := [
  {
    "types": ["mssql", "oracle", "postgres", "mysql"],
    "environments": ["prod"],
    "regex": [
      "drop\\s+table",
      "drop\\s+column",
      "drop\\s+database",
      "drop\\s+schema"
    ]
  },
  {
    "types": ["oracle"],
    "environments": ["prod"],
    "regex": ["drop\\s+catalog"]
  }
]

deny[msg] {
  some i, k, l
  rule := rules[i]
  regex.match(concat("", [".*", rule.regex[k], ".*"]), lower(input.sqlStatements[l]))
  msg := "dropping data is not permitted"
}
```

- Click Save
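The deny rule matches each lowercased statement in `input.sqlStatements` against the drop patterns above. The exact input shape Harness passes to the policy step is inferred here from the rule itself, so treat this as illustrative: an input like the following (shown in YAML) would be denied, because the second statement matches `drop\s+table`.

```yaml
# Hypothetical policy input -- the field name comes from the rule's
# use of input.sqlStatements; the statements are illustrative.
sqlStatements:
  - "ALTER TABLE users ADD COLUMN second_email varchar(255)"
  - "DROP TABLE users"
```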
- From the top right navigation bar click Policy Sets
- Click + New Policy Set
| Input | Value | Notes |
|---|---|---|
| Name | Prevent Destructive Changes | |
| Entity Type | Custom | |
| On what event | On Step | |
- Click Continue.
- Under Policy to Evaluate, click Add Policy.
- Select the `Block Destructive SQL` policy.
- Click Apply, then Finish.
- Toggle Enforce.
Add the policy set to the pipeline:
- Navigate to the pipeline studio (in edit mode)
- Open the Apply Schema step
- Within the step configuration, select Advanced from the top navigation bar
- Expand the Policy Enforcement section
- Click on Add/Modify Policy Set
- From the navigation bar select your project
- Select the Prevent Destructive Changes policy set
- Click Apply Changes
- Save Pipeline
The new Policy Set is now active and linked to the pipeline runtime.
- In your configured Git repo, add a `changeSet` that violates the policy (attempting to drop a table):

```yaml
- changeSet:
    id: 2025-05-21-drop-users-table
    author: harness-lab
    changes:
      - dropTable:
          tableName: users
```

- Commit and push the change to the monitored branch (main).
- Notice that the changeset attempts to drop the users table.
The pipeline will automatically trigger. Observe that:
- The policy is triggered as part of the pipeline execution.
- The execution fails before deployment.
- The pipeline halts with the error: `dropping data is not permitted`
Review pipeline output and logs
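The lab overview notes that the user then updates the changelog to meet organizational standards before resubmitting. As a hedged sketch of one compliant revision (the rename approach and all names are illustrative; any changeSet that avoids the blocked `drop` patterns would pass the policy):

```yaml
# Instead of dropping the table, rename it so the data is preserved
# (illustrative only -- not a step prescribed by the lab):
- changeSet:
    id: 2025-05-21-deprecate-users-table
    author: harness-lab
    changes:
      - renameTable:
          oldTableName: users
          newTableName: users_deprecated
```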

| Aspect | Description |
|---|---|
| Guardrails, Not Roadblocks | Policies surface issues early without slowing down developers who follow best practices. |
| Standardized Governance | Approval workflows and checks are consistent across teams, environments, and databases. |
| Policy-as-Code | Governance is defined in code and version-controlled—just like everything else. |
| Risk Reduction | Disallowed or unsafe changes are caught before they affect production. |
| Scalable Compliance | Teams can move fast while meeting security and audit requirements at scale. |
- Single pipeline for multi-env DB deployments
- Environment-specific guardrails
- Reduced handoffs and manual coordination
In this lab, the user promotes a database change through multiple environments — dev, staging, and production — using a single orchestrated pipeline. Each stage applies the change to a different target database with environment-specific configurations, policies, and approval steps.
As changes progress, the user can view which schema versions are deployed in each environment directly in the Harness UI — removing the need to manually track or document status.
- In your existing pipeline from Lab 1, after the existing stages click Add Stage
| Input | Value | Notes |
|---|---|---|
| Stage Name | | |
- Click Add Step and select Add Step Group
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Enable | Containerized Execution | |
| Kubernetes Cluster | k8s-prod | |
- Inside the step group, click Add Step.
- From the available out-of-the-box steps, select Apply Schema
- Configure accordingly
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Select DB Schema | DB | |
| Database Instance | DB2 | |
- Click Apply Changes.
- In your existing pipeline from Lab 1, after the existing stages click Add Stage
| Input | Value | Notes |
|---|---|---|
| Stage Name | | |
- Click Add Step and select Add Step Group
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Enable | Containerized Execution | |
| Kubernetes Cluster | k8s-prod | |
- Inside the step group, click Add Step.
- From the available out-of-the-box steps, select Apply Schema
- Configure accordingly
| Input | Value | Notes |
|---|---|---|
| Name | | |
| Select DB Schema | DB | |
| Database Instance | DB3 | |
- Click Apply Changes.
- Click Save to finalize the multi-stage pipeline.
- In Git, remove the table drop change and the broken change from Lab 2, then commit to kick off the pipeline.
- Observe the deployment of the previously committed schema change through:
- Stage 1: DB1 (Dev)
- Stage 2: DB2 (QA)
- Stage 3: DB3 (Production)
- Verify successful execution in all stages.
- From the left-hand nav, go to Overview.
- Observe the green checkmarks next to Dev, QA, and Production, indicating:
- Where the change has been applied
- That each deployment completed successfully
- This provides visibility across all environments from a single pane of glass.
| Feature | Description |
|---|---|
| Unified Workflow | One pipeline governs the full lifecycle of a database change from dev to prod. |
| Environment-Specific Control | Each stage can have its own policies, approvers, and rollback settings. |
| Schema Visibility | The Harness UI shows which schema changes have been applied where, so teams always know the current state across environments. |
| Reduced Toil | No need to manually coordinate between environments or hand off to DBAs. |
| Production Readiness by Design | Staged rollouts and approvals ensure only validated changes reach production. |