| Crates.io | pgmold |
| lib.rs | pgmold |
| version | 0.19.13 |
| created_at | 2025-11-21 07:54:27.067794+00 |
| updated_at | 2026-01-25 00:49:52.072531+00 |
| description | PostgreSQL schema-as-code management tool |
| homepage | |
| repository | https://github.com/fmguerreiro/pgmold |
| max_upload_size | |
| id | 1943222 |
| size | 1,994,280 |
PostgreSQL schema-as-code management tool. Define schemas in native PostgreSQL DDL, diff against live databases, plan migrations, and apply them safely.
Partitioned tables are supported (PARTITION BY and PARTITION OF syntax).
┌─────────────────────┐       ┌─────────────────────┐
│ Schema Files │ │ Live Database │
│ (Desired State) │ │ (Current State) │
└──────────┬──────────┘ └──────────┬──────────┘
│ │
└───────────┬───────────────┘
▼
┌─────────────────┐
│ pgmold diff │
│ (compare) │
└────────┬────────┘
▼
┌─────────────────┐
│ Generated SQL │
│ (only changes) │
└─────────────────┘
Example:
Your schema file says:
CREATE TABLE users (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL, -- NEW
created_at TIMESTAMP
);
Database currently has:
CREATE TABLE users (
id UUID PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMP
);
pgmold generates only the delta:
ALTER TABLE users ADD COLUMN email TEXT NOT NULL;
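The core idea can be sketched as a set comparison between desired and current columns. This is an illustrative simplification, not pgmold's actual implementation; the function and data structures here are hypothetical:

```python
# Illustrative sketch: computing a column-level delta between the desired
# schema state and the current database state. Only additions are handled
# here; a real tool also diffs types, drops, constraints, etc.
def column_delta(table, desired, current):
    """Emit ALTER TABLE statements for columns present only in the desired state."""
    stmts = []
    for col, col_type in desired.items():
        if col not in current:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {col_type};")
    return stmts

desired = {"id": "UUID PRIMARY KEY", "name": "TEXT NOT NULL",
           "email": "TEXT NOT NULL", "created_at": "TIMESTAMP"}
current = {"id": "UUID PRIMARY KEY", "name": "TEXT NOT NULL",
           "created_at": "TIMESTAMP"}

print(column_delta("users", desired, current))
# → ['ALTER TABLE users ADD COLUMN email TEXT NOT NULL;']
```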
cargo install pgmold
For the latest version with partitioned table support (until the sqlparser fork is merged upstream):
cargo install --git https://github.com/fmguerreiro/pgmold
# 1. Create a schema file
cat > schema.sql << 'EOF'
CREATE TABLE users (
id BIGINT PRIMARY KEY,
email TEXT NOT NULL,
created_at TIMESTAMP DEFAULT now()
);
EOF
# 2. See what would change
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb
# 3. Apply the migration
pgmold apply --schema sql:schema.sql --database db:postgres://localhost/mydb
# Compare SQL schema to live database
pgmold diff --from sql:schema.sql --to db:postgres://localhost/mydb
# Generate migration plan
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb
# Generate rollback plan (reverse direction)
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb --reverse
# Apply migrations (with safety checks)
pgmold apply --schema sql:schema.sql --database db:postgres://localhost/mydb
# Apply with destructive operations allowed
pgmold apply --schema sql:schema.sql --database db:postgres://localhost/mydb --allow-destructive
# Dry run (preview SQL without executing)
pgmold apply --schema sql:schema.sql --database db:postgres://localhost/mydb --dry-run
# Lint schema
pgmold lint --schema sql:schema.sql
# Monitor for drift
pgmold monitor --schema sql:schema.sql --database db:postgres://localhost/mydb
# Detect drift (returns JSON report with exit code 1 if drift detected)
pgmold drift --schema sql:schema.sql --database db:postgres://localhost/mydb --json
Organize your schema across multiple files using directories or glob patterns:
# Load all SQL files from a directory (recursive)
pgmold apply --schema sql:./schema/ --database db:postgres://localhost/mydb
# Use glob patterns
pgmold apply --schema "sql:schema/**/*.sql" --database db:postgres://localhost/mydb
# Multiple sources
pgmold apply --schema sql:types.sql --schema "sql:tables/*.sql" --database db:postgres://localhost/mydb
Example directory structure:
schema/
├── enums.sql # CREATE TYPE statements
├── tables/
│ ├── users.sql # users table + indexes
│ └── posts.sql # posts table + foreign keys
└── functions/
└── triggers.sql # stored procedures
Duplicate definitions (same table/enum/function in multiple files) will error immediately with clear file locations.
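The duplicate check described above can be sketched roughly as follows. This is a hypothetical illustration of the behavior, not pgmold's internals:

```python
# Sketch of duplicate-definition detection across schema files: remember
# where each object name was first defined and fail fast on a second
# definition, reporting both file locations.
def check_duplicates(definitions):
    """definitions: iterable of (object_name, file_path) pairs."""
    seen = {}
    for name, path in definitions:
        if name in seen:
            raise ValueError(
                f"duplicate definition of {name!r}: "
                f"first in {seen[name]}, again in {path}")
        seen[name] = path

check_duplicates([("users", "tables/users.sql"),
                  ("posts", "tables/posts.sql")])  # passes silently
```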
Filter which objects to include in comparisons using name patterns or object types.
Filter by name pattern:
# Include only objects matching patterns
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--include 'api_*' --include 'users'
# Exclude objects matching patterns
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--exclude '_*' --exclude 'pg_*'
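The include/exclude semantics could be evaluated along these lines; pgmold's exact glob dialect may differ, and `fnmatch` is used here purely for illustration:

```python
# Hedged sketch of name-pattern filtering: an object is selected if it
# matches at least one include pattern (or no includes are given) and
# matches no exclude pattern.
from fnmatch import fnmatch

def is_selected(name, include=None, exclude=None):
    if include and not any(fnmatch(name, p) for p in include):
        return False
    if exclude and any(fnmatch(name, p) for p in exclude):
        return False
    return True

print(is_selected("api_users", include=["api_*"]))       # True
print(is_selected("_internal", exclude=["_*", "pg_*"]))  # False
```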
Filter by object type:
# Only compare tables and functions (ignore extensions, views, triggers, etc.)
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--include-types tables,functions
# Exclude extensions from comparison
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--exclude-types extensions
Combine type and name filters:
# Compare only functions matching 'api_*', excluding internal ones
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--include-types functions \
--include 'api_*' \
--exclude '_*'
Filter nested types within tables:
# Compare tables without RLS policies
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--exclude-types policies
# Compare only table structure (no indexes, constraints, or policies)
pgmold plan --schema sql:schema.sql --database db:postgres://localhost/mydb \
--exclude-types policies,indexes,foreignkeys,checkconstraints
Available object types:
extensions, tables, enums, domains, functions, views, triggers, sequences, partitions
Nested types (within tables): policies, indexes, foreignkeys, checkconstraints
By default, pgmold automatically excludes objects owned by extensions (e.g., PostGIS functions, pg_trgm operators). This prevents extension-provided objects from appearing in diffs.
# Include extension objects if needed (e.g., for full database dumps)
pgmold dump --database db:postgres://localhost/mydb --include-extension-objects -o full_schema.sql
If you have a live database with an existing schema (and possibly a migration-based workflow), use pgmold dump to create a baseline:
# Export current database schema to SQL files
pgmold dump --database "db:postgres://localhost/mydb" -o schema/baseline.sql
# For specific schemas only
pgmold dump --database "db:postgres://localhost/mydb" --target-schemas public,auth -o schema/baseline.sql
# Split into multiple files by object type
pgmold dump --database "db:postgres://localhost/mydb" --split -o schema/
The --split option creates separate files for extensions, types, sequences, tables, functions, views, triggers, and policies.
This exports your live database schema as SQL DDL. Now your schema files match the database exactly, and pgmold plan will show 0 operations.
pgmold plan --schema sql:schema/ --database db:postgres://localhost/mydb
pgmold apply --schema sql:schema/ --database db:postgres://localhost/mydb
pgmold is declarative (like Terraform): it computes diffs and applies them directly rather than generating numbered migration files. If you need to maintain compatibility with an existing migration system:
# Generate a numbered migration file automatically
pgmold migrate generate \
--schema sql:schema/ \
--database db:postgres://localhost/mydb \
--migrations ./migrations \
--name "add_email_column"
# Creates: migrations/0044_add_email_column.sql
# Or manually capture output
pgmold diff --from "db:postgres://localhost/mydb" --to "sql:schema/" > migrations/0044_my_change.sql
The migrate generate command auto-detects the next migration number by scanning existing files.
This lets you use pgmold for diffing while keeping your existing migration runner.
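The number-scanning step can be sketched as below. This is a hypothetical illustration of the kind of scan involved, not pgmold's actual code:

```python
# Sketch: find the highest NNNN_ prefix among existing migration files
# and return the next number. Files without a numeric prefix are ignored.
import re
from pathlib import Path

def next_migration_number(migrations_dir):
    nums = [int(m.group(1))
            for f in Path(migrations_dir).glob("*.sql")
            if (m := re.match(r"(\d+)_", f.name))]
    return max(nums, default=0) + 1

# With migrations/0043_add_users.sql present, this returns 44, so the
# generated file would be named 0044_<name>.sql.
```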
pgmold includes a GitHub Action for detecting schema drift in CI/CD pipelines. This catches when manual database changes drift from your schema files.
- name: Check for schema drift
uses: fmguerreiro/pgmold/.github/actions/drift-check@main
with:
schema: 'sql:schema/'
database: ${{ secrets.DATABASE_URL }}
target-schemas: 'public,auth'
fail-on-drift: 'true'
Inputs:
schema (required): Path to schema SQL file(s). Can be a single file or multiple files (space-separated).
database (required): PostgreSQL connection string.
target-schemas (optional): Comma-separated list of schemas to introspect. Default: public.
version (optional): pgmold version to install. Default: latest.
fail-on-drift (optional): Whether to fail the action if drift is detected. Default: true.
Outputs:
has-drift: Whether drift was detected (true/false).
expected-fingerprint: Expected schema fingerprint from SQL files.
actual-fingerprint: Actual schema fingerprint from database.
report: Full JSON drift report.
See .github/workflows/drift-check-example.yml.example for a complete example. Basic usage:
name: Schema Drift Check
on:
schedule:
- cron: '0 8 * * *' # Daily at 8am UTC
workflow_dispatch:
jobs:
drift-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check for schema drift
uses: fmguerreiro/pgmold/.github/actions/drift-check@main
with:
schema: 'sql:schema/'
database: ${{ secrets.DATABASE_URL }}
For local or custom CI environments, use the drift command directly:
# Get JSON report with exit code 1 if drift detected
pgmold drift --schema sql:schema/ --database postgres://localhost/mydb --json
# Example output:
# {
# "has_drift": true,
# "expected_fingerprint": "abc123...",
# "actual_fingerprint": "def456...",
# "differences": [
# "Table users has extra column in database: last_login TIMESTAMP"
# ]
# }
The drift detection compares SHA256 fingerprints of normalized schemas. Any difference (new tables, altered columns, changed indexes) triggers drift.
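Fingerprint-based comparison can be sketched as below. The normalization shown (lowercasing, whitespace collapsing) is a deliberate simplification; pgmold's actual normalization rules are not specified here:

```python
# Sketch of fingerprint-based drift detection: hash a normalized rendering
# of the DDL so that cosmetic differences (case, whitespace) do not count
# as drift, while any structural difference changes the hash.
import hashlib
import re

def fingerprint(ddl: str) -> str:
    normalized = re.sub(r"\s+", " ", ddl.strip().lower())
    # collapse spaces around punctuation so layout doesn't affect the hash
    normalized = re.sub(r"\s*([(),;])\s*", r"\1", normalized)
    return hashlib.sha256(normalized.encode()).hexdigest()

expected = fingerprint("CREATE TABLE users (id BIGINT PRIMARY KEY);")
actual   = fingerprint("create table users ( id bigint primary key );")
print(expected == actual)  # cosmetic differences: no drift reported
```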
pgmold is available as a Terraform provider for infrastructure-as-code workflows.
terraform {
required_providers {
pgmold = {
source = "fmguerreiro/pgmold"
version = "~> 0.3"
}
}
}
provider "pgmold" {}
resource "pgmold_schema" "app" {
schema_file = "${path.module}/schema.sql"
database_url = var.database_url
allow_destructive = false # Set true to allow DROP operations
}
When you change schema.sql, Terraform will diff against the live database and apply only the necessary migrations.
| Name | Type | Required | Description |
|---|---|---|---|
| schema_file | string | yes | Path to SQL schema file |
| database_url | string | yes | PostgreSQL connection URL |
| target_schemas | list(string) | no | PostgreSQL schemas to manage (default: ["public"]) |
| allow_destructive | bool | no | Allow DROP operations (default: false) |
Computed attributes:
id - Resource identifier
schema_hash - SHA256 hash of schema file
applied_at - Timestamp of last migration
migration_count - Number of operations applied
Generate numbered migration files instead of applying directly:
resource "pgmold_migration" "app" {
schema_file = "${path.module}/schema.sql"
database_url = var.database_url
output_dir = "${path.module}/migrations"
prefix = "V" # Flyway-style prefix
}
By default, pgmold blocks destructive operations:
DROP TABLE requires --allow-destructive
DROP COLUMN requires --allow-destructive
DROP ENUM requires --allow-destructive
SET NOT NULL produces warnings (may fail on existing NULLs)
Set PGMOLD_PROD=1 to enable production mode, which blocks table drops entirely.
These tools share pgmold's approach: define desired state, compute diffs automatically.
| Feature | pgmold | Atlas | pg-schema-diff | pgschema |
|---|---|---|---|---|
| Language | Rust | Go | Go | Go |
| Schema Format | Native SQL | HCL, SQL, ORM | Native SQL | SQL |
| Multi-DB Support | PostgreSQL | ✅ Many | PostgreSQL | PostgreSQL |
| Drift Detection | ✅ | ✅ | ❌ | ❌ |
| Lock Hazard Warnings | ✅ | ✅ | ✅ | ❌ |
| Safety Linting | ✅ | ✅ | ❌ | ❌ |
| RLS Policies | ✅ | ✅ | ❌ | ❌ |
| Partitioned Tables | ✅ | ✅ | ✅ | ? |
| Cloud Service | ❌ | Atlas Cloud | ❌ | ❌ |
| Library Mode | ❌ | ❌ | ✅ | ❌ |
Traditional tools where you write numbered migration files manually.
| Feature | pgmold | Flyway | Liquibase | Sqitch |
|---|---|---|---|---|
| Approach | Declarative | Versioned | Versioned | Plan-based |
| Auto-generates Migrations | ✅ | ❌ | ❌ | ❌ |
| Multi-DB Support | PostgreSQL | ✅ Many | ✅ Many | ✅ Many |
| Drift Detection | ✅ | ✅ (preview) | ✅ | ❌ |
| Rollback Scripts | Auto (reverse diff) | Manual | Manual | Required |
| Enterprise Features | ❌ | Teams edition | Pro edition | ❌ |
# Build
cargo build
# Test
cargo test
# Run integration tests (requires Docker)
cargo test --test integration
MIT