| Crates.io | diesel-guard |
| lib.rs | diesel-guard |
| version | 0.5.0 |
| created_at | 2025-12-06 03:08:43.634645+00 |
| updated_at | 2026-01-20 04:52:46.904292+00 |
| description | Catch unsafe PostgreSQL migrations in Diesel and SQLx before they take down production |
| homepage | https://github.com/ayarotsky/diesel-guard |
| repository | https://github.com/ayarotsky/diesel-guard |
| max_upload_size | |
| id | 1969567 |
| size | 443,967 |
Catch dangerous PostgreSQL migrations before they take down production.
✓ Detects operations that lock tables or cause downtime
✓ Provides safe alternatives for each blocking operation
✓ Works with both Diesel and SQLx migration frameworks
✓ Supports safety-assured blocks for verified operations
cargo install diesel-guard
diesel-guard analyzes your migration SQL and catches dangerous operations before they reach production.
diesel-guard check migrations/2024_01_01_create_users/up.sql
When it finds an unsafe operation, you'll see:
❌ Unsafe migration detected in migrations/2024_01_01_create_users/up.sql
❌ ADD COLUMN with DEFAULT
Problem:
Adding column 'admin' with DEFAULT on table 'users' requires a full table rewrite on PostgreSQL < 11,
which acquires an ACCESS EXCLUSIVE lock. On large tables, this can take significant time and block all operations.
Safe alternative:
1. Add the column without a default:
ALTER TABLE users ADD COLUMN admin BOOLEAN;
2. Backfill data in batches (outside migration):
UPDATE users SET admin = <value> WHERE admin IS NULL;
3. Add default for new rows only:
ALTER TABLE users ALTER COLUMN admin SET DEFAULT <value>;
Note: For PostgreSQL 11+, this is safe if the default is a constant value.
diesel-guard supports both Diesel and SQLx PostgreSQL migrations. The framework is configured via diesel-guard.toml (see Configuration).
Diesel's directory-based migration structure:
migrations/
└── 2024_01_01_000000_create_users/
├── up.sql
├── down.sql
└── metadata.toml (optional)
SQLx supports multiple migration file formats. diesel-guard handles all of them:
Most common SQLx format with separate up/down files:
migrations/
├── 20240101000000_create_users.up.sql
└── 20240101000000_create_users.down.sql
Single migration file without rollback:
migrations/
└── 20240101000000_create_users.sql
Single file with both up and down sections:
-- migrations/20240101000000_create_users.sql
-- migrate:up
CREATE TABLE users (
id SERIAL PRIMARY KEY,
email VARCHAR(255) NOT NULL
);
-- migrate:down
DROP TABLE users;
Similar to Diesel but with SQLx timestamp format:
migrations/
└── 20240101000000_create_users/
├── up.sql
└── down.sql
diesel-guard requires explicit framework configuration in diesel-guard.toml:
# Framework configuration (REQUIRED)
framework = "diesel" # or "sqlx"
Generate a config file with:
diesel-guard init
See the Configuration section for all available options.
SQLx uses comment directives for migration metadata. diesel-guard recognizes these and validates their usage:
-- no-transaction
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
diesel-guard will warn you if you use CONCURRENTLY operations without the -- no-transaction directive.
In PostgreSQL versions before 11, adding a column with a default value requires a full table rewrite. This acquires an ACCESS EXCLUSIVE lock and can take hours on large tables, blocking all reads and writes.
ALTER TABLE users ADD COLUMN admin BOOLEAN DEFAULT FALSE;
Add the column first, backfill the data separately, then add the default:
-- Migration 1: Add column without default
ALTER TABLE users ADD COLUMN admin BOOLEAN;
-- Outside migration: Backfill in batches
UPDATE users SET admin = FALSE WHERE admin IS NULL;
-- Migration 2: Add default for new rows only
ALTER TABLE users ALTER COLUMN admin SET DEFAULT FALSE;
Note: For PostgreSQL 11+, adding a column with a constant default value is instant and safe.
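One way to run the batched backfill, as a sketch (the 1,000-row batch size and the indexed id column are assumptions; an external script repeats the statement until it updates zero rows):
-- One batch of the backfill; rerun until 0 rows are updated
UPDATE users
SET admin = FALSE
WHERE id IN (
    SELECT id FROM users
    WHERE admin IS NULL
    LIMIT 1000
);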
Dropping a column acquires an ACCESS EXCLUSIVE lock. In PostgreSQL the drop itself is a fast catalog change (the column's storage is reclaimed later), but the lock blocks all operations while it is held or queued, and any application code still referencing the column will fail immediately.
ALTER TABLE users DROP COLUMN email;
Remove references from application code first, then drop the column in a later migration:
-- Step 1: Mark column as unused in application code
-- Deploy application code changes first
-- Step 2: (Optional) Set to NULL to reclaim space
ALTER TABLE users ALTER COLUMN email DROP NOT NULL;
UPDATE users SET email = NULL;
-- Step 3: Drop in later migration after confirming it's unused
ALTER TABLE users DROP COLUMN email;
PostgreSQL doesn't support DROP COLUMN CONCURRENTLY, so the ACCESS EXCLUSIVE lock is unavoidable. Staging the removal minimizes risk.
Dropping a primary key removes the critical uniqueness constraint and breaks foreign key relationships in other tables that reference this table. It also acquires an ACCESS EXCLUSIVE lock, blocking all operations.
-- Breaks foreign keys that reference users(id)
ALTER TABLE users DROP CONSTRAINT users_pkey;
If you must change your primary key strategy, use a multi-step migration approach:
-- Step 1: Identify all foreign key dependencies
SELECT
tc.table_name, kcu.column_name, rc.constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.referential_constraints rc ON tc.constraint_name = rc.unique_constraint_name
WHERE tc.table_name = 'users' AND tc.constraint_type = 'PRIMARY KEY';
-- Step 2: Create the new primary key FIRST (if migrating to a new key)
ALTER TABLE users ADD CONSTRAINT users_new_pkey PRIMARY KEY (uuid);
-- Step 3: Update all foreign keys to reference the new key
-- (This may require adding new columns to referencing tables)
ALTER TABLE posts ADD COLUMN user_uuid UUID;
UPDATE posts SET user_uuid = users.uuid FROM users WHERE posts.user_id = users.id;
ALTER TABLE posts ADD CONSTRAINT posts_user_uuid_fkey FOREIGN KEY (user_uuid) REFERENCES users(uuid);
-- Step 4: Only after all foreign keys are migrated, drop the old key
ALTER TABLE users DROP CONSTRAINT users_pkey;
-- Step 5: Clean up old columns
ALTER TABLE posts DROP COLUMN user_id;
Important considerations:
Limitation: This check relies on PostgreSQL naming conventions (e.g., users_pkey). It may not detect primary keys with custom names. Future versions will support database connections for accurate verification.
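To check the actual primary key constraint name rather than relying on the naming convention, a catalog query like this works (a sketch against the standard pg_constraint catalog):
-- List the primary key constraint on users (contype 'p' = primary key)
SELECT conname
FROM pg_constraint
WHERE conrelid = 'users'::regclass
  AND contype = 'p';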
Dropping a table permanently deletes all data, indexes, triggers, and constraints. This operation acquires an ACCESS EXCLUSIVE lock and cannot be undone after the transaction commits. Foreign key relationships in other tables may block the drop or cause cascading deletes.
DROP TABLE users;
DROP TABLE IF EXISTS orders CASCADE;
Before dropping a table in production, take these precautions:
-- Step 1: Verify the table is no longer in use
-- Check application code for references to this table
-- Monitor for queries against the table (see the statistics query after this block)
-- Step 2: Check for foreign key dependencies
SELECT
tc.table_name, kcu.column_name, rc.constraint_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.referential_constraints rc ON tc.constraint_name = rc.constraint_name
WHERE rc.unique_constraint_schema = 'public'
AND rc.unique_constraint_name IN (
SELECT constraint_name FROM information_schema.table_constraints
WHERE table_name = 'users' AND constraint_type IN ('PRIMARY KEY', 'UNIQUE')
);
-- Step 3: Ensure backups exist or data has been migrated
-- Step 4: Drop the table (use safety-assured if intentional)
-- safety-assured:start
DROP TABLE users;
-- safety-assured:end
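To support Step 1, the cumulative statistics view shows whether the table still receives reads or writes (a sketch; counters accumulate since the last statistics reset):
-- Recent activity against the table (all zeros suggest it is unused)
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
WHERE relname = 'users';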
Important considerations:
Consider renaming the table first (e.g., users_deprecated) and waiting before dropping it
Dropping a database permanently deletes the entire database including all tables, data, and objects. This operation is irreversible. PostgreSQL requires exclusive access to the target database—all active connections must be terminated before the drop can proceed. The command cannot be executed inside a transaction block.
DROP DATABASE mydb;
DROP DATABASE IF EXISTS testdb;
DROP DATABASE should almost never appear in application migrations. Database lifecycle should be managed through infrastructure automation or DBA operations.
-- For local development: use database setup scripts
-- For production: use infrastructure automation (Terraform, Ansible)
-- For test cleanup: coordinate with DBA or use dedicated test infrastructure
-- If absolutely necessary (e.g., test cleanup), use a safety-assured block:
-- safety-assured:start
DROP DATABASE test_db;
-- safety-assured:end
Important considerations:
Note: PostgreSQL 13+ supports DROP DATABASE ... WITH (FORCE) to terminate active connections automatically, but this makes the operation even more dangerous and should be used with extreme caution.
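For reference, the syntax mentioned above looks like this (PostgreSQL 13+ only; the database name is just an example):
-- Forcibly terminates existing connections before dropping; use with extreme caution
DROP DATABASE testdb WITH (FORCE);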
Dropping an index without CONCURRENTLY acquires an ACCESS EXCLUSIVE lock on the table, blocking all queries (SELECT, INSERT, UPDATE, DELETE) until the drop operation completes.
DROP INDEX idx_users_email;
DROP INDEX IF EXISTS idx_users_username;
Use CONCURRENTLY to drop the index without blocking queries:
DROP INDEX CONCURRENTLY idx_users_email;
DROP INDEX CONCURRENTLY IF EXISTS idx_users_username;
Important: CONCURRENTLY requires PostgreSQL 9.2+ and cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_drop_user_index/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
DROP INDEX CONCURRENTLY idx_users_email;
Note: Dropping an index concurrently takes longer than a regular drop and uses more resources, but allows concurrent queries to continue. If it fails, the index may be left in an "invalid" state and should be dropped again.
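A quick way to find indexes left in that invalid state, as a sketch against the standard catalogs:
-- Indexes marked invalid after a failed CONCURRENTLY build or drop
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;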
Reindexing without CONCURRENTLY acquires an ACCESS EXCLUSIVE lock on the table, blocking all operations until complete. Duration depends on index size.
REINDEX INDEX idx_users_email;
REINDEX TABLE users;
Use CONCURRENTLY to reindex without blocking operations:
REINDEX INDEX CONCURRENTLY idx_users_email;
REINDEX TABLE CONCURRENTLY users;
Important: CONCURRENTLY requires PostgreSQL 12+ and cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_reindex_users/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
REINDEX INDEX CONCURRENTLY idx_users_email;
Note: REINDEX CONCURRENTLY rebuilds the index without locking out writes. If it fails, the index may be left in an "invalid" state—check with \d tablename and run REINDEX again if needed.
Creating an index without CONCURRENTLY acquires a SHARE lock, blocking all write operations (INSERT, UPDATE, DELETE) for the duration of the index build.
CREATE INDEX idx_users_email ON users(email);
CREATE UNIQUE INDEX idx_users_username ON users(username);
Use CONCURRENTLY to allow concurrent writes during the index build:
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
CREATE UNIQUE INDEX CONCURRENTLY idx_users_username ON users(username);
Important: CONCURRENTLY cannot run inside a transaction block.
For Diesel migrations: Add a metadata.toml file to your migration directory:
# migrations/2024_01_01_add_user_index/metadata.toml
run_in_transaction = false
For SQLx migrations: Add the no-transaction directive at the top of your migration file:
-- no-transaction
CREATE INDEX CONCURRENTLY idx_users_email ON users(email);
Adding a UNIQUE constraint via ALTER TABLE acquires an ACCESS EXCLUSIVE lock, blocking all reads and writes during index creation. This is worse than CREATE INDEX without CONCURRENTLY.
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);
ALTER TABLE users ADD UNIQUE (email); -- Unnamed is also bad
Use CREATE UNIQUE INDEX CONCURRENTLY, then optionally add the constraint:
-- Step 1: Create the unique index concurrently
CREATE UNIQUE INDEX CONCURRENTLY users_email_idx ON users(email);
-- Step 2 (Optional): Add constraint using the existing index
-- This is instant since the index already exists
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE USING INDEX users_email_idx;
Important: Requires metadata.toml with run_in_transaction = false (same as CREATE INDEX CONCURRENTLY).
Changing a column's type typically requires an ACCESS EXCLUSIVE lock and triggers a full table rewrite, blocking all operations.
ALTER TABLE users ALTER COLUMN age TYPE BIGINT;
ALTER TABLE users ALTER COLUMN data TYPE JSONB USING data::JSONB;
Use a multi-step approach with a new column:
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN age_new BIGINT;
-- Outside migration: Backfill in batches
UPDATE users SET age_new = age::BIGINT;
-- Migration 2: Swap columns
ALTER TABLE users DROP COLUMN age;
ALTER TABLE users RENAME COLUMN age_new TO age;
Safe type changes (no rewrite on PostgreSQL 9.2+):
VARCHAR(50) → VARCHAR(100)
VARCHAR(255) → TEXT
Adding a NOT NULL constraint requires scanning the entire table to verify all values are non-null. This acquires an ACCESS EXCLUSIVE lock and blocks all operations.
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
For large tables, use a CHECK constraint approach that allows concurrent operations:
-- Step 1: Add CHECK constraint without validating existing rows
ALTER TABLE users ADD CONSTRAINT users_email_not_null_check CHECK (email IS NOT NULL) NOT VALID;
-- Step 2: Validate separately (uses SHARE UPDATE EXCLUSIVE lock)
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null_check;
-- Step 3: Add NOT NULL constraint (instant if CHECK exists)
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- Step 4: Optionally drop redundant CHECK constraint
ALTER TABLE users DROP CONSTRAINT users_email_not_null_check;
The VALIDATE step allows concurrent reads and writes, only blocking other schema changes. On PostgreSQL 12+, NOT NULL constraints are more efficient, but this approach still provides better control.
Adding a primary key constraint to an existing table acquires an ACCESS EXCLUSIVE lock, blocking all operations (reads and writes). The operation must also create an index to enforce uniqueness, which compounds the lock duration on large tables.
-- Blocks all operations while creating index and adding constraint
ALTER TABLE users ADD PRIMARY KEY (id);
ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY (id);
Use CREATE UNIQUE INDEX CONCURRENTLY first, then add the primary key constraint using the existing index:
-- Step 1: Create unique index concurrently (allows concurrent operations)
CREATE UNIQUE INDEX CONCURRENTLY users_pkey ON users(id);
-- Step 2: Add PRIMARY KEY using the existing index (fast, minimal lock)
ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY USING INDEX users_pkey;
Important: The CONCURRENTLY approach requires metadata.toml with run_in_transaction = false:
# migrations/2024_01_01_add_primary_key/metadata.toml
run_in_transaction = false
Why this works:
The unique index is built with CONCURRENTLY, so reads and writes continue during the expensive part of the operation
Attaching the PRIMARY KEY constraint to the prebuilt index avoids building an index under an ACCESS EXCLUSIVE lock, so the lock is brief
Note: This approach requires PostgreSQL 11+. For earlier versions, you must use the unsafe ALTER TABLE ADD PRIMARY KEY during a maintenance window.
Creating an extension in migrations often requires superuser privileges, which application database users typically don't have in production environments.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION "uuid-ossp";
Install extensions outside of application migrations:
-- For local development: add to database setup scripts
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- For production: use infrastructure automation
-- (Ansible, Terraform, or manual DBA installation)
Best practices:
Common extensions that require this approach: pg_trgm, uuid-ossp, hstore, postgis, pg_stat_statements.
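If an extension genuinely has to ship inside a migration (for example, the migration role has been granted the necessary privileges), a safety-assured block keeps that decision explicit; a sketch:
-- safety-assured:start
-- Safe because: migration role has privileges to install this extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- safety-assured:end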
Adding a GENERATED ALWAYS AS ... STORED column acquires an ACCESS EXCLUSIVE lock and triggers a full table rewrite because PostgreSQL must compute and store the expression value for every existing row.
ALTER TABLE products ADD COLUMN total_price INTEGER GENERATED ALWAYS AS (price * quantity) STORED;
-- Step 1: Add a regular nullable column
ALTER TABLE products ADD COLUMN total_price INTEGER;
-- Step 2: Backfill in batches (outside migration)
UPDATE products SET total_price = price * quantity WHERE total_price IS NULL;
-- Step 3: Optionally add NOT NULL constraint
ALTER TABLE products ALTER COLUMN total_price SET NOT NULL;
-- Step 4: Use a trigger for new rows
CREATE FUNCTION compute_total_price() RETURNS TRIGGER AS $$
BEGIN
NEW.total_price := NEW.price * NEW.quantity;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_total_price
BEFORE INSERT OR UPDATE ON products
FOR EACH ROW EXECUTE FUNCTION compute_total_price();
Note: PostgreSQL does not support VIRTUAL generated columns (only STORED). For new empty tables, GENERATED STORED columns are acceptable.
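For a brand-new table, the stored generated column can simply be part of the CREATE TABLE; a sketch (the table and column names are illustrative):
CREATE TABLE order_lines (
    id BIGSERIAL PRIMARY KEY,
    price INTEGER NOT NULL,
    quantity INTEGER NOT NULL,
    -- No rewrite risk: the table is created with the column already in place
    total_price INTEGER GENERATED ALWAYS AS (price * quantity) STORED
);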
Adding constraints without explicit names results in auto-generated names from PostgreSQL. These names vary between databases and make future migrations difficult.
-- Unnamed UNIQUE constraint
ALTER TABLE users ADD UNIQUE (email);
-- Unnamed FOREIGN KEY constraint
ALTER TABLE posts ADD FOREIGN KEY (user_id) REFERENCES users(id);
-- Unnamed CHECK constraint
ALTER TABLE users ADD CHECK (age >= 0);
Always name constraints explicitly using the CONSTRAINT keyword:
-- Named UNIQUE constraint
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);
-- Named FOREIGN KEY constraint
ALTER TABLE posts ADD CONSTRAINT posts_user_id_fkey FOREIGN KEY (user_id) REFERENCES users(id);
-- Named CHECK constraint
ALTER TABLE users ADD CONSTRAINT users_age_check CHECK (age >= 0);
Best practices for constraint naming:
{table}_{column}_key or {table}_{column1}_{column2}_key for UNIQUE constraints
{table}_{column}_fkey for FOREIGN KEY constraints
{table}_{column}_check or {table}_{description}_check for CHECK constraints
Named constraints make future migrations predictable:
-- Easy to reference in later migrations
ALTER TABLE users DROP CONSTRAINT users_email_key;
Renaming a column breaks running application instances immediately. Any code that references the old column name will fail after the rename is applied, causing downtime.
ALTER TABLE users RENAME COLUMN email TO email_address;
Use a multi-step migration to maintain compatibility during the transition:
-- Migration 1: Add new column
ALTER TABLE users ADD COLUMN email_address VARCHAR(255);
-- Outside migration: Backfill in batches
UPDATE users SET email_address = email;
-- Migration 2: Add NOT NULL if needed
ALTER TABLE users ALTER COLUMN email_address SET NOT NULL;
-- Update application code to use email_address
-- Migration 3: Drop old column after deploying code changes
ALTER TABLE users DROP COLUMN email;
Important: The RENAME COLUMN operation itself is fast (brief ACCESS EXCLUSIVE lock), but the primary risk is application compatibility, not lock duration. All running instances must be updated to reference the new column name before the rename is applied.
Renaming a table breaks running application instances immediately. Any code that references the old table name will fail after the rename is applied. Additionally, this operation requires an ACCESS EXCLUSIVE lock which can block on busy tables.
ALTER TABLE users RENAME TO customers;
Use a multi-step dual-write migration to safely rename the table:
-- Migration 1: Create new table
CREATE TABLE customers (LIKE users INCLUDING ALL);
-- Update application code to write to BOTH tables
-- Migration 2: Backfill data in batches
-- (last_processed_id is a placeholder your backfill script tracks between batches)
INSERT INTO customers
SELECT * FROM users
WHERE id > last_processed_id
ORDER BY id
LIMIT 10000;
-- Update application code to read from new table
-- Deploy updated application
-- Update application code to stop writing to old table
-- Migration 3: Drop old table
DROP TABLE users;
Important: This multi-step approach avoids the ACCESS EXCLUSIVE lock issues on large tables and ensures zero downtime. The migration requires multiple deployments coordinated with application code changes.
Using SMALLINT or INT for primary keys risks ID exhaustion. SMALLINT maxes out at ~32,767 records, and INT at ~2.1 billion. While 2.1 billion seems large, active applications can exhaust this faster than expected, especially with high-frequency inserts, soft deletes, or partitioned data.
Changing the type later requires an ALTER COLUMN TYPE operation with a full table rewrite and ACCESS EXCLUSIVE lock.
-- SMALLINT exhausts at ~32K records
CREATE TABLE users (id SMALLINT PRIMARY KEY);
-- INT exhausts at ~2.1B records
CREATE TABLE posts (id INT PRIMARY KEY);
CREATE TABLE events (id INTEGER PRIMARY KEY);
-- Composite PKs with short integers still risky
CREATE TABLE tenant_events (
tenant_id BIGINT,
event_id INT, -- Will exhaust per tenant
PRIMARY KEY (tenant_id, event_id)
);
Use BIGINT for all primary keys to avoid exhaustion:
-- BIGINT: effectively unlimited (~9.2 quintillion)
CREATE TABLE users (id BIGINT PRIMARY KEY);
-- BIGSERIAL: auto-incrementing BIGINT
CREATE TABLE posts (id BIGSERIAL PRIMARY KEY);
-- Composite PKs with all BIGINT
CREATE TABLE tenant_events (
tenant_id BIGINT,
event_id BIGINT,
PRIMARY KEY (tenant_id, event_id)
);
Storage overhead: BIGINT uses 8 bytes vs INT's 4 bytes - only 4 extra bytes per row. For a 1 million row table, this is ~4MB of additional storage, which is negligible compared to the operational cost of changing column types later.
Safe exceptions: Small, finite lookup tables with <100 entries (e.g., status codes, country lists) can safely use smaller types. Use safety-assured to bypass the check for these cases.
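For such a lookup table, a safety-assured block documents the exception; a sketch (the table is hypothetical):
-- safety-assured:start
-- Safe because: finite lookup table with a handful of rows
CREATE TABLE order_statuses (
    id SMALLINT PRIMARY KEY,
    name TEXT NOT NULL
);
-- safety-assured:end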
Adding a SERIAL column to an existing table triggers a full table rewrite because PostgreSQL must populate sequence values for all existing rows. This acquires an ACCESS EXCLUSIVE lock and blocks all operations.
ALTER TABLE users ADD COLUMN id SERIAL;
ALTER TABLE users ADD COLUMN order_number BIGSERIAL;
Create the sequence separately, add the column without a default, then backfill:
-- Step 1: Create a sequence
CREATE SEQUENCE users_id_seq;
-- Step 2: Add the column WITHOUT default (fast, no rewrite)
ALTER TABLE users ADD COLUMN id INTEGER;
-- Outside migration: Backfill existing rows in batches
UPDATE users SET id = nextval('users_id_seq') WHERE id IS NULL;
-- Step 3: Set default for future inserts only
ALTER TABLE users ALTER COLUMN id SET DEFAULT nextval('users_id_seq');
-- Step 4: Set NOT NULL if needed (PostgreSQL 11+: safe if all values present)
ALTER TABLE users ALTER COLUMN id SET NOT NULL;
-- Step 5: Set sequence ownership
ALTER SEQUENCE users_id_seq OWNED BY users.id;
Key insight: Adding a column with DEFAULT nextval(...) on an existing table still triggers a table rewrite. The solution is to add the column first without any default, backfill separately, then set the default for future rows only.
In PostgreSQL, the json type has no equality operator, which breaks existing SELECT DISTINCT queries and other operations that require comparing values.
ALTER TABLE users ADD COLUMN properties JSON;
Use jsonb instead of json:
ALTER TABLE users ADD COLUMN properties JSONB;
Benefits of JSONB over JSON:
Supports equality and containment operators, so DISTINCT and comparisons work
Can be indexed (e.g., with GIN indexes) for efficient queries
Stored in a decomposed binary format that is faster to process
Note: The only advantage of JSON over JSONB is that it preserves exact formatting and key order, which is rarely needed in practice.
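One practical payoff of jsonb is indexability; a sketch (the GIN index and the CONCURRENTLY pattern follow the index guidance above):
-- GIN index speeds up containment (@>) and key-existence (?) queries on jsonb
CREATE INDEX CONCURRENTLY idx_users_properties ON users USING GIN (properties);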
Lock type: None (best practice warning)
CHAR and CHARACTER types are fixed-length and padded with spaces. This wastes storage and can cause subtle bugs with string comparisons and equality checks.
ALTER TABLE users ADD COLUMN country_code CHAR(2);
CREATE TABLE products (sku CHARACTER(10) PRIMARY KEY);
Use TEXT or VARCHAR instead:
-- For ALTER TABLE
ALTER TABLE users ADD COLUMN country_code TEXT;
ALTER TABLE users ADD COLUMN country_code VARCHAR(2);
-- For CREATE TABLE
CREATE TABLE products (sku TEXT);
CREATE TABLE products (sku VARCHAR(10));
-- Or TEXT with CHECK constraint for length validation
ALTER TABLE users ADD COLUMN country_code TEXT CHECK (length(country_code) = 2);
CREATE TABLE products (sku TEXT CHECK (length(sku) <= 10));
Why CHAR is problematic:
Padded values can break string comparisons and equality checks in application code (e.g., 'US' != 'US ')
Lock type: None (best practice warning)
TIMESTAMP (or TIMESTAMP WITHOUT TIME ZONE) stores values without timezone context, which can cause issues in multi-timezone applications, during DST transitions, and makes it difficult to determine the actual point in time represented.
-- ALTER TABLE
ALTER TABLE events ADD COLUMN created_at TIMESTAMP;
ALTER TABLE events ADD COLUMN updated_at TIMESTAMP WITHOUT TIME ZONE;
-- CREATE TABLE
CREATE TABLE events (
id SERIAL PRIMARY KEY,
created_at TIMESTAMP,
updated_at TIMESTAMP WITHOUT TIME ZONE
);
Use TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE) instead:
-- ALTER TABLE
ALTER TABLE events ADD COLUMN created_at TIMESTAMPTZ;
ALTER TABLE events ADD COLUMN updated_at TIMESTAMP WITH TIME ZONE;
-- CREATE TABLE
CREATE TABLE events (
id SERIAL PRIMARY KEY,
created_at TIMESTAMPTZ,
updated_at TIMESTAMP WITH TIME ZONE
);
Why TIMESTAMPTZ is better:
Stores an unambiguous point in time (values are normalized to UTC internally)
Converts to the client's session time zone on output
Avoids ambiguity around DST transitions
When TIMESTAMP without time zone might be acceptable:
Use safety-assured if you've confirmed timezone-naive timestamps are appropriate
TRUNCATE TABLE acquires an ACCESS EXCLUSIVE lock, blocking all operations (reads and writes) on the table. Unlike DELETE, TRUNCATE cannot be batched or throttled, making it unsuitable for large tables in production environments.
TRUNCATE TABLE users;
TRUNCATE TABLE orders, order_items;
Use DELETE with batching to incrementally remove rows while allowing concurrent access:
-- Delete rows in small batches to allow concurrent access
DELETE FROM users WHERE id IN (
SELECT id FROM users LIMIT 1000
);
-- Repeat the batched DELETE until all rows are removed
-- (Can be done outside migration with monitoring)
-- Optional: Reset sequences if needed
ALTER SEQUENCE users_id_seq RESTART WITH 1;
-- Optional: Reclaim space
VACUUM users;
Important: If you absolutely must use TRUNCATE (e.g., in a test environment or during a maintenance window), use a safety-assured block:
-- safety-assured:start
-- Safe because: running in test environment / maintenance window
TRUNCATE TABLE users;
-- safety-assured:end
Indexes with 4 or more columns are rarely effective. PostgreSQL can only use multi-column indexes efficiently when filtering on the leftmost columns in order. Wide indexes also increase storage costs and slow down write operations (INSERT, UPDATE, DELETE).
-- 4+ columns: rarely useful
CREATE INDEX idx_users_search ON users(tenant_id, email, name, status);
CREATE INDEX idx_orders_composite ON orders(user_id, product_id, status, created_at);
Use narrower, more targeted indexes based on actual query patterns:
-- Option 1: Partial index for specific query pattern
CREATE INDEX idx_users_active_email ON users(email)
WHERE status = 'active';
-- Option 2: Separate indexes for different queries
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status ON users(status);
-- Option 3: Covering index with INCLUDE (PostgreSQL 11+)
-- Includes extra columns for SELECT without adding them to index keys
CREATE INDEX idx_users_email_covering ON users(email)
INCLUDE (name, status);
-- Option 4: Two-column composite (still useful for some patterns)
CREATE INDEX idx_users_tenant_email ON users(tenant_id, email);
When wide indexes might be acceptable:
Use safety-assured if you've confirmed the index is necessary
Performance tip: PostgreSQL can combine multiple indexes using bitmap scans. Two separate indexes often outperform one wide index.
diesel-guard check migrations/2024_01_01_create_users/up.sql
diesel-guard check migrations/
diesel-guard check migrations/ --format json
Add diesel-guard to your CI pipeline to automatically check migrations on pull requests.
Use the official GitHub Action:
name: Check Migrations
on: [pull_request]
jobs:
check-migrations:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
# Pin to specific version (recommended for stability)
- uses: ayarotsky/diesel-guard@v0.4.0
with:
path: migrations/
Versioning:
@v0.4.0 installs diesel-guard v0.4.0
@main installs the latest version
Alternatives:
# Always use latest (gets new checks and fixes automatically)
- uses: ayarotsky/diesel-guard@main
with:
path: migrations/
This will install diesel-guard and check the migrations in the configured path on every pull request.
For more control or custom workflows:
name: Check Migrations
on: [pull_request]
jobs:
check-migrations:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
toolchain: stable
- name: Install diesel-guard
run: cargo install diesel-guard
- name: Check DB migrations
run: diesel-guard check migrations/
Create a diesel-guard.toml file in your project root to customize behavior.
Generate a documented configuration file:
diesel-guard init
Use --force to overwrite an existing file:
diesel-guard init --force
# Framework configuration (REQUIRED)
# Specify which migration framework you're using
# Valid values: "diesel" or "sqlx"
framework = "diesel"
# Skip migrations before this timestamp
# Accepts: YYYYMMDDHHMMSS, YYYY_MM_DD_HHMMSS, or YYYY-MM-DD-HHMMSS
# Works with any migration directory format
start_after = "2024_01_01_000000"
# Also check down.sql files (default: false)
check_down = true
# Disable specific checks
disable_checks = ["AddColumnCheck"]
AddColumnCheck - ADD COLUMN with DEFAULT
AddIndexCheck - CREATE INDEX without CONCURRENTLY
AddJsonColumnCheck - ADD COLUMN with JSON type
AddNotNullCheck - ALTER COLUMN SET NOT NULL
AddPrimaryKeyCheck - ADD PRIMARY KEY to existing table
AddSerialColumnCheck - ADD COLUMN with SERIAL
AddUniqueConstraintCheck - ADD UNIQUE constraint via ALTER TABLE
AlterColumnTypeCheck - ALTER COLUMN TYPE
CharTypeCheck - CHAR/CHARACTER column types
CreateExtensionCheck - CREATE EXTENSION
DropColumnCheck - DROP COLUMN
DropDatabaseCheck - DROP DATABASE
DropIndexCheck - DROP INDEX without CONCURRENTLY
DropPrimaryKeyCheck - DROP PRIMARY KEY
DropTableCheck - DROP TABLE
GeneratedColumnCheck - ADD COLUMN with GENERATED STORED
ReindexCheck - REINDEX without CONCURRENTLY
RenameColumnCheck - RENAME COLUMN
RenameTableCheck - RENAME TABLE
ShortIntegerPrimaryKeyCheck - SMALLINT/INT/INTEGER primary keys
TimestampTypeCheck - TIMESTAMP without time zone
TruncateTableCheck - TRUNCATE TABLE
UnnamedConstraintCheck - Unnamed constraints (UNIQUE, FOREIGN KEY, CHECK)
WideIndexCheck - Indexes with 4+ columns
When you've manually verified an operation is safe, use safety-assured comment blocks to bypass checks:
-- safety-assured:start
ALTER TABLE users DROP COLUMN deprecated_column;
ALTER TABLE posts DROP COLUMN old_field;
-- safety-assured:end
-- safety-assured:start
ALTER TABLE users DROP COLUMN email;
-- safety-assured:end
-- This will be checked normally
CREATE INDEX users_email_idx ON users(email);
-- safety-assured:start
ALTER TABLE posts DROP COLUMN body;
-- safety-assured:end
Only use when you've taken proper precautions:
For DROP COLUMN: confirm application code no longer references the column and those changes are already deployed
For other operations: document why the operation is safe (e.g., empty table, maintenance window) in a comment inside the block
-- safety-assured:start
-- Safe because: table is empty, deployed in maintenance window
ALTER TABLE new_table ADD COLUMN status TEXT DEFAULT 'pending';
-- safety-assured:end
Diesel Guard will error if blocks are mismatched:
Error: Unclosed 'safety-assured:start' at line 1
diesel-guard is on a roadmap to v1 with 3 major phases, each adding progressively more sophisticated migration safety capabilities.
Goal: Implement all linter checks that work through SQL parsing alone, without requiring database connection.
Goal: Add optional database connection to provide context-aware checking based on actual table state, relationships, and PostgreSQL version.
SHOW lock_timeout and SHOW statement_timeout to check effective values
SET lock_timeout = '2s' and SET statement_timeout = '5s' in migrations
DROP PRIMARY KEY (currently limited) - Currently relies on naming conventions (e.g., users_pkey)
ADD COLUMN with DEFAULT (currently version-agnostic)
ALTER COLUMN TYPE (currently assumes all unsafe)
Short Integer Primary Keys (currently checks all tables)
CREATE EXTENSION (currently always warns)
ADD NOT NULL (currently always recommends CHECK constraint)
# diesel-guard.toml
# Optional: connect to database for enhanced checking
url = "postgresql://user:pass@localhost/dbname"
enabled = false # Default: false (static analysis only)
# Table size thresholds for warnings
large_table_rows = 100000
large_table_size_mb = 1024
Goal: Integration with Diesel and SQLx migration ecosystems to become the single tool for both safety checking and migration execution.
Native migration CLI replacement
# Instead of:
diesel migration run
diesel migration revert
# Users can:
diesel-guard migrate run # Checks safety, runs with sqlx migrate run
diesel-guard migrate revert # Checks safety, reverts migrations
Automatic safety checking before execution
Migration generation with safe templates
# Works for both Diesel and SQLx based on diesel-guard.toml config
diesel-guard migration generate add_user_email
# Generates migration with safety comments and safe patterns
# (format depends on configured framework)
#
# Example for Diesel:
# up.sql:
# -- Migration: Add email column to users
# -- Safe pattern: Add without default, backfill separately
#
# SET lock_timeout = '2s';
# SET statement_timeout = '5s';
#
# ALTER TABLE users ADD COLUMN email TEXT;
Interactive migration review
diesel-guard migrate review
# Shows:
# ✓ 2024_01_01_create_users - Safe
# ⚠ 2024_01_02_add_email - 1 warning (ADD COLUMN with DEFAULT)
# ✗ 2024_01_03_drop_column - 1 violation (DROP COLUMN)
#
# Run migrations? (y/N/review)
Diesel schema.rs validation
Migration splitting assistant
diesel-guard migrate split 2024_01_02_add_email
# Automatically splits unsafe migration into safe multi-step migrations:
# 2024_01_02_01_add_email_column
# 2024_01_02_02_add_email_default
# diesel-guard.toml
# Enforce timeout settings when running migrations
enforce_timeouts = true
# Timeouts applied if migration doesn't specify them
# These are set before each migration runs
lock_timeout = "2s"
statement_timeout = "5s"
idle_in_transaction_session_timeout = "10s"
# Include timeout settings in generated migrations
include_timeouts = true
We welcome contributions! See CONTRIBUTING.md for development setup and testing guide.
For AI assistants working on this project, see AGENTS.md for detailed implementation patterns.
Inspired by strong_migrations by Andrew Kane
MIT