| Crates.io | zesty-backup |
| lib.rs | zesty-backup |
| version | 1.0.2 |
| created_at | 2025-11-19 07:43:17.635627+00 |
| updated_at | 2025-11-19 09:02:48.542437+00 |
| description | A flexible, multi-provider backup utility for cloud storage |
| homepage | |
| repository | https://github.com/rc-basilisk/zesty-backup |
| max_upload_size | |
| id | 1939585 |
| size | 304,241 |
A flexible, multi-provider backup utility for cloud storage, written in Rust. Supports multiple cloud storage providers including AWS S3, Google Cloud Storage, Azure Blob Storage, Backblaze B2, and S3-compatible services.
Supported providers:

- provider = "aws" or provider = "s3" ✅ Fully supported
- provider = "contabo" ✅ Fully supported (S3-compatible)
- provider = "digitalocean" ✅ Fully supported (S3-compatible)
- provider = "wasabi" ✅ Fully supported (S3-compatible)
- provider = "minio" ✅ Fully supported (S3-compatible)
- provider = "r2" ✅ Fully supported (S3-compatible)
- provider = "b2" or provider = "backblaze" ✅ Fully supported
- provider = "gcs" or provider = "google" ✅ Fully supported
- provider = "azure" ✅ Fully supported
- provider = "googledrive" or provider = "gdrive" ✅ Fully supported
- provider = "onedrive" ✅ Fully supported
- provider = "dropbox" ✅ Fully supported
- provider = "box" ✅ Fully supported
- provider = "pcloud" ✅ Fully supported
- provider = "mega" ✅ Fully supported (requires MEGAcmd)

Note:
- Consumer-grade providers require OAuth2 access tokens (see configuration examples)
- MEGA requires MEGAcmd to be installed (handles client-side encryption automatically)
- GCS requires service account credentials (see configuration examples)
- Azure requires storage account name and access key (see configuration examples)
Prerequisites: pg_dump installed (for PostgreSQL database backups).

git clone https://github.com/rc-basilisk/zesty-backup.git
cd zesty-backup
cargo build --release
The binary will be at target/release/zesty-backup.
sudo cp target/release/zesty-backup /usr/local/bin/zesty-backup
zesty-backup generate-config
This creates config.toml.example with all available options.
Copy the example config and edit it:
cp config.toml.example config.toml
nano config.toml
Minimum required configuration:
- Set project_path to the directory you want to back up
- Set local_backup_dir for local backup storage

zesty-backup backup
zesty-backup upload
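Putting those two settings together, a minimal config.toml might look like the sketch below (the paths are placeholders; run generate-config for the full option list):

```toml
[backup]
# Directory tree to back up (placeholder path)
project_path = "/var/www/myapp"
# Where local backup archives are written
local_backup_dir = "./backups"
```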
# Create an incremental backup
zesty-backup backup
# Create a full backup
zesty-backup backup --full
# List local backups
zesty-backup list
# List remote backups
zesty-backup list --remote
# Upload backups to cloud storage
zesty-backup upload
# Upload a specific backup file
zesty-backup upload --file ./backups/backup-20240101-120000.tar.zst
# Download a backup from cloud storage
zesty-backup download backup-20240101-120000.tar.zst --output ./restored
# Clean old backups (dry run)
zesty-backup clean --dry-run
# Clean old backups (actually delete)
zesty-backup clean
# Restore from a backup file
zesty-backup restore ./backups/backup-20240101-120000.tar.zst --target /path/to/restore
# Show backup system status
zesty-backup status
# Show recent logs
zesty-backup logs
# Generate example configuration
zesty-backup generate-config
Run as a background service with automatic scheduled backups:
zesty-backup daemon \
--backup-interval 6 \
--upload-interval 24 \
--pid-file /var/run/zesty-backup.pid
Access your backups from any machine without a full config file:
# List remote backups
zesty-backup client \
--provider s3 \
--endpoint https://s3.amazonaws.com \
--region us-east-1 \
--bucket my-backups \
--access-key YOUR_KEY \
--secret-key YOUR_SECRET \
list
# Download a backup
zesty-backup client \
--provider s3 \
--endpoint https://s3.amazonaws.com \
--region us-east-1 \
--bucket my-backups \
--access-key YOUR_KEY \
--secret-key YOUR_SECRET \
download backup-20240101-120000.tar.zst \
--output ./restored
Or use a config file:
zesty-backup client --config config.toml list
Consumer-grade providers (Google Drive, OneDrive, Dropbox, Box) require OAuth2 access tokens:
Note: Access tokens expire. For production use, implement token refresh or use long-lived tokens where available.
[storage]
provider = "aws"
region = "us-east-1"
bucket = "my-backups"
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
# endpoint can be omitted for AWS
Requires service account credentials. Get them from Google Cloud Console.
[storage]
provider = "gcs" # or "google"
bucket = "my-backups"
credentials_path = "/path/to/service-account-key.json" # Optional: uses GOOGLE_APPLICATION_CREDENTIALS env var if not set
Note: You can either:
- Set credentials_path in the config file, or
- Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to your service account JSON key file, or
- Use default credentials from gcloud if you're running on a GCP instance
Requires storage account name and access key. Get them from Azure Portal.
[storage]
provider = "azure"
account_name = "mystorageaccount"
account_key = "your-account-key" # Optional: can also use AZURE_STORAGE_ACCOUNT_KEY env var
bucket = "my-container" # Azure uses "container" instead of "bucket"
Note: You can either:
- Set account_key in the config file, or
- Set the AZURE_STORAGE_ACCOUNT_KEY environment variable

Azure Blob Storage uses "containers" instead of "buckets", but the config uses bucket for consistency with other providers.
[storage]
provider = "b2"
account_id = "your-account-id"
application_key = "your-application-key"
bucket_id = "your-bucket-id"
bucket = "my-backups"
Requires OAuth2 access token. Get one from Google Cloud Console.
[storage]
provider = "googledrive" # or "gdrive"
access_key = "ya29.a0AfH6SMC..." # OAuth2 access token
bucket_id = "folder-id-here" # Optional: Google Drive folder ID (defaults to root)
Requires OAuth2 access token. Get one from Azure Portal.
[storage]
provider = "onedrive"
access_key = "eyJ0eXAiOiJKV1QiLCJub..." # OAuth2 access token
bucket_id = "/drive/root:/Backups" # Optional: folder path (defaults to root)
Requires access token. Get one from Dropbox App Console.
[storage]
provider = "dropbox"
access_key = "sl.Bk..." # Dropbox access token
bucket_id = "/Backups" # Optional: folder path (defaults to root)
Requires OAuth2 access token. Get one from Box Developer Console.
[storage]
provider = "box"
access_key = "T9cE5asOhuy8CC6..." # OAuth2 access token
bucket_id = "123456789" # Optional: folder ID (defaults to root folder "0")
Requires API access token. Get one from pCloud API Keys.
[storage]
provider = "pcloud"
access_key = "your-api-access-token" # Get from https://my.pcloud.com/#page=apikeys
region = "us" # "us" for US data center (default) or "eu" for European data center
bucket_id = "/Backups" # Optional: folder path (defaults to root "/")
Note: pCloud has two data centers (US and EU). Set region = "eu" if your account is in the European data center. The provider automatically uses the correct API endpoint.
Uses MEGAcmd (official MEGA command-line tool) which handles client-side encryption automatically.
Prerequisites: Install MEGAcmd from https://mega.nz/cmd
[storage]
provider = "mega"
account_name = "your-email@example.com" # MEGA email
account_key = "your-password" # MEGA password
bucket_id = "/Backups" # Optional: folder path (defaults to root "/")
Note: MEGA uses client-side encryption, which MEGAcmd handles automatically. The provider will automatically log in using your credentials and manage the encryption/decryption process.
[storage]
provider = "digitalocean"
endpoint = "https://nyc3.digitaloceanspaces.com"
region = "nyc3"
bucket = "my-backups"
access_key = "your-spaces-key"
secret_key = "your-spaces-secret"
[storage]
provider = "contabo"
endpoint = "https://eu2.contabostorage.com"
region = "eu2"
bucket = "my-backups"
access_key = "your-access-key"
secret_key = "your-secret-key"
[backup]
# Local directory for storing backups
local_backup_dir = "./backups"
# Main directory to back up
project_path = "/var/www/myapp"
# Additional files/directories to include
additional_paths = [
"/etc/nginx/nginx.conf",
"/etc/nginx/sites-available/myapp",
]
# Number of incremental backups per day
incremental_per_day = 4
# Upload interval in hours
upload_interval_hours = 24
# Retention period in days
retention_days = 7
# Compression level (0-22)
# 0 = no compression, 3 = balanced, 22 = maximum
compression_level = 3
# Paths to exclude (supports patterns)
exclude = [
"node_modules",
".git",
"*.log",
]
Supports multiple database types: postgres, mariadb, mysql, mongodb, cassandra, scylla, redis, sqlite
[database]
enabled = true
type = "postgres" # postgres, mariadb, mysql, mongodb, cassandra, scylla, redis, sqlite
host = "localhost"
port = 5432
database = "myapp_db"
username = "myuser"
password = "your_password" # Optional: can also use DB_PASSWORD env var or .env file
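As an alternative to hard-coding password, the comment above mentions the DB_PASSWORD environment variable and a .env file. A sketch of the .env route (assuming the file sits in the working directory; exact lookup behavior is per that comment):

```
# .env -- keep this file out of version control and readable only by you
DB_PASSWORD=your_password
```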
[system]
systemd_services = [
"myapp.service",
"myapp-worker.service",
]
systemd_timers = [
"myapp-cleanup.timer",
]
Back up the output of any command as a text file. This is a general pattern that works for any command:
[system]
command_outputs = [
{ command = "docker", args = ["ps", "-a"], output_file = "docker_containers.txt", enabled = true },
{ command = "systemctl", args = ["list-units", "--type=service"], output_file = "systemd_services.txt", enabled = true },
{ command = "dpkg", args = ["-l"], output_file = "installed_packages.txt", enabled = true },
]
Quick configuration presets for common backup needs:
[system.presets]
# Nginx configuration
nginx_enabled = true # Backs up /etc/nginx/nginx.conf and sites-available/enabled
nginx_sites = [
"example.com",
"another-site.com",
]
# Crontab
crontab_enabled = true
crontab_user = null # null = current user, or specify like "www-data"
# User config files (from home directory)
user_configs = [
".zshrc",
".bashrc",
".vimrc",
".gitconfig",
]
user_configs_home = null # null = $HOME, or specify path
# Common /etc files and directories
etc_files = [
"hosts",
"fstab",
]
etc_dirs = [
"ssl",
"letsencrypt",
]
Create a systemd service file at /etc/systemd/system/zesty-backup.service:
[Unit]
Description=Zesty Backup Service
After=network.target
[Service]
Type=simple
User=your-user
WorkingDirectory=/opt/zesty-backup
ExecStart=/usr/local/bin/zesty-backup daemon \
--backup-interval 6 \
--upload-interval 24 \
--pid-file /var/run/zesty-backup.pid \
--config /opt/zesty-backup/config.toml
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable zesty-backup
sudo systemctl start zesty-backup
What gets backed up:

- The project directory at project_path (respects exclude patterns)
- Everything listed in additional_paths
- systemd units listed in [system.systemd_services] and [system.systemd_timers]
- Database dumps configured in [database] (supports postgres, mariadb, mysql, mongodb, cassandra, scylla, redis, sqlite)
- Command outputs ([system.command_outputs])
- Nginx configuration when nginx_enabled = true in presets
- Crontab when crontab_enabled = true in presets
- User config files ([system.presets.user_configs])
- Selected files and directories under /etc/ (configured in presets)

Zesty Backup uses zstd compression with configurable levels:
Choose based on your priorities: speed vs. storage space.
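As illustrative settings within the documented 0-22 range (the actual speed/size trade-off depends on your data):

```toml
[backup]
# Fastest backups, largest archives
compression_level = 1

# Balanced (the suggested default)
# compression_level = 3

# Near-maximum compression, much slower backups
# compression_level = 19
```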
Security notes:

- Never commit config.toml with real credentials to version control
- Use the DB_PASSWORD environment variable for database passwords
- Ensure config.toml has restrictive permissions: chmod 600 config.toml

Troubleshooting:

- Check that project_path exists and is readable
- Check that local_backup_dir is writable
- Review recent activity with zesty-backup logs
- Ensure the database dump tools are installed (pg_dump, mysqldump, mongodump, etc.) and in PATH
- Ensure the database password is set (in config, the DB_PASSWORD env var, or a .env file)

Contributions welcome! Open an issue or submit a PR. Keep it simple: format code with cargo fmt, run cargo clippy, and test your changes.
MIT License - do what you want with it.
# Build the image
docker build -t zesty-backup .
# Run a backup
docker run --rm -v $(pwd)/config:/app/config:ro -v $(pwd)/backups:/app/backups zesty-backup backup
# Run as daemon
docker-compose up -d
See Dockerfile and docker-compose.yml for more details.
# Run all tests
cargo test
# Run specific test suite
cargo test --test integration_test
cargo test --test provider_test
cargo test --test backup_test
See the examples/ directory for usage examples.