Automated encrypted backups with deduplication to local disk, SFTP, Backblaze B2, and S3 storage. Leverages Duplicacy CLI v3.2.5 to follow the 3-2-1 Backup Strategy while removing the complexity of manual configuration.
Archiver automates backing up directories to multiple remote storage locations with encryption and deduplication. Configure once, then backups run automatically on a schedule.
Each directory gets backed up independently, with optional pre/post-backup scripts for service-specific needs (like database dumps). Backups are encrypted at rest and deduplicated across all your directories to save storage space.
Supports local disk, SFTP (Synology NAS, etc.), Backblaze B2, and S3-compatible storage.
Direct installation on host systems is no longer supported. All deployments must now use Docker.
If you're currently running Archiver v0.6.5 or earlier directly on your host system, see the Legacy to Docker Migration Guide for step-by-step upgrade instructions.
New users: Continue reading below to get started.
- Encrypted & Deduplicated: Block-level deduplication minimizes storage, RSA encryption secures data
- Multiple Backends: Local disk, SFTP, Backblaze B2, S3-compatible storage
- Automated Rotation: Configurable retention policies (keep daily, weekly, monthly snapshots)
- Service Integration: Pre/post-backup scripts, custom restore procedures
- Notifications: Pushover alerts for successes and failures
- Easy Restoration: Interactive restore script to recover specific revisions
- Docker installed
- Docker Compose (optional, for easier management)
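You can verify both are available before proceeding:

```bash
docker --version
docker compose version
```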
Prepare at least one storage location before running init. The sections below cover setup for each supported backend.
**Local disk setup**
Local disk storage is the simplest and fastest option, ideal as your primary backup target. Backups can then be copied from local storage to remote locations (SFTP, B2, S3) for off-site redundancy.
You'll need:

- A local directory path (e.g., `/mnt/backup/storage`)
- Sufficient disk space for your backups
- Proper read/write permissions

1. Create the backup directory:

   ```bash
   sudo mkdir -p /mnt/backup/storage
   ```

2. Set appropriate permissions:

   ```bash
   sudo chown -R $USER:$USER /mnt/backup/storage
   sudo chmod 755 /mnt/backup/storage
   ```

3. Verify the directory is accessible:

   ```bash
   ls -la /mnt/backup/storage
   ```
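Optionally, confirm the target filesystem has enough free space for your backups:

```bash
df -h /mnt/backup/storage
```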
Tip: For the 3-2-1 backup strategy, use local disk as your primary storage target for fast backups, then configure additional remote storage targets to copy backups off-site automatically.
**Synology (SFTP) setup**
Enable SFTP:

- Log in to the Synology DSM Web UI (usually `http://<nas-ip>:5000`)
- Open Control Panel → File Services → FTP tab
- Enable SFTP (not FTP/FTPS)
- Default port 22 is fine
- Click Apply
Create a backup user:

- Control Panel → User & Group → Create
- Set Name and Password
- Assign to appropriate Groups
- Grant shared folder permissions
- Under Application Permissions, allow SFTP
- Complete the wizard and click Done
Create a shared folder:

- Control Panel → Shared Folder → Create
- Name the folder and configure settings
- Don't enable WriteOnce (incompatible with backups)
- If using Btrfs, enable data checksum
- Grant Read/Write access to the backup user
Set up SSH key authentication. Generate an SSH key first (the init script can do this), then:
- Control Panel → User & Group → Advanced
- Enable user home service
- Open File Station → homes folder → your user folder
- Create a `.ssh` folder if it doesn't exist
- Upload or create an `authorized_keys` file containing your public key (`ssh-ed25519 AAAA...`)
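If you'd rather handle the key from a shell instead of the File Station UI, here's a minimal sketch (the user, host, and key path are examples matching the SFTP config shown later; the init script can also generate the key for you):

```bash
# Generate a key pair (example path and comment)
ssh-keygen -t ed25519 -f ~/.ssh/archiver_ed25519 -C "archiver-backup"

# Install the public key on the NAS (assumes password auth is still enabled)
ssh-copy-id -i ~/.ssh/archiver_ed25519.pub -p 22 backup@192.168.1.100

# Verify key-based login works
ssh -i ~/.ssh/archiver_ed25519 -p 22 backup@192.168.1.100 'echo ok'
```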
**Backblaze B2 setup**
- Create account or sign in
- My Settings → Enable B2 Cloud Storage
- Buckets → Create a Bucket
- Choose unique Bucket Name
- Files: Private
- Encryption: Enable
- Object Lock: Disable
- Lifecycle: Keep all versions (default)
- Application Keys → Add New
- Name the key
- Allow access to your bucket
- Type: Read and Write
- Enable List All Bucket Names
- Click Create New Key
- Save the keyID and applicationKey (shown only once)
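To sanity-check the key before wiring it into Archiver, you can call the B2 authorize endpoint directly (the credentials below are placeholders):

```bash
# Substitute the keyID and applicationKey you saved.
curl -s -u "KEY_ID:APPLICATION_KEY" \
  https://api.backblazeb2.com/b2api/v2/b2_authorize_account
# A JSON response containing an authorizationToken confirms the key works.
```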
**S3-compatible storage setup**
S3 providers vary, but you'll need:
- Bucket Name (globally unique)
- Endpoint (e.g., `s3.amazonaws.com` or `s3.us-east-1.wasabisys.com`)
- Region (optional, provider-specific, e.g., `us-east-1`)
- Access Key ID (with read/write permissions)
- Secret Access Key
Create these through your S3 provider's console (AWS, Wasabi, Backblaze S3 API, etc.).
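If you have the AWS CLI installed, a quick credential check looks like this (bucket, endpoint, and keys are all placeholders):

```bash
# Lists the bucket contents if the credentials and endpoint are valid.
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY \
aws s3 ls s3://my-bucket --endpoint-url https://s3.us-east-1.wasabisys.com
```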
**Pushover setup**
- Create account or sign in
- Note your User Key from the dashboard
- Add a device to receive notifications
- Create an Application/API Token
- Name your app and agree to terms
- Save the API Token/Key
You'll enter the User Key and API Token during init.
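To confirm the credentials work, you can send a test notification directly to the Pushover API (placeholder values):

```bash
# Substitute the User Key and API Token you saved.
curl -s \
  --form-string "token=YOUR_API_TOKEN" \
  --form-string "user=YOUR_USER_KEY" \
  --form-string "message=Archiver test notification" \
  https://api.pushover.net/1/messages.json
```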
Skip this step if you already have a bundle file (e.g., `bundle.tar.enc` or `export-*.tar.enc` from a previous installation).
For new installations, run initialization interactively to generate your configuration bundle:
```bash
docker run -it --rm \
  -v ./archiver-bundle:/opt/archiver/bundle \
  ghcr.io/sisyphusmd/archiver:0.7.0 init
```

This creates `archiver-bundle/bundle.tar.enc` with your configuration and keys.
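Before moving on, confirm the bundle was written:

```bash
ls -l archiver-bundle/bundle.tar.enc
```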
Create `compose.yaml`:

```yaml
services:
  archiver:
    container_name: archiver
    image: ghcr.io/sisyphusmd/archiver:0.7.0
    restart: unless-stopped
    stop_grace_period: 2m          # Allow time for graceful shutdown and cleanup
    hostname: backup-server        # Used for backup service label (optional)
    environment:
      BUNDLE_PASSWORD: "your-bundle-password-here"  # Escape $ as $$ (e.g., my$pass → my$$pass)
      CRON_SCHEDULE: "0 3 * * *"   # Ex: daily at 3am, or omit for manual mode
      TZ: "America/New_York"       # Timezone for cron and timestamps (default: UTC)
    volumes:
      - ./archiver-bundle:/opt/archiver/bundle        # Bundle file (required)
      - ./archiver-logs:/opt/archiver/logs            # Persistent logs (optional)
      - /path/to/host/backup-dir:/mnt/backup-dir      # Data to back up (must match config.sh)
      # - /var/run/docker.sock:/var/run/docker.sock   # For docker exec in scripts (optional)
      # - /path/to/host/restore-dir:/mnt/restore-dir  # Restore location (will be prompted)
```

Update paths and password, then start:
```bash
docker compose up -d
```

If your backup scripts need to control other containers (e.g., `docker exec` for database dumps), mount the socket:
```yaml
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
```

Security Warning: This grants root-level access to the Docker daemon. The container can start/stop/delete any container or access any data. Only use if necessary.
The `stop_grace_period: 2m` setting allows the container time to complete cleanup when stopped. When `docker compose down` or `docker stop` is called, Archiver will:
- Complete any running pre-backup hooks
- Run post-backup hooks to restore services (e.g., restart databases, remove snapshots)
- Terminate gracefully
If your post-backup hooks take longer than 2 minutes, increase this value accordingly.
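If you only occasionally need a longer window, `docker stop` also accepts a one-off timeout override in seconds:

```bash
# Allow up to 5 minutes for post-backup hooks before the container is killed.
docker stop -t 300 archiver
```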
| Variable | Required | Description |
|---|---|---|
| `BUNDLE_PASSWORD` | Yes | Password for decrypting `bundle.tar.enc`. If the password contains `$`, escape it as `$$` (e.g., `my$password` → `my$$password`) |
| `CRON_SCHEDULE` | No | Cron expression for automatic backups (empty = manual mode) |
| `TZ` | No | Timezone for cron scheduling (default: `UTC`) |
- `0.7.0` - Specific version (recommended)
- `0.7` - Minor version (receives patches automatically)
- `0` - Major version (receives minor/patch updates)
The `config.sh` file defines what to back up and where. See Editing Configuration for how to modify it in Docker.
Directories to back up. Use `*` for subdirectories:
```bash
SERVICE_DIRECTORIES=(
  "/srv/*/"            # Each subdirectory as a separate repository
  "/home/user/data/"   # Single repository
)
```

Define multiple storage locations (local disk, SFTP, B2, S3):
Note: Storage names should only contain letters, numbers, and underscores. Other characters will be automatically sanitized (e.g., `my-storage` → `my_storage`).
```bash
# Primary storage (required)
STORAGE_TARGET_1_NAME="local"
STORAGE_TARGET_1_TYPE="local"
STORAGE_TARGET_1_LOCAL_PATH="/mnt/backup/storage"

# Secondary storage (optional)
STORAGE_TARGET_2_NAME="nas"
STORAGE_TARGET_2_TYPE="sftp"
STORAGE_TARGET_2_SFTP_URL="192.168.1.100"
STORAGE_TARGET_2_SFTP_PORT="22"
STORAGE_TARGET_2_SFTP_USER="backup"
STORAGE_TARGET_2_SFTP_PATH="/volume1/backups"

# Tertiary storage (optional)
STORAGE_TARGET_3_NAME="backblaze"
STORAGE_TARGET_3_TYPE="b2"
STORAGE_TARGET_3_B2_BUCKETNAME="my-bucket"
STORAGE_TARGET_3_B2_ID="keyID"
STORAGE_TARGET_3_B2_KEY="applicationKey"

# Quaternary storage (optional)
STORAGE_TARGET_4_NAME="hetzner"
STORAGE_TARGET_4_TYPE="s3"
STORAGE_TARGET_4_S3_BUCKETNAME="my-bucket"
STORAGE_TARGET_4_S3_ENDPOINT="endpoint"
STORAGE_TARGET_4_S3_REGION="none"
STORAGE_TARGET_4_S3_ID="id"
STORAGE_TARGET_4_S3_SECRET="secret"
```

Encryption settings:

```bash
STORAGE_PASSWORD="encryption-password-for-duplicacy-storage"
RSA_PASSPHRASE="passphrase-for-rsa-private-key"
```

Rotation settings:

```bash
ROTATE_BACKUPS="true"   # Enable automatic pruning after backups
PRUNE_KEEP="-keep 0:180 -keep 30:30 -keep 7:7 -keep 1:1"
```

The default retention policy keeps:
- All backups younger than 1 day old
- 1 backup every 1 day for backups older than 1 day (`-keep 1:1`)
- 1 backup every 7 days for backups older than 7 days (`-keep 7:7`)
- 1 backup every 30 days for backups older than 30 days (`-keep 30:30`)
- Delete all backups older than 180 days (`-keep 0:180`)
Format: `-keep n:m` means keep 1 snapshot every `n` days if the snapshot is at least `m` days old.
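The same format composes for other cadences; for example (illustrative values only, not a recommendation):

```bash
# Illustrative alternative: keep everything for a week, dailies until 30 days,
# weeklies until 90 days, monthlies until a year, nothing older than a year.
# Note the -keep options are ordered by descending age threshold, as in the default.
PRUNE_KEEP="-keep 0:365 -keep 30:90 -keep 7:30 -keep 1:7"
```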
You can override `ROTATE_BACKUPS` for individual backup runs:

```bash
docker exec archiver archiver start prune    # Force pruning this run
docker exec archiver archiver start retain   # Skip pruning this run
```

If multiple repositories back up to the same storage target, only ONE should run prune to avoid race conditions (a config sketch follows the list):
- Set `ROTATE_BACKUPS="false"` in all but one repository's config
- The designated repository will prune for all snapshot IDs using the `-all` flag
- Prune uses the `-exhaustive` flag to remove ALL unreferenced chunks, including:
  - Orphaned chunks from manually deleted snapshots
  - Incomplete backup chunks from interrupted operations
  - Unreferenced chunks from any source
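A minimal sketch of that arrangement, assuming two repositories share one storage target:

```bash
# Repository A's config.sh -- the designated pruner
ROTATE_BACKUPS="true"

# Repository B's config.sh -- backs up only, never prunes
ROTATE_BACKUPS="false"
```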
See the Duplicacy prune documentation for more details on the two-step fossil collection algorithm.
```bash
DUPLICACY_THREADS="10"
```

Number of parallel upload/download threads for Duplicacy operations (default: 4).
```bash
NOTIFICATION_SERVICE="Pushover"
PUSHOVER_USER_KEY="userKey"
PUSHOVER_API_TOKEN="apiToken"
```

With `CRON_SCHEDULE` set, backups run automatically. Without it, run commands manually.
```bash
# Follow logs
docker exec -it archiver archiver logs
docker logs --tail 20 -f archiver

# Check status and health
docker exec archiver archiver status
docker exec archiver archiver healthcheck

# Run backups
docker exec archiver archiver start
docker exec archiver archiver start logs     # with log viewing
docker exec archiver archiver start prune    # force rotation
docker exec archiver archiver start retain   # force retention

# Control a running backup
docker exec archiver archiver pause
docker exec archiver archiver resume
docker exec archiver archiver stop

# Bundle management
docker exec -it archiver archiver bundle export
docker exec -it archiver archiver bundle import
```

Full command reference:

```bash
archiver start              # Run backup now
archiver start logs         # Run backup and follow logs
archiver start prune        # Run backup and force prune (ignore config)
archiver start retain       # Run backup without pruning (ignore config)
archiver stop               # Stop backup gracefully (completes cleanup)
archiver stop --immediate   # Stop backup immediately (skip cleanup)
archiver restart            # Stop then start backup
archiver pause              # Pause backup (experimental)
archiver resume             # Resume paused backup (experimental)
archiver logs               # Follow backup logs
archiver status             # Check if backup is running
archiver bundle export      # Create encrypted config/keys bundle
archiver bundle import      # Import from encrypted bundle
archiver restore            # Restore data from backup
archiver healthcheck        # Check system health
archiver help               # Show help
```

Before restoring, ensure you have a volume mount for the restore destination directory. The restore script will interactively prompt you for:
- Which storage target to restore from
- Snapshot ID to restore
- Local directory path (where to restore the files)
- Which revision to restore
```bash
# Check status first (ensure no backup is running)
docker exec archiver archiver status

# Run interactive restore
docker exec -it archiver archiver restore
```

The restore destination can be any path accessible within the container. If you need to restore to a new location not currently mounted, add a volume mount and restart the container first.
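If you take that route, recreate the container after editing `compose.yaml` so the new mount is picked up:

```bash
docker compose up -d --force-recreate archiver
```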
For a one-time restore without modifying your running container, use a temporary container:
```bash
# One-off restore (container exits after completion)
docker run --rm -it \
  -e BUNDLE_PASSWORD='your-bundle-password-here' \
  -v /path/to/bundle/dir:/opt/archiver/bundle \
  -v /path/to/restore/destination:/mnt/restore \
  ghcr.io/sisyphusmd/archiver:0.7.0 \
  archiver restore
```

When prompted for the local directory path during restore, enter the container path (e.g., `/mnt/restore`). The restored files will appear on your host at `/path/to/restore/destination`.
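After the restore completes, the files should be visible on the host side of the mount:

```bash
ls -la /path/to/restore/destination
```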
Create `service-backup-settings.sh` in any service directory:
```bash
#!/bin/bash

# Custom file filters
DUPLICACY_FILTERS_PATTERNS=(
  "+*.txt"
  "-*.tmp"
  "+*"
)

# Run before backup
service_specific_pre_backup_function() {
  echo "Dumping database..."
  docker exec postgres-container pg_dump -U user dbname > backup.sql
}

# Run after backup
service_specific_post_backup_function() {
  echo "Cleaning up..."
  rm -f backup.sql
}
```

Create `restore-service.sh` in any service directory to run post-restore tasks:
```bash
#!/bin/bash
# Runs after restoration completes

echo "Importing database..."
docker exec postgres-container psql -U user -d dbname -f /backup/dump.sql

echo "Setting permissions..."
chown -R 1000:1000 /mnt/restored-data

echo "Starting services..."
docker compose up -d
```

- Legacy to Docker Migration - Migrating from v0.3.2-v0.6.5 to Docker-only v0.7.0
- Uninstalling Legacy Installation - Removing legacy installation after migration
- Editing Configuration - How to edit config in Docker environment
- Local Storage Setup - Adding local disk as primary backup target
- SSH Key Management - Creating and managing SSH keys for SFTP
Archiver is free and open-source software licensed under GNU AGPL-3.0.
Archiver uses the Duplicacy CLI v3.2.5 binary as an external tool. Duplicacy is licensed separately under its own terms:
- Free for personal use and commercial trials
- Requires a CLI license for non-trial commercial use ($50/computer/year from duplicacy.com)
What counts as commercial use? Backing up files related to employment or for-profit activities.
Note: Restore and management operations (restore, check, copy, prune) never require a license. Only the backup command requires a license for commercial use.
If you're using Archiver commercially, please purchase a Duplicacy CLI license to support the project that makes this tool possible.