Also, this is part of my journey learning Go, so expect some rough edges and "I'm still figuring this out" moments. Lots of vibing. Feedback welcome, but no promises on feature requests. 😊
btrfs-backup creates read-only BTRFS snapshots of your subvolumes and sends
them to a remote host via SSH. It handles both full and incremental backups,
with optional age encryption, SHA256 checksum verification, and automatic
cleanup of old backups.
Locally, I only ever keep one snapshot. On the remote server, you can configure your retention. I keep a week's worth of incremental backups and do a full backup once a week. I never keep more than this, and the tool is built around this concept.
Think of it as my opinionated take on "how do I back up my BTRFS system to a remote server without thinking about it too much."
I wanted:
- Automated BTRFS backups to a remote server
- To back up to a non-BTRFS filesystem, which means storing raw `btrfs send` streams as files
- No need to maintain hourly snapshots locally (I use snapper for that)
- Incremental backups to save bandwidth and storage
- Encryption support (because sometimes the target is offsite)
- A learning project to get better at Go
There are other tools out there, but I wanted something simple that fit my specific workflow, and building it myself seemed like a good way to learn Go.
You'll need:

- BTRFS filesystem (obviously)
- SSH access to remote backup host
- `age` binary if using encryption (https://age-encryption.org)
- Root access (needed for BTRFS operations)
Build from source:

```sh
git clone https://github.com/RichGuk/btrfs-backup.git
cd btrfs-backup
go build
sudo cp btrfs-backup /usr/local/bin/
```

Create `/etc/btrfs-backup.yaml` with your settings:

```yaml
# SSH configuration
ssh_key: /root/.ssh/id_ed25519
remote_host: backup@backup-server.example.com
remote_dest: /data/backups
# Backup policy
max_age_days: 7 # Force full backup after this many days
max_incrementals: 5 # Force full backup after this many incrementals
# Optional encryption (recommended!)
encryption_key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Volumes to backup
volumes:
  - name: root
    src: /                             # Source subvolume
    snapdir: /.snapshots/btrfs-backup  # Where to store local snapshots
  - name: home
    src: /home
    snapdir: /home/.snapshots/btrfs-backup
```

To use encryption, generate an age key pair:

```sh
age-keygen -o backup-key.txt
# Use the public key (age1...) in your config
# Store the private key securely on your restore machine
```

Run a backup manually:

```sh
# Normal run
sudo btrfs-backup
# Dry run (implies verbose, shows what would happen)
sudo btrfs-backup -n
# Verbose output (detailed logging)
sudo btrfs-backup -v
# Very verbose dry run (includes command previews)
sudo btrfs-backup -vv -n
# Custom config location
sudo btrfs-backup -config /path/to/config.yaml
```

Create `/etc/systemd/system/btrfs-backup.service`:

```ini
[Unit]
Description=BTRFS Backup
After=network-online.target
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/btrfs-backup
```

Create `/etc/systemd/system/btrfs-backup.timer`:

```ini
[Unit]
Description=BTRFS Backup Timer
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
```

Enable and start:

```sh
sudo systemctl daemon-reload
sudo systemctl enable --now btrfs-backup.timer
```

Check status:

```sh
sudo systemctl status btrfs-backup.timer
sudo journalctl -u btrfs-backup.service -f
```

The tool decides whether to do a full or incremental backup based on the rules below (a rough Go sketch follows the list):
- Full backup needed if:
  - No previous remote backups exist
  - Previous snapshot no longer exists locally
  - Last full backup is older than `max_age_days`
  - More than `max_incrementals` incremental backups since the last full
- Otherwise: creates an incremental backup from the previous snapshot
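Roughly, that decision looks like this in Go. This is a sketch, not the tool's actual code: `backupState` and `needFullBackup` are hypothetical names, and the real implementation may track state differently.

```go
package main

import (
	"fmt"
	"time"
)

// backupState is a hypothetical view of what the tool knows about previous runs.
type backupState struct {
	remoteBackups    int       // backups currently on the remote
	prevSnapshotOK   bool      // does the previous local snapshot still exist?
	lastFull         time.Time // when the last full backup was taken
	incrementalsSent int       // incrementals sent since that full backup
}

// needFullBackup mirrors the rules listed above.
func needFullBackup(s backupState, maxAgeDays, maxIncrementals int, now time.Time) bool {
	switch {
	case s.remoteBackups == 0: // no previous remote backups
		return true
	case !s.prevSnapshotOK: // previous snapshot gone locally
		return true
	case now.Sub(s.lastFull) > time.Duration(maxAgeDays)*24*time.Hour: // older than max_age_days
		return true
	case s.incrementalsSent > maxIncrementals: // more than max_incrementals since last full
		return true
	default:
		return false
	}
}

func main() {
	s := backupState{
		remoteBackups:    6,
		prevSnapshotOK:   true,
		lastFull:         time.Now().Add(-8 * 24 * time.Hour),
		incrementalsSent: 5,
	}
	fmt.Println(needFullBackup(s, 7, 5, time.Now())) // true: the last full is 8 days old
}
```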
To keep storage manageable while maintaining restore capability (see the sketch after this list):
- Keeps the latest full backup chain (full backup + all its incrementals)
- Deletes everything older than the latest full backup
- Only runs cleanup after successfully creating and verifying a new full backup
- This approach assumes the verified full backup is reliable
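A minimal sketch of that pruning selection, assuming a single volume's file list and ignoring the `.sha256` companions (the `pruneList` helper is illustrative, not the tool's real code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// pruneList returns the remote files to delete: everything older than the
// newest full backup. For a single volume the filenames sort chronologically
// because of the timestamp format, e.g. "root-2024-05-12_11-30-45.full.btrfs".
func pruneList(files []string) []string {
	sort.Strings(files)

	lastFull := -1
	for i, f := range files {
		if strings.Contains(f, ".full.") {
			lastFull = i
		}
	}
	if lastFull <= 0 {
		return nil // no full backup, or nothing older than the latest full
	}
	return files[:lastFull] // keep the latest full and its incrementals
}

func main() {
	files := []string{
		"root-2024-05-05_03-00-00.full.btrfs",
		"root-2024-05-06_03-00-00.inc.btrfs",
		"root-2024-05-12_03-00-00.full.btrfs",
		"root-2024-05-13_03-00-00.inc.btrfs",
	}
	fmt.Println(pruneList(files)) // the two files from before the 2024-05-12 full
}
```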
Each backup run walks through these steps (the send step is sketched after the list):

- Create snapshot: `btrfs subvolume snapshot -r <src> <snapdir>/<timestamp>`
- Determine backup type: check whether a full or incremental is needed
- Send to remote: `btrfs send` (with `-p` for incremental)
- Optional `age` encryption
- Stream to the remote host via SSH
- Verify: calculate and verify the SHA256 checksum
- Cleanup: delete old snapshots locally, old backups remotely (if full backup)
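Conceptually the send step is a pipeline of `btrfs send`, optional `age`, and `ssh`. The sketch below just builds that pipeline as a string for illustration; the paths, key, and `sendPipeline` helper are made up, and the real tool runs the processes itself rather than printing a shell command:

```go
package main

import (
	"fmt"
	"strings"
)

// sendPipeline builds the conceptual shell pipeline used to stream a snapshot
// to the remote host. parent == "" means a full backup; recipient == "" means
// no encryption.
func sendPipeline(snapshot, parent, recipient, remote, destFile string) string {
	parts := []string{"btrfs send"}
	if parent != "" {
		parts[0] += " -p " + parent // incremental: only changes since the parent snapshot
	}
	parts[0] += " " + snapshot

	if recipient != "" {
		parts = append(parts, "age -r "+recipient) // encrypt the stream
	}
	parts = append(parts, fmt.Sprintf("ssh %s 'cat > %s'", remote, destFile))

	return strings.Join(parts, " | ")
}

func main() {
	fmt.Println(sendPipeline(
		"/.snapshots/btrfs-backup/2024-05-14_03-00-00",
		"/.snapshots/btrfs-backup/2024-05-13_03-00-00",
		"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p",
		"backup@backup-server.example.com",
		"/data/backups/root-2024-05-14_03-00-00.inc.btrfs.age",
	))
}
```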
Backups on the remote are named `<volume>-<timestamp>.<type>.<ext>`.
Examples:
- `root-2024-05-12_11-30-45.full.btrfs`
- `root-2024-05-12_11-30-45.full.btrfs.age` (encrypted)
- `home-2024-05-13_03-00-00.inc.btrfs.age`
- `root-2024-05-14_03-00-00.inc.btrfs`

Checksums are stored as `<filename>.sha256`.
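If it helps to see the pattern concretely, here is a hypothetical helper that produces names in this format (the `backupFileName` function is illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

// backupFileName builds <volume>-<timestamp>.<type>.<ext> as described above;
// encrypted backups get an extra ".age" suffix, matching the examples.
func backupFileName(volume string, ts time.Time, full, encrypted bool) string {
	kind := "inc"
	if full {
		kind = "full"
	}
	name := fmt.Sprintf("%s-%s.%s.btrfs", volume, ts.Format("2006-01-02_15-04-05"), kind)
	if encrypted {
		name += ".age"
	}
	return name
}

func main() {
	name := backupFileName("root", time.Date(2024, 5, 12, 11, 30, 45, 0, time.UTC), true, false)
	fmt.Println(name)             // root-2024-05-12_11-30-45.full.btrfs
	fmt.Println(name + ".sha256") // checksum file stored alongside it
}
```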
Restoration is currently a manual process. On your restore machine:
- Decrypt if needed:

  ```sh
  age -d -i backup-key.txt backup.btrfs.age > backup.btrfs
  ```

- Receive the full backup:

  ```sh
  sudo btrfs receive /mnt/restore < root-2024-05-12_11-30-45.full.btrfs
  ```

- Apply incrementals in order:

  ```sh
  sudo btrfs receive /mnt/restore < root-2024-05-13_03-00-00.inc.btrfs
  sudo btrfs receive /mnt/restore < root-2024-05-14_03-00-00.inc.btrfs
  ```
Run the test suite:
```sh
go test -v
```

Known limitations:

- No automatic restore: You'll need to manually restore backups (for now)
- SSH only: No support for local or cloud storage backends
- Linux only: Requires BTRFS and Linux-specific syscalls
- Root required: Needs root for BTRFS operations and lock file location
- Lock file path: Hardcoded to `/var/run/btrfs-backup.lock`
- No progress indicators: Large backups just... happen. Be patient.
- Alpha software: Did I mention this is alpha? Because it is.
This is a personal project, so I'm not actively soliciting contributions. That said, if you find bugs or have suggestions, feel free to open an issue.
If you fork it and make improvements, that's awesome! Let me know and I'll link to your fork.
MIT License - See LICENSE file for details.
Use at your own risk. No warranty. If this deletes all your data, that's on you. (But seriously, test with non-critical data first!)