My ZFS backup strategy


I have been experimenting with running my own NAS at home (mostly out of boredom), and I went with a small setup that fits inside my IKEA PS: a Raspberry Pi 4, a couple of 1TB USB SATA disks, and ZFS on Linux mirroring them. I have less than 200GB of data and a very stable 50Mbps uplink at home, so this post explains my strategy for backing up that data to a remote location.

Before I even started, I had to decide on a backup strategy. Given that I'm running this from a home connection, doing full backups every day wasn't an option, especially since my data rarely changes by more than 50MB on a given day, so I had to come up with something that would play nicely with incremental backups.

I first researched existing tools for this task and was amazed by Borg, but given that I planned to back up directly to an S3/GCS/B2 bucket, I found it too limiting. I can't run the borg binary on the remote, so I would either need a full copy of the data locally, or Borg would have to download a lot of data from the bucket to compute the incremental diff.

My second test was with ZFS. Demos of zfs send | ssh remote zfs recv looked very elegant, but again, I have no way to run ZFS on the remote. Still, a few tests caught my attention. First, ZFS snapshots are a great way to manage changes over time. Second, you can pipe zfs send to a file. Third, while zfs send is commonly used to send a snapshot to another machine also running ZFS (zfs send | ssh remote zfs recv), when you pipe it to a non-ZFS command (or a file), it dumps the entire filesystem, essentially creating a full backup. Fourth, zfs send can emit just the diff between two snapshots, also to a file. I put all of this together to create this backup strategy.

Backup strategy

Note: the backup-upload command below is an unpublished tool that compresses, encrypts and uploads the file to a remote location.

Creating the full backup

I imported some data and then created a snapshot with zfs snapshot data@<date> (e.g.: zfs snapshot data@2020-08-24). This snapshot was then uploaded to a remote location using zfs send data@2020-08-24 | backup-upload full/2020-08-24.
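The full-backup step can be sketched as a small script. This is only a sketch: full_backup_cmds is a hypothetical helper, and the pool name data plus the unpublished backup-upload tool are carried over from above. The function prints the pipeline instead of executing it, so the logic can be checked without a real ZFS pool or root access.

```shell
#!/bin/sh
# Sketch of the initial full backup. Assumes pool "data" and the
# unpublished backup-upload tool described in the post.

# Print the snapshot and upload commands for a given date (dry run).
full_backup_cmds() {
    d="$1"
    echo "zfs snapshot data@${d}"
    echo "zfs send data@${d} | backup-upload full/${d}"
}

# To actually run it (requires root and a real pool):
#   full_backup_cmds "$(date +%F)" | sh
full_backup_cmds 2020-08-24
```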

Creating daily backups

Every day, a cronjob creates a new snapshot and uploads it to the remote using zfs send -I <yesterdaySnapshot> <todaySnapshot> | backup-upload daily/<date>.
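The daily cronjob's pipeline can be sketched like this. The helper name daily_backup_cmd and the example dates are mine; the pool name data and backup-upload are assumptions carried over from the post. The function only builds the command string, so it can be inspected without ZFS installed.

```shell
#!/bin/sh
# Sketch of the daily incremental backup command (dry run).
# Assumes pool "data" and the unpublished backup-upload tool.

# Build the incremental send pipeline from yesterday's snapshot to today's.
daily_backup_cmd() {
    yesterday="$1"
    today="$2"
    echo "zfs send -I data@${yesterday} data@${today} | backup-upload daily/${today}"
}

# In the real cronjob the dates would come from GNU date, e.g.:
#   daily_backup_cmd "$(date -d yesterday +%F)" "$(date +%F)"
daily_backup_cmd 2020-12-02 2020-12-03
```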

Creating monthly backups

Once a month, after the daily snapshot has been created, a cronjob uploads it using zfs send -I <fullbackup> <todaySnapshot> | backup-upload monthly/<date>.

This cronjob also marks older monthly and daily backups for deletion; the full backup is never deleted.
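The monthly step can be sketched the same way. FULL_SNAPSHOT and monthly_backup_cmd are hypothetical names; the full backup date, pool name and backup-upload come from the post. The pruning part is only hinted at in a comment, since the real deletion logic lives in the unpublished tooling.

```shell
#!/bin/sh
# Sketch of the monthly backup: send the diff from the original full
# backup to today's snapshot (dry run). Assumes pool "data" and the
# unpublished backup-upload tool.

FULL_SNAPSHOT="data@2020-08-24"

monthly_backup_cmd() {
    today="$1"
    echo "zfs send -I ${FULL_SNAPSHOT} data@${today} | backup-upload monthly/${today}"
}

# Older local snapshots (except the full backup) could then be pruned,
# e.g. by listing them with:
#   zfs list -H -t snapshot -o name data
# and running `zfs destroy` on the ones no longer needed.

monthly_backup_cmd 2020-12-01
```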

Restoration strategy

Backups are worthless if they can't be restored. This is my strategy for restoring the data from the remote location. It assumes the data on both of my local disks can no longer be trusted, so I'll restore the backups from the remote onto a brand new disk with an empty zpool.

While creating the monthly backups, I run zfs send against the <fullbackup>. This means the restoration process needs 1) the full backup, 2) the latest monthly backup, and 3) all daily backups since the last monthly backup. This way, most of my data can be restored by fetching just two files, while I still retain daily granularity if my future self wishes to use it.

The backup-download command below is an unpublished tool that downloads, decrypts and decompresses the file from a remote location.


backup-download full/2020-08-24 | zfs recv data

backup-download monthly/2020-12-01 | zfs recv data

backup-download daily/2020-12-02 | zfs recv data
backup-download daily/2020-12-03 | zfs recv data
backup-download daily/2020-12-19 | zfs recv data
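The daily restores above must be applied in order, one per day since the last monthly backup. That sequence can be generated with a small helper; restore_daily_cmds is a hypothetical name, GNU date is assumed (reasonable for a ZFS-on-Linux setup), and backup-download is the unpublished tool from above. The function only prints the commands, so it can be checked without a pool.

```shell
#!/bin/sh
# Sketch: print the ordered restore commands for every daily backup
# between two dates, inclusive (dry run). Requires GNU date.

restore_daily_cmds() {
    start="$1"   # first daily backup after the last monthly one
    end="$2"     # last daily backup to restore
    d="$start"
    while true; do
        echo "backup-download daily/${d} | zfs recv data"
        [ "$d" = "$end" ] && break
        d=$(date -d "$d + 1 day" +%F)
    done
}

restore_daily_cmds 2020-12-02 2020-12-04
```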


This is just an experiment, and this blog post is a live document that will be updated as I fine-tune my strategy. I have yet to buy a new disk and attempt a full restoration from the remote (I have only done small-scale tests). I'm not too worried about losing data at this point, because I have another external disk, without ZFS, that I regularly copy all my data onto by hand. If you want to try this yourself, be careful: you can lose data.