Cleaning up an orphaned Incus export backup file, plus general backup questions

I ran incus export --optimized-storage --instance-only INSTANCE_NAME /path/to/remote/NFS4/share/instance_name.tgz

A network error caused the incus export command to die after it had created the local backup but before it finished moving it from local storage to the final location. This left behind the backup file /var/lib/incus/backups/INSTANCE_NAME/backup0 (65 GB compressed).

I ran the export/backup again and, instead of using or deleting the orphaned file, it created a second backup file:

#ls -l /var/lib/incus/backups/INSTANCE_NAME
total 126041032
-rw------- 1 root root 64533076315 Sep  8 17:06 backup0
-rw------- 1 root root 64532920754 Sep  8 19:31 backup1

After the second backup completed (two hours later), incus export deleted backup1, but backup0 was still there.

Question 1: Is it OK to just delete backup0 using rm from the console (i.e. Incus won't care)?
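I'm guessing one way to check is whether Incus still tracks the backup via the API, something like this (just a sketch; I'm assuming the orphan simply won't show up in this list):

#list the backups Incus still tracks for this instance
incus query /1.0/instances/INSTANCE_NAME/backups
#if backup0 is not in that list, it should only be a leftover file on disk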

Question 2: I've read that it's better to point storage.backups_volume at a volume created in the big ZFS pool that Incus already manages. I'm guessing something like:

#incus storage volume create default big_backup
#incus config set storage.backups_volume default/big_backup

Do I have that right? The pool is the same one that holds the default containers, but backups will only be there long enough to get moved elsewhere.
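And, assuming the above is right, I'd run something like this to double-check it took effect:

#confirm the volume exists and that the server-level setting points at it
incus storage volume list default
incus config get storage.backups_volume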

Question 3: Should we periodically check for and clear orphaned backup files? Or is there an Incus backup garbage collector that runs periodically?
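In the meantime I'm tempted to just run something crude like this periodically (a sketch; the one-day age cutoff is an arbitrary guess):

#flag backup files that have been sitting around for more than a day
find /var/lib/incus/backups -type f -name 'backup*' -mtime +1 -ls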

Question 4: Is there an option to have incus export write directly into a specified directory, instead of first being written to wherever storage.backups_volume points and then moved?

(incus version = 6.15)

Bumping this one because I'm very interested in solving Question 4. If we can't adjust this, it's impossible to export any instance larger than 50% of the available space. I have already tried adjusting the storage.backups_volume setting; still, in my case, /var/lib/incus/backups is always used when exporting. No clue what this setting really does at all.

When set, /var/lib/incus/backups turns into a symlink to the volume provided.
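You can confirm where backups will actually be written with something like:

#shows the symlink and the volume mount it points at
ls -ld /var/lib/incus/backups
readlink -f /var/lib/incus/backups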

I've been playing with remote backups, setting the backup dir to the same remote location as the final export destination for a speed increase.

Version 1: I tried something like this (V1):

#pseudocode
#mount the NFS share and point Incus' backup staging area at it
mount -t nfs4 REMOTE_DETAILS /path/to/localmount
incus storage create BackupPool dir source=/path/to/localmount
incus storage volume create BackupPool BackupVolume
incus config set storage.backups_volume BackupPool/BackupVolume
#export straight onto the NFS share, then put things back
cd /path/to/localmount && incus export FOO
incus config unset storage.backups_volume
umount /path/to/localmount

But I found that when I interrupted the NFS connection, incus storage list still showed the status as "CREATED" instead of "offline" or anything like that. Can Incus test a remote pool/volume to know whether it is an NFS directory and whether it is online?

I've read that with remote NFS connections, the local server can hang waiting for the NFS server to respond.
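So in my scripts I'm planning to probe the mount with a timeout before each run rather than trusting it (a sketch; the 10-second timeout is an arbitrary choice):

#pseudocode
#fail fast if the NFS mount is missing or the server is unresponsive
if ! mountpoint -q /path/to/localmount || \
   ! timeout 10 stat -t /path/to/localmount >/dev/null 2>&1; then
    echo "NFS backup target not available, aborting" >&2
    exit 1
fi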

Version 2: I also tried something like this (V2):

#pseudocode
#mount the NFS share and swap the backups directory for a symlink onto it
mount -t nfs4 REMOTE_DETAILS /path/to/localmount
mkdir -p /path/to/localmount/backups
mv /var/lib/incus/backups /var/lib/incus/backups.dist
ln -s /path/to/localmount/backups /var/lib/incus/backups
cd /path/to/localmount && incus export FOO
#put the original backups directory back and unmount
rm /var/lib/incus/backups
mv /var/lib/incus/backups.dist /var/lib/incus/backups
umount /path/to/localmount
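For the actual script I'm planning to wrap V2 with a trap so the original backups directory gets put back even if the export dies partway through (a sketch, same pseudocode paths as above):

#pseudocode
restore_backups_dir() {
    [ -L /var/lib/incus/backups ] && rm /var/lib/incus/backups
    [ -d /var/lib/incus/backups.dist ] && mv /var/lib/incus/backups.dist /var/lib/incus/backups
}
trap restore_backups_dir EXIT
mv /var/lib/incus/backups /var/lib/incus/backups.dist
ln -s /path/to/localmount/backups /var/lib/incus/backups
(cd /path/to/localmount && incus export FOO)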

Both versions ended up with the final move being a mv within the NFS remote directory, which the NFS server handled nearly instantly. For example, here's what incus reported for an export of just under 2 GB. The "Backing up instance" part is slowed by creating the snapshot, not by the network; the "Exporting the backup" part was so fast it just reported the entire file in one second:

Backing up instance: 1.35GB (13.88MB/s)
Exporting the backup: 100% (1.37GB/s)

V1 uses the native Incus server config commands to change the backup directory; however, it seems the incus storage commands all just trust that a pool/volume on a remote NFS server is available.

So I'm moving forward with some test scripts for backups using V2 for NFS (GitHub - AJRepo/incus_tools: Tools for Incus).

Questions:

  1. Thoughts about adding an NFS handler for remote backups? Something like:

incus storage remote create BackupRemote nfs source=/path/to/localmount

I found this issue (Add a `nfs` storage driver · Issue #1311 · lxc/incus · GitHub), but it seemed to be just for generic storage.

  2. What do you think of a configuration setting for backups that takes a directory or an NFS location instead of a pool/volume? (e.g. set storage.backups_LOCATION /path/to/nfs/mount)