I am trying to test a backup with optimized storage set to true. When I try to run a backup I get a "signal received" error. It seems to happen once I have more than 3.5 GB of data in the container.
$ lxc launch images:ubuntu/focal c1
$ lxc config device set c1 root size=25GB
$ lxc shell c1
head -c 4GB /dev/zero > data.iso
exit
$ lxc export c1 --optimized-storage
Error: Create backup: Backup create: Failed to run: zfs send lxdpool/containers/c1@backup-a3c81f96-c61c-45d0-810c-66ec420ec902: warning: cannot send 'lxdpool/containers/c1@backup-a3c81f96-c61c-45d0-810c-66ec420ec902': signal received
Checking the storage pool:
$ zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
lxdpool  6.75G  44.6G    96K  none
I ran this on the LXD host:
$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          1.9Gi       459Mi       1.3Gi       0.0Ki       163Mi       1.4Gi
Swap:         1.9Gi        31Mi       1.9Gi
It’s unfortunate that it doesn’t tell you what signal was received…
It could be an out-of-memory situation, out of disk space, or a network disconnect; any of those could cause a SIGKILL, SIGPIPE, …
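If it is the OOM killer sending a SIGKILL, the kernel logs it, so one way to narrow this down is to check the kernel log and LXD's own journal on the host right after a failed export (assuming the snap package, where the service is named snap.lxd.daemon):

```shell
# Look for OOM-killer activity around the time of the failed export
dmesg -T | grep -iE 'out of memory|oom[- ]kill'

# And check LXD's own log for the backup failure (snap service name assumed)
journalctl -u snap.lxd.daemon --since "15 minutes ago" | grep -i backup
```

If neither turns anything up, a SIGPIPE from the receiving end of the `zfs send` pipe (e.g. the process writing the temporary file hitting a full disk) becomes the more likely suspect.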
I have increased the size of the virtual machine so that memory is 4 GB, and then 8 GB, without success.
Note: I have Ubuntu installed inside a VM using the Parallels Desktop M1 Preview.
I am starting to suspect it is a space problem (again) causing the signal, but not on the storage pool: on the / mount.
When optimized storage is set to true, how does this differ internally from when it is set to false? I know zfs send or btrfs send is called, but is the binary dumped to a temporary folder before being compressed? I did try to look through the source code but did not find where it calls zfs send.
This is my free space:
$ df / -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda2       9.8G  6.2G   3.1G   67%  /
Yes, it does get dumped to a temporary file in LXD’s var directory (which is /var/snap/lxd/common/lxd/backups if using the snap) before being written to the tarball, as we need to know the size of the file before we can add it to the tarball.
In non-optimised mode, we create a snapshot of the volume, temporarily mount it, and then read each file individually and add it to the tarball (which is also created in the LXD backups directory).
So in optimised mode, for a short period of time you can end up with both the uncompressed binary blob from zfs send and the tarball (containing the blob file) existing in the backups directory at the same time.
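One way to see this happening (snap path assumed) is to watch the backups directory and the root filesystem in a second terminal while the export runs:

```shell
# While `lxc export` runs: the staged blob and the tarball both live
# under the backups directory, so / needs roughly twice the data size.
watch -n 1 'du -sh /var/snap/lxd/common/lxd/backups 2>/dev/null; df -h /'
```

With only 3.1 GB available on / and a roughly 4 GB zfs send stream to stage, the export would run out of space partway through, which fits the symptoms described above.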
If you have space on a storage pool for the temporary file (not necessarily the same pool as the instance you’re exporting), you can create a custom volume on that pool and then set the global config key storage.backups_volume to tell LXD to use it instead.
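For example (using a hypothetical second pool named "bigpool"), that would look like:

```shell
# Hypothetical pool name "bigpool": create a custom volume on it and
# point LXD's backup staging area at that volume (server-level setting)
lxc storage volume create bigpool backups
lxc config set storage.backups_volume bigpool/backups
```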
Just realized that head -c 4GB /dev/zero > data.iso was not the best way to create temporary data for testing backups on different storage systems. For optimized storage this is fine because it’s a raw binary dump, but since the tarball is compressed, it can shrink 4 GB of zeros into about 30 MB. DOH.
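For future tests, reading from /dev/urandom instead gives incompressible data, so the exported tarball stays close to the on-disk size in both modes:

```shell
# Random bytes don't compress, so ~4 GB on disk stays ~4 GB in the tarball
head -c 4GB /dev/urandom > data.iso
```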
May I suggest that the production setup section of the documentation recommend creating a dedicated volume for backups, or leaving enough free disk space for them.