Backup create: Failed to run: zfs send signal received

  • LXC: 4.0.5
  • Ubuntu 20.04.2 LTS
  • Custom partition for ZFS.

I am trying to test a backup with optimized storage set to true. When I run the backup I get a "signal received" error. It seems to happen once I have more than 3.5GB of data in the container.

$ lxc launch images:ubuntu/focal c1
$ lxc config device set c1 root size=25GB
$ lxc shell c1
 head -c 4GB /dev/zero > data.iso
 exit
$ lxc export c1 --optimized-storage
Error: Create backup: Backup create: Failed to run: zfs send lxdpool/containers/c1@backup-a3c81f96-c61c-45d0-810c-66ec420ec902: warning: cannot send 'lxdpool/containers/c1@backup-a3c81f96-c61c-45d0-810c-66ec420ec902': signal received

Checking the storage pool:

$ zfs list
NAME      USED  AVAIL     REFER  MOUNTPOINT
lxdpool   6.75G  44.6G       96K  none

I ran this on the LXD host:

$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          1.9Gi       459Mi       1.3Gi       0.0Ki       163Mi       1.4Gi
Swap:         1.9Gi        31Mi       1.9Gi

Any ideas? Thanks.

Anything relevant in dmesg?

It’s unfortunate that it doesn’t tell you what signal was received…
It could be an out-of-memory situation, running out of disk space, or a network disconnect; any of those could cause a SIGKILL, SIGPIPE, …
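
For example, something along these lines should show whether the kernel killed anything for memory reasons (a quick check, assuming the usual kernel log wording):

sudo dmesg -T | grep -iE 'out of memory|oom-killer|killed process'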

Nada

Okay, I just copied the container to my BTRFS host and the backup works fine there. It has the same memory specs, and I even increased the data in the container.

Is it possible zfs send is blowing up because the memory on the host is 4GB and I am trying to export a container with 4.4GB of data?

I have increased the memory of the virtual machine to 4GB, and then to 8GB, without success.
Note: I have Ubuntu installed inside a VM using Parallels Desktop M1 Preview.

Try enabling debug mode and then tailing the logs while you run the export:

sudo snap set lxd daemon.debug=true; sudo systemctl reload snap.lxd.daemon
sudo tail -f /var/snap/lxd/common/lxd/logs/lxd.log

t=2021-02-18T13:59:41+0000 lvl=dbug msg="BackupInstance started" driver=zfs instance=c1 optimized=true pool=default project=default snapshots=true
t=2021-02-18T13:59:41+0000 lvl=dbug msg="UpdateInstanceBackupFile started" driver=zfs instance=c1 pool=default project=default
t=2021-02-18T13:59:41+0000 lvl=dbug msg="Skipping unmount as in use" driver=zfs pool=default refCount=1
t=2021-02-18T13:59:41+0000 lvl=dbug msg="UpdateInstanceBackupFile finished" driver=zfs instance=c1 pool=default project=default
t=2021-02-18T13:59:41+0000 lvl=dbug msg="Generating optimized volume file" driver=zfs file=/var/snap/lxd/common/lxd/backups/lxd_backup_zfs534649666 name=backup/container.bin pool=default sourcePath=lxdpool/containers/c1@backup-8914932d-4136-4cd3-9afa-b5392741d56d
t=2021-02-18T13:59:46+0000 lvl=dbug msg="BackupInstance finished" driver=zfs instance=c1 optimized=true pool=default project=default snapshots=true
t=2021-02-18T13:59:46+0000 lvl=dbug msg="Instance backup finished" instance=c1 name=c1/backup0 project=default
t=2021-02-18T13:59:46+0000 lvl=dbug msg="Failure for task operation: 08fa0acf-dd27-43c5-a272-eb6c5d677aec: Create backup: Backup create: Failed to run: zfs send lxdpool/containers/c1@backup-8914932d-4136-4cd3-9afa-b5392741d56d: warning: cannot send 'lxdpool/containers/c1@backup-8914932d-4136-4cd3-9afa-b5392741d56d': signal received" 
t=2021-02-18T13:59:46+0000 lvl=dbug msg="Event listener finished: 05b4c8c4-52d0-4d49-b418-0f4adc988ce8" 
t=2021-02-18T13:59:46+0000 lvl=dbug msg="Disconnected event listener: 05b4c8c4-52d0-4d49-b418-0f4adc988ce8" 

I am starting to suspect it is a space problem (again) which is causing the signal, but not on the storage pool; on the / mount instead.

When optimize_storage is set to true, how does this differ internally from when it is set to false? I know zfs send or btrfs send is called, but is the binary stream dumped to a temp folder before being compressed? I did try to look through the source code but did not find where it calls zfs send.

This is my free space.

$ df / -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       9.8G  6.2G  3.1G  67% /

Yes, it does get dumped to a temporary file in LXD’s var directory (which is /var/snap/lxd/common/lxd/backups if using the snap) before being written to the tarball (as we need to know the size of the file before being able to add it to the tarball).

In non-optimised mode, we create a snapshot of the volume, temporarily mount it, and then read each file individually and add it to the tarball (which is also created in the LXD backups directory).

So in optimised mode, for a short period of time, you can end up having both the uncompressed binary blob from zfs send and the tarball (containing the blob file) in the backups directory at the same time.
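
Roughly speaking, the sequence looks something like this (a simplified illustration, not the exact commands LXD runs; the snapshot name is made up):

zfs snapshot lxdpool/containers/c1@backup-temp
zfs send lxdpool/containers/c1@backup-temp > /var/snap/lxd/common/lxd/backups/container.bin
tar -czf /var/snap/lxd/common/lxd/backups/c1.tar.gz -C /var/snap/lxd/common/lxd/backups container.bin
zfs destroy lxdpool/containers/c1@backup-temp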

If you have space on a storage pool for the temporary file (not necessarily the same pool as the instance you’re exporting), you can create a custom volume on that storage pool and then set the global config setting storage.backups_volume to tell LXD to use it instead.

See https://linuxcontainers.org/lxd/docs/master/server

e.g.

lxc storage volume create <pool> mybackups size=n (optional on some storage pools)
lxc config set storage.backups_volume=<pool>/mybackups
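
To confirm it afterwards (or clear it again), something like this should work:

lxc config get storage.backups_volume
lxc config unset storage.backups_volume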

I ran this and watched the available space go to 0 before it crashed, so there you have it.

$ watch df -h /
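
Watching the backups staging directory itself shows the same thing (path assumes the snap install):

$ watch -n 1 'du -sh /var/snap/lxd/common/lxd/backups'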

Is there a way to configure backups to be saved to the storage pool without having to set a volume size?

Just realized that head -c 4GB /dev/zero > data.iso was not the best way to create temporary data for testing backups on different storage systems. For optimize_storage=true this is fine because it’s a binary dump, but when tar compresses the files, it can shrink 4GB down to 30MB. DOH.
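
For future tests, filling from /dev/urandom instead gives data that won’t compress away, e.g.:

head -c 4GB /dev/urandom > data.iso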

If you use a storage pool for your backup volume that doesn’t require a size property on volumes (like dir, btrfs, or zfs), then yes.

e.g.

lxc storage volume create zfs myvol

For block-backed storage pools (like lvm or ceph), if size is not specified it defaults to 10GB.
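
e.g. for a hypothetical LVM pool you would give the size explicitly:

lxc storage volume create lvmpool mybackups size=20GB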

May I suggest that the production setup section of the documentation recommend creating a dedicated volume for backups, or at least leaving enough free disk space for backups, etc.