Lxc export throws (Backup Storage) error - Unable to generate compressed file

Noted. I am running the export operation again without the --optimized-storage flag and will report the findings. Based on the documentation, I’d imagine the --optimized-storage flag provides better compression (based on the storage pool used), and the export can thus only be restored on a similar pool (zfs in this case).
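For reference, the two export variants look roughly like this. The container and file names are placeholders, and the `command -v` guard is only there so the sketch fails gracefully on a machine without LXD:

```shell
if command -v lxc >/dev/null 2>&1; then
  # driver-native stream: restorable only on a pool using the
  # same storage driver (zfs here)
  lxc export nextcloud nextcloud_optimized.tar.gz --optimized-storage
  # plain tarball: portable across storage backends
  lxc export nextcloud nextcloud_plain.tar.gz
else
  echo "lxc not found; run this on the LXD host"
fi
```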

Why should the meaning of “similar pool” point to zfs? In my understanding, zfs is another layer. And if we export a container to a *.tar file, what does that have to do with zfs? What is a tarball made of? Magic? Are we talking about containers? OK then …
Again, as long as “similar” is not explained definitively, its meaning could be anything. Don’t trust the wizard of Oz. :slight_smile:
Be brave and focus on the things beyond the curtain.

In this case I’d suggest checking the available disk space on the system partition, as well as the available memory during the export, to make sure there is no resource problem.
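A quick way to check both at once (the backup path below is the snap default mentioned later in this thread; adjust it if yours differs):

```shell
# free space on the partition where LXD stages its backup files;
# fall back to the root partition if the path does not exist here
df -h /var/snap/lxd/common/lxd/backups 2>/dev/null || df -h /
# current memory and swap usage
free -h
```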

In general a good idea, but in this case it can’t be the source of the trouble. If partition space is not enough, or something else underneath fails, you’ll get an error from the underlying system, because it is escalated up from the OS, not from LXD, unless THEY caught this error and tossed it into their own error handler. The same goes if something struggles with memory. In any case:
That’s the point where the LXD makers come into the game. Ask them directly. And ask why they don’t document those important ‘features’ like orphans. Maybe you’ll get an answer.

unless there is some problem in the bubbling of low-level errors up to the user interface or even the logs. I have seen that happen with other programs.

That is exactly what I was talking about. Hence, LXD is muddled on these issues.

Quick update,

I ran the export without the --optimized-storage flag and ran into the same error. I also noticed the following while running the export:

  • Swap slowly increased continuously (it ended up eating 1.05 GB out of 2 GB).
  • RAM consumption also increased from an average of 6 GB to 10 GB (total is 16 GB) but never went past that mark.
  • The rsync operation ran successfully and dumped the container into the /var/snap/lxd/common/lxd/backups directory under the name lxd_backup_457021993.
  • A backup0 file is generated, followed by backup0.compressed, which is when the export operation fails with the error mentioned above.

Edit 1: Here’s the complete command lxc export nextcloud nextcloud_optimized.tar.xz --container-only
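To watch those numbers while the export runs, one can sample /proc/meminfo in a loop in another terminal (a rough sketch; the interval and count are arbitrary):

```shell
# print available memory and free swap a few times while
# lxc export runs in another terminal
for i in 1 2 3; do
  date +%T
  grep -E 'MemAvailable|SwapFree' /proc/meminfo
  sleep 1
done
```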


Can you please post the exact command you ran?

Updated my comment above with the command.

Thank you very much indeed.

No, I cannot confirm your experience. Can you try the lxc export without ANY flags or parameters?

Which LXD/LXC version are you using?
Please run a “df -h” at your command line and post the output, thanks.

Tried it, but I get the same error.


Bump. Anyone?

Well, I’m still here :slight_smile:
Someone asked you to post df -h but you did not; I myself have hinted that the available space on the system partition matters, but you did not reply on this point.
So I’m not yet sure that you have at least 200 GB free on the system partition before an export.

Doh. I missed the df -h part; anyhow, here you go:


Also, updated the original post with some observations (see Edit 1)

Hmm, it seems to rule out a lack of available space. Too bad, I like easy solutions.
It’s a bit strange that swap grows so much while you still have a lot of free memory. Memory management as reported by Linux tools is tricky, though.
It’s not obvious that there are a lot of people with the same kind of load; I am sure there are plenty of professionals using LXD with big containers, but those people tend to stick to the LTS version and are unlikely to use the snap version. And lxc export doesn’t work with LTS versions. So it’s not unthinkable that you have hit a problem no one has seen before.
Maybe there is a more exotic limit somewhere; I don’t know about snap itself, and I don’t think there is a limit there, but maybe in ZFS? Have you set any quota? I’d say that an export should involve a ZFS snapshot, and snapshots take space.
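If ZFS is the suspect, quota and snapshot usage can be checked like this. The pool name zfs1 is borrowed from a script elsewhere in this thread, so substitute your own; the guard just keeps the sketch from erroring on a non-ZFS machine:

```shell
if command -v zfs >/dev/null 2>&1; then
  # any quota set on the pool or its datasets?
  zfs get quota,refquota -r zfs1
  # how much space do snapshots already hold?
  zfs list -o name,used,usedbysnapshots -r zfs1
else
  echo "zfs not found; run this on the LXD host"
fi
```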

Before opening an issue on GitHub, I think it would be good to make sure there is no remnant of a prior crash before trying a new test. Like I said in this comment, I am not sure where and when the data is cleaned up. Possibly wait a bit (not kidding, I have a vague memory of reading somewhere in the code that there is a sweeper triggered periodically) and restart the host.
You could also try creating another ZFS volume, duplicating your container onto it (with a good deal of free space, without any quotas…) and exporting the copy. Just a thought, but since you have plenty of free space it seems it could be done.
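The duplicate-and-export idea could look roughly like this. The pool size, names and the `-s` (target storage) flag usage are assumptions; check `lxc copy --help` on your version before relying on it:

```shell
if command -v lxc >/dev/null 2>&1; then
  # a fresh zfs pool with plenty of headroom and no quota
  lxc storage create scratchpool zfs size=200GB
  # duplicate the container onto it, then export the copy
  lxc copy nextcloud nextcloud-copy -s scratchpool
  lxc export nextcloud-copy nextcloud-copy.tar.gz
else
  echo "lxc not found; run this on the LXD host"
fi
```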

  • Swap slowly increased continuously (it ended up eating 1.05 GB out of 2 GB).
    I got the same issue here, but for me it doesn’t stop and consumes everything.
  • RAM consumption also increased from an average of 6 GB to 10 GB (total is 16 GB) but never went past that mark.
    For me it then went back from an average of 10 GB to 1 GB, and naturally the whole OS nearly grinds to a halt. I have to reboot by force.

I also use LXC 3.11 as a snap package.

Weird AF.

I’ve found the most reliable way to back up to a file, if using zfs as the backend storage, is with zfs send.

I have this cobbled together script which works nicely for me.

#!/bin/sh
set -x  # echo each command as it runs

# $container and $backupdir must be set by the caller, e.g.:
#   container=nextcloud backupdir=/backups ./backup.sh
cd "$backupdir" || exit 1

# drop the stale snapshot from the previous run, then take a fresh one
/sbin/zfs destroy zfs1/containers/"$container"@snapshotBackup
/sbin/zfs snapshot zfs1/containers/"$container"@snapshotBackup

# stream the snapshot through mbuffer and pigz into a compressed file
/sbin/zfs send zfs1/containers/"$container"@snapshotBackup \
  | /usr/bin/mbuffer -q -m 2000M \
  | /usr/bin/pigz -c -p 8 \
  | /usr/bin/mbuffer -q -m 2000M > "$backupdir/$container.gz"

I haven’t really found a solution to this problem yet, but as a workaround (and maybe a better practice), I have started using storage volumes instead. All the application-related data (in this case nextcloud’s data dir) gets saved to storage volumes, and the container itself remains barebones, carrying nothing but the application setup itself.

As for the backups, I export containers (lxc export) and storage volumes (zfs send).
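For anyone wanting to replicate the volume setup, the LXD commands look roughly like this (the pool, volume name and mount path are placeholders, not my exact setup):

```shell
if command -v lxc >/dev/null 2>&1; then
  # create a custom volume on the default pool
  lxc storage volume create default nextcloud-data
  # attach it inside the container at the app's data directory
  lxc storage volume attach default nextcloud-data nextcloud /srv/nextcloud-data
else
  echo "lxc not found; run this on the LXD host"
fi
```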


I have the exact same issue. Is it a bug?


To close this matter: it seems the problem is now solved with LXD 3.14 rev 10972.
See this thread.