lxc export error: backup file not found

Even without compression it fails.

I don't understand why; there is more than enough space.

same error.

Is there any maximum (memory or disk) that LXD uses during an lxc export?

It's driving me nuts; whatever I do, it's not working. First step:

lxc export starts and creates backup0. After that, it compresses backup0.

When this is done, the backup0.compress file is deleted and the only file it keeps is backup0.

But I still get this error message (with or without compression). See image.

Why the hell is it not working for bigger containers? Smaller containers work perfectly :slight_smile:

There are more users with this problem.

Is it a bug?

https://discuss.linuxcontainers.org/t/lxc-export-throws-backup-storage-error-unable-to-generate-compressed-file/4343/25

Sorry for the delay, I needed some time to look at the code to try to understand.
FTR, here is how I think it works. The function is in lxc/export.go.
There are 2 phases:

  1. Ask the LXD server to create a backup file. This is the part I asked you to examine, and we found no obvious problem there.
  2. Once the file is done, ask the LXD server to download the file. This is the part where it all goes pear-shaped.

I now have a very nasty feeling. I have noticed there is automatic pruning of ‘old’ backups. The way it works is that in lxc/export.go the lxc client asks to create a backup file with a maximum shelf life of 30 minutes:
ExpiryDate: time.Now().Add(30 * time.Minute)
For a typical backup of one of my containers (3 GB at most) there is absolutely no chance a backup takes longer than that.
In your case it’s rather different, and the task pruning ‘old’ backups runs hourly. This does not seem configurable. The only way to test whether this hypothesis is correct would be to recompile the lxc client with an expiration delay of, say, 4 hours.
I don’t know if you are ready for such an enterprise :slight_smile:

Sent a pull request to bump to 24h expiry.

It could have side effects for people who have several backup crashes in rapid succession and rather low disk space. IMO the ideal solution would be a parameter to raise this default for people who need it. But of course there is another problem: the error reporting is somewhat limited.

lxc export will delete the backup as soon as it’s retrieved, so that shouldn’t really be a problem unless your container is huge AND you make a lot of exports AND they all run at the same time AND they all fail somehow.

Hi,

I only export one container. However, the small containers all work; the bigger ones all fail.

I installed LXD through snap. Will the pull request also be merged into the LXD snap?

• Fail somehow: that was my first hypothesis. Everything that can fail will fail eventually.
• Huge container: that’s a reasonable supposition, and it may not even be necessary, since I also made the hypothesis of limited disk space, another realistic possibility; you can get good prices for a 50 GB SSD on the Net, which is what I use for one of my servers.
• Make a lot of exports: there are two ways this can happen:

  • a script that retries in case of failure
  • a human who tends to think it’s possible to get different results by retrying the same operation when it fails; that has a bad name, but it’s part of human nature and happens more often than one might imagine.

Hi

The containers that are failing are all around 110 GB. I already write them to SSD, but the export and compression take longer than 1 hour.

I noticed that the 24-hour window has been pushed. How can I get the new version through snap?

I admit that I never looked deeply into this problem, but git log on a recently fetched lxd gives
commit b58bfa5d1586b12c8bb327c09b209251dfb09745 (HEAD -> master, origin/master, origin/HEAD)
that’s the commit immediately following the one that interests you.

and snap info lxd gives
channels:
(…)
edge: git-b58bfa5 2019-06-20 (10939) 56MB -

The git ref (b58bfa5) looks like the beginning of that git commit hash, so it’s probably the one. The ‘edge’ snap is the equivalent of a ‘nightly build’ (remember the Firefox version that crashed? That’s what a nightly build is: no quality control, nothing but raw untested code). But if you want to experiment, it’s your data. A backup first could be useful. Oops, I almost forgot, that’s the feature that is not working for you :slight_smile: Probably a better idea would be to set up a test server.

For now the edge snap will have it, so you could switch to that; not that I would ever recommend running edge on anything but a test system.

Otherwise, we do cherry-pick bugfixes somewhat regularly and this will be included the next time we do so.

It may also be worth pointing out that the compression algorithm is configurable, so if space isn’t a problem for you, you can set `backups.compression_algorithm` to `none` and save yourself a lot of CPU and time.

Hi, I will wait for the cherry-picked bugfix.

I already tried without compression, but that also fails.

That’s done; the cherry-pick is in 10972 (see snap info lxd).
How do I know? I started an export, then ran

lxd sql global "select name,expiry_date from containers_backups;"

expiry_date = current time + 24 hours
I hope it will fix your problem.

How can I retrieve this update through snap?

Snaps auto-update. Like I said, use ‘snap info lxd’ to check whether you have it (if you see a value >10972, there was another update).

Hi all

Just to inform you: it’s working now :+1::ok_hand: