Root space usage during export

My LXD server runs Ubuntu 18.04, LXD is installed from the snap package, and my storage backend is ZFS.

When I try to export a large container (> 90 GB) to an NFS share (to back it up), the operation fails because my host's root filesystem fills up (only 19 GB free).

Is this normal? What can I do?

Can I move “all” LXD directories onto the ZFS storage? With the current configuration I can’t export large images.

Thanks

You can create a custom volume on one of your storage pools with lxc storage volume create, then assign it as the temporary location for backups with lxc config set storage.backups_volume

See https://linuxcontainers.org/lxd/docs/master/server
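The two steps above might look like this (the pool name "default" and volume name "backups" are examples; use your own ZFS pool and a container name from your setup):

```shell
# Create a custom volume on the ZFS-backed pool to hold backup staging data
lxc storage volume create default backups

# Tell LXD to stage backups/exports on that volume instead of the
# host root filesystem
lxc config set storage.backups_volume default/backups

# The export is now staged on the pool, so a large container no longer
# fills up the root filesystem
lxc export --instance-only mycontainer /mnt/nfs/backup/mycontainer.tar.gz
```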

Thanks I will try

For reference, when I export I specify the destination of the tar.gz file:

/snap/bin/lxc export --instance-only ${container} /mnt/nfs/backup/lxc-archive.${container}.tar.gz

It’s quite surprising that lxc export writes to my root filesystem, isn’t it?

The export has to be streamed from the server to the client command, rather than the server writing it directly to the specified destination, because the client running the command may be on a different machine.

Also, we currently use the existing backup subsystem to provide the export command: a backup is created on the local LXD node and then “pulled” to the client via the HTTP API.

We have considered streaming the output to the client rather than writing it to disk first, but are concerned that network buffering could consume large amounts of memory.

Finally, for optimized exports we have to create the large file on disk first, because a file cannot be added to the tarball without knowing its size, and we cannot know that size in advance.
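The size constraint comes from the tar format itself: each member's size is recorded in its header before the data, so the file must be fully materialized first. A small demonstration (file names are hypothetical):

```shell
# Create a file of known size, archive it, and list the archive:
# tar records the member's size (here 1048576 bytes) in the header
# written ahead of the data, which is why the full file must exist
# on disk before it can be added.
truncate -s 1M demo.img
tar -cf demo.tar demo.img
tar -tvf demo.tar   # verbose listing shows the size read back from the header
```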


Thanks, that makes sense :slight_smile:
I successfully exported my container using the storage.backups_volume setting, thanks a lot.