USER@SERVER:/DATA/USER/SERVER_BACKUP/CONTAINERS$ sudo time lxc export CONTAINER /DATA/USER/SERVER_BACKUP/CONTAINERS/CONTAINER.tar.gz
[sudo] password for USER:
0.16user 2.10system 27:21.89elapsed 0%CPU (0avgtext+0avgdata 22800maxresident)k
228inputs+7920827outputs (1major+6824minor)pagefaults 0swaps
USER@SERVER:/DATA/USER/SERVER_BACKUP/CONTAINERS$ ll -h
-rw-r--r-- 1 root root 3.8G Jan 31 09:05 CONTAINER.tar.gz
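A quick back-of-envelope reading of that time(1) output (note the user/sys figures are for the lxc client, which mostly waits on the daemon, so they say little about the compression work itself). The "7920827 outputs" are 512-byte blocks written; the elapsed time was 27m21.89s, roughly 1642 seconds:

```shell
# Rough throughput from the time(1) output above (all integer arithmetic).
blocks=7920827                     # 512-byte blocks written
bytes=$((blocks * 512))            # ~4.06 GB, matching the 3.8G (GiB) file
elapsed=1642                       # 27m21.89s rounded to whole seconds
echo "bytes written: $bytes"
echo "avg write rate: $((bytes / elapsed)) bytes/s"   # roughly 2.5 MB/s
```

That average write rate suggests the bottleneck was compression or disk, not the network or the client.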
Possibly, depends on exactly how large the container was and how fast your CPU and disks are.
gzip isn’t multi-threaded, so it would have used a single CPU core for quite a while, reading and compressing possibly over 10GB of data down to 3.8GB.
So this doesn’t feel abnormal, but maybe there’s a reason why you think your system should have been able to do this much faster.
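To see how much the single-threaded gzip matters on your hardware, you could time gzip against a multi-threaded compressor like pigz (which produces the same .gz format) on a sample file. The paths and sizes here are illustrative, not from your setup, and pigz may need to be installed separately:

```shell
# Create a 64MB sample file (incompressible random data, fine for timing).
dd if=/dev/urandom of=/tmp/sample.bin bs=1M count=64 2>/dev/null

# Single-threaded: gzip uses one core only.
time gzip -k -f /tmp/sample.bin

# Multi-threaded alternative (if installed): pigz uses all cores and
# writes the same .gz format, so the resulting archive stays compatible.
# time pigz -k -f /tmp/sample.bin

ls -lh /tmp/sample.bin.gz
```

If pigz finishes several times faster, the export time is dominated by compression rather than disk I/O.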
@stgraber is correct, it depends on the filesystem, the number of files, and the file sizes. I have a development web page that holds a huge number of small files; the container size is 6G. Snapshot and export take up to ~1h on a Dell R710 server.
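The many-small-files effect is easy to demonstrate: tar pays per-file metadata overhead, so archiving the same number of bytes split across thousands of files is slower than archiving one large file. A minimal sketch, with illustrative paths and sizes:

```shell
# 2000 small files of 1KB each (~2MB total).
mkdir -p /tmp/many
for i in $(seq 1 2000); do head -c 1024 /dev/zero > "/tmp/many/f$i"; done
time tar czf /tmp/many.tar.gz -C /tmp many      # per-file metadata overhead

# The same ~2MB as a single file.
head -c 2048000 /dev/zero > /tmp/one.bin
time tar czf /tmp/one.tar.gz -C /tmp one.bin
```

At realistic scales (hundreds of thousands of files) the gap grows dramatically, which is consistent with a 6G container taking around an hour.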