lxc storage create from source fails with "isn't empty" error

Hello!
Recently I found out that 'lxc storage create' has a 'source' key to create a new storage pool based on an existing source (as I understand it, just by binding to that path?). So I created a test case to check how it works and whether it works at all.
First, I:

  1. created a dir storage pool with 'lxc storage create messangers_dirpool14G dir',
  2. launched a container in it with 'lxc launch messangers-latest messangers --storage=messangers_dirpool14G',
  3. checked it and stopped everything, including the LXD snap,
  4. moved /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G to a backup location and created an empty folder in its place so that I could remove the storage pool and the container with 'lxc … delete' (roughly as sketched below).
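
For reference, steps 3 and 4 looked roughly like this (the exact backup path is approximate):

$ lxc stop messangers
$ sudo snap stop lxd
$ sudo mv /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G /media/backup/images/var/snap/lxd/common/lxd/storage-pools/
$ sudo mkdir /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G
$ sudo snap start lxd
$ lxc delete messangers
$ lxc storage delete messangers_dirpool14G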

Finally, I tried to create a new storage pool by issuing

$ lxc storage create testdir dir source=/media/backup/images/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G

and received the aforementioned error:
Error: Source path '/var/lib/snapd/hostfs/media/local_disk_data/images/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G' isn't empty

$ sudo ls -la /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G
ls: cannot access ‘/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G’: No such file or directory
$ sudo ls -la /var/lib/snapd/hostfs/media/local_disk_data/images/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G
ls: cannot access ‘/var/lib/snapd/hostfs/media/local_disk_data/images/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G’: No such file or directory

Please help me do this correctly with the dir storage type.
I mainly chose dir for this task because I can put it on a cloud disk and sync it seamlessly on the other end - only the changed files will be updated, not a whole image (as would happen with btrfs or other types). I want this to happen automatically. Maybe you can advise another option for this task.

os_name: Arch Linux
os_version: “”
project: default
server: lxd
server_clustered: false
server_pid: 9215
server_version: “4.4”
storage_version: 4.15.1

:thinking:

I guess that is only for empty sources.
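
So something like this would probably be accepted (untested, the path is just an example):

$ sudo mkdir -p /media/backup/images/empty_pool
$ lxc storage create testdir dir source=/media/backup/images/empty_pool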

What exactly does this mean?
Do you want to use the whole storage pool on a second computer/system?
And then reuse it on the first computer?

Why don’t you use a disk device or volume for this then?

This way you don’t need to copy the whole storage pool.
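
For example, a custom volume could be created and attached roughly like this (the pool, volume, container and device names here are just placeholders):

$ lxc storage volume create default mydata
$ lxc storage volume attach default mydata messangers mydata /mnt/mydata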

Regarding your specific problem of using the backup:
You can probably find something on the forum, but my guess is that you should link your backup back to the original path "/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G".
Then LXD might be able to find the storage pool again.
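
Something along these lines (completely untested, just to show the idea):

$ sudo mkdir -p /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G
$ sudo mount --bind \
    /media/backup/images/var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G \
    /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G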

Another alternative:
If two computers/servers are involved that are online at the same time, it might be easier to give one computer access to the other computer's LXD instead of trying to always copy things around.
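
As a rough sketch (the address and password are placeholders): on the machine hosting the containers,

$ lxc config set core.https_address "[::]:8443"
$ lxc config set core.trust_password some-secret

and on the other machine,

$ lxc remote add myserver 192.0.2.10
$ lxc exec myserver:messangers -- bash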

Actually, I do that now, but to be able to do that I need to take my external drive with the storage image with me (I use a btrfs image). Then I just bind mount it. (I've created a small storage pool just for one specific container with a custom setup.)
My aim is to get rid of that limitation, by which I mean the external disk, and upload the storage to the cloud; I suppose MEGA is the best place for that.
The problem with an image is that if something inside it changes, the whole image gets updated. It's hard to sync a bunch of gigabytes every time, and I think it's not even possible with a free account because of the traffic limits.
So I'm trying to find a solution to sync the storage with rsync or something similar, like Timeshift. But for that the storage must be an open file tree, like dir.
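
Something like this is what I have in mind (just a sketch, the destination path is only an example):

$ sudo rsync -aAXH --numeric-ids --delete \
    /var/snap/lxd/common/lxd/storage-pools/messangers_dirpool14G/ \
    /path/to/mega-synced-folder/messangers_dirpool14G/
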
I hope it's clearer now what I'm trying to do.

I'm new to LXC, so I actually don't know what it's used for, but from the description it seems to me like smaller blocks of one container, i.e. they are still closed images. That is, if I understand the description correctly, and that's a BIG if, because this is new to me.

I have tried that, but it doesn't work. However, I haven't saved the results of my attempt, so I'll recheck it later to be sure that it actually failed. I think there was some problem with links inside the mount, but that's just a guess.

This one is a good alternative, thank you, but not in my case, because I want to make a backup of the container and use it on the fly from web storage instead of a hard drive. Of course, with MEGA it will be copied to a local drive, but it will already be installed, working and up to date, and I'll just have to bind mount it to make it visible to LXC.

Regarding disk devices:

Essentially, you can link/mount folders from your host inside containers.
So if you only need to update, let's say, /home/user/folder1, you can create a disk device for that path and easily update the folder on your host.
This way you can use faster and better storage backends than "dir", and you also save a lot of traffic and wasted energy, because these folders will be much smaller than your approach of copying the whole storage pool every time.
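
For example (the container name and paths are just an illustration):

$ lxc config device add messangers folder1 disk source=/home/user/folder1 path=/home/user/folder1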

But I still don't get your use case.
Why make it so complicated by always copying everything?
Why not set up a solution like an LXD remote server?

Well, I already have an LXD server that hosts a big container, and it's the main container that I access from outside. Sometimes I need to access it from a random place, and every time I needed to do that I had to set up a workspace from scratch. To ease that process I created relatively small containers that host all the data and packages necessary to do that relatively easily and quickly. Of course I could keep them on the LXD server, but they are kind of private, so it's not a good place for them. For now I keep their storage pools on an external drive and bind mount them on different PCs.

However, the external drive is not reliable, so I want to sync every change I make to these containers to be able to restore them as fresh as possible in case of a failure. Of course, as these containers are just gateways, not many changes are made, but still. I can just make snapshots and/or export them somewhere (and I do that now), but this process results in a full copy, and I want to eliminate that.
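
Right now that looks roughly like this (the container name is just an example), and the exported tarball is the full copy I'd like to avoid:

$ lxc snapshot gateway1 backup-snap
$ lxc export gateway1 /media/backup/gateway1.tar.gz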

I like the idea of rsync and the way Timeshift works - it makes incremental backups that I can easily host in web storage, because only changed files are synced, and the backup still allows recreating the system from scratch.

I have another option in mind that will probably ease the process - attach the private configs to the containers with mounts from the host, like you said, and leave the containers themselves intact. Then it won't matter how I back up a container, because that will happen rarely and it will contain no private data/configs, while the private data and configs are kept somewhere else, for example on USB storage.
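
Roughly (the names and paths are only examples):

$ lxc config device add gateway1 private-configs disk source=/media/usb/private-configs path=/home/user/.config/private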