hello
I use Ceph as a remote storage pool for my LXD cluster.
When I use lxc export I don't see any option for storing the backup on the remote pool.
Is there any option or solution for that?
regards.
See How can I control the file system path used by lxc import commands to avoid failures? - #2 by tomp for how to use a custom volume for backups.
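For reference, a minimal sketch of that approach, assuming a remote pool named `my-ceph-pool` and a volume named `backups-vol` (both names are placeholders, not from the linked post):

```shell
# Create a custom volume on the remote Ceph pool (names are examples)
lxc storage volume create my-ceph-pool backups-vol

# Point LXD's backup staging area at that volume
lxc config set storage.backups_volume my-ceph-pool/backups-vol
```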
thank you
I created a new volume on the remote storage pool and set it as storage.backups_volume.
Now I see a symlink in /var/snap/lxd/common/lxd for the "backups" folder.
So if I store the exported file in this folder, will it be stored on the remote storage?
regards.
Can you show me the export command you are planning to run that you want to be stored in the remote volume?
lxc export my-ct /var/snap/lxd/common/lxd/backups/my-ct.tar.gz --optimized-storage
Oh I see, no that is not correct.
When you provide a target path, the lxc client will download the backup from the server and store it at the local path you specified.
The storage.backups_volume setting is intended for indicating the local custom volume to use for storing the temporary files that are used to create the backup file that will be downloaded (which is useful in situations where the space available on the / partition is not sufficient to generate the backup file).
In your case you would need to ensure the target path you are providing is:
As I'm thinking about this now, it's also worth keeping in mind that you cannot mount the same Ceph RBD volume concurrently on multiple systems, so I'm not sure that LXD should allow you to use a custom volume from a Ceph pool as the storage.backups_volume setting. @stgraber does this make sense?
What is it that you’re trying to do? Are you trying to store backups of instances on the same Ceph cluster as they run on?
thank you for clarifying.
yes, I want to store backups of instances on the same Ceph cluster they run on.
regards.
In that case, have you considered using snapshots (which LXD can automate on a schedule, or you can take manually when needed)? This would create a snapshot of the instance's Ceph RBD volume and allow you to restore it, or copy it as a new instance, if needed.
As it's stored on the same Ceph cluster, it would offer the same level of data protection as storing instance export files on that cluster.
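A minimal sketch of that, assuming an instance named `my-ct` (the instance name and the schedule/expiry values are examples):

```shell
# Take a snapshot manually (snapshot name is an example)
lxc snapshot my-ct snap0

# Or automate it: daily snapshots, expiring after two weeks
lxc config set my-ct snapshots.schedule "@daily"
lxc config set my-ct snapshots.expiry 14d

# Restore in place, or copy the snapshot out as a new instance
lxc restore my-ct snap0
lxc copy my-ct/snap0 my-ct-restored
```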
I couldn't find any option to make a profile take stateful snapshots, similar to snapshots.pattern or snapshots.schedule.
For example, snapshots.stateful true.
Is there any option for that?
Hrm, good point, I don’t think that is possible at the moment.
Please can you open an issue at Issues · lxc/lxd · GitHub
Thanks