LXD pool on 2 mirrored HDDs - How?

I have a 19.10 server running on an SSD. I have two 1 TB WD HDDs, which I want to mirror, creating one partition there where I'll put an LXD pool with multiple containers. Should I create the mirror with LVM first and then the LXD pool on top? Which filesystem should I use?

Also, I need to be sure that if I'd like to move the mirror to another server, it won't be super hard. Plus, if one HDD fails, I'd like to be able to connect the other one to a Windows PC to perform a backup.

Yeah, the LXD API doesn't let you configure that directly, so you either need to add the second drive after the fact or configure the whole thing yourself and just tell LXD to use it.

So, is it a good idea to use lvm?
Which FS is better to use for this partition?
What combination is best for LXD?

If using LVM, you'd want to create a new VG with a PV on each disk and then give that VG to LXD to use directly with the lvm storage driver.


Ah, so I can map to a VG directly! Very cool!
And is such a config, in your opinion, reliable?

Yeah, giving an existing VG to the LXD LVM driver is perfectly fine.
Just make sure you never have anything else use that VG. It must be completely dedicated to LXD (we have a check for that at creation time but we can't detect later tampering).

vgcreate blah PV1 PV2
lxc storage create default lvm source=blah

Is the gist of how to make that work.

LVM isn’t our favorite storage driver since it relies on a block device per container, making it a bit slower to create containers, also not quite as nice to migrate between systems (a full rsync is used). But it does support all our features and is actively tested.

Okay, so I have the following problem.
I have a pool of zfs:

zpool list
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
zfs_lxd   464G  1.59G  462G   0%  1.00x  ONLINE  -

After lxd init:

lxd init

Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: zfs_lxd
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lxdbr0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    source: zfs_lxd
  description: ""
  name: local
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: local
      type: disk
  name: default
cluster: null

Error: Failed to create storage pool 'zfs_lxd': Failed to run: zpool import zfs_lxd: cannot import 'zfs_lxd': pool was previously in use from another system.

And this error occurs every time.
If I do
zpool export zfs_lxd
first, everything completes without errors.
Is there a better way?

Also I don’t know why I must set name zfs_lxd for storage pool or i have similar error.

I think I solved the problem by changing the ZFS mountpoint:
zfs set mountpoint=none zfs_lxd
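Combined with the export workaround above, preparing an existing pool for lxd init can be sketched as follows (assuming the pool is named zfs_lxd):

```shell
# Keep the pool's datasets unmounted on the host; LXD manages mounts itself
sudo zfs set mountpoint=none zfs_lxd

# Export the pool so LXD can import it cleanly during "lxd init"
sudo zpool export zfs_lxd
```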

Ubuntu 18.04/20.04:

Wipe the disks, create two empty partitions, and create a ZFS mirror with root permissions:
zpool create zfs_lxd mirror /dev/sdaX /dev/sdbX

Go to your LXD user and run lxd init
Create a new ZFS pool? (yes / no) [default = yes]: no
Name of the existing ZFS pool or dataset: zfs_lxd

That’s all you have to do.

Go to your LXD user and run lxd init

You mean su - lxd?
That won't work due to lack of permissions:

2020/07/19 15:21:04.428324 cmd_run.go:918: WARNING: cannot create user data directory: cannot create "/var/snap/lxd/common/lxd/snap/lxd/16100": mkdir /var/snap/lxd/common/lxd/snap: permission denied
cannot create user data directory: /var/snap/lxd/common/lxd/snap/lxd/16100: Permission denied

The snap package runs in its own mount namespace, shielding the host from any mount that occurs inside the snap.

zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zfs_lxd  85.5K  449G    24K  /zfs_lxd

And I think the problem is with the mountpoint.

When I create the zpool:

zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
zfs_lxd  85.5K  449G    24K  /zfs_lxd

But on another existing installation, MOUNTPOINT is set to none.

 zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
zfs_lxd                            454K   449G    24K  none
zfs_lxd/containers                  24K   449G    24K  none
zfs_lxd/custom                      24K   449G    24K  none
zfs_lxd/deleted                    120K   449G    24K  none
zfs_lxd/deleted/containers          24K   449G    24K  none
zfs_lxd/deleted/custom              24K   449G    24K  none
zfs_lxd/deleted/images              24K   449G    24K  none
zfs_lxd/deleted/virtual-machines    24K   449G    24K  none
zfs_lxd/images                      24K   449G    24K  none
zfs_lxd/virtual-machines            24K   449G    24K  none

Lack of permissions? You can use LXD without root or sudo permissions and assign ZFS pools without any special settings.

Install the LXD snap as root, create your zpool, add your lxd user (without sudo or root permissions) to the lxd group, and you are ready to go.
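A minimal sketch of that setup; the username alice and the partition names are placeholders:

```shell
# Install the LXD snap as root
sudo snap install lxd

# Create the mirrored pool as root
sudo zpool create zfs_lxd mirror /dev/sda2 /dev/sdb2

# Let an unprivileged user talk to the LXD daemon
sudo usermod -aG lxd alice

# After alice logs in again, plain lxc commands work:
lxc storage list
```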