Questions about a new ZFS/LXD setup

Looking for some pointers on current best practices for ZFS & LXD use as I plan my next bare-metal server, this time not with ext4/LUKS but with fully encrypted ZFS.

Right now I have an encrypted ZFS mirror root on SSDs, plus two spare HDDs that I plan to use as an encrypted ZFS mirror pool.
I have faint memories of reading here about whether it is preferred that lxd init grabs the two HDDs, or whether I could create my encrypted mirror pool/dataset manually and pass it to lxd init without problems.
I plan on using the snap install again, which would live on the mirrored SSDs; is there anything that might cause problems when the containers reside on the HDD pool/dataset?

I'm also thinking of using a pool/dataset on the SSDs for containers that might need more disk speed; IIRC it shouldn't be a problem to add two ZFS pools/datasets during lxd init?

No plans to use swap with zfs.

Basically, I'd like to avoid anything that could lead to problems down the line. Thanks.

LXD is perfectly happy to use an existing pool or dataset; when you need a less common config, that's usually the easiest approach.

In your case, you'd indeed want to create the two zpools ahead of time, set up encryption and anything else you want, then add them both to LXD with:

  • lxc storage create hdd zfs source=hdd-pool/lxd
  • lxc storage create ssd zfs source=ssd-pool/lxd
  • lxc profile device add default root disk pool=hdd path=/
  • lxc network create lxdbr0
  • lxc profile device add default eth0 network=lxdbr0 name=eth0

The commands above effectively replace your normal lxd init run.
lxd init does support using an existing zpool, but not creating additional pools, so since you want two, I figured I'd give you the direct commands :slight_smile:
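For the "create the pools ahead of time" step, one of the encrypted pools might be created like this (a sketch only — the device paths, pool name, and encryption options here are assumptions; adjust to your disks and preferences):

```shell
# Create an encrypted mirror pool on the two HDDs.
# Native ZFS encryption is set as a property on the pool's root dataset,
# so every child dataset (including LXD's) inherits it.
zpool create -o ashift=12 \
  -O encryption=aes-256-gcm \
  -O keyformat=passphrase \
  -O keylocation=prompt \
  hdd-pool mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2

# Create a child dataset dedicated to LXD; this is what you'd then
# pass as source= to "lxc storage create".
zfs create hdd-pool/lxd
```

Using /dev/disk/by-id paths rather than /dev/sdX keeps the pool stable across reboots when device enumeration changes.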


Thanks for the quick reply.
I had to dig deeper into ZFS and redo my setup; after learning more about how to use it, I decided against ZFS on root for now on this fresh Ubuntu server.

The first four commands work, but the last one, adding the default network to the default profile, fails:

lxc profile device add default eth0 network=lxdbr0 name=eth0
Error: Device validation failed for "eth0": Failed loading device "eth0": Unsupported device type
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 90:1b:0f:fd:aa:xx brd ff:ff:ff:ff:ff:ff
    inet xx.xx.xx.xx/32 scope global enp0s31f6
       valid_lft forever preferred_lft forever
    inet xx.xx.xx.xx/26 brd xx.xx.xx.xx scope global enp0s31f6
       valid_lft forever preferred_lft forever
3: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3c:0e:aa:bb brd ff:ff:ff:ff:ff:ff
    inet 10.169.100.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever

lxc profile show default
config: {}
description: Default LXD profile
devices:
  root:
    path: /
    pool: hdd
    type: disk
name: default
used_by: []
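The error message suggests LXD parsed network=lxdbr0 as the device *type*: the lxc profile device add syntax expects an explicit type argument between the device name and the key=value options (as in the earlier root/disk command). Based on that syntax, the likely fix is to include nic:

```shell
# Device type ("nic") goes between the device name and the key=value options,
# mirroring the working "root disk" command earlier in the thread.
lxc profile device add default eth0 nic network=lxdbr0 name=eth0
```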