Choosing a storage backend

I want to use LXD in production and it’s time for me to get some new servers.

I’ll probably have just two SSDs that are mirrored, so I won’t have dedicated drives for LXD. My experience with storage backends is limited, so I’d prefer something simple to set up and maintain while still offering great performance for databases. I’ll be using Ubuntu 20.04.

I see that ZFS and BTRFS are recommended, but I don’t get why they’re recommended over LVM. (Is it just because of faster transfers between hosts?)

The way I see it, my options are, from simplest to hardest(?):

  1. mdraid + LVM.
  2. mdraid on one partition from each drive, while giving ZFS a larger partition on each drive.
  3. ZFS root.

Sadly Ubuntu doesn’t offer ZFS root out of the box.

I’ve been running option 2 on a couple of servers at home, but with spinning rust and an SSD as ZIL. The performance isn’t great, but I’m not sure if that’s the drives’ fault or the setup’s.
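For reference, my option 2 setup looks roughly like this (device names and partition numbers are placeholders, not my exact layout):

# Small partitions in an mdraid mirror for the root filesystem:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Larger partitions mirrored by ZFS itself:
zpool create tank mirror /dev/sda3 /dev/sdb3

# Separate SSD partition as the log device (the ZIL mentioned above):
zpool add tank log /dev/sdc1

# Hand the existing pool to LXD:
lxc storage create default zfs source=tank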

If Docker with overlay2 in a container is simpler with LVM, then that’s also a small plus for me. I can’t get it working when using the ZFS backend.

What would you guys choose, both from a maintainability standpoint and a performance standpoint?

Hoping to get some pointers and tips, thanks! :blush:


overlay2 doesn’t work on top of ZFS, so if that’s a big part of what you’re running, avoiding ZFS may help.
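If Docker inside a container is part of the plan, a minimal sketch of that setup (the container and pool names here are just examples) is:

# "c1" and "lvm-pool" are example names; nesting is required for Docker inside LXD.
lxc launch ubuntu:20.04 c1 -s lvm-pool
lxc config set c1 security.nesting true

# Inside the container, check which filesystem backs the rootfs; overlay2
# wants ext4/xfs underneath rather than zfs:
lxc exec c1 -- stat -f -c %T /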

LVM works fine, but snapshots can be expensive, initial creation is slower, backups aren’t optimized, and migration isn’t too great either. You also can’t grow/shrink containers while they’re running. All of those boil down to limitations of block-based storage versus filesystem-based storage as seen with dir, zfs and btrfs.
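If you want to get a feel for the practical differences, small loop-backed test pools are an easy way to compare the drivers side by side (pool, container, and size values below are just examples):

# Loop-backed test pools, one per driver (names and sizes are arbitrary):
lxc storage create test-zfs zfs size=20GB
lxc storage create test-lvm lvm size=20GB

# One container on each, then compare the operations mentioned above:
lxc launch ubuntu:20.04 z1 -s test-zfs
lxc launch ubuntu:20.04 l1 -s test-lvm
lxc snapshot z1 snap0    # cheap, filesystem-level snapshot
lxc snapshot l1 snap0    # block-level LVM snapshot
lxc copy z1 z2           # fast clone on zfs
lxc copy l1 l2           # slower copy on lvm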


I have a similar situation, but I have an older server to reuse in production.
I have hardware RAID 10 configured on it, to use both for the root filesystem and as the backend for LXD 4.0 LTS.
I want to install Bionic LTS on the root filesystem (/, /home, etc.) and use the remaining space for LXD.
If I set up an LVM installation, I can use different LVs for the Ubuntu installation and for LXD.
If I choose zfs or btrfs, do I have to use a separate partition instead of one physical volume?
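For example, is something like this reasonable? (The VG and LV names are placeholders; the idea is to carve an LV out of the RAID 10 volume group and hand it to LXD as a block device.)

# "vg0" is the volume group on the hardware RAID 10, "lxd" a dedicated LV.
lvcreate -L 500G -n lxd vg0

# zfs (and btrfs) can take a block device as source, so a separate raw
# partition isn't strictly required:
lxc storage create default zfs source=/dev/vg0/lxd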
Which of those filesystems is recommended if I also want to use Docker inside LXD?
I don’t plan to use Docker directly on the host.
I didn’t have a good experience with zfs in the past, but that was at least 6 years ago on Debian.
Thanks

I’ve been running lxd at a relatively small scale (less than 10 servers) for a little under 5 years. I plan to add another 5 servers in the next few weeks.

Initially I used the btrfs storage backend, but I hit too many performance bugs, particularly when I was using multiple btrfs snapshots.

For the new deployments I will be using XFS with the dir backend, and project quotas.

Although this makes snapshots and copies much slower and more disk-intensive (unfortunately, LXD doesn’t use reflinks to copy VMs or make snapshots with the dir backend, because it uses rsync, which doesn’t support reflinks), I favoured the overall stability and consistent performance of XFS.
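Roughly, the new setup looks like this (the device and paths are examples rather than my exact layout):

# Example device and mount point -- adjust to your own layout.
mkfs.xfs /dev/sdc
mount -o prjquota /dev/sdc /srv

# dir-backed pool on the prjquota-enabled mount; LXD can then use XFS
# project quotas for per-instance size limits:
mkdir -p /srv/lxd-default-storage-pool
lxc storage create default dir source=/srv/lxd-default-storage-pool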

I reclaim space from duplicate files (which also has the effect of reducing memory usage and improving execution speed by sharing binaries in RAM between containers) by running duperemove periodically as a scheduled task; e.g. see this example after manually running an apt-get upgrade on 5 Debian containers.

root@lxd006:/srv/lxd-default-storage-pool# du -hs .
7.2G    .
root@lxd006:/srv/lxd-default-storage-pool# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc         40G  6.8G   34G  17% /srv
root@lxd006:/srv/lxd-default-storage-pool# chrt --idle 0 ionice -c 3 duperemove -q -r -d /srv/lxd-default-storage-pool/
Comparison of extent info shows a net change in shared extents of: 1537785568
root@lxd006:/srv/lxd-default-storage-pool# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc         40G  5.7G   35G  15% /srv
root@lxd006:/srv/lxd-default-storage-pool# du -hs .
7.2G    .

This suits my use case, but the slow snapshots etc. may be a deal-breaker for you!
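The scheduled task itself is trivial; a minimal cron sketch (the path and schedule are examples) would be:

# /etc/cron.d/duperemove -- weekly dedupe of the dir pool at idle priority
0 3 * * 0  root  chrt --idle 0 ionice -c 3 duperemove -q -r -d /srv/lxd-default-storage-pool/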
