Ubuntu 18.04 Install on single SSD for Best Performing LXD

I have a single high-capacity SSD on a machine I want to set up as a server to run SFTP, websites and a Nextcloud instance - all in separate containers. And probably other services too. LXD works perfectly for me, so I already know that's my containerization tool of choice.

Can someone advise: what's the best way to install Ubuntu Server 18.04, which likely needs well under 20 GB of total disk space, so that I can get the best-performing LXD "drive" on the remaining free space? E.g. do I use LVM for the installation and create a large LVM thin pool on the unused space? Or should I pre-partition the disk into a large and a small partition, run the OS installer on the small one, and keep the large one for e.g. ZFS or similar? (A rough sketch of what I mean is at the end of this post.)

Or am I sweating over nothing important here?

PS - I LUKS encrypt all my drives, and that’s a practice I will continue with this install whatever I do.
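
For concreteness, the pre-partitioned layout I have in mind would look roughly like this (device names and sizes are placeholders only, not a final plan):

    # Illustrative layout on the single SSD:
    #   /dev/sda1   512M   EFI system partition
    #   /dev/sda2   ~20G   LUKS -> ext4 root for Ubuntu 18.04
    #   /dev/sda3   rest   LUKS -> handed to LXD as a ZFS pool
    sudo cryptsetup luksFormat /dev/sda3
    sudo cryptsetup open /dev/sda3 lxd-crypt
    # the ZFS pool would then sit on /dev/mapper/lxd-crypt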

Hello Andrew, your worries are well founded.

As far as I know, the most performant "drive" would be ext4; however, you would be missing out on a lot of LXD features, so the default and most recommended option is ZFS.

By default, lxd init will use a "file" and build the ZFS pool on top of it; this is a terrible idea when using ZFS because of low performance and limited features.

The simplest and quite effective option is just what you mentioned: ZFS will be much happier with a partition than with a "file", so create a small partition for the host and a separate partition for ZFS, and you should be fine.
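
For example, assuming the spare partition ends up as /dev/sda3 (a placeholder name; if you put LUKS underneath, point at the opened /dev/mapper device instead), handing it to LXD is a one-liner:

    # Create a ZFS-backed LXD storage pool directly on the partition.
    # "tank" and /dev/sda3 are illustrative; substitute your own names.
    lxc storage create tank zfs source=/dev/sda3

    # Containers launched with "-s tank" (or via a profile that uses the
    # pool) will then land on it.
    lxc launch ubuntu:18.04 web1 -s tank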

The best possible solution is to give ZFS one or more entire disks.
Also consider that ZFS was not designed to work with virtual block devices, so getting it to perform optimally is quite difficult in cloud environments.
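
If you ever do add dedicated disks later, giving them to ZFS whole and then pointing LXD at the resulting pool would look roughly like this (the device names and the mirror layout are hypothetical):

    # Build a mirrored pool from two whole disks, then reuse it in LXD.
    sudo zpool create tank mirror /dev/sdb /dev/sdc
    lxc storage create lxd-pool zfs source=tank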

THANK YOU. I can't give ZFS an entire disk, as I only have one drive on this device and don't want to buy a second one if I can avoid it, as I have plans for the other slots; but I can change my mind if I am shooting myself in the foot. Do you think I will lose much by way of features/performance if I use a large partition of the SSD for ZFS versus the entire drive?

TL;DR:

  • ZFS on a file is BAD
  • ZFS on a partition is fine
  • ZFS on a virtual disk is, mmmm, OK
  • ZFS on top of multiple physical disks is awesome.
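
A quick way to check which of these you currently have (assuming your LXD pool is named "default"; adjust if yours differs):

    # A loop-file pool will show a "source" pointing at an .img file,
    # while a partition- or disk-backed pool points at a block device.
    lxc storage show default

    # And from the ZFS side, list the devices backing the pool:
    sudo zpool status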

Andrew, no, not at all. A partition is much better than the default "ZFS on top of a file" (which is barely good enough for testing, and that should probably be mentioned somewhere in the LXD docs or even in lxd init).

As you may know, ZFS stands for Zettabyte File System. When you are dealing with huge pools of physical drives, you are indeed interested in shaving milliseconds off your petabyte-sized storage; for regular servers, there's really no need to learn the complexities of the ZFS internals.

When you use ZFS with dedicated physical drives, it is able to talk directly to the disk controllers, so you can deal with small nuances and optimize for your specific hardware; however, this is pretty advanced and specific, and the default settings work just as well on a partition as they would with a single dedicated disk.
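
If you want to peek at those defaults on a partition-backed pool, a couple of read-only checks are enough (the pool name "tank" is an assumption):

    # Sector-size alignment ZFS picked at pool creation time:
    zpool get ashift tank

    # Compression and record size currently in effect:
    zfs get compression,recordsize tank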

ZFS really shines when you use it with lots of disks.

You are awesome. THANK YOU!!!

Glad to be of help…

Hi,

Thank you for those tips, I was asking myself the same question as Andrew 🙂
I was hesitating between:

  • Cloud VPS + ZFS partition (NVMe)
  • Baremetal with dedicated disks (SSD)

The second option seems to be the best.
Thx!

The issue here is about the trade-off between cost and what is adequate for your specific case.

The ideal would be to have several physical PCI Express NVMe drives (rather than SATA SSDs) on a baremetal server.
If you can afford it, go for that.

Hi @simos

Thank you for this comment, I'll do that!
For my understanding, why do you not recommend SSDs (speed aside)?

NVMe drives are much faster than SATA SSDs.

Latency and IOPS are also better with NVMe (really important for MySQL).

The choice of storage device is a tradeoff between cost, ease of setup, what is really necessary for your needs, and probably other factors.

If cost is not an issue, then you would go for multiple NVMe devices (not NVMe-backed network storage, but NVMe devices in a baremetal server).

In practice, you often try to lower the costs, and you can consider cheaper options as long as you get acceptable performance and reliability for your needs. You can measure performance with available tools. For reliability, you take the statistics into account (how likely the device is to fail in some way) and develop backup plans for when a failure does happen.
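
For example, one commonly used tool is fio; a short random-read test against a directory on the storage you care about gives comparable IOPS and latency numbers (the path, size and runtime below are placeholders to adjust):

    # Install fio and run a 60-second 4K random-read test.
    sudo apt install fio
    fio --name=randread --directory=/path/on/the/pool --rw=randread \
        --bs=4k --size=1G --numjobs=4 --time_based --runtime=60 \
        --group_reporting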

512 GB NVMe drives are getting really cheap.

Example on Newegg: $57

Patriot SCORCH M.2 2280 512GB PCI-Express 3.0 x2 with NVMe 1.2 Internal Solid State Drive (SSD) PS512GPM280SSDR Internal SSDs - Newegg.com

The problem with M.2 is finding a good board that lets you stack several of these…

You mean like these?
https://www.icydock.com/goods_cat.php?id=174

I bought one, installed it in one of my spare 5.25" drive bays, and now have 4 drives installed using 1 bay.

Hi,

Thanks everyone for these answers and tips 🙂

Vurtn3