They use the stock Ubuntu image with the stock Ubuntu Linux kernel. LXD with ZFS works fine.
LXD is pre-installed in the image; however, when you run “lxd init”, it gets stuck. You need to purge the package and then reinstall it. I have some additional info in the article. It looks like a bug.
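In case it helps, the workaround looks roughly like this (assuming the deb packaging of LXD rather than the snap; package names may differ on the image):
# remove the stuck installation, then reinstall and initialize again
sudo apt purge lxd lxd-client
sudo apt install lxd lxd-client
sudo lxd init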
They will probably offer Ceph network storage soon. It would be interesting to try as well.
I noticed you use ZFS with a loop device. Does this have acceptable performance? Is a loop device equivalent to creating a zpool from a file? Scaleway seems to offer additional SSD block storage. Wouldn’t that be a better option to use with ZFS?
I am not sure about the proper ZFS terminology for this: loop device file, zpool from a file, or sparse file. The idea is that it is not a raw block device; the I/O first goes through some other existing filesystem.
There should be some performance hit, but if the workload is not I/O-intensive, it is generally OK, especially if you have two or more vCPUs in the VPS.
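For reference, the file-backed approach is roughly the following, which is also more or less what “lxd init” does for you when you pick the zfs backend with a loop device (the file path and the 20G size are just examples):
# create a sparse file and build a zpool on top of it
sudo truncate -s 20G /var/lib/lxd-pool.img
sudo zpool create lxdpool /var/lib/lxd-pool.img
sudo zpool status lxdpool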
With many VPSes it is possible to repartition the storage, thereby making space to create a ZFS pool directly on a block device. You generally do that when you boot into rescue mode. I have not tried it with Hetzner, though I am fairly confident it is doable. Scaleway cannot do that repartitioning. Linode is quite good here because they support repartitioning through the management interface!
Scaleway has network block storage for all storage needs. I think other companies let you use storage from the local SSDs of the server, which is much better for I/O. That is, a 20GB VPS gives you 20GB on that local server. In terms of I/O, it remains to be tested which is better: 1) ZFS on a loop device but on a local real SSD, or 2) ZFS on network block storage.
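If someone wants to measure this, running the same fio test once on each pool would give a rough idea (this is a generic random-write invocation, not a benchmark I have run for this comparison; the directory is a placeholder for a dataset on the pool being tested):
fio --name=zfs-test --directory=/lxdpool/fio-test --rw=randwrite --bs=4k --size=1G --ioengine=libaio --iodepth=16 --runtime=60 --time_based
Keep in mind that without direct I/O (which ZFS traditionally does not support) the ARC cache will flatter the numbers, so compare the two setups against each other rather than reading the absolute figures.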
And an article on how to repartition the disk of the VPS so that a new partition is created, in order to put ZFS in there for LXD. In the case of Hetzner, you go into rescue mode and use the installimage command to repartition and install a new OS image. Compared to other VPSes, this one is of moderate difficulty.
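With installimage, the repartitioning comes down to the PART lines in the config file it opens in the editor. Something along these lines should leave the rest of the disk unpartitioned for a later ZFS pool (sizes are only illustrative; the DRIVE/IMAGE lines that installimage already puts there stay as they are):
DRIVE1 /dev/sda
PART swap swap 4G
PART /boot ext3 512M
PART / ext4 30G
# no PART line claims the remaining space, so it stays free
# and can later be turned into a partition for the zpool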
Hetzner now offers add-on volumes (block devices) for its VPS series, so you can create a ZFS pool for LXD without repartitioning the OS disk. I tried them and they perform well. The zpool is not limited by the size of the VPS, and if you need more space, you can easily add another volume to it.
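In practice it is just a couple of commands (the volume IDs, pool name, and LXD storage name here are made up; check the actual device name under /dev/disk/by-id/ after attaching the volume):
# create the pool on the attached volume
sudo zpool create lxdpool /dev/disk/by-id/scsi-0HC_Volume_1234567
# tell LXD to use the existing zpool
lxc storage create mypool zfs source=lxdpool
# later, when you need more space, attach a second volume and grow the pool
sudo zpool add lxdpool /dev/disk/by-id/scsi-0HC_Volume_7654321
Note that adding a second volume this way gives you a stripe with no redundancy at the zpool level.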
I haven’t used it in practice, except for loading that database. It’s a real-life database from a website/forum. I find this load time acceptable. ZFS is not optimal for MySQL, which might perform better on ext4, but the convenience of snapshots and fast container copies with ZFS is worth it to me.
The Hetzner add-on volumes were not available until recently. They are still in beta.
Try it. It costs 0.004 EUR/hour for the cheapest VPS, so you can try it for a few days for pennies. I used a VPS for 396 hours and got billed 1.52 EUR. I used a couple of cloud volumes too, which are free until December 6. After that they will cost 0.04 EUR/GB/month, and I believe they will be charged hourly, plus VAT. The only hitch is that you need to trust them with a credit card, so that they can charge you automatically every month according to your usage.
I’ve used another company’s VPS with LXD before. It worked fine, but I was limited by the 10 GB disk. I could reliably fit an Ubuntu container and a few Alpine containers with the “dir” storage backend. I also tried to use it with ZFS on top of a file. I could launch a few Ubuntu containers, thanks to the ZFS copy-on-write sharing between them, but I ran out of disk space at unpredictable times (e.g. when I did an apt-get upgrade, or when I deleted images).
I currently have a dedicated Hetzner auction server with 40 running containers (50 in total). Some of these are test containers, e.g. for installing some software in order to evaluate it. I use 800 GB of disk space (mostly videos and photos from some events, which I share through Nextcloud). 800 GB would cost me 32 EUR/month (+VAT) in the Hetzner cloud, so the dedicated server is cheaper. In the meantime, Hetzner launched Nextcloud servers at 7.90 EUR/1 TB/month, so perhaps I could move my media files there (it’s not clear to me how) and replace the dedicated server with a few VPSes + cloud volumes. The economics of servers keep shifting as new products and services come along. Things generally get better and cheaper, although occasionally some companies raise prices.
LXD provides a big advantage in this quest for the best option: it makes it easy to switch hardware and companies, because you can easily move containers between hosts, either by using “lxc copy” between two live hosts, or by exporting the containers as images and then importing them on another host. I used this to move my containers from a dedicated server temporarily to a VPS with attached storage (Ceph, slow), so I could rebuild my dedicated server (upgrade the OS, replace the software RAID with mirrored ZFS) and then move them back again.
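For anyone who has not done this before, the two approaches look roughly like this (the remote, container, and image names are made up; for the first method the target host needs its LXD API exposed and a trust password set):
# method 1: copy between two live hosts
# on the target: lxc config set core.https_address "[::]:8443"
lxc remote add newhost <target-IP>:8443
lxc stop mycontainer
lxc copy mycontainer newhost:mycontainer

# method 2: publish the container as an image, export it, import it elsewhere
lxc snapshot mycontainer snap0
lxc publish mycontainer/snap0 --alias mycontainer-backup
lxc image export mycontainer-backup .
# transfer the resulting tarball, then on the new host:
lxc image import <tarball> --alias mycontainer-backup
lxc launch mycontainer-backup mycontainer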
Exactly… I’ve used Scaleway for some very simple, low-load stuff, but I don’t think they’re worth trusting with more important work… Hetzner looks very interesting now that they have volumes… iwstack are good but a little pricey compared to Hetzner…
I’m guessing you’d recommend Hetzner for LXD containers then?
I have 40 running containers on a machine with 16 GiB RAM. All are low-load containers. Here are the memory lines from top:
KiB Mem : 16355176 total, 1894228 free, 9479844 used, 4981104 buff/cache
KiB Swap: 8380412 total, 5199824 free, 3180588 used. 6144168 avail Mem
I’ve put my heaviest load on a VPS (2CPU/2GB). Part of the reason I did that was to see if it could handle it (it does). I’m now considering whether to move that load to my dedicated server or to a Hetzner VPS (with LXD).