Setting up a home network with KVM and LXD

Hello,

I’m looking for guidance on setting up my home server. I have an HP ProLiant MicroServer Gen8 G1610T box upgraded with 12 GB of RAM.
I have been using it and plan to continue using it for some critical applications AND as a homelab to test and play with new OSes, technologies and solutions.

TL;DR
If this post is too long for you, please just look at the diagram and the questions below it.

Background

In short, my plan is to isolate my production environment from the testing ground, so I won’t mess with services that should be up and running most of the time. Also, I am occasionally away from the physical box, so if I screw up the network configuration (which I’ve done…) or my host won’t boot up, I’m f.u.b.a.r. For that reason, I decided to have one VM for all critical applications and others for testing, according to my needs.

What I want

  1. Separate testing environments from services I don’t want to break
  2. Have remote access to administer the machine and install / reinstall OSes
  3. Use snapshots as point-in-time backups that I can revert to if something goes wrong

My plan

To achieve further fail-proof isolation and protect against config contamination, I want to put those critical applications in containers, preferably LXD ones, because I can then provision them with Ansible without much tweaking (the same way I did before adding any VM / container layer).
Because my server sits at home behind my home router, I have to forward ports to it in order to access it from the internet, so my solution is as follows:

  1. Ports 80 and 443 for web applications point to a container running nginx as a reverse proxy (directing cloud.mydomain.com to container 2, git.mydomain.com to container 3, etc.).
  2. All other applications are separated into LXD containers according to their purpose.
  3. Other services (like XMPP) are forwarded by the router directly to the specific container.
  4. The bare-metal host OS is Ubuntu 18.04 installed on LVM to allow easy rollback and backup with snapshots. Livepatch and unattended upgrades are set up. The host OS has a minimal set of applications installed and all ports except SSH closed. For virtualization, KVM is used.
  5. On VM PROD another Ubuntu 18.04 is installed, with the latest snap LXD and several per-service containers.
  6. Both HOST and VM PROD have a bridge set up so every new VM and container can have its own IP on my LAN, which makes further configuration easier.
  7. Each single machine (VM and container) is provisioned by Ansible with separate playbooks.
  8. For snapshots and backups, ZFS is used for containers and LVM for VM PROD and the HOST (see the lvcreate sketch after this list).
  9. Storage consists of 1 SSD for the HOST OS and 4x 3 TB WD Red drives. I have not decided how to set them up yet.
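
To make point 8 concrete, here is a minimal LVM snapshot sketch of what I have in mind; the volume group / LV names (ubuntu-vg/root) and the 10G size are placeholders, not my actual layout:

    # Take a point-in-time snapshot of the host root LV before risky changes.
    sudo lvcreate --size 10G --snapshot --name root-pre-upgrade /dev/ubuntu-vg/root
    # Roll back later if needed; the merge takes effect when the LV is next
    # activated (i.e. after a reboot, for the root LV).
    sudo lvconvert --merge /dev/ubuntu-vg/root-pre-upgrade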

Below I attach a diagram of what I plan to set up and would like to hear some feedback about it - what are the possible pitfalls and drawbacks, and what should I do differently?

QUESTIONS

  1. How do I orchestrate / administer it? Since every container / VM is a full separate OS, how do I batch-upgrade them? With Ansible?
  2. The HP ProLiant has two NICs - how can I isolate the home LAN (Samba) from the internet?
  3. Should I put Samba on the HOST rather than in LXD for performance reasons?
  4. How do I manage storage? To expose folders / disks to containers I have to share them with VM PROD and then with LXD - is there a big I/O penalty for that? Can I go with ZFS (since the HOST is Ubuntu 18.04, which supports it without DKMS)?
  5. Is there a big performance hit in putting ZFS storage for LXD containers on LVM?

Any additional comments and shared experience would be much appreciated.

Automate with Ansible, Salt, Puppet, remote API requests with curl, or your own scripts, whatever takes your fancy.
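
For example, something like this Ansible ad-hoc one-liner for batch upgrades (the “homelab” inventory group is just a placeholder):

    # Upgrade every VM / container in the (hypothetical) "homelab" inventory group.
    ansible homelab -b -m apt -a "update_cache=yes upgrade=dist"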
With 2 NICs, either bond them, give the bond an L3 interface and route to your virtual networks which reside inside your machine (most likely requires routing changes on your LAN), or NAT on egress and port forward on ingress.
The alternative is to bridge the bondX interface with your LXD / KVM bridge, so the virtual machines / containers get an address on your LAN.
More complex things are available, like what I do: an OVS bridge bridged to your bondX, trunking specific VLANs from your switch. You can isolate that way if needed, for instance if you want to put Samba on its own VLAN. There are so many ways with the networking that really there is no right way. I would advise routing to your containers if you have a decent switch/router capable of L3; the next hop will be your MicroServer’s address from your core switch / firewall.
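As a rough, non-persistent sketch of the bridging option (the interface names eno1, eno2, bond0 and br0 are placeholders; on 18.04 you’d normally persist this via netplan rather than type it by hand):

    # Bond the two NICs, bridge the bond, and let guests sit on the LAN.
    ip link add bond0 type bond mode 802.3ad
    ip link set eno1 down && ip link set eno1 master bond0
    ip link set eno2 down && ip link set eno2 master bond0
    ip link add br0 type bridge
    ip link set bond0 master br0
    ip link set eno1 up; ip link set eno2 up; ip link set bond0 up; ip link set br0 up
    # Point LXD's default profile at the bridge (assuming it has no eth0 device yet):
    lxc profile device add default eth0 nic nictype=bridged parent=br0 name=eth0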
If you’re really bothered about isolation, then use them as two separate NICs: one can be used for Samba and connected to its own VLAN on the switch, and then firewall the traffic at your router.
You also have the option of VRF on Linux now, with which you can create a bridge and put it in its own routing table for extra security. It’s also useful for management interfaces, just in case you muck up your main routing table and kick yourself off the “in band” management address.
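A minimal VRF sketch, assuming a dedicated management bridge called br-mgmt and an arbitrary table number:

    # Put the management bridge in its own routing table so breaking the main
    # table doesn't cut off the "in band" management address.
    ip link add mgmt type vrf table 10
    ip link set mgmt up
    ip link set br-mgmt master mgmt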
You can create and mount storage volumes in LXD if that’s what you mean: ZFS, btrfs, etc.
I’m not a storage person at all, but why use LVM if you’re using ZFS? I would just give LXD a pair of disks as block devices so they are owned by ZFS entirely.
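
Something along these lines, for example; the disk names, the “tank” pool and the “mycontainer” container are placeholders:

    # Give two whole disks straight to ZFS, then hand LXD a dataset on the pool.
    sudo zpool create tank mirror /dev/sdb /dev/sdc
    sudo zfs create tank/lxd
    lxc storage create pool1 zfs source=tank/lxd
    # Custom volumes can then be created and attached to containers:
    lxc storage volume create pool1 shared-data
    lxc storage volume attach pool1 shared-data mycontainer /srv/data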

I’ve not noticed any performance hit with zfs

I even run my KVM virtual machines in an LXD container; whether or not that is advisable, I’m not sure!

Another option is to run Proxmox VE and let that deal with KVM, then install LXD via snap. I can vouch for that; I have it running on a Hetzner server.

Jon.

  1. For package upgrades, Ubuntu uses unattended-upgrades and the default configuration is to install all security upgrades within about a day of their release.
    You can change the default to include the auto-installation of all upgrades, if you want to.
  2. You can disable IP forwarding between the two NICs, and set up the router not to allow communication between them within the LAN (a sketch of both follows).
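
A quick sketch of both points, assuming Ubuntu 18.04 defaults:

    # Point 1: to auto-install all updates rather than just security ones,
    # uncomment the "${distro_id}:${distro_codename}-updates" origin in
    # /etc/apt/apt.conf.d/50unattended-upgrades.
    # Point 2: stop the host routing between its two NICs (skip this if you
    # rely on a NATed lxdbr0, which needs forwarding enabled):
    echo "net.ipv4.ip_forward = 0" | sudo tee /etc/sysctl.d/99-no-forwarding.conf
    sudo sysctl --system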

Greetings,

I am in the middle of constructing my system and I have run into some fundamental issues that I have to reconsider before moving on. The most important of them is how to share storage between the HOST, the VM and the LXD containers.

To sum up my draft setup:

  1. HOST OS: Ubuntu 18.04 with KVM, on LVM on the /dev/sda SSD, with a ZFS RAIDZ1 pool made from /dev/sd[b-e] (4x HDD)
  2. VM: Ubuntu 18.04 as a raw image on LVM on the same /dev/sda - as I read, raw images tend to be slightly faster than qcow2 and I can still do live snapshots thanks to LVM
  3. LXD containers INSIDE VM

Now, I want to make the best use of the ZFS storage that the HOST OS takes care of. However, the pool cannot be managed by either the VM or LXD because of the virtualization layer. So my solution is (a command-level sketch follows the list):

  1. Share a ZVOL block device with the VM, where it shows up as /dev/vdb
  2. Use that disk as the block device backing ZFS storage for LXD (so I can use the snapshot functionality within LXD)
  3. NFS-mount tank/nextcloud from the HOST OS into one of the LXD containers to provide more storage
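
A rough sketch of that plan; the pool name “tank”, the VM domain “prodvm”, the 200G size and the LAN subnet are placeholders:

    # 1. On the HOST: carve out a ZVOL and hand it to the VM as a block device.
    sudo zfs create -V 200G tank/prodvm-lxd
    sudo virsh attach-disk prodvm /dev/zvol/tank/prodvm-lxd vdb --persistent
    # 2. Inside the VM: let LXD build its ZFS pool on that disk.
    lxc storage create vmpool zfs source=/dev/vdb
    # 3. On the HOST: export tank/nextcloud over NFS for the Nextcloud container.
    echo "/tank/nextcloud 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
    sudo exportfs -ra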

I guess at this point I realized I am overcomplicating this and going against the KISS rule. I know I can just store all LXD containers directly on the host, but my reasoning for the initial solution is:

  1. Having everything inside one VM makes it more portable and easier to back up
  2. In case of a system restore, it’s easier to restart a VM than the HOST
  3. Adding an additional layer of virtualization makes it more secure? Or is that just a false sense of security?

I am open to suggestions and would certainly like to hear an opinion from someone more seasoned than me. In particular, I have the following questions:

  1. Is there a serious I/O performance hit in creating a ZFS pool for LXD from a ZVOL block device?
  2. Is an NFS mount from the HOST to an LXD container and/or the VM a good idea to expose storage for an application like Nextcloud? Are there better alternatives?

I would put LXD on the host; it’s very portable and easy to back up with lxc copy, lxc move, lxc export (to file) and the soon-to-arrive lxc refresh for incremental updates. Just make sure it has a ZFS dataset as a source for storage.
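
For reference, a quick sketch of those commands; the container name and the “otherhost” remote are placeholders:

    lxc export nextcloud nextcloud-backup.tar.gz   # dump the container to a tarball
    lxc copy nextcloud otherhost:nextcloud         # copy it to another LXD host
    lxc move nextcloud otherhost:nextcloud         # or move it there entirely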

You seem to be losing the benefits of LXD by sticking it in a VM, other than for doing test / proof-of-concept work.

Just my two pence anyway… :slight_smile:

Cheers!
Jon.


And yes, it will be more “secure” in a VM if you are trying to keep the security folks happy: no shared kernel with the host.