Need sanity check/help with finishing touches to set up my LXD node

Hello all,
after a whole day of googling and fooling around, I am nearly at the end of “prodding” an LXD node in my homelab. I have a few questions that are blockers for me due to my lack of knowledge :slight_smile: :

  1. General: I managed to install on Ubuntu 18 Server via snap after a few tries. Ubuntu 18 comes with LXD preinstalled as a .deb package, so to move to the snap install I had to remove everything via sudo apt remove --purge lxd lxd-client liblxc1 lxcfs; otherwise, when just removing lxd with sudo apt purge lxd*, I got a socket error while connecting to LXD after the initial init. Is this OK?

  2. Storage: I have RAIDZ2 (5x 4TB) storage based on HDDs. Now, a few questions here:
    a) I am waiting for power cables for my PSU so I can connect 2 SSDs and create a ZFS mirror, and I’d like to know the best way to change the default storage to the SSDs and migrate all containers to it.
    b) I have a specific scenario where the host shares storage with one or two containers. I found a way to share one, but my users have different uid/gid values, and here is my question: how do I do this properly? The host, where the user’s uid/gid is 412:415, should be happy with RO access, while the container(s) will need RW access. This is the biggest blocker for me, as everything is happening on a single host (NAS/storage), and I might have a NUC cluster + separate storage in the future.

  3. Profiles/Networking.
    I managed to create br0 with Netplan so some containers would have direct access to my LAN. Then I created natbr0 via LXD so I would have NAT for the other containers. So I created 3 different profiles:
    a) default -> br0
    b) NAT -> natbr0
    c) dualnic -> br0 + natbr0
    All works cool, in a way. I noticed that with the default profile there is no issue: my LAN DHCP server assigns addresses. When creating containers with either NAT or dualnic, though, the fun starts. Please note that for me eth0 is my LAN and eth1 is LXD’s internal LAN, and all of this is tested on Ubuntu 18.04 containers.

  • So with dualnic, eth0 gets an address while eth1 does not, so I must disable cloud-init in it and add eth1 manually.
  • Same goes for NAT. In this profile I’ve set eth1 as the network device so that when listing I can see which container is connected to which network. Similar story to dualnic: cloud-init pushes config to eth0 anyway.
    How to fix this? :slight_smile:

I think that is all I have for now. I’d appreciate it if someone could point me in the right direction, help me with setting this up, or say that all is good :stuck_out_tongue: Let me know if you need specific settings from my current setup so you can see how this is configured right now.

Hi Artur!

  1. Indeed, when you use Ubuntu, which already has the deb package of LXD, you can either (a) first remove the deb packages for LXD, or (b) just install the snap package of LXD and then run sudo lxd.migrate to migrate the data and remove the deb packages. If you want to remove the deb packages of LXD yourself, you can also use this single-line command: sudo apt-get remove --auto-remove lxd.
  2. Use lxc storage create to create the new storage pool as desired, and then lxc move to move the containers from the old storage pool to the new one. For 2b, better to ask a separate question.
  3. There should not be an issue with creating containers with the NAT profile, unless something is missing from the description you are giving. For the dualnic profile, it is up to the container to figure out from where to get the IP addresses and whether to up both interfaces. It is a matter of testing to see what happens. LXD makes both eth0 and eth1 appear in the container, and the container runtime will do what it needs with them.
    From your description, there is no hint that you are actually using cloud-init in the profiles. The network configuration in the LXD profiles is not made known to a container through cloud-init; you can use cloud-init to add specific extra network configuration, but if you run into an issue with that, show the full LXD profile with the cloud-init instructions (lxc profile show myprofile).
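For 2a, the sequence might look roughly like this (a sketch; the pool and container names are examples, and it assumes you create the ZFS mirror on the SSDs first):

```shell
# Create a new LXD storage pool on the SSD mirror (assumes a ZFS pool
# named "ssdpool" already exists on the two SSDs)
lxc storage create ssd zfs source=ssdpool

# Move a container to the new pool: move it under a temporary name
# with --storage, then move it back to its original name
lxc stop mycontainer
lxc move mycontainer mycontainer-tmp --storage ssd
lxc move mycontainer-tmp mycontainer
lxc start mycontainer

# Optionally make the SSD pool the default for newly created containers
lxc profile device set default root pool ssd
```

Repeat the stop/move/start steps for each container you want on the SSDs.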
  1. Thank you for the confirmation :slight_smile: I preferred to install LXD from scratch. Ubuntu 18.04 came with LXD 3.0.3 IIRC, while the snap has the latest version :smiley:

  2. a) Will test it by the end of the week. b) I will do that!

  3. I thought that profiles were connected to cloud-init somehow, or at least I understood it that way. Anyway, here is my dualnic profile + test container -> (formatting is terribly wrong here).

As you can see, I have two NICs (eth0 and eth1). That info is passed to the container (eth1 is up), BUT netplan is not configured for that specific port. Am I missing something here?

The network configuration does not pass to the containers with cloud-init, because you can set up, for example, a bridge or macvlan NIC on a container image that does not support cloud-init and it will still work. For example, test with images:alpine/3.7.

I think that the /etc/netplan/50-cloud-init.yaml file did not come from LXD, and it shows that, by default, the container will only try to get a DHCP lease for the device eth0. You can perform some black-box verification: create an LXD profile that uses the name myeth0 and check whether the launched container has a mention of myeth0 in /etc/netplan/50-cloud-init.yaml!
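That black-box check might look like this (a sketch; the profile and container names are just examples, and it assumes the default profile has a NIC device called eth0):

```shell
# Copy the default profile and rename the NIC as seen inside the container
lxc profile copy default myeth0-test
lxc profile device set myeth0-test eth0 name myeth0

# Launch a container with the modified profile and inspect netplan
lxc launch ubuntu:18.04 nettest --profile myeth0-test
lxc exec nettest -- cat /etc/netplan/50-cloud-init.yaml
# If the file still only mentions eth0, it did not come from LXD
```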

To get your containers to ask for a DHCP lease on both eth0 and eth1, you can use cloud-init so that the container asks for a lease on eth1 as well.
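A sketch of that cloud-init configuration, set as the user.network-config key on the profile (this assumes the container image ships with cloud-init, e.g. ubuntu:18.04):

```yaml
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
      - type: physical
        name: eth1
        subnets:
          - type: dhcp
```

You can add this by editing the profile with lxc profile edit dualnic.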

I’ve managed to create a test network with the interface name eno1, but LXD still configures the Alpine image with eth0:

In the end this is not a big problem, but it would be cool to understand it :slight_smile:

I kind of liked LXD while doing this “production” setup, although the documentation is tough to read. I love to see examples, and here kudos to Oracle’s documentation for ZFS :stuck_out_tongue: learning by example is the best :smiley:

The eth0 in Alpine is likely the default network configuration of the container image.
You can try the same with ubuntu:18.04 and inspect whether the container sees eno1 or eth0. This should be conclusive proof that LXD does not use cloud-init unless you add cloud-init instructions explicitly in the profile.
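That test might look like this (the profile name eno1test and container name u1 are examples, not from your setup):

```shell
# Launch an Ubuntu container with the profile that names the NIC "eno1"
lxc launch ubuntu:18.04 u1 --profile eno1test

# The interface name that LXD set is visible inside the container
lxc exec u1 -- ip link

# The image's stock netplan file, however, still only configures eth0
lxc exec u1 -- cat /etc/netplan/50-cloud-init.yaml
```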

When you run lxc image list images:, you will notice duplicate image names, such as debian/10 and debian/10/cloud. The cloud ones are container images with cloud-init support.
Also, all containers in ubuntu: have cloud-init.
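For example (the container name d1 is an example):

```shell
# List the plain and cloud-init variants of the Debian 10 image
lxc image list images: debian/10

# Launch the cloud-init-enabled variant
lxc launch images:debian/10/cloud d1
```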

The LXD documentation has more details on this. You just need to add the appropriate configuration to the profile in order to create customized containers.

I wrote the following, which shows how to set up an LXD profile with cloud-init instructions that configure two network interfaces in the containers: an interface on the private bridge and an interface on the LAN (through macvlan).
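A minimal sketch of such a profile (the bridge name lxdbr0, the parent interface enp5s0, and the profile name twonics are assumptions; adjust them to your setup):

```yaml
config:
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: dhcp
      - type: physical
        name: eth1
        subnets:
          - type: dhcp
description: Two NICs, one on the private bridge and one on the LAN
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    name: eth1
    nictype: macvlan
    parent: enp5s0
    type: nic
name: twonics
```

Then launch with lxc launch ubuntu:18.04 mycontainer --profile twonics, and both interfaces should request a DHCP lease.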