So far I’ve only ever used LXD on Arch Linux systems, where everything is always current. I’m about to do a deployment on Ubuntu 20.04. When I checked the distro package list, the version of LXD that comes with 20.04 is 0.9? Is that even possible? But that’s the only thing that comes up when I search for packages. I’d prefer to avoid using a snap for this project… Does anyone know if there’s an LXD PPA?
The version you’re looking at (0.9) is the version of the package which migrates pre-snap users over to the snap package on Ubuntu.
For new installs, Ubuntu server will usually come pre-installed with the latest LXD LTS release, so currently 4.0.7. You can then switch that over to the latest release track, which gets you 4.19 (snap refresh lxd --channel=latest).
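For reference, the track switch mentioned above is a one-liner; a quick sketch, assuming the LXD snap is already installed:

```shell
# Show the currently installed LXD version and which channel it tracks
snap list lxd

# Move from the 4.0 LTS track to the latest stable track
sudo snap refresh lxd --channel=latest/stable
```

`snap list` is just a sanity check before and after the refresh; the channel change itself only needs the second command.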
Sounds like I’m being pushed into using a snap. <:)
I read somewhere that the snap version of LXD doesn’t use /etc/sub[uid|gid]? What other differences am I going to need to worry about? Is there some documentation on this?
Yeah, the snap doesn’t care about /etc/subuid and /etc/subgid which mostly means less work configuring the system to get it working.
The other visible differences likely are:
- Data is in /var/snap/lxd/common/lxd/ instead of /var/lib/lxd/
- Snaps use mount namespaces, so you’ll see a lot less LXD-related stuff in your mount table, and if you need to see what the snap sees, you’ll have to go through /var/snap/lxd/common/mntns/
- The snap includes all its dependencies, so LXD will usually not use any of the tools from your host system. This can occasionally be an issue if your host system has a much more recent version of, say, LVM or Ceph than what’s in the snap, but both of those have snap config options to force the snap to use the system tools instead. None of that will be an issue for you though, as the snap is based on Ubuntu 20.04 and your host system will also be 20.04, so all versions will line up anyway.
Ah and snaps automatically update unless configured otherwise.
We have a forum post covering that: Managing the LXD snap
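As a sketch of what that post covers, the usual snapd knobs apply here; exact options depend on your snapd version:

```shell
# Constrain all snap refreshes to a maintenance window
# (snapd system option; timer syntax per the snapd docs)
sudo snap set system refresh.timer=sun,02:00-04:00

# Or, on snapd 2.58 and newer, hold refreshes for the LXD snap
# specifically for a given duration
sudo snap refresh --hold=72h lxd
```

Neither of these disables updates outright; they just give you control over when they land.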
However I do care about uid mapping. I’m already having to deal with uid issues resulting from a Samba PDC to AD-DC upgrade and would like the LXD uid mapping to be deterministic. If the snap version of LXD doesn’t use /etc/sub[uid|gid], then how does it map uids, and is this configurable? I read through part of a lengthy discussion of this here:
but it seemed to end inconclusively.
When running on the snap, the range LXD uses is 1000000-1001000000, which provides 1 billion uid/gid by default to containers.
If you want each container to be isolated with its own separate range, you can set security.idmap.isolated=true, then control how many uid/gid are assigned with security.idmap.size and where the range begins on the host with security.idmap.base.
What if I just want to use the usually suggested default for the non-snap version, namely 1000000 - 1065536?
I’m worried that anything over one billion will start to interfere with the Samba RID → UID mapping scheme.
Samba won’t notice it.
To the inside of the container it will look like you went from just having 0-65536 be valid to now having 0-1000000000 be valid.
You could get the exact same behavior as you had before by setting this on the container, but that’s not needed:
(Assuming a normal default where it starts at 100000 and not 1000000 as you suggest, otherwise add a zero above)
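Presumably the per-container settings being referred to would look something like the following; a sketch only, assuming a container named c1 and the 100000/65536 defaults discussed above:

```shell
# Give the container its own isolated idmap range
lxc config set c1 security.idmap.isolated true

# Start of the range on the host (add a zero for a 1000000 base)
lxc config set c1 security.idmap.base 100000

# Number of uid/gid mapped into the container
lxc config set c1 security.idmap.size 65536

# idmap changes only take effect after a restart
lxc restart c1
```

Note that changing the idmap of an existing container triggers a re-shift of its filesystem on next start, which can take a while for large containers.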
If I’m mapping an external directory into the container, say by using a bind mount, then I think it does matter how the UIDs get mapped, or no? BTW, I’ve been meaning to ask and can put this in as a separate topic, but is the best way to handle this still using extended POSIX ACLs? I.e., if foobar is owned by UID 1562224688 in the bare metal OS and I want the LXD user with UID 1001 to have rw access to this file, then I would
setfacl -m u:101001:rw foobar
or is there a better way to allow file sharing between systems?
Ah yeah, if you’re dealing with ownership on a shared filesystem, then it does matter, and the above should let you do that.
For your other question, ACLs are the safe option though they may look a bit odd.
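The arithmetic behind the setfacl example above is just an offset: a container uid appears on the host as the start of the container’s range plus that uid. A sketch, assuming the 100000 base discussed earlier:

```shell
# Host uid = idmap base + container uid
base=100000
container_uid=1001
host_uid=$((base + container_uid))

# Container uid 1001 shows up on the host as 101001, which is the
# uid used in the setfacl command above
echo "$host_uid"
```

So if you change security.idmap.base, the uid you grant in the ACL shifts by the same amount.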
You could alternatively map uid 1562224688 from the host to uid 1001 in the container using raw.idmap, but that means that anything which runs as 1001 in the container will also be running as 1562224688, which may be problematic.
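As a sketch of that alternative, assuming a container named c1 (raw.idmap entries take the form "uid|gid|both <host id> <container id>"):

```shell
# Map host uid 1562224688 onto container uid 1001
# (uid-only entry; use "both" to map the matching gid as well)
lxc config set c1 raw.idmap "uid 1562224688 1001"

# The new map only takes effect after a restart
lxc restart c1
```

This makes the ACL approach unnecessary for that one uid, at the cost of the identity overlap described above.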
On newer kernels, we’re starting to use the new VFS idmap shifting feature but that’s still quite limited (only works on ext4, xfs and vfat) and won’t work for a uid/gid that’s outside of what’s available in the container.
Thanks, that was super helpful. I still haven’t wrapped my brain around how VFS idmap shifting works (but am pretty sure I’ll have to soon).