Why do different Ubuntu container images provide different group IDs for LXD inside containers?

Today I downloaded the Ubuntu 19.04 image from the ubuntu: remote and noticed strange behavior of the lxd group in the container. I am wondering whether this is a bug in LXD or whether something changed in Ubuntu 19.04. Here is how to repeat the experiment:

Download the images from the remote repository:

$ lxc image copy ubuntu:19.04/amd64 local: --alias ubuntu_1904
$ lxc image copy ubuntu:18.04/amd64 local: --alias ubuntu_1804

Next, launch a container from the 18.04 image and observe the group IDs:

$ lxc launch ubuntu_1804 1804
$ lxc exec 1804 -- id ubuntu
uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(lxd),114(netdev)

Repeat with version 19.04:

$ lxc launch ubuntu_1904 1904
$ lxc exec 1904 -- id ubuntu
uid=1000(ubuntu) gid=1001(ubuntu) groups=1001(ubuntu),1000(lxd),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),116(netdev)

Notice how, in the latter case, the ubuntu group got bumped up to 1001. This is because lxd now has group ID 1000, instead of 108 as in Ubuntu 18.04.

Is this a bug in LXD, or is this normal behavior? If the latter, what can one do to enforce consistent group IDs inside the containers?
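For what it's worth, one possible workaround (a minimal sketch, not an official recommendation) is to pass cloud-init user-data that moves the lxd group off GID 1000 and restores the ubuntu user/group to 1000. The specific GIDs and the assumption that the image runs cloud-init are mine; adjust as needed:

```yaml
#cloud-config
# Hypothetical workaround: pin the ubuntu group back to GID 1000.
# GID 108 for lxd is an arbitrary free choice, not a requirement.
runcmd:
  - groupmod -g 108 lxd                                # move lxd off GID 1000
  - groupmod -g 1000 ubuntu                            # reclaim 1000 for ubuntu
  - usermod -g 1000 ubuntu                             # update the user's primary group
  - find /home/ubuntu -gid 1001 -exec chgrp 1000 {} +  # fix ownership of existing files
```

You could then attach this at launch time, e.g. `lxc launch ubuntu_1904 1904 --config=user.user-data="$(cat user-data.yaml)"`, or set it on a profile so every container gets it.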

The UID/GID of the account in a container image is independent of LXD.
It depends on the image-building tool, distrobuilder, but mainly on the runtime produced by each distribution.

That is to say, if you install Ubuntu 19.04 on a computer or a VM and get gid 1001, then this is an issue with the distribution. If you install Ubuntu 19.04 and the lxd group is not gid 1000, then I think it is an issue with distrobuilder, to be filed at https://github.com/lxc/distrobuilder/issues

Actually, this is the ubuntu: remote, which provides the official Ubuntu images built by the Ubuntu team, not by us.

I’ve reported the issue internally to the cloud team, hopefully they can track down what’s going on easily enough.

This should be the link:

Indeed. I deleted the comment with the wrong link; not sure why my browser autocompleted to that old issue :slight_smile: