Kanidm PAM and nsswitch in Incus (LXD) system container

getent passwd and getent group work as expected.
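
For reference, the Kanidm account resolves fine with the high UID Kanidm allocates (illustrative output; the exact fields may differ on your setup):

root@aha:~# getent passwd me@kanidm.example.com
me@kanidm.example.com:x:1883861673:1883861673:me:/home/me@kanidm.example.com:/bin/sh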

But when I try to log in over SSH, it breaks:

Login with SSH key:
LOG:

Mar 10 07:06:05 ah sshd[1727]: fatal: initgroups: me@kanidm.example.com: Invalid argument

No home folder created.


Login with password:

ssh me@ah.incus

me@ah.incus's password: 
client_loop: send disconnect: Broken pipe

LOG:

Mar 10 07:02:35 ah unix_chkpwd[1691]: check pass; user unknown
Mar 10 07:02:35 ah unix_chkpwd[1691]: password check failed for user (me)
Mar 10 07:02:35 ah sshd[1688]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=fd42:8eeb:a9a2:85db::1  user=me
Mar 10 07:02:36 ah unix_chkpwd[1692]: could not obtain user info (me)
Mar 10 07:02:36 ah sshd[1688]: Accepted password for me from fd42:8eeb:a9a2:85db::1 port 40356 ssh2
Mar 10 07:02:36 ah sshd[1688]: pam_keyinit(sshd:session): Unable to change GID to 1883861673 temporarily
Mar 10 07:02:36 ah sshd[1688]: pam_unix(sshd:session): session opened for user me(uid=1883861673) by (uid=0)
Mar 10 07:02:36 ah sshd[1688]: pam_systemd(sshd:session): Failed to stat() runtime directory '/run/user/1883861673': No such file or directory
Mar 10 07:02:36 ah sshd[1688]: pam_systemd(sshd:session): Not setting $XDG_RUNTIME_DIR, as the directory is not in order.
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: initgroups failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: change_gid failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to drop privileges
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: initgroups failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: change_gid failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to change UID to 1883861673 temporarily
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_regain_priv: called with invalid state
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to change UID back to -1
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: initgroups failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: change_gid failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to drop privileges
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: initgroups failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_drop_priv: change_gid failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to change UID to 1883861673 temporarily
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): pam_modutil_regain_priv: called with invalid state
Mar 10 07:02:36 ah sshd[1688]: pam_motd(sshd:session): Unable to change UID back to -1
Mar 10 07:02:36 ah sshd[1688]: pam_mail(sshd:session): pam_modutil_drop_priv: initgroups failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_mail(sshd:session): pam_modutil_drop_priv: change_gid failed: Invalid argument
Mar 10 07:02:36 ah sshd[1688]: pam_unix(sshd:session): session closed for user me
Mar 10 07:02:36 ah sshd[1688]: fatal: initgroups: me@kanidm.example.com: Invalid argument

It creates the home folders:

drwxr-x---  2 root   root   4096 Mar 10 06:53 a6086074-562e-479d-9a0c-b952504972a9
lrwxrwxrwx  1 root   root     42 Mar 10 07:02 me@kanidm.example.com -> /home/a6086074-562e-479d-9a0c-b952504972a9
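
(For context, the UUID directory plus SPN symlink is the layout kanidm_unixd uses when home_attr/home_alias are configured that way; I assume something like this in /etc/kanidm/unixd:)

root@aha:~# grep -E 'home_(attr|alias)' /etc/kanidm/unixd
home_attr = "uuid"
home_alias = "spn"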

The same happens with:

root@node-incus-1:~# incus exec ah -- su --login me
su: cannot set groups: Invalid argument

It authenticates OK but breaks right after.
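
If it helps to confirm where it breaks, strace (when available in the container) should show the setgroups(2) call failing with EINVAL on the unmapped GID, something like:

root@node-incus-1:~# incus exec ah -- strace -f -e trace=setgroups su --login me
setgroups(1, [1883861673])              = -1 EINVAL (Invalid argument)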

The same setup works when I don’t run it in a system container.

Any pointers please? Thank you.

I ran into a similar issue moons ago, and if I’m not mistaken it is related to the subuid/subgid range you have configured on your host. According to the log lines above, you need a default range defined that is greater than 1883861673. On my system I have the following defined:

host># cat /etc/subuid
root:1000000:2000000000

host># cat /etc/subgid
root:1000000:2000000000

After changing the values, restart Incus and it should all work. For more details, read the page Idmaps for user namespace in the Incus documentation, or search the forum to understand the background…
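
If it helps, the commands on the host would be roughly (service name may differ per distro/packaging):

host># systemctl restart incus
host># incus restart <container>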


Thanks @osch, I already tried the same thing, following the “Authentication w/in Incus with Active Directory” thread, but for some reason it doesn’t work.

I restarted Incus, then restarted the whole server, and created a new container, but still no success. :(

The host’s /etc/subuid and /etc/subgid are the same as yours, but inside the container I have:

root@aha:~# cat /etc/subuid
ubuntu:100000:65536

root@aha:~# cat /etc/subgid
ubuntu:100000:65536

I noticed the uid_map on the host is:

root@node-incus-1:~# cat /proc/self/uid_map
         0          0 4294967295

and inside container

root@aha:~# cat /proc/self/uid_map
         0    1000000 1000000000

Is this correct?

OK, given the values in your container, you are still out of range with your ID requirements:

  • 1000000000 ← maximum ID mappable in the container
  • 1883861673 ← required ID
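
A quick mechanical check from inside the container (just a sketch; it tests each uid_map line for whether the required ID falls inside the mapped window [Nsid, Nsid + Maprange)):

container># awk -v id=1883861673 '{ if (id >= $1 && id < $1 + $3) print "mapped"; else print "NOT mapped" }' /proc/self/uid_map
NOT mapped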

In my container I see:

container># cat /proc/self/uid_map
         0    1000000      10000
     10000      10000          1
     10001    1010001 1999989999

I have a few more mappings defined.
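
(From memory, the extra hole was added with raw.idmap, so treat this as a sketch rather than my exact config:)

host># incus config set <container> raw.idmap "both 10000 10000"
host># incus restart <container>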

What does “incus config show <container>” return for the running container? Mine looks like:

  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":false,"Isgid":true,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":false,"Isgid":true,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":10000},{"Isuid":true,"Isgid":true,"Hostid":10000,"Nsid":10000,"Maprange":1},{"Isuid":false,"Isgid":true,"Hostid":1010001,"Nsid":10001,"Maprange":1999989999}]'

Your container’s uid_map reports a Maprange of only 1000000000, which means Incus didn’t pick up the subuid/subgid changes from the host.

What you can try is to change:

  security.idmap.size: "2000000000"

in the container config and see if it starts and uses the right range.
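
Something along these lines (assuming your container is named ah):

host># incus config set ah security.idmap.size 2000000000
host># incus restart ah
host># incus exec ah -- cat /proc/self/uid_map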

As mentioned, it has been a while since I played around with this, so I might not remember all the settings I changed. One thing for sure is defining subuid/subgid with the correct range, as that was key from what I remember.

@osch, this is my config:

  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'

I added security.idmap.size, but got the same result, and inside the container it still reports:

root@aha:~# cat /proc/self/uid_map
         0    1000000 1000000000

:(

Thank you @osch for confirming I’m heading in the right direction, so I dug in some more.

It seems the problem was that I was missing the uidmap package on my Ubuntu host. From Idmaps for user namespace:

newuidmap (path lookup) and newgidmap (path lookup)

So it fell back to this:

If none of those files can be found, then Incus will assume a 1000000000 UID/GID range starting at a base UID/GID of 1000000.
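
For anyone else hitting this, the fix on my host boiled down to installing the package and restarting (Ubuntu package/service names assumed):

root@node-incus-1:~# apt install uidmap
root@node-incus-1:~# systemctl restart incus
root@node-incus-1:~# incus restart ah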

It properly sets the ranges now:

  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":2000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":2000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":2000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":2000000000}]'
  volatile.last_state.idmap: '[]'
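
And to verify from the host (the map should now show the full 2000000000 range, and su should be able to set groups):

root@node-incus-1:~# incus exec ah -- cat /proc/self/uid_map
         0    1000000 2000000000
root@node-incus-1:~# incus exec ah -- su --login me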

Have a nice day!


Glad you found it.
