Authentication within Incus with Active Directory

Good evening:

I run LXC within Proxmox and Incus on my Debian 12 box. All Linux containers on the PVE server authenticate against Active Directory, and I would like to do the same with Incus on my Debian 12 box. On PVE, getting this up and running is as “simple” as modifying the container’s .conf file to add:

lxc.idmap: u 1000000000 1000000000 2500000000
lxc.idmap: g 1000000000 1000000000 2500000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
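For context, each of those lines follows the pattern lxc.idmap: <u|g> <container-id-start> <host-id-start> <range>; my reading of the mapping, as comments:

# u 1000000000 1000000000 2500000000 -> container ids 1000000000-3499999999 map 1:1 onto the host, so large AD uids/gids survive inside the container
# u 0 100000 65536 -> container ids 0-65535 land on host ids 100000-165535 (the stock unprivileged range)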

and then running this script I created:

#!/bin/bash
read -p "Run installation and configuration?" -n 1 -r
if [[ $REPLY =~ ^[Yy]$ ]]
then
	apt update && apt upgrade -y && apt install -y ntp realmd sssd sssd-tools libnss-sss libpam-sss krb5-user adcli samba-common-bin git sudo curl
fi
echo
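# Enable DNS-based discovery of the KDC and realm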
echo "dns_lookup_kdc = true" >> /etc/krb5.conf
echo "dns_lookup_realm = true" >> /etc/krb5.conf
echo Please provide FQDN of Domain Controller
domain_name=foo.bar
#sed -i 's/#NTP=/NTP=$domain_name/g' /etc/systemd/timesyncd.conf
#timedatectl set-ntp true &&
#systemctl restart systemd-timesyncd.service &&
#timedatectl --adjust-system-clock &&
echo
touch /etc/realmd.conf &&
echo
os_name=$(uname -o 2>&1)
echo $os_name
echo
echo Please Provide Os-Version
os_version=$(uname -v 2>&1) 
echo $os_version
# Editing realmd configuration file
echo "[users]" >> /etc/realmd.conf
echo "default-home = /home/%D/%U" >> /etc/realmd.conf
echo "default-shell = /bin/bash" >> /etc/realmd.conf
echo "" >> /etc/realmd.conf
echo "[active-directory]" >> /etc/realmd.conf
echo "default-client = sssd" >> /etc/realmd.conf
echo "os-name = $os_name" >> /etc/realmd.conf
echo "os-version = $os_version" >> /etc/realmd.conf
echo "" >> /etc/realmd.conf
echo "[service]" >> /etc/realmd.conf
echo "automatic-install = no" >> /etc/realmd.conf
echo "" >> /etc/realmd.conf
echo "[$domain_name]" >> /etc/realmd.conf
echo "fully-qualified-names = yes" >> /etc/realmd.conf
echo "automatic-id-mapping = no" >> /etc/realmd.conf
echo "user-principal = yes" >> /etc/realmd.conf
echo "manage-system = yes" >> /etc/realmd.conf
echo
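# Interactive PAM stack update (home-directory creation on login can be enabled here, if the mkhomedir profile is available)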
pam-auth-update &&
echo
echo  Please provide Domain Admin Username
domain_uname=foo.domadm
read -p " Confirm $domain_uname ?" -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
        echo Please provide Domain Name
        echo $domain_name
        read -p "Confirm $domain_name ?" -n 1 -r
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]
        then
		echo Please provide computer-ou
		computer_ou="OU=devices,OU=linux,DC=foo,DC=bar"
		read -p "Confirm $computer_ou ?" -n 1 -r
		echo
		if [[ $REPLY =~ ^[Yy]$ ]]
		then
			realm join --verbose --user=$domain_uname --computer-ou=$computer_ou $domain_name --install=/
		fi
        fi
fi
# Editing sssd.conf
sed -i 's/services = nss, pam/services = nss, pam, ssh/g' /etc/sssd/sssd.conf
sed -i 's/ldap_id_mapping = False/ldap_id_mapping = True/g' /etc/sssd/sssd.conf 
sed -i 's/use_fully_qualified_names = True/use_fully_qualified_names = False/g' /etc/sssd/sssd.conf
echo "ldap_user_ssh_public_key = altSecurityIdentities" >> /etc/sssd/sssd.conf
echo
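# Deny all realm logins by default, then permit only the chosen AD security group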
realm deny --all
echo Please provide Authorized Active Directory Security Group
domain_gname=realm_permit
realm permit -g $domain_gname@$domain_name
echo $domain_gname security group added to authorization list
echo
# Editing sudoers file 
echo "# Allow AD Security Group SUDO Access" >> /etc/sudoers
echo "%realm_sudo ALL=(ALL:ALL) ALL" >> /etc/sudoers  
echo
echo realm_sudo security group added to sudoers file
# Editing sshd_config file
echo "Modifying sshd_config file"
sed -i 's/\#SyslogFacility\ AUTH/SyslogFacility\ AUTH/g' /etc/ssh/sshd_config
sed -i 's/\#LogLevel\ INFO/LogLevel\ INFO/g' /etc/ssh/sshd_config
sed -i 's/\#LoginGraceTime\ 2m/LoginGraceTime\ 30s/g' /etc/ssh/sshd_config
sed -i 's/\#PermitRootLogin\ prohibit-password/PermitRootLogin\ prohibit-password/g' /etc/ssh/sshd_config
sed -i 's/\#MaxAuthTries\ 6/MaxAuthTries\ 3/g' /etc/ssh/sshd_config
sed -i 's/\#MaxSessions\ 10/MaxSessions\ 3/g' /etc/ssh/sshd_config
sed -i 's/\#PubkeyAuthentication\ yes/PubkeyAuthentication\ yes/g' /etc/ssh/sshd_config
sed -i 's/\#AuthorizedKeysCommand\ none/AuthorizedKeysCommand\ \/usr\/bin\/sss_ssh_authorizedkeys\ \%u/g' /etc/ssh/sshd_config
sed -i 's/\#AuthorizedKeysCommandUser\ nobody/AuthorizedKeysCommandUser\ root/g' /etc/ssh/sshd_config
sed -i 's/\#PasswordAuthentication\ yes/PasswordAuthentication\ no/g' /etc/ssh/sshd_config
sed -i 's/\#PermitEmptyPasswords\ no/PermitEmptyPasswords\ no/g' /etc/ssh/sshd_config
echo "Done modifying sshd_config file"
systemctl restart sshd &&
systemctl status sshd &&
# Create config file copies (ensure the destination exists first)
mkdir -p $HOME/realm_configs &&
cp -v /etc/krb5.conf $HOME/realm_configs/ &&
cp -v /etc/systemd/timesyncd.conf $HOME/realm_configs/ &&
cp -v /etc/realmd.conf $HOME/realm_configs/ &&
cp -v /etc/sssd/sssd.conf $HOME/realm_configs/ &&
echo
echo "Don't foreget to check /etc/ssh/sshd.conf" 
echo
echo "Specifically, look for:"
echo "AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys %u" 
echo "AuthorizedKeysCommandUser root"
sss_cache -E
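
For anyone reproducing this, a quick sanity check after the script finishes might look like the following (matthew is a placeholder AD user; adjust to your realm):

getent passwd matthew                     # NSS lookup through sssd
id matthew                                # uid/gid resolution
/usr/bin/sss_ssh_authorizedkeys matthew   # the lookup sshd runs via AuthorizedKeysCommand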

Now, I run the script within the container and everything seems to go according to plan, but I am not able to log in with Active Directory credentials.

I suspect it may have to do with not adding:

lxc.idmap: u 1000000000 1000000000 2500000000
lxc.idmap: g 1000000000 1000000000 2500000000
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536

but I am not sure where I would add the above, as I am not sure Incus has a similar per-container .conf file.

I may be way off, too. I’ll take any suggestions 🙂

Thank you!

Can you show `incus config show --expanded NAME` for one of your instances?
Also, if they exist, the contents of /etc/subuid and /etc/subgid?

Container config:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20231228_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20231228_05:24"
  image.type: squashfs
  image.variant: default
  limits.cpu: "8"
  limits.memory: 16GB
  limits.memory.swap: "false"
  volatile.base_image: 90f7549f92c05e0a41006674e81e853194181375e9655daec88b886e64343e1b
  volatile.cloud-init.instance-id: f8e04743-6d91-462b-8f42-dc54f1f18676
  volatile.eth0.host_name: vethb761e8d8
  volatile.eth0.hwaddr: 00:16:3e:25:c0:0d
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1933401106,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1933401106,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1933401106,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 745918e8-9ea7-455e-955e-9ff0cd3aa979
  volatile.uuid.generation: 745918e8-9ea7-455e-955e-9ff0cd3aa979
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: graynode
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

/etc/subuid (host)

zabbix_ssh:100000:65536
root:1000000:1000000000

/etc/subgid (host)

root:1933401105:1
zabbix_ssh:100000:65536
root:1933401106:1000000000

OK, update both /etc/subuid and /etc/subgid to have:

root:1000000:10000000000

Then restart Incus with `systemctl restart incus`, try creating a new container, and see if things work in there.

That should allow the container to have 10B uids/gids, which is far more than you had in the previous setup and will definitely cover the 1B to 3.5B range you had defined back then.
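
For reference, entries in both files follow <user>:<first-host-id>:<count>, so that single line delegates host ids 1000000 through 10000999999 to root, and therefore to Incus.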

Stephane,
Thank you for the prompt response. Unfortunately, it does not seem to work. If I attempt to use `incus exec graynode-2 -- su --login ad_user`, I get:
su: cannot set groups: Invalid argument

If I exec into the container first and use su, I get the same error:
su: cannot set groups: Invalid argument

If I exec into the container first and use login, it just says “login incorrect”.

For reference, here is the “new” container created after the changes to subuid and subgid:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20231231_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20231231_05:24"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 9e1b576ed17215cebca0b15fa904e9833c25203a67bc811e69d2a54b75fa1ef7
  volatile.cloud-init.instance-id: 8ff029c0-3838-4157-9f60-b515cb6227fb
  volatile.eth0.host_name: vethf3d04b30
  volatile.eth0.hwaddr: 00:16:3e:8b:02:86
  volatile.eth0.name: eth0
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: f9763c24-e74f-409f-9d7b-0ed5968ae352
  volatile.uuid.generation: f9763c24-e74f-409f-9d7b-0ed5968ae352
devices:
  eth0:
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: graynode
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Thank you!

EDIT: I am connected to the realm (it would appear)

realm list:

foo.local
  type: kerberos
  realm-name: FOO.LOCAL
  domain-name: foo.local
  configured: kerberos-member
  server-software: active-directory
  client-software: sssd
  required-package: sssd-tools
  required-package: sssd
  required-package: libnss-sss
  required-package: libpam-sss
  required-package: adcli
  required-package: samba-common-bin
  login-formats: %U
  login-policy: allow-permitted-logins
  permitted-logins: 
  permitted-groups: realm_permit@foo.local

Can you show `getent passwd USERNAME` for an AD user in the container, as well as `cat /proc/self/uid_map`?

getent passwd USERNAME from within the container:

matthew:*:1933401103:1933400513:Matthew Foo:/home/foo.local/matthew:/bin/bash

cat /proc/self/uid_map from within the container:

root@graynode-2:~# cat /proc/self/uid_map 
         0    1000000 1000000000

Also, Happy New Year!

Hmm, that’s odd.

The value I gave you earlier:

root:1000000:10000000000

Should have resulted in 10000000000 uids/gids being available in your container, but instead the output above shows you only have 1000000000, which is less than the minimum of 1933401103 you need for that uid to be valid.
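
To spell out the arithmetic (the columns of /proc/self/uid_map are <ns-start> <host-start> <range>):

# 0 1000000 1000000000 -> only container uids 0 through 999999999 exist
# your AD uid 1933401103 falls outside that range, so setgroups()/setuid() fail with EINVAL ("cannot set groups: Invalid argument")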

Could it be that you made a typo in subuid/subgid and are missing a zero in there? 🙂
It gets a bit hard to read once you’re into the billions…

I wish that were the case 😕

Here are my subuid and subgid, verbatim:

subgid

root:1933401105:1
zabbix_ssh:100000:65536
#root:1933401106:1000000000
root:1000000:10000000000

subuid

zabbix_ssh:100000:65536
root:1000000:10000000000

I forgot to mention this, but the host also authenticates against the AD Domain Controller.

Okay, so I’d edit both subuid and subgid and have them only contain:

root:1000000:10000000000

The zabbix_ssh entry is a mistake from Zabbix not using a system account for its user; you don’t need that line, and the other root line is just going to cause you issues at some point.

Once you have both files with that content, make sure that you have the uidmap package installed; it provides the needed newuidmap and newgidmap commands.

After that’s all done, restart Incus with `systemctl restart incus` and try creating a new container.
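
A quick way to double-check before the restart (Debian file locations assumed):

grep '^root:' /etc/subuid /etc/subgid   # each should show only root:1000000:10000000000
command -v newuidmap newgidmap          # both should resolve once the package is in place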

I changed the above-referenced files, installed the uidmap package (which provides the newgidmap and newuidmap commands on Debian 12), and then restarted Incus. Unfortunately, still no dice. I am sure it has something to do with a peculiarity of my setup (though I can’t think what it might be).

Do `incus config show` and `cat /proc/self/uid_map` still show 1B uids/gids instead of 10B?

Yes, strangely.

Does Incus have a built-in map range limit?

Doh, I’m an idiot 🙂

10000000000 is larger than the maximum value of a uint32, which is why the entry is considered invalid (because it is).

Can you change 10000000000 to 4000000000 instead, then do the usual dance of restarting incus and then creating a new instance?
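
For the record, the arithmetic:

# uint32 max:     2^32 - 1 = 4294967295
# rejected entry: 10000000000 > 4294967295, so the subuid/subgid line is invalid
# proposed entry: 1000000 + 4000000000 - 1 = 4000999999, which fits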

Firstly, if you are an idiot then I have barely achieved consciousness haha

Secondly, I made the change, but alas, a different error this time:

incus launch images:debian/12 -s graynode graynode-2
Creating graynode-2
Starting graynode-2
Error: Failed to run: /opt/incus/bin/incusd forkstart graynode-2 /var/lib/incus/containers /var/log/incus/graynode-2/lxc.conf: exit status 1
Try `incus info --show-log local:graynode-2` for more info
╭─matthewryzen7-3700x ~/script 
╰─  incus info --show-log local:graynode-2                                                                                                                 1 ↵
Name: graynode-2
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2024/01/02 19:22 EST
Last Used: 2024/01/02 19:22 EST

Log:

lxc graynode-2 20240103002208.970 ERROR    cgfsng - ../src/lxc/cgroups/cgfsng.c:cgfsng_chown:1721 - No such file or directory - Error requesting cgroup chown in new user namespace
lxc graynode-2 20240103002208.970 ERROR    lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:878 - Received container state "ABORTING" instead of "RUNNING"
lxc graynode-2 20240103002208.970 ERROR    start - ../src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "graynode-2"
lxc graynode-2 20240103002208.970 WARN     start - ../src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 17 for process 3571348
lxc 20240103002208.993 ERROR    af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20240103002208.993 ERROR    commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

Can you show the contents of /var/log/incus/graynode-2/lxc.conf?

sudo cat /var/log/incus/graynode-2/lxc.conf
lxc.log.file = /var/log/incus/graynode-2/lxc.log
lxc.log.level = warn
lxc.console.buffer.size = auto
lxc.console.size = auto
lxc.console.logfile = /var/log/incus/graynode-2/console.log
lxc.mount.auto = proc:rw sys:rw cgroup:rw:force
lxc.autodev = 1
lxc.pty.max = 1024
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file,optional 0 0
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file,optional 0 0
lxc.mount.entry = /proc/sys/fs/binfmt_misc proc/sys/fs/binfmt_misc none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/firmware/efi/efivars sys/firmware/efi/efivars none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/config sys/kernel/config none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/tracing sys/kernel/tracing none rbind,create=dir,optional 0 0
lxc.mount.entry = /dev/mqueue dev/mqueue none rbind,create=dir,optional 0 0
lxc.include = /opt/incus/share/lxc/config//common.conf.d/
lxc.arch = linux64
lxc.hook.version = 1
lxc.hook.pre-start = /proc/3547625/exe callhook /var/lib/incus "default" "graynode-2" start
lxc.hook.stop = /opt/incus/bin/incusd callhook /var/lib/incus "default" "graynode-2" stopns
lxc.hook.post-stop = /opt/incus/bin/incusd callhook /var/lib/incus "default" "graynode-2" stop
lxc.tty.max = 0
lxc.uts.name = graynode-2
lxc.mount.entry = /var/lib/incus/guestapi dev/incus none bind,create=dir 0 0
lxc.apparmor.profile = incus-graynode-2_</var/lib/incus>//&:incus-graynode-2_<var-lib-incus>:
lxc.seccomp.profile = /var/lib/incus/security/seccomp/graynode-2
lxc.idmap = u 0 1000000 4000000000
lxc.idmap = g 0 1000000 4000000000
lxc.mount.auto = shmounts:/var/lib/incus/shmounts/graynode-2:/dev/.incus-mounts
lxc.net.0.type = phys
lxc.net.0.name = eth0
lxc.net.0.flags = up
lxc.net.0.link = veth50c2c4d6
lxc.net.0.hwaddr = 00:16:3e:cf:91:12
lxc.rootfs.path = dir:/var/lib/incus/storage-pools/graynode/containers/graynode-2/rootfs

Hmm, okay, that looks fine…

Can you try with 3000000000 and see if that somehow helps?
I’m not sure why it would, but then again, 1000000000 was working fine and now it’s not.

Same result…

btw, I appreciate your time on this!

Does it start working again if you set it to 1000000000?
