Cannot use the sudo command in the container

The background: ubt-1 is a container instance. I set the LXD storage pool path to a partition on my old hard drive, with the pool type “dir”. Before I changed the LXD storage pool, I could run sudo in the container successfully. After I changed the storage pool, I got the error shown below.

tang@ubt-1:~ $ sudo tasksel remove lubutu-core
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
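
For anyone hitting the same error, a quick check from inside the container is to see which filesystem /usr/bin/sudo lives on and with which mount options (a sketch; findmnt ships with util-linux):

findmnt -T /usr/bin/sudo    # shows the backing filesystem and its mount options
ls -l /usr/bin/sudo         # the setuid bit (rws) should also be present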

I checked which filesystems mounted on the physical host machine are relevant to lxc/lxd.

(base) admin1@admin1-S2600CO:/media/admin1/work/tangmaomao/project/lxd/lxd_images $ mount | grep -E "lxd|lxd"
/var/lib/snapd/snaps/lxd_14954.snap on /snap/lxd/14954 type squashfs (ro,nodev,relatime)
nsfs on /run/snapd/ns/lxd.mnt type nsfs (rw)
tmpfs on /var/snap/lxd/common/ns type tmpfs (rw,relatime,size=1024k,mode=700)
nsfs on /var/snap/lxd/common/ns/shmounts type nsfs (rw)
nsfs on /var/snap/lxd/common/ns/mntns type nsfs (rw)

What’s the problem?

Can you show ls -lh /usr/bin/sudo and cat /proc/mounts from inside the container?

tang@ubt-1:~ $ ls -lh /usr/bin/sudo
-rwsr-xr-x 1 root root 146K Feb  1 01:18 /usr/bin/sudo


tang@ubt-1:~ $ cat /proc/mounts
    /dev/sda / ext4 rw,nosuid,nodev,relatime,stripe=8191,data=ordered 0 0
    none /dev tmpfs rw,nodev,relatime,size=492k,mode=755,uid=1000000,gid=1000000 0 0
    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
    sysfs /sys sysfs rw,nodev,relatime 0 0
    udev /dev/fuse devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/net/tun devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
    fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
    pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
    debugfs /sys/kernel/debug debugfs rw,relatime 0 0
    securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
    sysfs /sys/kernel/tracing sysfs rw,nosuid,nodev,noexec,relatime 0 0
    mqueue /dev/mqueue mqueue rw,relatime 0 0
    tmpfs /dev/lxd tmpfs rw,relatime,size=100k,mode=755 0 0
    tmpfs /dev/.lxd-mounts tmpfs rw,relatime,size=100k,mode=711 0 0
    lxcfs /proc/cpuinfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/diskstats fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/loadavg fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/meminfo fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/stat fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/swaps fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /proc/uptime fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    lxcfs /sys/devices/system/cpu/online fuse.lxcfs rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    udev /dev/full devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/null devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/random devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/tty devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/urandom devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    udev /dev/zero devtmpfs rw,nosuid,relatime,size=16395124k,nr_inodes=4098781,mode=755 0 0
    devpts /dev/console devpts rw,relatime,gid=5,mode=620,ptmxmode=666 0 0
    none /proc/sys/kernel/random/boot_id tmpfs ro,nosuid,nodev,noexec,relatime,size=492k,mode=755,uid=1000000,gid=1000000 0 0
    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=1000005,mode=620,ptmxmode=666,max=1024 0 0
    devpts /dev/ptmx devpts rw,nosuid,noexec,relatime,gid=1000005,mode=620,ptmxmode=666,max=1024 0 0
    tmpfs /dev/shm tmpfs rw,nosuid,nodev,uid=1000000,gid=1000000 0 0
    tmpfs /run tmpfs rw,nosuid,nodev,mode=755,uid=1000000,gid=1000000 0 0
    tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k,uid=1000000,gid=1000000 0 0
    tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755,uid=1000000,gid=1000000 0 0
    cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
    cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
    cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
    cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
    cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
    cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset,clone_children 0 0
    cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
    cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0
    cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
    cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
    cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
    /dev/sda /snap ext4 rw,nosuid,nodev,relatime,stripe=8191,data=ordered 0 0
    snapfuse /snap/snapd/7264 fuse.snapfuse ro,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    snapfuse /snap/core/9066 fuse.snapfuse ro,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    snapfuse /snap/midori/550 fuse.snapfuse ro,nodev,relatime,user_id=0,group_id=0,allow_other 0 0
    tmpfs /run/user/1001 tmpfs rw,nosuid,nodev,relatime,size=3283388k,mode=700,uid=1001001,gid=1001001 0 0
    /dev/sda / ext4 rw,nosuid,nodev,relatime,stripe=8191,data=ordered 0 0

So your container is indeed mounted with both nosuid and nodev, which is going to be a problem: with nosuid, the kernel ignores the setuid bit on /usr/bin/sudo, so sudo cannot switch to root.

This looks like a dir storage pool, so you’ll need to make sure that the source of that storage pool isn’t itself nosuid or nodev.
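
A rough way to confirm and fix that on the host (a sketch; /path/to/pool-source stands for whatever “source” your dir pool is configured with):

findmnt -T /path/to/pool-source                   # shows the backing filesystem and its mount options
sudo mount -o remount,suid,dev /the/mount/point   # if those options include nosuid/nodev
lxc restart ubt-1                                 # restart so the container rootfs picks up the change

To make it permanent, drop nosuid/nodev from the matching line in /etc/fstab on the host.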


So do you mean I should remount my sda with suid and dev? Or should I configure the LXD storage pool instead? It’s a dir-type storage pool.

How do you get into the container as the non-root account?
Which of the many possible commands do you use?
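
(For context, the usual options are roughly the following; “tang” is just the example user from the transcripts above:)

lxc exec ubt-1 -- su - tang    # root shell via LXD, then switch to the normal user
lxc console ubt-1              # log in on the container console
ssh tang@<container-ip>        # or over SSH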

Three days ago I was using the default LXD zfs storage pool. I launched a container, and at that time I could get into it, create a normal user account in it, and use sudo after logging in as that user. I then installed SSH in the container, with sshd set to start automatically, and made a private image from this container.
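
(Roughly this workflow, as a sketch; the image alias, user name and Ubuntu release are assumptions:)

lxc launch ubuntu:18.04 ubt-0                      # container on the default zfs pool
lxc exec ubt-0 -- adduser tang                     # create the normal user
lxc exec ubt-0 -- usermod -aG sudo tang            # let it use sudo
lxc exec ubt-0 -- apt install -y openssh-server    # sshd starts automatically on Ubuntu
lxc stop ubt-0
lxc publish ubt-0 --alias my-private-image         # make the private image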

Because the default zfs storage pool lives under /var/… , it is on the “/” partition, which has only 40 GB usable, too small for my development work. So I changed the LXD storage to a “dir”-type pool and set its source directory to my hard drive partition, where 2 TB is used and 1 TB is still free.
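
(The new pool was created with something like the following; the pool name and source path are assumptions:)

lxc storage create dirpool dir source=/media/data/lxd-storage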

Then I launched a new container from my private image. I use Xshell on Microsoft Windows 10 to connect to the container over SSH and log in as the normal user account.
The problem is that I cannot use sudo from that user account in the container, and so far I have not been able to fix it.
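
(The new container came from the private image on that pool, roughly:)

lxc launch my-private-image ubt-1 -s dirpool   # -s/--storage selects the dir pool
lxc list ubt-1                                 # to find the IP address for the SSH client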

Thank you in advance for a solution.

Luigi

Thank you for the solution :+1:

Hi,
I am new to LXD and I have the same problem as @tangmaomao16 (nosuid,nodev), except that my storage pool has been dir from the beginning and it is a native installation rather than a snap package.
Your answer is quite vague, so it would be nice if you could expand on it a bit. Alternatively, point me to somewhere I can find more information about this topic (LXD commands and configuration, not mount options :wink:).
Thanks
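
(For reference, the pool definition and the mount options behind it can be inspected with the storage commands; “default” is only a guess at the pool name:)

lxc storage list
lxc storage show default                         # driver, source path, used-by
findmnt -T "$(lxc storage get default source)"   # mount options of the filesystem behind the source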

Continuation of my adventure with sudo:
I wanted to check the first part of @tangmaomao16’s question, and since I can’t edit fstab on the server where I run the containers, I installed LXD locally and reproduced the server’s setup. Indeed, adding dev and suid fixes the problem. Unfortunately, as I mentioned, I cannot edit fstab on the destination server. So I have a question:
Is there a way to change or override the mount options for the storage pool? I saw in another discussion (https://discuss.linuxcontainers.org/t/storage-pool-default-mount-options) that this is possible for btrfs; is there something similar for dir?
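
As far as I can tell, the key used in that thread is btrfs.mount_options, which only exists for the btrfs driver; I have not found an equivalent config key for dir pools. A possible workaround, assuming you can at least run mount on the host (a sketch; the pool name and paths are placeholders, and the mounts do not persist across reboots):

lxc storage set mypool btrfs.mount_options compress=zstd   # the btrfs case from the linked thread

# for a dir pool: give the source its own mountpoint and clear nosuid/nodev on it
sudo mount --bind /path/to/pool-source /path/to/pool-source
sudo mount -o remount,bind,suid,dev /path/to/pool-source
lxc restart <container>                                     # so the rootfs picks up the new flags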