I tried to create an LXC unprivileged container, following the Linux Containers - LXC - Getting started tutorial. I allocated the subordinate uid and gid ranges to root in /etc/subuid and /etc/subgid, adding the following entry to both files: root:100000:65536. I created the container, and UNPRIVILEGED is set to true. Running lxc-ls -f shows the following output:
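For reference, this is roughly what that delegation looks like, together with the matching id-map lines the Getting started tutorial has you put in the container configuration (the config path and exact values below are illustrative, not necessarily your setup):

```
# /etc/subuid and /etc/subgid -- one line in each file
root:100000:65536

# Container config, e.g. /var/lib/lxc/mycontainer_a/config
# (the path depends on how and by whom the container was created)
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
```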
mycontainer_a RUNNING 0 - - - true
However, it seems that the container retains critical capabilities.
Running cat /proc/self/status | grep Cap inside the container, I see the following:
No, I did not try this. One issue, however, was that I ran the container as root. But even when running it as a plain user, on attach the container still has full capabilities and I enter the container as root, which in my view is not normal.
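For completeness, the check referred to above can be reproduced like this. Note that inside an unprivileged container root is still expected to show an all-ones mask: those capabilities are only valid within the container's own user namespace, not on the host.

```shell
# Print the capability sets of the current process.
# CapEff = 0000003fffffffff means every capability is held --
# but only within the current user namespace.
grep Cap /proc/self/status
```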
These are three different things: LXC, LXD, Incus. In this forum we provide support for LXC and Incus.
Weirdly, LXD has a command-line tool called lxc that tends to confuse.
It appears that you are using LXC, therefore you must not use the lxc CLI tool. You manage LXC containers using individual tools like lxc-start, lxc-stop, lxc-execute, etc.
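For example, a typical LXC lifecycle with those tools looks roughly like this (the container name and image are illustrative):

```
$ lxc-create -n mycontainer_a -t download -- -d debian -r bookworm -a amd64
$ lxc-start -n mycontainer_a
$ lxc-attach -n mycontainer_a
$ lxc-stop -n mycontainer_a
$ lxc-destroy -n mycontainer_a
```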
When you get a shell into an unprivileged container, you typically get a root shell. You appear as root within that container and can erase anything, but whatever you do stays in that container. The rest of the system is not affected.
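One way to see why that is safe is the kernel's uid map. On the host it is an identity mapping, while inside an unprivileged container it shows the shift into the subordinate range (e.g. 0 100000 65536 for the delegation discussed above):

```shell
# Show how uids in the current user namespace map to the parent namespace.
# Columns: <start-in-namespace> <start-in-parent> <count>
cat /proc/self/uid_map
```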
It’s fine to use LXC. If you want something a bit more user-friendly, I suggest that you try Incus. With Incus, the CLI tool is incus, and it helps you manage the full lifecycle of system containers (like LXC), virtual machines, and application containers. You can try Incus online here: Linux Containers - Incus - Try it online. It’s the real Incus, usable from within your Web browser.
To start and stop containers I use lxc-start, lxc-stop, etc. I am wondering whether, using LXC, I can create a container for a non-root user with limited capabilities, attach to it, and not have any root permissions. When I tried something like this, dropping the SYS_ADMIN capability, I faced many permission issues during startup.
Having a look at the output:
Failed to set hostname to : Operation not permitted
Initializing machine ID from random generator.
Cannot write /run/machine-id: Permission denied
Failed to add address 127.0.0.1 to loopback interface: File exists
Failed to add address ::1 to loopback interface: File exists
Successfully brought loopback interface up
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Setting 'fs/file-max' to '9223372036854775807'.
Failed to bump fs.file-max, ignoring: Permission denied
Failed to write /run/systemd/container, ignoring: Permission denied
Found cgroup2 on /sys/fs/cgroup/unified, unified hierarchy for systemd controller
Unified cgroup hierarchy is located at /sys/fs/cgroup/unified. Controllers are on legacy hierarchies.
Didn't get EBADF from BPF_PROG_DETACH, BPF firewalling is not supported: Operation not permitted
Can't load kernel CGROUP DEVICE BPF program, BPF device control is not supported: Operation not permitted
Controller 'cpu' supported: no
Controller 'cpuacct' supported: no
Controller 'cpuset' supported: no
Controller 'io' supported: no
Controller 'blkio' supported: no
Controller 'memory' supported: yes
Controller 'devices' supported: no
Controller 'pids' supported: no
Controller 'bpf-firewall' supported: no
Controller 'bpf-devices' supported: no
Controller 'bpf-foreign' supported: no
Controller 'bpf-socket-bind' supported: no
Set up TFD_TIMER_CANCEL_ON_SET timerfd.
Enabling (yes) showing of status (commandline).
Failed to create generator directories: Permission denied
[!!!] Failed to start up manager.
Exiting PID 1...
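For reference, a capability drop like the one described is usually a single line in the container's configuration (the path below is an assumption). Note that startup failures of this kind are expected when dropping sys_admin from a systemd container: systemd relies on CAP_SYS_ADMIN inside its namespace for operations such as setting the hostname and writing under /run.

```
# In the container's config file, e.g. /var/lib/lxc/mycontainer_a/config
lxc.cap.drop = sys_admin
```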
Once you get a shell as root in the container, you can create a non-root account. Then, get a new shell into that container as that non-root account. It’s not clear what commands you are running above, so I cannot help you directly.
Here is me creating a container with Incus.
$ incus launch images:debian/13/cloud mycontainer
Launching mycontainer
$ incus exec mycontainer -- su -l debian
debian@mycontainer:~$ id
uid=1000(debian) gid=1000(debian) groups=1000(debian),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev)
debian@mycontainer:~$ sudo id
uid=0(root) gid=0(root) groups=0(root)
debian@mycontainer:~$ hostname
mycontainer
debian@mycontainer:~$ hostname mynewhostname
hostname: you must be root to change the host name
debian@mycontainer:~$ sudo hostname mynewhostname
debian@mycontainer:~$ hostname
mynewhostname
debian@mycontainer:~$ exit
logout
$