Enable core dumps in an LXD container

I’m having trouble understanding why running processes in my container can’t dump core files.
I’ve set limits.kernel.core to “unlimited” for the container, but ulimit -a still shows a value of 0 inside the container, and processes that report dumping core don’t actually create a core file.

All of this works on the host system, and I changed the /proc/sys/kernel/core_pattern file to point directly to a file, but that did not make a difference.

Am I missing something in the configuration?

# lxc config show lvm-ualpha01
architecture: x86_64
config:
  limits.kernel.core: unlimited

lvm-ualpha01# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
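
For completeness, that config key was set with a command of this form:

$ lxc config set lvm-ualpha01 limits.kernel.core unlimited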

Hi!

If you do

$ ulimit -c unlimited

then a spawned process can produce a coredump file. I tested this just now.

Is the issue that you describe how to configure the container from LXD to allow coredumps?
I suppose that the container image is likely to reset the configuration and set the core dump size to 0.

I had thought that should work too, but when I tried it, I got an “Operation not permitted” error:
# ulimit -c unlimited
bash: ulimit: core file size: cannot modify limit: Operation not permitted

Here is a full example. Note that I changed the default core pattern on the host so that it saves to a file in the current directory; the default in Ubuntu is to pipe to apport.
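
For reference, the host-side change was along these lines (core.%e.%p is one pattern that writes per-process files into the crashing process’s working directory; %e is the executable name and %p the PID, which matches the core.mytest.4577 file in the listing below):

host$ echo 'core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
core.%e.%p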

$ lxc launch ubuntu:18.04 coredump
Creating coredump
Starting coredump
user@computer:~$ lxc ubuntu coredump
ubuntu@coredump:~$ sudo apt update
ubuntu@coredump:~$ sudo apt install build-essential
ubuntu@coredump:~$ cat > mytest.c
int main() { int* p = 0; *p = 0; return 0; }
ubuntu@coredump:~$ gcc mytest.c -o mytest
ubuntu@coredump:~$ ./mytest 
Segmentation fault
ubuntu@coredump:~$ ls
mytest  mytest.c
ubuntu@coredump:~$ ulimit -c
0
ubuntu@coredump:~$ ulimit -c unlimited
ubuntu@coredump:~$ ulimit -c
unlimited
ubuntu@coredump:~$ ./mytest 
Segmentation fault (core dumped)
ubuntu@coredump:~$ ls
core.mytest.4577  mytest  mytest.c
ubuntu@coredump:~$ 

I’m confused about this line (lxc ubuntu coredump). Is it supposed to connect me to the container as the ubuntu user?

Per my previous post, I log into the container and the ulimit command does not work for me the way you’re saying it works for you:

ubuntu@ualpha01:~$ ulimit -c
0
ubuntu@ualpha01:~$ ulimit -c unlimited
-su: ulimit: core file size: cannot modify limit: Operation not permitted
ubuntu@ualpha01:~$

Regarding lxc ubuntu mycontainer, it’s an alias; a sketch of how to set one up is below.

Set up the alias on your computer and try again. I think the issue you are facing may be related to the way you su to the non-root user.
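
A minimal sketch of such an alias, assuming it opens a login shell as the ubuntu user via su rather than sudo (the exact definition may differ):

$ lxc alias add ubuntu 'exec @ARGS@ -- su --login ubuntu'

With it, lxc ubuntu mycontainer expands to lxc exec mycontainer -- su --login ubuntu, so no sudo is involved.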

No, the results are the same using your login method:

ubuntu@ualpha01:~$ ulimit -c
0
ubuntu@ualpha01:~$ ulimit -c unlimited
-bash: ulimit: core file size: cannot modify limit: Operation not permitted
ubuntu@ualpha01:~$

I thought I had this figured out. I was finally able to confirm what you listed in your earlier posts, and I’ve realized that the problem isn’t with the container limits per se; it has to do with running a command using sudo. I can’t seem to find any explanations or suggested workarounds. All of the limit configurations seem to be doing what I’d expect, but when the PAM limit is checked, it can’t be set.

When I use sudo the same way on the host system, it works the way I’d expect.

In the container, before using sudo, I see the limit set as expected:

ocPADosch@ualpha01:~$ ulimit -a
core file size          (blocks, -c) 8192

When I run sudo, the following error goes to the auth.log file, and the limit is set to 0 in the new shell:

ualpha01 sudo: pam_limits(sudo:session): Could not set limit for 'core' to soft=8388608, hard=8388608: Operation not permitted; uid=0,euid=0

ocPADosch@ualpha01:~$ sudo -u xt_001 bash
ocPADosch$ whoami
xt_001

ocPADosch$ ulimit -a
core file size          (blocks, -c) 0

Any help would be greatly appreciated.

Maybe check your PAM config for any module that would apply those limits (pam_limits), and try just disabling it for now to see if that fixes the issue. If it does, then the problem is that, by attempting to raise the core size, pam_limits somehow causes it to reset to 0.

In which case, tweaking the limits config to match what’s allowed would likely work.
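
One quick way to find where pam_limits is pulled in inside the container (the grep output shown is illustrative; on Ubuntu images it is typically /etc/pam.d/sudo or one of the common-session files):

ubuntu@ualpha01:~$ grep -r pam_limits /etc/pam.d/
/etc/pam.d/sudo:session    required   pam_limits.so

Commenting that line out temporarily and opening a fresh sudo session is the quick test being suggested.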

I have configured the limits in the PAM settings in pam.d and in /etc/security/limits.conf, and I also tried /etc/systemd/user.conf and /etc/systemd/system.conf. I turned on debug in the PAM configuration and can see that the process is trying to set the limit, but it fails with the message I included in my last post:

ualpha01 sudo: pam_limits(sudo:session): Could not set limit for 'core' to soft=8388608, hard=8388608: Operation not permitted; uid=0,euid=0
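
For reference, the limits being applied correspond to entries of this form in /etc/security/limits.conf (the value is in KB, so 8192 matches the soft=8388608 bytes in the log line; the wildcard user is illustrative):

# /etc/security/limits.conf
*    soft    core    8192
*    hard    core    8192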

Also, I think I mentioned this earlier, but sudo doesn’t have the same problem on the host system; this only happens within the container, which is why I was asking in this forum.

In short, core dumps work fine for a non-root user, but not at all for the pseudo-root of an unprivileged container.
Core settings are not inherited by the container root, even though they seem to be for other parameters.

lxc config set ddd limits.kernel.core 500000:1000000
lxc config set ddd limits.kernel.cpu 15:38
(start container ddd)
standard user:
cat /proc/self/limits

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              15                   38                   seconds
(…)
Max core file size        500000               1000000              bytes

sudo bash

cat /proc/self/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              15                   38                   seconds
Max core file size        0                    0                    bytes

With a hard limit of 0, it’s not possible to write a coredump.
Maybe it’s a not-very-documented security feature; my guess is that it sits deeper than LXD.

Yes, that seems to confirm what I’m seeing: the same configurations that allow core dumps with sudo on the host system do not work within the container, for some reason.

Well, after some not very interesting duckduckgoing, I think I have finally stumbled on something interesting. Take a look at:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.4_technical_notes/sudo

In the BZ#836242 part there is an interesting bug description. It’s solved there, but maybe a similar problem could exist in LXC that prevents the ‘reset limit to the previous value’ step.
Looking at the sudo source code is not extremely enlightening; it’s the kind of source that has existed for eons and covers HP-UX, SunOS, and probably many extinct big saurians. In short, it’s a bit complicated.
From a five-minute examination, there seems to be a disable_coredump option that is set to true by default.
To set it to false, you can create a sudo.conf file in /etc and write the following:
Set disable_coredump false
There is no need to restart the container; it’s taken into account immediately.
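
In shell form, from inside the container (Set disable_coredump false is documented sudo.conf syntax; tee -a appends in case the file already has other settings):

ubuntu@mycontainer:~$ echo 'Set disable_coredump false' | sudo tee -a /etc/sudo.conf
Set disable_coredump false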


I can get coredumps in an unprivileged container on an Ubuntu host, for both the ubuntu and root user accounts.
It requires configuration on the host in addition to configuration in the container, and it does not require restarting the container.

If there is an issue in the container, check how you get a shell into the container.
If you lxc exec mycontainer -- sudo --user ubuntu --login, then you go in through sudo, and sudo, as noticed, indeed does a disable_coredump by default. I now use lxc ubuntu mycontainer, which is a nice alias to get a shell without sudo.

Yes, I understand that core dumps work if you log into the container with a regular account, and that they are disabled once you invoke sudo. I have an application which had been using sudo to start processes at boot time under a non-root account, and those processes were then unable to dump core files.

Do you know why sudo behaves differently in the container than it does on the host system, and whether there is a way to make it work in the container the same way it works on the host (i.e. stop it from doing a disable_coredump)?

For anyone having the same problem: the solution is in this answer, but it doesn’t jump out at you.

To stop sudo from disabling core dumps, do the following (the same as the instructions above, but nicely formatted):

  • create a file /etc/sudo.conf in the container
  • write Set disable_coredump false in this file
  • if you previously logged in via something similar to lxc exec mycontainer -- sudo --user ubuntu --login, you need to first log out of the container and log in again (no need to restart the container).
  • now cat /proc/self/limits should show:
Limit                     Soft Limit           Hard Limit           Units
[...]
Max core file size        0                    unlimited            bytes
[...]
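
To verify afterwards from a sudo-spawned shell (a sketch; the user names are the ones from earlier in the thread, and the values assume the 8192 KB limits configured there):

ocPADosch@ualpha01:~$ sudo -u xt_001 bash
xt_001$ grep core /proc/self/limits
Max core file size        8388608              8388608              bytes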

That’s very helpful. As you pointed out, I hadn’t realized that the post from gpatel-fr contained the solution.
Thanks!