Hidepid=2 not working in LXC


Hi Community

I am having problems setting hidepid=2 in LXC containers with LXC version 3.0.0. This happens on different systems: one is Debian stretch running a Proxmox kernel (4.15.17-3-pve), the other is Ubuntu 18.04 with kernel 4.15.0-20-generic.

You can find more information about my problem in the Proxmox forum (this post is from us):

Does anyone have any hint beyond what is already written in the Proxmox forum?

We have had this problem since about August 2016. It is a big security issue for us because multiple customers have access to the same servers, and they should not be able to see each other's processes. Since it has existed for so long, we are really looking forward to a solution.

If you need more information, please ask! :slight_smile:

Thanks in advance for your responses!

(Christian Brauner) #2

I need to hear exactly what is failing or not working. It’s not obvious to me what the issue is. Furthermore, I need at least:

  • the container’s config file
  • the trace log (lxc-start <container-name> -l trace -o <container-name>.log)



Thanks for your response!

We created the following AppArmor profile to be able to remount /proc:

# /etc/apparmor.d/lxc/lxc-default-cgns-with-proc-remount
profile lxc-container-default-cgns-with-proc-remount flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # these are copied from lxc-container-default-cgns:
  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,

  # This will allow remounting /proc, eg to change hidepid
  mount options=(rw, nosuid, nodev, noexec, remount, silent, relatime) -> /proc/,
}

then reloaded AppArmor:

# apparmor_parser -r -W -T /etc/apparmor.d/lxc-containers
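As a sanity check (not from the original post), one way to confirm the reload actually registered the profile is to look at the kernel's list of loaded profiles in /sys/kernel/security/apparmor/profiles; a rough sketch, run here against sample contents of that file rather than the live system:

```shell
# /sys/kernel/security/apparmor/profiles lists one loaded profile per
# line ("name (mode)"); after the reload the new profile should appear.
# Hypothetical sample contents instead of reading the live file:
profiles='lxc-container-default-cgns (enforce)
lxc-container-default-cgns-with-proc-remount (enforce)'
echo "$profiles" | grep -c 'with-proc-remount'
# prints 1
```

If the count is 0 on the real host, the remount denial would be explained by the profile simply not being loaded.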

and then added the profile to the container config:

lxc.apparmor.profile = lxc-container-default-cgns-with-proc-remount

So the container config looks like:

arch: amd64
cores: 2
cpulimit: 2
cpuunits: 1024
hostname: hostname.example.com
memory: 4096
net0: name=eth0,bridge=vmbr0,gw=,hwaddr=AA:A1:3A:9D:41:31,ip=,type=veth
onboot: 1
ostype: debian
rootfs: zfsvols:subvol-198-disk-1,acl=1
swap: 1024
lxc.apparmor.profile: lxc-container-default-cgns-with-proc-remount

After that, the following remount should work:

$ mount -o remount,hidepid=2 /proc
mount: cannot remount block device proc read-write, is write-protected

but it fails as shown, and mount still gives the same output:

$ mount | grep proc
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
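For completeness, a quick way to check whether hidepid actually took effect is to parse the options field of /proc's entry in /proc/self/mounts; a sketch, run here against a sample line of what a successful remount would look like (the hidepid=2 option is assumed, since on our systems it never appears):

```shell
# The fourth field of a /proc/self/mounts entry holds the mount
# options; split on commas and look for a hidepid= entry.
# Sample line assuming the remount had succeeded:
line='proc /proc proc rw,nosuid,nodev,noexec,relatime,hidepid=2 0 0'
echo "$line" | awk '{print $4}' | tr ',' '\n' | grep '^hidepid='
# prints hidepid=2
```

On our containers this grep finds nothing, matching the mount output above.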

You can find the trace on:

I hope with this information you can give me a hint. :slight_smile:

(Stéphane Graber) #4

What does the dmesg output look like after the failure? I’d expect to see an apparmor denial in there which would show you exactly what flags need to be included in the profile.


Nothing is reported in dmesg from apparmor. The only entries on the host during the start are:

[Wed Jul 11 14:37:20 2018] IPv6: ADDRCONF(NETDEV_UP): veth189i0: link is not ready
[Wed Jul 11 14:37:21 2018] vmbr0: port 3(veth189i0) entered blocking state
[Wed Jul 11 14:37:21 2018] vmbr0: port 3(veth189i0) entered disabled state
[Wed Jul 11 14:37:21 2018] device veth189i0 entered promiscuous mode
[Wed Jul 11 14:37:21 2018] eth0: renamed from veth1RQE7S
[Wed Jul 11 14:37:21 2018] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Wed Jul 11 14:37:21 2018] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Wed Jul 11 14:37:21 2018] vmbr0: port 3(veth189i0) entered blocking state
[Wed Jul 11 14:37:21 2018] vmbr0: port 3(veth189i0) entered forwarding state

The only AppArmor message that shows up frequently is this:

[Wed Jul 11 14:39:10 2018] audit: type=1400 audit(1531312748.197:153): apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns-with-proc-remount" pid=25613 comm="(ionclean)" family="unix" sock_type="dgram" protocol=0 addr=none
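For reference, the interesting fields of such an audit record can be pulled out with grep; a small sketch against the line above (truncated to the relevant fields):

```shell
# Extract the operation= and profile= fields from the audit record;
# here the denial is a file_lock, not the /proc remount.
line='apparmor="DENIED" operation="file_lock" profile="lxc-container-default-cgns-with-proc-remount" pid=25613 comm="(ionclean)"'
echo "$line" | grep -o -e 'operation="[^"]*"' -e 'profile="[^"]*"'
# prints:
# operation="file_lock"
# profile="lxc-container-default-cgns-with-proc-remount"
```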

But this is not reported during the remount or start of the container.