apparmor="DENIED" operation="mount"

Hi!

I have a dedicated server (ubuntu 18.04) with one BTRFS partition (md4 on RAID1) mounted on /srv/lxd

LXD (3.0.1) is installed with a BTRFS storage pool and the default profile; nothing has been added or modified, and all containers are Ubuntu 18.04.

The storage pool is also mounted on that BTRFS partition → /var/lib/lxd/storage-pools/lxd-pool.

Everything seems to be running fine, but every 30 minutes I get error messages like this one:

apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-db2_</var/lib/lxd>" name="/bin/" pid=26448 comm="(ionclean)" flags="ro, remount, bind"

lxc config show --expanded db2
architecture: x86_64
config:
image.architecture: amd64
image.description: ubuntu 18.04 LTS amd64 (release) (20180724)
image.label: release
image.os: ubuntu
image.release: bionic
image.serial: "20180724"
image.version: "18.04"
volatile.base_image: 38219778c2cf02521f34f950580ce3af0e4b61fbaf2b4411a7a6c4f0736071f9
volatile.eth0.hwaddr: 00:16:3e:20:ae:d4
volatile.idmap.base: "0"
volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
volatile.last_state.power: RUNNING
devices:
eth0:
ipv4.address: 10.247.145.200
name: eth0
nictype: bridged
parent: lxdbr0
type: nic
root:
path: /
pool: lxd-pool
type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

What does that mean?
Is something being prevented from remounting /bin read-only in all my containers?
What should I do to get rid of these error messages?

Thanks!

Looks like a process inside one of your containers is trying to remount /bin read-only, possibly just in a private namespace. That’s currently not allowed by the apparmor policy in LXD 3.0.1 which you’re using.

I believe we have actually refreshed that very bit of policy so LXD 3.0.2 (once released) should silence this and also unblock whatever that process is trying to do.
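As an aside, the error=-13 in those audit lines is just the kernel reporting EACCES (permission denied), which matches the "DENIED" verdict. A quick way to confirm the errno mapping (using python3, which ships with Ubuntu 18.04):

```shell
# error=-13 in the audit log is -EACCES, i.e. "Permission denied"
python3 -c 'import errno, os; print(errno.errorcode[13], "=", os.strerror(13))'
```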

Thanks Stéphane, I appreciate it :slight_smile:

I’m getting a very similar error to this, with a process attempting to remount “/home/”:

kernel: [328822.581315] audit: type=1400 audit(1543007365.154:362): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-webserver</var/lib/lxd>" name="/home/" pid=3132 comm="(ionclean)" flags="ro, nosuid, nodev, remount, bind"

I’m on Ubuntu 18.04 with LXD 3.0.2 on ZFS, with the container zpool on a dedicated partition. This error is happening on two containers, both running Apache and PHP, but not on any other containers (which run a variety of services, though none with Apache or PHP). On one container, running Ubuntu 16.04, I get the error twice a day. On the other, running Ubuntu 18.04, it happens twice an hour.


Same with me. Ubuntu 18.04 with snap lxd

Seems to be caused by the PHP cron job to clear old sessions, but I don’t see where that job is attempting to remount /home.

Same here with Ubuntu 18.04.1, snap LXD 3.9.

apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-mail_</var/snap/lxd/common/lxd>" name="/home/" pid=31152 comm="(ionclean)" flags="ro, nosuid, nodev, remount, bind"

I’m still getting this on LXD 3.0.3 on Ubuntu 18.04.2. I did a bit more digging, and this error message appears to be caused by the systemd unit for phpsessionclean, which is triggered by a systemd timer. The phpsessionclean.service unit includes the ProtectHome option. ProtectHome appears to cause systemd to remount /home read-only prior to running the job (and then presumably to remount it read-write again after the job completes). For some reason, this appears to attempt to remount /home on the host, even though the systemd service is being run inside a container. I can get rid of the error if I edit the systemd unit and set ProtectHome to false, but I assume there’s a good reason it is set to true for the phpsessionclean unit.

The odd thing is that there are lots of other units with ProtectHome set to true, but they are not being triggered by timers, so maybe that has something to do with the problem.

I get a similar error in another container on a timer-activated unit that has the PrivateTmp option set to true – which I believe similarly works by remounting /tmp.
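For anyone wanting to try the ProtectHome change without editing the unit file that the package ships, a systemd drop-in is the usual mechanism. A sketch (the drop-in path follows standard systemd conventions; note that this removes a sandboxing protection from the unit, so treat it as a workaround, not a recommendation):

```ini
# /etc/systemd/system/phpsessionclean.service.d/override.conf
# Disables the read-only remount of /home that triggers the AppArmor denial.
[Service]
ProtectHome=false
```

followed by a systemctl daemon-reload to pick up the override.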

Hi:

I know this is an old thread…

I am seeing this same error and I am running LXD 3.0.3. I caused LXC to hang: I was using my local lxc instance to copy a container on another server (which appeared to complete successfully), but my instance stopped dead during the process. I tried a systemctl restart of lxd and it made no difference to the hanging lxc. I did three Ctrl-C’s and got my console back. This was in the kernel log afterwards:

May 30 12:09:01 R2D2 kernel: [65260.867115] audit: type=1400 audit(1559232541.390:101): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-NC-R2D2_</var/lib/lxd>" name="/home/" pid=27493 comm="(ionclean)" flags="ro, nosuid, nodev, remount, bind"

May 30 12:11:40 R2D2 kernel: [65419.999193] audit: type=1400 audit(1559232700.519:102): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-NC-R2D2_</var/lib/lxd>" name="/tmp/" pid=28547 comm="(pachectl)" flags="rw, remount, bind"

May 30 12:11:40 R2D2 kernel: [65420.091903] audit: type=1400 audit(1559232700.615:103): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-NC-R2D2_</var/lib/lxd>" name="/tmp/" pid=28552 comm="(pachectl)" flags="rw, remount, bind"

May 30 12:22:02 R2D2 kernel: [66042.450324] audit: type=1400 audit(1559233322.981:104): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/bin/lxc-start" pid=32582 comm="apparmor_parser"

May 30 12:22:02 R2D2 kernel: [66042.465111] audit: type=1400 audit(1559233322.993:105): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default" pid=32586 comm="apparmor_parser"

May 30 12:22:02 R2D2 kernel: [66042.465115] audit: type=1400 audit(1559233322.993:106): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-cgns" pid=32586 comm="apparmor_parser"

May 30 12:22:02 R2D2 kernel: [66042.465118] audit: type=1400 audit(1559233322.993:107): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-mounting" pid=32586 comm="apparmor_parser"

May 30 12:22:02 R2D2 kernel: [66042.465120] audit: type=1400 audit(1559233322.993:108): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxc-container-default-with-nesting" pid=32586 comm="apparmor_parser"

I don’t seem to have any after effects, but I like to dig around on these in case I have something screwed up. Insights appreciated. V/R

Andrew

@stgraber

I am bumping this post as the error is pervasive in the logs on three different physical machines running LXD 3.0.3. It’s not mission-critical, but it is a nuisance.

Syslog is absolutely rammed full of line after line of these types of errors. In this case it’s an OpenVPN container, but I get the same for our Nextcloud instances too. There are no operational side effects other than making syslog impossible to read easily, but I have to wonder what resources are being wasted. Here’s a single log entry:

Jul 18 14:34:55 vader kernel: [1681065.556456] audit: type=1400 audit(1563474895.857:119431): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-openvpn_</var/lib/lxd>" name="/home/" pid=7483 comm="(openvpn)" flags="ro, nosuid, nodev, remount, bind"

It’s always a mount error, regardless of container flavor.

Is this something that can be fixed in 3.0.3, or do we need to upgrade? We run production servers, so we don’t want to introduce instability with a development LXD.

If I can help track down the root cause, please let me know what you need (I may need help extracting it, so please be specific). We still love LXD!! The fact that we see this in at least two very different containers on very different machines makes us think this is not a dumb user mistake. Thank you.

cat /etc/rsyslog.d/20-apparamor.conf
:msg,contains,"apparmor=" /var/log/apparmor.log
& stop
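For anyone copying that filter: the :msg,contains test routes any message containing the literal substring apparmor= to its own file, and & stop then discards it from the main syslog. A quick shell sanity check of the substring match, on a shortened, made-up sample line:

```shell
# Hypothetical shortened audit line, to sanity-check the rsyslog "contains" match
line='audit: type=1400 apparmor="DENIED" operation="mount" error=-13'
case "$line" in
  *apparmor=*) echo "routed to /var/log/apparmor.log" ;;
  *)           echo "left in syslog" ;;
esac
```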

FTR I have an Ubuntu 16.04 LTS container running Openvpn 2.4.7 on a host running LXD snap 3.15 under Ubuntu 18.04 LTS and don’t see any apparmor DENIED message, so it makes me think that’s not an obvious LXD problem.

We don’t see this on our Ubuntu 16.04 server either, which is running lxd 2.0 (flawlessly as far as we can tell).

We are only seeing this on our Ubuntu 18.04 servers, which run LXD 3.0.3. And a minor correction to my original post: it’s only occurring on two of our three Ubuntu 18.04 servers, all of which run LXD 3.0.3.

I know I can filter the log, but the excessive entries are a symptom of something somewhere not being quite right. Ordinarily it would likely be an error on our machine, but that seems unlikely in this case since it appears on two completely different pieces of hardware.

I am merely reporting that this seems to be a real ‘bug’ (for us at least), albeit a low priority one as it does not seem to affect container functioning. I hope I didn’t cause offence.

V/R

Are those privileged or unprivileged containers?

This mount failure usually hits when a privileged container is attempting to run a systemd unit inside an isolated mount namespace. The operations performed by systemd in such a case currently cannot be safely allowed due to a long-standing AppArmor parser bug.

This isn’t the case for unprivileged containers as we don’t rely on apparmor quite as much there and so have relaxed the rules enough to have this normally work.

Hmm. The containers are unprivileged in all instances on both machines. I will be happy to provide profiles, configs, etc. But please note that other than the log spamming, I can detect no operational or functional problems, so it’s probably low priority.

THANK YOU.

Hello there,

I don’t know if I should open a new topic, but it seems that I have the same kind of errors. I have an LXD cluster; one of my containers is an NFS server and another one is a client (nfs-common).

I’ve configured my exports file as follows:

  • /opt/share *(rw,sync,no_subtree_check)

The problem appears when I try to mount my share from the client:

root@coruscant:~# sudo mount -t nfs4 192.168.0.53:/opt/share /tmp
mount.nfs4: access denied by server while mounting 192.168.0.53:/opt/share

The raw.apparmor configuration looks like this on both containers:

  • mount fstype=rpc_pipefs,
  • mount fstype=nfsd,

And the security.privileged is set to true.

If I run dmesg | grep audit:

[156719.704077] audit: type=1400 audit(1563960827.890:569): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-coruscant_</var/snap/lxd/common/lxd>" pid=10789 comm="apparmor_parser"

[156720.220197] audit: type=1400 audit(1563960828.406:570): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-coruscant_</var/snap/lxd/common/lxd>" pid=10832 comm="apparmor_parser"

[156721.933495] audit: type=1400 audit(1563960830.118:571): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/run/rpc_pipefs/" pid=10902 comm="mount" fstype="rpc_pipefs" srcname="sunrpc"

[156721.933525] audit: type=1400 audit(1563960830.118:572): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/run/rpc_pipefs/" pid=10902 comm="mount" fstype="rpc_pipefs" srcname="sunrpc" flags="ro"

[156737.337037] audit: type=1400 audit(1563960845.522:573): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=10982 comm="(networkd)" srcname="/" flags="rw, rbind"

[156737.448751] audit: type=1400 audit(1563960845.634:574): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=10984 comm="(resolved)" srcname="/" flags="rw, rbind"

[156776.456988] audit: type=1400 audit(1563960884.642:575): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="lxd-najedha_</var/snap/lxd/common/lxd>" pid=11150 comm="apparmor_parser"

[156805.891536] audit: type=1400 audit(1563960914.074:576): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/tmp/" pid=11161 comm="mount.nfs4" fstype="nfs4" srcname="192.168.0.53:/opt/share"

[156813.173386] audit: type=1400 audit(1563960921.358:577): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-najedha_</var/snap/lxd/common/lxd>" pid=11183 comm="apparmor_parser"

[156891.399632] audit: type=1400 audit(1563960999.581:578): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-najedha_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=11361 comm="(networkd)" srcname="/" flags="rw, rbind"

[156892.461966] audit: type=1400 audit(1563961000.645:579): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-najedha_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/" pid=11363 comm="(resolved)" srcname="/" flags="rw, rbind"

[157038.748558] audit: type=1400 audit(1563961146.933:580): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/bin/" pid=11803 comm="(ionclean)" flags="ro, remount, bind"

[157676.803317] audit: type=1400 audit(1563961784.984:581): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxd-coruscant_</var/snap/lxd/common/lxd>" name="/tmp/" pid=11950 comm="mount.nfs4" fstype="nfs4" srcname="192.168.0.53:/opt/share"

I have changed the AppArmor settings many times … now they look like this on both containers:

/etc/apparmor.d/lxc/lxc-default

profile lxc-container-default flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount options=(rw, bind),
}

I’ve been looking on the internet for a while now, that’s why I come here to get some help.

Ok,

after hours of searching for a solution, I decided to set the AppArmor profile to “unconfined”. Even though it’s not the safest way to mount an NFS share into an LXC container, I didn’t find another solution; I was still getting those DENIED messages from AppArmor.

That’s how I did it:
sudo lxc config set najedha raw.lxc 'lxc.apparmor.profile=unconfined'
sudo lxc config set coruscant raw.lxc 'lxc.apparmor.profile=unconfined'

Although I’m in a lab environment, I’m still interested in a proper solution; I assume that disabling security entirely is not the way to go in a production environment.

You’ll likely want a small variation on that one as you’re not running the server.
So in your case I think you care about fstype=nfs4 rather than fstype=nfsd.

You can always attempt/add more entries based on the apparmor denials.
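Putting that concretely, the client-side rules would look something like the fragment below (a sketch based on the denial lines earlier in the thread; verify the fstype names against your own logs before relying on them), set as the container’s raw.apparmor value rather than going unconfined:

```
mount fstype=rpc_pipefs,
mount fstype=nfs4,
```

applied with something like: lxc config set coruscant raw.apparmor 'mount fstype=rpc_pipefs,
mount fstype=nfs4,'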


Hi :wave:
Not sure if bumping this 2-year-old topic is a great idea, but this was the only reference I found on here for the problem I just had, with syslog being filled with apparmor="DENIED" operation="mount" messages.

The problem started appearing after upgrading one of the containers from debian 10 to debian 11, which might explain how your problem appeared only in newer ubuntu versions @Andrew_Wilson

The fix seems to be setting

lxc config set yourcontainername security.nesting true

This stopped the messages appearing in syslog.

According to this source (LXC Container Upgrade to Bullseye - Slow Login and AppArmor Errors | Proxmox Support Forum), systemd requires it for namespacing purposes. I have yet to fully understand all this, but for now my syslog is back to normal and the containers seem to run smoothly.

Our LXD/LXC version is 4.21, the host OS is Debian 10.11, and the container causing the apparmor="DENIED" messages runs Debian 11.2. I wonder if the messages would disappear if the host OS were the same as or newer than the said LXC container…?

A sample warning message of the above in syslog is:
Jan 2 14:20:37 ourmachine kernel: [356300.913411] audit: type=1400 audit(1641133237.822:68517): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-ourcontainer_</var/snap/lxd/common/lxd>" name="/run/systemd/unit-root/proc/" pid=29573 comm="(d-logind)" fstype="proc" srcname="proc" flags="rw, nosuid, nodev, noexec"


@kevinmoilar

AppArmor limits the length of program names in its logs (!!), so I think that (d-logind) means the application triggering this message is actually systemd-logind, and that @stgraber’s reply about systemd is still relevant. Systemd doesn’t play nice with Linux containers; that’s a fact of life. The systemd maintainer is pushing his own solution and doesn’t care to cater to LXC containers’ special needs, and the Ubuntu people can do nothing about that. My 0.02 € of advice from a non-Ubuntu person.

If your container is privileged, as @stgraber asked a previous poster, the only (fragile) security you get is through AppArmor. Lowering it just to make your logs easier on the eye doesn’t seem like the greatest idea to me. You should rather silence the logs with a syslog trick.

And I can’t see why having the container and the host on the same OS version would be relevant.

Thanks @gpatel-fr! Glad to hear OS version difference should not be relevant for this issue. Our containers are unprivileged, so I guess it is safe to “silence” the logs this way…
