New mount options do not match the existing superblock, will be ignored

Hello,

I have LXD 3.2 running on a Gentoo host with ZFS storage. On bootup, dmesg shows output like the following:

[   84.944129] br0: port 3(vethBAYU3V) entered blocking state
[   84.944131] br0: port 3(vethBAYU3V) entered listening state
[   85.099268] new mount options do not match the existing superblock, will be ignored
[   85.131533] br0: port 4(veth6DC8GG) entered blocking state
[   85.131534] br0: port 4(veth6DC8GG) entered disabled state
[   85.131559] device veth6DC8GG entered promiscuous mode
[   85.131606] IPv6: ADDRCONF(NETDEV_UP): veth6DC8GG: link is not ready
[   85.131607] br0: port 4(veth6DC8GG) entered blocking state
[   85.131607] br0: port 4(veth6DC8GG) entered listening state
[   85.175708] eth0: renamed from vethOXOR3Q
[   85.182365] IPv6: ADDRCONF(NETDEV_CHANGE): veth6DC8GG: link becomes ready
[   85.287733] new mount options do not match the existing superblock, will be ignored

# dmesg|grep "new mount"
[   85.099268] new mount options do not match the existing superblock, will be ignored
[   85.287733] new mount options do not match the existing superblock, will be ignored
[   85.455913] new mount options do not match the existing superblock, will be ignored
[   85.798619] new mount options do not match the existing superblock, will be ignored
[   86.050972] new mount options do not match the existing superblock, will be ignored
[   86.242958] new mount options do not match the existing superblock, will be ignored
[   86.542053] new mount options do not match the existing superblock, will be ignored


# lxc storage  list
+---------+-------------+--------+-----------+---------+
|  NAME   | DESCRIPTION | DRIVER |  SOURCE   | USED BY |
+---------+-------------+--------+-----------+---------+
| default |             | zfs    | rpool/lxd | 14      |

Any idea what is causing that warning to appear?

The usual suspect for that message is cgroups. If the cgroup configuration of the host doesn't exactly match what the container attempts to mount, you get that message in the kernel log.

The same is true of other filesystems, but I've found cgroup to be the most likely one to have mismatched mount options.
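One way to check this is to compare the cgroup-related mount lines of host and guest side by side. A sketch, assuming a container named `ubuntu-guest` (the name is a placeholder):

```shell
# cgroup-related mount lines on the host:
grep -E ' (cgroup2?|tmpfs) ' /proc/mounts | sort > /tmp/host.mounts

# The same from inside the container ("ubuntu-guest" is hypothetical):
lxc exec ubuntu-guest -- sh -c "grep -E ' (cgroup2?|tmpfs) ' /proc/mounts" \
    | sort > /tmp/guest.mounts

# Lines unique to one side have a differing device, mount point, or options:
diff /tmp/host.mounts /tmp/guest.mounts
```

`/proc/mounts` is used instead of `/etc/mtab` because it always reflects the kernel's current view.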

Thanks. I restarted an Ubuntu guest and received that message, so I compared /etc/mtab between the Gentoo host and the Ubuntu guest. Rows where a difference was found are marked with *.

| Diff | OS | Mount device | Mount point | File system | Options |
|------|----|--------------|-------------|-------------|---------|
| * | Gentoo | cgroup_root | /sys/fs/cgroup | tmpfs | rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755 |
| * | Ubuntu | tmpfs | /sys/fs/cgroup | tmpfs | rw,nosuid,nodev,noexec,mode=755,uid=1000000,gid=1000000 |
|   | Gentoo | blkio | /sys/fs/cgroup/blkio | cgroup | rw,nosuid,nodev,noexec,relatime,blkio |
|   | Ubuntu | cgroup | /sys/fs/cgroup/blkio | cgroup | rw,nosuid,nodev,noexec,relatime,blkio |
| * | Gentoo | cpu | /sys/fs/cgroup/cpu | cgroup | rw,nosuid,nodev,noexec,relatime,cpu |
| * | Gentoo | cpuacct | /sys/fs/cgroup/cpuacct | cgroup | rw,nosuid,nodev,noexec,relatime,cpuacct |
|   | Gentoo | cpuset | /sys/fs/cgroup/cpuset | cgroup | rw,nosuid,nodev,noexec,relatime,cpuset |
|   | Ubuntu | cgroup | /sys/fs/cgroup/cpuset | cgroup | rw,nosuid,nodev,noexec,relatime,cpuset |
|   | Gentoo | devices | /sys/fs/cgroup/devices | cgroup | rw,nosuid,nodev,noexec,relatime,devices |
|   | Ubuntu | cgroup | /sys/fs/cgroup/devices | cgroup | rw,nosuid,nodev,noexec,relatime,devices |
|   | Gentoo | freezer | /sys/fs/cgroup/freezer | cgroup | rw,nosuid,nodev,noexec,relatime,freezer |
|   | Ubuntu | cgroup | /sys/fs/cgroup/freezer | cgroup | rw,nosuid,nodev,noexec,relatime,freezer |
|   | Gentoo | hugetlb | /sys/fs/cgroup/hugetlb | cgroup | rw,nosuid,nodev,noexec,relatime,hugetlb |
|   | Ubuntu | cgroup | /sys/fs/cgroup/hugetlb | cgroup | rw,nosuid,nodev,noexec,relatime,hugetlb |
|   | Gentoo | memory | /sys/fs/cgroup/memory | cgroup | rw,nosuid,nodev,noexec,relatime,memory |
|   | Ubuntu | cgroup | /sys/fs/cgroup/memory | cgroup | rw,nosuid,nodev,noexec,relatime,memory |
| * | Gentoo | openrc | /sys/fs/cgroup/openrc | cgroup | rw,nosuid,nodev,noexec,relatime,release_agent=/lib64/rc/sh/cgroup-release-agent.sh,name=openrc |
|   | Gentoo | pids | /sys/fs/cgroup/pids | cgroup | rw,nosuid,nodev,noexec,relatime,pids |
|   | Ubuntu | cgroup | /sys/fs/cgroup/pids | cgroup | rw,nosuid,nodev,noexec,relatime,pids |
| * | Gentoo | cgroup | /sys/fs/cgroup/systemd | cgroup | rw,relatime,name=systemd |
| * | Ubuntu | cgroup | /sys/fs/cgroup/systemd | cgroup | rw,nosuid,nodev,noexec,relatime,name=systemd |
| * | Gentoo | none | /sys/fs/cgroup/unified | cgroup2 | rw,nosuid,nodev,noexec,relatime,nsdelegate |
| * | Ubuntu | cgroup | /sys/fs/cgroup/unified | cgroup2 | rw,nosuid,nodev,noexec,relatime,nsdelegate |

The Gentoo host has more cgroup hierarchies mounted than the guest. Any idea which cgroup could be causing the issue?
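For what it's worth, the manual comparison above can be automated. A small sketch that, given two files in /proc/mounts format (the filenames `host.mtab` and `guest.mtab` are hypothetical), prints every mount point present on both sides whose option string differs:

```shell
# First pass (NR==FNR): remember each host mount point's options.
# Second pass: for mount points present on both sides, report mismatches.
awk 'NR==FNR { host[$2] = $4; next }
     ($2 in host) && host[$2] != $4 {
         printf "%s\n  host:  %s\n  guest: %s\n", $2, host[$2], $4
     }' host.mtab guest.mtab
```

This only flags mount points that exist in both files, so hierarchies mounted only on the host (cpu, cpuacct, openrc) are not reported by it.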