Any way to map UIDs with virtiofs?

I’m using Incus to play around with NixOS. Since my host system (Ubuntu) has an awkward AppArmor setup and I’d prefer not to mess with it, I’m doing this via virtualisation:

incus launch images:nixos/24.11 nixos --vm

Then I mount all my source code into the guest so I can just carry on using my editor on the host as normal:

incus config device add nixos src disk source=$HOME/src path=/src shift=true

(IIUC, shift=true does nothing for VMs; I'm just including it to accurately report what I'm trying.)

Now, naturally, if I create a file on the guest, it is owned by root on the host:


[root@nixos:/src/nix-learn]# touch foo  # Guest

❯❯  ls -l foo # Host
-rw-r--r-- 1 root root 0 Apr  6 15:57 foo

This is somewhat undesirable. If I were doing this via a more manual QEMU/virtiofsd setup, I would get around it by just running everything under unshare -r. Is there any way to achieve something similar for Incus disk devices?
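
For reference, the manual setup I have in mind looks roughly like this (the virtiofsd path, flags, and socket name are placeholders and depend on which virtiofsd build is installed):

unshare -r /usr/libexec/virtiofsd --socket-path=/tmp/src.sock --shared-dir=$HOME/src

Since unshare -r maps my host UID to root in a new user namespace, anything the guest creates as root ends up owned by me on the host.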

(Or, is this approach of using a disk device to mount my source code just a dumb approach, and there’s a better way to achieve something similar?)

Since I started using Linux containers about five years ago, I’ve discovered two effective methods for file sharing between the host and instances, which I continue to use daily:

  1. NFS Shares: I export an NFS share on the host to be mounted inside a container or virtual machine. This approach has proven effective for running a Minio server and a torrent client within an instance, while the data is stored on the host with the correct permissions.

  2. SSH Server and IDE Integration: I run an SSH server within the instance and configure my IDE to connect over SSH. This has become my primary method for development: it lets me maintain numerous instances for different experiments while my host system only needs a few IDE installations. All coding and testing happen within these instances (either containers or VMs). I can stop an instance at any time, export it, and archive it when it's not needed; if necessary, I can restore it to its previous state, preserving all settings and data.

This setup is so convenient that I can’t imagine developing or experimenting in any other way.
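
To give a concrete sketch of the second approach: on the host side it's typically just an entry in ~/.ssh/config pointing at the instance (the alias and address below are made up; I take the real address from incus list), and the IDE's remote-over-SSH mode then connects to that alias:

# made-up example; use the instance's real address from `incus list`
Host dev-a9
    HostName 10.67.82.10
    User root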

Interesting, what kind of approach are you using for mounting it into a container?

Here’s what I do to set up an NFS mount in a container.

First, on Incus host (Ubuntu):

mkdir -p /data/storage

apt install nfs-kernel-server

cat <<EOF >>/etc/exports
/data/storage *(rw,async,no_subtree_check,no_root_squash,insecure)
EOF

systemctl enable --now nfs-kernel-server.service

This will export /data/storage over NFS; you can change the path to whatever you want.
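
If you want to double-check what is being exported, exportfs can list the active exports:

exportfs -v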

Second, you need to create a container with security.privileged="true" and raw.apparmor="mount fstype=nfs,".

I usually do it like this:

incus init images:almalinux/9 a9 \
  -c security.privileged="true" -c raw.apparmor="mount fstype=nfs,"
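
Then I start it and get a shell in the usual way:

incus start a9
incus exec a9 -- bash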

In my case, on a RHEL-family guest like AlmaLinux, I need to install nfs-utils first:
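
dnf install nfs-utils

After that I can create the mount point, update fstab, and mount the share:

mkdir -p /data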

cat <<EOF >>/etc/fstab
_gateway:/data/storage	/data	nfs	defaults	0 0
EOF

mount -a

I wish I could do it without giving security privileges. But in my case I get more benefits from being able to map the same folder to multiple containers over NFS.

And if you are running a VM it won’t need security privileges.
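
For example, creating the same instance as a VM needs neither of those keys (the instance name here is just an example):

incus init images:almalinux/9 a9-vm --vm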

Thanks, I see how you got it working.

These settings are probably fine for a dev/test environment, but I wouldn't enable them on any production system. Changing the security settings of a container disables its isolation from the host, and as such it's not something I will do. I hope someone finds the time to add NFS support to Incus so it can be used without workarounds like this.

Thanks for sharing your solution

I agree about not using it in production unless you know what you are doing.

Look at this…

On a production server you may run Minio server, right?

If it’s running as root it will have full access to your system.

If you run the Minio server in a privileged Incus container, it may gain full access to your system under very specific circumstances, if it's smart enough to escape the container.

Which is more secure, then?

Giving Minio full system access outright, or giving it limited access with a chance of hacking its way to full access if it's clever enough?