My goal is to run one Incus container for an NFS server and one for a Samba server, both with the same custom btrfs filesystem volume attached, containing the data to be shared across the network.
I started with the NFS server but got stuck immediately because of “nfs-server.service: Job nfs-server.service/start failed with result ‘dependency’.”
Is it correct that I will have to run this container in privileged mode for NFS (nfs-kernel-server) to run successfully?
Is this also the case for a Samba server?
Should I consider running NFS and Samba on the Incus host itself rather than in these containers when they have to be privileged? I.e. which is the bigger risk?
Yes, nfs-kernel-server, as the name implies, runs within the kernel, so it's about as privileged as it gets.
No, Samba is a pure userspace process.
For NFS, yeah, host or VM would be better. The NFS server is in-kernel so a container doesn’t really do anything.
Not sure. It would still have to be a privileged container, so it's still somewhat pointless for it to be in a container, as there would be close to no useful isolation.
Ganesha is a purely userland NFS server. I've never used it myself; TrueNAS SCALE at one point briefly but seriously considered using it. It may work for your needs.
As was suggested, I run my NFS server in a VM (passing in either the drives or the drive controller directly), and consider that a solidly reasonable solution. SMB/NFS servers also often want user-directory (AD or FreeIPA) integration, and that's also difficult or impossible in containers, so VMs are again the only real choice (other than bare metal, but who does that?!)
Ah yeah, I’ve heard of it but never tried it. It should work in an unprivileged container if it’s not making use of any of the in-kernel NFS server stuff.
There is one reason for running the NFS server inside an Incus container: each container has its own network namespace. You can expose one set of NFS exports in one container to network A and another set of NFS exports in another container to network B.
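As a sketch of that split, assuming hypothetical subnets 10.0.1.0/24 (network A) and 10.0.2.0/24 (network B) and hypothetical share paths, each container would carry its own `/etc/exports`:

```
# Container attached to network A -- /etc/exports
/srv/shareA  10.0.1.0/24(rw,sync,no_subtree_check)

# Container attached to network B -- /etc/exports
/srv/shareB  10.0.2.0/24(rw,sync,no_subtree_check)
```

Clients on network A never see the exports served to network B, because each container only has an interface on its own network.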
I use macvlan and don’t allow network traffic to the Incus hosts so all clients must be served by Incus instances for all kinds of network services and applications.
After some testing I also decided to use container instances only (so no VMs anymore), which in turn made me choose to run everything on btrfs (which I probably wouldn't have done if I also had VMs).
Unfortunately I forgot to test NFS because I assumed it would run unprivileged. How foolish.
The plan was to create a large custom volume shared by separate NFS and Samba containers to non-instance (i.e. non-Incus) clients, with Incus containers having it attached directly where necessary. Plan B, NFS-Ganesha (thank you @gringo), also failed. Although it looked promising, I ran into a bug that should have been fixed according to "Excessive log generation observed in NFS-Ganesha (#2637)" on the Sylva-projects / sylva-core GitLab issue tracker: after configuring /etc/ganesha/ganesha.conf and starting the service, its log file grew insanely fast to 3.5 GB.
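For reference, the shared-volume part of that plan is straightforward in Incus. A minimal sketch, assuming a btrfs-backed storage pool named `default` and hypothetical container names `nfs-ct` and `smb-ct`:

```shell
# Create a custom volume on the (assumed) btrfs-backed "default" pool
incus storage volume create default shared-data

# Attach the same volume to both containers at /srv/share
incus config device add nfs-ct shared-data disk pool=default source=shared-data path=/srv/share
incus config device add smb-ct shared-data disk pool=default source=shared-data path=/srv/share
```

The volume part works fine unprivileged; it was only the in-kernel NFS server that forced the question in the first place.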
Reaching out to their forum on matrix.org is not so trivial it seems.
Still have to consider a plan C.
Thank you for your replies.
I went through some of the same issues (well, still going through them), as well.
Containers are very lightweight, and one can easily run many on a single server.
But, there are significant limitations to containers:
No live migration (if that’s a thing you’ll need)
No mounting remote file systems (FUSE is an option, but performance is … limited)
No membership in AD/SAMBA/FreeIPA domains (this is probably only required for network file shares when you need/want unified user auth., and isn’t really a separate issue)
Privileged containers can do more, but are … unwise, and I reject them out of hand.
As such, I reached the conclusion that focusing exclusively on containers was not a useful goal. In my case, I always expected to run some full VMs of non-linux operating systems, so running some more was not any hardship. My current (linux) operating guidelines are simple:
Anything requiring NFS (or smb, though I use NFSv4 exclusively) is a VM
Anything I might need/want to live-migrate between servers is a VM
Everything else is an LXC container
I don’t over-allocate RAM at all (but I’m not too careful with CPU over-allocation)
With the unified management of containers and VMs, there's very little difference between them, other than VMs using a bit more RAM. For minimal LXCs I allocate 1 GB of RAM, and minimal VMs get 2 GB. More as applications require, even though LXCs especially can use/require much less.
Hello guys,
The best thing would be to implement Linux kernel network storage targets in Incus / IncusOS, with NBD, iSCSI, NVMe-over-TCP, NFS, S3 object storage, and why not message queues and IoT data over TCP, to make a converged Incus cluster possible, like with Ceph.
But that will be a new adventure …
Thank you for your explanation. I understand the limitations, but for now I have decided on a FUSE-like solution for its ease of use with Incus containers: SSHFS. Disappointing performance might change my mind, but I will give it a try.
So one container has a custom volume attached and openssh-server running, all the necessary clients have sshfs installed, and groups/users/keys/fstab are set up for testing…
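Before committing the mount to fstab, a one-off manual mount is an easy way to verify keys and paths. A sketch with hypothetical names (user `share`, server `files.lan`, export `/srv/share`):

```shell
# One-off SSHFS mount to verify keys and paths before editing fstab
sshfs share@files.lan:/srv/share /mnt/share -o reconnect,ServerAliveInterval=15

# Verify the contents are visible, then unmount
ls /mnt/share
fusermount -u /mnt/share
```

If this works non-interactively (no password prompt), the equivalent fstab entry should work too.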
SSHFS is a solid practical solution when you’re committed to containers. A few things worth knowing before you go too deep.
**Cipher selection matters** - the default is conservative. Add `Ciphers=chacha20-poly1305@openssh.com` to your fstab options and you’ll get noticeably better throughput.
**`_netdev` and `reconnect` are essential** in your fstab entry. Without `_netdev` the mount runs before the network is ready and fails silently at boot. With macvlan, container startup order matters too - if the server container isn’t up yet when clients try to mount, it fails. Using `x-systemd.automount` defers the mount until first access and sidesteps the timing issue entirely.
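Putting those options together, a client-side fstab entry might look like this (hypothetical user, host, paths, and key file):

```
# /etc/fstab on an SSHFS client (hypothetical names)
share@files.lan:/srv/share  /mnt/share  fuse.sshfs  _netdev,reconnect,x-systemd.automount,ServerAliveInterval=15,Ciphers=chacha20-poly1305@openssh.com,IdentityFile=/root/.ssh/id_ed25519  0  0
```

With `x-systemd.automount`, systemd only performs the actual mount on first access, so the server container being slow to come up at boot no longer matters.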
Your btrfs custom volume stays intact on the server side - SSHFS clients just see a plain filesystem, but snapshots and compression still work normally from the server container.
For high-throughput workloads you may eventually hit limits from SFTP single-threading and cipher overhead, but for typical home lab file sharing it’s completely fine. Hope that helps!