Incus and Proxmox on the same host

Hi all,

I know that it is not possible to run the Docker daemon and Incus on the same host due to networking conflicts (update: it is in fact possible, see “How to configure your firewall” in the Incus documentation). Does this restriction also apply to Proxmox? I have a very simple bridge configuration for Proxmox:

sudo brctl show
bridge name	bridge id		STP enabled	interfaces
vmbr0		8000.1c697a011839	no		eno1
							tap100i0
vnet1		8000.000000000000	no		

It is possible to run Docker and Incus on the same host by following some of the workarounds.
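For reference, the usual Docker workaround boils down to letting traffic from the Incus bridge through Docker’s DOCKER-USER chain, since Docker flips the FORWARD policy to DROP. A minimal sketch, assuming the default bridge name incusbr0 and an iptables-based firewall:

# Hedged sketch: let the Incus bridge through Docker's firewalling.
# Assumes the default Incus bridge "incusbr0"; adjust names to your setup.
iptables -I DOCKER-USER -i incusbr0 -j ACCEPT
iptables -I DOCKER-USER -o incusbr0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT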

The same should apply to Proxmox, as long as we know what conflicts arise. If no one has attempted it before, you can install both (Incus and Proxmox) and try to debug any issues you find.

OK. So the first issue comes from dependencies: Debian 12 + Incus from backports.

> sudo apt install incus/bookworm-backports
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Selected version '6.0.1-1~bpo12+1' (Debian Backports:stable-backports [amd64]) for 'incus'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 lxc-pve : Conflicts: liblxc1
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
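For what it’s worth, a hedged way to confirm where that conflict is declared (the package names come from the error above; the commands are just a sketch):

# Show what lxc-pve conflicts with, and what the incus candidate pulls in.
apt-cache show lxc-pve | grep -E '^(Package|Version|Conflicts|Replaces|Provides)'
apt-cache depends incus | grep -i lxc

In short, Proxmox’s own LXC build (lxc-pve) declares a conflict with Debian’s liblxc1, which the resolver wants to install alongside incus.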

I forgot to add the URL to the Docker-on-Incus workarounds earlier:


FWIW, I’ve installed Proxmox nodes inside incus containers and VMs. It can be quite useful as a way to try out Proxmox, or to build a classroom Proxmox lab, as it’s easy to tear down and recreate at will.

The main things to note are:

  1. Proxmox vms run at full speed on Proxmox nodes which are in incus containers.
    • However, Proxmox vms run very slowly on Proxmox nodes which are in incus vms, due to the overhead of nested virtualization.
  2. Proxmox containers work fine inside Proxmox nodes which are incus vms.
    • However, Proxmox containers will not run at all inside Proxmox nodes which are incus containers. This is because Proxmox containers use a block device for their filesystem (unlike incus and Docker containers, which use the host filesystem); Proxmox needs to create and mount loopback devices, but does not have permission to do so when running as an incus container. Running Proxmox inside a privileged container would risk trashing your host entirely.
  3. Proxmox nodes which are in incus containers cannot provide zfs storage or Ceph OSDs. Again, this is due to Proxmox wanting to manipulate the host filesystem directly, and not having permissions.
    • Proxmox nodes which are in incus vms can provide local zfs pools, and can run as Ceph OSDs (but you’ll need to attach some incus block storage to them for the OSDs to use; see the sketch below)
    • Proxmox nodes in both incus vms and containers can be clients of Ceph storage, and can run Ceph monitors
  4. Clustering works fine between Proxmox nodes running inside incus vms and containers.
    • The proviso is that an extra cluster-wide setting needs to be applied to allow corosync to run in a container node and join the cluster, but I couldn’t find a way to apply this setting successfully if that node is a container. Hence the first cluster node has to be an incus vm. The following script applies the setting to that node:
incus exec "$NAME" -- grep allow_knet_handle_fallback /etc/pve/corosync.conf >/dev/null || (
  # https://forum.proxmox.com/threads/corosync-conf-is-read-only-after-adding-qdevice.108946/
  incus exec "$NAME" -- systemctl stop pve-cluster
  incus exec "$NAME" -- systemctl stop corosync
  incus exec "$NAME" -- pmxcfs -l
  sleep 1

  incus exec "$NAME" -- bash -c "cat /etc/pve/corosync.conf - >/etc/pve/corosync.conf.new" <<EOS
system {
  allow_knet_handle_fallback: yes
}

EOS
  incus exec "$NAME" -- mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf

  incus exec "$NAME" -- killall pmxcfs
  sleep 1
  incus exec "$NAME" -- systemctl restart corosync
  incus exec "$NAME" -- systemctl restart pve-cluster
)
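(Usage sketch with placeholder names of my own: save the snippet, point NAME at the first cluster node, which as noted above must be an incus vm, and re-run it freely since the leading grep makes it a no-op once the setting is in place.)

# Hypothetical file and instance names:
NAME=pve1 bash apply-knet-fallback.sh
incus exec pve1 -- grep allow_knet_handle_fallback /etc/pve/corosync.conf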


This means that a sensible setup has a mixture of vms and containers: incus vms for the initial clustering, running Proxmox containers, and Ceph OSDs; and incus containers for running Proxmox VMs.
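To make that concrete, here is a rough sketch of how such a mixed setup might be created. The instance names, sizes, pool name and the nesting/KVM device settings are my assumptions; the posts above don’t show the exact config used.

# First cluster node as an incus vm (also hosts Proxmox containers and a Ceph OSD).
incus launch images:debian/12/cloud pve1 --vm -d root,size=40GiB
# Block volume attached to it for the Ceph OSD (assumes a storage pool named "default").
incus storage volume create default pve1-osd --type=block size=20GiB
incus storage volume attach default pve1-osd pve1
# Additional node as an incus container, for running nested Proxmox vms at full speed.
# Assumption: nesting plus /dev/kvm and /dev/vhost-net access is needed for KVM in a container.
incus launch images:debian/12 pve2 -c security.nesting=true
incus config device add pve2 kvm unix-char source=/dev/kvm
incus config device add pve2 vhost-net unix-char source=/dev/vhost-net

From there, Proxmox itself is installed inside each instance in the usual Debian-to-PVE way.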


Aside: I used image debian/12 for containers and debian/12/cloud for VMs. The reason for using /cloud for VMs is that otherwise you get a fixed partitioning scheme with 4GB for sda2; I found that cloud-init is required to auto-expand the root partition to the size of the disk.

This issue is easy to reproduce:

# incus launch -d root,size=20GiB --vm images:debian/12 testdebvm
# incus shell testdebvm

root@testdebvm:~# df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       4.0G  984M  3.0G  25% /
root@testdebvm:~# blockdev --getsize64 /dev/sda
21474836480

You could manually delete+recreate the sda2 partition and grow the filesystem, of course (a rough sketch of that is at the end of this post). But with cloud-init it’s automatic:

# incus launch -d root,size=20GiB --vm images:debian/12/cloud testdebvm2
# incus shell testdebvm2

root@testdebvm2:~# df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        21G  1.2G   20G   6% /
root@testdebvm2:~# blockdev --getsize64 /dev/sda
21474836480
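For reference, the manual route mentioned above would look roughly like this, assuming an ext4 root on /dev/sda2 and the growpart tool from cloud-guest-utils (the package choice is my assumption):

# Inside the non-cloud VM: grow the partition table entry, then the filesystem.
apt install -y cloud-guest-utils
growpart /dev/sda 2
resize2fs /dev/sda2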