Incus clustering doubt

I have a doubt about incus clustering.

I created three identical VMs (32GB memory, a 400GB root partition, and an unused 3TB partition, with Ubuntu 22.04.4 server installed) and attempted to form a cluster. These are inc0, inc1 and inc2.

All three run incus 6.0.0 LTS.

incus admin init was run on inc0 as the bootstrap node. The ZFS pool was created on /dev/sda3, br0 was used for the network, and incus cluster list showed inc0. Then sudo incus admin init was run on inc1 and inc2 (incus cluster add inc1 and incus cluster add inc2 were run on inc0 and the resulting tokens were supplied). The default "local" was chosen when prompted, as were the defaults for the rest of the questions.
When I checked zpool list on the three machines, I found that the pool is 3T on inc0 but only about 25G on inc1 and inc2.
On inc1 and inc2, /dev/sda3 is unused, and so is the bridge br0.
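For reference, the flow described above can be sketched roughly as follows (hostnames taken from this post; the interactive prompts and your answers to them will of course vary):

```shell
# On inc0: bootstrap the cluster (interactive; answer "yes" to clustering,
# choose ZFS on /dev/sda3, and pick the existing br0 bridge when prompted)
sudo incus admin init

# Still on inc0: generate a join token for each of the other members
sudo incus cluster add inc1
sudo incus cluster add inc2

# On inc1 and inc2: join using the token printed above (interactive;
# paste the token when asked, and note that the storage questions are
# answered per node - accepting the defaults will NOT reuse /dev/sda3)
sudo incus admin init

# Verify membership and per-node storage from any member
incus cluster list
zpool list
```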

My doubt is: how do I make use of the br0 network and the unused partition on a cluster member?

In incus clustering, where is the storage kept, and how do the members communicate?

When I created a container c1 with target inc0, it gets listed on all three machines in incus list.

Kindly suggest some documentation for me to clarify my concepts.

I have created the VMs on a Proxmox cluster which makes use of Ceph storage. Is there an equivalent for incus?

Best regards
NITK Surathkal

I think you need to show the zpool list output. If the zpool is only 25G, then there's only a 25G block device (or file) backing it. It would be clearer if you could show the exact console session from incus admin init on inc1 and inc2 - what questions you were asked and how you answered them.

What do you mean by “unused” for br0? If you haven’t created any VM instances yet on inc1 and inc2 then there’s nothing to use it.

Aside: I believe you should make the br0 interface an external bridge (e.g. using netplan); then all the nodes will be able to use it. But I don't use incus clustering in production myself, only for experimentation. I was bitten by clustering problems when trying it with lxd.
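As a rough sketch of what such an external bridge could look like with netplan (the physical NIC name eno1, the file name, and the DHCP setting are all assumptions; adjust for your hosts and make sure you have console access before applying):

```shell
# Hypothetical /etc/netplan/99-br0.yaml on each node; eno1 is an assumed
# interface name - substitute your actual uplink NIC
cat <<'EOF' | sudo tee /etc/netplan/99-br0.yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: true
EOF
sudo netplan apply
```

With the same bridge name on every node, instances attached to br0 can land on any cluster member.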

If you have three nodes, then there are four steps to create an incus cluster network:

Which storage are you referring to?

The container/VM backing storage can either be local to each node (in which case moving an instance from one node to another involves copying all the data), or shared (like Ceph, or clustered LVM talking to a shared block device over iSCSI or NBD).
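Since you already have Ceph underneath Proxmox: incus has a ceph storage driver, and in a cluster, pools are created in two stages - staged per member with --target, then finalized once without it. A sketch, assuming a reachable Ceph cluster (the pool name "remote" and the OSD pool name "incus-rbd" are made up for illustration):

```shell
# Stage the pool on every cluster member first
incus storage create remote ceph --target inc0
incus storage create remote ceph --target inc1
incus storage create remote ceph --target inc2

# Then finalize it cluster-wide, pointing at an existing Ceph OSD pool
incus storage create remote ceph ceph.osd.pool_name=incus-rbd

# Instances on a Ceph-backed pool can move between members without
# copying their data
incus launch images:ubuntu/22.04 c2 --storage remote
```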

If you mean the cluster state storage, there's a clustered SQL database (cowsql) which is replicated between the database nodes.

The cluster nodes communicate over the network.

If you look carefully at the output of incus list, you should see a column which says which node each container is on.
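For example (the column shorthand is how I remember it from lxc, and incus should behave the same; check incus list --help if it differs):

```shell
# Show name, state and cluster location for each instance
incus list -c nsL

# Or ask about a single instance; the output includes a "Location:" line
incus info c1
```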


From your description, upon joining, those two other servers should have prompted for:

  • the source property for the local pool, which should have been set to /dev/sda3 in your case
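If the join has already completed with the defaults, the source of an existing pool can't simply be changed afterwards. One option (a sketch, not tested against your setup) is to create an additional ZFS pool, since source is a member-specific setting in a cluster; the pool name local2 and the inc0 dataset name are made up for illustration:

```shell
# Stage the new pool on every member; point it at the unused partition
# on inc1/inc2. On inc0, /dev/sda3 is already taken by the existing pool,
# so use something else there (a hypothetical dataset shown here)
incus storage create local2 zfs source=/dev/sda3 --target inc1
incus storage create local2 zfs source=/dev/sda3 --target inc2
incus storage create local2 zfs source=rpool/incus2 --target inc0

# Finalize the pool cluster-wide
incus storage create local2 zfs
```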

Not sure about the details of your br0 bridge; if that's a pre-existing bridge on the host, then it should just have been set as the parent for eth0 in incus profile show default and doesn't really need any extra attention on a per-server basis, other than needing to exist on all servers.
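Concretely, that check and (if needed) the fix would look something like this (device name eth0 matches the default profile mentioned above; a sketch):

```shell
# Inspect the default profile's network device and its parent
incus profile show default

# If eth0 isn't attached to br0 yet, add it as a bridged NIC
# (this assumes br0 already exists on every cluster member)
incus profile device add default eth0 nic nictype=bridged parent=br0
```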
