Lvmcluster at second server - cannot create storage pool

Hi

I am not able to create a storage pool on the second server.

On the first server (zeus) everything is running.

What I have done:

  • create drbd as primary/primary
resource drbd0 {
    net {
        allow-two-primaries;
    }
    device     /dev/drbd0;
    disk       /dev/nvme0n1p4;
    meta-disk  internal;

    on zeus {
        address  192.168.20.1:7788;
    }

    on pauloric {
        address  192.168.20.111:7788;
    }
}
  • force drbd0 as primary:
    drbdadm create-md drbd0
    drbdadm primary drbd0 --force

  • change LVM to use lvmlockd and sanlock.
    At /etc/lvm/lvm.conf insert/change:

devices {
        filter = [ "r|/dev/nvme0n1p4|" ]    --->  nvme0n1p4 is part of drbd0
        issue_discards = 1
}

global {
        use_lvmlockd = 1
        lvmlockd_lock_retries = 3
        system_id_source = "lvmlocal"
}

At /etc/lvm/lvmlocal.conf:

local {
        system_id = "zeus"
        host_id = 8
}
  • vgcreate --shared incus drbd0
  • systemctl start lvmlockd
  • systemctl start sanlockd
  • vgchange --lock-start
  • create the Incus storage pool with the lvmcluster driver (see the sketch below)
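For reference, the first-server sequence boils down to something like this sketch (the unit names such as sanlock.service and the exact ordering are assumptions; lvmlockd and sanlock generally have to be running before a shared VG can be created):

    # bring up the DRBD resource and check it is Primary/UpToDate
    drbdadm up drbd0
    drbdadm primary drbd0 --force
    drbdadm status drbd0            # or: cat /proc/drbd on DRBD 8.4

    # lock daemons first, then the shared VG on the DRBD device
    systemctl start lvmlockd sanlock
    vgcreate --shared incus /dev/drbd0
    vgchange --lock-start incus
    lvmlockctl -i                   # should show LS sanlock lvm_incus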

At the second server, called pauloric, the same order:

drbdadm create-md drbd0
drbdadm connect drbd0
drbdadm primary drbd0

but lvmlocal.conf is a little different:

local {
        system_id = "pauloric"
        host_id = 3
}

I did not create the VG incus again, of course 80)
---- no vgcreate --shared incus drbd0
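On the second node the VG already exists, so after DRBD is up only the lock manager has to be started and the existing lockspace joined, roughly like this (same unit-name assumption as above):

    systemctl start lvmlockd sanlock
    vgchange --lock-start incus    # join the lvm_incus lockspace that zeus created
    lvmlockctl -i                  # the VG lock should now be visible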

OK, so far so good.

lvmlockctl -i

VG incus lock_type=sanlock uAqoPm-vxzU-rwsm-PBVa-S2DY-aPUX-y00uOA
LS sanlock lvm_incus
LK VG un ver 64

OK, I see sanlock running with the lvm_incus lockspace.

pauloric# sanlock client status

daemon 75fa3d65-6a51-4812-89ba-7b6bd01dc761.pauloric
p -1 helper
p -1 listener
p 386478 lvmlockd
p -1 status
s lvm_incus:3:/dev/mapper/incus-lvmlock:0

and I see all the LVs from VG incus

pauloric# vgs
  VG    #PV #LV #SN Attr   VSize    VFree   LockType VLockArgs
  incus   1  12   6 wz--ns <299,99g 179,48g sanlock  1.0.0:lvmlock
OK
pauloric# lvdisplay incus

  --- Logical volume ---
  LV Path                /dev/incus/containers_mercurio
  LV Name                containers_mercurio
  VG Name                incus
  LV UUID                C7Anj9-jgOe-o16I-7Tb5-d521-oZye-mUmWYk
  LV Write Access        read/write
  LV Creation host, time zeus, 2024-09-11 11:56:37 -0300
  LV Status              NOT available
  LV Size                10,00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
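"LV Status NOT available" just means the LV is not activated on this host; with a shared (lvmlockd) VG an activation mode has to be requested explicitly, for example (a sketch, only needed when driving LVM by hand, since the lvmcluster driver is expected to activate volumes itself):

    # exclusive activation on this host (takes the sanlock LV lock)
    lvchange -aey incus/containers_mercurio

    # deactivate again and release the lock
    lvchange -an incus/containers_mercurio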

Let's see the storage on the zeus Incus server:
zeus:~# incus storage list
+---------+------------+--------+-------------+---------+---------+
|  NAME   |   DRIVER   | SOURCE | DESCRIPTION | USED BY |  STATE  |
+---------+------------+--------+-------------+---------+---------+
| default | lvmcluster | incus  |             | 7       | CREATED |
+---------+------------+--------+-------------+---------+---------+

Now, let's see if I can create the pool at server pauloric:

root@pauloric:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, lvmcluster, btrfs) [default=btrfs]: lvmcluster
Create a new LVMCLUSTER pool? (yes/no) [default=yes]: no
Name of the existing LVMCLUSTER pool or dataset: incus
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: lan
Would you like the server to be available over the network? (yes/no) [default=no]: yes
Address to bind to (not including port) [default=all]:
Port to bind to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to create storage pool "default": Volume group "incus" is not empty

Hmm, I think I am missing something...

Any help is very appreciated.

regards

lvmcluster is meant to be used with an Incus cluster, same concept as Ceph.

You can’t use the same Ceph pool with two standalone servers and the same is true with a clustered LVM VG.
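For reference, in an actual Incus cluster the shared VG would be attached to a pool with something along these lines (a sketch; the member names are just the two hosts from this thread, and the exact placement of the source key should be checked against the Incus storage documentation):

    # once per cluster member, in pending state
    incus storage create default lvmcluster source=incus --target zeus
    incus storage create default lvmcluster source=incus --target pauloric

    # then finalize the pool cluster-wide
    incus storage create default lvmcluster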

Hi Graber

I was following the instructions from here:

Well, my bad for not paying attention to this point.

So, is there a solution to use 2 Incus servers as a high-availability setup...?

For something that we’d be willing to support, no. You need a minimum of 3 servers and have them be in a cluster.

You can probably hack something together by using two drives on each server, use the local drive for your instances and replicate it to the second drive of the other server. That will get you your storage pool replicated, but you still won’t have any of the database content or other local files replicated, so should a server die, you’ll have to manually re-import everything on the other server.
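A sketch of what that replication could look like as a plain primary/secondary DRBD resource (the resource name, device, partition and port are assumptions, not taken from this thread):

    resource base {
        device     /dev/drbd1;
        disk       /dev/nvme0n1p5;        # the second, replica-only drive/partition
        meta-disk  internal;

        on pauloric {
            address  192.168.20.111:7789;
        }

        on zeus {
            address  192.168.20.1:7789;
        }
    }

No allow-two-primaries here: only the node currently running the instances promotes the resource and mounts it (e.g. as the source of a dir pool).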

Hmm, thanks Graber.

I'll do my homework and as soon as I find a solution I'll return here 8)

best regards

Hi

I'm still playing with Incus.

What I have done:

2 Incus servers: pauloric and zeus.

Normal install on both servers above.

At the first server (pauloric) all containers are working.

Created 2 containers on DRBD (primary/secondary), with dir storage pools.

incus storage list
+---------+--------+-------------+---------+---------+
|  NAME   | DRIVER | DESCRIPTION | USED BY |  STATE  |
+---------+--------+-------------+---------+---------+
| base    | dir    |             | 1       | CREATED |
+---------+--------+-------------+---------+---------+
| default | dir    |             | 1       | CREATED |
+---------+--------+-------------+---------+---------+
| painel  | dir    |             | 1       | CREATED |
+---------+--------+-------------+---------+---------+

root@zeus:~# incus storage show base
config:
  source: /srv/containers/base
description: ""
name: base
driver: dir
used_by:
- /1.0/profiles/base
status: Created
locations:
- none

root@zeus:~# incus profile show base
config:
  limits.cpu: "1"
  limits.memory: 1GiB
  snapshots.expiry: 1d
  snapshots.schedule: 5 0 * * *
description: ""
devices:
  lan:
    nictype: bridged
    parent: lan
    type: nic
  root:
    path: /
    pool: base
    type: disk
name: base
used_by:
project: default
So far so good. Both servers, pauloric and zeus, are almost identical (profiles and storage pools are equal).

Now I set the DRBD resource (base) as secondary at pauloric and as primary at zeus.
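As a sketch, the switchover is roughly this (the resource name base and the device /dev/drbd1 are assumptions):

    # on pauloric: stop using the pool, then demote
    umount /srv/containers/base
    drbdadm secondary base

    # on zeus: promote and mount the replicated pool source
    drbdadm primary base
    mount /dev/drbd1 /srv/containers/base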

I can see the base container at zeus:

root@zeus:~# ls /srv/containers/base/containers/base/
backup.yaml metadata.yaml rootfs templates

All right…

Now that I can see the base container, how can I start it at zeus?

Looking at /var/lib/incus/containers on the first machine (pauloric) I see symbolic links into /var/lib/incus/storage-pools/:
root@pauloric:~# ls -l /var/lib/incus/containers/
total 0
lrwxrwxrwx 1 root root 49 set 18 09:26 base -> /var/lib/incus/storage-pools/base/containers/base
lrwxrwxrwx 1 root root 53 set 18 10:04 painel -> /var/lib/incus/storage-pools/painel/containers/painel

base is broken now at pauloric.

I tried to create the same symlink at zeus, but the base container is not found...

At this point, what can I do to start the base container? All the files are in the correct place...

best regards

You need to use incus admin recover to import the containers found on disk back into the database.
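(As a sketch of where that fits in a failover, once the DRBD resource is Primary on zeus and the dir pool source is mounted; incus admin recover itself is interactive and asks about the pools it should look at:)

    mount | grep /srv/containers/base   # the replicated pool source must be mounted
    incus admin recover                 # re-imports the pools/instances found on disk
    incus list                          # the recovered containers show up again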

Thanks Graber

You are the man!

Now everything is working 80)

2 Incus servers with high availability using DRBD + Corosync + Pacemaker... 80)

Thanks for all the work that you have been doing. Incus is excellent software.