Hi again.
OS: Ubuntu 24.04 LTS.
Playing with an LVM cluster + lvmlockd + sanlock.
On the first machine Incus is OK.
DRBD as primary/primary - OK:
#) cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
srcversion: 211FB288A383ED945B83420
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:2621440 nr:317066365 dw:317066245 dr:2646506 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
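For reference, a few commands I use to double-check the dual-primary state without reading /proc/drbd by hand (the resource name "r0" is an assumption; substitute yours from /etc/drbd.d/*.res):

```shell
# Role of this node vs. peer; should print "Primary/Primary" on both nodes
drbdadm role r0

# Disk state; should print "UpToDate/UpToDate"
drbdadm dstate r0

# Connection state; should print "Connected"
drbdadm cstate r0
```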
-
lvmlockd - OK
-
sanlock - OK
I can create as many machines as I like. For example:
--- Logical volume ---
LV Path /dev/incus/containers_painel
LV Name containers_painel
VG Name incus
LV UUID FzxnG1-zm8Y-FoHa-Cxos-YP0Q-0rTc-lBM56X
LV Write Access read/write
LV Creation host, time zeus, 2024-09-03 15:39:31 -0300
LV snapshot status source of
containers_painel-snap1 [active]
containers_painel-snap0 [active]
LV Status available
open 1
LV Size 10,00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:25
BUT
on the second machine:
- drbd OK:
#) cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
srcversion: 211FB288A383ED945B83420
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:2621440 nr:317066365 dw:317066245 dr:2646506 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
- lvmlockd: I think it is OK, but I do not know how to check:
#) systemctl status lvmlockd
● lvmlockd.service - LVM lock daemon
Loaded: loaded (/usr/lib/systemd/system/lvmlockd.service; disabled; preset: enabled)
Active: active (running) since Wed 2024-09-11 09:11:28 -03; 18min ago
Docs: man:lvmlockd(8)
Main PID: 182953 (lvmlockd)
Tasks: 3 (limit: 38157)
Memory: 628.0K (peak: 1.1M)
CPU: 9ms
CGroup: /system.slice/lvmlockd.service
└─182953 /usr/sbin/lvmlockd --foreground
set 11 09:11:28 pauloric systemd[1]: Starting lvmlockd.service - LVM lock daemon…
set 11 09:11:28 pauloric lvmlockd[182953]: [D] creating /run/lvm/lvmlockd.socket
set 11 09:11:28 pauloric lvmlockd[182953]: 1726056688 lvmlockd started
set 11 09:11:28 pauloric systemd[1]: Started lvmlockd.service - LVM lock daemon.
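If it helps, beyond systemctl I check lvmlockd like this (assuming the shared VG is called "incus", as in your lvdisplay output):

```shell
# Dump lvmlockd's view of its lockspaces and held locks
lvmlockctl --info

# Confirm the VG really uses the sanlock lock type
vgs -o+locktype,lockargs incus

# Join this host to the VG's lockspace if it hasn't been started yet;
# a shared VG is unusable on a host until its lockspace is started
vgchange --lockstart incus
```

On Ubuntu the lockspace start can also be handled by the lvmlockd/lvmlocks systemd units, so if `lvmlockctl --info` shows no lockspace for the VG, that is usually the first thing to fix.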
- sanlock: I think it is OK, but I do not know how to check:
#) systemctl status sanlock.service
● sanlock.service - Shared Storage Lease Manager
Loaded: loaded (/usr/lib/systemd/system/sanlock.service; disabled; preset: enabled)
Active: active (running) since Wed 2024-09-11 09:11:55 -03; 20min ago
Docs: man:sanlock(8)
Process: 183001 ExecStart=/usr/sbin/sanlock daemon $sanlock_opts (code=exited, status=0/SUCCESS)
Main PID: 183002 (sanlock)
Tasks: 6 (limit: 38157)
Memory: 13.9M (peak: 14.9M)
CPU: 106ms
CGroup: /system.slice/sanlock.service
├─183002 /usr/sbin/sanlock daemon
└─183003 /usr/sbin/sanlock daemon
set 11 09:11:55 pauloric systemd[1]: Starting sanlock.service - Shared Storage Lease Manager…
set 11 09:11:55 pauloric systemd[1]: Started sanlock.service - Shared Storage Lease Manager.
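For sanlock itself, the daemon can report what it is actually holding. A sketch (lvmlockd names its sanlock lockspaces "lvm_<vgname>", so "lvm_incus" below is an assumption based on your VG name):

```shell
# Full daemon view: lockspaces this host has joined and resources (leases) held
sanlock client status

# List joined lockspaces only
sanlock client gets

# Host liveness inside one lockspace (host IDs and their lease timestamps)
sanlock client host_status -s lvm_incus
```

If `sanlock client status` shows no lockspaces at all, sanlock is running but nothing has joined it yet, which usually points back at `vgchange --lockstart` not having been run for the VG.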
#) lvdisplay. Hmm, I can see all the volumes, but they are "NOT available":
--- Logical volume ---
LV Path /dev/incus/containers_painel
LV Name containers_painel
VG Name incus
LV UUID FzxnG1-zm8Y-FoHa-Cxos-YP0Q-0rTc-lBM56X
LV Write Access read/write
LV Creation host, time zeus, 2024-09-03 15:39:31 -0300
LV snapshot status source of
containers_painel-snap1 [INACTIVE]
containers_painel-snap0 [INACTIVE]
LV Status NOT available
LV Size 10,00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
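As far as I understand, this is expected at first: in a shared (lockd-managed) VG, LVs are not auto-activated on other hosts; each host has to activate them under a lock. A sketch of what I would try on the second machine (VG/LV names taken from your output; note that per lvmlockd(8) an LV that is a snapshot origin can only be activated exclusively, i.e. on one host at a time):

```shell
# Make sure this host has joined the VG's lockspace first
vgchange --lockstart incus

# Shared (concurrent) activation is only allowed for LVs without snapshots:
#   lvchange -asy incus/<lv>
# Your LV has snapshots, so it must be activated exclusively on one host:
lvchange -aey incus/containers_painel
```

The practical consequence is that an LV with snapshots can be open on only one node at a time, even though DRBD underneath is primary/primary.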
Questions:
- Should they all be "NOT available"?
- On the second machine I could not initialize Incus using:
incus admin init… (creating storage or not).
I appreciate any help with this subject 8)