terryng
(Terry Ng)
November 28, 2020, 4:23pm
1
I have installed a Ceph cluster with 4 nodes. I then tried to initialize LXD on one of the nodes with “lxd init”. Here are the settings:
use LXD clustering: yes
joining an existing cluster: no
configure a new local storage pool: no
configure a new remote storage pool: yes
storage backend: ceph
create a new CEPH pool: yes
Name of existing CEPH cluster: ceph
Name of the OSD storage pool: lxd
Number of placement groups: 32
It hung after the last question, which asks whether the “lxd init” preseed YAML should be printed.
I stopped the init process and ran “lxc storage list”. No pool had been created.
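For reference, the interactive answers above map onto a preseed document that “lxd init --preseed” accepts; a minimal sketch, assuming only the cluster name (ceph) and OSD pool name (lxd) given above — the storage pool name “remote” and the PG setting are illustrative placeholders, not values from this setup:

```shell
# Hypothetical preseed for the answers above; pool/cluster names taken
# from the settings in this post, everything else is a placeholder.
cat <<EOF | lxd init --preseed
storage_pools:
- name: remote            # placeholder pool name
  driver: ceph
  config:
    ceph.cluster_name: ceph
    ceph.osd.pool_name: lxd
    ceph.osd.pg_num: "32"
EOF
```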
Here is the output of “ceph -s”
  cluster:
    id:     31dde3e0-65ed-4ac3-b48d-a051541199e1
    health: HEALTH_WARN
            Reduced data availability: 164 pgs inactive
            Degraded data redundancy: 164 pgs undersized

  services:
    mon: 4 daemons, quorum node1,node2,node3,node4 (age 104m)
    mgr: node2(active, since 113m), standbys: node3, node4, node1
    mds: 4 up:standby
    osd: 8 osds: 8 up (since 109m), 8 in (since 9h); 1 remapped pgs

  data:
    pools:   4 pools, 165 pgs
    objects: 0 objects, 0 B
    usage:   8.1 GiB used, 13 TiB / 13 TiB avail
    pgs:     99.394% pgs not active
             164 undersized+peered
             1 active+clean+remapped
Please help. Thanks!
stgraber
(Stéphane Graber)
November 28, 2020, 5:42pm
2
Can you show “ps fauxww” when it hangs?
stgraber
(Stéphane Graber)
November 28, 2020, 5:43pm
3
Also, try running “lxc info” prior to “lxd init”, just to confirm that LXD is online and responsive.
terryng
(Terry Ng)
November 29, 2020, 4:51am
4
There was some output after running “lxc info”, so I believe that LXD is online.
terryng
(Terry Ng)
November 29, 2020, 4:54am
5
Stéphane,
Here is the output of “ps fauxww” while “lxd init” hangs:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2 0.0 0.0 0 0 ? S Nov28 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< Nov28 0:00 _ [rcu_gp]
root 4 0.0 0.0 0 0 ? I< Nov28 0:00 _ [rcu_par_gp]
root 6 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/0:0H-kblockd]
root 9 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mm_percpu_wq]
root 10 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/0]
root 11 0.0 0.0 0 0 ? I Nov28 0:22 _ [rcu_sched]
root 12 0.0 0.0 0 0 ? S Nov28 0:00 _ [migration/0]
root 13 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/0]
root 14 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/0:1-rcu_gp]
root 15 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/0]
root 16 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/1]
root 17 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/1]
root 18 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/1]
root 19 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/1]
root 21 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/1:0H-kblockd]
root 23 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/2]
root 24 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/2]
root 25 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/2]
root 26 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/2]
root 28 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/2:0H-kblockd]
root 29 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/3]
root 30 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/3]
root 31 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/3]
root 32 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/3]
root 34 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/3:0H-kblockd]
root 35 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/4]
root 36 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/4]
root 37 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/4]
root 38 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/4]
root 40 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/4:0H-kblockd]
root 41 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/5]
root 42 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/5]
root 43 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/5]
root 44 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/5]
root 45 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/5:0-rcu_gp]
root 46 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/5:0H-kblockd]
root 47 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/6]
root 48 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/6]
root 49 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/6]
root 50 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/6]
root 52 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/6:0H-kblockd]
root 53 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/7]
root 54 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/7]
root 55 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/7]
root 56 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/7]
root 58 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/7:0H-kblockd]
root 59 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/8]
root 60 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/8]
root 61 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/8]
root 62 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/8]
root 64 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/8:0H-kblockd]
root 65 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/9]
root 66 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/9]
root 67 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/9]
root 68 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/9]
root 69 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/9:0-events]
root 70 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/9:0H-kblockd]
root 71 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/10]
root 72 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/10]
root 73 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/10]
root 74 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/10]
root 76 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/10:0H-kblockd]
root 77 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/11]
root 78 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/11]
root 79 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/11]
root 80 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/11]
root 82 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/11:0H-kblockd]
root 83 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/12]
root 84 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/12]
root 85 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/12]
root 86 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/12]
root 88 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/12:0H-kblockd]
root 89 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/13]
root 90 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/13]
root 91 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/13]
root 92 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/13]
root 94 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/13:0H-kblockd]
root 95 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/14]
root 96 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/14]
root 97 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/14]
root 98 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/14]
root 100 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/14:0H-kblockd]
root 101 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/15]
root 102 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/15]
root 103 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/15]
root 104 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/15]
root 106 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/15:0H-kblockd]
root 107 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/16]
root 108 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/16]
root 109 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/16]
root 110 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/16]
root 112 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/16:0H]
root 113 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/17]
root 114 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/17]
root 115 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/17]
root 116 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/17]
root 118 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/17:0H-kblockd]
root 119 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/18]
root 120 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/18]
root 121 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/18]
root 122 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/18]
root 124 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/18:0H-kblockd]
root 125 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/19]
root 126 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/19]
root 127 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/19]
root 128 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/19]
root 130 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/19:0H-kblockd]
root 131 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/20]
root 132 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/20]
root 133 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/20]
root 134 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/20]
root 136 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/20:0H-kblockd]
root 137 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/21]
root 138 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/21]
root 139 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/21]
root 140 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/21]
root 142 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/21:0H-kblockd]
root 143 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/22]
root 144 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/22]
root 145 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/22]
root 146 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/22]
root 148 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/22:0H-kblockd]
root 149 0.0 0.0 0 0 ? S Nov28 0:00 _ [cpuhp/23]
root 150 0.0 0.0 0 0 ? S Nov28 0:00 _ [idle_inject/23]
root 151 0.0 0.0 0 0 ? S Nov28 0:01 _ [migration/23]
root 152 0.0 0.0 0 0 ? S Nov28 0:00 _ [ksoftirqd/23]
root 154 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/23:0H]
root 155 0.0 0.0 0 0 ? S Nov28 0:00 _ [kdevtmpfs]
root 156 0.0 0.0 0 0 ? I< Nov28 0:00 _ [netns]
root 157 0.0 0.0 0 0 ? S Nov28 0:00 _ [rcu_tasks_kthre]
root 158 0.0 0.0 0 0 ? S Nov28 0:00 _ [kauditd]
root 162 0.0 0.0 0 0 ? S Nov28 0:00 _ [khungtaskd]
root 163 0.0 0.0 0 0 ? S Nov28 0:00 _ [oom_reaper]
root 164 0.0 0.0 0 0 ? I< Nov28 0:00 _ [writeback]
root 165 0.0 0.0 0 0 ? S Nov28 0:00 _ [kcompactd0]
root 166 0.0 0.0 0 0 ? S Nov28 0:00 _ [kcompactd1]
root 167 0.0 0.0 0 0 ? SN Nov28 0:00 _ [ksmd]
root 168 0.0 0.0 0 0 ? SN Nov28 0:00 _ [khugepaged]
root 215 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kintegrityd]
root 216 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kblockd]
root 217 0.0 0.0 0 0 ? I< Nov28 0:00 _ [blkcg_punt_bio]
root 218 0.0 0.0 0 0 ? I< Nov28 0:00 _ [tpm_dev_wq]
root 219 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ata_sff]
root 220 0.0 0.0 0 0 ? I< Nov28 0:00 _ [md]
root 221 0.0 0.0 0 0 ? I< Nov28 0:00 _ [edac-poller]
root 222 0.0 0.0 0 0 ? I< Nov28 0:00 _ [devfreq_wq]
root 223 0.0 0.0 0 0 ? S Nov28 0:00 _ [watchdogd]
root 226 0.0 0.0 0 0 ? S Nov28 0:00 _ [kswapd0]
root 227 0.0 0.0 0 0 ? S Nov28 0:00 _ [kswapd1]
root 228 0.0 0.0 0 0 ? S Nov28 0:00 _ [ecryptfs-kthrea]
root 230 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kthrotld]
root 231 0.0 0.0 0 0 ? I< Nov28 0:00 _ [acpi_thermal_pm]
root 244 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/16:1-events]
root 245 0.0 0.0 0 0 ? I Nov28 0:01 _ [kworker/16:2-events]
root 248 0.0 0.0 0 0 ? I< Nov28 0:00 _ [vfio-irqfd-clea]
root 258 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/15:1-rcu_gp]
root 260 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/23:1-events]
root 263 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ipv6_addrconf]
root 272 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kstrp]
root 275 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/u67:0]
root 276 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/u68:0]
root 277 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/u69:0]
root 290 0.0 0.0 0 0 ? I< Nov28 0:00 _ [charger_manager]
root 382 0.0 0.0 0 0 ? S Nov28 0:00 _ [scsi_eh_0]
root 383 0.0 0.0 0 0 ? I< Nov28 0:00 _ [scsi_tmf_0]
root 384 0.0 0.0 0 0 ? I< Nov28 0:00 _ [cryptd]
root 385 0.0 0.0 0 0 ? S Nov28 0:00 _ [scsi_eh_1]
root 386 0.0 0.0 0 0 ? I< Nov28 0:00 _ [scsi_tmf_1]
root 387 0.0 0.0 0 0 ? S Nov28 0:04 _ [usb-storage]
root 390 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/10:1H-kblockd]
root 391 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/12:1H-kblockd]
root 406 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/20:1H-kblockd]
root 407 0.0 0.0 0 0 ? I< Nov28 0:00 _ [uas]
root 410 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/2:1H-kblockd]
root 411 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mlx4]
root 412 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mlx4_health]
root 415 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ttm_swap]
root 416 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/16:1H-kblockd]
root 417 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/4:1H-kblockd]
root 442 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kdmflush]
root 445 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kdmflush]
root 462 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kdmflush]
root 478 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mlx4_en]
root 481 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ib-comp-wq]
root 482 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ib-comp-unb-wq]
root 483 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ib_mcast]
root 484 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ib_nl_sa_wq]
root 486 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mlx4_ib]
root 487 0.0 0.0 0 0 ? I< Nov28 0:00 _ [mlx4_ib_mcg]
root 491 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ib_mad1]
root 502 0.0 0.0 0 0 ? I< Nov28 0:00 _ [raid5wq]
root 542 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/22:1H-kblockd]
root 543 0.0 0.0 0 0 ? S Nov28 0:16 _ [jbd2/dm-2-8]
root 544 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ext4-rsv-conver]
root 574 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/6:1H-kblockd]
root 575 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/8:1H-kblockd]
root 579 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/0:1H-kblockd]
root 600 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/15:1H-kblockd]
root 603 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/18:1H-kblockd]
root 612 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/14:1H-kblockd]
root 623 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/17:1H-kblockd]
root 624 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/5:1H-kblockd]
root 633 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/3:1H-kblockd]
root 636 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/23:1H-kblockd]
root 642 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/11:1H-kblockd]
root 645 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/13:1H-kblockd]
root 646 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/1:1H-kblockd]
root 654 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/7:1H-kblockd]
root 657 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/19:1H-kblockd]
root 662 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/23:2-rcu_par_gp]
root 664 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/21:1H-kblockd]
root 730 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kworker/9:1H-kblockd]
root 892 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/1:2-events]
root 952 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kaluad]
root 953 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kmpath_rdacd]
root 954 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kmpathd]
root 955 0.0 0.0 0 0 ? I< Nov28 0:00 _ [kmpath_handlerd]
root 965 0.0 0.0 0 0 ? S Nov28 0:00 _ [jbd2/sda2-8]
root 966 0.0 0.0 0 0 ? I< Nov28 0:00 _ [ext4-rsv-conver]
root 968 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop0]
root 972 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop1]
root 976 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop3]
root 977 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop4]
root 978 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop5]
root 995 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/7:2-rcu_par_gp]
root 1007 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/11:2-mm_percpu_wq]
root 1967 0.0 0.0 0 0 ? I< Nov28 0:00 _ [dio/dm-2]
root 2101 0.0 0.0 2488 584 ? S Nov28 0:00 _ bpfilter_umh
root 2649 0.0 0.0 0 0 ? S< Nov28 0:00 _ [spl_system_task]
root 2650 0.0 0.0 0 0 ? S< Nov28 0:00 _ [spl_delay_taskq]
root 2651 0.0 0.0 0 0 ? S< Nov28 0:00 _ [spl_dynamic_tas]
root 2652 0.0 0.0 0 0 ? S< Nov28 0:00 _ [spl_kmem_cache]
root 2657 0.0 0.0 0 0 ? S< Nov28 0:00 _ [zvol]
root 2664 0.0 0.0 0 0 ? S Nov28 0:00 _ [arc_prune]
root 2665 0.0 0.0 0 0 ? SN Nov28 0:00 _ [zthr_procedure]
root 2666 0.0 0.0 0 0 ? SN Nov28 0:00 _ [zthr_procedure]
root 2667 0.0 0.0 0 0 ? S Nov28 0:00 _ [dbu_evict]
root 2668 0.0 0.0 0 0 ? SN Nov28 0:00 _ [dbuf_evict]
root 2669 0.0 0.0 0 0 ? SN Nov28 0:00 _ [z_vdev_file]
root 2670 0.0 0.0 0 0 ? S Nov28 0:00 _ [l2arc_feed]
root 2755 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/20:1-rcu_gp]
root 2790 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/22:0-events]
root 2822 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/4:0-rcu_gp]
root 2888 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/18:0-events]
root 2968 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/2:0-events]
root 3125 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/11:0-rcu_par_gp]
root 3128 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/12:1-events]
root 3129 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/14:0-cgroup_destroy]
root 3137 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/5:1-mm_percpu_wq]
root 3141 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/15:0-mm_percpu_wq]
root 3146 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/7:0-rcu_gp]
root 3165 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/4:1-mm_percpu_wq]
root 3482 0.0 0.0 0 0 ? I Nov28 0:01 _ [kworker/0:0-events]
root 3484 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/2:1-mm_percpu_wq]
root 3789 0.0 0.0 0 0 ? S< Nov28 0:00 _ [loop2]
root 4516 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/8:2]
root 5045 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/21:0-events]
root 5047 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/6:1-rcu_par_gp]
root 5060 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/8:1-mm_percpu_wq]
root 5061 0.0 0.0 0 0 ? I Nov28 0:01 _ [kworker/14:2-events]
root 5094 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/18:2-mm_percpu_wq]
root 5239 0.0 0.0 0 0 ? I Nov28 0:05 _ [kworker/13:2-events]
root 5270 0.0 0.0 0 0 ? I Nov28 0:00 _ [kworker/19:0-rcu_gp]
root 5559 0.0 0.0 0 0 ? I 00:00 0:00 _ [kworker/20:2-events]
root 5563 0.0 0.0 0 0 ? I 00:00 0:00 _ [kworker/22:2-events]
root 5566 0.0 0.0 0 0 ? I 00:00 0:00 _ [kworker/6:0-mm_percpu_wq]
root 5581 0.0 0.0 0 0 ? I 00:00 0:00 _ [kworker/12:2]
root 5829 0.0 0.0 0 0 ? I 00:24 0:00 _ [kworker/21:1-mm_percpu_wq]
root 6019 0.0 0.0 0 0 ? I 00:36 0:00 _ [kworker/19:2-events]
root 6028 0.0 0.0 0 0 ? I 00:36 0:00 _ [kworker/17:0-events]
root 6033 0.0 0.0 0 0 ? I 00:36 0:00 _ [kworker/17:3-events]
root 6136 0.0 0.0 0 0 ? I 03:11 0:00 _ [kworker/1:0-mm_percpu_wq]
root 6593 0.0 0.0 0 0 ? I 06:20 0:00 _ [kworker/9:2]
root 6651 0.0 0.0 0 0 ? I 06:25 0:05 _ [kworker/13:0-events]
root 6780 0.0 0.0 0 0 ? I 07:00 0:00 _ [kworker/10:0-events]
root 6894 0.0 0.0 0 0 ? I 08:54 0:00 _ [kworker/10:1-events]
root 6994 0.0 0.0 0 0 ? I 12:13 0:00 _ [kworker/u65:0-events_power_efficient]
root 7004 0.0 0.0 0 0 ? I 12:30 0:00 _ [kworker/u65:2-events_unbound]
root 7005 0.0 0.0 0 0 ? I 12:31 0:00 _ [kworker/u64:2-rescan_0_hpsa]
root 7007 0.0 0.0 0 0 ? I 12:38 0:00 _ [kworker/u66:1-events_freezable_power_]
root 7008 0.0 0.0 0 0 ? I 12:41 0:00 _ [kworker/u64:0-mlx4_en]
root 7009 0.0 0.0 0 0 ? I 12:43 0:00 _ [kworker/u66:2-events_power_efficient]
root 7030 0.0 0.0 0 0 ? I 12:45 0:00 _ [kworker/3:1-events]
root 7032 0.0 0.0 0 0 ? I 12:45 0:00 _ [kworker/3:3]
root 7187 0.0 0.0 0 0 ? I 12:45 0:00 _ [kworker/7:1-mm_percpu_wq]
root 7231 0.0 0.0 0 0 ? I 12:47 0:00 _ [kworker/u64:1-edac-poller]
root 7237 0.0 0.0 0 0 ? I 12:49 0:00 _ [kworker/u66:0-events_power_efficient]
root 1 0.0 0.0 169116 13124 ? Ss Nov28 0:12 /sbin/init maybe-ubiquity
root 621 0.0 0.0 61696 25172 ? S<s Nov28 0:01 /lib/systemd/systemd-journald
root 663 0.0 0.0 22104 6252 ? Ss Nov28 0:00 /lib/systemd/systemd-udevd
root 956 0.0 0.0 280304 18368 ? SLsl Nov28 0:02 /sbin/multipathd -d -s
systemd+ 1005 0.0 0.0 90424 6408 ? Ssl Nov28 0:00 /lib/systemd/systemd-timesyncd
systemd+ 1041 0.0 0.0 26920 8196 ? Ss Nov28 0:00 /lib/systemd/systemd-networkd
systemd+ 1044 0.0 0.0 24356 12628 ? Ss Nov28 0:00 /lib/systemd/systemd-resolved
root 1072 0.0 0.0 239280 9228 ? Ssl Nov28 0:00 /usr/lib/accountsservice/accounts-daemon
root 1075 0.0 0.0 18340 12388 ? Ss Nov28 0:00 /usr/bin/python3 /usr/bin/ceph-crash
message+ 1087 0.0 0.0 7512 4704 ? Ss Nov28 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root 1096 0.0 0.0 81988 3888 ? Ssl Nov28 0:03 /usr/sbin/irqbalance --foreground
root 1099 0.0 0.0 29072 18108 ? Ss Nov28 0:00 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
syslog 1104 0.0 0.0 224348 5428 ? Ssl Nov28 0:00 /usr/sbin/rsyslogd -n -iNONE
root 1105 0.0 0.0 10392 5348 ? Ss Nov28 0:00 /usr/sbin/smartd -n
root 1107 0.0 0.0 3086188 38700 ? Ssl Nov28 0:11 /usr/lib/snapd/snapd
root 1114 0.0 0.0 16996 7992 ? Ss Nov28 0:00 /lib/systemd/systemd-logind
root 1135 0.0 0.0 6812 3048 ? Ss Nov28 0:00 /usr/sbin/cron -f
daemon 1137 0.0 0.0 3792 2220 ? Ss Nov28 0:00 /usr/sbin/atd -f
root 1156 0.0 0.0 5828 1824 tty1 Ss+ Nov28 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root 1180 0.0 0.0 12176 7300 ? Ss Nov28 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 7010 0.0 0.0 13952 9112 ? Ss 12:45 0:00 _ sshd: gccadmin [priv]
gccadmin 7148 0.0 0.0 13952 6152 ? S 12:45 0:00 | _ sshd: gccadmin@pts/0
gccadmin 7149 0.0 0.0 8276 5216 pts/0 Ss 12:45 0:00 | _ -bash
root 7188 0.0 0.0 9436 4676 pts/0 S+ 12:46 0:00 | _ sudo lxd init
root 7191 0.0 0.0 1823228 33620 pts/0 Sl+ 12:47 0:00 | _ lxd init
root 7238 0.0 0.0 13956 9236 ? Ss 12:49 0:00 _ sshd: gccadmin [priv]
gccadmin 7316 0.0 0.0 13956 6104 ? S 12:49 0:00 _ sshd: gccadmin@pts/1
gccadmin 7317 0.0 0.0 8276 5224 pts/1 Ss 12:49 0:00 _ -bash
gccadmin 7331 0.0 0.0 9316 3848 pts/1 R+ 12:50 0:00 _ ps fauxww
root 1185 0.0 0.0 107888 21012 ? Ssl Nov28 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 1248 0.0 0.0 236420 9336 ? Ssl Nov28 0:00 /usr/lib/policykit-1/polkitd --no-debug
ceph 1501 0.4 0.0 715744 64796 ? Ssl Nov28 3:53 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
ceph 1504 0.4 0.0 715740 64704 ? Ssl Nov28 4:10 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ceph 2444 0.5 1.1 1422212 932556 ? Ssl Nov28 4:15 /usr/bin/ceph-mon -f --cluster ceph --id bvlgari --setuser ceph --setgroup ceph
ceph 2498 0.1 0.2 497920 184656 ? Ssl Nov28 1:04 /usr/bin/ceph-mgr -f --cluster ceph --id bvlgari --setuser ceph --setgroup ceph
ceph 2543 0.0 0.0 303700 38692 ? Ssl Nov28 0:14 /usr/bin/ceph-mds -f --cluster ceph --id bvlgari --setuser ceph --setgroup ceph
root 2699 0.0 0.0 859836 22468 ? Sl Nov28 0:03 rbd --id admin --cluster ceph --pool lxd info lxd_lxd
root 4350 0.0 0.0 4636 1752 ? Ss Nov28 0:00 /bin/sh /snap/lxd/18402/commands/daemon.start
root 4508 0.0 0.0 3413668 73840 ? Sl Nov28 0:20 _ lxd --logfile /var/snap/lxd/common/lxd/logs/lxd.log --group lxd
root 4676 0.0 0.0 786104 22540 ? Sl Nov28 0:03 _ rbd --id admin --image-feature layering, --cluster ceph --pool my-ceph --size 0B create lxd_my-ceph
lxd 5861 0.0 0.0 43628 3484 ? Ss 00:24 0:00 _ dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdfan0 --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=240.21.0.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.hosts --dhcp-range 240.21.0.2,240.21.0.254,1h -s lxd -S /lxd/240.21.0.1#1053 --rev-server=240.0.0.0/8,240.21.0.1#1053 --conf-file=/var/snap/lxd/common/lxd/networks/lxdfan0/dnsmasq.raw -u lxd -g lxd
lxd 5862 0.0 0.0 2855148 34636 ? Ssl 00:24 0:15 _ /snap/lxd/current/bin/lxd forkdns 240.21.0.1:1053 lxd lxdfan0
root 4497 0.0 0.0 97804 1644 ? Sl Nov28 0:00 lxcfs /var/snap/lxd/common/var/lib/lxcfs -p /var/snap/lxd/common/lxcfs.pid
gccadmin 7021 0.0 0.0 18740 9900 ? Ss 12:45 0:00 /lib/systemd/systemd --user
gccadmin 7024 0.0 0.0 170548 4868 ? S 12:45 0:00 _ (sd-pam)
Thanks!
terryng
(Terry Ng)
November 29, 2020, 2:50pm
6
I have an update. I have 4 nodes, 8 OSDs, and the default pool size of 3 in my cluster, so I reckon I should use a PG count of around 250 to 266. I therefore created an OSD pool with:
sudo ceph osd pool create lxd-ceph 250
The pool was created and the cluster health was OK after a while.
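The 250–266 estimate follows the common rule of thumb of roughly 100 PGs per OSD divided by the replica count; a quick sketch of that arithmetic (the helper name is mine, not a Ceph API):

```python
def estimate_pg_count(num_osds: int, pool_size: int, pgs_per_osd: int = 100) -> int:
    """Rule-of-thumb PG target: (OSDs * PGs-per-OSD) / replica count."""
    return (num_osds * pgs_per_osd) // pool_size

# 8 OSDs, default pool size 3 -> 266, matching the 250-266 range above
print(estimate_pg_count(8, 3))
```

Ceph documentation also suggests rounding the result to a nearby power of two, which the mgr autoscaler handles automatically on newer releases.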
I then tried to initialize LXD again.
Name of the existing CEPH cluster [default=ceph]:
Name of the OSD storage pool [default=lxd]: lxd-ceph
Number of placement groups [default=32]: 250
It went well. The other nodes joined the cluster without problems.
I will try to summarize what I have done later. Thanks!
Terry
stgraber
(Stéphane Graber)
November 30, 2020, 3:05am
7
Looks like it’s stuck on your ceph cluster somehow:
root 4676 0.0 0.0 786104 22540 ? Sl Nov28 0:03 _ rbd --id admin --image-feature layering, --cluster ceph --pool my-ceph --size 0B create lxd_my-ceph