Can't run containers :(

Thank you for the reply.
"lxc storage ls" gives me:
+---------+--------+--------------------------------------------+-------------+---------+
| NAME    | DRIVER | SOURCE                                     | DESCRIPTION | USED BY |
+---------+--------+--------------------------------------------+-------------+---------+
| default | zfs    | /var/snap/lxd/common/lxd/disks/default.img |             | 24      |
+---------+--------+--------------------------------------------+-------------+---------+

In the video they talk about running "lxd recover". I tried to run that, but it gave me an error:
"Error: Failed validation request: Failed mounting pool "default": Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)"

What can I do?

Look at your dmesg output; this sounds like something bad happened to your storage.
You may also want to check df -h to see whether you ran out of disk space.
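
Something like this might help narrow it down (a rough sketch, assuming zfsutils-linux is available on the host; the grep pattern is only illustrative, and the path is the loop-file location from your "lxc storage ls" output):

```
# look for disk/ZFS errors in the kernel log
dmesg | grep -iE 'ata|sd[a-z]|zfs|i/o error'

# check free space on the filesystem holding the loop-backed pool file
df -h /var/snap/lxd/common/lxd/disks

# see what ZFS can read from that directory, then check pool health
sudo zpool import -d /var/snap/lxd/common/lxd/disks
sudo zpool status -v
```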

Output from dmesg:
[ 0.000000] microcode: microcode updated early to revision 0x28, date = 2019-11-12
[ 0.000000] Linux version 5.4.0-150-generic (buildd@bos03-amd64-009) (gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)) #167-Ubuntu SMP Mon May 15 17:35:05 UTC 2023 (Ubuntu 5.4.0-150.167-generic 5.4.233)
[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.4.0-150-generic root=UUID=596cf640-ef01-11e8-ab98-002215f11538 ro maybe-ubiquity
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
[ 0.000000] Hygon HygonGenuine
[ 0.000000] Centaur CentaurHauls
[ 0.000000] zhaoxin Shanghai
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
[ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009d7ff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009d800-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000099873fff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000099874000-0x000000009987afff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x000000009987b000-0x0000000099cb8fff] usable
[ 0.000000] BIOS-e820: [mem 0x0000000099cb9000-0x000000009a249fff] reserved
[ 0.000000] BIOS-e820: [mem 0x000000009a24a000-0x00000000abe4ffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000abe50000-0x00000000ac057fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ac058000-0x00000000ac098fff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000ac099000-0x00000000ac149fff] ACPI NVS
[ 0.000000] BIOS-e820: [mem 0x00000000ac14a000-0x00000000acffefff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000acfff000-0x00000000acffffff] usable
[ 0.000000] BIOS-e820: [mem 0x00000000af800000-0x00000000bf9fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000f8000000-0x00000000fbffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed00000-0x00000000fed03fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000063e5fffff] usable
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] SMBIOS 2.7 present.
[ 0.000000] DMI: MSI MS-7823/B85M-G43 (MS-7823), BIOS V3.0 04/16/2013
[ 0.000000] tsc: Fast TSC calibration using PIT
[ 0.000000] tsc: Detected 3399.670 MHz processor
[ 0.001517] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.001518] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.001523] last_pfn = 0x63e600 max_arch_pfn = 0x400000000
[ 0.001527] MTRR default type: uncachable
[ 0.001527] MTRR fixed ranges enabled:
[ 0.001528] 00000-9FFFF write-back
[ 0.001529] A0000-BFFFF uncachable
[ 0.001529] C0000-CFFFF write-protect
[ 0.001530] D0000-E7FFF uncachable
[ 0.001530] E8000-FFFFF write-protect
[ 0.001531] MTRR variable ranges enabled:
[ 0.001532] 0 base 0000000000 mask 7C00000000 write-back
[ 0.001532] 1 base 0400000000 mask 7E00000000 write-back
[ 0.001533] 2 base 0600000000 mask 7FC0000000 write-back
[ 0.001533] 3 base 00C0000000 mask 7FC0000000 uncachable
[ 0.001534] 4 base 00B0000000 mask 7FF0000000 uncachable
[ 0.001534] 5 base 00AF800000 mask 7FFF800000 uncachable
[ 0.001535] 6 base 063F000000 mask 7FFF000000 uncachable
[ 0.001536] 7 base 063E800000 mask 7FFF800000 uncachable
[ 0.001536] 8 base 063E600000 mask 7FFFE00000 uncachable
[ 0.001536] 9 disabled
[ 0.001818] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
[ 0.002186] total RAM covered: 24286M
[ 0.002285] gran_size: 64K chunk_size: 64K num_reg: 10 lose cover RAM: 998M
[ 0.002286] gran_size: 64K chunk_size: 128K num_reg: 10 lose cover RAM: 998M
[ 0.002287] gran_size: 64K chunk_size: 256K num_reg: 10 lose cover RAM: 998M
[ 0.002288] gran_size: 64K chunk_size: 512K num_reg: 10 lose cover RAM: 998M
[ 0.002289] gran_size: 64K chunk_size: 1M num_reg: 10 lose cover RAM: 998M
[ 0.002289] gran_size: 64K chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002290] gran_size: 64K chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002291] gran_size: 64K chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002292] gran_size: 64K chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002293] BADgran_size: 64K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002293] BADgran_size: 64K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002294] BADgran_size: 64K chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002295] BADgran_size: 64K chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002296] BADgran_size: 64K chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002297] BADgran_size: 64K chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002297] BADgran_size: 64K chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002298] gran_size: 128K chunk_size: 128K num_reg: 10 lose cover RAM: 998M
[ 0.002299] gran_size: 128K chunk_size: 256K num_reg: 10 lose cover RAM: 998M
[ 0.002300] gran_size: 128K chunk_size: 512K num_reg: 10 lose cover RAM: 998M
[ 0.002301] gran_size: 128K chunk_size: 1M num_reg: 10 lose cover RAM: 998M
[ 0.002302] gran_size: 128K chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002302] gran_size: 128K chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002303] gran_size: 128K chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002304] gran_size: 128K chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002305] BADgran_size: 128K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002306] BADgran_size: 128K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002306] BADgran_size: 128K chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002307] BADgran_size: 128K chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002308] BADgran_size: 128K chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002309] BADgran_size: 128K chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002310] BADgran_size: 128K chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002310] gran_size: 256K chunk_size: 256K num_reg: 10 lose cover RAM: 998M
[ 0.002311] gran_size: 256K chunk_size: 512K num_reg: 10 lose cover RAM: 998M
[ 0.002312] gran_size: 256K chunk_size: 1M num_reg: 10 lose cover RAM: 998M
[ 0.002313] gran_size: 256K chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002314] gran_size: 256K chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002314] gran_size: 256K chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002315] gran_size: 256K chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002316] BADgran_size: 256K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002317] BADgran_size: 256K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002317] BADgran_size: 256K chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002318] BADgran_size: 256K chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002319] BADgran_size: 256K chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002320] BADgran_size: 256K chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002321] BADgran_size: 256K chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002321] gran_size: 512K chunk_size: 512K num_reg: 10 lose cover RAM: 998M
[ 0.002322] gran_size: 512K chunk_size: 1M num_reg: 10 lose cover RAM: 998M
[ 0.002323] gran_size: 512K chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002324] gran_size: 512K chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002325] gran_size: 512K chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002325] gran_size: 512K chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002326] BADgran_size: 512K chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002327] BADgran_size: 512K chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002328] BADgran_size: 512K chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002328] BADgran_size: 512K chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002329] BADgran_size: 512K chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002330] BADgran_size: 512K chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002331] BADgran_size: 512K chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002332] gran_size: 1M chunk_size: 1M num_reg: 10 lose cover RAM: 998M
[ 0.002333] gran_size: 1M chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002333] gran_size: 1M chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002334] gran_size: 1M chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002335] gran_size: 1M chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002336] BADgran_size: 1M chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002337] BADgran_size: 1M chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002337] BADgran_size: 1M chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002338] BADgran_size: 1M chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002339] BADgran_size: 1M chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002340] BADgran_size: 1M chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002341] BADgran_size: 1M chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002341] gran_size: 2M chunk_size: 2M num_reg: 10 lose cover RAM: 998M
[ 0.002342] gran_size: 2M chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002343] gran_size: 2M chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002344] gran_size: 2M chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002344] BADgran_size: 2M chunk_size: 32M num_reg: 10 lose cover RAM: -16M
[ 0.002345] BADgran_size: 2M chunk_size: 64M num_reg: 10 lose cover RAM: -16M
[ 0.002346] BADgran_size: 2M chunk_size: 128M num_reg: 10 lose cover RAM: -16M
[ 0.002347] BADgran_size: 2M chunk_size: 256M num_reg: 10 lose cover RAM: -16M
[ 0.002348] BADgran_size: 2M chunk_size: 512M num_reg: 10 lose cover RAM: -16M
[ 0.002348] BADgran_size: 2M chunk_size: 1G num_reg: 10 lose cover RAM: -16M
[ 0.002349] BADgran_size: 2M chunk_size: 2G num_reg: 10 lose cover RAM: -1040M
[ 0.002350] gran_size: 4M chunk_size: 4M num_reg: 10 lose cover RAM: 998M
[ 0.002351] gran_size: 4M chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002351] gran_size: 4M chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002352] BADgran_size: 4M chunk_size: 32M num_reg: 10 lose cover RAM: -14M
[ 0.002353] BADgran_size: 4M chunk_size: 64M num_reg: 10 lose cover RAM: -14M
[ 0.002354] BADgran_size: 4M chunk_size: 128M num_reg: 10 lose cover RAM: -14M
[ 0.002355] BADgran_size: 4M chunk_size: 256M num_reg: 10 lose cover RAM: -14M
[ 0.002355] BADgran_size: 4M chunk_size: 512M num_reg: 10 lose cover RAM: -14M
[ 0.002356] BADgran_size: 4M chunk_size: 1G num_reg: 10 lose cover RAM: -14M
[ 0.002357] BADgran_size: 4M chunk_size: 2G num_reg: 10 lose cover RAM: -1038M
[ 0.002358] gran_size: 8M chunk_size: 8M num_reg: 10 lose cover RAM: 998M
[ 0.002359] gran_size: 8M chunk_size: 16M num_reg: 10 lose cover RAM: 102M
[ 0.002359] gran_size: 8M chunk_size: 32M num_reg: 10 lose cover RAM: 102M
[ 0.002360] gran_size: 8M chunk_size: 64M num_reg: 9 lose cover RAM: 6M
[ 0.002361] gran_size: 8M chunk_size: 128M num_reg: 9 lose cover RAM: 6M
[ 0.002362] gran_size: 8M chunk_size: 256M num_reg: 9 lose cover RAM: 6M
[ 0.002363] gran_size: 8M chunk_size: 512M num_reg: 9 lose cover RAM: 6M
[ 0.002363] gran_size: 8M chunk_size: 1G num_reg: 9 lose cover RAM: 6M
[ 0.002364] gran_size: 8M chunk_size: 2G num_reg: 10 lose cover RAM: 6M
[ 0.002365] gran_size: 16M chunk_size: 16M num_reg: 10 lose cover RAM: 494M
[ 0.002366] gran_size: 16M chunk_size: 32M num_reg: 10 lose cover RAM: 110M
[ 0.002367] gran_size: 16M chunk_size: 64M num_reg: 9 lose cover RAM: 14M
[ 0.002367] gran_size: 16M chunk_size: 128M num_reg: 9 lose cover RAM: 14M
[ 0.002368] gran_size: 16M chunk_size: 256M num_reg: 9 lose cover RAM: 14M
[ 0.002369] gran_size: 16M chunk_size: 512M num_reg: 9 lose cover RAM: 14M
[ 0.002370] gran_size: 16M chunk_size: 1G num_reg: 9 lose cover RAM: 14M
[ 0.002371] gran_size: 16M chunk_size: 2G num_reg: 10 lose cover RAM: 14M
[ 0.002371] gran_size: 32M chunk_size: 32M num_reg: 10 lose cover RAM: 254M
[ 0.002372] gran_size: 32M chunk_size: 64M num_reg: 9 lose cover RAM: 30M
[ 0.002373] gran_size: 32M chunk_size: 128M num_reg: 9 lose cover RAM: 30M
[ 0.002374] gran_size: 32M chunk_size: 256M num_reg: 9 lose cover RAM: 30M
[ 0.002374] gran_size: 32M chunk_size: 512M num_reg: 9 lose cover RAM: 30M
[ 0.002375] gran_size: 32M chunk_size: 1G num_reg: 9 lose cover RAM: 30M
[ 0.002376] gran_size: 32M chunk_size: 2G num_reg: 10 lose cover RAM: 30M
[ 0.002377] gran_size: 64M chunk_size: 64M num_reg: 10 lose cover RAM: 158M
[ 0.002378] gran_size: 64M chunk_size: 128M num_reg: 9 lose cover RAM: 94M
[ 0.002378] gran_size: 64M chunk_size: 256M num_reg: 9 lose cover RAM: 94M
[ 0.002379] gran_size: 64M chunk_size: 512M num_reg: 9 lose cover RAM: 94M
[ 0.002380] gran_size: 64M chunk_size: 1G num_reg: 9 lose cover RAM: 94M
[ 0.002381] gran_size: 64M chunk_size: 2G num_reg: 10 lose cover RAM: 94M
[ 0.002382] gran_size: 128M chunk_size: 128M num_reg: 9 lose cover RAM: 222M
[ 0.002382] gran_size: 128M chunk_size: 256M num_reg: 9 lose cover RAM: 222M
[ 0.002383] gran_size: 128M chunk_size: 512M num_reg: 9 lose cover RAM: 222M
[ 0.002384] gran_size: 128M chunk_size: 1G num_reg: 9 lose cover RAM: 222M
[ 0.002385] gran_size: 128M chunk_size: 2G num_reg: 10 lose cover RAM: 222M
[ 0.002385] gran_size: 256M chunk_size: 256M num_reg: 7 lose cover RAM: 478M
[ 0.002386] gran_size: 256M chunk_size: 512M num_reg: 7 lose cover RAM: 478M
[ 0.002387] gran_size: 256M chunk_size: 1G num_reg: 8 lose cover RAM: 478M
[ 0.002388] gran_size: 256M chunk_size: 2G num_reg: 9 lose cover RAM: 478M
[ 0.002389] gran_size: 512M chunk_size: 512M num_reg: 6 lose cover RAM: 734M
[ 0.002389] gran_size: 512M chunk_size: 1G num_reg: 8 lose cover RAM: 734M
[ 0.002390] gran_size: 512M chunk_size: 2G num_reg: 9 lose cover RAM: 734M
[ 0.002391] gran_size: 1G chunk_size: 1G num_reg: 4 lose cover RAM: 1758M
[ 0.002392] gran_size: 1G chunk_size: 2G num_reg: 4 lose cover RAM: 1758M
[ 0.002392] gran_size: 2G chunk_size: 2G num_reg: 4 lose cover RAM: 1758M
[ 0.002393] mtrr_cleanup: can not find optimal value
[ 0.002394] please specify mtrr_gran_size/mtrr_chunk_size
[ 0.002397] e820: update [mem 0xaf800000-0xffffffff] usable ==> reserved
[ 0.002399] last_pfn = 0xad000 max_arch_pfn = 0x400000000
[ 0.008183] found SMP MP-table at [mem 0x000fd6c0-0x000fd6cf]
[ 0.014561] check: Scanning 1 areas for low memory corruption
[ 0.014565] Using GB pages for direct mapping
[ 0.014907] RAMDISK: [mem 0x2da73000-0x32d30fff]
[ 0.014912] ACPI: Early table checksum verification disabled
[ 0.014914] ACPI: RSDP 0x00000000000F0490 000024 (v02 ALASKA)
[ 0.014917] ACPI: XSDT 0x00000000AC121080 000084 (v01 ALASKA A M I 01072009 AMI 00010013)
[ 0.014921] ACPI: FACP 0x00000000AC12C548 00010C (v05 ALASKA A M I 01072009 AMI 00010013)
[ 0.014925] ACPI: DSDT 0x00000000AC1211A0 00B3A8 (v02 ALASKA A M I 00000021 INTL 20091112)
[ 0.014927] ACPI: FACS 0x00000000AC148080 000040
[ 0.014929] ACPI: APIC 0x00000000AC12C658 000092 (v03 ALASKA A M I 01072009 AMI 00010013)
[ 0.014931] ACPI: FPDT 0x00000000AC12C6F0 000044 (v01 ALASKA A M I 01072009 AMI 00010013)
[ 0.014933] ACPI: SSDT 0x00000000AC12C738 000539 (v01 PmRef Cpu0Ist 00003000 INTL 20051117)
[ 0.014935] ACPI: SSDT 0x00000000AC12CC78 000AD8 (v01 PmRef CpuPm 00003000 INTL 20051117)
[ 0.014938] ACPI: MCFG 0x00000000AC12D750 00003C (v01 ALASKA A M I 01072009 MSFT 00000097)
[ 0.014940] ACPI: HPET 0x00000000AC12D790 000038 (v01 ALASKA A M I 01072009 AMI. 00000005)
[ 0.014942] ACPI: SSDT 0x00000000AC12D7C8 00036D (v01 SataRe SataTabl 00001000 INTL 20091112)
[ 0.014944] ACPI: SSDT 0x00000000AC12DB38 00327D (v01 SaSsdt SaSsdt 00003000 INTL 20091112)
[ 0.014946] ACPI: ASF! 0x00000000AC130DB8 0000A5 (v32 INTEL HCG 00000001 TFSM 000F4240)
[ 0.014948] ACPI: DMAR 0x00000000AC130E60 0000B8 (v01 INTEL HSW 00000001 INTL 00000001)
[ 0.014950] ACPI: SSDT 0x00000000AC130F18 000803 (v01 Intel_ IsctTabl 00001000 INTL 20091112)
[ 0.014952] ACPI: Reserving FACP table memory at [mem 0xac12c548-0xac12c653]
[ 0.014953] ACPI: Reserving DSDT table memory at [mem 0xac1211a0-0xac12c547]
[ 0.014954] ACPI: Reserving FACS table memory at [mem 0xac148080-0xac1480bf]
[ 0.014955] ACPI: Reserving APIC table memory at [mem 0xac12c658-0xac12c6e9]
[ 0.014955] ACPI: Reserving FPDT table memory at [mem 0xac12c6f0-0xac12c733]
[ 0.014956] ACPI: Reserving SSDT table memory at [mem 0xac12c738-0xac12cc70]
[ 0.014957] ACPI: Reserving SSDT table memory at [mem 0xac12cc78-0xac12d74f]
[ 0.014957] ACPI: Reserving MCFG table memory at [mem 0xac12d750-0xac12d78b]
[ 0.014958] ACPI: Reserving HPET table memory at [mem 0xac12d790-0xac12d7c7]
[ 0.014959] ACPI: Reserving SSDT table memory at [mem 0xac12d7c8-0xac12db34]
[ 0.014959] ACPI: Reserving SSDT table memory at [mem 0xac12db38-0xac130db4]
[ 0.014960] ACPI: Reserving ASF! table memory at [mem 0xac130db8-0xac130e5c]
[ 0.014961] ACPI: Reserving DMAR table memory at [mem 0xac130e60-0xac130f17]
[ 0.014962] ACPI: Reserving SSDT table memory at [mem 0xac130f18-0xac13171a]
[ 0.014969] ACPI: Local APIC address 0xfee00000
[ 0.015034] No NUMA configuration found
[ 0.015035] Faking a node at [mem 0x0000000000000000-0x000000063e5fffff]
[ 0.015043] NODE_DATA(0) allocated [mem 0x63e5d4000-0x63e5fefff]
[ 0.015219] Zone ranges:
[ 0.015220] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.015221] DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
[ 0.015222] Normal [mem 0x0000000100000000-0x000000063e5fffff]
[ 0.015223] Device empty
[ 0.015224] Movable zone start for each node
[ 0.015226] Early memory node ranges
[ 0.015227] node 0: [mem 0x0000000000001000-0x000000000009cfff]
[ 0.015228] node 0: [mem 0x0000000000100000-0x0000000099873fff]
[ 0.015229] node 0: [mem 0x000000009987b000-0x0000000099cb8fff]
[ 0.015230] node 0: [mem 0x000000009a24a000-0x00000000abe4ffff]
[ 0.015230] node 0: [mem 0x00000000ac058000-0x00000000ac098fff]
[ 0.015231] node 0: [mem 0x00000000acfff000-0x00000000acffffff]
[ 0.015231] node 0: [mem 0x0000000100000000-0x000000063e5fffff]
[ 0.015422] Zeroed struct page in unavailable ranges: 24938 pages
[ 0.015423] Initmem setup node 0 [mem 0x0000000000001000-0x000000063e5fffff]
[ 0.015424] On node 0 totalpages: 6200982
[ 0.015425] DMA zone: 64 pages used for memmap
[ 0.015426] DMA zone: 21 pages reserved
[ 0.015426] DMA zone: 3996 pages, LIFO batch:0
[ 0.015464] DMA32 zone: 10916 pages used for memmap
[ 0.015465] DMA32 zone: 698618 pages, LIFO batch:63
[ 0.022803] Normal zone: 85912 pages used for memmap
[ 0.022804] Normal zone: 5498368 pages, LIFO batch:63
[ 0.072419] Reserving Intel graphics memory at [mem 0xafa00000-0xbf9fffff]
[ 0.072483] ACPI: PM-Timer IO Port: 0x1808
[ 0.072485] ACPI: Local APIC address 0xfee00000
[ 0.072491] ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
[ 0.072500] IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
[ 0.072502] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.072503] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.072504] ACPI: IRQ0 used by override.
[ 0.072505] ACPI: IRQ9 used by override.
[ 0.072506] Using ACPI (MADT) for SMP configuration information
[ 0.072507] ACPI: HPET id: 0x8086a701 base: 0xfed00000
[ 0.072511] TSC deadline timer available
[ 0.072512] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[ 0.072528] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 0.072529] PM: Registered nosave memory: [mem 0x0009d000-0x0009dfff]
[ 0.072530] PM: Registered nosave memory: [mem 0x0009e000-0x0009ffff]
[ 0.072530] PM: Registered nosave memory: [mem 0x000a0000-0x000dffff]
[ 0.072531] PM: Registered nosave memory: [mem 0x000e0000-0x000fffff]
[ 0.072532] PM: Registered nosave memory: [mem 0x99874000-0x9987afff]
[ 0.072533] PM: Registered nosave memory: [mem 0x99cb9000-0x9a249fff]
[ 0.072534] PM: Registered nosave memory: [mem 0xabe50000-0xac057fff]
[ 0.072536] PM: Registered nosave memory: [mem 0xac099000-0xac149fff]
[ 0.072536] PM: Registered nosave memory: [mem 0xac14a000-0xacffefff]
[ 0.072538] PM: Registered nosave memory: [mem 0xad000000-0xaf7fffff]
[ 0.072538] PM: Registered nosave memory: [mem 0xaf800000-0xbf9fffff]
[ 0.072539] PM: Registered nosave memory: [mem 0xbfa00000-0xf7ffffff]
[ 0.072539] PM: Registered nosave memory: [mem 0xf8000000-0xfbffffff]
[ 0.072540] PM: Registered nosave memory: [mem 0xfc000000-0xfebfffff]
[ 0.072540] PM: Registered nosave memory: [mem 0xfec00000-0xfec00fff]
[ 0.072541] PM: Registered nosave memory: [mem 0xfec01000-0xfecfffff]
[ 0.072541] PM: Registered nosave memory: [mem 0xfed00000-0xfed03fff]
[ 0.072542] PM: Registered nosave memory: [mem 0xfed04000-0xfed1bfff]
[ 0.072543] PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
[ 0.072543] PM: Registered nosave memory: [mem 0xfed20000-0xfedfffff]
[ 0.072544] PM: Registered nosave memory: [mem 0xfee00000-0xfee00fff]
[ 0.072544] PM: Registered nosave memory: [mem 0xfee01000-0xfeffffff]
[ 0.072545] PM: Registered nosave memory: [mem 0xff000000-0xffffffff]
[ 0.072546] [mem 0xbfa00000-0xf7ffffff] available for PCI devices
[ 0.072547] Booting paravirtualized kernel on bare hardware
[ 0.072550] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[ 0.072554] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[ 0.072754] percpu: Embedded 60 pages/cpu s208896 r8192 d28672 u262144
[ 0.072759] pcpu-alloc: s208896 r8192 d28672 u262144 alloc=1*2097152
[ 0.072760] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[ 0.072782] Built 1 zonelists, mobility grouping on. Total pages: 6104069
[ 0.072783] Policy zone: Normal
[ 0.072784] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.4.0-150-generic root=UUID=596cf640-ef01-11e8-ab98-002215f11538 ro maybe-ubiquity
[ 0.074453] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
[ 0.075267] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
[ 0.075331] mem auto-init: stack:off, heap alloc:on, heap free:off
[ 0.078624] Calgary: detecting Calgary via BIOS EBDA area
[ 0.078625] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[ 0.137758] Memory: 24175852K/24803928K available (14339K kernel code, 2394K rwdata, 9504K rodata, 2764K init, 4944K bss, 628076K reserved, 0K cma-reserved)
[ 0.138165] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
[ 0.138186] Kernel/User page tables isolation: enabled
[ 0.138198] ftrace: allocating 44667 entries in 175 pages
[ 0.151407] rcu: Hierarchical RCU implementation.
[ 0.151408] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
[ 0.151409] Tasks RCU enabled.
[ 0.151410] rcu: RCU calculated value of scheduler-enlistment delay is 25 jiffies.
[ 0.151410] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
[ 0.153620] NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
[ 0.153813] random: crng init done
[ 0.155190] Console: colour VGA+ 80x25
[ 0.167976] printk: console [tty0] enabled
[ 0.168019] ACPI: Core revision 20190816
[ 0.168129] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
[ 0.168191] APIC: Switch to symmetric I/O mode setup
[ 0.168224] DMAR: Host address width 39
[ 0.168255] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.168291] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
[ 0.168340] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.168375] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008020660462 ecap f010da
[ 0.168423] DMAR: RMRR base: 0x000000abfe0000 end: 0x000000abfedfff
[ 0.168458] DMAR: RMRR base: 0x000000af800000 end: 0x000000bf9fffff
[ 0.168494] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.168529] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.168561] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.168993] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 0.169027] x2apic enabled
[ 0.169060] Switched APIC routing to cluster x2apic.
[ 0.169484] …TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.188193] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x31011767d91, max_idle_ns: 440795372106 ns
[ 0.188249] Calibrating delay loop (skipped), value calculated using timer frequency… 6799.34 BogoMIPS (lpj=13598680)
[ 0.188303] pid_max: default: 32768 minimum: 301
[ 0.188355] LSM: Security Framework initializing
[ 0.188394] Yama: becoming mindful.
[ 0.188457] AppArmor: AppArmor initialized
[ 0.188562] Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
[ 0.188665] Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
[ 0.188725] *** VALIDATE tmpfs ***
[ 0.188859] *** VALIDATE proc ***
[ 0.188930] *** VALIDATE cgroup1 ***
[ 0.188961] *** VALIDATE cgroup2 ***
[ 0.189036] mce: CPU0: Thermal monitoring enabled (TM1)
[ 0.189080] process: using mwait in idle threads
[ 0.189113] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[ 0.189147] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[ 0.189184] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
[ 0.189234] Spectre V2 : Mitigation: Retpolines
[ 0.189265] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
[ 0.189315] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
[ 0.189350] Spectre V2 : Enabling Restricted Speculation for firmware calls
[ 0.189386] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
[ 0.189436] Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
[ 0.189472] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
[ 0.189526] MDS: Mitigation: Clear CPU buffers
[ 0.189557] MMIO Stale Data: Unknown: No mitigations
[ 0.189590] SRBDS: Mitigation: Microcode
[ 0.189758] Freeing SMP alternatives memory: 40K
[ 0.192304] smpboot: CPU0: Intel(R) Core™ i7-4770 CPU @ 3.40GHz (family: 0x6, model: 0x3c, stepping: 0x3)
[ 0.192417] Performance Events: PEBS fmt2+, Haswell events, 16-deep LBR, full-width counters, Intel PMU driver.
[ 0.192482] … version: 3
[ 0.192513] … bit width: 48
[ 0.192544] … generic registers: 4
[ 0.192575] … value mask: 0000ffffffffffff
[ 0.192608] … max period: 00007fffffffffff
[ 0.192641] … fixed-purpose events: 3
[ 0.192672] … event mask: 000000070000000f
[ 0.192730] rcu: Hierarchical SRCU implementation.
[ 0.193565] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
[ 0.193653] smp: Bringing up secondary CPUs …
[ 0.193746] x86: Booting SMP configuration:
[ 0.193777] … node #0, CPUs: #1 #2 #3 #4
[ 0.197826] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
[ 0.200277] #5 #6 #7
[ 0.200664] smp: Brought up 1 node, 8 CPUs
[ 0.200664] smpboot: Max logical packages: 1
[ 0.200664] smpboot: Total of 8 processors activated (54394.72 BogoMIPS)
[ 0.204756] devtmpfs: initialized
[ 0.204756] x86/mm: Memory block size: 128MB
[ 0.206254] PM: Registering ACPI NVS region [mem 0x99874000-0x9987afff] (28672 bytes)
[ 0.206254] PM: Registering ACPI NVS region [mem 0xac099000-0xac149fff] (724992 bytes)
[ 0.206254] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[ 0.206254] futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
[ 0.206254] pinctrl core: initialized pinctrl subsystem
[ 0.206254] PM: RTC time: 22:59:45, date: 2023-06-09
[ 0.206254] NET: Registered protocol family 16
[ 0.206254] audit: initializing netlink subsys (disabled)
[ 0.206254] audit: type=2000 audit(1686351585.036:1): state=initialized audit_enabled=0 res=1
[ 0.206254] EISA bus registered
[ 0.206254] cpuidle: using governor ladder
[ 0.206254] cpuidle: using governor menu
[ 0.206254] ACPI FADT declares the system doesn’t support PCIe ASPM, so disable it
[ 0.206254] ACPI: bus type PCI registered
[ 0.206254] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[ 0.206254] PCI: MMCONFIG for domain 0000 [bus 00-3f] at [mem 0xf8000000-0xfbffffff] (base 0xf8000000)
[ 0.206254] PCI: MMCONFIG at [mem 0xf8000000-0xfbffffff] reserved in E820
[ 0.206254] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
[ 0.206254] PCI: Using configuration type 1 for base access
[ 0.206254] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
[ 0.209229] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[ 0.209300] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[ 0.209300] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 0.209300] ACPI: Added _OSI(Module Device)
[ 0.209300] ACPI: Added _OSI(Processor Device)
[ 0.209300] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 0.209300] ACPI: Added _OSI(Processor Aggregator Device)
[ 0.212251] ACPI: Added _OSI(Linux-Dell-Video)
[ 0.212283] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 0.212316] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[ 0.220748] ACPI: 6 ACPI AML tables successfully acquired and loaded
[ 0.221809] ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
[ 0.222379] ACPI: Dynamic OEM Table Load:
[ 0.222414] ACPI: SSDT 0xFFFF9F83A20CE000 0003D3 (v01 PmRef Cpu0Cst 00003001 INTL 20051117)
[ 0.223289] ACPI: Dynamic OEM Table Load:
[ 0.223323] ACPI: SSDT 0xFFFF9F83A22D6000 0005AA (v01 PmRef ApIst 00003000 INTL 20051117)
[ 0.224238] ACPI: Dynamic OEM Table Load:
[ 0.224251] ACPI: SSDT 0xFFFF9F83A20C5C00 000119 (v01 PmRef ApCst 00003000 INTL 20051117)
[ 0.226596] ACPI: Interpreter enabled
[ 0.226647] ACPI: (supports S0 S3 S4 S5)
[ 0.226678] ACPI: Using IOAPIC for interrupt routing
[ 0.226731] PCI: Using host bridge windows from ACPI; if necessary, use “pci=nocrs” and report a bug
[ 0.227101] ACPI: Enabled 7 GPEs in block 00 to 3F
[ 0.235636] ACPI: Power Resource [FN00] (off)
[ 0.235731] ACPI: Power Resource [FN01] (off)
[ 0.235827] ACPI: Power Resource [FN02] (off)
[ 0.235920] ACPI: Power Resource [FN03] (off)
[ 0.236012] ACPI: Power Resource [FN04] (off)
[ 0.236723] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
[ 0.236762] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
[ 0.237042] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME]
[ 0.237239] acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability LTR]
[ 0.237276] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
[ 0.237734] PCI host bridge to bus 0000:00
[ 0.237767] pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
[ 0.237803] pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
[ 0.237839] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.237888] pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
[ 0.237936] pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
[ 0.237984] pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
[ 0.238033] pci_bus 0000:00: root bus resource [mem 0x000dc000-0x000dffff window]
[ 0.238082] pci_bus 0000:00: root bus resource [mem 0x000e0000-0x000e3fff window]
[ 0.238131] pci_bus 0000:00: root bus resource [mem 0x000e4000-0x000e7fff window]
[ 0.238179] pci_bus 0000:00: root bus resource [mem 0xbfa00000-0xfeafffff window]
[ 0.238228] pci_bus 0000:00: root bus resource [bus 00-3e]
[ 0.238267] pci 0000:00:00.0: [8086:0c00] type 00 class 0x060000
[ 0.238377] pci 0000:00:02.0: [8086:0412] type 00 class 0x030000
[ 0.238419] pci 0000:00:02.0: reg 0x10: [mem 0xf0000000-0xf03fffff 64bit]
[ 0.238458] pci 0000:00:02.0: reg 0x18: [mem 0xe0000000-0xefffffff 64bit pref]
[ 0.238509] pci 0000:00:02.0: reg 0x20: [io 0xf000-0xf03f]
[ 0.238619] pci 0000:00:03.0: [8086:0c0c] type 00 class 0x040300
[ 0.238660] pci 0000:00:03.0: reg 0x10: [mem 0xf0510000-0xf0513fff 64bit]
[ 0.238792] pci 0000:00:14.0: [8086:8c31] type 00 class 0x0c0330
[ 0.238846] pci 0000:00:14.0: reg 0x10: [mem 0xf0500000-0xf050ffff 64bit]
[ 0.238933] pci 0000:00:14.0: PME# supported from D3hot D3cold
[ 0.239040] pci 0000:00:16.0: [8086:8c3a] type 00 class 0x078000
[ 0.239092] pci 0000:00:16.0: reg 0x10: [mem 0xf051a000-0xf051a00f 64bit]
[ 0.239179] pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
[ 0.239281] pci 0000:00:1a.0: [8086:8c2d] type 00 class 0x0c0320
[ 0.239333] pci 0000:00:1a.0: reg 0x10: [mem 0xf0518000-0xf05183ff]
[ 0.239437] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
[ 0.239542] pci 0000:00:1c.0: [8086:8c10] type 01 class 0x060400
[ 0.239654] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
[ 0.239810] pci 0000:00:1c.4: [8086:8c18] type 01 class 0x060400
[ 0.239912] pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
[ 0.240063] pci 0000:00:1d.0: [8086:8c26] type 00 class 0x0c0320
[ 0.240116] pci 0000:00:1d.0: reg 0x10: [mem 0xf0517000-0xf05173ff]
[ 0.240220] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
[ 0.240324] pci 0000:00:1f.0: [8086:8c50] type 00 class 0x060100
[ 0.240517] pci 0000:00:1f.2: [8086:8c02] type 00 class 0x010601
[ 0.240564] pci 0000:00:1f.2: reg 0x10: [io 0xf0b0-0xf0b7]
[ 0.240602] pci 0000:00:1f.2: reg 0x14: [io 0xf0a0-0xf0a3]
[ 0.240641] pci 0000:00:1f.2: reg 0x18: [io 0xf090-0xf097]
[ 0.240679] pci 0000:00:1f.2: reg 0x1c: [io 0xf080-0xf083]
[ 0.240717] pci 0000:00:1f.2: reg 0x20: [io 0xf060-0xf07f]
[ 0.240755] pci 0000:00:1f.2: reg 0x24: [mem 0xf0516000-0xf05167ff]
[ 0.240818] pci 0000:00:1f.2: PME# supported from D3hot
[ 0.240915] pci 0000:00:1f.3: [8086:8c22] type 00 class 0x0c0500
[ 0.240963] pci 0000:00:1f.3: reg 0x10: [mem 0xf0515000-0xf05150ff 64bit]
[ 0.241014] pci 0000:00:1f.3: reg 0x20: [io 0xf040-0xf05f]
[ 0.241171] acpiphp: Slot [1] registered
[ 0.241205] pci 0000:00:1c.0: PCI bridge to [bus 01]
[ 0.241301] pci 0000:02:00.0: [10ec:8168] type 00 class 0x020000
[ 0.241363] pci 0000:02:00.0: reg 0x10: [io 0xe000-0xe0ff]
[ 0.241421] pci 0000:02:00.0: reg 0x18: [mem 0xf0404000-0xf0404fff 64bit]
[ 0.241472] pci 0000:02:00.0: reg 0x20: [mem 0xf0400000-0xf0403fff 64bit pref]
[ 0.241616] pci 0000:02:00.0: supports D1 D2
[ 0.241647] pci 0000:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
[ 0.241785] pci 0000:00:1c.4: PCI bridge to [bus 02]
[ 0.241820] pci 0000:00:1c.4: bridge window [io 0xe000-0xefff]
[ 0.241856] pci 0000:00:1c.4: bridge window [mem 0xf0400000-0xf04fffff]
[ 0.242624] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 11 12 14 15)
[ 0.242712] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 10 11 12 14 15) 0, disabled.
[ 0.242811] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 10 11 12 14 15)
[ 0.242896] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 6 10 11 12 14 15)
[ 0.242980] ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 10 11 12 14 15) 0, disabled.
[ 0.243078] ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) 0, disabled.
[ 0.243176] ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 10 11 12 14 15) 0, disabled.
[ 0.243274] ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 10 11 12 14 15)
[ 0.243738] iommu: Default domain type: Translated
[ 0.243738] SCSI subsystem initialized
[ 0.243738] libata version 3.00 loaded.
[ 0.243738] pci 0000:00:02.0: vgaarb: setting as boot VGA device
[ 0.243738] pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[ 0.243738] pci 0000:00:02.0: vgaarb: bridge control possible
[ 0.243738] vgaarb: loaded
[ 0.243738] ACPI: bus type USB registered
[ 0.243738] usbcore: registered new interface driver usbfs
[ 0.243738] usbcore: registered new interface driver hub
[ 0.243738] usbcore: registered new device driver usb
[ 0.244267] pps_core: LinuxPPS API ver. 1 registered
[ 0.244299] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti giometti@linux.it
[ 0.244351] PTP clock support registered
[ 0.244390] EDAC MC: Ver: 3.0.0
[ 0.244390] PCI: Using ACPI for IRQ routing
[ 0.245520] PCI: pci_cache_line_size set to 64 bytes
[ 0.245553] e820: reserve RAM buffer [mem 0x0009d800-0x0009ffff]
[ 0.245554] e820: reserve RAM buffer [mem 0x99874000-0x9bffffff]
[ 0.245555] e820: reserve RAM buffer [mem 0x99cb9000-0x9bffffff]
[ 0.245556] e820: reserve RAM buffer [mem 0xabe50000-0xabffffff]
[ 0.245556] e820: reserve RAM buffer [mem 0xac099000-0xafffffff]
[ 0.245557] e820: reserve RAM buffer [mem 0xad000000-0xafffffff]
[ 0.245557] e820: reserve RAM buffer [mem 0x63e600000-0x63fffffff]
[ 0.245635] NetLabel: Initializing
[ 0.245665] NetLabel: domain hash size = 128
[ 0.245696] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO
[ 0.245739] NetLabel: unlabeled traffic allowed by default
[ 0.245782] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
[ 0.245782] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
[ 0.248249] clocksource: Switched to clocksource tsc-early
[ 0.257107] *** VALIDATE bpf ***
[ 0.257186] VFS: Disk quotas dquot_6.6.0
[ 0.257228] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ 0.257282] *** VALIDATE ramfs ***
[ 0.257315] *** VALIDATE hugetlbfs ***
[ 0.257406] AppArmor: AppArmor Filesystem Enabled
[ 0.257458] pnp: PnP ACPI init
[ 0.257569] system 00:00: [mem 0xfed40000-0xfed44fff] has been reserved
[ 0.257608] system 00:00: Plug and Play ACPI device, IDs PNP0c01 (active)
[ 0.257759] system 00:01: [io 0x0680-0x069f] has been reserved
[ 0.257794] system 00:01: [io 0xffff] has been reserved
[ 0.257828] system 00:01: [io 0xffff] has been reserved
[ 0.257862] system 00:01: [io 0xffff] has been reserved
[ 0.257896] system 00:01: [io 0x1c00-0x1cfe] has been reserved
[ 0.257930] system 00:01: [io 0x1d00-0x1dfe] has been reserved
[ 0.257965] system 00:01: [io 0x1e00-0x1efe] has been reserved
[ 0.258000] system 00:01: [io 0x1f00-0x1ffe] has been reserved
[ 0.258034] system 00:01: [io 0x1800-0x18fe] has been reserved
[ 0.258069] system 00:01: [io 0x164e-0x164f] has been reserved
[ 0.258105] system 00:01: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.258121] pnp 00:02: Plug and Play ACPI device, IDs PNP0b00 (active)
[ 0.258162] system 00:03: [io 0x1854-0x1857] has been reserved
[ 0.258198] system 00:03: Plug and Play ACPI device, IDs INT3f0d PNP0c02 (active)
[ 0.258303] system 00:04: [io 0x0a00-0x0a0f] has been reserved
[ 0.258338] system 00:04: [io 0x0a10-0x0a1f] has been reserved
[ 0.258374] system 00:04: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.258534] system 00:05: [io 0x04d0-0x04d1] has been reserved
[ 0.258570] system 00:05: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.258968] system 00:06: [mem 0xfed1c000-0xfed1ffff] has been reserved
[ 0.259004] system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
[ 0.259040] system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
[ 0.259076] system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
[ 0.259112] system 00:06: [mem 0xf8000000-0xfbffffff] has been reserved
[ 0.259147] system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
[ 0.259183] system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
[ 0.259220] system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
[ 0.259255] system 00:06: [mem 0xff000000-0xffffffff] has been reserved
[ 0.259291] system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
[ 0.259328] system 00:06: [mem 0xf7fef000-0xf7feffff] has been reserved
[ 0.259363] system 00:06: [mem 0xf7ff0000-0xf7ff0fff] has been reserved
[ 0.259401] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
[ 0.259676] pnp: PnP ACPI: found 7 devices
[ 0.260591] thermal_sys: Registered thermal governor 'fair_share'
[ 0.260591] thermal_sys: Registered thermal governor 'bang_bang'
[ 0.260627] thermal_sys: Registered thermal governor 'step_wise'
[ 0.260661] thermal_sys: Registered thermal governor 'user_space'
[ 0.260695] thermal_sys: Registered thermal governor 'power_allocator'
[ 0.265217] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[ 0.265324] pci 0000:00:1c.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
[ 0.266795] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
[ 0.266851] pci 0000:00:1c.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
[ 0.266910] pci 0000:00:1c.0: BAR 14: assigned [mem 0xbfa00000-0xbfbfffff]
[ 0.266949] pci 0000:00:1c.0: BAR 15: assigned [mem 0xbfc00000-0xbfdfffff 64bit pref]
[ 0.266999] pci 0000:00:1c.0: BAR 13: assigned [io 0x2000-0x2fff]
[ 0.267035] pci 0000:00:1c.0: PCI bridge to [bus 01]
[ 0.267070] pci 0000:00:1c.0: bridge window [io 0x2000-0x2fff]
[ 0.267107] pci 0000:00:1c.0: bridge window [mem 0xbfa00000-0xbfbfffff]
[ 0.267145] pci 0000:00:1c.0: bridge window [mem 0xbfc00000-0xbfdfffff 64bit pref]
[ 0.267198] pci 0000:00:1c.4: PCI bridge to [bus 02]
[ 0.267231] pci 0000:00:1c.4: bridge window [io 0xe000-0xefff]
[ 0.267268] pci 0000:00:1c.4: bridge window [mem 0xf0400000-0xf04fffff]
[ 0.267309] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
[ 0.267344] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
[ 0.267379] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.267416] pci_bus 0000:00: resource 7 [mem 0x000d0000-0x000d3fff window]
[ 0.267452] pci_bus 0000:00: resource 8 [mem 0x000d4000-0x000d7fff window]
[ 0.267487] pci_bus 0000:00: resource 9 [mem 0x000d8000-0x000dbfff window]
[ 0.267524] pci_bus 0000:00: resource 10 [mem 0x000dc000-0x000dffff window]
[ 0.267560] pci_bus 0000:00: resource 11 [mem 0x000e0000-0x000e3fff window]
[ 0.267596] pci_bus 0000:00: resource 12 [mem 0x000e4000-0x000e7fff window]
[ 0.267632] pci_bus 0000:00: resource 13 [mem 0xbfa00000-0xfeafffff window]
[ 0.267668] pci_bus 0000:01: resource 0 [io 0x2000-0x2fff]
[ 0.267702] pci_bus 0000:01: resource 1 [mem 0xbfa00000-0xbfbfffff]
[ 0.267736] pci_bus 0000:01: resource 2 [mem 0xbfc00000-0xbfdfffff 64bit pref]
[ 0.267785] pci_bus 0000:02: resource 0 [io 0xe000-0xefff]
[ 0.267819] pci_bus 0000:02: resource 1 [mem 0xf0400000-0xf04fffff]
[ 0.267946] NET: Registered protocol family 2
[ 0.268103] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[ 0.269693] tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
[ 0.269896] TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
[ 0.270221] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
[ 0.270377] TCP: Hash tables configured (established 262144 bind 65536)
[ 0.270464] UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
[ 0.270565] UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
[ 0.270677] NET: Registered protocol family 1
[ 0.270713] NET: Registered protocol family 44
[ 0.270752] pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.271206] PCI: CLS 64 bytes, default 64
[ 0.271265] Trying to unpack rootfs image as initramfs…
[ 0.412822] Freeing initrd memory: 84728K
[ 0.436255] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[ 0.436292] software IO TLB: mapped [mem 0xa7e50000-0xabe50000] (64MB)
[ 0.436544] check: Scanning for low memory corruption every 60 seconds
[ 0.436929] Initialise system trusted keyrings
[ 0.436969] Key type blacklist registered
[ 0.437022] workingset: timestamp_bits=36 max_order=23 bucket_order=0
[ 0.437931] zbud: loaded
[ 0.438209] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[ 0.438351] fuse: init (API version 7.31)
[ 0.438392] *** VALIDATE fuse ***
[ 0.438423] *** VALIDATE fuse ***
[ 0.438517] Platform Keyring initialized
[ 0.441440] Key type asymmetric registered
[ 0.441472] Asymmetric key parser ‘x509’ registered
[ 0.441510] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 244)
[ 0.441577] io scheduler mq-deadline registered
[ 0.442011] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[ 0.442084] intel_idle: MWAIT substates: 0x42120
[ 0.442084] intel_idle: v0.4.1 model 0x3C
[ 0.442316] intel_idle: lapic_timer_reliable_states 0xffffffff
[ 0.442434] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
[ 0.442510] ACPI: Power Button [PWRB]
[ 0.442565] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
[ 0.442628] ACPI: Power Button [PWRF]
[ 0.443412] thermal LNXTHERM:00: registered as thermal_zone0
[ 0.443447] ACPI: Thermal Zone [TZ00] (28 C)
[ 0.443702] thermal LNXTHERM:01: registered as thermal_zone1
[ 0.443736] ACPI: Thermal Zone [TZ01] (30 C)
[ 0.443876] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[ 0.445028] Linux agpgart interface v0.103
[ 0.572300] loop: module loaded
[ 0.572545] tun: Universal TUN/TAP device driver, 1.6
[ 0.572668] PPP generic driver version 2.4.2
[ 0.572908] VFIO - User Level meta-driver version: 0.3
[ 0.573110] ehci_hcd: USB 2.0 ‘Enhanced’ Host Controller (EHCI) Driver
[ 0.573148] ehci-pci: EHCI PCI platform driver
[ 0.573266] ehci-pci 0000:00:1a.0: EHCI Host Controller
[ 0.573304] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1
[ 0.573361] ehci-pci 0000:00:1a.0: debug port 2
[ 0.577298] ehci-pci 0000:00:1a.0: cache line size of 64 is not supported
[ 0.577345] ehci-pci 0000:00:1a.0: irq 16, io mem 0xf0518000
[ 0.592291] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00
[ 0.592357] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.04
[ 0.592408] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.592457] usb usb1: Product: EHCI Host Controller
[ 0.592490] usb usb1: Manufacturer: Linux 5.4.0-150-generic ehci_hcd
[ 0.592525] usb usb1: SerialNumber: 0000:00:1a.0
[ 0.592709] hub 1-0:1.0: USB hub found
[ 0.592744] hub 1-0:1.0: 2 ports detected
[ 0.592955] ehci-pci 0000:00:1d.0: EHCI Host Controller
[ 0.592990] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2
[ 0.593047] ehci-pci 0000:00:1d.0: debug port 2
[ 0.596984] ehci-pci 0000:00:1d.0: cache line size of 64 is not supported
[ 0.597028] ehci-pci 0000:00:1d.0: irq 23, io mem 0xf0517000
[ 0.612290] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00
[ 0.612348] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.04
[ 0.612399] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.612447] usb usb2: Product: EHCI Host Controller
[ 0.612480] usb usb2: Manufacturer: Linux 5.4.0-150-generic ehci_hcd
[ 0.612515] usb usb2: SerialNumber: 0000:00:1d.0
[ 0.612690] hub 2-0:1.0: USB hub found
[ 0.612725] hub 2-0:1.0: 2 ports detected
[ 0.612856] ehci-platform: EHCI generic platform driver
[ 0.612896] ohci_hcd: USB 1.1 ‘Open’ Host Controller (OHCI) Driver
[ 0.612936] ohci-pci: OHCI PCI platform driver
[ 0.612972] ohci-platform: OHCI generic platform driver
[ 0.613010] uhci_hcd: USB Universal Host Controller Interface driver
[ 0.613136] xhci_hcd 0000:00:14.0: xHCI Host Controller
[ 0.613173] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 3
[ 0.614266] xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x100 quirks 0x0000000000009810
[ 0.614323] xhci_hcd 0000:00:14.0: cache line size of 64 is not supported
[ 0.614480] xhci_hcd 0000:00:14.0: xHCI Host Controller
[ 0.614515] xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 4
[ 0.614565] xhci_hcd 0000:00:14.0: Host supports USB 3.0 SuperSpeed
[ 0.614621] usb usb3: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.04
[ 0.614671] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.614720] usb usb3: Product: xHCI Host Controller
[ 0.614752] usb usb3: Manufacturer: Linux 5.4.0-150-generic xhci-hcd
[ 0.614788] usb usb3: SerialNumber: 0000:00:14.0
[ 0.614958] hub 3-0:1.0: USB hub found
[ 0.615003] hub 3-0:1.0: 12 ports detected
[ 0.615849] usb usb4: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.04
[ 0.615900] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[ 0.615949] usb usb4: Product: xHCI Host Controller
[ 0.615982] usb usb4: Manufacturer: Linux 5.4.0-150-generic xhci-hcd
[ 0.616017] usb usb4: SerialNumber: 0000:00:14.0
[ 0.616119] hub 4-0:1.0: USB hub found
[ 0.616160] hub 4-0:1.0: 6 ports detected
[ 0.616652] i8042: PNP: No PS/2 controller found.
[ 0.616820] mousedev: PS/2 mouse device common for all mice
[ 0.616959] rtc_cmos 00:02: RTC can wake from S4
[ 0.617149] rtc_cmos 00:02: registered as rtc0
[ 0.617192] rtc_cmos 00:02: alarms up to one month, y3k, 242 bytes nvram, hpet irqs
[ 0.617245] i2c /dev entries driver
[ 0.617298] device-mapper: uevent: version 1.0.3
[ 0.617379] device-mapper: ioctl: 4.41.0-ioctl (2019-09-16) initialised: dm-devel@redhat.com
[ 0.617442] platform eisa.0: Probing EISA bus 0
[ 0.617475] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ 0.617510] platform eisa.0: Cannot allocate resource for EISA slot 1
[ 0.617546] platform eisa.0: Cannot allocate resource for EISA slot 2
[ 0.617581] platform eisa.0: Cannot allocate resource for EISA slot 3
[ 0.617617] platform eisa.0: Cannot allocate resource for EISA slot 4
[ 0.617652] platform eisa.0: Cannot allocate resource for EISA slot 5
[ 0.617688] platform eisa.0: Cannot allocate resource for EISA slot 6
[ 0.617723] platform eisa.0: Cannot allocate resource for EISA slot 7
[ 0.617758] platform eisa.0: Cannot allocate resource for EISA slot 8
[ 0.617794] platform eisa.0: EISA: Detected 0 cards
[ 0.617830] intel_pstate: Intel P-state driver initializing
[ 0.618189] ledtrig-cpu: registered to indicate activity on CPUs
[ 0.618267] drop_monitor: Initializing network drop monitor service
[ 0.618404] NET: Registered protocol family 10
[ 0.623227] Segment Routing with IPv6
[ 0.624721] NET: Registered protocol family 17
[ 0.624851] Key type dns_resolver registered
[ 0.625376] RAS: Correctable Errors collector initialized.
[ 0.625444] microcode: sig=0x306c3, pf=0x2, revision=0x28
[ 0.625604] microcode: Microcode Update Driver: v2.2.
[ 0.625608] IPI shorthand broadcast: enabled
[ 0.625682] sched_clock: Marking stable (611154774, 14446883)->(638565491, -12963834)
[ 0.625823] registered taskstats version 1
[ 0.625861] Loading compiled-in X.509 certificates
[ 0.626525] Loaded X.509 cert 'Build time autogenerated kernel key: e1ed4cc5910e977a83ba9dbd2b26566dbd0b90a9’
[ 0.627062] Loaded X.509 cert 'Canonical Ltd. Live Patch Signing: 14df34d1a87cf37625abec039ef2bf521249b969’
[ 0.627578] Loaded X.509 cert 'Canonical Ltd. Kernel Module Signing: 88f752e560a1e0737e31163a466ad7b70a850c19’
[ 0.627630] blacklist: Loading compiled-in revocation X.509 certificates
[ 0.627676] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing: 61482aa2830d0ab2ad5af10b7250da9033ddcef0’
[ 0.627738] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (2017): 242ade75ac4a15e50d50c84b0d45ff3eae707a03’
[ 0.627801] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (ESM 2018): 365188c1d374d6b07c3c8f240f8ef722433d6a8b’
[ 0.627863] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (2019): c0746fd6c5da3ae827864651ad66ae47fe24b3e8’
[ 0.627925] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (2021 v1): a8d54bbb3825cfb94fa13c9f8a594a195c107b8d’
[ 0.627987] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (2021 v2): 4cf046892d6fd3c9a5b03f98d845f90851dc6a8c’
[ 0.628048] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (2021 v3): 100437bb6de6e469b581e61cd66bce3ef4ed53af’
[ 0.628110] Loaded X.509 cert 'Canonical Ltd. Secure Boot Signing (Ubuntu Core 2019): c1d57b8f6b743f23ee41f4f7ee292f06eecadfb9’
[ 0.628187] zswap: loaded using pool lzo/zbud
[ 0.628316] Key type ._fscrypt registered
[ 0.628347] Key type .fscrypt registered
[ 0.632150] Key type big_key registered
[ 0.634084] Key type encrypted registered
[ 0.634117] AppArmor: AppArmor sha1 policy hashing enabled
[ 0.634155] ima: No TPM chip found, activating TPM-bypass!
[ 0.634191] ima: Allocated hash algorithm: sha1
[ 0.634226] ima: No architecture policies found
[ 0.634265] evm: Initialising EVM extended attributes:
[ 0.634298] evm: security.selinux
[ 0.634327] evm: security.SMACK64
[ 0.634357] evm: security.SMACK64EXEC
[ 0.634387] evm: security.SMACK64TRANSMUTE
[ 0.634419] evm: security.SMACK64MMAP
[ 0.634476] evm: security.apparmor
[ 0.634506] evm: security.ima
[ 0.634535] evm: security.capability
[ 0.634566] evm: HMAC attrs: 0x1
[ 0.634923] PM: Magic number: 11:453:1007
[ 0.635021] memory memory46: hash matches
[ 0.635101] rtc_cmos 00:02: setting system clock to 2023-06-09T22:59:45 UTC (1686351585)
[ 0.635943] Freeing unused decrypted memory: 2040K
[ 0.636391] Freeing unused kernel image memory: 2764K
[ 0.672344] Write protecting the kernel read-only data: 26624k
[ 0.672744] Freeing unused kernel image memory: 2036K
[ 0.672876] Freeing unused kernel image memory: 736K
[ 0.679472] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 0.679507] x86/mm: Checking user space page tables
[ 0.685707] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 0.685743] Run /init as init process
[ 0.752668] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20190816/utaddress-204)
[ 0.752753] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 0.752817] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-204)
[ 0.752899] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 0.752960] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20190816/utaddress-204)
[ 0.753040] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-204)
[ 0.753117] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 0.753172] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20190816/utaddress-204)
[ 0.753254] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20190816/utaddress-204)
[ 0.753332] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
[ 0.753386] lpc_ich: Resource conflict(s) found affecting gpio_ich
[ 0.753610] i801_smbus 0000:00:1f.3: SPD Write Disable is set
[ 0.753678] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[ 0.754347] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 0.755012] ahci 0000:00:1f.2: version 3.0
[ 0.755291] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode
[ 0.755358] ahci 0000:00:1f.2: flags: 64bit ncq pm led clo pio slum part ems apst
[ 0.760263] cryptd: max_cpu_qlen set to 1000
[ 0.765708] AVX2 version of gcm_enc/dec engaged.
[ 0.765744] AES CTR mode by8 optimization enabled
[ 0.770118] r8169 0000:02:00.0 eth0: RTL8168g/8111g, d4:3d:7e:ba:6a:2f, XID 4c0, IRQ 30
[ 0.770190] r8169 0000:02:00.0 eth0: jumbo features [frames: 9200 bytes, tx checksumming: ko]
[ 0.782076] r8169 0000:02:00.0 enp2s0: renamed from eth0
[ 0.811162] i915 0000:00:02.0: vgaarb: deactivate vga console
[ 0.812162] Console: switching to colour dummy device 80x25
[ 0.813048] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.813051] [drm] Driver supports precise vblank timestamp query.
[ 0.813091] scsi host0: ahci
[ 0.813208] scsi host1: ahci
[ 0.813292] scsi host2: ahci
[ 0.813350] scsi host3: ahci
[ 0.813381] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 0.813553] scsi host4: ahci
[ 0.813616] scsi host5: ahci
[ 0.813662] ata1: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516100 irq 29
[ 0.813666] ata2: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516180 irq 29
[ 0.813669] ata3: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516200 irq 29
[ 0.813671] ata4: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516280 irq 29
[ 0.813674] ata5: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516300 irq 29
[ 0.813677] ata6: SATA max UDMA/133 abar m2048@0xf0516000 port 0xf0516380 irq 29
[ 0.824162] [drm] Initialized i915 1.6.0 20190822 for 0000:00:02.0 on minor 0
[ 0.825529] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
[ 0.825775] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input2
[ 0.840041] fbcon: i915drmfb (fb0) is primary device
[ 0.852292] usb 1-1: new high-speed USB device number 2 using ehci-pci
[ 0.868288] usb 2-1: new high-speed USB device number 2 using ehci-pci
[ 0.868298] usb 3-2: new high-speed USB device number 2 using xhci_hcd
[ 0.918155] Console: switching to colour frame buffer device 128x48
[ 0.936113] i915 0000:00:02.0: fb0: i915drmfb frame buffer device
[ 1.008765] usb 1-1: New USB device found, idVendor=8087, idProduct=8008, bcdDevice= 0.04
[ 1.008800] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[ 1.009178] hub 1-1:1.0: USB hub found
[ 1.009377] hub 1-1:1.0: 6 ports detected
[ 1.019387] usb 3-2: New USB device found, idVendor=04fc, idProduct=0c15, bcdDevice=ec.02
[ 1.019409] usb 3-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 1.019429] usb 3-2: Product: USB to Serial-ATA bridge
[ 1.019444] usb 3-2: Manufacturer: Sunplus Technology Inc.
[ 1.019459] usb 3-2: SerialNumber: WDC WD5000 WD-WCC6Z5NPZLLR
[ 1.024625] usb 2-1: New USB device found, idVendor=8087, idProduct=8000, bcdDevice= 0.04
[ 1.024668] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
[ 1.024882] hub 2-1:1.0: USB hub found
[ 1.024970] hub 2-1:1.0: 6 ports detected
[ 1.127115] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1.127147] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1.127180] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 1.127223] ata6: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1.127252] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 1.128270] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 1.129029] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT4._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.130199] No Local Variables are initialized for Method [_GTF]
[ 1.130781] No Arguments are initialized for method [_GTF]
[ 1.131360] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT4._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.132004] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT5._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.133365] No Local Variables are initialized for Method [_GTF]
[ 1.133980] No Arguments are initialized for method [_GTF]
[ 1.134569] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT5._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.135319] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT2._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.136561] No Local Variables are initialized for Method [_GTF]
[ 1.137181] No Arguments are initialized for method [_GTF]
[ 1.137795] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT2._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.138436] ata2.00: ATA-10: WDC WD30EFRX-68N32N0, 82.00A82, max UDMA/133
[ 1.139416] ata2.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[ 1.140511] ata4.00: ATA-8: WDC WD6400AAKS-22A7B2, 01.03B01, max UDMA/133
[ 1.141161] ata4.00: 1250263728 sectors, multi 16: LBA48 NCQ (depth 32), AA
[ 1.141803] ata1.00: supports DRM functions and may not be fully accessible
[ 1.142902] ata3.00: ATA-10: WDC WD30EFRX-68N32N0, 82.00A82, max UDMA/133
[ 1.143539] ata3.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[ 1.144767] ata6.00: ATA-10: ST4000VN008-2DR166, SC60, max UDMA/133
[ 1.145865] ata6.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[ 1.146946] ata2.00: configured for UDMA/133
[ 1.147942] ata5.00: ATA-10: ST4000VN008-2DR166, SC60, max UDMA/133
[ 1.148255] usb 3-6: new high-speed USB device number 3 using xhci_hcd
[ 1.148592] ata5.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 32), AA
[ 1.150392] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT2._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.152626] No Local Variables are initialized for Method [_GTF]
[ 1.153720] No Arguments are initialized for method [_GTF]
[ 1.154361] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT2._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.155027] ata1.00: NCQ Send/Recv Log not supported
[ 1.155682] ata1.00: ATA-9: Samsung SSD 840 EVO 250GB, EXT0BB6Q, max UDMA/133
[ 1.156360] ata1.00: 488397168 sectors, multi 1: LBA48 NCQ (depth 32), AA
[ 1.157132] ata3.00: configured for UDMA/133
[ 1.157984] ata4.00: configured for UDMA/133
[ 1.158818] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT4._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.160182] No Local Variables are initialized for Method [_GTF]
[ 1.160873] No Arguments are initialized for method [_GTF]
[ 1.161553] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT4._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.162331] ACPI BIOS Error (bug): Could not resolve symbol [_SB.PCI0.SAT0.SPT5._GTF.DSSP], AE_NOT_FOUND (20190816/psargs-330)
[ 1.163807] No Local Variables are initialized for Method [_GTF]
[ 1.164556] No Arguments are initialized for method [_GTF]
[ 1.165311] ACPI Error: Aborting method _SB.PCI0.SAT0.SPT5._GTF due to previous error (AE_NOT_FOUND) (20190816/psparse-529)
[ 1.166203] ata1.00: supports DRM functions and may not be fully accessible
[ 1.167313] ata5.00: configured for UDMA/133
[ 1.168145] ata1.00: NCQ Send/Recv Log not supported
[ 1.168973] ata6.00: configured for UDMA/133
[ 1.169802] ata1.00: configured for UDMA/133
[ 1.170691] scsi 0:0:0:0: Direct-Access ATA Samsung SSD 840 BB6Q PQ: 0 ANSI: 5
[ 1.171671] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 1.171683] sd 0:0:0:0: [sda] 488397168 512-byte logical blocks: (250 GB/233 GiB)
[ 1.172645] scsi 1:0:0:0: Direct-Access ATA WDC WD30EFRX-68N 0A82 PQ: 0 ANSI: 5
[ 1.173678] sd 0:0:0:0: [sda] Write Protect is off
[ 1.174999] sd 1:0:0:0: Attached scsi generic sg1 type 0
[ 1.175059] sd 1:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[ 1.175060] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[ 1.175107] sd 1:0:0:0: [sdb] Write Protect is off
[ 1.175108] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 1.175128] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.175622] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 1.176471] scsi 2:0:0:0: Direct-Access ATA WDC WD30EFRX-68N 0A82 PQ: 0 ANSI: 5
[ 1.177520] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.178773] sd 2:0:0:0: Attached scsi generic sg2 type 0
[ 1.178846] sd 2:0:0:0: [sdc] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
[ 1.178847] sd 2:0:0:0: [sdc] 4096-byte physical blocks
[ 1.178893] sd 2:0:0:0: [sdc] Write Protect is off
[ 1.178895] sd 2:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[ 1.178913] sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.185110] scsi 3:0:0:0: Direct-Access ATA WDC WD6400AAKS-2 3B01 PQ: 0 ANSI: 5
[ 1.186020] sd 3:0:0:0: Attached scsi generic sg3 type 0
[ 1.186045] sd 3:0:0:0: [sdd] 1250263728 512-byte logical blocks: (640 GB/596 GiB)
[ 1.186893] scsi 4:0:0:0: Direct-Access ATA ST4000VN008-2DR1 SC60 PQ: 0 ANSI: 5
[ 1.187477] sd 3:0:0:0: [sdd] Write Protect is off
[ 1.188313] sd 4:0:0:0: Attached scsi generic sg4 type 0
[ 1.188321] sd 4:0:0:0: [sde] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[ 1.188322] sd 4:0:0:0: [sde] 4096-byte physical blocks
[ 1.188326] sd 4:0:0:0: [sde] Write Protect is off
[ 1.188327] sd 4:0:0:0: [sde] Mode Sense: 00 3a 00 00
[ 1.188333] sd 4:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.189398] sd 3:0:0:0: [sdd] Mode Sense: 00 3a 00 00
[ 1.190663] scsi 5:0:0:0: Direct-Access ATA ST4000VN008-2DR1 SC60 PQ: 0 ANSI: 5
[ 1.191259] sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.192056] sd 5:0:0:0: Attached scsi generic sg5 type 0
[ 1.192124] sd 5:0:0:0: [sdf] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
[ 1.192125] sd 5:0:0:0: [sdf] 4096-byte physical blocks
[ 1.192134] sd 5:0:0:0: [sdf] Write Protect is off
[ 1.192136] sd 5:0:0:0: [sdf] Mode Sense: 00 3a 00 00
[ 1.192150] sd 5:0:0:0: [sdf] Write cache: enabled, read cache: enabled, doesn’t support DPO or FUA
[ 1.196840] sda: sda1
[ 1.220738] sd 0:0:0:0: [sda] supports TCG Opal
[ 1.221453] sd 0:0:0:0: [sda] Attached SCSI disk
[ 1.225820] sdd: sdd1
[ 1.236572] sdc: sdc1
[ 1.238442] sdb: sdb1
[ 1.248348] sd 3:0:0:0: [sdd] Attached SCSI disk
[ 1.256887] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 1.260355] sd 2:0:0:0: [sdc] Attached SCSI disk
[ 1.268814] sdf: sdf1 sdf9
[ 1.275036] sde: sde1 sde9
[ 1.284343] sd 5:0:0:0: [sdf] Attached SCSI disk
[ 1.296876] sd 4:0:0:0: [sde] Attached SCSI disk
[ 1.300794] usb 3-6: New USB device found, idVendor=1d6b, idProduct=0104, bcdDevice= 1.00
[ 1.301776] usb 3-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 1.302763] usb 3-6: Product: Composite KVM Device
[ 1.303730] usb 3-6: Manufacturer: PiKVM
[ 1.304813] usb 3-6: SerialNumber: CAFEBABE
[ 1.432297] usb 3-8: new low-speed USB device number 4 using xhci_hcd
[ 1.444272] tsc: Refined TSC clocksource calibration: 3399.996 MHz
[ 1.444859] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31024b3bec5, max_idle_ns: 440795366697 ns
[ 1.445551] clocksource: Switched to clocksource tsc
[ 1.449526] md/raid1:md0: active with 2 out of 2 mirrors
[ 1.460198] md0: detected capacity change from 0 to 3000456642560
[ 1.591436] usb 3-8: New USB device found, idVendor=045e, idProduct=0750, bcdDevice= 1.10
[ 1.592035] usb 3-8: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[ 1.592621] usb 3-8: Product: Wired Keyboard 600
[ 1.593183] usb 3-8: Manufacturer: Microsoft
[ 1.598612] hidraw: raw HID events driver (C) Jiri Kosina
[ 1.599337] usb-storage 3-2:1.0: USB Mass Storage device detected
[ 1.599977] scsi host6: usb-storage 3-2:1.0
[ 1.600587] usb-storage 3-6:1.2: USB Mass Storage device detected
[ 1.601221] scsi host7: usb-storage 3-6:1.2
[ 1.601785] usbcore: registered new interface driver usb-storage
[ 1.603208] usbcore: registered new interface driver uas
[ 1.609275] usbcore: registered new interface driver usbhid
[ 1.609891] usbhid: USB HID core driver
[ 1.612296] input: PiKVM Composite KVM Device as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6:1.0/0003:1D6B:0104.0001/input/input3
[ 1.672462] hid-generic 0003:1D6B:0104.0001: input,hidraw0: USB HID v1.01 Keyboard [PiKVM Composite KVM Device] on usb-0000:00:14.0-6/input0
[ 1.673684] input: PiKVM Composite KVM Device as /devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6:1.1/0003:1D6B:0104.0002/input/input4
[ 1.675050] hid-generic 0003:1D6B:0104.0002: input,hidraw1: USB HID v1.01 Mouse [PiKVM Composite KVM Device] on usb-0000:00:14.0-6/input1
[ 1.677901] input: Microsoft Wired Keyboard 600 as /devices/pci0000:00/0000:00:14.0/usb3/3-8/3-8:1.0/0003:045E:0750.0003/input/input5
[ 1.736453] microsoft 0003:045E:0750.0003: input,hidraw2: USB HID v1.11 Keyboard [Microsoft Wired Keyboard 600] on usb-0000:00:14.0-8/input0
[ 1.738014] input: Microsoft Wired Keyboard 600 as /devices/pci0000:00/0000:00:14.0/usb3/3-8/3-8:1.1/0003:045E:0750.0004/input/input6
[ 1.796453] microsoft 0003:045E:0750.0004: input,hidraw3: USB HID v1.11 Device [Microsoft Wired Keyboard 600] on usb-0000:00:14.0-8/input1
[ 1.912270] raid6: avx2x4 gen() 33288 MB/s
[ 1.960269] raid6: avx2x4 xor() 21640 MB/s
[ 2.008270] raid6: avx2x2 gen() 29341 MB/s
[ 2.056269] raid6: avx2x2 xor() 18016 MB/s
[ 2.104269] raid6: avx2x1 gen() 25564 MB/s
[ 2.152269] raid6: avx2x1 xor() 17234 MB/s
[ 2.200270] raid6: sse2x4 gen() 18570 MB/s
[ 2.248270] raid6: sse2x4 xor() 12018 MB/s
[ 2.296268] raid6: sse2x2 gen() 15612 MB/s
[ 2.344269] raid6: sse2x2 xor() 10254 MB/s
[ 2.392270] raid6: sse2x1 gen() 13333 MB/s
[ 2.440268] raid6: sse2x1 xor() 9369 MB/s
[ 2.440949] raid6: using algorithm avx2x4 gen() 33288 MB/s
[ 2.441609] raid6: … xor() 21640 MB/s, rmw enabled
[ 2.442265] raid6: using avx2x2 recovery algorithm
[ 2.443672] xor: automatically using best checksumming function avx
[ 2.445088] async_tx: api initialized (async)
[ 2.481412] Btrfs loaded, crc32c=crc32c-intel
[ 2.562055] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 2.628706] scsi 7:0:0:0: CD-ROM PiKVM CD-ROM Drive 0515 PQ: 0 ANSI: 2
[ 2.631698] scsi 6:0:0:0: Direct-Access WDC WD50 00AZLX-07K2TA0 PQ: 0 ANSI: 2
[ 2.633192] sd 6:0:0:0: Attached scsi generic sg6 type 0
[ 2.633796] sd 6:0:0:0: [sdg] 976773168 512-byte logical blocks: (500 GB/466 GiB)
[ 2.635311] sr 7:0:0:0: Power-on or device reset occurred
[ 2.636990] sr 7:0:0:0: [sr0] scsi-1 drive
[ 2.637119] sd 6:0:0:0: [sdg] Write Protect is off
[ 2.637682] cdrom: Uniform CD-ROM driver Revision: 3.20
[ 2.638283] sd 6:0:0:0: [sdg] Mode Sense: 38 00 00 00
[ 2.640857] sd 6:0:0:0: [sdg] No Caching mode page found
[ 2.641429] sd 6:0:0:0: [sdg] Assuming drive cache: write through
[ 2.704315] sr 7:0:0:0: Attached scsi CD-ROM sr0
[ 2.704393] sr 7:0:0:0: Attached scsi generic sg7 type 5
[ 3.907868] sdg: sdg1
[ 3.929156] sd 6:0:0:0: [sdg] Attached SCSI disk
[ 3.929773] systemd[1]: Inserted module 'autofs4’
[ 3.941672] systemd[1]: systemd 245.4-4ubuntu3.21 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
[ 3.960372] systemd[1]: Detected architecture x86-64.
[ 3.984483] systemd[1]: Set hostname to .
[ 4.030131] systemd-sysv-generator[472]: stat() failed on /etc/init.d/lpd, ignoring: No such file or directory
[ 4.104123] systemd[1]: /lib/systemd/system/run-qemu.mount:10: Unknown key name ‘ReadWriteOnly’ in section ‘Mount’, ignoring.
[ 4.153962] systemd[1]: Created slice Virtual Machine and Container Slice.
[ 4.155839] systemd[1]: Created slice system-modprobe.slice.
[ 4.157536] systemd[1]: Created slice system-syncthing.slice.
[ 4.159260] systemd[1]: Created slice User and Session Slice.
[ 4.160813] systemd[1]: Started Forward Password Requests to Wall Directory Watch.
[ 4.162479] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
[ 4.164081] systemd[1]: Reached target User and Group Name Lookups.
[ 4.165676] systemd[1]: Reached target Slices.
[ 4.167230] systemd[1]: Reached target Libvirt guests shutdown.
[ 4.168836] systemd[1]: Listening on Device-mapper event daemon FIFOs.
[ 4.170441] systemd[1]: Listening on LVM2 poll daemon socket.
[ 4.175077] systemd[1]: Listening on RPCbind Server Activation Socket.
[ 4.177470] systemd[1]: Listening on Syslog Socket.
[ 4.179183] systemd[1]: Listening on initctl Compatibility Named Pipe.
[ 4.180863] systemd[1]: Listening on Journal Audit Socket.
[ 4.182460] systemd[1]: Listening on Journal Socket (/dev/log).
[ 4.184076] systemd[1]: Listening on Journal Socket.
[ 4.185747] systemd[1]: Listening on Network Service Netlink Socket.
[ 4.187362] systemd[1]: Listening on udev Control Socket.
[ 4.188963] systemd[1]: Listening on udev Kernel Socket.
[ 4.191019] systemd[1]: Mounting Huge Pages File System…
[ 4.193124] systemd[1]: Mounting POSIX Message Queue File System…
[ 4.195142] systemd[1]: Mounting NFSD configuration filesystem…
[ 4.197243] systemd[1]: Mounting RPC Pipe File System…
[ 4.199556] systemd[1]: Mounting Kernel Debug File System…
[ 4.202241] systemd[1]: Mounting Kernel Trace File System…
[ 4.205370] systemd[1]: Starting Journal Service…
[ 4.207053] systemd[1]: Condition check resulted in Kernel Module supporting RPCSEC_GSS being skipped.
[ 4.208418] systemd[1]: Starting Set the console keyboard layout…
[ 4.210464] RPC: Registered named UNIX socket transport module.
[ 4.210997] systemd[1]: Starting Create list of static device nodes for the current kernel…
[ 4.211146] RPC: Registered udp transport module.
[ 4.212477] RPC: Registered tcp transport module.
[ 4.212477] RPC: Registered tcp NFSv4.1 backchannel transport module.
[ 4.215520] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling…
[ 4.218315] systemd[1]: Starting Load Kernel Module chromeos_pstore…
[ 4.219926] systemd[1]: Condition check resulted in Load Kernel Module drm being skipped.
[ 4.221230] systemd[1]: Starting Load Kernel Module efi_pstore…
[ 4.223557] systemd[1]: Starting Load Kernel Module pstore_blk…
[ 4.225781] systemd[1]: Starting Load Kernel Module pstore_zone…
[ 4.227891] Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
[ 4.227964] systemd[1]: Starting Load Kernel Module ramoops…
[ 4.230849] systemd[1]: Condition check resulted in OpenVSwitch configuration for cleanup being skipped.
[ 4.232286] systemd[1]: Started Nameserver information manager.
[ 4.234046] systemd[1]: Reached target Network (Pre).
[ 4.236074] systemd[1]: Condition check resulted in Set Up Additional Binary Formats being skipped.
[ 4.238186] systemd[1]: Starting Load Kernel Modules…
[ 4.240777] systemd[1]: Starting Remount Root and Kernel File Systems…
[ 4.243853] systemd[1]: Starting udev Coldplug all Devices…
[ 4.246948] systemd[1]: Starting Uncomplicated firewall…
[ 4.249670] EXT4-fs (sda1): re-mounted. Opts: (null)
[ 4.249899] systemd[1]: Started Read required files in advance.
[ 4.255461] systemd[1]: Mounted Huge Pages File System.
[ 4.258066] systemd[1]: Mounted POSIX Message Queue File System.
[ 4.260433] systemd[1]: Mounted NFSD configuration filesystem.
[ 4.262171] systemd[1]: Mounted RPC Pipe File System.
[ 4.263884] systemd[1]: Mounted Kernel Debug File System.
[ 4.265424] systemd[1]: Mounted Kernel Trace File System.
[ 4.267326] systemd[1]: Finished Create list of static device nodes for the current kernel.
[ 4.269321] systemd[1]: Finished Set the console keyboard layout.
[ 4.270868] systemd[1]: modprobe@efi_pstore.service: Succeeded.
[ 4.271738] systemd[1]: Finished Load Kernel Module efi_pstore.
[ 4.273683] systemd[1]: modprobe@pstore_blk.service: Succeeded.
[ 4.274516] systemd[1]: Finished Load Kernel Module pstore_blk.
[ 4.276190] systemd[1]: modprobe@pstore_zone.service: Succeeded.
[ 4.277078] systemd[1]: Finished Load Kernel Module pstore_zone.
[ 4.278475] systemd[1]: Started Journal Service.
[ 4.293746] systemd-journald[493]: Received client request to flush runtime journal.
[ 4.529986] Adding 8388604k swap on /swap.img. Priority:-2 extents:117 across:223297532k SSFS
[ 4.686198] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[ 4.693425] br0: port 1(enp2s0) entered blocking state
[ 4.693426] br0: port 1(enp2s0) entered disabled state
[ 4.693493] device enp2s0 entered promiscuous mode
[ 4.695764] Generic FE-GE Realtek PHY r8169-0-200:00: attached PHY driver [Generic FE-GE Realtek PHY] (mii_bus:phy_addr=r8169-0-200:00, irq=IGNORE)
[ 4.718587] snd_hda_intel 0000:00:03.0: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
[ 4.720223] RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
[ 4.720224] RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
[ 4.720224] RAPL PMU: hw unit of domain package 2^-14 Joules
[ 4.720225] RAPL PMU: hw unit of domain dram 2^-14 Joules
[ 4.720225] RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
[ 4.730637] input: HDA Intel HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:03.0/sound/card0/input7
[ 4.730694] input: HDA Intel HDMI HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:03.0/sound/card0/input8
[ 4.730786] input: HDA Intel HDMI HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:03.0/sound/card0/input9
[ 4.730873] input: HDA Intel HDMI HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:03.0/sound/card0/input10
[ 4.730956] input: HDA Intel HDMI HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:03.0/sound/card0/input11
[ 4.821131] r8169 0000:02:00.0 enp2s0: Link is Down
[ 4.896459] intel_rapl_common: Found RAPL domain package
[ 4.896461] intel_rapl_common: Found RAPL domain core
[ 4.896462] intel_rapl_common: Found RAPL domain uncore
[ 4.896463] intel_rapl_common: Found RAPL domain dram
[ 4.899277] br0: port 1(enp2s0) entered blocking state
[ 4.899279] br0: port 1(enp2s0) entered forwarding state
[ 5.430749] spl: loading out-of-tree module taints kernel.
[ 5.434565] znvpair: module license ‘CDDL’ taints kernel.
[ 5.434566] Disabling lock debugging due to kernel taint
[ 6.042069] br0: port 1(enp2s0) entered disabled state
[ 6.052375] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[ 7.403888] ZFS: Loaded module v0.8.3-1ubuntu12.14, ZFS pool version 5000, ZFS filesystem version 5
[ 7.830936] r8169 0000:02:00.0 enp2s0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 7.830944] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0: link becomes ready
[ 7.830973] br0: port 1(enp2s0) entered blocking state
[ 7.830975] br0: port 1(enp2s0) entered forwarding state
[ 7.831032] IPv6: ADDRCONF(NETDEV_CHANGE): br0: link becomes ready
[ 11.016477] /var/lib/snapd/snaps/snapd_13270.snap: Can’t open blockdev
[ 12.454902] audit: type=1400 audit(1686351597.314:2): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name=“virt-aa-helper” pid=3769 comm="apparmor_parser"
[ 12.455283] audit: type=1400 audit(1686351597.314:3): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name="/usr/bin/man" pid=3768 comm="apparmor_parser"
[ 12.455285] audit: type=1400 audit(1686351597.314:4): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name=“man_filter” pid=3768 comm="apparmor_parser"
[ 12.455287] audit: type=1400 audit(1686351597.314:5): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name=“man_groff” pid=3768 comm="apparmor_parser"
[ 12.455473] audit: type=1400 audit(1686351597.314:6): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name="/usr/bin/lxc-start" pid=3767 comm="apparmor_parser"
[ 12.455975] audit: type=1400 audit(1686351597.314:7): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name="/usr/lib/snapd/snap-confine" pid=3771 comm="apparmor_parser"
[ 12.455978] audit: type=1400 audit(1686351597.314:8): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name="/usr/lib/snapd/snap-confine//mount-namespace-capture-helper" pid=3771 comm="apparmor_parser"
[ 12.456208] audit: type=1400 audit(1686351597.314:9): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name=“nvidia_modprobe” pid=3772 comm="apparmor_parser"
[ 12.456211] audit: type=1400 audit(1686351597.314:10): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name=“nvidia_modprobe//kmod” pid=3772 comm="apparmor_parser"
[ 12.458069] audit: type=1400 audit(1686351597.318:11): apparmor=“STATUS” operation=“profile_load” profile=“unconfined” name="/usr/sbin/mysqld" pid=3770 comm="apparmor_parser"
[ 12.689841] new mount options do not match the existing superblock, will be ignored
[ 13.078368] vboxdrv: module verification failed: signature and/or required key missing - tainting kernel
[ 13.085705] vboxdrv: Found 8 processor cores
[ 13.109005] vboxdrv: TSC mode is Invariant, tentative frequency 3399995434 Hz
[ 13.109006] vboxdrv: Successfully loaded version 6.0.24 (interface 0x00290008)
[ 13.320760] VBoxNetFlt: Successfully started.
[ 13.340574] VBoxNetAdp: Successfully started.
[ 13.599126] aufs 5.4.3-20200302
[ 13.627660] bpfilter: Loaded bpfilter_umh pid 4374
[ 13.627894] Started bpfilter
[ 14.053902] NFSD: Using UMH upcall client tracking operations.
[ 14.053905] NFSD: starting 90-second grace period (net f00000a8)
[ 15.334529] Bridge firewalling registered
[ 15.391069] Initializing XFRM netlink socket
[ 15.661665] systemd-sysv-generator[5144]: stat() failed on /etc/init.d/lpd, ignoring: No such file or directory
[ 16.150562] br-802e37f9b7f0: port 1(veth37dbab9) entered blocking state
[ 16.150565] br-802e37f9b7f0: port 1(veth37dbab9) entered disabled state
[ 16.159145] device veth37dbab9 entered promiscuous mode
[ 16.159367] br-802e37f9b7f0: port 1(veth37dbab9) entered blocking state
[ 16.159369] br-802e37f9b7f0: port 1(veth37dbab9) entered forwarding state
[ 16.192310] IPv6: ADDRCONF(NETDEV_CHANGE): br-802e37f9b7f0: link becomes ready
[ 16.192459] br-802e37f9b7f0: port 1(veth37dbab9) entered disabled state
[ 16.212778] br-da35a1d43d4c: port 1(vethb13c4d7) entered blocking state
[ 16.212780] br-da35a1d43d4c: port 1(vethb13c4d7) entered disabled state
[ 16.213945] device vethb13c4d7 entered promiscuous mode
[ 16.226452] br-da35a1d43d4c: port 1(vethb13c4d7) entered blocking state
[ 16.226456] br-da35a1d43d4c: port 1(vethb13c4d7) entered forwarding state
[ 16.226635] IPv6: ADDRCONF(NETDEV_CHANGE): br-da35a1d43d4c: link becomes ready
[ 16.226756] br-da35a1d43d4c: port 1(vethb13c4d7) entered disabled state
[ 16.230253] br-f040ebb1cd72: port 1(veth11aae48) entered blocking state
[ 16.230256] br-f040ebb1cd72: port 1(veth11aae48) entered disabled state
[ 16.230546] device veth11aae48 entered promiscuous mode
[ 16.244377] br-f040ebb1cd72: port 1(veth11aae48) entered blocking state
[ 16.244380] br-f040ebb1cd72: port 1(veth11aae48) entered forwarding state
[ 16.245045] IPv6: ADDRCONF(NETDEV_CHANGE): br-f040ebb1cd72: link becomes ready
[ 16.247506] br-f040ebb1cd72: port 1(veth11aae48) entered disabled state
[ 16.257443] br-5fb4807d5cd6: port 1(vetha50a937) entered blocking state
[ 16.257445] br-5fb4807d5cd6: port 1(vetha50a937) entered disabled state
[ 16.260121] device vetha50a937 entered promiscuous mode
[ 16.260699] br-5fb4807d5cd6: port 1(vetha50a937) entered blocking state
[ 16.260701] br-5fb4807d5cd6: port 1(vetha50a937) entered forwarding state
[ 16.262152] br-2542a4d463e3: port 1(vethe75435c) entered blocking state
[ 16.262154] br-2542a4d463e3: port 1(vethe75435c) entered disabled state
[ 16.263661] device vethe75435c entered promiscuous mode
[ 16.293523] IPv6: ADDRCONF(NETDEV_CHANGE): br-5fb4807d5cd6: link becomes ready
[ 16.294762] br-5fb4807d5cd6: port 1(vetha50a937) entered disabled state
[ 16.309621] br-2542a4d463e3: port 1(vethe75435c) entered blocking state
[ 16.309623] br-2542a4d463e3: port 1(vethe75435c) entered forwarding state
[ 16.313825] IPv6: ADDRCONF(NETDEV_CHANGE): br-2542a4d463e3: link becomes ready
[ 16.316274] br-2542a4d463e3: port 1(vethe75435c) entered disabled state
[ 16.333321] br-5e6d16fe27cc: port 1(veth2dbc873) entered blocking state
[ 16.333324] br-5e6d16fe27cc: port 1(veth2dbc873) entered disabled state
[ 16.333479] device veth2dbc873 entered promiscuous mode
[ 16.345426] br-81ae22d7a0bc: port 1(veth9145550) entered blocking state
[ 16.345429] br-81ae22d7a0bc: port 1(veth9145550) entered disabled state
[ 16.345515] device veth9145550 entered promiscuous mode
[ 16.345736] br-5e6d16fe27cc: port 1(veth2dbc873) entered blocking state
[ 16.345738] br-5e6d16fe27cc: port 1(veth2dbc873) entered forwarding state
[ 16.345829] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered blocking state
[ 16.345830] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered disabled state
[ 16.345929] device vethdd3d25b entered promiscuous mode
[ 16.346303] br-be134bd72fd1: port 1(veth2337af4) entered blocking state
[ 16.346305] br-be134bd72fd1: port 1(veth2337af4) entered disabled state
[ 16.346405] device veth2337af4 entered promiscuous mode
[ 16.346614] br-be134bd72fd1: port 1(veth2337af4) entered blocking state
[ 16.346616] br-be134bd72fd1: port 1(veth2337af4) entered forwarding state
[ 16.346728] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered blocking state
[ 16.346730] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered forwarding state
[ 16.346846] br-81ae22d7a0bc: port 1(veth9145550) entered blocking state
[ 16.346848] br-81ae22d7a0bc: port 1(veth9145550) entered forwarding state
[ 16.347010] br-d6ed445b22c9: port 1(veth947fd2d) entered blocking state
[ 16.347011] br-d6ed445b22c9: port 1(veth947fd2d) entered disabled state
[ 16.347098] device veth947fd2d entered promiscuous mode
[ 16.347281] br-d6ed445b22c9: port 1(veth947fd2d) entered blocking state
[ 16.347283] br-d6ed445b22c9: port 1(veth947fd2d) entered forwarding state
[ 16.347432] br-285e7f8661cc: port 1(vethaa8353a) entered blocking state
[ 16.347433] br-285e7f8661cc: port 1(vethaa8353a) entered disabled state
[ 16.347527] device vethaa8353a entered promiscuous mode
[ 16.347775] br-285e7f8661cc: port 1(vethaa8353a) entered blocking state
[ 16.347776] br-285e7f8661cc: port 1(vethaa8353a) entered forwarding state
[ 16.347959] br-47e409f489a1: port 1(veth62b4d8b) entered blocking state
[ 16.347960] br-47e409f489a1: port 1(veth62b4d8b) entered disabled state
[ 16.348015] device veth62b4d8b entered promiscuous mode
[ 16.348152] br-47e409f489a1: port 1(veth62b4d8b) entered blocking state
[ 16.348154] br-47e409f489a1: port 1(veth62b4d8b) entered forwarding state
[ 16.348214] docker0: port 1(veth9b04415) entered blocking state
[ 16.348215] docker0: port 1(veth9b04415) entered disabled state
[ 16.348590] device veth9b04415 entered promiscuous mode
[ 16.348715] docker0: port 1(veth9b04415) entered blocking state
[ 16.348716] docker0: port 1(veth9b04415) entered forwarding state
[ 16.369202] IPv6: ADDRCONF(NETDEV_CHANGE): br-81ae22d7a0bc: link becomes ready
[ 16.369281] IPv6: ADDRCONF(NETDEV_CHANGE): br-be134bd72fd1: link becomes ready
[ 16.369315] IPv6: ADDRCONF(NETDEV_CHANGE): br-d6ed445b22c9: link becomes ready
[ 16.369350] IPv6: ADDRCONF(NETDEV_CHANGE): br-285e7f8661cc: link becomes ready
[ 16.369381] IPv6: ADDRCONF(NETDEV_CHANGE): br-5e6d16fe27cc: link becomes ready
[ 16.369411] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[ 16.369444] IPv6: ADDRCONF(NETDEV_CHANGE): br-e1cb1bd8c1a9: link becomes ready
[ 16.369520] br-81ae22d7a0bc: port 1(veth9145550) entered disabled state
[ 16.373224] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered disabled state
[ 16.373313] docker0: port 1(veth9b04415) entered disabled state
[ 16.373379] br-d6ed445b22c9: port 1(veth947fd2d) entered disabled state
[ 16.373443] br-285e7f8661cc: port 1(vethaa8353a) entered disabled state
[ 16.373501] br-47e409f489a1: port 1(veth62b4d8b) entered disabled state
[ 16.373552] br-5e6d16fe27cc: port 1(veth2dbc873) entered disabled state
[ 16.373606] br-be134bd72fd1: port 1(veth2337af4) entered disabled state
[ 16.384145] br-f040ebb1cd72: port 2(vethf9d2f0f) entered blocking state
[ 16.384146] br-f040ebb1cd72: port 2(vethf9d2f0f) entered disabled state
[ 16.384552] device vethf9d2f0f entered promiscuous mode
[ 16.385094] br-f040ebb1cd72: port 2(vethf9d2f0f) entered blocking state
[ 16.385095] br-f040ebb1cd72: port 2(vethf9d2f0f) entered forwarding state
[ 16.412907] br-f040ebb1cd72: port 2(vethf9d2f0f) entered disabled state
[ 16.674863] EXT4-fs (sdg1): mounted filesystem with ordered data mode. Opts: (null)
[ 16.903187] br-285e7f8661cc: port 2(veth23aa385) entered blocking state
[ 16.903191] br-285e7f8661cc: port 2(veth23aa385) entered disabled state
[ 16.903566] device veth23aa385 entered promiscuous mode
[ 16.903686] br-285e7f8661cc: port 2(veth23aa385) entered blocking state
[ 16.903688] br-285e7f8661cc: port 2(veth23aa385) entered forwarding state
[ 16.905451] br-285e7f8661cc: port 2(veth23aa385) entered disabled state
[ 17.318936] cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation
[ 17.615619] eth0: renamed from vethdedc872
[ 17.672339] IPv6: ADDRCONF(NETDEV_CHANGE): veth37dbab9: link becomes ready
[ 17.672396] br-802e37f9b7f0: port 1(veth37dbab9) entered blocking state
[ 17.672398] br-802e37f9b7f0: port 1(veth37dbab9) entered forwarding state
[ 17.755435] eth0: renamed from veth483f47c
[ 17.801052] IPv6: ADDRCONF(NETDEV_CHANGE): vethb13c4d7: link becomes ready
[ 17.801112] br-da35a1d43d4c: port 1(vethb13c4d7) entered blocking state
[ 17.801114] br-da35a1d43d4c: port 1(vethb13c4d7) entered forwarding state
[ 17.968848] eth0: renamed from veth4719b34
[ 18.008442] eth0: renamed from vethb8dda9e
[ 18.075321] IPv6: ADDRCONF(NETDEV_CHANGE): veth2dbc873: link becomes ready
[ 18.075404] br-5e6d16fe27cc: port 1(veth2dbc873) entered blocking state
[ 18.075406] br-5e6d16fe27cc: port 1(veth2dbc873) entered forwarding state
[ 18.075466] IPv6: ADDRCONF(NETDEV_CHANGE): vethe75435c: link becomes ready
[ 18.075497] br-2542a4d463e3: port 1(vethe75435c) entered blocking state
[ 18.075498] br-2542a4d463e3: port 1(vethe75435c) entered forwarding state
[ 18.145107] eth0: renamed from veth0ead2e1
[ 18.177724] eth0: renamed from veth13e48b0
[ 18.212852] IPv6: ADDRCONF(NETDEV_CHANGE): veth11aae48: link becomes ready
[ 18.212898] br-f040ebb1cd72: port 1(veth11aae48) entered blocking state
[ 18.212899] br-f040ebb1cd72: port 1(veth11aae48) entered forwarding state
[ 18.212942] IPv6: ADDRCONF(NETDEV_CHANGE): vetha50a937: link becomes ready
[ 18.212967] br-5fb4807d5cd6: port 1(vetha50a937) entered blocking state
[ 18.212968] br-5fb4807d5cd6: port 1(vetha50a937) entered forwarding state
[ 18.328836] eth0: renamed from veth6cb5579
[ 18.362188] IPv6: ADDRCONF(NETDEV_CHANGE): veth9b04415: link becomes ready
[ 18.362244] docker0: port 1(veth9b04415) entered blocking state
[ 18.362245] docker0: port 1(veth9b04415) entered forwarding state
[ 18.448862] eth0: renamed from vethaaf9a29
[ 18.597698] eth0: renamed from vethe475892
[ 18.632638] eth0: renamed from veth2aed310
[ 18.653824] IPv6: ADDRCONF(NETDEV_CHANGE): vethf9d2f0f: link becomes ready
[ 18.653898] br-f040ebb1cd72: port 2(vethf9d2f0f) entered blocking state
[ 18.653899] br-f040ebb1cd72: port 2(vethf9d2f0f) entered forwarding state
[ 18.653952] IPv6: ADDRCONF(NETDEV_CHANGE): vethdd3d25b: link becomes ready
[ 18.653986] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered blocking state
[ 18.653987] br-e1cb1bd8c1a9: port 1(vethdd3d25b) entered forwarding state
[ 18.660715] IPv6: ADDRCONF(NETDEV_CHANGE): veth9145550: link becomes ready
[ 18.660783] br-81ae22d7a0bc: port 1(veth9145550) entered blocking state
[ 18.660784] br-81ae22d7a0bc: port 1(veth9145550) entered forwarding state
[ 18.703055] eth0: renamed from veth8157792
[ 18.729160] IPv6: ADDRCONF(NETDEV_CHANGE): veth947fd2d: link becomes ready
[ 18.729230] br-d6ed445b22c9: port 1(veth947fd2d) entered blocking state
[ 18.729232] br-d6ed445b22c9: port 1(veth947fd2d) entered forwarding state
[ 18.764129] eth0: renamed from veth9c65408
[ 18.828645] IPv6: ADDRCONF(NETDEV_CHANGE): vethaa8353a: link becomes ready
[ 18.828719] br-285e7f8661cc: port 1(vethaa8353a) entered blocking state
[ 18.828720] br-285e7f8661cc: port 1(vethaa8353a) entered forwarding state
[ 18.879240] eth0: renamed from vethfe089aa
[ 18.897012] eth0: renamed from veth3aef7b4
[ 18.924539] IPv6: ADDRCONF(NETDEV_CHANGE): veth2337af4: link becomes ready
[ 18.924607] br-be134bd72fd1: port 1(veth2337af4) entered blocking state
[ 18.924608] br-be134bd72fd1: port 1(veth2337af4) entered forwarding state
[ 18.957952] eth0: renamed from veth53d17c9
[ 18.972545] IPv6: ADDRCONF(NETDEV_CHANGE): veth23aa385: link becomes ready
[ 18.972620] br-285e7f8661cc: port 2(veth23aa385) entered blocking state
[ 18.972621] br-285e7f8661cc: port 2(veth23aa385) entered forwarding state
[ 18.977613] IPv6: ADDRCONF(NETDEV_CHANGE): veth62b4d8b: link becomes ready
[ 18.977672] br-47e409f489a1: port 1(veth62b4d8b) entered blocking state
[ 18.977674] br-47e409f489a1: port 1(veth62b4d8b) entered forwarding state
[ 18.977716] IPv6: ADDRCONF(NETDEV_CHANGE): br-47e409f489a1: link becomes ready
[ 19.625714] new mount options do not match the existing superblock, will be ignored
[ 20.680523] NET: Registered protocol family 40
[ 21.757367] L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
[ 63.609879] TCP: request_sock_TCP: Possible SYN flooding on port 5535. Sending cookies. Check SNMP counters.

df -h shows 76% available on root.

I think the storage is broken. I’m actually quite sure of it.
The question is: is there some way I can get something useful out of it? And how?

What do you mean by that?
Regards.

Sorry. What I mean is: even if I can’t get my containers back, I hope to extract the data from the storage. I thought I had working backups, but somehow I can’t use them 😕 Even if I restore from an old timeshift snapshot, it fails…

Please examine this post, it may help you. Regards.
https://discuss.linuxcontainers.org/t/mount-container-directory-in-the-host/12385/1

I tried running “mount /var/snap/lxd/common/lxd/disks/default.img /root/test/” as instructed in the thread, but it gave me “mount: /root/test: unknown filesystem type ‘zfs_member’.”
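
(Side note: “zfs_member” means the image file is a vdev of a ZFS pool, so a plain mount cannot mount it; the ZFS equivalent is a zpool import against the directory holding the image. A rough sketch of a read-only import attempt, using the paths from this thread; readonly=on and -F are standard zpool recovery options, but given the I/O error this may still fail the same way:)

sudo snap stop lxd                                   # stop LXD so it is not racing to import the pool itself
sudo zpool import -d /var/snap/lxd/common/lxd/disks  # list pools visible in that directory, no changes made
sudo zpool import -d /var/snap/lxd/common/lxd/disks -o readonly=on -f -F default
sudo zfs list -r default                             # if the import succeeds, the container datasets show up here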

How about the /var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default/containers/<container_name>/rootfs directory?
Regards.

Hi
That directory does not exist.
/var/snap/lxd/common/mntns/var/snap/lxd/common/lxd/storage-pools/default/containers/<container_name> exists, but nothing beyond that.

Hmm, there is something wrong with your default storage image, I think.

Yes, I believe there is…
But I should somehow be able to extract something from it, right?

I have timeshift snapshots from back in May.
In theory, should I be able to restore the default.img from one of those and have it work again?

Yes, as long as that backup of default.img is not damaged. You can definitely init the same LXD configuration and copy your default.img backup back into the directory.
Regards.
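
(A rough sketch of what that restore could look like, assuming the timeshift copy of default.img is itself intact; the backup path below is hypothetical:)

sudo snap stop lxd
sudo cp /path/to/timeshift-backup/default.img /var/snap/lxd/common/lxd/disks/default.img   # hypothetical backup location
sudo snap start lxd
lxc storage ls        # the pool should show as available again
sudo lxd recover      # only needed if the pool or instances are missing from the LXD database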

Hi
I tried to restore the img, but it did not work. I even tried to restore the whole system, with no good result. What options do I have left?
I really need the files in one of the containers…

What does that mean exactly? What steps are you trying, and what error messages do you get? Please write out the steps you are doing.
Regards.

OK, I have now tried, as I asked about before, to restore the default.img from early May via timeshift (I also tried to restore the whole system). When I try to run “lxc start ” it still gives me “Error: Storage pool “default” unavailable on this server”

I just can’t figure out why this isn’t working…

Please post those command outputs: lxc storage ls and lxd sql global "select * from storage_pools;",
ls -alh /var/snap/lxd/common/lxd/disks/default.img, and lxd version.
And one last thing, /var/snap/lxd/common/lxd/logs/lxd.log. 🙂
Regards.

lxc storage ls
+---------+--------+--------------------------------------------+-------------+---------+
| NAME    | DRIVER | SOURCE                                     | DESCRIPTION | USED BY |
+---------+--------+--------------------------------------------+-------------+---------+
| default | zfs    | /var/snap/lxd/common/lxd/disks/default.img |             | 24      |
+---------+--------+--------------------------------------------+-------------+---------+

lxd sql global "select * from storage_pools;"
+----+---------+--------+-------------+-------+
| id | name    | driver | description | state |
+----+---------+--------+-------------+-------+
| 1  | default | zfs    |             | 1     |
+----+---------+--------+-------------+-------+

ls -alh /var/snap/lxd/common/lxd/disks/default.img
-rw------- 1 root root 53G Jun 10 00:09 /var/snap/lxd/common/lxd/disks/default.img

cat /var/snap/lxd/common/lxd/logs/lxd.log
time=“2023-06-13T09:56:06+02:00” level=warning msg=" - Couldn’t find the CGroup blkio.weight, disk priority will be ignored"
time=“2023-06-13T09:56:06+02:00” level=warning msg=" - Couldn’t find the CGroup memory swap accounting, swap limits will be ignored"
time=“2023-06-13T09:56:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T09:56:11+02:00” level=warning msg=“Failed to initialize fanotify, falling back on inotify” err=“Failed to initialize fanotify: invalid argument”
time=“2023-06-13T09:56:11+02:00” level=warning msg=“Failed to create warning” err=“Failed to retrieve warnings: Failed to fetch from “warnings” table: Failed to fetch from “warnings” table: id”
time=“2023-06-13T09:56:11+02:00” level=warning msg=“Failed to create warning” err=“Failed to retrieve warnings: Failed to fetch from “warnings” table: Failed to fetch from “warnings” table: id”
time=“2023-06-13T09:56:11+02:00” level=warning msg=“Failed to resolve warnings” err=“Failed to resolve warnings: Failed to fetch from “warnings” table: Failed to fetch from “warnings” table: id”
time=“2023-06-13T09:56:26+02:00” level=error msg=“Failed to update the image” err=“Storage pool is unavailable on this server” fingerprint=46c0b8bf83411ce5cc2eb7f27dead107b1699c7f8391b4ec1986ae782b1a045a
time=“2023-06-13T09:56:26+02:00” level=error msg=“Failed to update the image” err=“Failed to create image “be054e6cefeca692405840dbc81d192c8a650f576caead7e1a852c0f0bda00fc” on storage pool “default”: Storage pool is unavailable on this server” fingerprint=561195dedea294fc279824c25b18bdc21efed94db63a7749c49378a1e94cdf41
time=“2023-06-13T09:56:28+02:00” level=error msg=“Failed to update the image” err=“Failed to create image “884a62161ef52d74fa2e588a650455b2f7acf7030797263cb1c06e44bc102ee6” on storage pool “default”: Storage pool is unavailable on this server” fingerprint=c51241b9673c1fd4d206caf8fc49bb62e445b67c647af0d37c567753b774325b
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=bedrock instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=cups instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=minecraft instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=duckdns instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=gimme-iphotos instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=warning msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=homeassistant instanceType=virtual-machine project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=pihole instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=warning msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=windows10 instanceType=virtual-machine project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=darkweb instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=PodcastGenerator instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=dircaster instanceType=container project=default
time=“2023-06-13T09:56:59+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=centos9-stream instanceType=container project=default
time=“2023-06-13T09:57:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T09:58:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T09:59:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=duckdns instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=minecraft instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=bedrock instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=gimme-iphotos instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=darkweb instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=cups instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=warning msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=homeassistant instanceType=virtual-machine project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=pihole instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=warning msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=windows10 instanceType=virtual-machine project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=PodcastGenerator instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=dircaster instanceType=container project=default
time=“2023-06-13T09:59:51+02:00” level=error msg=“Error getting disk usage” err=“Storage pool is unavailable on this server” instance=centos9-stream instanceType=container project=default
time=“2023-06-13T10:00:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:01:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:02:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:03:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:04:11+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:05:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:06:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:07:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:08:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:09:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:10:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:11:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:12:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:13:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:14:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:15:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:16:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:17:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:18:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:19:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:20:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:21:12+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:22:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:23:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:24:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:25:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:26:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:27:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T10:28:13+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time="2023-06-13T10:29:13+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)" pool=default
[... the identical "Failed mounting storage pool" error repeats roughly once per minute until 11:01:15 ...]
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=cups instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=minecraft instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=bedrock instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=duckdns instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=gimme-iphotos instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=darkweb instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=warning msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=homeassistant instanceType=virtual-machine project=default
time="2023-06-13T11:01:19+02:00" level=warning msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=windows10 instanceType=virtual-machine project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=pihole instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=dircaster instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=centos9-stream instanceType=container project=default
time="2023-06-13T11:01:19+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=PodcastGenerator instanceType=container project=default
time="2023-06-13T11:02:15+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)" pool=default
[... the identical "Failed mounting storage pool" error repeats roughly once per minute until 14:06:27 ...]
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=duckdns instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=bedrock instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=cups instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=minecraft instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=darkweb instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=pihole instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=warning msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=homeassistant instanceType=virtual-machine project=default
time="2023-06-13T14:07:00+02:00" level=warning msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=windows10 instanceType=virtual-machine project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=PodcastGenerator instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=dircaster instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=centos9-stream instanceType=container project=default
time="2023-06-13T14:07:00+02:00" level=error msg="Error getting disk usage" err="Storage pool is unavailable on this server" instance=gimme-iphotos instanceType=container project=default
time="2023-06-13T14:07:27+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)" pool=default
[... the identical "Failed mounting storage pool" error repeats roughly once per minute until 14:38:29 ...]
time="2023-06-13T14:39:24+02:00" level=warning msg="Failed to resolve warning" driver=bridge err="Failed to resolve warnings: Failed to fetch from "warnings" table: Failed to fetch from "warnings" table: id" network=lxdbr0 project=default
time="2023-06-13T14:39:24+02:00" level=warning msg="Failed to resolve warning" driver=bridge err="Failed to resolve warnings: Failed to fetch from "warnings" table: Failed to fetch from "warnings" table: id" network=lxdbr0 project=default
time="2023-06-13T14:39:31+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)" pool=default
time="2023-06-13T14:40:31+02:00" level=error msg="Failed mounting storage pool" err="Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import 'default': I/O error)" pool=default
time="2023-06-13T14:40:45+02:00" level=warning msg="Failed to resolve warning" driver=bridge err="Failed to resolve warnings: Failed to fetch from "warnings" table: Failed to fetch from "warnings" table: id" network=lxdbr0 project=default
time="2023-06-13T14:40:46+02:00" level=warning msg="Failed to resolve warning" driver=bridge err="Failed to resolve warnings: Failed to fetch from "warnings" table: Failed to fetch from "warnings" table: id" network=lxdbr0 project=default
time=“2023-06-13T14:41:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:42:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:43:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:44:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:45:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:46:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:47:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:48:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:49:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:50:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:51:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:52:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:53:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:54:31+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:55:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:56:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:57:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:58:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T14:59:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:00:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:01:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:02:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:03:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:04:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:05:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:06:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:07:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:08:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:09:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:10:32+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:11:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:12:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:13:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:14:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:15:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:16:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:17:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:18:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:19:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:20:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:21:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:22:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:23:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:24:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:25:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:26:33+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:27:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:28:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:29:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:30:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:31:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:32:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:33:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:34:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:35:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:36:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:37:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:38:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:39:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:40:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:41:34+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:42:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:43:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:44:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:45:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:46:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:47:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:48:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:49:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:50:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:51:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:52:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:53:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
time=“2023-06-13T15:54:35+02:00” level=error msg=“Failed mounting storage pool” err=“Failed to run: zpool import -f -d /var/snap/lxd/common/lxd/disks default: exit status 1 (cannot import ‘default’: I/O error)” pool=default
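
Side note: the daemon is simply retrying the same zpool import every minute and hitting the same I/O error, so the real question is whether the pool is readable at all outside of LXD. A minimal diagnostic sketch, assuming the backing image path shown in the log (/var/snap/lxd/common/lxd/disks/default.img); this only reads the pool, it is not a fix:

# stop the snap so LXD stops retrying the import while you test (optional)
sudo snap stop lxd

# read the ZFS labels straight from the loop-backed image file
sudo zdb -l /var/snap/lxd/common/lxd/disks/default.img

# ask zpool what it can see in that directory (no pool name = list only)
sudo zpool import -d /var/snap/lxd/common/lxd/disks

# if the pool is listed, try a read-only import so nothing further is written
sudo zpool import -o readonly=on -d /var/snap/lxd/common/lxd/disks default

If even the read-only import fails with the same I/O error, that points at the underlying disk or the image file itself rather than at LXD.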

Can you post the df -hT output, please?

Yes :)
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 12G 0 12G 0% /dev
tmpfs tmpfs 2.4G 5.3M 2.4G 1% /run
/dev/sda1 ext4 210G 129G 72G 65% /
tmpfs tmpfs 12G 12K 12G 1% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 12G 0 12G 0% /sys/fs/cgroup
tmpfs tmpfs 12G 0 12G 0% /run/qemu
/dev/md0 ext4 2.7T 2.1T 485G 82% /Volumes/Media
Media2 zfs 3.6T 2.5T 1.1T 70% /Volumes/Media2
/dev/loop2 squashfs 9.7M 9.7M 0 100% /snap/canonical-livepatch/229
/dev/loop3 squashfs 303M 303M 0 100% /snap/code/129
/dev/loop1 squashfs 2.0M 2.0M 0 100% /snap/btop/617
/dev/loop5 squashfs 117M 117M 0 100% /snap/core/14946
/dev/loop4 squashfs 303M 303M 0 100% /snap/code/130
/dev/loop6 squashfs 56M 56M 0 100% /snap/core18/2745
/dev/loop0 squashfs 2.0M 2.0M 0 100% /snap/btop/612
/dev/loop7 squashfs 56M 56M 0 100% /snap/core18/2751
/dev/loop9 squashfs 39M 39M 0 100% /snap/thelounge/280
/dev/loop8 squashfs 117M 117M 0 100% /snap/core/14784
/dev/loop10 squashfs 125M 125M 0 100% /snap/yt-dlp/233
/dev/loop11 squashfs 8.5M 8.5M 0 100% /snap/distrobuilder/1125
/dev/loop12 squashfs 171M 171M 0 100% /snap/lxd/24918
/dev/loop13 squashfs 167M 167M 0 100% /snap/lxd/24846
/dev/loop14 squashfs 74M 74M 0 100% /snap/core22/750
/dev/loop15 squashfs 9.7M 9.7M 0 100% /snap/canonical-livepatch/216
/dev/loop16 squashfs 125M 125M 0 100% /snap/yt-dlp/220
/dev/loop17 squashfs 54M 54M 0 100% /snap/snapd/19122
/dev/loop18 squashfs 64M 64M 0 100% /snap/core20/1879
/dev/loop19 squashfs 8.7M 8.7M 0 100% /snap/distrobuilder/1364
/dev/loop20 squashfs 64M 64M 0 100% /snap/core20/1891
/dev/loop21 squashfs 54M 54M 0 100% /snap/snapd/19361
/dev/loop22 squashfs 74M 74M 0 100% /snap/core22/634
tmpfs tmpfs 2.4G 0 2.4G 0% /run/user/1000
tmpfs tmpfs 1.0M 0 1.0M 0% /var/snap/lxd/common/ns
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/3e7599447063f1ced6dbf6982aef39bd2a79c208c6bb9bb52f723fb888c070ab/merged
/dev/sdg1 ext4 458G 395G 40G 91% /run/timeshift/backup
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/cefed74d03ee25c22e47239dbe336cf81100852715a573b43fdcf8b9a9e799f3/merged
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/3bf2e85985da8f83d831831a1aab6e375266ff603776b6d1533f86e54e18f167/merged
shm tmpfs 64M 16K 64M 1% /var/lib/docker/containers/051666a07682d4a63fb203032a67c732bdce83be110c9f26a259c28f55611226/mounts/shm
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/fe6e72526f1aa96ad78ae7272e875bab004d9ff02beee5dea4cb0cc286d00bc4/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/074143dc35689a75ea08a413a5d5f5ad658cb668f2ef1d2816788ed04f37de4c/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/e58fb60e63909cb2b2a8ded96e2e5aa9dceed6c6f9759e31cf6fb9aac8b8a141/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/e451d0ca4f9e1904e37969aa159b814bec7b2e0dc5eaf6cc25c37f834ddd9d74/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/7fadfaa2de9b156965b704caca900837673e08d4083e42dd18326906fe8019e7/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/91fd8246f4f1d12c656d688a85d5ecd7b94175114e16844ca2b2b644647f714f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/84d7abf5cf1a0cd12fbd3fcc4e3f975e33c6cb65f97379b74584769d8998d39a/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/8db3e17f529d69ae663914a2dd4f6bf4a56c02e40df2973514c83dcafa20473f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/521a080fb196f4d0a37c207118ae381348e9eba63be91aee271ed27a059bd81e/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/e8120d1628b65090450f09b69e08982e6ecb2bcc54883d73d8325b2221627c78/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/059c0649a18cbbda3f665c074d01757586fc9c53622ba3586c44c6912b8b0c97/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/a0e7faaa10a7c5860da2a657c26d1a1b26f18157c3bee6f071c698c494d4bd2e/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/0032b6f7ba1b9ca867fa48f0cf66eefd5fa889e42dbe497850abc31e25f3c18f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/813fa90e94abbef99036a63691c21f9ef2e87bacdc0950c5801ae38ee58773e9/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/e402f2aed7259f7acbf86d18d16f7d42f2bd73040baf8a609e1e7ba216fc015f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/7d6c7b9e4ead1046a531876f94d8d39d8e0eb356f39d3e965a4bc7cbfae7f284/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a0727fbbf304999e1e053b8e1674147734a0d681accc7310dcd3f1d31b195f70/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/4740cda75c7f7c02623f164c645bbd7a52e68d91474bd53103ba9c4b4fa2f9ba/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/0bf4a064b56340b2033d0019b55933c45aa9be6c672ac7134650bd44d8e7f0d6/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/8167f085718f14e2d008c35dfa30d75754f7f64a15832e45f1d0fc11d194c19f/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/efe6d2e4f2e35bc55372b2c6446606b2168436675a0a8829639590ceff2e4c49/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/fe7ec8904c633dfbcb07709c5e49a86f14e49b3c1b3c5a585407d40144aab094/merged
shm tmpfs 64M 4.0K 64M 1% /var/lib/docker/containers/e827dfe3988d87e464eb839485b3ec5a05653bedcd00db6f6708fe3edadb09f2/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/d391027ee210de51471179c6ad4edac4397f1a1cc1148bb1f245f9a260c4f8ee/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/bf5f7cb2a021dc866ce9f5a5a7a1630843cfdd31d894b4ed8eed68c4a34d3a5b/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/ee73d057ea023c2eeee211a9bcc4456cdf681f40c9a1c5dc6297893be0343b90/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/5103d5c01516aeffc96c1d809175d3146a28319401a1ad78a0950ec6fcddbf14/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/d641f6cfc9e40705fa6d6cf316030e7c2c8dfd80c12116ca04b08da193e46afe/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/b5940ab39890301e4820fb625b75196ae67b6208875b0bc24212588600105803/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/2ccae3801265b7369ee13dc8bf46881872605c34fe963f62cc182aae03c11fd5/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/f84ce44ed92acd694c282fffcd790e656168144c2629d83de5815caf4eed54a3/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/294413bcde6b45d01e5983f01e6278a7e7459e1c890c64beb342ea7cbbf1fecd/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a42943dc60027a507328878af6b0c6b907b872512f54c7f1676f265d9378597f/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/1bcfccd5155a99d148c913797cde63ca2f3810dace219ee9da83dcc37bb2ef83/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/a14374eb1527fdcade2a323b3c038c6758773cf98cc15b5609b0a3aab4a0399b/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/2e85b1ce40f4ffb29381078478e6cf465b2bb7d65d73fd2f28492d812e8459b0/merged
shm tmpfs 64M 1.3M 63M 2% /var/lib/docker/containers/fe34550f9d91b1c5cb44219709a2c80e22cc83b913d432d661f0a2b94bcb6dce/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/0689a0bcceb9bef65d1afcb76bccc2d49c298f1afffd283e9c7ff72896668076/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/044bc9645ccdce0590adea331b4f9047f4794298fca7fd7d9c282d2e5fc88d54/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/4f38bbd79edb09e40466aa1fc3021bd69f5900578f565adca43dae63e20d0e26/merged
shm tmpfs 64M 28K 64M 1% /var/lib/docker/containers/1afc12d92666982696d175ad949b011f2bc222ce355d5d0c273a8df72a9fd039/mounts/shm
overlay overlay 210G 129G 72G 65% /var/lib/docker/overlay2/a92921cd2c25e9cdc168a006de44a6597f692da872be27b1d76c2ec7f3d94338/merged
shm tmpfs 64M 0 64M 0% /var/lib/docker/containers/8e6f4c5f67bcff0036de4bd03076a8fc48e2557c435a0e6d8c58506ab338ccdd/mounts/shm