Hello,
I copied one of my containers to another Incus host that does not have an OVN network. After copying, I can't start the container.
Here is the error:
indiana@incusrv01:~$ incus start nanogpt
Error: Failed to run: /opt/incus/bin/incusd forkstart nanogpt /var/lib/incus/containers /run/incus/nanogpt/lxc.conf: exit status 1
Try `incus info --show-log nanogpt` for more info
indiana@incusrv01:~$ incus info --show-log nanogpt
Name: nanogpt
Description:
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2025/04/07 19:11 UTC
Last Used: 2025/04/07 21:05 UTC
Log:
lxc nanogpt 20250407210500.270 ERROR utils - ../src/lxc/utils.c:run_buffer:571 - Script exited with status 1
lxc nanogpt 20250407210500.270 ERROR conf - ../src/lxc/conf.c:lxc_setup:3948 - Failed to run mount hooks
lxc nanogpt 20250407210500.270 ERROR start - ../src/lxc/start.c:do_start:1273 - Failed to setup container "nanogpt"
lxc nanogpt 20250407210500.271 ERROR sync - ../src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 4)
lxc nanogpt 20250407210500.276 WARN network - ../src/lxc/network.c:lxc_delete_network_priv:3674 - Failed to rename interface with index 0 from "eth0" to its initial name "vethf58eda79"
lxc nanogpt 20250407210500.276 ERROR lxccontainer - ../src/lxc/lxccontainer.c:wait_on_daemonized_start:837 - Received container state "ABORTING" instead of "RUNNING"
lxc nanogpt 20250407210500.276 ERROR start - ../src/lxc/start.c:__lxc_start:2119 - Failed to spawn container "nanogpt"
lxc nanogpt 20250407210500.276 WARN start - ../src/lxc/start.c:lxc_abort:1037 - No such process - Failed to send SIGKILL via pidfd 17 for process 4724
lxc 20250407210500.336 ERROR af_unix - ../src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20250407210500.336 ERROR commands - ../src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"
Can anyone assist me?
Regards.
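For reference, the copy between hosts is typically done with the Incus copy command against a configured remote; a hypothetical example (the remote name incusrv01 is an assumption, not taken from the thread):

# On the source host, with the target host added as a remote named incusrv01
incus copy nanogpt incusrv01: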
stgraber (Stéphane Graber), April 8, 2025, 12:03am
Can you run incus monitor --pretty at the same time as trying to start the container?
It's failing in the mount hook, so that would normally suggest a storage issue of some kind.
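For reference, one way to capture this is to keep the monitor running in one terminal while reproducing the failure in another (a minimal sketch using the instance name from the thread):

# Terminal 1: stream daemon log and lifecycle events in a readable format
incus monitor --pretty
# Terminal 2: reproduce the failure while the monitor is running
incus start nanogpt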
Thanks for the reply.
Strangely, after I powered the machine (incusrv01) back on and started the container, it started fine.
Here is the monitor output:
DEBUG [2025-04-08T04:34:29Z] Handling API request ip=@ method=GET protocol=unix url=/1.0 username=indiana
DEBUG [2025-04-08T04:34:29Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/instances/nanogpt username=indiana
DEBUG [2025-04-08T04:34:29Z] Event listener server handler started id=6b347fa6-2b11-4fbc-97aa-f1e38ec5c435 local=/var/lib/incus/unix.socket remote=@
DEBUG [2025-04-08T04:34:29Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/events username=indiana
DEBUG [2025-04-08T04:34:29Z] Handling API request ip=@ method=PUT protocol=unix url=/1.0/instances/nanogpt/state username=indiana
INFO [2025-04-08T04:34:29Z] ID: 8b984eb2-cd0a-404b-8aef-9a70a2c85a23, Class: task, Description: Starting instance CreatedAt="2025-04-08 04:34:29.905741115 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/nanogpt]]" Status=Pending StatusCode=Pending UpdatedAt="2025-04-08 04:34:29.905741115 +0000 UTC"
DEBUG [2025-04-08T04:34:29Z] New operation class=task description="Starting instance" operation=8b984eb2-cd0a-404b-8aef-9a70a2c85a23 project=default
DEBUG [2025-04-08T04:34:29Z] Started operation class=task description="Starting instance" operation=8b984eb2-cd0a-404b-8aef-9a70a2c85a23 project=default
INFO [2025-04-08T04:34:29Z] ID: 8b984eb2-cd0a-404b-8aef-9a70a2c85a23, Class: task, Description: Starting instance CreatedAt="2025-04-08 04:34:29.905741115 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/nanogpt]]" Status=Running StatusCode=Running UpdatedAt="2025-04-08 04:34:29.905741115 +0000 UTC"
DEBUG [2025-04-08T04:34:29Z] Start started instance=nanogpt instanceType=container project=default stateful=false
DEBUG [2025-04-08T04:34:29Z] Instance operation lock created action=start instance=nanogpt project=default reusable=false
DEBUG [2025-04-08T04:34:29Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/operations/8b984eb2-cd0a-404b-8aef-9a70a2c85a23 username=indiana
INFO [2025-04-08T04:34:29Z] Starting instance action=start created="2025-04-07 19:11:02.319665927 +0000 UTC" ephemeral=false instance=nanogpt instanceType=container project=default stateful=false used="2025-04-07 21:05:00.187733534 +0000 UTC"
DEBUG [2025-04-08T04:34:29Z] MountInstance started driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] MountInstance finished driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] Mounted ZFS dataset dev=zfspool/containers/nanogpt driver=zfs path=/var/lib/incus/storage-pools/default/containers/nanogpt pool=default volName=nanogpt
DEBUG [2025-04-08T04:34:29Z] Starting device device=eth0 instance=nanogpt instanceType=container project=default type=nic
DEBUG [2025-04-08T04:34:29Z] Starting device device=root instance=nanogpt instanceType=container project=default type=disk
DEBUG [2025-04-08T04:34:29Z] UpdateInstanceBackupFile started driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] CacheInstanceSnapshots started driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] CacheInstanceSnapshots finished driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] UpdateInstanceBackupFile finished driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:29Z] Skipping unmount as in use driver=zfs pool=default refCount=1 volName=nanogpt
DEBUG [2025-04-08T04:34:30Z] Handling API request ip=@ method=GET protocol=unix url="/internal/containers/nanogpt/onstart?project=default" username=root
DEBUG [2025-04-08T04:34:30Z] Scheduler: container nanogpt started: re-balancing
INFO [2025-04-08T04:34:31Z] Started instance action=start created="2025-04-07 19:11:02.319665927 +0000 UTC" ephemeral=false instance=nanogpt instanceType=container project=default stateful=false used="2025-04-07 21:05:00.187733534 +0000 UTC"
DEBUG [2025-04-08T04:34:31Z] Start finished instance=nanogpt instanceType=container project=default stateful=false
DEBUG [2025-04-08T04:34:31Z] Instance operation lock finished action=start err="<nil>" instance=nanogpt project=default reusable=false
INFO [2025-04-08T04:34:31Z] Action: instance-started, Source: /1.0/instances/nanogpt, Requestor: unix/indiana (@)
INFO [2025-04-08T04:34:31Z] ID: 8b984eb2-cd0a-404b-8aef-9a70a2c85a23, Class: task, Description: Starting instance CreatedAt="2025-04-08 04:34:29.905741115 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/nanogpt]]" Status=Success StatusCode=Success UpdatedAt="2025-04-08 04:34:29.905741115 +0000 UTC"
DEBUG [2025-04-08T04:34:31Z] Success for operation class=task description="Starting instance" operation=8b984eb2-cd0a-404b-8aef-9a70a2c85a23 project=default
DEBUG [2025-04-08T04:34:31Z] Event listener server handler stopped listener=6b347fa6-2b11-4fbc-97aa-f1e38ec5c435 local=/var/lib/incus/unix.socket remote=@
DEBUG [2025-04-08T04:34:39Z] Handling API request ip=@ method=GET protocol=unix url=/1.0 username=indiana
DEBUG [2025-04-08T04:34:39Z] Handling API request ip=@ method=GET protocol=unix url="/1.0/instances?filter=&recursion=2" username=indiana
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage started driver=zfs instance=oi pool=default project=default
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage started driver=zfs instance=sd pool=default project=default
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage started driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage finished driver=zfs instance=oi pool=default project=default
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage finished driver=zfs instance=sd pool=default project=default
DEBUG [2025-04-08T04:34:39Z] GetInstanceUsage finished driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots started driver=zfs instance=oi pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots started driver=zfs instance=sd pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots started driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots finished driver=zfs instance=nanogpt pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots finished driver=zfs instance=sd pool=default project=default
DEBUG [2025-04-08T04:34:39Z] CacheInstanceSnapshots finished driver=zfs instance=oi pool=default project=default
I’ll test with another container and share the result as soon as possible.
Regards.
I have reproduced the error again; here is the monitor output. The virtual machine is named test in this case.
DEBUG [2025-04-08T16:45:37Z] Handling API request ip=@ method=GET protocol=unix url=/1.0 username=indiana
DEBUG [2025-04-08T16:45:37Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/instances/test username=indiana
DEBUG [2025-04-08T16:45:37Z] Event listener server handler started id=e9751ea7-7c71-488e-97ac-68534f89fbba local=/var/lib/incus/unix.socket remote=@
DEBUG [2025-04-08T16:45:37Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/events username=indiana
DEBUG [2025-04-08T16:45:37Z] Handling API request ip=@ method=PUT protocol=unix url=/1.0/instances/test/state username=indiana
DEBUG [2025-04-08T16:45:37Z] Start started instance=test instanceType=virtual-machine project=default stateful=false
INFO [2025-04-08T16:45:37Z] ID: 33940ed4-8228-4913-aeee-dcec49aa3530, Class: task, Description: Starting instance CreatedAt="2025-04-08 16:45:37.486247447 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/test]]" Status=Running StatusCode=Running UpdatedAt="2025-04-08 16:45:37.486247447 +0000 UTC"
DEBUG [2025-04-08T16:45:37Z] Instance operation lock created action=start instance=test project=default reusable=false
DEBUG [2025-04-08T16:45:37Z] Started operation class=task description="Starting instance" operation=33940ed4-8228-4913-aeee-dcec49aa3530 project=default
DEBUG [2025-04-08T16:45:37Z] New operation class=task description="Starting instance" operation=33940ed4-8228-4913-aeee-dcec49aa3530 project=default
INFO [2025-04-08T16:45:37Z] ID: 33940ed4-8228-4913-aeee-dcec49aa3530, Class: task, Description: Starting instance CreatedAt="2025-04-08 16:45:37.486247447 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/test]]" Status=Pending StatusCode=Pending UpdatedAt="2025-04-08 16:45:37.486247447 +0000 UTC"
DEBUG [2025-04-08T16:45:37Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/operations/33940ed4-8228-4913-aeee-dcec49aa3530 username=indiana
DEBUG [2025-04-08T16:45:37Z] MountInstance started driver=zfs instance=test pool=default project=default
DEBUG [2025-04-08T16:45:37Z] Activated ZFS volume dev=zfspool/virtual-machines/test.block driver=zfs pool=default volName=test
DEBUG [2025-04-08T16:45:37Z] MountInstance finished driver=zfs instance=test pool=default project=default
DEBUG [2025-04-08T16:45:37Z] Mounted ZFS dataset dev=zfspool/virtual-machines/test driver=zfs path=/var/lib/incus/storage-pools/default/virtual-machines/test pool=default volName=test
DEBUG [2025-04-08T16:45:37Z] Starting device device=eth0 instance=test instanceType=virtual-machine project=default type=nic
DEBUG [2025-04-08T16:45:37Z] Starting device device=agent instance=test instanceType=virtual-machine project=default type=disk
DEBUG [2025-04-08T16:45:37Z] Starting device device=root instance=test instanceType=virtual-machine project=default type=disk
DEBUG [2025-04-08T16:45:37Z] Instance operation lock finished action=start err="Unable to locate matching firmware: [{Code:/opt/incus/share/qemu/OVMF_CODE.4MB.fd Vars:/opt/incus/share/qemu/OVMF_VARS.4MB.ms.fd} {Code:/usr/share/OVMF/OVMF_CODE_4M.ms.fd Vars:/usr/share/OVMF/OVMF_VARS_4M.ms.fd}]" instance=test project=default reusable=false
DEBUG [2025-04-08T16:45:37Z] Stopping device device=eth0 instance=test instanceType=virtual-machine project=default type=nic
DEBUG [2025-04-08T16:45:37Z] Stopping device device=agent instance=test instanceType=virtual-machine project=default type=disk
DEBUG [2025-04-08T16:45:37Z] Stopping device device=root instance=test instanceType=virtual-machine project=default type=disk
DEBUG [2025-04-08T16:45:37Z] UnmountInstance started driver=zfs instance=test pool=default project=default
DEBUG [2025-04-08T16:45:37Z] Failed to unmount attempt=0 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:38Z] Failed to unmount attempt=1 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:38Z] Failed to unmount attempt=2 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:39Z] Failed to unmount attempt=3 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:39Z] Failed to unmount attempt=4 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:40Z] Failed to unmount attempt=5 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:40Z] Failed to unmount attempt=6 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:41Z] Failed to unmount attempt=7 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:41Z] Failed to unmount attempt=8 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:42Z] Failed to unmount attempt=9 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:42Z] Failed to unmount attempt=10 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:43Z] Failed to unmount attempt=11 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:43Z] Failed to unmount attempt=12 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:44Z] Failed to unmount attempt=13 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:44Z] Failed to unmount attempt=14 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:45Z] Failed to unmount attempt=15 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:45Z] Failed to unmount attempt=16 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:46Z] Failed to unmount attempt=17 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:46Z] Failed to unmount attempt=18 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:47Z] Failed to unmount attempt=19 err="device or resource busy" path=/var/lib/incus/storage-pools/default/virtual-machines/test
DEBUG [2025-04-08T16:45:47Z] Failure for operation class=task description="Starting instance" err="Unable to locate matching firmware: [{Code:/opt/incus/share/qemu/OVMF_CODE.4MB.fd Vars:/opt/incus/share/qemu/OVMF_VARS.4MB.ms.fd} {Code:/usr/share/OVMF/OVMF_CODE_4M.ms.fd Vars:/usr/share/OVMF/OVMF_VARS_4M.ms.fd}]" operation=33940ed4-8228-4913-aeee-dcec49aa3530 project=default
INFO [2025-04-08T16:45:47Z] ID: 33940ed4-8228-4913-aeee-dcec49aa3530, Class: task, Description: Starting instance CreatedAt="2025-04-08 16:45:37.486247447 +0000 UTC" Err="Unable to locate matching firmware: [{Code:/opt/incus/share/qemu/OVMF_CODE.4MB.fd Vars:/opt/incus/share/qemu/OVMF_VARS.4MB.ms.fd} {Code:/usr/share/OVMF/OVMF_CODE_4M.ms.fd Vars:/usr/share/OVMF/OVMF_VARS_4M.ms.fd}]" Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/test]]" Status=Failure StatusCode=Failure UpdatedAt="2025-04-08 16:45:37.486247447 +0000 UTC"
DEBUG [2025-04-08T16:45:47Z] Start finished instance=test instanceType=virtual-machine project=default stateful=false
DEBUG [2025-04-08T16:45:47Z] UnmountInstance finished driver=zfs instance=test pool=default project=default
DEBUG [2025-04-08T16:45:47Z] Event listener server handler stopped listener=e9751ea7-7c71-488e-97ac-68534f89fbba local=/var/lib/incus/unix.socket remote=@
stgraber (Stéphane Graber), April 8, 2025, 7:28pm
Can you try to manually run umount /var/lib/incus/storage-pools/default/virtual-machines/test to see if you get the same error?
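When umount reports that the target is busy, it can also help to see which processes hold the mountpoint open. This is a general troubleshooting sketch, not a command from the thread:

# Show processes with open files on the mountpoint (fuser is part of psmisc)
sudo fuser -vm /var/lib/incus/storage-pools/default/virtual-machines/test
# Alternative using lsof
sudo lsof +D /var/lib/incus/storage-pools/default/virtual-machines/test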
I manually tried to execute the above command:
umount: /var/lib/incus/storage-pools/default/virtual-machines/test: target is busy.
Maybe this can give you a hint: I upgraded this machine from Incus 6.4 (or thereabouts, I'm not sure) to 6.11. I also realized that the source machine's ZFS version is 2.3.1 while the target's is 2.2.2, and zpool status warns me about this situation.
indiana@incusrv01:~$ zpool status -v
pool: zfspool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:01:30 with 0 errors on Sun Feb 11 00:25:31 2024
Regards.
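To compare the ZFS userland and kernel module versions on the two hosts, something along these lines can be run on each machine (a general sketch, not commands from the thread; the pool name zfspool is taken from the output above):

# Print the ZFS userland and kernel module versions
zfs version
# Show which pool features are enabled, active or still disabled
zpool get all zfspool | grep feature@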
stgraber (Stéphane Graber), April 8, 2025, 8:59pm
Can you show ps fauww?
I wonder if there is some leftover process from the migration that’s keeping the mountpoint active.
stgraber: ps fauww
indiana@incusrv01:~$ ps fauxww
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2 0.0 0.0 0 0 ? S 16:43 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 16:43 0:00 \_ [pool_workqueue_release]
root 4 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-rcu_g]
root 5 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-rcu_p]
root 6 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-slub_]
root 7 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-netns]
root 9 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/0:0H-events_highpri]
root 12 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mm_pe]
root 13 0.0 0.0 0 0 ? I 16:43 0:00 \_ [rcu_tasks_kthread]
root 14 0.0 0.0 0 0 ? I 16:43 0:00 \_ [rcu_tasks_rude_kthread]
root 15 0.0 0.0 0 0 ? I 16:43 0:00 \_ [rcu_tasks_trace_kthread]
root 16 0.0 0.0 0 0 ? S 16:43 0:00 \_ [ksoftirqd/0]
root 17 0.0 0.0 0 0 ? I 16:43 0:00 \_ [rcu_preempt]
root 18 0.0 0.0 0 0 ? S 16:43 0:00 \_ [migration/0]
root 19 0.0 0.0 0 0 ? S 16:43 0:00 \_ [idle_inject/0]
root 20 0.0 0.0 0 0 ? S 16:43 0:00 \_ [cpuhp/0]
root 21 0.0 0.0 0 0 ? S 16:43 0:00 \_ [cpuhp/1]
root 22 0.0 0.0 0 0 ? S 16:43 0:00 \_ [idle_inject/1]
root 23 0.0 0.0 0 0 ? S 16:43 0:00 \_ [migration/1]
root 24 0.0 0.0 0 0 ? S 16:43 0:00 \_ [ksoftirqd/1]
root 26 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/1:0H-events_highpri]
root 27 0.0 0.0 0 0 ? S 16:43 0:00 \_ [cpuhp/2]
root 28 0.0 0.0 0 0 ? S 16:43 0:00 \_ [idle_inject/2]
root 29 0.0 0.0 0 0 ? S 16:43 0:00 \_ [migration/2]
root 30 0.0 0.0 0 0 ? S 16:43 0:00 \_ [ksoftirqd/2]
root 32 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/2:0H-events_highpri]
root 33 0.0 0.0 0 0 ? S 16:43 0:00 \_ [cpuhp/3]
root 34 0.0 0.0 0 0 ? S 16:43 0:00 \_ [idle_inject/3]
root 35 0.0 0.0 0 0 ? S 16:43 0:00 \_ [migration/3]
root 36 0.0 0.0 0 0 ? S 16:43 0:00 \_ [ksoftirqd/3]
root 38 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/3:0H-events_highpri]
root 39 0.0 0.0 0 0 ? S 16:43 0:00 \_ [kdevtmpfs]
root 40 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-inet_]
root 42 0.0 0.0 0 0 ? S 16:43 0:00 \_ [kauditd]
root 44 0.0 0.0 0 0 ? S 16:43 0:00 \_ [khungtaskd]
root 46 0.0 0.0 0 0 ? S 16:43 0:00 \_ [oom_reaper]
root 47 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-write]
root 48 0.0 0.0 0 0 ? S 16:43 0:02 \_ [kcompactd0]
root 49 0.0 0.0 0 0 ? SN 16:43 0:03 \_ [ksmd]
root 50 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [khugepaged]
root 51 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kinte]
root 52 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kbloc]
root 53 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-blkcg]
root 54 0.0 0.0 0 0 ? S 16:43 0:00 \_ [irq/9-acpi]
root 57 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-tpm_d]
root 58 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ata_s]
root 59 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-md]
root 60 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-md_bi]
root 61 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-edac-]
root 62 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-devfr]
root 63 0.0 0.0 0 0 ? S 16:43 0:00 \_ [watchdogd]
root 64 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/0:1H-kblockd]
root 65 0.0 0.0 0 0 ? S 16:43 0:03 \_ [kswapd0]
root 66 0.0 0.0 0 0 ? S 16:43 0:00 \_ [ecryptfs-kthread]
root 67 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kthro]
root 68 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-acpi_]
root 71 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mld]
root 72 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/3:1H-kblockd]
root 73 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ipv6_]
root 80 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kstrp]
root 82 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/u9:0]
root 87 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-crypt]
root 97 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-charg]
root 126 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/1:1H-kblockd]
root 146 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/2:1H-kblockd]
root 164 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mlx4]
root 165 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mlx4_]
root 174 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_0]
root 175 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 176 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_1]
root 177 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 178 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_2]
root 179 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 180 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_3]
root 181 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 182 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_4]
root 183 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 184 0.0 0.0 0 0 ? S 16:43 0:00 \_ [scsi_eh_5]
root 185 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-scsi_]
root 241 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mlx4_]
root 242 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib-co]
root 243 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib-co]
root 244 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib_mc]
root 245 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib_nl]
root 246 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mlx4_]
root 247 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-mlx4_]
root 248 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib_ma]
root 249 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ib_ma]
root 269 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-raid5]
root 314 0.0 0.0 0 0 ? S 16:43 0:00 \_ [jbd2/sda2-8]
root 315 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ext4-]
root 409 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-nvme-]
root 410 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-nvme-]
root 411 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-nvme-]
root 412 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-nvme-]
root 413 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kmpat]
root 414 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-kmpat]
root 469 0.0 0.0 0 0 ? S 16:43 0:00 \_ [psimon]
root 623 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [spl_system_task]
root 624 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [spl_delay_taskq]
root 625 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [spl_dynamic_tas]
root 626 0.0 0.0 0 0 ? S< 16:43 0:04 \_ [spl_kmem_cache]
root 627 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-ipmi-]
root 637 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [kipmi0]
root 698 0.0 0.0 0 0 ? S< 16:43 0:04 \_ [zvol]
root 699 0.0 0.0 0 0 ? S 16:43 0:00 \_ [arc_prune]
root 700 0.0 0.0 0 0 ? S 16:43 0:14 \_ [arc_evict]
root 701 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [arc_reap]
root 702 0.0 0.0 0 0 ? S 16:43 0:00 \_ [dbu_evict]
root 703 0.1 0.0 0 0 ? SN 16:43 0:22 \_ [dbuf_evict]
root 704 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [z_vdev_file]
root 706 0.0 0.0 0 0 ? S 16:43 0:00 \_ [nv_queue]
root 707 0.0 0.0 0 0 ? S 16:43 0:00 \_ [nv_queue]
root 744 0.0 0.0 0 0 ? S 16:43 0:00 \_ [nvidia-modeset/kthread_q]
root 745 0.0 0.0 0 0 ? S 16:43 0:00 \_ [nvidia-modeset/deferred_close_kthread_q]
root 755 0.0 0.0 0 0 ? S 16:43 0:00 \_ [UVM global queue]
root 756 0.0 0.0 0 0 ? S 16:43 0:00 \_ [UVM deferred release queue]
root 757 0.0 0.0 0 0 ? S 16:43 0:00 \_ [UVM Tools Event Queue]
root 766 0.0 0.0 0 0 ? S 16:43 0:00 \_ [l2arc_feed]
root 931 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-cfg80]
root 1080 0.0 0.0 0 0 ? I< 16:43 0:00 \_ [kworker/R-dio/s]
root 1140 0.0 0.0 0 0 ? S< 16:43 0:02 \_ [z_null_iss]
root 1141 0.0 0.0 0 0 ? S< 16:43 0:08 \_ [z_null_int]
root 1142 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_rd_iss]
root 1143 0.1 0.0 0 0 ? S< 16:43 0:21 \_ [z_rd_int]
root 1144 0.6 0.0 0 0 ? S< 16:43 1:49 \_ [z_wr_iss]
root 1145 0.0 0.0 0 0 ? S< 16:43 0:06 \_ [z_wr_iss_h]
root 1146 0.4 0.0 0 0 ? S< 16:43 1:06 \_ [z_wr_int]
root 1147 0.0 0.0 0 0 ? S< 16:43 0:10 \_ [z_wr_int_h]
root 1148 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_fr_iss]
root 1149 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_fr_int]
root 1150 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_cl_iss]
root 1151 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_cl_int]
root 1152 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_ioctl_iss]
root 1153 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_ioctl_int]
root 1154 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_trim_iss]
root 1155 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_trim_int]
root 1156 0.0 0.0 0 0 ? S 16:43 0:00 \_ [z_zvol]
root 1157 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [z_metaslab]
root 1158 0.0 0.0 0 0 ? S 16:43 0:00 \_ [z_prefetch]
root 1159 0.0 0.0 0 0 ? S 16:43 0:00 \_ [z_upgrade]
root 1167 0.0 0.0 0 0 ? SN 16:43 0:02 \_ [dp_sync_taskq]
root 1168 0.0 0.0 0 0 ? SN 16:43 0:01 \_ [dp_sync_taskq]
root 1169 0.0 0.0 0 0 ? SN 16:43 0:02 \_ [dp_sync_taskq]
root 1170 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [dp_zil_clean_ta]
root 1171 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [dp_zil_clean_ta]
root 1172 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [dp_zil_clean_ta]
root 1173 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [dp_zil_clean_ta]
root 1174 0.0 0.0 0 0 ? S 16:43 0:00 \_ [z_zrele]
root 1175 0.0 0.0 0 0 ? S 16:43 0:00 \_ [z_unlinked_drai]
root 1208 0.0 0.0 0 0 ? S 16:43 0:00 \_ [txg_quiesce]
root 1209 0.0 0.0 0 0 ? S 16:43 0:05 \_ [txg_sync]
root 1210 0.0 0.0 0 0 ? S 16:43 0:00 \_ [mmp]
root 1216 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [z_indirect_cond]
root 1217 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [z_livelist_dest]
root 1218 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [z_livelist_cond]
root 1219 0.0 0.0 0 0 ? SN 16:43 0:00 \_ [z_checkpoint_di]
root 1252 0.0 0.0 0 0 ? S< 16:43 0:00 \_ [spl_system_task]
root 3131 0.0 0.0 0 0 ? I 16:48 0:01 \_ [kworker/3:0-events]
root 3943 0.0 0.0 0 0 ? I< 17:08 0:00 \_ [kworker/R-tls-s]
root 5431 0.0 0.0 0 0 ? S< 18:00 0:00 \_ [z_rd_iss]
root 5432 0.0 0.0 0 0 ? S< 18:00 0:00 \_ [z_rd_iss]
root 5433 0.0 0.0 0 0 ? S< 18:00 0:00 \_ [z_rd_iss]
root 7241 0.0 0.0 0 0 ? I 19:20 0:00 \_ [kworker/1:2-events]
root 7772 0.0 0.0 0 0 ? I 19:35 0:00 \_ [kworker/2:0-mm_percpu_wq]
root 9034 0.0 0.0 0 0 ? I 20:08 0:02 \_ [kworker/u8:3-events_power_efficient]
root 9489 0.0 0.0 0 0 ? S 20:17 0:00 \_ [psimon]
root 9517 0.0 0.0 0 0 ? I 20:18 0:00 \_ [kworker/3:2-rcu_gp]
root 15099 0.0 0.0 0 0 ? I 20:24 0:00 \_ [kworker/0:1-mm_percpu_wq]
root 15238 0.0 0.0 0 0 ? I 20:43 0:00 \_ [kworker/u8:2-events_unbound]
root 15315 0.0 0.0 0 0 ? I 20:45 0:00 \_ [kworker/2:1-rcu_gp]
root 15316 0.0 0.0 0 0 ? I 20:45 0:00 \_ [kworker/1:0-rcu_gp]
root 15317 0.0 0.0 0 0 ? I 20:45 0:00 \_ [kworker/0:0-rcu_gp]
root 15406 0.0 0.0 0 0 ? I< 20:45 0:00 \_ [kworker/R-vfio-]
root 15846 0.0 0.0 0 0 ? I 20:55 0:00 \_ [kworker/u8:0-mlx4_en]
root 15885 0.0 0.0 0 0 ? S< 20:56 0:00 \_ [z_wr_int_h]
root 15896 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15912 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15913 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15920 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15923 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15929 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15930 0.0 0.0 0 0 ? S< 20:57 0:00 \_ [zvol]
root 15952 0.1 0.0 0 0 ? S< 20:58 0:01 \_ [z_wr_iss]
root 15972 0.1 0.0 0 0 ? S< 20:59 0:00 \_ [z_wr_int]
root 15992 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15993 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15994 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15995 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15996 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15997 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15998 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 15999 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 16001 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 16002 0.0 0.0 0 0 ? S< 21:00 0:00 \_ [zvol]
root 16013 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16014 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16015 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16016 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16017 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16018 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16019 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16020 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16021 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16028 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [z_wr_int_h]
root 16034 0.0 0.0 0 0 ? S< 21:01 0:00 \_ [zvol]
root 16053 0.0 0.0 0 0 ? S< 21:02 0:00 \_ [z_wr_iss_h]
root 16072 0.0 0.0 0 0 ? I 21:03 0:00 \_ [kworker/u8:1-events_power_efficient]
root 16086 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [zvol]
root 16087 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [zvol]
root 16088 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [zvol]
root 16089 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [zvol]
root 16092 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [z_wr_iss_h]
root 16093 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [z_wr_iss_h]
root 16094 0.0 0.0 0 0 ? S< 21:04 0:00 \_ [z_wr_iss_h]
root 16105 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_wr_iss]
root 16106 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_wr_int]
root 16109 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_rd_int]
root 16110 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_rd_int]
root 16113 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_wr_int_h]
root 16114 0.0 0.0 0 0 ? S< 21:05 0:00 \_ [z_wr_int_h]
root 16115 0.0 0.0 0 0 ? I 21:05 0:00 \_ [kworker/0:2-kvm-irqfd-cleanup]
root 16116 0.0 0.0 0 0 ? I 21:05 0:00 \_ [kworker/1:1-rcu_gp]
root 16117 0.0 0.0 0 0 ? I 21:05 0:00 \_ [kworker/2:2-rcu_gp]
root 16121 0.0 0.0 0 0 ? I 21:05 0:00 \_ [kworker/2:3-rcu_gp]
root 16129 0.0 0.0 0 0 ? I 21:05 0:00 \_ [kworker/3:1]
root 16184 0.0 0.0 0 0 ? S 21:05 0:00 \_ [kvm-nx-lpage-recovery-16182]
root 16190 0.0 0.0 0 0 ? S 21:05 0:00 \_ [kvm-pit/16182]
root 1 0.0 0.0 22928 13556 ? Ss 16:43 0:02 /sbin/init
root 385 0.0 0.0 67116 17900 ? S<s 16:43 0:00 /usr/lib/systemd/systemd-journald
root 446 0.0 0.0 289252 27264 ? SLsl 16:43 0:01 /sbin/multipathd -d -s
root 455 0.0 0.0 29196 7636 ? Ss 16:43 0:00 /usr/lib/systemd/systemd-udevd
systemd+ 835 0.0 0.0 21584 12672 ? Ss 16:43 0:00 /usr/lib/systemd/systemd-resolved
systemd+ 850 0.0 0.0 91020 7680 ? Ssl 16:43 0:00 /usr/lib/systemd/systemd-timesyncd
systemd+ 927 0.0 0.0 22116 9728 ? Ss 16:43 0:00 /usr/lib/systemd/systemd-networkd
message+ 945 0.0 0.0 9652 5248 ? Ss 16:43 0:00 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
nvidia-+ 957 0.0 0.0 5228 2048 ? Ss 16:43 0:00 /usr/bin/nvidia-persistenced --user nvidia-persistenced --no-persistence-mode --verbose
root 958 0.0 0.0 12664 6912 ? Ss 16:43 0:00 /usr/sbin/smartd -n
root 959 0.0 0.0 18160 8576 ? Ss 16:43 0:00 /usr/lib/systemd/systemd-logind
root 960 0.0 0.0 422272 11136 ? Ssl 16:43 0:01 /usr/sbin/thermald --systemd --dbus-enable --adaptive
root 962 0.0 0.0 241388 6272 ? Ssl 16:43 0:00 zed -F
root 964 0.0 0.0 302748 2816 ? Ssl 16:43 0:00 /opt/incus/bin/lxcfs /var/lib/incus-lxcfs
root 979 0.0 0.0 6824 2688 ? Ss 16:43 0:00 /usr/sbin/cron -f -P
root 1010 0.0 0.0 109644 23040 ? Ssl 16:43 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 1019 0.0 0.0 6104 1920 tty1 Ss+ 16:43 0:00 /sbin/agetty -o -p -- \u --noclear - linux
syslog 1025 0.0 0.0 222508 5888 ? Ssl 16:43 0:00 /usr/sbin/rsyslogd -n -iNONE
root 1440 0.0 0.0 5937936 16340 ? Ss 16:43 0:00 [lxc monitor] /var/lib/incus/containers gluster_glstr01
1000000 1450 0.0 0.0 165324 7168 ? Ss 16:43 0:00 \_ /sbin/init
1000000 1793 0.0 0.0 46464 13696 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-journald
1000000 1891 0.0 0.0 21440 3200 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-udevd
1000100 2151 0.0 0.0 16124 5248 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-networkd
1000101 2157 0.0 0.0 25532 9824 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-resolved
1000000 2217 0.0 0.0 9496 1536 ? Ss 16:43 0:00 \_ /usr/sbin/cron -f -P
1000102 2220 0.0 0.0 8596 2944 ? Ss 16:43 0:00 \_ @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
1000000 2224 0.0 0.0 35324 15360 ? Ss 16:43 0:00 \_ /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
1000104 2226 0.0 0.0 152772 3328 ? Ssl 16:43 0:00 \_ /usr/sbin/rsyslogd -n -iNONE
1000000 2230 0.0 0.0 15008 5120 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-logind
1000000 2243 0.0 0.0 8400 1280 pts/0 Ss+ 16:43 0:00 \_ /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
1000000 2255 0.0 0.0 112448 16380 ? Ssl 16:43 0:00 \_ /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 1489 0.0 0.0 6011668 16364 ? Ss 16:43 0:00 [lxc monitor] /var/lib/incus/containers gluster_glstr02
1000000 1521 0.0 0.0 99788 7168 ? Ss 16:43 0:00 \_ /sbin/init
1000000 1833 0.0 0.0 46868 13824 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-journald
1000000 1924 0.0 0.0 21440 3456 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-udevd
1000100 2152 0.0 0.0 16124 5504 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-networkd
1000101 2159 0.0 0.0 25532 9584 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-resolved
1000000 2218 0.0 0.0 9496 1536 ? Ss 16:43 0:00 \_ /usr/sbin/cron -f -P
1000102 2219 0.0 0.0 8600 2944 ? Ss 16:43 0:00 \_ @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
1000000 2228 0.0 0.0 35324 15360 ? Ss 16:43 0:00 \_ /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
1000104 2229 0.0 0.0 152772 3456 ? Ssl 16:43 0:00 \_ /usr/sbin/rsyslogd -n -iNONE
1000000 2232 0.0 0.0 15012 5120 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-logind
1000000 2238 0.0 0.0 8400 1408 pts/0 Ss+ 16:43 0:00 \_ /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
1000000 2252 0.0 0.0 112448 16380 ? Ssl 16:43 0:00 \_ /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 1752 0.0 0.0 5938192 16408 ? Ss 16:43 0:00 [lxc monitor] /var/lib/incus/containers gluster_glstr03
1000000 1791 0.0 0.0 165460 7168 ? Ss 16:43 0:00 \_ /sbin/init
1000000 2053 0.0 0.0 47240 13696 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-journald
1000000 2109 0.0 0.0 21440 3456 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-udevd
1000100 2153 0.0 0.0 16124 5632 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-networkd
1000101 2158 0.0 0.0 25532 9596 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-resolved
1000000 2231 0.0 0.0 9496 1536 ? Ss 16:43 0:00 \_ /usr/sbin/cron -f -P
1000102 2233 0.0 0.0 8592 2816 ? Ss 16:43 0:00 \_ @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
1000000 2240 0.0 0.0 34300 15220 ? Ss 16:43 0:00 \_ /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
1000104 2246 0.0 0.0 152772 3328 ? Ssl 16:43 0:00 \_ /usr/sbin/rsyslogd -n -iNONE
1000000 2247 0.0 0.0 15008 4992 ? Ss 16:43 0:00 \_ /lib/systemd/systemd-logind
1000000 2256 0.0 0.0 8400 1408 pts/0 Ss+ 16:43 0:00 \_ /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
1000000 2259 0.0 0.0 112448 16380 ? Ssl 16:43 0:00 \_ /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 2291 0.0 0.0 12020 7936 ? Ss 16:43 0:00 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 2292 0.0 0.0 14960 7664 ? Ss 16:43 0:00 \_ sshd: indiana [priv]
indiana 2687 0.0 0.0 15120 6980 ? S 16:43 0:05 | \_ sshd: indiana@pts/0
indiana 2688 0.0 0.0 8780 5632 pts/0 Ss+ 16:43 0:00 | \_ -bash
root 2840 0.0 0.0 14964 7920 ? Ss 16:44 0:00 \_ sshd: indiana [priv]
indiana 2906 0.0 0.0 15124 6852 ? S 16:45 0:01 | \_ sshd: indiana@pts/1
indiana 2907 0.0 0.0 8780 5504 pts/1 Ss 16:45 0:00 | \_ -bash
indiana 16311 0.0 0.0 12468 5376 pts/1 R+ 21:08 0:00 | \_ ps fauxww
root 3190 0.0 0.0 14964 7920 ? Ss 16:50 0:00 \_ sshd: indiana [priv]
indiana 3245 0.0 0.0 15124 6852 ? S 16:50 0:04 | \_ sshd: indiana@pts/2
indiana 3246 0.0 0.0 8912 5632 pts/2 Ss+ 16:50 0:00 | \_ -bash
root 7009 0.0 0.0 14964 7920 ? Ss 19:14 0:00 \_ sshd: indiana [priv]
indiana 7087 0.1 0.0 15124 6852 ? S 19:14 0:06 \_ sshd: indiana@pts/3
indiana 7088 0.0 0.0 9560 6400 pts/3 Ss+ 19:14 0:00 \_ -bash
indiana 2597 0.0 0.0 20508 11648 ? Ss 16:43 0:00 /usr/lib/systemd/systemd --user
indiana 2598 0.0 0.0 21148 3520 ? S 16:43 0:00 \_ (sd-pam)
polkitd 14645 0.0 0.0 308164 7808 ? Ssl 20:19 0:00 /usr/lib/polkit-1/polkitd --no-debug
root 14656 0.2 0.5 6796984 177336 ? Ssl 20:19 0:07 incusd --group incus-admin --logfile /var/log/incus/incusd.log
incus 14808 0.0 0.0 14472 5120 ? Ss 20:19 0:00 \_ dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=incusbr0 --dhcp-rapid-commit --no-negcache --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.143.8.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/incus/networks/incusbr0/dnsmasq.leases --dhcp-hostsfile=/var/lib/incus/networks/incusbr0/dnsmasq.hosts --dhcp-range 10.143.8.2,10.143.8.254,1h --listen-address=fd42:d7b9:ac23:b60e::1 --enable-ra --dhcp-range fd42:d7b9:ac23:b60e::2,fd42:d7b9:ac23:b60e:ffff:ffff:ffff:ffff,64,1h -s incus --interface-name _gateway.incus,incusbr0 -S /incus/ --conf-file=/var/lib/incus/networks/incusbr0/dnsmasq.raw -u incus -g incus
incus 16182 6.5 3.3 1893868 1114440 ? SLl 21:05 0:12 /opt/incus/bin/qemu-system-x86_64 -S -name nanogpt -uuid a4dafadd-b640-478c-9ca1-5b9c41ec9d89 -daemonize -cpu host,hv_passthrough -nographic -serial chardev:console -nodefaults -no-user-config -sandbox on,obsolete=deny,elevateprivileges=allow,spawn=allow,resourcecontrol=deny -readconfig /run/incus/nanogpt/qemu.conf -spice unix=on,disable-ticketing=on,addr=/run/incus/nanogpt/qemu.spice -pidfile /run/incus/nanogpt/qemu.pid -D /var/log/incus/nanogpt/qemu.log -smbios type=2,manufacturer=LinuxContainers,product=Incus -runas incus
stgraber (Stéphane Graber), April 8, 2025, 11:22pm
Hmm, nope, that seems fine. Assuming the error is still happening if you try the umount again, can you then try:
ls -lh /proc/14656/task/*/fd/ | grep test
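That command only inspects the incusd process (PID 14656 at the time). A broader sweep over all processes, as a general sketch rather than a command from the thread, would be:

# Find any process holding a file descriptor under the test mountpoint
sudo find /proc/[0-9]*/fd -lname '*virtual-machines/test*' 2>/dev/null
# Check whether other mount namespaces still have the path mounted
sudo grep -l 'virtual-machines/test' /proc/[0-9]*/mountinfo 2>/dev/null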
I rebooted the machine, so the process ID changed; here is the relevant output:
root 1038 0.1 0.5 6700328 174808 ? Ssl 17:09 0:04 incusd --group incus-admin --logfile /var/log/incus/incusd.log
incus 1401 0.0 0.0 14472 5120 ? Ss 17:09 0:00 \_ dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=incusbr0 --dhcp-rapid-commit --no-negcache --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.143.8.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/incus/networks/incusbr0/dnsmasq.leases --dhcp-hostsfile=/var/lib/incus/networks/incusbr0/dnsmasq.hosts --dhcp-range 10.143.8.2,10.143.8.254,1h --listen-address=fd42:d7b9:ac23:b60e::1 --enable-ra --dhcp-range fd42:d7b9:ac23:b60e::2,fd42:d7b9:ac23:b60e:ffff:ffff:ffff:ffff,64,1h -s incus --interface-name _gateway.incus,incusbr0 -S /incus/ --conf-file=/var/lib/incus/networks/incusbr0/dnsmasq.raw -u incus -g incus
sudo ls -lh /proc/1038/task/*/fd/ | grep test
Nothing displayed.
Regards.
Hi,
I have recently upgraded the Incus packages. I copied the instance again with the stateless option; it still does not start, so I tried to delete it, but an error occurred.
indiana@incusrv01:~$ dpkg -l | grep -i incus
ii incus 1:6.11-ubuntu24.04-202504101646 amd64 Incus - Container and virtualization daemon
ii incus-base 1:6.11-ubuntu24.04-202504101646 amd64 Incus - Container and virtualization daemon (container-only)
ii incus-client 1:6.11-ubuntu24.04-202504101646 amd64 Incus - Command line client
And now the error message has changed:
indiana@incusrv01:~$ incus delete c1
Error: Failed deleting instance "c1" in project "default": Failed deleting instance snapshots: Failed to run: zfs list -H -p -o name,used,referenced -t snapshot zfspool/virtual-machines/c1.block: exit status 1 (cannot open 'zfspool/virtual-machines/c1.block': dataset does not exist)
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=GET protocol=unix url=/1.0 username=indiana
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/instances/c1 username=indiana
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/instances/c1 username=indiana
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/events username=indiana
DEBUG [2025-04-13T15:36:00Z] Event listener server handler started id=8fe970e0-da75-44d0-9de8-0575c44efd77 local=/var/lib/incus/unix.socket remote=@
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=DELETE protocol=unix url=/1.0/instances/c1 username=indiana
INFO [2025-04-13T15:36:00Z] Deleting instance created="2025-04-13 15:19:18.539942915 +0000 UTC" ephemeral=false instance=c1 instanceType=virtual-machine project=default used="1970-01-01 00:00:00 +0000 UTC"
INFO [2025-04-13T15:36:00Z] ID: 5d6cacdd-e1db-4ad5-b213-983438b93c38, Class: task, Description: Deleting instance CreatedAt="2025-04-13 15:36:00.367693583 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/c1]]" Status=Pending StatusCode=Pending UpdatedAt="2025-04-13 15:36:00.367693583 +0000 UTC"
DEBUG [2025-04-13T15:36:00Z] Started operation class=task description="Deleting instance" operation=5d6cacdd-e1db-4ad5-b213-983438b93c38 project=default
DEBUG [2025-04-13T15:36:00Z] New operation class=task description="Deleting instance" operation=5d6cacdd-e1db-4ad5-b213-983438b93c38 project=default
INFO [2025-04-13T15:36:00Z] ID: 5d6cacdd-e1db-4ad5-b213-983438b93c38, Class: task, Description: Deleting instance CreatedAt="2025-04-13 15:36:00.367693583 +0000 UTC" Err= Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/c1]]" Status=Running StatusCode=Running UpdatedAt="2025-04-13 15:36:00.367693583 +0000 UTC"
DEBUG [2025-04-13T15:36:00Z] Instance operation lock created action=delete instance=c1 project=default reusable=false
DEBUG [2025-04-13T15:36:00Z] Handling API request ip=@ method=GET protocol=unix url=/1.0/operations/5d6cacdd-e1db-4ad5-b213-983438b93c38 username=indiana
DEBUG [2025-04-13T15:36:00Z] CacheInstanceSnapshots started driver=zfs instance=c1 pool=default project=default
DEBUG [2025-04-13T15:36:00Z] CacheInstanceSnapshots finished driver=zfs instance=c1 pool=default project=default
DEBUG [2025-04-13T15:36:00Z] Instance operation lock finished action=delete err="<nil>" instance=c1 project=default reusable=false
INFO [2025-04-13T15:36:00Z] ID: 5d6cacdd-e1db-4ad5-b213-983438b93c38, Class: task, Description: Deleting instance CreatedAt="2025-04-13 15:36:00.367693583 +0000 UTC" Err="Failed deleting instance snapshots: Failed to run: zfs list -H -p -o name,used,referenced -t snapshot zfspool/virtual-machines/c1.block: exit status 1 (cannot open 'zfspool/virtual-machines/c1.block': dataset does not exist)" Location=none MayCancel=false Metadata="map[]" Resources="map[instances:[/1.0/instances/c1]]" Status=Failure StatusCode=Failure UpdatedAt="2025-04-13 15:36:00.367693583 +0000 UTC"
DEBUG [2025-04-13T15:36:00Z] Failure for operation class=task description="Deleting instance" err="Failed deleting instance snapshots: Failed to run: zfs list -H -p -o name,used,referenced -t snapshot zfspool/virtual-machines/c1.block: exit status 1 (cannot open 'zfspool/virtual-machines/c1.block': dataset does not exist)" operation=5d6cacdd-e1db-4ad5-b213-983438b93c38 project=default
DEBUG [2025-04-13T15:36:00Z] Event listener server handler stopped listener=8fe970e0-da75-44d0-9de8-0575c44efd77 local=/var/lib/incus/unix.socket remote=@
indiana@incusrv01:~$ incus info c1 --show-log
Error: Failed to run: zfs list -H -p -o name,used,referenced -t snapshot zfspool/virtual-machines/c1.block: exit status 1 (cannot open 'zfspool/virtual-machines/c1.block': dataset does not exist)
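Since the delete fails because the c1.block dataset is missing, listing what actually exists under the pool can confirm the state (a general sketch; the pool name zfspool comes from the thread):

# List all datasets, volumes and snapshots under the virtual-machines parent
zfs list -r -t all zfspool/virtual-machines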
stgraber (Stéphane Graber), April 13, 2025, 3:48pm
A workaround for this situation would be to do a temporary zfs create zfspool/virtual-machines/c1.block.
I’ll add a check to the snapshot list code to avoid failing in the future.
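A minimal sketch of that workaround, assuming the pool layout shown in the thread; the placeholder dataset only exists so the snapshot listing can succeed and is expected to be removed along with the instance:

# Create a placeholder dataset where Incus expects the missing block volume
sudo zfs create zfspool/virtual-machines/c1.block
# Retry the delete now that the snapshot listing can run
incus delete c1
# If the placeholder is still around afterwards, remove it manually
sudo zfs destroy zfspool/virtual-machines/c1.block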
Thanks, Stéphane, for the patch.
Regards,