Welcome!
In such cases it’s good to perform one (or a few) trial runs before doing the actual thing. Let’s give it a go.
We launch a VM that will act as our Incus server. We then (on the host) create a storage volume of type block. Next, we attach that block storage volume to the VM; inside the VM it shows up as /dev/sdb. Subsequently, we run incus admin init and configure Incus to use the block device /dev/sdb.
$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume create default IncusStorage --type=block size=6GiB
Storage volume IncusStorage created
$ incus storage volume attach default IncusStorage incusserver
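As an optional sanity check (run on the host, not part of the original session), the attached volume should now show up as a disk device in the instance configuration:

incus config show incusserver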
$ incus shell incusserver
root@incusserver:~# fdisk -l /dev/sdb
Disk /dev/sdb: 6 GiB, 6442450944 bytes, 12582912 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@incusserver:~# sudo apt install -y incus zfsutils-linux
...
root@incusserver:~# incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, zfs) [default=zfs]:
Create a new ZFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: yes
Path to the existing block device: /dev/sdb
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: incusbr0
  type: ""
  project: default
storage_pools:
- config:
    source: /dev/sdb
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: incusbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
root@incusserver:~#
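Since we asked for the preseed to be printed, the same answers can be replayed non-interactively next time, which is exactly what makes a trial run useful. Save the YAML above to a file (the name below is just an example) and feed it back in:

incus admin init --preseed < preseed.yaml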
Next, we populate the Incus installation with a few Alpine containers.
root@incusserver:~# incus launch images:alpine/edge alpine1
Launching alpine1
root@incusserver:~# incus launch images:alpine/edge alpine2
Launching alpine2
root@incusserver:~# incus launch images:alpine/edge alpine3
Launching alpine3
root@incusserver:~#
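If you want something concrete to verify after the recovery, you could optionally drop a marker file into one of the containers at this point (an extra step, not part of the original session):

incus exec alpine1 -- sh -c 'echo survived > /root/marker'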
This is where the interesting stuff starts. We now want to shut down the Incus server and remove it. However, the block storage volume will still be there, as the server has been shut down cleanly.
root@incusserver:~# shutdown -h now
root@incusserver:~# Error: websocket: close 1006 (abnormal closure): unexpected EOF
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by:
- /1.0/instances/incusserver
location: none
content_type: block
project: default
created_at: ...
$ incus delete incusserver
$ incus storage volume show default IncusStorage
config:
  size: 6GiB
description: ""
name: IncusStorage
type: custom
used_by: []
location: none
content_type: block
project: default
created_at: ...
$
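Deleting the VM released the attachment automatically, as the empty used_by list shows. If you prefer to be explicit, you could also detach the volume yourself before deleting the instance; the end result is the same:

incus storage volume detach default IncusStorage incusserver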
Next, we launch a new VM, attach the block storage volume back to it, and install Incus.
$ incus launch images:ubuntu/24.04/cloud --vm incusserver
Launching incusserver
$ incus storage volume attach default IncusStorage incusserver
$ incus shell incusserver
Error: Instance is not running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
Error: VM agent isn't currently running
$ incus shell incusserver
root@incusserver:~# apt install -y zfsutils-linux incus
...
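The repeated errors above are just the VM agent not being ready yet. Instead of retrying incus shell by hand, you could wait for the agent in a small loop on the host (a sketch; the interval is arbitrary):

until incus exec incusserver -- true >/dev/null 2>&1; do sleep 3; done
incus shell incusserver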
Finally, we bring back the old installation data, including those three Alpines.
root@incusserver:~# zfs list
no datasets available
root@incusserver:~# zpool list
no pools available
root@incusserver:~# zpool import
   pool: default
     id: 8311839500301555365
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        default     ONLINE
          sdb       ONLINE
root@incusserver:~# zpool import default
root@incusserver:~# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
default 5.50G 6.80M 5.49G - - 0% 0% 1.00x ONLINE -
root@incusserver:~#
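With the pool imported, the old datasets (including the three containers) should already be visible inside the VM; a quick check before handing things over to Incus, output omitted here:

zfs list -r default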
root@incusserver:~# incus admin recover
This server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (zfs, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /dev/sdb
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
- NEW: "default" (backend="zfs", source="/dev/sdb")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:
Scanning for unknown volumes...
The following unknown storage pools have been found:
- Storage pool "default" of type "zfs"
The following unknown volumes have been found:
- Container "alpine2" on pool "default" in project "default" (includes 0 snapshots)
- Container "alpine3" on pool "default" in project "default" (includes 0 snapshots)
- Container "alpine1" on pool "default" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
root@incusserver:~# incus list
+---------+---------+------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------+---------+------+------+-----------+-----------+
| alpine1 | STOPPED | | | CONTAINER | 0 |
+---------+---------+------+------+-----------+-----------+
| alpine2 | STOPPED | | | CONTAINER | 0 |
+---------+---------+------+------+-----------+-----------+
| alpine3 | STOPPED | | | CONTAINER | 0 |
+---------+---------+------+------+-----------+-----------+
root@incusserver:~#
Do they work? They sure do.
root@incusserver:~# incus start alpine1 alpine2 alpine3
root@incusserver:~# incus list -c ns4t
+---------+---------+----------------------+-----------+
| NAME | STATE | IPV4 | TYPE |
+---------+---------+----------------------+-----------+
| alpine1 | RUNNING | 10.36.146.69 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine2 | RUNNING | 10.36.146.101 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
| alpine3 | RUNNING | 10.36.146.248 (eth0) | CONTAINER |
+---------+---------+----------------------+-----------+
root@incusserver:~#
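And if you created the marker file earlier, this is the moment to confirm it survived the round trip:

incus exec alpine1 -- cat /root/marker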
In summary, the important commands are zpool import and incus admin recover.
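For future reference, the whole recovery boils down to a handful of commands on the replacement server (same pool name and device path as in the trial above):

zpool import                          # discover importable pools on the attached disk
zpool import default                  # import the old pool by name
incus admin recover                   # interactively re-register the pool and its volumes
incus start alpine1 alpine2 alpine3   # bring the recovered containers back up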