Hello all,
I am trying to migrate the LXD VM named ansible from the LXD host kumo2 to the LXD host lxc1.
I followed the instructions here: How to move/migrate LXD VM to another host on Linux - nixCraft, "Method # 2 LXD VM container migration using LXD API and Simplestreams".
CONTEXT:
LXD host: kumo2
lxc-vm to migrate: ansible
Version: 5.2-79c3c3b
Storage:
kumo2:~$ lxc storage list
+-------+--------+------------------------------------------+-------------+---------+---------+
| NAME  | DRIVER | SOURCE                                   | DESCRIPTION | USED BY | STATE   |
+-------+--------+------------------------------------------+-------------+---------+---------+
| DATA1 | zfs    | /var/snap/lxd/common/lxd/disks/DATA1.img |             | 4       | CREATED |
+-------+--------+------------------------------------------+-------------+---------+---------+
LXD host: lxc1
Version: 5.2-79c3c3b
Storage:
lxc1:~$ lxc storage list
+-------+--------+--------+-------------+---------+---------+
| NAME  | DRIVER | SOURCE | DESCRIPTION | USED BY | STATE   |
+-------+--------+--------+-------------+---------+---------+
| RAID1 | zfs    | RAID1  |             | 9       | CREATED |
+-------+--------+--------+-------------+---------+---------+
The setup seems to be fine, with no errors there, but when I try to migrate the snapshot I get this message:
kumo2:~$ lxc copy ansible/snap0 lxc1:ansible --verbose
Error: Failed instance creation: Error transferring instance data: Failed decoding migration header: invalid character '\x00' looking for beginning of value
Any suggestions or comments will be really appreciated, in the meantime I will see if there are other methods that allow me to complete the migration.
Hello all,
I tried following the configuration again and this time I got this message:
kumo2:~$ lxc copy ansible/snap0 lxc1:ansible
Error: Failed instance creation: Error transferring instance data: websocket: close 1000 (normal)
I am not sure what's causing it, but I will continue investigating.
Any suggestions or comments will be greatly appreciated.
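For opaque migration failures like these, the daemon logs on both hosts usually carry more detail than the client-side error. With the snap package, two common ways to watch them while re-running the failing lxc copy in another terminal would be:

```shell
# Tail the LXD daemon log (snap packaging).
snap logs lxd -f

# Or stream LXD's internal log events via the API.
lxc monitor --type=logging --pretty
```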
Hello all,
When trying to use a different method of migration, publishing the image/snap I get this:
kumo2:~$ lxc publish ansible/snap0 --alias ansible-image
Error: Failed getting disk path: Could not locate a zvol for DATA1/virtual-machines/ansible.block@snapshot-snap0
That message about the storage at the source LXD host concerns me…
Here is what I see on the source LXD host profile:
kumo2:~$ lxc profile show default
config:
  boot.autostart: "false"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: DATA1
    type: disk
name: default
used_by:
I will continue investigating.
tomp (Thomas Parrott), June 13, 2022, 8:47am
There have been numerous issues with optimized migration since optimized refresh was added (see https://github.com/lxc/lxd/issues/10186 ).
Are both source and target running same LXD version?
There have been several fixes added since LXD 5.2 was released, so perhaps this will be fixed in LXD 5.3.
Hello @tomp,
Thank you for the update. As I shared in the context at the start of this thread, yes: both the source (kumo2, pool DATA1) and the target (lxc1, pool RAID1) are running version 5.2-79c3c3b.
I am still not sure about this error message I am getting on the source LXD host:
kumo2:~$ lxc publish ansible/snap0 --alias ansible-image
Error: Failed getting disk path: Could not locate a zvol for DATA1/virtual-machines/ansible.block@snapshot-snap0
I am also in irc.libera.chat
I am willing to investigate and troubleshoot. Could this be related to the way the storage pool is created on my LXD hosts?
Sincerely,
Hello @tomp,
Is there any workaround I can follow in order to:
Back up the LXC VM from that LXD host.
Remove the full configuration of LXD from the Ubuntu 20.04 LXD host and convert the host to a container using the p2c tool.
I am trying to migrate the LXC VM out of that host, convert the host from an LXD host into a normal Ubuntu 20.04 LXC container, and then re-install Ubuntu 24.04 LTS on that box.
I may end up backing up the whole server, including the LXC VM, or trying to convert it into an LXC container with a nested LXD configuration and the LXC VM running inside it.
I look forward to any suggestions.
Sincerely,
tomp (Thomas Parrott), June 13, 2022, 11:18am
You could do lxc export, transfer the file to the new host, and do lxc import there.
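As a sketch of that workflow, using the hostnames from this thread (the /tmp path and the use of the --storage flag are illustrative):

```shell
# On kumo2: export the VM (and its snapshots) to a tarball.
lxc export ansible ansible.tar.gz

# Transfer it to the destination host.
scp ansible.tar.gz lxc1:/tmp/

# On lxc1: import the backup, optionally targeting a
# specific storage pool with -s/--storage.
lxc import /tmp/ansible.tar.gz -s RAID1
```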
Also you could try upgrading to the edge snap on both sides and see if any of the recent fixes resolve it:
snap refresh lxd --channel=latest/edge
Alright, I will try the lxc export method first, although I believe that is what I tried before and it produced the different errors I listed in this thread. I will then upgrade both LXD servers and try again.
Any suggestions regarding the lxd-p2c options?
Thank you for all you do!
Right,
This is the error I get when trying to export the lxc VM in the source LXD host:
kumo2:~$ lxc export ansible ansible.tar.gz
Error: Create backup: Backup create: Error getting VM block volume disk path: Could not locate a zvol for DATA1/virtual-machines/ansible.block@snapshot-snap0
I will go with the snap refresh option and try the first method of:
LXD VM container migration using LXD API and Simplestreams
Let’s see how it goes,
Hmm, no luck.
source
kumo2:~$ sudo snap refresh lxd --channel=latest/edge
lxd (edge) git-d139b55 from Canonical✓ refreshed
destination
lxc1:~$ sudo snap refresh lxd --channel=latest/edge
lxd (edge) git-d139b55 from Canonical✓ refreshed
lxc1:~$ lxc config set core.https_address 192.168.0.10:8443
lxc config set core.trust_password secretpassword
remote LXD host added to source
kumo2:~$ lxc remote add lxc1 192.168.0.10
Certificate fingerprint: 56d897cc1e84e73d68b8380344633481d995b5c8c69579781ebc75da38da436f
ok (y/n/[fingerprint])? y
kumo2:~$ lxc remote list
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| NAME            | URL                                | PROTOCOL      | AUTH TYPE   | PUBLIC | STATIC | GLOBAL |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| images          | https://images.linuxcontainers.org | simplestreams | none        | YES    | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| local (current) | unix://                            | lxd           | file access | NO     | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| lxc1            | https://192.168.0.10:8443          | lxd           | tls         | NO     | NO     | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu          | Ubuntu Cloud Images                | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
| ubuntu-daily    | Ubuntu Cloud Images                | simplestreams | none        | YES    | YES    | NO     |
+-----------------+------------------------------------+---------------+-------------+--------+--------+--------+
kumo2:~$ lxc list
+---------+---------+------+------+-----------------+-----------+
| NAME    | STATE   | IPV4 | IPV6 | TYPE            | SNAPSHOTS |
+---------+---------+------+------+-----------------+-----------+
| ansible | STOPPED |      |      | VIRTUAL-MACHINE | 1         |
+---------+---------+------+------+-----------------+-----------+
migration command
kumo2:~$ lxc copy ansible/snap0 lxc1:ansible --verbose
Error: Failed instance creation: Error transferring instance data: websocket: close 1000 (normal)
Any ideas?
tomp (Thomas Parrott), June 13, 2022, 12:41pm
So, focusing on the lxc export problem atm:
Can you show:
lxc info ansible
lxc storage volume ls <pool> | grep ansible
sudo zfs list -t snapshot | grep ansible
So we can ascertain what we have in the instance snapshot records, instance volume snapshot records and the actual snapshots on disk.
kumo2:~$ lxc info ansible
Name: ansible
Status: STOPPED
Type: virtual-machine
Architecture: x86_64
Created: 2022/05/30 22:41 JST
Last Used: 2022/05/31 12:11 JST
Snapshots:
+-------+----------------------+------------+----------+
| NAME  | TAKEN AT             | EXPIRES AT | STATEFUL |
+-------+----------------------+------------+----------+
| snap0 | 2022/06/13 09:06 JST |            | NO       |
+-------+----------------------+------------+----------+
kumo2:~$ lxc storage volume ls DATA1 | grep ansible
| virtual-machine | ansible | | block | 1 |
| virtual-machine (snapshot) | ansible/snap0 | | block | 1 |
kumo2:~$ sudo zfs list -t snapshot | grep ansible
DATA1/virtual-machines/ansible@snapshot-snap0 15.5K - 5.87M -
DATA1/virtual-machines/ansible.block@migration-5a3e5b67-bda8-4e8a-b49d-daade77a4df1 0B - 2.71G -
DATA1/virtual-machines/ansible.block@snapshot-snap0 0B - 2.71G -
Thank you in advance,
tomp (Thomas Parrott), June 13, 2022, 1:42pm
Can you remove this one please? I'm wondering if it's preventing the other ones from being activated.
Sounds good. I removed it, please see below:
kumo2:~$ sudo zfs destroy DATA1/virtual-machines/ansible.block@migration-5a3e5b67-bda8-4e8a-b49d-daade77a4df1
Now I see this:
kumo2:~$ sudo zfs list -t snapshot | grep ansible
DATA1/virtual-machines/ansible@snapshot-snap0 15.5K - 5.87M -
DATA1/virtual-machines/ansible.block@snapshot-snap0 1K - 2.71G -
I tried exporting again and got a similar error:
kumo2:~$ lxc export ansible ansible.tar.gz
Error: Create backup: Backup create: Error getting VM block volume disk path: Could not locate a zvol for DATA1/virtual-machines/ansible.block@snapshot-snap0
Hmm,
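A note on that error: "Could not locate a zvol" suggests the block device node for the snapshot is not exposed under /dev/zvol. Assuming the standard ZFS device layout, one way to check from the host would be:

```shell
# Snapshot device nodes are only exposed when the volume's
# snapdev property is "visible" (the default is "hidden").
sudo zfs get snapdev DATA1/virtual-machines/ansible.block

# List whatever zvol device nodes are currently exposed.
ls -l /dev/zvol/DATA1/virtual-machines/
```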
tomp (Thomas Parrott), June 13, 2022, 1:57pm
Can you try rebooting that machine and trying again? I want to rule out an issue with the mount namespace inside the snap becoming out of sync with the host.
Alright,
Here is what I am seeing now:
kumo2:~$ sudo zfs list -t snapshot | grep ansible
DATA1/virtual-machines/ansible@snapshot-snap0 15.5K - 5.87M -
DATA1/virtual-machines/ansible.block@snapshot-snap0 1K - 2.71G -
kumo2:~$ lxc export ansible ansible.tar.gz
Backing up instance: 46.69MB (1.31MB/s)
I like how it progressed: first just showing the size, as you can see above, then:
kumo2:~$ lxc export ansible ansible.tar.gz
Exporting the backup: 91% (34.43MB/s)
and then:
kumo2:~$ lxc export ansible ansible.tar.gz
Backup exported successfully!
Looking better,
tomp (Thomas Parrott), June 13, 2022, 2:24pm
So it looks like the reboot fixed the lxc export case.
Likely this is caused by a known issue related to ZFS and snap packaging:
(Expanded link: lxc/lxd GitHub issue, opened 5 Aug 2020; Ubuntu Focal, LXD 4.0.2 snap, ZFS 0.8.3. Snapshot creation fails with "Error: Create instance snapshot (mount source): Failed to run: zfs mount tank/lxd/containers/feb: cannot mount 'tank/lxd/containers/feb': filesystem already mounted". The reporter notes the issue had happened a couple of months earlier and that restarting the server fixed it at that time.)
To answer your question about physical machine migration, there is a tool called lxd-migrate that you run on the machine/VM you want to migrate into LXD, specifying the address of a LXD server that is configured to listen on the network.
See What's new in LXD 4.23? - YouTube
There’s no dedicated lxd-migrate package currently, but you can download a fresh build from the build assets of any pull-request, e.g. doc: add link to video about network forwards · lxc/lxd@09d8bb6 · GitHub
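A rough sketch of that flow (the listener address and trust password are examples; lxd-migrate itself runs interactively and prompts for the target server details):

```shell
# On the target LXD server: make the daemon listen on the
# network and set a trust password for authentication.
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password some-secret   # example password

# On the machine/VM to be migrated: run the downloaded
# lxd-migrate binary; it prompts for the LXD server URL,
# the trust password, and what to transfer.
chmod +x ./lxd-migrate
sudo ./lxd-migrate
```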
Excellent @tomp,
I am still waiting for the export to complete before I attempt the migration following the LXD API method. If that fails, I will report the outcome here and try to import the exported LXC VM into the destination LXD host manually.
I will keep you posted, and thank you for the references, those are really useful!
Hello @tomp ,
It worked!
destination lxd host:
lxc1:~$ lxc list
+---------+---------+------+------+-----------------+-----------+
| NAME    | STATE   | IPV4 | IPV6 | TYPE            | SNAPSHOTS |
+---------+---------+------+------+-----------------+-----------+
| ansible | STOPPED |      |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------+------+-----------------+-----------+
| c1      | STOPPED |      |      | CONTAINER       | 0         |
+---------+---------+------+------+-----------------+-----------+
| c2      | STOPPED |      |      | CONTAINER       | 0         |
+---------+---------+------+------+-----------------+-----------+
| vm1     | STOPPED |      |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------+------+-----------------+-----------+
| vm2     | STOPPED |      |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------+------+-----------------+-----------+
| vm3     | STOPPED |      |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------+------+-----------------+-----------+
| vm4     | STOPPED |      |      | VIRTUAL-MACHINE | 0         |
+---------+---------+------+------+-----------------+-----------+
lxc1:~$ lxc config show ansible
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu jammy amd64 (20220530_08:32)
  image.os: Ubuntu
  image.release: jammy
  image.serial: "20220530_08:32"
  image.type: disk-kvm.img
  image.variant: default
  limits.cpu: "1"
  limits.memory: 2GB
  volatile.apply_template: copy
  volatile.base_image: dddb4c8d1b25ad693722babbc4726da56ccfcb307859b59070206f78547fcd13
  volatile.cloud-init.instance-id: 82430879-382e-4d1c-9505-7b9c287856ed
  volatile.eth0.hwaddr: 00:16:3e:9f:18:5e
  volatile.uuid: 433a1988-e492-4a8b-b78b-b02d0a55d22e
devices:
  root:
    path: /
    pool: RAID1
    size: 20GiB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
Thank you so much, I can mark this one as solved.
I really appreciate all your time and help!