I'm trying to create a block storage volume from an existing qcow2 file. I create the volume first this way:
incus storage volume create remote game3-disk --type=block
then try to import the qcow2 file:
incus storage volume import game3-disk /home/marcos/disk1.qcow2
Importing custom volume: 100% (1.13GB/s)
but the process never completes or fails; it just sits there at 100%. The file is small, about 200 MB.
Hi @marcosbis, there is a misunderstanding here. If you create a block volume, you have to attach it to your VM instance like this (assuming your VM name is test): incus storage volume attach default game3-disk test
Reference link: Volume Storage
Regards.
incus storage volume import imports volume backups (exports), which are tarballs containing a raw volume, not a qcow2 image.
There is no trivial way to turn an existing qcow2 into a new custom storage volume at this time, though it should be pretty easy to extend incus-migrate to allow some of that.
I see a few options for you:
1. Use qemu-img convert to turn it into a raw disk image, then attach it as a disk without using a custom volume
2. Use qemu-img convert to turn it into a raw disk image, then manually overwrite a volume you create with incus storage volume create with the raw disk image content
3. Create a custom block volume, attach it to your VM along with sharing the filesystem that your disk1.qcow2 is on, then run qemu-img convert inside the VM, reading from the shared qcow2 file into the attached disk
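A minimal sketch of option 1, assuming the VM is named test and reusing the /home/marcos/disk1.qcow2 path from the original post (the device name extra-disk is an arbitrary choice here):

```shell
# Convert the qcow2 image to a raw disk image (qemu-img comes with qemu-utils)
qemu-img convert -f qcow2 -O raw /home/marcos/disk1.qcow2 /home/marcos/disk1.img

# Attach the raw image to the VM as an additional disk device
incus config device add test extra-disk disk source=/home/marcos/disk1.img
```

Note this skips custom volumes entirely, so the raw image lives on one server's filesystem and won't follow the VM around a cluster the way a Ceph-backed volume would.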
We have hundreds of those qcow2 files and I would like them converted to volumes so they are available to all servers in the cluster and also available via the Incus API.
I will try option 2.
Do you have a sample command to overwrite a storage volume?
I tried using block, but I'm not sure what I should be using.
We need to attach/detach those volumes as disks to multiple VMs, both before boot and while they are running.
We are doing this in KVM today and want to migrate to Incus.
We have an Incus cluster with Ceph.
You'll need to make sure the target volume is the same size or larger, or things will obviously go wrong (hopefully not silently, but I'm not sure what qemu-img would do).
Your Incus pool name and your RBD pool name may be different.
You can either look at incus storage show remote to see the name in there, or use ceph osd pool ls to list the RBD pools that your Ceph cluster is running.
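Putting the above together for a Ceph-backed pool, a hedged sketch of option 2: size the volume from the image, find the backing RBD image, and let qemu-img write straight into it. The image name shown (custom_default_game3-disk) follows Incus's usual custom_<project>_<volume> naming but should be verified with rbd ls, and the qemu-img rbd: target requires qemu built with RBD support:

```shell
# Check the virtual size of the qcow2 so the volume is at least as large
qemu-img info /home/marcos/disk1.qcow2

# Create the target block volume (size=1GiB is only an example; match or exceed the image)
incus storage volume create remote game3-disk --type=block size=1GiB

# Find the RBD pool backing the Incus pool, then locate the volume's RBD image
incus storage show remote            # look for ceph.osd.pool_name
rbd ls <rbd-pool> | grep game3-disk  # e.g. custom_default_game3-disk

# Convert the qcow2 directly into the RBD image backing the volume
qemu-img convert -f qcow2 -O raw /home/marcos/disk1.qcow2 \
    rbd:<rbd-pool>/custom_default_game3-disk
```

Once the data is in place, the volume can be attached to any VM in the cluster with incus storage volume attach as described earlier.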