What type of device should I choose to attach a volume to an instance with Terraform?

Hello there,

I’d like to attach a volume to my instance with Terraform, using version 1.0.0 of the provider.

I’ve followed the instructions [here](Terraform Registry) and written a main.tf file like this:

terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
      version = "1.0.0"
    }
  }
}

[...]

resource "incus_storage_pool" "LFS" {
  name    = "LFS"
  driver  = "dir"
}

resource "incus_storage_volume" "lfs" {
  name = "lfs"
  pool = "${incus_storage_pool.LFS.name}"
}

resource "incus_instance" "lfs" {
  name  = "lfs"
  image = "images:debian/trixie/cloud"
  type  = "virtual-machine"

[...]

  device {
    name = "disk2"
    type = "block"
    properties = {
      size   = "40GiB"
      source = "incus_storage_volume.LFS.lfs"
      pool   = "incus_storage_pool.LFS"
    }
  }
 
[...]
}

but I get this message:

│ Error: Invalid Attribute Value Match

│ with incus_instance.lfs,
│ on main.tf line 31, in resource "incus_instance" "lfs":
│ 31: resource "incus_instance" "lfs" {

│ Attribute device[Value({"name":"disk2","properties":{"pool":"incus_storage_pool.LFS","size":"40GiB","source":"incus_storage_volume.LFS.lfs"},"type":"block"})].type value
│ must be one of: ["none" "disk" "nic" "unix-char" "unix-block" "usb" "gpu" "infiniband" "proxy" "unix-hotplug" "tpm" "pci"], got: "block"

In bash I use something like:

	# create and attach a dedicated block device to the vm
	incus storage volume create ${STORAGE_POOL_NAME} ${VOLUME_NAME} --type=block size=${VOLUME_SIZE}
	incus storage volume attach ${STORAGE_POOL_NAME} ${VOLUME_NAME} ${VM_NAME}

and it works pretty well.
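For reference, once the VM is running I can check that the device is visible in the guest with something like:

	# confirm the attached volume shows up as a block device
	incus exec ${VM_NAME} -- lsblk /dev/sdb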

When I change the type to unix-block, I get this message:

incus_instance.lfs: Modifying… [name=lfs]

│ Error: Failed to update instance "lfs"

│ with incus_instance.lfs,
│ on main.tf line 31, in resource "incus_instance" "lfs":
│ 31: resource "incus_instance" "lfs" {

│ Invalid devices: Device validation failed for "disk2": Unsupported device type

What am I doing wrong?

disk is the type

The size should also go on the volume rather than on the instance device.
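For example, moving the size onto the volume would look roughly like this (reusing the names from your config):

resource "incus_storage_volume" "lfs" {
  name = "lfs"
  pool = incus_storage_pool.LFS.name
  config = {
    "size" = "40GiB"
  }
}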

You can look at incus config show --expanded NAME to see your instance’s full configuration, which should then mostly line up with what you can put in Terraform.
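A correctly attached volume then shows up under devices along these lines (illustrative excerpt):

devices:
  disk2:
    pool: LFS
    source: lfs
    type: disk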

Hello @stgraber, thanks for the quick correction.

I’m posting here the steps I went through, for those who may be interested.

TL;DR

  1. Add a root disk with path set to /, mapped to the default pool.
  2. Add a second disk backed by a volume with content_type = "block", without a path statement.

After changing the device type to disk it ran fine, but not as I wished.

With the bash script I get a second device mounted in my instance like this: /dev/sdb1 40G 2.1M 38G 1% /mnt/lfs. I would like to get the same result. Here is an extract of that script:

# Set global variable LFS
VERSION=0.1
export LFS=/mnt/lfs

export STORAGE_POOL_NAME=${STORAGE_POOL_NAME:-'LFS'}
export VOLUME_NAME=${VOLUME_NAME:-'lfs'}
export STORAGE_POOL_PATH='/home/incus/pool/lfs-storage'
export VOLUME_SIZE='40GiB'

export VM_NAME=${VM_NAME:-'debian13'}

[...]
	# create and attach a dedicated block device to the vm
	incus storage volume create ${STORAGE_POOL_NAME} ${VOLUME_NAME} --type=block size=${VOLUME_SIZE}
	incus storage volume attach ${STORAGE_POOL_NAME} ${VOLUME_NAME} ${VM_NAME}

	# start the vm
	incus start ${VM_NAME}

	# wait for the vm to be started
	# checking incus info is not enough because it returns the
	# status 'RUNNING' even if the vm is not completely started
	while ! incus exec ${VM_NAME} -- true; do
		echo "Waiting for the VM to start..."
		sleep 5
	done

	# install parted in the vm to create a partition on the block device
	#incus exec ${VM_NAME} -- apt-get update # skipped: downloads a lot into the cache
	incus exec ${VM_NAME} -- apt-get install -y parted

	# create a partition on the block device
	incus exec ${VM_NAME} -- parted -s /dev/sdb -- mklabel gpt mkpart p ext4 1 100% set 1 boot on

	# format and mount dedicated
	incus exec ${VM_NAME} -- bash -c "mkfs.ext4 /dev/sdb1; mkdir /mnt/lfs; mount /dev/sdb1 /mnt/lfs; systemctl daemon-reload"

[...]

But with the Terraform script (which seemed equivalent, except for the type disk in place of block) I end up with this mount, incus_disk2 220G 11G 198G 5% /mnt, in:

$> incus exec lfs -- df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           363M  652K  362M   1% /run
/dev/sda2       9.6G  1.6G  8.0G  17% /
tmpfs           1.8G     0  1.8G   0% /dev/shm
efivarfs        256K   28K  224K  11% /sys/firmware/efi/efivars
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.8G     0  1.8G   0% /tmp
tmpfs           1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs           1.0M     0  1.0M   0% /run/credentials/systemd-resolved.service
tmpfs            50M   15M   36M  29% /run/incus_agent
/dev/sda1        99M  8.7M   90M   9% /boot/efi
incus_disk2     220G   11G  198G   5% /mnt
tmpfs           1.0M     0  1.0M   0% /run/credentials/systemd-networkd.service
tmpfs           1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs           1.0M     0  1.0M   0% /run/credentials/serial-getty@ttyS0.service

Note the size: 220G instead of the desired 40GiB. The volume was attached as a filesystem share (hence the incus_disk2 entry in df) rather than as a block device, so df shows the size of the underlying host filesystem.

My first intuition was that I should use the block device type instead of disk, hence my first question. Instead, I tried the content_type volume attribute mentioned in the doc here and it works.

My final Terraform so far:

terraform {
  required_providers {
    incus = {
      source = "lxc/incus"
      version = "1.0.0"
    }
  }
}

resource "incus_network" "LFS" {
  name = "lfsNetwork"

  config = {
    "ipv4.address" = "10.150.19.254/24"
    "ipv4.nat"     = "true"
    "ipv6.address" = "fd42:474b:622d:259d::1/64"
    "ipv6.nat"     = "true"
  }
}

resource "incus_storage_pool" "LFS" {
  name    = "LFS"
  driver  = "dir"
}

resource "incus_storage_volume" "lfs" {
  name = "lfs"
  pool = "${incus_storage_pool.LFS.name}"
  content_type = "block"
  config = {
    "size" = "40GiB"    
  }
}

resource "incus_instance" "lfs" {
  name  = "lfs"
  image = "images:debian/trixie/cloud"
  type  = "virtual-machine"

  device {
    name = "eth0"
    type = "nic"

    properties = {
      nictype = "bridged"
      parent  = incus_network.LFS.name
    }
  }

  device {
    name = "root"
    type = "disk"
    properties = {
      pool = "default"
      path = "/"
    }
  }

  device {
    name = "disk2"
    type = "disk"
    properties = {
      source = incus_storage_volume.lfs.name
      pool   = incus_storage_pool.LFS.name
    }
  }
 
  config = {
    "boot.autostart" = true
    "limits.cpu"     = 1
    "limits.memory"  = "4GB"
    "cloud-init.user-data" = file("./cloud-config.yaml")
  }
}

The next step, with cloud-init or Ansible, will take over partitioning and formatting the device. The goal is to set up an instance to build an ISO following the steps defined in LFS, with all the security kernel modules needed to make it robust. This ISO will be used to set up a hardened Internal Developer Platform.
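For the cloud-init route, something like the following in cloud-config.yaml should be able to replace the parted/mkfs steps. This is only a sketch using cloud-init's disk_setup, fs_setup and mounts modules; the /dev/sdb device name is an assumption carried over from my bash script:

#cloud-config
# sketch: partition, format and mount the attached block volume
# (assumes the volume shows up as /dev/sdb in the guest)
disk_setup:
  /dev/sdb:
    table_type: gpt
    layout: true        # a single partition spanning the whole disk
    overwrite: false    # do not wipe a disk that is already partitioned
fs_setup:
  - label: lfs
    filesystem: ext4
    device: /dev/sdb1
mounts:
  - [/dev/sdb1, /mnt/lfs, ext4, defaults, "0", "2"]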