How to resize ZFS used in LXD

Hi all,

I am trying to resize/expand the ZFS pool used in my LXD setup, but have had no luck so far.

I am trying to follow this to resize/expand the pool. However, I am unable to get the path of the image file. I tried lxc storage list, but it shows only the following:
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd  |             | zfs    | lxd    | 3       |
+------+-------------+--------+--------+---------+

I used to be able to see the image path (/path/to/.img) under the SOURCE column.

The following is my LXD setup:

  • LXD version: 2.21 (installed using snap)
  • used a block device (/dev/sdb) in lxd init
  • storage driver: zfs

Also, other than the method above, is there any other way I can expand/resize my ZFS pool after creation?

Thanks in advance for your advice. Very much appreciated.

Since we added the API, any LXD-generated image file will end up in /var/lib/lxd/disks (or /var/snap/lxd/common/lxd/disks if using the snap).

That method is still correct to grow a ZPOOL.
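
For reference, growing a file-backed pool with that method boils down to something like this (a sketch only, assuming a pool named default backed by default.img under the snap path; adjust the size and paths for your setup):

# Enlarge the backing file, then tell ZFS to claim the new space
truncate -s +10G /var/snap/lxd/common/lxd/disks/default.img
zpool online -e default /var/snap/lxd/common/lxd/disks/default.img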

Though above you mention having entered /dev/sdb during lxd init, which would imply you're using a full disk for your zpool. If that's the case, growing will be a bit different, given that you can't really grow on that disk 🙂

Can you show zpool status -v?


Hi @stgraber, thanks for the prompt reply.

I think I used the full disk when initialising LXD for the first time. If that is the case, is it no longer possible to grow the disk? What would be a better way to set up LXD using a block device?

The following is some output from my machine.

root@test:~# zpool status -v
  pool: default
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        default     ONLINE       0     0     0
          sdb       ONLINE       0     0     0
errors: No known data errors
root@test:~# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1    7:1    0   47M  1 loop /snap/lxd/5866
sdb      8:16   0   15G  0 disk 
├─sdb9   8:25   0    8M  0 part 
└─sdb1   8:17   0   10G  0 part 
loop0    7:0    0 81.6M  1 loop /snap/core/4110
sda      8:0    0   10G  0 disk 
└─sda1   8:1    0   10G  0 part /
root@test:~# ls -la /var/snap/lxd/common/lxd/disks
total 8
drwx------  2 root root 4096 Mar  5 12:28 .
drwx--x--x 14 root root 4096 Mar  5 12:30 ..
root@test:~# 

Thanks once again for the advice. 😉

Ok, so yes, your zpool is the size of the entire /dev/sdb physical disk.
If that’s a virtual disk, then you could grow it at the VM level, reboot the VM and then use the same growing trick as mentioned in our documentation.

If this is a physical system, then the zpool is already using the entire physical disk. Your only option to grow at that point is either to move the pool to a bigger replacement disk or to add a second disk to the pool (but be careful, as failure of either disk would cause the whole pool to be lost).
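
For the virtual-disk case, the sequence is roughly as follows (a sketch only; the hypervisor-side resize depends entirely on your setup, and the qemu-img/qcow2 command and /dev/sdc below are just assumed examples):

# On the hypervisor: grow the guest's virtual disk (example for a qcow2 image)
qemu-img resize /path/to/guest-disk.qcow2 +5G

# Inside the guest, after a reboot: let the pool claim the new space
zpool online -e default sdb

# Alternative: add a second disk to the pool (no redundancy; losing either disk loses the pool)
zpool add default /dev/sdc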

Thanks @stgraber for the explanation.

It is a virtual disk. I am able to grow it from 10G to 15G.

I tried the mentioned method (I used snap to install LXD). However, when I tried to expand the pool it showed an error.

user@test:~$ sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
cannot expand /var/snap/lxd/common/lxd/disks/default.img: no such device in pool

Also, before trying the method, I checked the path you mentioned (/var/snap/lxd/common/lxd/disks) and it was empty.

My ZFS pool is as follows:

root@test:~# lxc storage show default
config:
  source: default
  volatile.initial_source: /dev/sdb
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/amazing-goblin
- /1.0/images/b5f3a547289fabf26d90250605dc3067f1863ee46c802f004aa97954cc852c33
- /1.0/profiles/default

I appreciate your advice.

I also have the same issue. I have 3 containers. By default, when creating containers, they get space from the 100GB pool (which I set while installing LXD). Now I want to increase my container capacity, as I have added a new hard drive to my host.
How can I do it?
Thanks

Hi @hwslew,

What I have figured out is that for this command:
zpool online -e default /var/lib/lxd/disks/default.img
you have to use the device name (or path) shown for the pool in zpool status -v. So in your case it should be:
zpool online -e default sdb

Makes sense?
Does it work?
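
To confirm it worked afterwards, a quick check could look like this (a small usage sketch, reusing the pool name default and device sdb from above):

zpool status -v default    # shows the device name to pass to zpool online -e (sdb here)
zpool online -e default sdb
zpool list default         # SIZE should now reflect the grown disk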


How did you grow it?

So I just went through this and was getting some of the errors listed above and thought I’d leave what worked for me in case someone else out there has similar problems.

Assumptions: 1) One zfs pool called ‘default’ that lives on the hard drive at /var/lib/lxd/disks/default.img. 2) Running as root. 3) Growing pool ‘default’ from 25GB to 45GB.

  1. Stop all running containers.
  2. #truncate -s +20G /var/lib/lxd/disks/default.img
  3. #zpool set autoexpand=on default
  4. #zpool status -vg default
    4a. Note the device id value from the results (for me, it was a really long number). You’ll need it in the next step.
  5. #zpool online -e default device_id_from_step_4a
  6. #zpool set autoexpand=off default
  7. #service lxd restart
    7a. This is on Debian/Ubuntu. For CentOS, it’d be a “systemctl” command.

After the service restart, if you run lxc storage info default, you should now see the expanded space. Then restart your containers.
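
Putting the steps above together as a single script (a sketch only, under the same assumptions: file-backed pool default at /var/lib/lxd/disks/default.img, run as root; adjust the size, paths and restart command for your setup):

#!/bin/sh
POOL=default
IMG=/var/lib/lxd/disks/default.img

truncate -s +20G "$IMG"              # enlarge the backing file by 20G
zpool set autoexpand=on "$POOL"
# For a file-backed pool the vdev name is usually the image path itself;
# if that fails with "no such device in pool", use the numeric id from: zpool status -vg default
zpool online -e "$POOL" "$IMG"
zpool set autoexpand=off "$POOL"
service lxd restart                  # or the equivalent systemctl/snap command on your system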


Huge thank you for this. I just want to comment that if you install LXD via snap, the disk image has been moved; you can find it with

find / -name 'default.img' 2>/dev/null

This command:

service lxd restart

didn't work for me on Kubuntu 22.04 LTS, but I just chose to reboot instead.

On Ubuntu with the snap, what worked for me was the command:

lxc storage list

This gave the location of default.img in the SOURCE column:

+---------+--------+--------------------------------------------+-------------+---------+---------+
| NAME    | DRIVER | SOURCE                                     | DESCRIPTION | USED BY | STATE   |
+---------+--------+--------------------------------------------+-------------+---------+---------+
| default | zfs    | /var/snap/lxd/common/lxd/disks/default.img |             | 14      | CREATED |
+---------+--------+--------------------------------------------+-------------+---------+---------+
