How to resize ZFS used in LXD

(Siew) #1

Hi all,

I am trying to resize/expand the ZFS pool used in my LXD setup, but so far with no luck.

I am trying to follow this to resize/expand the pool. However, I am unable to get the path of the image file. I tried lxc storage list, but it shows only the following:
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
| lxd  |             | zfs    | lxd    | 3       |

I used to be able to see the image path ('/path/to/.img') under the SOURCE column.

The following is my LXD setup:

  • LXD version: 2.21 (installed using snap)
  • used a block device (/dev/sdb) in lxd init
  • storage driver: zfs

Also, other than the method above, is there any other way I can try to expand/resize my ZFS pool after creation?

Thanks in advance for your advice. Very much appreciated.

(Stéphane Graber) #2

Since we added the API, any LXD-generated image file will end up in /var/lib/lxd/disks (or /var/snap/lxd/common/lxd/disks if using the snap).

That method is still the correct way to grow a zpool.
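
For a loop-backed pool, a rough sketch of that method (the +10G size is just an example; paths assume the snap package and a pool named default):

# grow the backing image file, then let ZFS claim the new space
sudo truncate -s +10G /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=on default
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
sudo zpool set autoexpand=off default
zpool list default   # SIZE should now include the extra 10G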

Though above you mention having entered /dev/sdb during lxd init, which would imply you're using a full disk for your zpool. If that's the case, growing will be a bit different, given that you can't really grow the disk itself :slight_smile:

Can you show zpool status -v?

(Siew) #3

Hi @stgraber, thanks for the prompt reply.

I think I used the full disk when initializing LXD for the first time. If that's the case, is it no longer possible to grow the disk? What would be a better way to set up LXD using a block device?

The following is some output from my machine.

root@test:~# zpool status -v
  pool: default
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        default     ONLINE       0     0     0
          sdb       ONLINE       0     0     0
errors: No known data errors
root@test:~# lsblk 
loop1    7:1    0   47M  1 loop /snap/lxd/5866
sdb      8:16   0   15G  0 disk 
├─sdb9   8:25   0    8M  0 part 
└─sdb1   8:17   0   10G  0 part 
loop0    7:0    0 81.6M  1 loop /snap/core/4110
sda      8:0    0   10G  0 disk 
└─sda1   8:1    0   10G  0 part /
root@test:~# ls -la /var/snap/lxd/common/lxd/disks
total 8
drwx------  2 root root 4096 Mar  5 12:28 .
drwx--x--x 14 root root 4096 Mar  5 12:30 ..

Thanks once again for the advice. :wink:

(Stéphane Graber) #4

Ok, so yes, your zpool is the size of the entire /dev/sdb physical disk.
If that’s a virtual disk, then you could grow it at the VM level, reboot the VM and then use the same growing trick as mentioned in our documentation.

If this is a physical system, then the zpool is already using the entire physical disk. Your only option to grow at that point is to either move the pool to a bigger replacement disk or to add a second disk to the pool (but be careful as failure of either would cause the whole pool to be lost).
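
For the virtual-disk case, a rough sketch (the qemu-img invocation and disk path are assumptions, adjust for your hypervisor; this assumes a pool named default on sdb):

# on the VM host: grow the virtual disk, then reboot the guest
qemu-img resize /path/to/guest-disk.qcow2 +5G

# inside the guest: expand the whole-disk vdev onto the new space
sudo zpool set autoexpand=on default
sudo zpool online -e default sdb
zpool list default   # verify the new SIZE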

(Siew) #5

Thanks @stgraber for the explanation.

It is a virtual disk. I was able to grow it from 10G to 15G.

I tried the method mentioned (I installed LXD using snap). However, when I tried to expand the pool, it showed an error.

user@test:~$ sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
cannot expand /var/snap/lxd/common/lxd/disks/default.img: no such device in pool

Also, before trying the method, I checked the path you mentioned (/var/snap/lxd/common/lxd/disks) and it was empty.

My ZFS pool is as follows:

root@test:~# lxc storage show default
config:
  source: default
  volatile.initial_source: /dev/sdb
  zfs.pool_name: default
description: ""
name: default
driver: zfs
used_by:
- /1.0/containers/amazing-goblin
- /1.0/images/b5f3a547289fabf26d90250605dc3067f1863ee46c802f004aa97954cc852c33
- /1.0/profiles/default

Appreciate your advice.

(Vinay Kumar) #7

I also have the same issue. I have 3 containers. By default the containers get 100GB (which I set while installing LXD). Now I want to increase my container capacity, as I have added a new hard drive to my host.
How can I do it?

(Javi) #8

Hi @hwslew,

What I have figured out is that for this command:
zpool online -e default /var/lib/lxd/disks/default.img
you have to use the device name that the pool shows in zpool status -v.
So in your case it should be:
zpool online -e default sdb

Does that make sense?
Does it work?
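
For reference, the full sequence should look roughly like this (assuming the pool is named default and sits on sdb, as your zpool status output shows):

zpool status -v default        # confirm the vdev name (sdb here, not a .img path)
sudo zpool online -e default sdb
zpool list default             # SIZE should now reflect the grown disk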

(Vinay Kumar) #9

How did you grow it?