I am trying to resize/expand the ZFS pool used in my LXD setup, with no luck so far.
I am following this guide to resize/expand the pool, but I am unable to get the path of the image file. I tried lxc storage list, but it only shows the following:
+------+-------------+--------+--------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+------+-------------+--------+--------+---------+
| lxd  |             | zfs    | lxd    | 3       |
+------+-------------+--------+--------+---------+
I used to be able to see the image path ('/path/to/.img') under the SOURCE column.
The following is my LXD setup:
LXD version: 2.21 (installed using snap)
block device (/dev/sdb) used during lxd init
storage driver: zfs
Also, other than the method above, is there any other way I can try to expand/resize my ZFS pool after creation?
Thanks in advance for your advice. Very much appreciated.
Since we added the API, any LXD-generated image file will end up in /var/lib/lxd/disks (or /var/snap/lxd/common/lxd/disks if using the snap).
That method is still the correct way to grow a zpool.
Though above you mention having entered /dev/sdb during lxd init, which would imply you're using a full disk for your zpool. If that's the case, growing will be a bit different, since you can't really grow the pool beyond that disk.
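One quick way to tell which case you're in is to look at the vdev line in zpool status: a file-backed pool shows a path ending in .img, while a disk-backed pool shows a bare device name like sdb. A small sketch of that check (the sample status lines below are assumptions about typical ZFS output, not taken from a real system):

```shell
# Classify a pool's backing store from `zpool status` text on stdin.
# Prints "file-backed" if the vdev line ends in .img, "disk-backed" otherwise.
classify_vdev() {
  if grep -q '\.img' ; then
    echo "file-backed"
  else
    echo "disk-backed"
  fi
}

# Example: a disk-backed pool (device name only)
printf 'default ONLINE 0 0 0\n  sdb ONLINE 0 0 0\n' | classify_vdev
# → disk-backed
```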
I think I used the full disk when initializing LXD for the first time. If that's the case, is it not possible to grow the disk anymore? What would be a better way to set up LXD using a block device?
The following are some output in my machine.
root@test:~# zpool status -v
pool: default
state: ONLINE
scan: none requested
config:
        NAME      STATE     READ WRITE CKSUM
        default   ONLINE       0     0     0
          sdb     ONLINE       0     0     0
errors: No known data errors
root@test:~# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1      7:1    0   47M  1 loop /snap/lxd/5866
sdb        8:16   0   15G  0 disk
├─sdb9     8:25   0    8M  0 part
└─sdb1     8:17   0   10G  0 part
loop0      7:0    0 81.6M  1 loop /snap/core/4110
sda        8:0    0   10G  0 disk
└─sda1     8:1    0   10G  0 part /
root@test:~# ls -la /var/snap/lxd/common/lxd/disks
total 8
drwx------ 2 root root 4096 Mar 5 12:28 .
drwx--x--x 14 root root 4096 Mar 5 12:30 ..
root@test:~#
Ok, so yes, your zpool is the size of the entire /dev/sdb physical disk.
If that’s a virtual disk, then you could grow it at the VM level, reboot the VM and then use the same growing trick as mentioned in our documentation.
If this is a physical system, then the zpool is already using the entire physical disk. Your only option to grow at that point is to either move the pool to a bigger replacement disk or to add a second disk to the pool (but be careful as failure of either would cause the whole pool to be lost).
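A dry-run sketch of the two options on a physical system (the pool and device names /dev/sdb and /dev/sdc are assumptions; nothing here modifies a pool, the commands are only echoed):

```shell
# Assumed names; adjust to your system before running anything for real.
POOL=default
OLD_DISK=/dev/sdb
NEW_DISK=/dev/sdc

# Option 1: replace the existing disk with a bigger one, then expand onto it:
echo "zpool replace $POOL $OLD_DISK $NEW_DISK"
echo "zpool online -e $POOL $NEW_DISK"

# Option 2: add a second disk. This stripes the pool across both disks,
# so losing either disk then loses the whole pool:
echo "zpool add $POOL $NEW_DISK"
```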
It is a virtual disk, and I was able to grow it from 10G to 15G.
I tried the method you mentioned (I installed LXD using snap), but when I try to expand the pool it shows an error:
user@test:~$ sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img
cannot expand /var/snap/lxd/common/lxd/disks/default.img: no such device in pool
Also, before trying that method, I checked the path you mentioned (/var/snap/lxd/common/lxd/disks) and it was empty.
I also have the same issue. I have 3 containers. By default, each container gets 100GB (which I set while installing LXD). Now I want to increase my container capacity, as I have added a new hard drive to my host.
How can I do it?
Thanks
What I have figured out is that for this command: zpool online -e default /var/lib/lxd/disks/default.img
you have to use the device name that the pool shows in zpool status -v.
So in your case it should be: zpool online -e default sdb
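If you want to pull that device name out of the status output automatically, a sketch like this works (the sample status text below is an assumption about typical `zpool status -v` formatting; vdev rows have five columns, nested under the pool name):

```shell
# Sample `zpool status -v` output (assumed formatting, not from a real system).
sample='  pool: default
 state: ONLINE
config:

        NAME      STATE     READ WRITE CKSUM
        default   ONLINE       0     0     0
          sdb     ONLINE       0     0     0'

# Vdev rows have 5 fields (NAME STATE READ WRITE CKSUM); skip the header
# and the pool row itself, keep the first device underneath.
vdev_id=$(printf '%s\n' "$sample" |
  awk 'NF == 5 && $2 == "ONLINE" && $1 != "default" && $1 != "NAME" {print $1; exit}')
echo "zpool online -e default $vdev_id"
# → zpool online -e default sdb
```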
So I just went through this and was getting some of the errors listed above and thought I’d leave what worked for me in case someone else out there has similar problems.
Assumptions: 1) one ZFS pool called 'default' that lives in the file /var/lib/lxd/disks/default.img; 2) running as root; 3) growing pool 'default' from 25GB to 45GB.
1. Stop all running containers.
2. # truncate -s +20G /var/lib/lxd/disks/default.img
3. # zpool set autoexpand=on default
4. # zpool status -vg default
4a. Note the device id value from the results (for me, it was a really long number). You'll need it in the next step.
5. # zpool online -e default device_id_from_step_4a
6. # zpool set autoexpand=off default
7. # service lxd restart
7a. This is on Debian/Ubuntu. For CentOS, it'd be a "systemctl" command.
8. After the service restart, "lxc storage info default" should show the expanded space. Restart containers.
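The truncate step in 2 can be tried safely on a scratch file first; image files are sparse, so growing one does not consume disk space up front. A small demo using MB instead of GB (the temp-file path is just a stand-in, not a real LXD image):

```shell
# Create a scratch file and grow it the same way step 2 grows the image.
img=$(mktemp)
truncate -s 25M "$img"    # stand-in for the existing 25GB image
truncate -s +20M "$img"   # the "+" syntax grows it in place by 20M
size=$(stat -c %s "$img") # apparent size in bytes (GNU stat)
echo "$size"              # → 47185920 (45 MiB)
rm -f "$img"
```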