xlmnxp (Salem AlSaiari), May 24, 2021, 9:46am, #1
I have a virtual machine created with the following command:
lxc launch images:ubuntu/focal dfoss
I tried to increase its size to 64 GB but couldn't.
My storage pool:
[syaslem@lucid ~]$ lxc storage info default
info:
  description: ""
  driver: btrfs
  name: default
  space used: 122.11GB
  total space: 1.00TB
used by:
  images:
  - 8e4d025b836f8a2a8df25c6268e926f87cdcd7d682f0d9a8e52b017881ea10db
  - 9b2d34a71d2893ea74cc3e14c7bce5873d37e6bd39fe16fa29b04d908b7ea3d2
  instances:
  - dfoss
  - secure-lizard
  profiles:
  - default
[syaslem@lucid ~]$ lxc storage list
+---------+--------+------------------------------------------------+-------------+---------+
| NAME | DRIVER | SOURCE | DESCRIPTION | USED BY |
+---------+--------+------------------------------------------------+-------------+---------+
| default | btrfs | /var/snap/lxd/common/lxd/storage-pools/default | | 5 |
+---------+--------+------------------------------------------------+-------------+---------+
Inside the virtual machine, I tried growpart:
root@dfoss:~# growpart /dev/sda 2
NOCHANGE: partition 2 is size 19324383. it cannot be grown
and then resize2fs:
root@dfoss:~# resize2fs /dev/sda2
resize2fs 1.45.5 (07-Jan-2020)
The filesystem is already 2415547 (4k) blocks long. Nothing to do!
df -h output:
root@dfoss:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 477M 0 477M 0% /dev
tmpfs 98M 708K 98M 1% /run
/dev/sda2 8.9G 842M 8.1G 10% /
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 99M 3.9M 95M 4% /boot/efi
tmpfs 50M 11M 40M 21% /run/lxd_agent
tmpfs 98M 0 98M 0% /run/user/0
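A quick way to see why growpart reports NOCHANGE is to check the size of the virtual disk itself: the ~9G root filesystem above suggests /dev/sda is still at the default VM disk size, so partition 2 has no free space to grow into. A minimal check from inside the guest, assuming lsblk and parted are installed there (they are not shown in the thread):
root@dfoss:~# lsblk /dev/sda
root@dfoss:~# parted /dev/sda unit GB print free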
Hi,
Have you looked at this link before? It contains some hints about resizing btrfs pools:
https://linuxcontainers.org/lxd/docs/master/storage#growing-a-loop-backed-btrfs-pool
Regards.
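If the pool really is loop-backed, it can help to first confirm which loop device is attached to the LXD disk image before running the resize steps from that page. A minimal check, assuming the default snap path (the same one used later in this thread):
$ sudo losetup -j /var/snap/lxd/common/lxd/disks/default.img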
xlmnxp (Salem AlSaiari), May 24, 2021, 3:29pm, #3
I checked it, but it did nothing:
$ sudo truncate -s +64G /var/snap/lxd/common/lxd/disks/default.img
$ sudo losetup -c /dev/loop5 # lxd loop
$ sudo btrfs filesystem resize max /var/snap/lxd/common/lxd/storage-pools/default
Resize device id 1 (/dev/nvme0n1p2) from 931.51GiB to max
Then I restarted the virtual machine and tried to grow the partition again, but I couldn't:
root@dfoss:~# growpart /dev/sda 2
NOCHANGE: partition 2 is size 19324383. it cannot be grown
root@dfoss:~# resize2fs /dev/sda2
resize2fs 1.45.5 (07-Jan-2020)
The filesystem is already 2415547 (4k) blocks long. Nothing to do!
root@dfoss:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 477M 0 477M 0% /dev
tmpfs 98M 708K 98M 1% /run
/dev/sda2 8.9G 842M 8.1G 10% /
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 99M 3.9M 95M 4% /boot/efi
tmpfs 50M 11M 40M 21% /run/lxd_agent
tmpfs 98M 0 98M 0% /run/user/0
root@dfoss:~#
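For LXD virtual machines, the size of the disk the guest sees (/dev/sda here) is governed by the root disk device attached to the instance, not by the free space in the storage pool, so growing the pool alone does not enlarge /dev/sda. A way to inspect the current root device from the host, sketched with the instance name from this thread:
$ lxc config device show dfoss
$ lxc config show dfoss --expanded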
Hi Salem,
So is this an LXD container or a VM? Can you paste the output of this command from inside the container/VM?
df -Th
Regards.
xlmnxp (Salem AlSaiari), May 24, 2021, 3:45pm, #5
Thank you for the response.
Output:
root@dfoss:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 477M 0 477M 0% /dev
tmpfs tmpfs 98M 708K 98M 1% /run
/dev/sda2 ext4 8.9G 842M 8.1G 10% /
tmpfs tmpfs 489M 0 489M 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs tmpfs 489M 0 489M 0% /sys/fs/cgroup
/dev/sda1 vfat 99M 3.9M 95M 4% /boot/efi
tmpfs tmpfs 50M 11M 40M 21% /run/lxd_agent
tmpfs tmpfs 98M 0 98M 0% /run/user/0
I think @tomp already explained this in that post; take a look at the link:
https://discuss.linuxcontainers.org/t/how-can-i-expand-the-size-of-vm/7618/3
Regards.
xlmnxp (Salem AlSaiari), May 24, 2021, 4:49pm, #7
I tried it; I mentioned growpart and resize2fs above.
Please don't be like the Arch community.
xlmnxp (Salem AlSaiari), May 24, 2021, 4:53pm, #8
Fixed, thanks @cemzafer @toby63 (I got the solution from one of his comments).
The solution is to override the root device config for my VM and specify a new size for it:
lxc config device override [vm name] root size=15GB
or
lxc config device set [vm name] root size=64GB
After overriding the root disk size, don't forget to restart the VM.
Regards.
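For reference, the full sequence that works here is: grow the root device on the host, restart the VM, then grow the partition and the filesystem inside the guest. A sketch assembled from the commands in this thread (the size value and the use of lxc exec are illustrative):
$ lxc config device override dfoss root size=64GB   # or: lxc config device set dfoss root size=64GB
$ lxc restart dfoss
$ lxc exec dfoss -- growpart /dev/sda 2
$ lxc exec dfoss -- resize2fs /dev/sda2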
xlmnxp (Salem AlSaiari), May 24, 2021, 4:58pm, #10
That works great, thank you again.