Hi,
I have a cluster of 5 LXD hosts running the latest snap (3.3).
All of the hosts use a Ceph storage cluster as their default storage pool.
config:
ceph.cluster_name: ceph
ceph.osd.pg_num: "1024"
ceph.osd.pool_name: lxd
volatile.pool.pristine: "true"
volume.size: 80GB
description: ""
name: lxd-ceph
driver: ceph
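For reference, inspecting and changing this pool configuration can be sketched as follows (assuming the usual lxc storage subcommands; lxd-ceph is the pool name from the config above):

```shell
# Show the full pool configuration (produces the YAML above)
lxc storage show lxd-ceph

# Change the default size for new volumes on this pool
lxc storage set lxd-ceph volume.size 80GB
```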
Initially volume.size was 40GB, but I have since changed it to 80GB.
While the size was still 40GB, I created containers using lxd-p2v to convert my Ubuntu 16.04 VMs into containers.
One of those containers was published as an image to use as a template:
+-------------------------------------------+--------------+--------+---------------------------------------------+--------+-----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------------------------------------------+--------------+--------+---------------------------------------------+--------+-----------+------------------------------+
| android-runner-ubuntu-2018-06-01-old | 90026c8ee3b9 | no | | x86_64 | 4446.05MB | Jul 25, 2018 at 4:07pm (UTC) |
+-------------------------------------------+--------------+--------+---------------------------------------------+--------+-----------+------------------------------+
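The publish step that produced the image above would have looked something like this (the source container name is omitted here, shown as a placeholder):

```shell
# Publish the converted container as a template image
# (<container> stands in for the actual source container name)
lxc publish <container> --alias android-runner-ubuntu-2018-06-01-old

# Confirm it appears in the image list
lxc image list
```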
When I launched containers from this image, they had the correct 40GB root disk:
Creating test-runner
Starting test-runner
root@ubuntu:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/rbd2 40G 12G 27G 30% /
Now I need to increase the default root disk size of containers launched from this image. So I updated the storage pool's volume.size to 80GB and created a new container, but its root disk is still 40GB.
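Concretely, the steps were something like the following (container and image names as above):

```shell
# After bumping volume.size on the pool, recreate the container
# from the template image
lxc launch android-runner-ubuntu-2018-06-01-old test-runner

# The root filesystem inside the container is still 40G
lxc exec test-runner -- df -h /
```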
I checked the container's RBD volume and saw that it has the image as its parent, and that the image RBD itself is still 40GB. So I resized the image RBD to 80GB:
rbd -p lxd resize --size 81920 image_90026c8ee3b9c9d58a444089466d733220f18dae72b0c050a4bda474d86f829b
After the resize, rbd info shows:
rbd image 'image_90026c8ee3b9c9d58a444089466d733220f18dae72b0c050a4bda474d86f829b':
size 81920 MB in 20480 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1137238e1f29
format: 2
features: layering
flags:
create_timestamp: Wed Jul 25 16:09:59 2018
I then removed the container and launched it again, but its root disk is still 40GB:
rbd image 'container_test-runner':
size 40960 MB in 10240 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.15e72eb141f2
format: 2
features: layering
flags:
create_timestamp: Tue Aug 7 10:23:48 2018
parent: lxd/image_90026c8ee3b9c9d58a444089466d733220f18dae72b0c050a4bda474d86f829b@readonly
overlap: 40960 MB
What is the correct way to launch a container with a bigger root disk than the image's, or to update the default size of the image?
Thanks,