Can't create container from >10GB image using a profile?

Hello everyone,

I’m running LXD 3.0.3, installed via apt on Ubuntu 18.04, with an LVM storage pool. I have an image that is 11GB when unpacked, and I’m trying to launch a container from it. I’m aware that I can change the default size of new containers by setting volume.size on the pool, but I’d prefer to use a profile rather than change the global configuration of my storage pool.

This is the configuration of my storage pool:

$ lxc storage list
+---------+-------------+--------+--------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |             SOURCE             | USED BY |
+---------+-------------+--------+--------------------------------+---------+
| default |             | lvm    | /var/lib/lxd/disks/default.img | 14      |
+---------+-------------+--------+--------------------------------+---------+

$ lxc storage show default
config:
  lvm.thinpool_name: LXDThinPool
  lvm.vg_name: default
  size: 100GB
  source: /var/lib/lxd/disks/default.img
  volume.size: 10GB
description: ""
name: default
driver: lvm
used_by:
 [ redacted ]
status: Created
locations:
- none

I created a new profile by copying the default profile:

$ lxc profile copy default myprofile

I tried to use the command listed in this thread on GitHub to set the size of the profile, but lxc did not seem to recognize the command:

$ lxc profile set myprofile root size 20GB
Description:
  Set profile configuration keys

Usage:
  lxc profile set [<remote>:]<profile> <key> <value> [flags]

Global Flags:
      --debug         Show all debug messages
      --force-local   Force using the local unix socket
  -h, --help          Print help
  -v, --verbose       Show all information messages
      --version       Print version number
Error: Invalid number of arguments
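Looking at the usage text, size belongs to the root device rather than to the profile's own config, so presumably the device subcommand is the right form (I haven't verified this myself):

```shell
lxc profile device set myprofile root size 20GB
```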

But I was able to edit it manually to add the “size: 20GB” line:

$ lxc profile edit myprofile

$ lxc profile show myprofile
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 20GB
    type: disk
name: myprofile

When I launch a small image using this profile, it does seem to set the size of the container correctly:

$ lxc launch ubuntu:18.04 test -p myprofile
Creating test
Starting test

$ lxc exec test -- bash

root@test:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/default/containers_test   20G  342M   19G   2% /
none                          492K     0  492K   0% /dev
udev                          2.9G     0  2.9G   0% /dev/tty
tmpfs                         100K     0  100K   0% /dev/lxd
tmpfs                         100K     0  100K   0% /dev/.lxd-mounts
tmpfs                         3.0G     0  3.0G   0% /dev/shm
tmpfs                         3.0G  8.1M  2.9G   1% /run
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                         3.0G     0  3.0G   0% /sys/fs/cgroup

root@test:~# dd if=/dev/zero of=testfile bs=1M count=11000
11000+0 records in
11000+0 records out
11534336000 bytes (12 GB, 11 GiB) copied, 222.289 s, 51.9 MB/s

root@test:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/default/containers_test   20G   12G  7.6G  60% /
none                          492K     0  492K   0% /dev
udev                          2.9G     0  2.9G   0% /dev/tty
tmpfs                         100K     0  100K   0% /dev/lxd
tmpfs                         100K     0  100K   0% /dev/.lxd-mounts
tmpfs                         3.0G     0  3.0G   0% /dev/shm
tmpfs                         3.0G  8.1M  2.9G   1% /run
tmpfs                         5.0M     0  5.0M   0% /run/lock
tmpfs                         3.0G     0  3.0G   0% /sys/fs/cgroup

However, when I try to unpack my 11GB image into a container, the process fails:

$ lxc launch 11gb-image test2 -p myprofile
Creating test2
Error: Unable to unpack image, run out of disk space (consider increasing your pool's volume.size)

The only explanation I can think of is that LXD creates a default-size (10GB) volume, unpacks the image into it, and then resizes it, which seems counterintuitive (as well as inefficient). Is launching large images through a profile unsupported, or is this behavior a bug?

You’ll need to use volume.size.

The root device config on the profile or container works fine for the container's block device itself.
The issue is that containers are created by cloning an image volume.
That image is its own LV on LVM, and that's what is failing to be created due to running out of disk space.

So on LVM you’d generally want to set volume.size to the size of the largest unpacked image you want to handle. Then use profiles to restrict the size of the containers as created from the images.
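For example, with the pool and profile names used above (the 15GB value is just illustrative — pick anything larger than your biggest unpacked image):

```shell
# Make the pool's default volume size large enough for image volumes
lxc storage set default volume.size 15GB

# Then control the size of containers via the profile's root device
lxc profile device set myprofile root size 20GB
```

New image volumes and containers will pick up these settings; existing ones are unaffected.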

Alright, thanks for the information!