Building Docker images in LXD is slow compared to the host

I have a project with a few bash scripts that build several Docker images. The builds are fairly lengthy (relatively speaking): on the host with Docker, a full run takes about an hour and a half, sometimes a little longer.

Now I have been testing out LXD with Docker installed inside it, as a way of adding a virtualized layer to the whole process. This would let developers keep different testing states of the application environment. It's an experiment :slight_smile: .

But what I've found is that building the Docker images inside the container takes a really long time. I'm talking several hours.

It seems that when it's compiling actual code, like Java or C++, the process is on par with the host. The steps that take a long time, and even seem to stall out, are when yum is installing an RPM.
For example:

    Installing : ffmpeg-3.3.3-1.el7.centos.x86_64    140/149

took several minutes to complete.
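One thing I could probably check from the host during that step is whether yum's time is going into disk sync calls rather than CPU work. A rough sketch, assuming strace is available on the host (the PID below is just a placeholder for the yum process running inside the container):

    # count time spent in sync-related syscalls while yum installs the RPM
    # (12345 is a placeholder PID; Ctrl-C after the slow step prints the summary)
    sudo strace -f -c -p 12345 -e trace=fsync,fdatasync,sync_file_range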

Network speeds, for downloading packages or for curl commands, are just as fast as on the host. The other thing I've noticed is that the install step appears to be single-threaded: with btop open, only one core is in use and maxed out, whereas when running the build process on the host all cores are used.
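To separate CPU from disk, I could also watch I/O on the host while that install step runs; a minimal sketch, assuming the sysstat package is installed:

    # extended per-device stats every 2 seconds; high %util and await during
    # the yum step would point at disk rather than CPU being the bottleneck
    iostat -x 2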

Any thoughts or comments? Maybe I'm looking in the wrong direction.
lxc profile config

config:
  raw.lxc: |-
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
I do have a host folder mapped into the container but I’m not doing any work within that folder.
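For reference, the profile above gets attached to an instance with something along these lines (the profile name "docker" is just a placeholder for whatever I called it, and "foo" is an example instance name):

    # attach the nesting/privileged profile to an existing container
    lxc profile add foo docker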

Thanks!

What storage pool type are you using?

Can you show 'lxc storage show default' output please?

Simple dir storage.

lxc storage show default

config:
  source: /var/snap/lxd/common/lxd/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/instances/foo
- /1.0/instances/ub1
- /1.0/profiles/default
status: Created
locations:
- none

This post seems to be related to the issues I was seeing:
https://discuss.linuxcontainers.org/t/lxd-vm-io-bottleneck-on-zfs-with-a-solution/14170

So I should probably run the containers in a ZFS pool.
I'm still a little confused about how he ran the command that resolved his issue:

    sudo zfs set sync=disabled local/virtual-machines

Did he run that on the host or within the container/VM?
I guess I would also need to install zfsutils-linux on either the host or the container:

    # zfs
    Command 'zfs' not found, but can be installed with:
    apt install zfsutils-linux    # version 2.1.2-1ubuntu3, or
    apt install zfs-fuse          # version 0.7.0-22ubuntu1
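If dir storage is the culprit, my understanding is that the pool would be created on the host by LXD itself (the snap ships its own ZFS userspace tools, though the zfs kernel module still has to be available on the host). Roughly, with the device and names below being placeholders:

    # create a ZFS-backed storage pool on a spare disk or partition
    lxc storage create zpool zfs source=/dev/sdb
    # or a loop-backed pool if no spare device is available
    # lxc storage create zpool zfs size=50GiB
    # copy an existing (stopped) container onto the new pool
    lxc copy foo foo-zfs --storage zpool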

Thanks

That post is about ZFS, but you are using dir storage. That said, what is the backing storage on your system?
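If you are not sure, something like this on the host should show the filesystem backing the dir pool:

    # print the filesystem type under the dir storage pool
    df -T /var/snap/lxd/common/lxd/storage-pools/default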