I have a project with a few bash scripts which build several Docker images. The builds are fairly lengthy (relatively speaking). On the host with Docker, a full run takes about an hour and a half, sometimes a little longer.
Now I have been testing out LXD with Docker installed inside it, as a way of adding a virtualized layer to the whole process. This could let developers keep different testing states of the application environment. It’s an experiment for now.
But what I’ve found is that building the Docker images inside the LXD container takes a really long time. I’m talking several hours.
It seems that when it’s compiling actual code, like Java or C++, the process is on par with the host. The steps that take a long time, and even seem to stall out, are when yum is installing an RPM.
For example:
Installing : ffmpeg-3.3.3-1.el7.centos.x86_64 140/149
took several minutes to complete.
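To put a rough number on it, this is the kind of side-by-side timing I can run on the host and then again inside the LXD container (the image and package here are just placeholders, not the exact ones from my build):

    # time an isolated yum step, once on the host and once inside the LXD container
    time docker run --rm centos:7 yum install -y -q epel-release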
Network speeds, for downloading packages or curl commands, are just as quick as on the host. The other thing I’ve noticed is that the install step seems to be single threaded: with btop open, only one core appears to be in use, and it’s maxed out. When running the build process on the host, all cores are used.
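I don’t think it’s a CPU limit on the instance; this is roughly how I’ve been ruling that out (the container name is a placeholder, and the cpuset path assumes cgroup v1):

    nproc                                   # inside the container, matches the host core count
    cat /sys/fs/cgroup/cpuset/cpuset.cpus   # inside the container, cgroup v1 path
    lxc config show --expanded mycontainer | grep -i limits.cpu   # on the host, prints nothing if no limit is set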
Any thoughts or comments? Maybe I’m looking in the wrong direction.
lxc profile config
config:
  raw.lxc: |-
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
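For context, this is roughly how the profile gets applied and the container launched; the profile name, container name, and image are just the ones I use locally:

    lxc profile create docker
    lxc profile edit docker < docker-profile.yaml   # the YAML above
    lxc launch ubuntu:20.04 builder -p docker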
I do have a host folder mapped into the container, but I’m not doing any work in that folder.
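That mapping was added roughly like this (the paths and device name are placeholders):

    lxc config device add builder projects disk source=/home/me/projects path=/mnt/projects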
Thanks!