Using apt cache (proxy) during builds with `distrobuilder`

I’m new to distrobuilder, so apologies if I’m missing something obvious, but I’m trying to figure out how to use a local apt cache while developing an image that I’m hoping to be able to create with distrobuilder.

Basically, I want to start with a base Ubuntu image, add several PPAs, configure various settings, and end up with a squashfs file system that can be served to diskless clients (via LTSP and nbd).

But while I’m working on creating and iterating through changes to the image creation process, I don’t want to keep downloading base images, updates, packages, etc. over my (metered) network connection. I already have an apt-cacher-ng instance running, so I’d like distrobuilder to utilize it. If that’s not possible, is there some other caching mechanism that distrobuilder supports?

I’ve seen references to configuring a proxy with the #cloud-config tag in the default profile’s user.user-data section, but it’s not clear to me that this applies while distrobuilder is creating an image.
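For what it's worth, the cloud-config snippet I've seen looks roughly like this (hostname is just an example; note that cloud-init only processes this when an instance boots from the finished image, so it presumably wouldn't affect the build itself):

```yaml
#cloud-config
# Sketch of the cloud-init apt proxy setting; "cache" is my
# apt-cacher-ng host, applied at instance boot, not at image build.
apt:
  proxy: http://cache:3142
```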

Any suggestions, pointers, etc. would be appreciated!

I’m no expert with this tool, but here is my understanding of it.

Basically in most cases, you need one to build one.

That is, if you want to build a Fedora 31 container, you will have to do it on a Fedora 31 host (at least for changes that can only be done with a distro-specific tool such as dnf; if your host is Ubuntu and your rootfs is Fedora 31, the host will not be able to run dnf).
So if your host uses a proxy by default to access the distro repos, it should also use it when running distrobuilder. That is not from experience, though; it is just a guess.

For more mundane changes such as removing files or creating some config files, there is no need to run a distro specific tool and this problem does not exist.
As for the image, it has to be downloaded over HTTP from a fixed location (set in the code). The only way to use a cache with build-lxd could be some name-resolution (DNS) trickery, i.e. pointing the fixed hostname somewhere else using a hosts file or something like that.
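The hosts-file idea might look like this (purely illustrative: the address would be a local web server holding a copy of the image at the path the hard-coded URL expects, and the hostname depends on which downloader is in use):

```text
# /etc/hosts on the build host (illustrative sketch)
# 192.0.2.10 is a hypothetical local mirror serving the cached image
# on port 80; the hostname is an example, not the actual fixed location.
192.0.2.10   cloud-images.ubuntu.com
```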

Another way could be to download the needed file from its true location and use pack-lxd instead of build-lxd. This could work with an important proviso: building the container changes the image, so if you use pack-lxd you need a script that copies the pristine original image to a given place, mounts it (for example at /mnt/temp), and then runs

distrobuilder pack-lxd mydistro.yaml /mnt/temp

after which your script should unmount the modified image and, probably by default, delete it, so you don't risk doing another distrobuilder run with this changed image.
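A minimal sketch of that copy/pack/cleanup loop, with all paths and names hypothetical (for simplicity this unpacks a tarball copy of the pristine rootfs rather than loop-mounting an image; set DRY_RUN=1 to print the commands instead of executing them):

```shell
# Wrapper around "distrobuilder pack-lxd" that always starts from a
# pristine rootfs copy and deletes the modified tree afterwards.

run() {
    # Execute a command, or just print it when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

pack_from_pristine() {
    pristine=$1   # e.g. /srv/images/ubuntu-base.tar.xz (hypothetical)
    workdir=$2    # e.g. /mnt/temp
    spec=$3       # e.g. mydistro.yaml

    run mkdir -p "$workdir"
    # Unpack a fresh copy every run: pack-lxd modifies the tree in place.
    run tar -xf "$pristine" -C "$workdir"
    run distrobuilder pack-lxd "$spec" "$workdir"
    # Remove the modified tree so a later run cannot reuse it by mistake.
    run rm -rf "$workdir"
}
```

Usage would be something like `pack_from_pristine /srv/images/ubuntu-base.tar.xz /mnt/temp mydistro.yaml` as root.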

Luckily I want to build an Ubuntu rootfs on an Ubuntu server, so the “need one to build one” won’t be an issue.

This is what I was hoping, and I do have an /etc/apt/apt.conf.d/01proxy file that sets Acquire::HTTP::Proxy to my local apt-cacher-ng instance, but when I run distrobuilder, it does not seem to use the cache… :thinking: Every time it runs, it starts (slowly) pulling everything over the net again…

Starting to make some progress!

I was able to add my cache to the url in the .yaml file like this:

  downloader: debootstrap
  same_as: gutsy
  url: http://cache:3142/

and the apt-cacher-ng transfer statistics are now showing activity. :+1:

There is a snap package of distrobuilder, and there is the option to compile it from source. If you compile from source, you can tinker at will.
Which one do you use?

Because if you are using the snap package, it is likely that it does not respect the /etc/apt/apt.conf.d/01proxy configuration. The snap package uses classic confinement, which is why I say “likely”.

Is your ubuntu.yaml based on the published examples?

I’m using the Snap package (trying to minimize packages installed on the bare-metal machine hosting the containers; I wasn’t sure whether distrobuilder would work inside a container, so I figured the Snap package was the way to go).

I’ve looked at several example .yaml files to get this far. As mentioned above, adding the cache to the url for the debootstrap step works.

I also determined that I need to add a sources.list to the repositories definition to update the one added during the source step.
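For reference, such a repositories entry might look something like this (a sketch modeled on the published Ubuntu examples; the suites and components here are illustrative):

```yaml
packages:
  manager: apt
  repositories:
    # Overwrites the sources.list created during the source step.
    - name: sources.list
      url: |-
        deb http://archive.ubuntu.com/ubuntu {{ image.release }} main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu {{ image.release }}-updates main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu {{ image.release }}-security main restricted universe multiverse
```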

Then, if I add this to my post-unpack action:

  echo 'Acquire::HTTP::Proxy "http://cache:3142";' >> /etc/apt/apt.conf.d/01proxy
  echo 'Acquire::HTTPS::Proxy "false";' >> /etc/apt/apt.conf.d/01proxy

then the cache does get used by the packages step. :+1:

I also added a post-packages action to rm /etc/apt/apt.conf.d/01proxy.
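Put together, the two actions might look like this in the .yaml file (a sketch; distrobuilder’s trigger names are post-unpack and post-packages, and “cache” is my apt-cacher-ng host):

```yaml
actions:
  - trigger: post-unpack
    action: |-
      #!/bin/sh
      # Point apt inside the rootfs at the local cache.
      echo 'Acquire::HTTP::Proxy "http://cache:3142";' >  /etc/apt/apt.conf.d/01proxy
      echo 'Acquire::HTTPS::Proxy "false";'            >> /etc/apt/apt.conf.d/01proxy
  - trigger: post-packages
    action: |-
      #!/bin/sh
      # Remove the proxy setting so it does not ship in the final image.
      rm -f /etc/apt/apt.conf.d/01proxy
```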

So now I can make small adjustments to my .yaml file, run distrobuilder, and then examine the resulting rootfs much more quickly… and without wasting metered bandwidth.

Compiling distrobuilder from source is long and boring, but it does not interfere with your system packages; you can set it up entirely as a standard user (though you have to run it with sudo).

I don’t think it would work inside a container, because you have to be root on the host, so it should not work inside an unprivileged container. I had the same doubt when I first tried distrobuilder.
The distrobuilder documentation reads like the doc of an internal tool, aimed at people ‘in the know’ (that is, Ubuntu developers and distro packagers), definitely not a doc for new, naive beginners. It’s barely more than a collection of sample commands; there is no comprehensive presentation and no specification (but that’s pretty usual these days).

The issue with distrobuilder not being able to run in a container has to do with specific YAML files that require the use of mknod, such as the one that creates an Ubuntu container image. Indeed, you can create an Alpine container image with distrobuilder inside a LXD container.

The issue with mknod will probably be resolved soon, according to the LXD 3.13 release announcement. It might already work on an Ubuntu host if you have Linux 5.0 or newer.

Compiling distrobuilder should be easy. I wrote up the steps below, and the distrobuilder project also has instructions.
Here is a full compilation walkthrough of distrobuilder, inside a LXD container.

lxc launch ubuntu:18.04 distrobuilder
lxc exec distrobuilder -- sudo --user ubuntu --login

Then, in the container.

sudo apt update
sudo apt install -y golang-go debootstrap rsync gpg squashfs-tools
go get -d -v
cd ~/go/src/

The generated binary is found at ~/go/bin/distrobuilder. It is a self-contained binary and can be copied to the host.

Thanks Simos. It’s good to know that building the latest distrobuilder is easy, and possible to do it in a container.

So far, the changes I mentioned above are working well to use my local apt cache. Unless anybody can provide a more “elegant” solution, I’ll assume this is good enough.