Overview
It can be a major challenge to create a development environment suitable for efficient testing of virtualization across multiple platforms. Fortunately, LXD can make the process much easier.
LXD can quickly and easily create containers that allow for nested virtualization. LXD also allows directories from the host system to be passed through to containers, which can be very convenient for development.
This post describes an environment based on LXD 5.12, running on Ubuntu 22.04 “Jammy”. To simplify LXD integration, the OS was installed on a ZFS root filesystem (with encryption enabled).
Initial LXD Configuration
First, LXD needs to be initialized. In this case, prompts were followed to create a storage pool referencing a local ZFS rpool. (Because the root disk in this situation is ZFS-based, there is no need for LXD to create a loopback device for full functionality.)
Example lxd init run
$ lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (ceph, dir, lvm, zfs, btrfs) [default=zfs]: zfs
Would you like to create a new zfs dataset under rpool/lxd? (yes/no) [default=yes]: yes
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: no
Would you like the LXD server to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks: []
storage_pools:
- config:
    source: rpool/lxd
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
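As an aside, the printed preseed can be saved and replayed to reproduce the same configuration elsewhere. The following is only a sketch, assuming the YAML above was saved to a hypothetical file named lxd-preseed.yml:
cat lxd-preseed.yml | lxd init --preseed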
The lxdbr0 interface can subsequently be configured.
Commands for lxdbr0 configuration and default profile attachment
lxc network create lxdbr0 \
  ipv4.address="100.65.0.1/16" \
  ipv4.nat="true" \
  ipv4.dhcp="true" \
  ipv4.dhcp.ranges="100.65.1.1-100.65.1.254" \
  ipv6.address="none" \
  ipv6.nat="false" && \
lxc network attach-profile lxdbr0 default
In some cases, it may be problematic for LXD to propagate its DNS service to your system. For example, this occasionally interferes with the DNS servers provided by VPN clients. To tell LXD not to manage DNS entries for its resources, you can run:
lxc network set lxdbr0 dns.mode=none
The above is enough to get a container running and connected to the network with the default profile.
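For example, a throwaway container can be used to confirm that networking behaves as configured. This is only a sketch: net-test is an arbitrary name, 100.65.0.1 is the lxdbr0 gateway configured above, and the short sleep gives the container time to obtain a DHCP lease.
lxc launch ubuntu:jammy net-test && \
sleep 5 && \
lxc exec net-test -- ping -c 3 100.65.0.1 && \
lxc delete -f net-test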
Passing Through Virtualization Devices
In order to make it easy to pass through the necessary devices, it makes sense to create a virt profile to encapsulate the security settings and devices required to test virtualization.
virt profile configuration
lxc profile create virt && \
lxc profile set virt security.nesting=true && \
lxc profile device add virt kvm unix-char source=/dev/kvm && \
lxc profile device add virt vhost-net unix-char source=/dev/vhost-net && \
lxc profile device add virt vhost-vsock unix-char source=/dev/vhost-vsock && \
lxc profile device add virt vsock unix-char source=/dev/vsock
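The resulting profile can then be inspected to verify the security setting and device entries (an optional sanity check):
lxc profile show virt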
Some notes on profile composition
A newly created LXD profile does not contain any default values for the storage pool or network. This is fine, because LXD allows for profile composition: profiles for various purposes can be defined separately and later composed together. In this case, composition allows virtualization-specific directives to be placed only in the virt profile, decoupling them from the choice of network and storage pool (contained in the default profile).
Note that it is possible to create a single profile which encapsulates everything necessary to start the instance. For example, to configure a specific network device and storage pool on the virt profile, the following commands could be used:
lxc network attach-profile lxdbr0 virt && \
lxc profile device add virt root disk pool=default path=/
The remainder of this post assumes this has NOT been done; rather, the preferred network and storage pool are configured in the default profile.
After creating the virt profile, existing containers using the default profile can then be enabled for virtualization by using a command such as lxc profile add <container> virt, which adds the virt profile alongside the container's existing profiles. However, the containers will need to be restarted in order for the new security settings to take effect.
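For instance, given a hypothetical existing container named mycontainer, the following would add the virt profile and restart the container so the new security settings take effect:
lxc profile add mycontainer virt && \
lxc restart mycontainer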
From this point onward, new containers can be created with the virt profile. For example, lxc launch ubuntu:jammy jammy -p default -p virt would create a container called jammy based on the official Ubuntu 22.04 jammy image, applying the settings from both the default and virt profiles.
Testing Nested Virtualization
Nested virtualization can be tested easily by running lxd inside a container with the virt profile applied. For example, a jammy container can first be created, using the -p (or --profile) option to select the virt profile:
lxc launch ubuntu:jammy jammy -p default -p virt
lxc shell jammy
Inside the jammy container, attempt to launch a bionic VM:
lxd init --auto
lxc launch images:ubuntu/bionic/cloud bionic --vm
# wait for the virtual machine to start
lxc shell bionic
Note here that the images:ubuntu/bionic/cloud image is used in preference to the ubuntu:bionic image, because the former has the lxd-agent preinstalled (which is necessary to manage the VM with LXD).
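As a quick, optional check: any lxc exec against the VM will only succeed once lxd-agent is up and responding, so something as simple as the following confirms the agent is reachable:
lxc exec bionic -- hostname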
Creating a User-Specific Profile
Now that a profile has been created which allows for nested virtualization, it might be convenient to utilize some of LXD’s other features in order to make it easier to develop inside the container.
Privileged Containers
For development purposes, it can be convenient to run “privileged” containers. To LXD, this means that the container’s UID and GID mapping will be consistent with the host. This has the advantage of being able to easily share the host filesystem with the container without additional access controls. However, this practice is considered risky because it reduces the degree of isolation between the container and the host. Use privileged containers at your own risk.
Be aware that making a container privileged impacts its UID/GID mapping, which effectively changes the meaning of all files’ ownership within the container. For this reason, it is simplest to apply security.privileged=true at launch time.
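For example, the setting can be supplied directly with -c when launching (a sketch only; priv-test is an arbitrary container name):
lxc launch ubuntu:jammy priv-test -p default -p virt -c security.privileged=true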
Using $HOME Inside a Container
It might be useful to have a profile named after the local $USER, which mounts $HOME into the container while also making the container privileged.
$USER profile configuration
lxc profile create "$USER" && \
lxc profile set "$USER" security.privileged=true && \
lxc profile device add "$USER" "home-$USER" disk source="$HOME" path="$HOME"
Adding cloud-init User Data
As it is currently written, one problem with this profile is that the user defined inside the container is inconsistent with the local system. This can be solved by adding cloud-init user data.
Creating a user-specific cloud-init configuration
The following shell snippet will create a file called $USER-cloud-config.yml which can be set in the $USER profile:
cat <<EOF > $USER-cloud-config.yml
#cloud-config
users:
  - name: "$(id -u -n)"
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: [root, sudo, staff]
    homedir: "$HOME"
    passwd: "$(sudo getent shadow "$USER" | cut -d':' -f 2)"
    lock_passwd: false
    no_create_home: true
    shell: /bin/bash
    uid: $(id -u)
disable_root: false
package_update: true
package_upgrade: true
packages:
  - openssh-server
EOF
This example cloud-init configuration sets up a user inside the container generally matching the properties of the current user. In addition, it will install, update, and upgrade packages upon container launch. In particular, in this example the openssh-server package is installed. (It can be useful to have the SSH server available, which makes it possible to run something like ssh -A <container-ip> to make use of the SSH agent.) Of course, this list of packages can be customized to suit individual developer needs.
Note that this shell snippet uses sudo to propagate $USER's hashed password into the container. This may prompt for a password, but is optional and can be removed.
Given a file containing a cloud-init configuration specific to the current $USER, it seems reasonable to apply it as user-data. This can be done as follows for the $USER profile (assuming it was generated per the shell snippet in the details section above):
lxc profile set "$USER" user.user-data "$(cat "$USER"-cloud-config.yml)"
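To double-check that the profile now carries the configuration, the key can be read back (optional):
lxc profile get "$USER" user.user-data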
Please see the official documentation for more details on the integration between cloud-init and LXD.
Launching the Container
As an example, the following command would launch a focal container with the default, virt, and $USER profiles:
lxc launch images:ubuntu/focal/cloud focal -p default -p virt -p "$USER"
Then, lxc shell focal could be used to gain a root shell inside the container, and su - <your-username> could be used to become an unprivileged user.
In addition, because an OpenSSH server should be running in the container (per the cloud-init user data), lxc list can be used to obtain the container’s IP address, and ssh -A <container-ip> could be used to gain an unprivileged shell inside the container, with SSH agent forwarding. (Note that your own SSH key must be present in ~/.ssh/authorized_keys, which is now mapped into the container, for this to work.)
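Putting those two steps together might look something like the following sketch; it assumes the container has a single IPv4 address and that the lxc list output matches this parsing, so adjust as needed:
CONTAINER_IP="$(lxc list focal -c 4 -f csv | cut -d' ' -f1)"
ssh -A "$CONTAINER_IP"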
Summary
The default LXD configuration profile can be composed with a profile allowing for nested virtualization. With an additional user-specific profile (and a little bit of cloud-init), seamless development and test environments for virtualization can be created.
Thank you to the community for making all of this possible!
References
- LXD 4.0 quick recipe: LXC and KVM coexisting
- Weekly status #251 (“Add support for running LXD VMs inside LXD containers”)
- LXD Documentation
- cloud-init Reference documentation