The LXD tutorials of Simos

For a long time we couldn’t do this because the server only had one constant NotFound return value, which didn’t let us provide additional context. This has recently been fixed by @monstermunchkin, so in theory the right NotFound(nil) could be tracked down in the code base and the error message tweaked to something more sensible.

Added the following four-part series on running LXD on an AMD EPYC 24-core baremetal server (48 threads).

  1. A closer look at AMD EPYC baremetal servers at
  2. Booting up the AMD EPYC baremetal server at
  3. Configuring LXD on the AMD EPYC baremetal server at
  4. Benchmarking LXD on an AMD EPYC server at

The third post is about setting up LXD on the server. Here I tried LXD 3.0.2 (from bionic-backports).

The fourth post is about benchmarking LXD with lxd-benchmark.

Some interesting LXD information:

  1. Creating more than 1024 containers (with networking) is problematic. There is a Linux kernel hard limit on the number of ports (interfaces) that can be attached to a bridge. To bypass the limit, you need to recompile the kernel or avoid using a bridge.
  2. If you create more containers than your computer can handle (memory, perhaps CPU), then you trigger the type of kernel error shown below. Here the memory was exhausted, but it could also be an issue with too many processes for the scheduler to handle. I would need to reproduce this in order to grab some useful logs.
[ 1450.993972] INFO: task systemd:1 blocked for more than 120 seconds.
[ 1451.000279] Tainted: P O 4.15.0-36-generic #39-Ubuntu
[ 1451.007094] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1451.014957] systemd D 0 1 0 0x00000000
[ 1451.014960] Call Trace:
[ 1451.014969] __schedule+0x291/0x8a0
[ 1451.014971] schedule+0x2c/0x80
[ 1451.014973] schedule_preempt_disabled+0xe/0x10
[ 1451.014974] __mutex_lock.isra.2+0x18c/0x4d0
[ 1451.014976] __mutex_lock_slowpath+0x13/0x20
[ 1451.014978] ? __mutex_lock_slowpath+0x13/0x20
[ 1451.014979] mutex_lock+0x2f/0x40
[ 1451.014982] proc_cgroup_show+0x4c/0x2a0
[ 1451.014985] proc_single_show+0x56/0x80
[ 1451.014988] seq_read+0xe5/0x430
[ 1451.014990] __vfs_read+0x1b/0x40
[ 1451.014991] vfs_read+0x8e/0x130
[ 1451.014992] SyS_read+0x55/0xc0
[ 1451.014995] do_syscall_64+0x73/0x130
[ 1451.014997] entry_SYSCALL_64_after_hwframe+0x3d/0xa2

Added tutorial on distrobuilder,

It’s an introduction to distrobuilder; it shows how to install it and then create a container image for Ubuntu.

It expands a bit on the content found at

In this tutorial we see how to create a minimal configuration file that can be used to generate a container image.

Sort of like a HelloWorld for distrobuilder.

The generated container image is based on Alpine Linux and takes only a couple of seconds to generate.

Hello @simos, your blog seems down (timeout).

Indeed, there was a downtime of about 3 hours. Now it is back up.


I joined this forum just to say thank you for this post and the hours of work that went into this.


Added the following tutorial,

It is a dedicated tutorial on getting Steam to work inside a LXD container.
The reason I wrote it is that there are many GitHub issues about getting Steam to work on Linux.
The audience is Steam users who do not need to learn much about LXD and just want to get their game running quickly.

I noticed that I had a five-month hiatus in writing tutorials. Hmmm.


Added the following tutorial,

When you launch a container, it takes several seconds for systemd to complete all startup tasks and become idle. The lxc launch command returns immediately, even though the container has not fully started yet.

In this post we create a command that we can run against the container, and it returns only once the container has completed booting.
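A minimal sketch of such a check, assuming a systemd-based image and a container named mycontainer (a placeholder name), is to poll systemd from the host:

```shell
# Poll until systemd inside the container reports that it has finished
# booting. "mycontainer" is a placeholder container name.
# "degraded" is accepted too, so a single failed unit does not block us.
until lxc exec mycontainer -- systemctl is-system-running 2>/dev/null \
        | grep -qE 'running|degraded'; do
    sleep 1
done
echo "Container has completed booting."
```

The loop form works even on older systemd versions that lack the `--wait` flag of `systemctl is-system-running`.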


Added the following tutorial,

It is about using the Kali container image that was recently added to the images: repository.
The emphasis is on WiFi and monitor mode.

Added the following tutorial,

The first Google result on proxy devices is an old, archived Reddit post (which cannot be edited), and it mentions localhost when you create a TCP proxy device. I hope this post appears first instead.
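As a sketch (the container and device names are placeholders), a TCP proxy device that exposes a container's port 80 on the host looks like this; note that connect refers to the container's own loopback, not the host's:

```shell
# Forward connections on host port 80 to port 80 inside the container.
# "mycontainer" and "myport80" are placeholder names.
lxc config device add mycontainer myport80 proxy \
    listen=tcp:0.0.0.0:80 \
    connect=tcp:127.0.0.1:80
```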

Added a tutorial on how to set up cloud-init in a LXD profile in order to add two network interfaces to a LXD container. One interface is on lxdbr0 and the other is on macvlan.
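A rough sketch of the device side of such a profile (the profile name twonics and the host interface enp3s0 are assumptions; the tutorial itself also uses cloud-init to configure addresses on both interfaces):

```shell
# Create a profile with two NICs: one bridged on lxdbr0, one macvlan.
# "twonics", "mycontainer", and the parent interface "enp3s0" are
# placeholders; adjust them to your system.
lxc profile create twonics
lxc profile device add twonics eth0 nic nictype=bridged parent=lxdbr0 name=eth0
lxc profile device add twonics eth1 nic nictype=macvlan parent=enp3s0 name=eth1
lxc launch ubuntu:18.04 mycontainer --profile default --profile twonics
```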


Added a tutorial on how to test the recent php-fpm security vulnerability.
It is a security vulnerability that affects nginx+php-fpm when nginx is configured in a very specific way for php-fpm. Sadly, the Nextcloud documentation describes that very bad configuration.

Netdata is a real-time monitoring tool for servers. As a tool, it is very popular in the real-time monitoring space. You get more than 2000 metrics presented in real time to help you figure out whether your server is misbehaving.

In this post we install it in a LXD container

  1. in order to test-drive how it works in a safe environment (not on the host)
  2. and get a feeling of the separation between the host and a LXD container. For example, inside the container, Netdata shows only the container's own load.

Netdata understands containers (LXC and LXD), and can show their metrics when you install it on the host. We will install it on the host in a future tutorial.
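A hedged sketch of the test-drive (the container name is a placeholder, and the kickstart URL and its flags refer to Netdata's installer script at the time of writing; they may differ in newer versions):

```shell
# Launch a throwaway container and install Netdata inside it.
# "--dont-wait" skips the interactive prompts of older kickstart versions.
lxc launch ubuntu:18.04 netdata-test
lxc exec netdata-test -- bash -c \
    "curl -Ss https://my-netdata.io/kickstart.sh | bash /dev/stdin --dont-wait"
# The Netdata dashboard then listens on port 19999 inside the container.
```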


LXD and virtual machines,


These have been great. Thank you @simos.

I would love an update on combining the new VM feature with the ability to run GUI apps in LXD, two topics you have already written about. Are there significant drawbacks or missing functionality when attempting to combine the two features?

I hope to dive into researching this area soon.

Thanks Mark!

LXD VMs would be great to run GUI applications, or even a separate desktop environment.

When a LXD VM is launched, it is pretty separate from the host (and LXD).
To be able to do, for example, lxc file pull/push, we need to install a companion service in each VM, so that it works as a middleman between LXD and that VM.
That companion service is LXD Agent.

At the moment, the LXD Agent can handle lxc file and lxc exec, and I suppose it is a matter of time before more advanced features such as LXD proxy devices arrive.

Having said that, if you do not require hardware acceleration, and want either apps or even a desktop to run fully isolated in a LXD VM, then you can use LXD with X2Go. Your attack surface would be the X2Go client component.

When performance is a hard requirement (e.g. games, or software that requires GPU acceleration), it needs extra investigation. It should be possible to share the X11 Unix socket or X11 TCP port with the VM, but that breaks much of the security guarantees of the VM. And how do we share it so that we get the best performance? (SSH does encryption, so doing this over SSH is probably bad for performance.)

Alternatively, there is virgil3d and the ability to offer a paravirtual GPU to a VM. LXD launches a qemu-system-x86_64 command with many parameters. We could probably append -vga virtio -display gtk,gl=on there to enable the paravirtual GPU, so that the whole setup is quite transparent to the user. This direction is promising. It does not look like it achieves the GPU performance of the host, but others have reported performance similar to VMware's virtual GPU.
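One way to experiment with this in LXD is the raw.qemu config key, which appends extra arguments to the QEMU command line (a sketch; "myvm" is a placeholder, and whether these particular display flags behave well on a given setup is untested here):

```shell
# Append the paravirtual GPU flags to a LXD VM named "myvm" (placeholder).
# The "--" stops lxc from parsing the flags as its own options.
lxc config set myvm raw.qemu -- "-vga virtio -display gtk,gl=on"
lxc restart myvm
```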

If you have a separate GPU, then it should be possible (in the near future?) to pass it through as a dedicated GPU for a single LXD VM.


There is some more information about their GUI plans in Trying LXD virtual machines from stgraber, if you haven’t seen it.



Here is the text for completeness:


Thanks for the extra context @simos and @turtle0x1! I replied in the other thread, since that thread is focused on VMs. Trying LXD virtual machines