Nesting Nomad, attempting to access Web UI

I’m testing out some of HashiCorp’s applications, with the current main focus on Nomad. I will probably be integrating Consul into it and may eventually include their Vault app.

My current test environment is somewhat limited and running on a Windows physical machine. While I’d prefer Linux as the base, that’s not presently an option, so I spun up a Linux VM.

Here’s a look at some of the setup so far:

  • Windows 10 physical machine -> VirtualBox -> Ubuntu 20.04 VM
  • Ubuntu VM is running an Ubuntu LXD container with nesting enabled
  • LXD container currently has Docker, Nomad, Consul, and is running Redis inside Docker

I could always omit some layers and just use Vagrant and VirtualBox (I tried that with WSL), but I am at least temporarily setting that aside and placing a priority focus on using LXD.

I’d like to run Oracle Linux as the test bed for Nomad and the Docker containers, although for testing purposes Ubuntu seems to be a more welcoming starting point, at least with LXD.

Here’s a look at what’s going on in the LXD container:

  • Consul agent is currently running in dev mode, so it can run server & client together
  • Nomad agent is also currently running in dev mode, same reason
  • Job is a generic example.nomad that spins up Redis

A few notes on progress along the way:

  • The Redis allocation took a while to change from “pending” to “running”, but it deployed successfully
  • Successfully tested Nomad scaling and upgrading Redis via jobspec

I am currently unable to access the Nomad web UI from the physical machine. There is probably a way around the multiple layers to still access it; it may just require getting the correct address and port forwarding set up. I also set up Nomad on the physical machine for a quick test, which likely introduces conflicts, but I am opting to keep the tests contained.

The Nomad web UI is served on the agent’s HTTP port, which defaults to 4646, and is normally accessed via localhost, which may not be an option with these layers. I am guessing this may require digging into the layers: from the initial VM (using a Bridged Adapter in VirtualBox), to the LXD container (which has its own addresses, both for the container itself on a basic lxdbr0 setup and for Docker), to the Docker containers (testing with Redis) running inside.
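One way to map out those layers is to record the address at each hop. A rough sketch of the checks involved (the container name `mycontainer` is an assumption, not from the original post):

```shell
# On the Ubuntu VM: list LXD containers and their lxdbr0 addresses
lxc list

# Inside the LXD container: note its own address and the Docker bridge
ip addr show eth0
ip addr show docker0

# Inside the LXD container: confirm what address Nomad's HTTP port
# is bound to (dev mode binds to 127.0.0.1 by default)
ss -tlnp | grep 4646
```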

A few things I’m curious about:

  • I am guessing that I will need to access the LXD container directly if I want to load the Nomad web UI on the physical machine.
  • I am also curious how to access, again from the physical machine, any web interfaces for Docker containers running nested within the LXD container.
  • After that, I’d like to be able to expose applications securely to the public, for example if I spin up a website via Docker in that setup.

Any ideas on how I can access those applications from my web browser?


My understanding is that Docker containers expose their published service ports onto the ‘host’ (a nested LXD container in this case). So, if I understand correctly, both of your questions are effectively the same: how do I access ports on my container?

The answer depends on the type of NIC you use in your container. If your container is connected to the default private lxdbr0 bridge network, then by default this does not allow inbound connections from the external network.

If you would prefer to hide your containers (and docker services) behind a single IP on your external network, then you can use the proxy device to forward certain host ports into certain containers.
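A minimal proxy-device sketch (the container name `mycontainer` and the device name `nomad-ui` are assumptions; 4646 is Nomad’s default HTTP port):

```shell
# On the LXD host (the Ubuntu VM): forward the host's port 4646
# into the container's loopback, where the dev-mode Nomad agent listens
lxc config device add mycontainer nomad-ui proxy \
    listen=tcp:0.0.0.0:4646 \
    connect=tcp:127.0.0.1:4646
```

Because the proxy device connects from inside the container’s network namespace, it can reach services bound only to the container’s 127.0.0.1, which is handy for dev-mode agents.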


If you want to actually connect your container to the external network, as if it were a separate machine from the host with its own MAC address, then you can set up a standalone bridge, and then use the bridged NIC type to connect to it.
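A sketch of attaching the container to such a bridge, assuming a standalone bridge named `br0` already exists on the LXD host and the container is called `mycontainer`:

```shell
# Attach the container's eth0 to the standalone bridge br0,
# overriding the default lxdbr0-connected NIC from the profile
lxc config device add mycontainer eth0 nic \
    nictype=bridged \
    parent=br0 \
    name=eth0
```

The container then picks up an address from the external network (e.g. via its DHCP server) and appears as a separate machine on that network.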

See the LXD documentation on the proxy device and on bridged networking.

In your case, using VirtualBox complicates things, as it has its own networking options, which you will also need to configure to ensure that packets are passed through to the LXD host.

I believe it has its own flavour of port forwarding you could use, or you can connect it to the external network directly as well. Be aware, however, that virtual machine managers often have a MAC filtering feature enabled by default that will only allow frames from the VM NIC’s own MAC address. This often causes issues for people trying to use an external bridge inside the VM, because each container has its own MAC address and its frames get filtered by the virtual machine manager.
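If the VM were given a NAT adapter instead of (or alongside) the Bridged Adapter, VirtualBox’s port forwarding could be sketched like this (the VM name `ubuntu-vm` and the rule name are assumptions):

```shell
# On the Windows host: forward host port 4646 to guest port 4646
# on the VM's first NAT adapter (rule format:
# name,protocol,hostip,hostport,guestip,guestport)
VBoxManage modifyvm "ubuntu-vm" --natpf1 "nomad-ui,tcp,,4646,,4646"
```

Note that `modifyvm` requires the VM to be powered off; combined with an LXD proxy device inside the VM, this would chain the forwarding all the way from the Windows host down to the Nomad agent.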