How to access webserver running in container from outside of my host?

probably a small networking matter

I run an application in a Container which is being accessed by users through a browser (port 80, served by nginx).
It works when I use the browser of my host with the container's IP ( but doesn't when I'm outside of the host machine.

host is a Virtual Box xubuntu 16.04 (bridged networking) (original post said NAT, sorry)
I can ping the host's IP (
LXD 2.12

Just guessing, I'd come up with the idea that I might have to run some sort of
lxc network attach ... command?

The output of lxc network --help raises more questions than answers, unfortunately.
total beginner here, so sorry if it’s a silly question. Can someone advise?

How is the networking set up on your VM in Virtual Box? Bridged or NAT'd? According to this page, NAT mode won't allow a VM with multiple MAC addresses to pass traffic (as is the case with LXD containers):

So, make sure your VM is set for bridged mode (not NAT).

BTW - VMWare has the same issue. The fix is to enable promiscuous mode on the vSwitch.

I have to admit that I don’t really get whether NAT or bridged works better.

However, I don't think that's the root cause of my problem … I can reach anything from within my container, so networking in general seems to be working from the inside to the outside (I am actually on bridged networking in the VirtualBox VM [sorry, that was stated wrong in my post]).

So the question is: how do I reach the container from the outside world (in my current case that would be my office's internal network)?

If I understand you correctly, you are running an LXD container with nginx (port 80). Is the container network using a bridged or macvlan interface (lxc config show )?

Sounds like you simply need some NAT run on the LXD server to port-forward all port-80 requests to your container. If so, look at this page:

Essentially, you probably need to run the following on the LXD server:

iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT--to-destination <container_ip>:80

This says to listen on port 80 on the LXD server (via eth0) and redirect that traffic to your container_ip on port 80.
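Once a rule like that is in place, a quick sanity check on the LXD host is to list the NAT table and confirm packet counters increase when you hit the port from outside. A minimal sketch (the sysctl line is only needed if forwarding isn't already enabled; LXD's default bridge setup usually enables it):

```shell
# Show the PREROUTING rules in the NAT table, with packet/byte counters,
# so you can see whether incoming traffic actually matches the DNAT rule
sudo iptables -t nat -L PREROUTING -n -v --line-numbers

# Make sure the host is allowed to forward packets between interfaces
sudo sysctl -w net.ipv4.ip_forward=1
```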

Hope this helps.

OH, and per your original post, "host is a Virtual Box xubuntu 16.04 (NAT)"

From what I read, you need to use a Bridged interface in VBox as the NAT interface won’t allow multiple MAC addresses to live behind one IP address.

I am actually using bridged … sorry, my initial post was wrong in that regard (corrected now)

eth0 would be whatever the standard interface on my LXD host is? (enp0s3 in my case)

Correct. Use whichever interface you have on the LXD host for the “-i ” option. In your case, I suspect it is enp0s3
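If you're unsure which interface name to use, the device behind the host's default route is usually the outward-facing one. A small sketch (the example output line is illustrative, not from this machine):

```shell
# The "dev" field of the default route names the outward-facing interface;
# that name is what goes after -i in the iptables rule
ip route show default
# e.g.: default via 192.168.1.1 dev enp0s3 proto dhcp metric 100
```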

Thus, the iptables statement should read: iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 80 -j DNAT--to-destination :80

thanks. 2 little things …

  1. with a space before --to-destination the command works much better :slight_smile:
    iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 80 -j DNAT --to-destination :80
  2. still can’t ping the container’s IP (no matter whether my VirtualBox is on NAT or bridged networking)

EDIT: also tried to apply the command with -i eth0 (which is the name of the network adapter inside of the container) … same non-working result

Hi Gunnar,

Are you able to get to the web page of your container? If so, the NAT is working properly. If not, something else is wrong.

As for the ping command: do your clients know how to get to the container’s IP address? From your original post, your LXD server’s IP is and the container’s IP is . Those are on two different subnets. For the ping command to work from your local network to the container’s IP, those PCs must have a route to that network.
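As an illustration of adding such a route (the addresses here are hypothetical stand-ins, since the real ones are redacted above; substitute your container subnet and LXD host address):

```shell
# Hypothetical values: container subnet 10.158.146.0/24 sits behind the
# LXD host, whose LAN address is 192.168.1.50.
# Run on a client PC (or, better, once on the office router) so traffic
# for the container subnet is sent via the LXD host:
sudo ip route add 10.158.146.0/24 via 192.168.1.50
```

For this to work the LXD host must also forward packets between its LAN interface and lxdbr0.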

Make sense?

No, that (getting to the webpage of the container) is exactly what I am trying to achieve. I can reach it using the browser of my container's host (which is a member of my local network as well, but that somehow doesn't seem to matter). Do I have to do NAT port forwarding in the VirtualBox settings, maybe?
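For reference: VirtualBox-level port forwarding only applies when the VM's adapter is in NAT mode; with a bridged adapter the VM already has its own LAN address and no VBox forward is needed. A sketch of the NAT-mode variant, with a placeholder VM name:

```shell
# "xubuntu-vm" is a placeholder for the actual VirtualBox VM name.
# Forwards TCP port 8080 on the physical host to port 80 in the VM
# (rule format: name,protocol,hostip,hostport,guestip,guestport):
VBoxManage modifyvm "xubuntu-vm" --natpf1 "web,tcp,,8080,,80"
```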

No, they don't, and neither do I. I guess that's what I have to understand, and then build a route to that network.

On our office's server I have a KVM machine running, and that machine has a static IP which matches the overall network ( I tried to do a similar setup in my container, but that only got me shut down completely (I mean I couldn't even ping anything anymore from inside the container [which worked flawlessly before I did anything]).

Maybe using a VM to play with containers is not the best idea if you don't bring profound network knowledge with you. I did it because our server is still a 14.04 machine and seems to only go as far as LXD 2.09.

Check your message box on Linux Containers discussion forum…

still haven’t been able to solve this in any way.

Thanks @rkelleyrtp for offering a personal Skype call … unfortunately, due to the time difference being substantial, this hasn't worked out yet.

Anyone else who can shine some light on this (or maybe has mastered the same scenario: LXD on a host that is a VirtualBox VM)?

Your your PM on

DOH - email should have read:

vrms: Please check your PM on

Thanks to @rkelleyrtp, who sat down with me in a 1.5 hr Skype session, we were able to come to one solution (of the 2 different ones we were attempting).

This is, in essence, what we made work:

Virtual Box is in bridged networking mode, and we tweak iptables on our LXD server (the host of our container, in other words) so that traffic for port 80 on the container (where the webserver inside the container serves my application) is forwarded from a different port on the outward-facing IP of my LXD server.

Container IP:
Container Port: 80
Container's interface: eth0
LXD Server's IP:
LXD Server's interface: enp0s3

the command (we are running on our LXD Server) which does this now is:

sudo /sbin/iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 8822 -j DNAT --to-destination

abstract command:
sudo /sbin/iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport [public_port] -j DNAT --to-destination [container_IP]:[container_port]
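One caveat worth adding: a rule appended this way is lost on reboot. A sketch of persisting it on Ubuntu 16.04 (using the iptables-persistent package), and of removing it again (same rule, -D instead of -A):

```shell
# Persist the current iptables rules across reboots (Ubuntu/Debian)
sudo apt-get install iptables-persistent
sudo netfilter-persistent save

# To undo the forward later, repeat the exact rule with -D (delete):
sudo /sbin/iptables -t nat -D PREROUTING -i enp0s3 -p tcp --dport [public_port] -j DNAT --to-destination [container_IP]:[container_port]
```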

the LXD profile is the default one, which looks like:

$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
name: default

There was another approach utilizing a macvlan interface (creating a macvlan profile for LXD).

This would not require tweaking iptables on the LXD server (and would therefore be preferable), but somehow we could not make it work, which is suspected to be specific to the LXD server living in a VirtualBox VM and that VM's network settings.
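For anyone who wants to try the macvlan route anyway, a sketch of what we attempted ("macvlan" is just a profile name chosen here; enp0s3 is the host NIC from above). Note that with macvlan the host itself typically cannot reach its own containers, and inside a VirtualBox VM the bridged adapter's promiscuous mode usually has to be set to "Allow All" for the extra MAC addresses to pass:

```shell
# Copy the default profile and switch its NIC from the lxdbr0 bridge
# to a macvlan child of the host's physical interface
lxc profile copy default macvlan
lxc profile device set macvlan eth0 nictype macvlan
lxc profile device set macvlan eth0 parent enp0s3

# Launch a container with the new profile; it should then pick up an
# address directly from the office LAN's DHCP server
lxc launch ubuntu:16.04 web -p macvlan
```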

Hope this may help anybody else with the same or a similar problem. If anyone has a fix for the macvlan approach, I'd be happy to hear it.

Again, thanks a ton to Ron @rkelleyrtp (btw, updating Virtual Box didn't make it work either).


2 more links dealing with this matter … in my case it still does not really work, but they might be useful for anybody else.

The port forwarding solved my issue of forwarding an RTMP stream to a container! Although I still cannot get the video out of the container, that's another issue!