I run an application in a container, accessed by users through a browser (port 80, served by nginx).
It works when I use my host's browser with the container's IP (10.116.163.102), but it doesn't work from outside the host machine.
The host is a VirtualBox Xubuntu 16.04 VM (bridged networking; the original post said NAT, sorry).
I can ping the host's IP (172.20.14.233).
LXD 2.12
Just guessing, I came up with the idea that I might have to run some sort of lxc network attach ... command?
The output of lxc network --help unfortunately raises more questions than answers.
Total beginner here, so sorry if it's a silly question. Can someone advise?
How is the networking set up on your VM in VirtualBox, bridged or NAT'd? According to this page, NAT mode won't allow a VM with multiple MAC addresses to pass traffic (as is the case with LXD containers):
So, make sure your VM is set for bridged mode (not NAT).
BTW - VMware has the same issue. The fix is to enable promiscuous mode on the vSwitch.
I have to admit that I don’t really get whether NAT or bridged works better.
However, I don't think that's the root cause of my question … I can reach anything from within my container, so networking in general seems to work from the inside out (I am actually on bridged networking in the VirtualBox VM [sorry, that was stated wrongly in my post]).
So the question is: how do I reach the container from the outside world (in my current case, my office's internal network)?
If I understand you correctly, you are running an LXD container with nginx (port 80). Is the container's network using a bridged or macvlan interface (lxc config show <container>)?
Sounds like you simply need some NAT rules on the LXD server to port-forward all port-80 requests to your container. If so, look at this page: https://github.com/lxc/lxd/issues/591
Essentially, you probably need to run the following on the LXD server:
With a space before --to-destination the command works much better: iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 80 -j DNAT --to-destination <container-ip>:80
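If it still doesn't behave, it may help to double-check that the rule actually got installed (a sketch; enp0s3 is the host-side interface name from the command above and may differ on your machine):

```shell
# List the NAT PREROUTING rules with packet counters; the DNAT rule
# should appear here, and its counters should increase on each request
sudo iptables -t nat -L PREROUTING -n -v --line-numbers
```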
Still can't ping the container's IP (no matter whether my VirtualBox is on NAT or bridged networking).
EDIT: I also tried the command with -i eth0 (which is the name of the network adapter inside the container) … same result: not working.
Are you able to get to the web page of your container? If so, the NAT is working properly. If not, something else is wrong.
As for the ping command: do your clients know how to get to the container's IP address? From your original post, your LXD server's IP is 172.20.14.233 and the container's IP is 10.116.163.102. Those are on two different subnets. For the ping command to work from your local network to the container's IP, those PCs must have a route to the 10.116.163.x network.
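For example, on a Linux client in the 172.20.14.x network, a static route could look like this (a sketch; the /24 mask is an assumption — check the actual subnet on the LXD server — and the server must also forward that traffic rather than NAT it for replies to come back):

```shell
# Assumption: the LXD bridge subnet is 10.116.163.0/24 (verify with
# "ip addr show lxdbr0" on the LXD server) and the LXD server's LAN
# address is 172.20.14.233, as stated in the original post
sudo ip route add 10.116.163.0/24 via 172.20.14.233

# Verify the route was installed
ip route show 10.116.163.0/24
```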
No, that (getting to the web page of the container) is exactly what I am trying to achieve. I can reach it using the browser on my container's host (which is a member of my local 172.20.14.xxx network as well, but that somehow doesn't seem to matter). Do I maybe have to do NAT port forwarding in the VirtualBox settings?
No, they don't, and neither do I. I guess that's what I have to understand, and then build a route to that 10.116.163.xxx network.
On our office's server I have a KVM machine running, and that machine has a static IP that matches the overall network (172.20.14.xxx). I tried to do a similar setup in my container, but that only shut me down completely (I mean I couldn't even ping anything anymore from inside the container [which worked flawlessly before I changed anything]).
Maybe using a VM to play with containers is not the best idea if you don't bring profound networking knowledge with you. I did it because our server is still a 14.04 machine and seems to only go as far as LXD 2.0.9.
Thanks to @rkelleyrtp, who sat down with me in a 1.5 hr Skype session, we were able to come to one solution (of the 2 different ones we were attempting).
This is mainly what we made work:
VirtualBox is in bridged networking mode, and we tweak iptables on our LXD server (the host of our container, in other words) so that port 80 of the container (where the web server inside the container serves my application) is forwarded to a different port on the outward-facing IP of my LXD server.
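Concretely, the rule was along these lines (a sketch from memory; the interface name enp0s3 and the external port 8080 are assumptions from our setup and need adjusting, the container IP is the one from my original post):

```shell
# On the LXD server: forward TCP port 8080 arriving on the
# outward-facing interface to port 80 of the container's web server
sudo iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 8080 \
  -j DNAT --to-destination 10.116.163.102:80

# Make sure the forwarded packets are accepted by the FORWARD chain
# (may already be allowed, depending on your default policy)
sudo iptables -A FORWARD -p tcp -d 10.116.163.102 --dport 80 -j ACCEPT
```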
The LXD profile is the default one, which looks like:
```
$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxd-pool
    type: disk
name: default
```
There was another approach utilizing a macvlan interface (creating a macvlan profile for LXD).
This would not require tweaking iptables on the LXD server (and would therefore be preferable), but somehow we could not make it work, which we suspect is specific to the LXD server living in a VirtualBox VM and the network settings of that.
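For reference, the macvlan attempt was roughly this (a sketch; the profile name mvlan and the host interface enp0s3 are just the names we used, and again this did not work for us inside VirtualBox):

```shell
# Create a profile whose eth0 is a macvlan child of the host NIC,
# so the container would get an address directly from the office LAN
lxc profile create mvlan
lxc profile device add mvlan eth0 nic nictype=macvlan parent=enp0s3

# Apply it to a container (root disk still comes from the default profile)
lxc launch ubuntu:16.04 web -p default -p mvlan
```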
Hope this may help anybody else with the same or a similar problem. If anyone has a solution for the macvlan approach, I'd be happy to hear it.
Again, thanks a ton to Ron @rkelleyrtp (BTW, updating VirtualBox didn't make it work either).
The port forwarding solved my issue of forwarding an RTMP stream to a container! Although I cannot get the video out of the container — but that's another issue!
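In case it helps anyone, the same DNAT pattern applied to RTMP looked like this for me (a sketch; 1935 is the default RTMP port, and the interface name and container IP are placeholders from the earlier posts that need adjusting to your setup):

```shell
# Forward incoming RTMP traffic on the host's external interface
# to the streaming server inside the container
sudo iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 1935 \
  -j DNAT --to-destination 10.116.163.102:1935
```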