Web server on public IP of container

Hello all, I can reach the container's public IP over the public internet, and from within the container I can access the web server running on it using wget. But from the host, or from the public internet, I cannot get the web server inside the container to serve pages.

H1 - has a public IP
C1 - has a public IP
C1 runs the web server - NO FIREWALLS; it has two network interfaces, one for the internal IP and one for the public IP
Inside C1: wget http://internalipC1, wget http://publicipC1, wget http://publichostnameC1 all work
Inside H1: wget http://internalipC1, wget http://publicipC1, wget http://publichostnameC1 all DO NOT work
From H1: I can ping internalipC1, publicipC1, publichostnameC1

Also, from a desktop over the public internet, I can ping publicipC1 and publichostnameC1.
The public IPs for the containers are assigned and configured on the eth1 interface of the container.

I had to run `lxc network set lxdbr0 ipv4.routes publicip/32` in order to reach the container's public IP.
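In case it helps to sanity-check this after a restart, here is a rough sketch of how to confirm the route LXD adds is actually present (all addresses are placeholders, not the real ones from this setup):

```shell
# Show what LXD thinks the routed addresses for the bridge are
lxc network get lxdbr0 ipv4.routes

# Confirm the host's kernel routing table actually contains the route;
# a reboot can leave LXD's config intact while the kernel route is gone
ip route show
```

If the route is listed by `lxc network get` but missing from `ip route show`, restarting the LXD network (or the container) usually re-applies it.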

This all used to work very well, and I was able to access the container's web service over the public internet. After a host restart it stopped working; maybe I missed a config step.

I also ran `route add default gw hostingproviderassignedgateway dev br0`.

What am I missing? Can someone help me with this? I am willing to pay for an hour of consulting to sort this out.

Thanks in advance.


How does the container get a public IP?
It looks like you use a bridge, is that correct?

That is correct, I'm using the bridge. I added the container's public IP address in the container's netplan file. All this used to work; I can ping the container's IP address just fine, but when I try to access the web server on the container, either from the host or from the internet, I get "connection refused".
Note that I am binding the web server to the container's public IP address. This all worked fine until the host restart, when this issue popped up.

"Connection refused" is a specific error meaning the service is not listening on the appropriate interface. You can use netstat to verify that the service is indeed listening on the correct interface.
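For example, something like this inside the container would show what the web server is actually bound to (`ss` is the modern replacement for netstat; either works):

```shell
# List all TCP listening sockets with the local address each is bound to.
# A server bound only to 127.0.0.1 or the internal IP will refuse
# connections arriving on the public IP; 0.0.0.0 means all interfaces.
ss -tln
```

If the public IP (or `0.0.0.0`) does not appear next to port 80/443 in that output, the "connection refused" is coming from the web server's bind address, not from routing.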

So, a question: what happens if the hosting provider's gateway that br0 uses is not reachable? Interestingly, the container's IP was resolving to the host, and as the host does not have the web server running, it was returning "connection refused". So I was thinking the container was reachable over the public internet, but in reality both the container's and the host's IP addresses were resolving to the host. As a result, when I pinged the container IP and the host IP from the public internet, I saw successful responses, and that threw me off. Also, if the gateway is not reachable, is it safe to assume that the containers cannot be reached over the internet?


OK, it seems I finally got it to work. On Ubuntu 18.04 the host was using an interfaces file instead of netplan. While this should not be a problem, for whatever reason the host was answering on C1's IP, so accessing the web server on C1 from the host worked with the internal IP but failed with the public IP; obviously the host does not have a web server running.

First, I am not great with networks/subnets and routing, it never appealed to me :smile: but I happen to get by. I scrapped the whole interfaces file and started over with netplan: I added a br0 bridge and bridged it with the host's eno1 interface. I could now see br0 binding to eno1's public IP. With netplan I did not have to input the gateway, netmask, etc. individually; all I did was include the ipaddress/28 that my hosting provider gave me, and that resulted in the proper gateway, netmask, and so on. I restarted the system, and my host was now pingable from the public internet.

Then, for C1, I assigned eth1 to bind to the br0 I had created on the host. C1 was receiving its internal IP address from lxdbr0. I then added the route for the container on the host with `lxc network set lxdbr0 ipv4.routes ipaddressofc1/32` and restarted C1. Adding this route had worked in the past, but this time around I could not ping C1's public IP from the host. I went back to C1, and in C1's netplan I changed the /32 to /28 to match what I had on the host. That did it: I was able to ping C1 from the public internet.

I then wanted to see if `lxc network set lxdbr0 ipv4.routes` has any effect at all, so on another container all I did was update its netplan, and it seemed to respond without any issues. So there seem to be some differences in how Ubuntu brings up networking depending on whether you use netplan or an interfaces file.
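For anyone following along, the working setup looked roughly like the sketch below. The interface names (eno1, eth1) and the 203.0.113.x/28 addresses are placeholders; substitute whatever your hosting provider assigned.

```yaml
# Host: /etc/netplan/01-netcfg.yaml (sketch, not my exact file)
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: no
  bridges:
    br0:
      interfaces: [eno1]
      addresses: [203.0.113.2/28]   # host public IP from the provider's /28
      gateway4: 203.0.113.1         # provider-assigned gateway
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```

```yaml
# Container C1: netplan sketch
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true                   # internal IP from lxdbr0
    eth1:
      addresses: [203.0.113.3/28]   # note /28 to match the host, not /32
      gateway4: 203.0.113.1
```

The key detail from my troubleshooting is the /28 on the container's eth1: with /32 the container had no on-link route to the gateway, so replies never made it out.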
In the process I also tried macvlan, but any time I tried to bring up the container with a macvlan profile, the container could not start and threw errors. So it seems the best option in my case is to set C1's netplan with the right subnet that was assigned by my hosting provider. Sharing this hoping someone will benefit and save hours of troubleshooting and countless server restarts.
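For completeness, the usual LXD macvlan pattern looks like the sketch below (the profile name, `eno1` parent, and container name are placeholders, not from my setup):

```shell
# Create a profile whose NIC uses macvlan on the host's physical interface
lxc profile create macvlan
lxc profile device add macvlan eth0 nic nictype=macvlan parent=eno1
lxc launch ubuntu:18.04 c2 --profile default --profile macvlan
```

One known caveat with macvlan, independent of the start-up errors I hit, is that the host cannot talk directly to its own macvlan containers, so wget tests from H1 would still fail even when the containers are reachable from the outside.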