I have a host with a public IP h.h.h.h on Ubuntu 18.04, and 2 LXD containers running on it, also on Ubuntu 18.04. I am having difficulty assigning public IPs (say c.c.c.1 and c.c.c.2) to the containers. I tried to follow this post:
and did the following:
host: lxc network set lxdbr0 ipv4.routes c.c.c.1/32
c1: ip -4 addr add dev eth0 c.c.c.1/32
I couldn’t ping c.c.c.1 from elsewhere. When I added c.c.c.1 to the host itself, I could ping it, which means my hosting company’s routing works.
What am I missing? Is routing not the right way? Should I forward traffic from host through the bridge to containers? If so, how to do that?
Routing is probably not going to work in my case, because as soon as I ran “lxc network set lxdbr0 ipv4.routes c.c.c.2/32” on the host, I could no longer ping c.c.c.1 on the host. So how do I support 2 containers with public IPs?
If the IP addresses work on the wan interface on the host, but not on other interfaces or in containers, then the addresses are probably on-link which means they are present directly on the wire and can only be used on the wan interface unless you use proxy arp.
I can’t find much about LXD and proxy ARP, but it shouldn’t be different from using proxy ARP in other configurations. Another alternative is to use bridging instead of routing on your host (i.e. attach the wan interface to the bridge), but then you can’t use a firewall in the same way.
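A minimal proxy-ARP sketch, assuming eth1 is the host’s wan interface (as the route table further down suggests) and that the route toward lxdbr0 is already installed by the ipv4.routes setting:

```shell
# Make the host answer ARP requests for the container's public IP
# on the wan interface (eth1 is an assumption from this thread).
sysctl -w net.ipv4.conf.eth1.proxy_arp=1

# Forwarding must be on so the host passes the traffic to lxdbr0.
sysctl -w net.ipv4.ip_forward=1

# The container still needs the address on its own interface, as above:
#   (inside the container) ip -4 addr add dev eth0 c.c.c.1/32
```

With proxy ARP the provider’s router sees the host’s MAC answering for c.c.c.1, so on-link addresses can be used behind the bridge without DNAT.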
And it worked for half a day. Then, all of a sudden, it stopped working. My provider said they didn’t change anything. Here is my complete configuration.
Host
Did this at some point:
lxc network set lxdbr0 ipv4.routes 69.64.79.80/32,69.64.79.84/32,69.64.79.200/32
/1.0/containers/vps605
managed: true
status: Created
locations:
none
And routing:
# ip route
default via 69.64.79.1 dev eth1 onlink
10.3.130.0/24 dev lxdbr0 proto kernel scope link src 10.3.130.1
69.64.79.80 dev lxdbr0 proto static scope link
69.64.79.84 dev lxdbr0 proto static scope link
69.64.79.200 dev lxdbr0 proto static scope link
Container
# ip addr
1: lo: …
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:16:3e:ef:55:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.3.130.208/24 brd 10.3.130.255 scope global dynamic eth0
valid_lft 2953sec preferred_lft 2953sec
inet 69.64.79.200/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd42:ae30:7b99:a03b:216:3eff:feef:55f0/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3600sec preferred_lft 3600sec
inet6 fe80::216:3eff:feef:55f0/64 scope link
valid_lft forever preferred_lft forever
Notice the public IP address of the container above.
# ip route
default via 10.3.130.1 dev eth0 proto dhcp src 10.3.130.208 metric 100
10.3.130.0/24 dev eth0 proto kernel scope link src 10.3.130.208
10.3.130.1 dev eth0 proto dhcp scope link src 10.3.130.208 metric 100
I can ping google.com from the container with no problem. But I can’t ping 69.64.79.200 from my laptop.
I think you need a DNAT rule set up on the host. Something like this: iptables -A PREROUTING -d <external_ip_on_your_host's_interface> -i <host_interface> -j DNAT --to-destination <container's_ip>
I had to add “-t nat”, otherwise I got the error “x_tables: ip_tables: DNAT target: only valid in nat table, not filter”. But when I run “iptables --list”, I see nothing listed under “PREROUTING”, and pinging obviously doesn’t work.
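For the record, the suggested rule with “-t nat” added would look like this (addresses and interface names are taken from the output in this thread; eth1 as the wan interface is an assumption from the host’s route table). Note that plain “iptables --list” shows only the filter table, which is why PREROUTING appeared empty:

```shell
# DNAT the public IP to the container's private address
# (one such rule per container/IP pair).
iptables -t nat -A PREROUTING -d 69.64.79.200 -i eth1 -j DNAT --to-destination 10.3.130.208

# "iptables --list" defaults to the filter table; the nat table
# must be requested explicitly to see the PREROUTING rule:
iptables -t nat -L PREROUTING -n -v
```

A DNAT-only setup also usually needs forwarding enabled (net.ipv4.ip_forward=1) and a FORWARD policy that accepts the translated traffic.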
/1.0/containers/cont3
managed: true
status: Created
locations:
none
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether ac:1f:6b:97:12:72 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether ac:1f:6b:97:12:73 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 06:d0:dd:dc:99:38 brd ff:ff:ff:ff:ff:ff
inet x.x.x.60/25 brd 194.59.248.127 scope global br0
valid_lft forever preferred_lft forever
inet x.x.x.65/25 brd 194.59.248.127 scope global secondary br0
valid_lft forever preferred_lft forever
inet x.x.x.66/25 brd 194.59.248.127 scope global secondary br0
valid_lft forever preferred_lft forever
inet x.x.x.67/25 brd 194.59.248.127 scope global secondary br0
valid_lft forever preferred_lft forever
79: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:62:6e:b0:bc:6d brd ff:ff:ff:ff:ff:ff
inet 10.90.57.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
I couldn’t figure it out, so I eventually added x.x.x.68 on the host and used an iptables rule to send the traffic to the right container, but I would have liked to set this up more elegantly.
If anyone could shed some light on what could be wrong here, I’d appreciate it.
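Since br0 here already carries the wan interface (eno1 is enslaved to it), one hedged sketch of a more direct setup is to attach the container to br0 itself rather than lxdbr0 and give it the public address inside. The device name eth0 and the x.x.x.1 gateway are assumptions; cont3, br0, and x.x.x.68 come from this thread:

```shell
# Attach cont3 directly to the wan-facing bridge instead of lxdbr0
# (overrides the profile's eth0 device; eth0 is an assumed name).
lxc config device add cont3 eth0 nic nictype=bridged parent=br0

# Configure the public address statically inside the container
# (x.x.x.1 as the provider's gateway is an assumption).
lxc exec cont3 -- ip addr add x.x.x.68/25 dev eth0
lxc exec cont3 -- ip route add default via x.x.x.1 dev eth0
```

In this layout the provider sees the container’s own MAC on the wire, so no proxy ARP or DNAT is needed; the trade-off, as noted earlier in the thread, is that host firewalling works differently for bridged traffic.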