How to set up networking for 1 host with a public IP running 2 LXD containers with public IPs?

Hi all,

I have a host with a public IP h.h.h.h on Ubuntu 18.04. I have 2 LXD containers running on it also on Ubuntu 18.04. I am having difficulty assigning public IPs (let’s say c.c.c.1 and c.c.c.2) to the containers. I tried to follow this post:

and did the following:

host: lxc network set lxdbr0 ipv4.routes c.c.c.1/32
c1: ip -4 addr add dev eth0 c.c.c.1/32

I couldn’t ping c.c.c.1 from elsewhere. When I added c.c.c.1 directly to the host instead, I could ping it, which means my hosting company’s routing is working.

What am I missing? Is routing not the right way? Should I forward traffic from host through the bridge to containers? If so, how to do that?

Routing is probably not going to work in my case: as soon as I ran “lxc network set lxdbr0 ipv4.routes c.c.c.2/32” on the host, I could no longer ping c.c.c.1. So how do I support 2 containers with public IPs?
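One likely explanation for the second route clobbering the first: “lxc network set” replaces the entire value of a key rather than appending to it. A sketch of setting both routes in one go (c.c.c.1/32 and c.c.c.2/32 stand in for the real public IPs):

```shell
# Set both /32 routes in a single comma-separated value; running
# "set" again with only c.c.c.2/32 would drop the c.c.c.1/32 route.
lxc network set lxdbr0 ipv4.routes c.c.c.1/32,c.c.c.2/32

# Confirm that both routes are present:
lxc network get lxdbr0 ipv4.routes
```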

Thanks a lot!

PS: Additional information:

# lxc profile show default
config:
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: default
used_by:
- /1.0/containers/a
- /1.0/containers/b

The container “b” was created with this default profile and yet it has nothing about the bridge:

# lxc config device show vps605
root:
  path: /
  pool: default
  type: disk

Is this wrong?

Did you set up routing (at least a default gw) inside your c1 container? Could you paste the outputs of ip addr show and ip route show here?

If the IP addresses work on the WAN interface on the host, but not on other interfaces or in containers, then the addresses are probably on-link, which means they are present directly on the wire and can only be used on the WAN interface unless you use proxy ARP.

I can’t find much about LXD and proxy ARP, but it shouldn’t be different from using proxy ARP in other configurations. Another alternative is to use bridging instead of routing on your host (i.e. attach the WAN interface to the bridge), but then you can’t use a firewall in the same way.
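Putting the proxy-ARP suggestion into commands: a minimal sketch, assuming the host’s WAN interface is eth1 (substitute your own interface name; these are standard kernel sysctls, not LXD settings):

```shell
# Let the host answer ARP queries for the routed container IPs on
# the WAN side, so the upstream router sends their traffic here:
sysctl -w net.ipv4.conf.eth1.proxy_arp=1

# Forward packets between the WAN interface and lxdbr0:
sysctl -w net.ipv4.ip_forward=1
```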

Thanks for the replies! I’ve made some progress since then. I added the following lines to /etc/network/interfaces on the host:

auto eth1
iface eth1 inet static
    address 69.64.79.67
    netmask 255.255.255.255
    gateway 69.64.79.1
    hwaddress 0C:C4:7A:C3:C3:A1
    bridge_ports eth1
    bridge_stp off
    bridge_maxwait 5
    post-up /sbin/brctl setfd lxdbr0 0
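For comparison, the conventional bridged layout in /etc/network/interfaces puts the address and the bridge options on a bridge stanza and enslaves the physical NIC via bridge_ports, rather than mixing them into the NIC’s own stanza. A sketch reusing the same addresses, assuming eth1 is the WAN NIC:

```
auto br0
iface br0 inet static
    address 69.64.79.67
    netmask 255.255.255.255
    gateway 69.64.79.1
    bridge_ports eth1
    bridge_stp off
    bridge_maxwait 5
```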

And that configuration worked for half a day. Then, all of a sudden, it stopped working. My provider said they didn’t change anything. Here is my complete configuration now.

Host

Did this at some point:

lxc network set lxdbr0 ipv4.routes 69.64.79.80/32,69.64.79.84/32,69.64.79.200/32

And the configuration is confirmed:

# lxc network show lxdbr0
config:
  ipv4.address: 10.3.130.1/24
  ipv4.nat: "true"
  ipv4.routes: 69.64.79.80/32,69.64.79.84/32,69.64.79.200/32
  ipv6.address: fd42:ae30:7b99:a03b::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/containers/vps601
- /1.0/containers/vps602
- /1.0/containers/vps604
- /1.0/containers/vps605
managed: true
status: Created
locations:
- none

And routing:

# ip route
default via 69.64.79.1 dev eth1 onlink
10.3.130.0/24 dev lxdbr0 proto kernel scope link src 10.3.130.1
69.64.79.80 dev lxdbr0 proto static scope link
69.64.79.84 dev lxdbr0 proto static scope link
69.64.79.200 dev lxdbr0 proto static scope link

Container

# ip addr
1: lo: …
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ef:55:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.3.130.208/24 brd 10.3.130.255 scope global dynamic eth0
       valid_lft 2953sec preferred_lft 2953sec
    inet 69.64.79.200/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd42:ae30:7b99:a03b:216:3eff:feef:55f0/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3600sec preferred_lft 3600sec
    inet6 fe80::216:3eff:feef:55f0/64 scope link
       valid_lft forever preferred_lft forever

Notice the public IP address of the container above.

# ip route
default via 10.3.130.1 dev eth0 proto dhcp src 10.3.130.208 metric 100
10.3.130.0/24 dev eth0 proto kernel scope link src 10.3.130.208
10.3.130.1 dev eth0 proto dhcp scope link src 10.3.130.208 metric 100

I can ping google.com from the container with no problem. But I can’t ping 69.64.79.200 from my laptop.

What am I missing? Thanks!

I think you need a DNAT rule set up on the host. Something like this: iptables -A PREROUTING -d <external_ip_on_your_host's_interface> -i <host_interface> -j DNAT --to-destination <container's_ip>


Picking this thread back up :). I am trying Andras’ suggestion:

iptables -N PREROUTING
iptables -t nat -A PREROUTING -d <public-ip> -i <host-ethernet-interface> -j DNAT --to-destination <container-ip>

I had to add "-t nat", otherwise I got the error "x_tables: ip_tables: DNAT target: only valid in nat table, not filter". But when I run "iptables --list", nothing is listed under "PREROUTING", and pinging obviously doesn’t work.
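A note on why the rule seems to vanish: "iptables --list" only shows the filter table, so a rule added with "-t nat" will never appear there; the nat table must be listed explicitly. (Also, "iptables -N PREROUTING" just creates an unrelated user-defined chain in the filter table and isn’t needed; the built-in PREROUTING chain of the nat table already exists.)

```shell
# List the PREROUTING chain of the nat table, with numeric
# addresses and packet counters, to confirm the DNAT rule is there:
iptables -t nat -L PREROUTING -n -v
```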

Any further suggestion?

After getting the hang of iptables, it is working now. Thanks, Andras, for suggesting this approach!
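For readers reconstructing the working setup, here is a sketch of the full NAT approach using the concrete names from earlier in the thread (eth1 as the WAN interface, 69.64.79.200 as the public IP, 10.3.130.208 as the container’s lxdbr0 address; adjust to your own values):

```shell
# Forwarding must be on for traffic to cross from eth1 to lxdbr0:
sysctl -w net.ipv4.ip_forward=1

# Inbound: rewrite the destination of packets arriving for the
# public IP to the container's private lxdbr0 address:
iptables -t nat -A PREROUTING -d 69.64.79.200 -i eth1 \
    -j DNAT --to-destination 10.3.130.208

# Outbound (optional): make the container's outgoing connections
# leave with its public IP instead of the host's primary address:
iptables -t nat -A POSTROUTING -s 10.3.130.208 -o eth1 \
    -j SNAT --to-source 69.64.79.200
```

Rules added this way do not survive a reboot; something like iptables-persistent is needed to keep them.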


I currently have the same problem as Jun.

It worked for a few hours, then it suddenly stopped, and I can’t get it back up.

Did on host:
lxc network set lxdbr0 ipv4.routes x.x.x.68/32

Then on container:
ip -4 addr add dev eth0 x.x.x.68/32 preferred_lft 1

Also on the host i have:

lxc network list

+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| eno1   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| eno2   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 3       |
+--------+----------+---------+-------------+---------+

lxc network show lxdbr0

config:
  ipv4.address: 10.90.57.1/24
  ipv4.nat: "true"
  ipv4.routes: x.x.x.68/32
  ipv6.address: none
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/containers/cont1
- /1.0/containers/cont2
- /1.0/containers/cont3
managed: true
status: Created
locations:
- none

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
    link/ether ac:1f:6b:97:12:72 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:1f:6b:97:12:73 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 06:d0:dd:dc:99:38 brd ff:ff:ff:ff:ff:ff
    inet x.x.x.60/25 brd x.x.x.127 scope global br0
       valid_lft forever preferred_lft forever
    inet x.x.x.65/25 brd x.x.x.127 scope global secondary br0
       valid_lft forever preferred_lft forever
    inet x.x.x.66/25 brd x.x.x.127 scope global secondary br0
       valid_lft forever preferred_lft forever
    inet x.x.x.67/25 brd x.x.x.127 scope global secondary br0
       valid_lft forever preferred_lft forever
79: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:62:6e:b0:bc:6d brd ff:ff:ff:ff:ff:ff
    inet 10.90.57.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever

I couldn’t figure it out, so I eventually added x.x.x.68 on the host and used an iptables rule to send the traffic to the right container, but I would have liked to set this up more elegantly.

If anyone could shed some light on what could be wrong here, I’d appreciate it.

thanks