LXD public IP not reachable

Greetings,
I want to give a public IP to a container, but I can't reach it over the internet. I've tried to follow this: Give public ip one container with custom bridge

My bridge looks like this:

$lxc network show lxdbr0
config:
  ipv4.address: 10.191.26.1/24
  ipv4.nat: "true"
  ipv4.routes: 133.123.122.222
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/containers/guest99
managed: true

The interfaces file on the container:

$cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
	post-up ip -4 addr add dev eth0 133.123.122.222/32
	pre-down ip -4 addr del dev eth0  133.123.122.222/32

The container can access the internet, and I can ping the container from the host, but I can't ping the container from a different machine.
lxc list shows:

+---------+---------+--------------------------------+------+------------+-----------+
|  NAME   |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+--------------------------------+------+------------+-----------+
| guest99 | RUNNING | 133.123.122.222 (eth0)         |      | PERSISTENT | 0         |
|         |         | 10.191.26.120 (eth0)           |      |            |           |
+---------+---------+--------------------------------+------+------------+-----------+

ip route list on the host shows:

133.123.122.222 dev lxdbr0 proto static scope link


$lxc config device show guest99 
bridge:
  name: eth0
  nictype: bridged
  parent: lxdbr0
  type: nic

Full config:

$lxc config show guest99 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian stretch amd64 (20171207_22:42)
  image.os: Debian
  image.release: stretch
  image.serial: "20171207_22:42"
  volatile.base_image: 8251f73695dafe002d700e4560a60f0a30641c39e93de12988682aa917b3231f
  volatile.bridge.hwaddr: 00:16:3e:d1:6b:5c
  volatile.eth0.hwaddr: 00:16:3e:09:8a:43
  volatile.eth0.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  bridge:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: lxdPool
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

Weirdly, there is an eth1 interface, and I have no idea where it came from.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
122: eth0@if123: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d1:6b:5c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.191.26.120/24 brd 10.191.26.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 133.123.122.222/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fed1:6b5c/64 scope link 
       valid_lft forever preferred_lft forever
124: eth1@if125: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:09:8a:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::216:3eff:fe09:8a43/64 scope link 
       valid_lft forever preferred_lft forever

Am I missing something? Thanks for your help!

The eth1 device is showing up because of the existing NIC device in your default profile: your container-local "bridge" device claims the eth0 name, so the profile's NIC ends up as eth1 (that's the volatile.eth0.name: eth1 entry in your config).
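
You can see the overlap by comparing the profile's devices with the container-local ones (standard commands; output omitted here):

$lxc profile show default           # the profile's NIC, inherited by every container using it
$lxc config device show guest99     # your container-local "bridge" NIC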

Assuming you want all your containers connected to lxdbr0, it’s best to do:

  • lxc config device remove guest99 bridge
  • lxc profile device set default eth0 name eth0

Then restart the container. You should have just one eth0 device at that point.
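
For reference, it's worth double-checking after the cleanup (standard commands):

$lxc restart guest99
$lxc config show guest99 --expanded    # only the merged eth0 NIC should remain under devices
$lxc exec guest99 -- ip -4 addr show   # eth1 should be gone inside the container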

The rest of your setup looks correct to me. Can your host ping the container’s public IP?

If so, then I think you're at the stage where you should do a bit of tcpdump to figure out what's going on. See whether your external traffic reaches the host; if it does, check whether it reaches the container; and if it does, make sure the response on the way out is using the right IP.
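
Something like this (a rough sketch; I'm assuming the host's external interface is called eth0, and that tcpdump is installed in the container; adjust names to your setup):

# 1) does the traffic reach the host's external interface?
tcpdump -ni eth0 host 133.123.122.222

# 2) is it forwarded onto the bridge towards the container?
tcpdump -ni lxdbr0 host 133.123.122.222

# 3) inside the container, is the reply leaving with the public IP as source?
lxc exec guest99 -- tcpdump -ni eth0 icmp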

Thank you for the quick reply.
Yes, the host can ping the container's public IP.
I've noticed that the host doesn't reply to ARP requests for the container's public IP; I'm trying to find out why.
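
For reference, this is how I'm watching for it (assuming the ARP requests come in on the host's external interface, here called eth0):

# watch ARP requests/replies for the public IP on the host's uplink
tcpdump -ni eth0 arp and host 133.123.122.222

If the upstream router really expects the host to answer ARP for that address, I'll probably try proxy ARP next (a sketch, not verified yet):

sysctl -w net.ipv4.conf.eth0.proxy_arp=1
# or a single targeted entry instead:
ip neigh add proxy 133.123.122.222 dev eth0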