Can't access container from the internet with an OVN network attached

Sorry for the late reply. Here are the outputs:

  • lxc network show <OVN network>
config:
  bridge.mtu: "1442"
  ipv4.address: 10.206.12.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:99f8:d3cc:80e1::1/64
  ipv6.nat: "true"
  network: ovn-uplink
  volatile.network.ipv4.address: 192.168.17.201
description: ""
name: lxdovn0
type: ovn
used_by:
- /1.0/instances/container1
- /1.0/instances/container2
managed: true
status: Created
locations:
- node1
- node2
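
For reference, an OVN network with this config is normally created on top of its uplink with something along these lines (a sketch, not necessarily the exact command used here; the addresses and NAT keys were left at their defaults):

lxc network create lxdovn0 --type=ovn network=ovn-uplink
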
  • lxc network show <uplink network for OVN network>
config:
  dns.nameservers: 192.168.17.36
  ipv4.gateway: 192.168.17.1/24
  ipv4.ovn.ranges: 192.168.17.200-192.168.17.211
  volatile.last_state.created: "false"
description: ""
name: ovn-uplink
type: physical
used_by:
- /1.0/networks/lxdovn0
managed: true
status: Created
locations:
- node1
- node2
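
For reference, in a cluster the physical uplink is usually defined per member first and then instantiated with the global keys; the parent interface is member-specific and doesn't appear in the output above, so eno1 below is only a placeholder:

lxc network create ovn-uplink --type=physical parent=eno1 --target=node1
lxc network create ovn-uplink --type=physical parent=eno1 --target=node2
lxc network create ovn-uplink --type=physical ipv4.gateway=192.168.17.1/24 ipv4.ovn.ranges=192.168.17.200-192.168.17.211 dns.nameservers=192.168.17.36
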
  • lxc config show <nginx instance> --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Debian bullseye arm64 (20211225_07:49)
  image.os: Debian
  image.release: bullseye
  image.serial: "20211225_07:49"
  image.type: squashfs
  image.variant: cloud
  volatile.base_image: 36fe6744706815a37f63d943bc813bfa728c53276c9f27e740afbf5f3f4ffc3
  volatile.cloud-init.instance-id: 8fc69d2d-bedb-4b79-9863-34a29a01b718
  volatile.eth0.host_name: veth7394c989
  volatile.eth0.hwaddr: 00:16:3e:bc:1a:34
  volatile.eth0.name: eth0
  volatile.eth1.host_name: vethfb5856ef
  volatile.eth1.hwaddr: 00:16:3e:ed:6d:21
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 48221a04-36f0-4906-9870-df49ea083420
devices:
  eth0:
    ipv4.address: 10.206.12.2
    network: lxdovn0
    type: nic
  eth1:
    name: eth1
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: pool01
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
  • lxc config show <reverse proxy instance> --expanded
    At the moment, for testing, the reverse proxy just acts as a normal nginx endpoint and serves a simple HTML site. The reverse-proxy functionality is not the cause of the problem.

  • ip a and ip r from the LXD host(s) and from inside both instances.
    For now, just the output from container1:

  • ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
182: eth0@if183: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:bc:1a:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.206.12.2/24 brd 10.206.12.255 scope global dynamic eth0
       valid_lft 2588sec preferred_lft 2588sec
184: eth1@if185: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ed:6d:89 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.17.37/24 brd 192.168.17.255 scope global dynamic eth1
       valid_lft 826676sec preferred_lft 826676sec
  • ip r
default via 10.206.12.1 dev eth0 
10.206.12.0/24 dev eth0 proto kernel scope link src 10.206.12.2 
192.168.17.0/24 dev eth1 proto kernel scope link src 192.168.17.37 
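
For what it's worth, these are roughly the connectivity checks one could run from container1 along each path (a sketch; 8.8.8.8 is just an arbitrary public test address):

lxc exec container1 -- ping -c 3 10.206.12.1     # OVN router on eth0
lxc exec container1 -- ping -c 3 192.168.17.1    # physical gateway, reached directly over eth1 per the routes above
lxc exec container1 -- ping -c 3 8.8.8.8         # outbound via the default route on eth0 and the OVN NAT
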

Hopefully that helps.

Thanks!