Incoming network stopped working for old containers after upgrading to LXD 4!

This is very strange.
Incoming network traffic works only for containers created after the upgrade to LXD 4.
I can't reach the containers over the network on either the macvlan or the bridged NIC, but outgoing traffic works.
I use Ubuntu 18.04 for the host and the containers.

I am desperate - I can't see any differences between the new and old containers.
An lxc copy of an old container doesn't work either!

We will need to see your configuration before being able to advise.

Please can you provide the output of:

lxc config show <container> --expanded

Additionally, please can you further describe the problem you are experiencing, i.e. show the specific commands that you would expect to work but that no longer do.

Thanks

In short - I can ping the outside world from the container, but I can't ping back to the container's IPv4 address.
When I create a new container with the same config, it works.

OK, config files below.

macvlan

lxc config show mail7 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20190406_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20190406_07:42"
  volatile.base_image: c771e2fac68adf1e3fadbc85ebf7339191ce16e33b8f03c90e2b8179b26e28eb
  volatile.eth0.hwaddr: 00:50:56:00:DD:5C
  volatile.eth0.name: eth0
  volatile.eth1.hwaddr: 00:16:3e:83:35:a4
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    nictype: macvlan
    parent: enp0s31f6
    type: nic
  eth1:
    nictype: bridged
    parent: localbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- macnew
stateful: false
description: ""

bridged

lxc config show web3 --expanded
architecture: x86_64   
config:                    
  image.architecture: amd64                              
  image.description: Ubuntu bionic amd64 (20190406_07:42)
  image.os: Ubuntu     
  image.release: bionic         
  image.serial: "20190406_07:42"
  user.fqdn: web.net2000.pl
  user.user-data: |
    #cloud-config          
    timezone: Europe/Warsaw
    locale: pl_PL
    package_update: true
    package_upgrade: true
    package_reboot_if_required: true
    packages:
      - mysqltuner
      - mosh
      - lsof
      - aide
      - nmap
  volatile.base_image: c771e2fac68adf1e3fadbc85ebf7339191ce16e33b8f03c90e2b8179b26e28eb
  volatile.eth0.hwaddr: 00:16:3e:b9:6a:70
  volatile.eth0.name: eth0
  volatile.eth1.hwaddr: 00:16:3e:dd:4e:9f
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    nictype: bridged
    parent: localbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk

Thanks.

I can see from your configs that each container has 2 network devices added.

Please can you show the output of the following commands on the host and inside each container:

ip a
ip r
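
If it's easier, you can collect the container output from the host with lxc exec, e.g. (the container name is just an example):

lxc exec web3 -- ip a
lxc exec web3 -- ip r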

I found that I can't bind PowerDNS in the container to port 53.
Maybe the new containers got a fix for this problem?

bridged container

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:bc:ca:55 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.34.150.64/24 brd 10.34.150.255 scope global dynamic eth0
       valid_lft 3494sec preferred_lft 3494sec
    inet6 fd42:18f9:aab8:4a5:216:3eff:febc:ca55/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3495sec preferred_lft 3495sec
    inet6 fe80::216:3eff:febc:ca55/64 scope link 
       valid_lft forever preferred_lft forever
13: eth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ca:1a:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.168.188.86/24 brd 10.168.188.255 scope global dynamic eth1
       valid_lft 3494sec preferred_lft 3494sec
    inet6 fd42:f17e:a583:3af3:216:3eff:feca:1acb/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 3495sec preferred_lft 3495sec
    inet6 fe80::216:3eff:feca:1acb/64 scope link 
       valid_lft forever preferred_lft forever

$ ip r
default via 10.34.150.1 dev eth0 proto dhcp src 10.34.150.64 metric 100 
default via 10.168.188.1 dev eth1 proto dhcp src 10.168.188.86 metric 100 
10.34.150.0/24 dev eth0 proto kernel scope link src 10.34.150.64 
10.34.150.1 dev eth0 proto dhcp scope link src 10.34.150.64 metric 100 
10.168.188.0/24 dev eth1 proto kernel scope link src 10.168.188.86 
10.168.188.1 dev eth1 proto dhcp scope link src 10.168.188.86 metric 100

host

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:bc:ca:55 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.34.150.64/24 brd 10.34.150.255 scope global dynamic eth0
       valid_lft 3469sec preferred_lft 3469sec
    inet6 fd42:18f9:aab8:4a5:216:3eff:febc:ca55/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3469sec preferred_lft 3469sec
    inet6 fe80::216:3eff:febc:ca55/64 scope link
       valid_lft forever preferred_lft forever
13: eth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:ca:1a:cb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.168.188.86/24 brd 10.168.188.255 scope global dynamic eth1
       valid_lft 3469sec preferred_lft 3469sec
    inet6 fd42:f17e:a583:3af3:216:3eff:feca:1acb/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 3596sec preferred_lft 3596sec
    inet6 fe80::216:3eff:feca:1acb/64 scope link
       valid_lft forever preferred_lft forever
$ ip r
default via 94.130.143.1 dev enp0s31f6 proto static onlink
10.34.150.0/24 dev lxdbr0 proto kernel scope link src 10.34.150.1
10.168.188.0/24 dev localbr0 proto kernel scope link src 10.168.188.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.19.0.0/16 dev br-444234cecc2c proto kernel scope link src 172.19.0.1 linkdown
$ ip r
default via 10.34.150.1 dev eth0 proto dhcp src 10.34.150.64 metric 100 
default via 10.168.188.1 dev eth1 proto dhcp src 10.168.188.86 metric 100 

The presence of 2 default routes with the same metric value is likely to cause unintended networking problems. Can you explain why you have this configuration?

I use a secondary network on the host so that the macvlan container can communicate with the other bridged containers.

That 2nd NIC, using DHCP with a default gateway, is likely going to cause issues for you. I’m not clear why you need that for the bridged container, as bridged devices can communicate with the host.

Either way, I would suggest disabling DHCP on the 2nd NIC and using static configuration (or modifying the 2nd DHCP server so that it doesn’t configure a default gateway), such that there is no 2nd default gateway added.
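
For example, something along these lines in the container's netplan config (the file name varies by image; eth1 and the address are taken from your output above, adjust as needed) keeps eth1 up with a static address and no gateway:

network:
  version: 2
  ethernets:
    eth1:
      dhcp4: false
      addresses:
        - 10.168.188.86/24

then run netplan apply inside the container.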

It’s likely that return packets are going out of the wrong interface.
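
You can check this inside the container with ip route get, using the address of a machine you are trying to reach it from (8.8.8.8 here is just an example):

ip route get 8.8.8.8

If that shows the route leaving via eth1, then replies to connections that arrived on eth0 are going out of the other NIC and will most likely never make it back to the client.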

Well, I use the secondary network so that the macvlan container can communicate with the other containers and with the host itself.
How can I disable DHCP for the secondary network without removing the network itself?

I upgraded the containers and the host, and now I can set a route metric in the netplan config!
Now it works.
Thank you very much for pointing me in the right direction.
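
For anyone else hitting this, a netplan config along these lines (interface names and metric values are just examples) gives each DHCP interface its own route metric, so only one default route is preferred:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 100
    eth1:
      dhcp4: true
      dhcp4-overrides:
        route-metric: 200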