NIC Copper Link is Down / Up

Hi, I installed LXD on Debian 9 using snapd, following this guide (https://stgraber.org/2017/01/18/lxd-on-debian/), and created three containers. I configured the public IPs using macvlan, as recommended by my server provider. The config of one container is shown below (the other containers are the same):

architecture: x86_64
config:
  image.architecture: x86_64
  image.description: debian stretch x86_64 (default) (20180322_22:42)
  image.name: debian-stretch-x86_64-default-20180322_22:42
  image.os: debian
  image.release: stretch
  image.variant: default
  limits.memory: 16GB
  limits.memory.enforce: soft
  volatile.base_image: f21c3bb11dc854ed74ac6d57b957e9e1ee6a25e340eaea3680e5f09ebb3a9066
  volatile.eth0.hwaddr: 00:16:3e:ce:c3:52
  volatile.eth1.name: eth1
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  eth1:
    hwaddr: 52:54:00:00:4d:a2
    nictype: macvlan
    parent: eno1
    type: nic
  root:
    path: /
    pool: lxc
    size: 250GB
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

The containers and host work well, but I have a strange problem with the network interface: I lose connectivity with the server several times a day, at random. The kernel logs are:

Dmesg output:
[972969.446720] bnx2 0000:01:00.0 eno1: NIC Copper Link is Down
[973271.213398] bnx2 0000:01:00.0 eno1: NIC Copper Link is Up, 1000 Mbps full duplex
[1010520.375505] bnx2 0000:01:00.0 eno1: NIC Copper Link is Down
[1010822.471302] bnx2 0000:01:00.0 eno1: NIC Copper Link is Up, 1000 Mbps full duplex

/var/log/kern.log
Jul 20 10:37:56 sd-60979 kernel: [1446734.755575] bnx2 0000:01:00.0 eno1: NIC Copper Link is Down
Jul 20 10:42:57 sd-60979 kernel: [1447036.437145] bnx2 0000:01:00.0 eno1: NIC Copper Link is Up, 1000 Mbps full duplex
Jul 20 11:39:54 sd-60979 kernel: [1450452.642729] bnx2 0000:01:00.0 eno1: NIC Copper Link is Down
Jul 20 11:44:56 sd-60979 kernel: [1450754.765508] bnx2 0000:01:00.0 eno1: NIC Copper Link is Up, 1000 Mbps full duplex

I created a ticket with my hosting provider and they verified the hardware and cabling; they also told me the traffic isn't suspicious. When I reinstalled the host server, the problem did not appear until I set up the container infrastructure.

I do not understand what is causing this error.

Here are some configuration details and versions:

Host: Debian 9
LXD version (from snap): 3.2
Containers: Debian 9

Interfaces (lspci):
01:00.0 Ethernet controller: Broadcom Limited NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
01:00.1 Ethernet controller: Broadcom Limited NetXtreme II BCM5716 Gigabit Ethernet (rev 20)

lshw -class network

*-network:0                                                                                                                                                                                   
       description: Ethernet interface                                                                                                                                                          
       product: NetXtreme II BCM5716 Gigabit Ethernet                                                                                                                                           
       vendor: Broadcom Limited                                                                                                                                                                 
       physical id: 0                                                                                                                                                                           
       bus info: pci@0000:01:00.0                                                                                                                                                               
       logical name: eno1                                                                                                                                                                       
       version: 20                                                                                                                                                                              
       serial: d4:ae:52:cf:1b:d5                                                                                                                                                                
       size: 1Gbit/s                                                                                                                                                                            
       capacity: 1Gbit/s                                                                                                                                                                        
       width: 64 bits                                                                                                                                                                           
       clock: 33MHz                                                                                                                                                                             
       capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation                                                  
       configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=full firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 ip=62.210.113.221 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
  *-network:1 DISABLED
       description: Ethernet interface
       product: NetXtreme II BCM5716 Gigabit Ethernet
       vendor: Broadcom Limited
       physical id: 0.1
       bus info: pci@0000:01:00.1
       logical name: eno2
       version: 20
       serial: d4:ae:52:cf:1b:d6
       capacity: 1Gbit/s
       width: 64 bits
       clock: 33MHz
       capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=half firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 latency=0 link=no multicast=yes port=twisted pair
       resources: irq:17 memory:c2000000-c3ffffff
  *-network:0
       description: Ethernet interface
       physical id: 1
       logical name: lxdbr0
       serial: fe:23:51:db:5d:dc
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=10.151.108.1 link=yes multicast=yes
*-network:1
       description: Ethernet interface
       physical id: 2
       logical name: veth4EC8V1
       serial: fe:7e:54:fc:61:d6
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:2
       description: Ethernet interface
       physical id: 3
       logical name: vethL1H5US
       serial: fe:39:17:f8:43:6d
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s
  *-network:3
       description: Ethernet interface
       physical id: 4
       logical name: vethQHX91I
       serial: fe:23:51:db:5d:dc
       size: 10Gbit/s
       capabilities: ethernet physical
       configuration: autonegotiation=off broadcast=yes driver=veth driverversion=1.0 duplex=full link=yes multicast=yes port=twisted pair speed=10Gbit/s

Thanks in advance

It is most likely due to the handling of macvlan in that particular device driver.
macvlan requires some cooperation from the NIC to effectively track multiple MAC addresses; not all cards do that the same way, and some can get you into trouble.

I suspect this may be the case here, so switching to a good old bridge may help.
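For reference, a minimal sketch of what the host side of a classic bridge could look like on Debian 9 with ifupdown and bridge-utils. The bridge name br0 and the netmask/gateway below are placeholders for illustration; only the address comes from the lshw output above, so adapt everything to your provider's settings:

# /etc/network/interfaces — replacing the existing eno1 stanza
auto br0
iface br0 inet static
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0
    address 62.210.113.221
    netmask 255.255.255.0
    gateway 62.210.113.1

After bringing br0 up, the container NICs would use parent: br0 with nictype: bridged instead of the macvlan device on eno1.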

Hi Stéphane, thanks for answering; what you say makes sense.

I will try to configure a classic bridge, but I remember having had some problems with bridges before, which is why I finally opted for macvlan.

I will read some guides and forum tips… and if I have any problems or questions I will post in the forum again.

roger

Hello, I have now configured the host and containers with a bridge. I will wait a few days to confirm whether the problem repeats or disappears…

Thanks again
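For anyone hitting the same issue, the change on the container side is roughly the following (a sketch, assuming the new host bridge is called br0; changing the device may require stopping the container first):

lxc config device set <container> eth1 nictype bridged
lxc config device set <container> eth1 parent br0

so the eth1 device ends up as:

  eth1:
    hwaddr: 52:54:00:00:4d:a2
    nictype: bridged
    parent: br0
    type: nic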

Hi, I changed the configuration to a bridge and the problem has been fixed. Thank you very much @stgraber :smiley::smiley::smiley: