LXD bridge not working consistently

Hi,
I recently had to rebuild my system drive. Since then I’ve been having some problems; most have been fixed.
Current problem: when I run apt-get update from inside a container, it almost always fails, and it does so in all containers. From the host system, however, it works fine. So I started looking at the lxc network setup.
This looked fine to me:

$ lxc network list
+--------+----------+---------+-----------------+---------------------------+-------------+---------+
|  NAME  |   TYPE   | MANAGED |      IPV4       |           IPV6            | DESCRIPTION | USED BY |
+--------+----------+---------+-----------------+---------------------------+-------------+---------+
| enp4s0 | physical | NO      |                 |                           |             | 0       |
+--------+----------+---------+-----------------+---------------------------+-------------+---------+
| lxdbr0 | bridge   | YES     | 10.221.211.1/24 | fd42:15d6:d200:ce52::1/64 |             | 8       |
+--------+----------+---------+-----------------+---------------------------+-------------+---------+

This looked fine to me also:

$ lxc network show lxdbr0
config:
  ipv4.address: 10.221.211.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:15d6:d200:ce52::1/64
  ipv6.nat: "true"
  volatile.bridge.hwaddr: 00:16:3e:bc:e5:d1
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/emerson20200730
- /1.0/instances/emerson20200816
- /1.0/instances/mail
- /1.0/instances/nextcloud2
- /1.0/instances/phabricator20200827
- /1.0/instances/project20200807
- /1.0/instances/router20200728
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

This one got me a little concerned:

$ lxc network info lxdbr0
Name: lxdbr0
MAC address: 00:16:3e:bc:e5:d1
MTU: 1500
State: up

IPs:
  inet	10.221.211.1
  inet6	fd42:15d6:d200:ce52::1
  inet6	fe80::216:3eff:febc:e5d1

Network usage:
  Bytes received: 12.79GB
  Bytes sent: 14.32GB
  Packets received: 17892966
  Packets sent: 20686862

I saw a second IPv6 entry (the fe80 address, which I don’t think is valid). It could be a leftover from previous attempts at getting networking running again.

Do you think this is the problem?
How do I get rid of it? Do I need to drop all networking and rebuild the bridge?

Thanks,

Harlan…

$ lxc info
config: {}
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICCjCCAZCgAwIBAgIRANlt8OhDEtg7h7kSqvkT6j4wCgYIKoZIzj0EAwMwNjEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEWMBQGA1UEAwwNcm9vdEBmaWxl
    c3J2MjAeFw0yMDA4MTQwNTI3MzJaFw0zMDA4MTIwNTI3MzJaMDYxHDAaBgNVBAoT
    E2xpbnV4Y29udGFpbmVycy5vcmcxFjAUBgNVBAMMDXJvb3RAZmlsZXNydjIwdjAQ
    BgcqhkjOPQIBBgUrgQQAIgNiAAS39xiMdHFg+ITBzYyk0byS9jlPObg/d8SJLV4I
    3gmStf+yjqtAIMNuw9Xggibfa7XlVFQzM+3MpBOVgq7OBtryJj/qyADlKMW8rcH9
    8VKDDU0oUxVhwfDe/j+q1ej9ZqGjYjBgMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUE
    DDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCsGA1UdEQQkMCKCCGZpbGVzcnYy
    hwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gAMGUCMQDQmTGg
    Kk1KxMch1uLr1Sudm8YHRb+Vs2FeSjQfEP4N9qZakyJqg+GnzmUS9k1HLd8CMGMV
    uM1rpiZgmTnSe1ZK53rU1rVT4IA9BGpVC2p63CKSt/YeQm1LRk56pDER6yRKWw==
    -----END CERTIFICATE-----
  certificate_fingerprint: 95f141d78510dd3a6c65ec12d15a03d2fbc0c148041c2a7624b0cb3ec00ccd79
  driver: lxc
  driver_version: 4.0.4
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "false"
    seccomp_listener: "false"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "false"
    unpriv_fscaps: "true"
  kernel_version: 4.15.0-112-generic
  lxc_features:
    cgroup2: "true"
    devpts_fd: "false"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "false"
  os_name: Ubuntu
  os_version: "18.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: filesrv2
  server_pid: 3918
  server_version: "4.4"
  storage: dir
  storage_version: "1"

Does anyone have thoughts on this? I could use some help getting the bridge sorted out.

Thanks,

Harlan…

I’ve modified the category of your post from LXC to LXD, as you’re not using LXC, you’re using LXD. I’ve also quoted your command outputs for clarity, as it is easier to read this way.

As for your specific issue, the additional IPv6 address is normal: it is what is known as a “link-local” address. These are added automatically to all IPv6-enabled interfaces and are part of normal operation.
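If you want to double check, link-local addresses always come from the fe80::/10 range and carry "scope link"; a quick way to see just that address on the bridge (a minimal sketch) is:

    # Show only the automatically assigned link-local IPv6 address on lxdbr0
    ip -6 addr show dev lxdbr0 scope link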

Beyond that, your post is a little light on details. To help further, we would need to know the following (example commands are sketched after the list):

  • The specific error when apt fails (this can provide useful info).
  • The output of lxc config show <container> --expanded of one of the problem containers.
  • The output of ip a, and ip r from both the problem container and the LXD host.
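For example, something along these lines would gather all three (the container name here is just a placeholder for one of your problem containers):

    # On the LXD host
    lxc config show <container> --expanded
    ip a
    ip r

    # Inside the problem container (or via lxc exec from the host)
    lxc exec <container> -- sh -c "ip a; ip r"
    lxc exec <container> -- apt-get update   # capture the exact error text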

Thanks

Hi Thomas,
Thank you for straightening up my mess. Here is the info you requested, in the order requested.

Please let me know if you need anything else.

Thanks,

Harlan…

    # apt-get update
    Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease                                  
    Err:2 http://security.ubuntu.com/ubuntu bionic-security InRelease                    
      Connection failed [IP: 91.189.88.142 80]
    Err:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
      Connection failed [IP: 91.189.88.142 80]
    Err:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
      Connection failed [IP: 91.189.88.142 80]
    Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
    Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
    Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
    Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
    Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
    Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
    Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
    Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
    Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
    Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
    Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
    Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
    Err:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
      Connection failed [IP: 91.189.88.142 80]
    Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
    Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
    Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
    Reading package lists... Done
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease  Connection failed [IP: 91.189.88.142 80]
    W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease  Connection failed [IP: 91.189.88.142 80]
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease  Connection failed [IP: 91.189.88.142 80]
    E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic/universe/binary-amd64/Packages  Connection failed [IP: 91.189.88.142 80]
    W: Some index files failed to download. They have been ignored, or old ones used instead.
    $ lxc config show phabricator20200827 --expanded
    architecture: x86_64
    config:
      image.architecture: amd64
      image.description: ubuntu 18.04 LTS amd64 (release) (20200807)
      image.label: release
      image.os: ubuntu
      image.release: bionic
      image.serial: "20200807"
      image.type: squashfs
      image.version: "18.04"
      volatile.base_image: a92eaa65a5c5e53c6bf788b4443f4e5d2afac1665486247c336aa90959522bb6
      volatile.eth0.host_name: vethe59fea35
      volatile.eth0.hwaddr: 00:16:3e:32:8f:42
      volatile.idmap.base: "0"
      volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
      volatile.last_state.power: RUNNING
    devices:
      eth0:
        name: eth0
        network: lxdbr0
        type: nic
      root:
        path: /
        pool: data2dir
        type: disk
    ephemeral: false
    profiles:
    - default
    stateful: false
    description: ""
HOST:
    $ ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 00:1d:7d:d2:79:a2 brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.21/24 brd 192.168.0.255 scope global enp4s0
           valid_lft forever preferred_lft forever
        inet6 fe80::21d:7dff:fed2:79a2/64 scope link 
           valid_lft forever preferred_lft forever
    3: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:bc:e5:d1 brd ff:ff:ff:ff:ff:ff
        inet 10.221.211.1/24 scope global lxdbr0
           valid_lft forever preferred_lft forever
        inet6 fd42:15d6:d200:ce52::1/64 scope global 
           valid_lft forever preferred_lft forever
        inet6 fe80::216:3eff:febc:e5d1/64 scope link 
           valid_lft forever preferred_lft forever
    5: vethac398884@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
        link/ether 3a:5d:43:49:e9:21 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    8: vethcf29410c@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
        link/ether 56:ab:db:04:99:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    15: veth6ff9d273@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
        link/ether ca:d2:d5:18:53:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    4123: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
        link/ether 02:42:52:f4:c8:88 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::42:52ff:fef4:c888/64 scope link 
           valid_lft forever preferred_lft forever
    4125: vethe85b3fe@if4124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
        link/ether ea:89:0e:fe:aa:6a brd ff:ff:ff:ff:ff:ff link-netnsid 5
        inet6 fe80::e889:eff:fefe:aa6a/64 scope link 
           valid_lft forever preferred_lft forever
    4127: vetha71b647@if4126: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
        link/ether 6e:c4:3a:8a:58:47 brd ff:ff:ff:ff:ff:ff link-netnsid 6
        inet6 fe80::6cc4:3aff:fe8a:5847/64 scope link 
           valid_lft forever preferred_lft forever
    169: veth13a9d70c@if168: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
        link/ether 16:d7:d2:28:be:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    4290: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1396 qdisc fq_codel state UNKNOWN group default qlen 3
        link/ppp 
        inet 172.111.192.111 peer 192.253.242.4/32 scope global ppp0
           valid_lft forever preferred_lft forever
    4039: vethe59fea35@if4038: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
        link/ether 72:e8:af:37:47:b7 brd ff:ff:ff:ff:ff:ff link-netnsid 3

    $ ip r
    default dev ppp0 scope link 
    10.221.211.0/24 dev lxdbr0 proto kernel scope link src 10.221.211.1 
    45.74.4.1 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    46.243.136.130 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    46.243.239.2 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
    178.170.146.1 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    188.72.86.2 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    192.168.0.0/24 dev enp4s0 proto kernel scope link src 192.168.0.21 
    192.168.1.201 dev lo scope link 
    192.253.242.2 via 192.168.0.1 dev enp4s0 src 192.168.0.21 
    192.253.242.4 dev ppp0 proto kernel scope link src 172.111.192.111 
CONTAINER:
    # ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    4038: eth0@if4039: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:32:8f:42 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 10.221.211.131/24 brd 10.221.211.255 scope global dynamic eth0
           valid_lft 2673sec preferred_lft 2673sec
        inet6 fd42:15d6:d200:ce52:216:3eff:fe32:8f42/64 scope global dynamic mngtmpaddr noprefixroute 
           valid_lft 3175sec preferred_lft 3175sec
        inet6 fe80::216:3eff:fe32:8f42/64 scope link 
           valid_lft forever preferred_lft forever

    # ip r
        default via 10.221.211.1 dev eth0 proto dhcp src 10.221.211.131 metric 100 
        10.221.211.0/24 dev eth0 proto kernel scope link src 10.221.211.131 
        10.221.211.1 dev eth0 proto dhcp scope link src 10.221.211.131 metric 100

The best thing to do with any networking issue is to run through some common steps to try to locate where the problem is:

  1. Can you ping the lxdbr0 gateway IP from the container? ping 10.221.211.1
  2. Can you ping externally from the container? ping 8.8.8.8
  3. Can you resolve a DNS name inside the container? host www.linuxcontainers.org

I also notice that you’ve got a docker0 bridge on your LXD host, which suggests you have Docker installed. It comes up semi-frequently on this forum that users have trouble with LXD networking and it turns out to be the firewall rules that Docker configures on the host blocking other network traffic that LXD requires to function. So I would also encourage you to check whether Docker has created firewall rules that may impact network connectivity for non-Docker applications.
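A quick way to check that (a minimal sketch, assuming the xtables firewall shown in your lxc info output) is:

    # List the FORWARD chain rules and packet counters; look for a DROP policy
    # or DROP rules that could affect traffic bridged through lxdbr0
    sudo iptables -S FORWARD
    sudo iptables -L FORWARD -v -n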

# ping 10.221.211.1
PING 10.221.211.1 (10.221.211.1) 56(84) bytes of data.
64 bytes from 10.221.211.1: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 10.221.211.1: icmp_seq=2 ttl=64 time=0.089 ms

# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=228 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=227 ms

# host www.linuxcontainers.org
www.linuxcontainers.org is an alias for rproxy.stgraber.org.
rproxy.stgraber.org has address 149.56.148.5
rproxy.stgraber.org has IPv6 address 2001:470:b368:1020:1::2

# sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps /* generated for LXD network lxdbr0 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps /* generated for LXD network lxdbr0 */

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:9000
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:8000
ACCEPT     tcp  --  anywhere             172.17.0.3           tcp dpt:8123

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere

Hi,
After you mentioned possible issues between LXD and Docker, I moved the Docker containers to another host, removed the Docker software from this server, and removed the Docker-specific rules from iptables (below). Doing an apt-get update on the host works perfectly. The same command from a container is still not working correctly; see below. I also included pings of the sites that apt-get said it couldn’t connect to; the pings work just fine. traceroute also seems to work just fine.

One thing I don’t think I mentioned is that all traffic to and from this host runs through a ppp0 connection (VPN). Traffic from the host goes out just fine; it’s the containers that are having problems.

Could it be a problem with how LXD is routed?

Thanks,

Harlan…

From host:
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps /* generated for LXD network lxdbr0 */

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */
ACCEPT     all  --  anywhere             anywhere             /* generated for LXD network lxdbr0 */

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:domain /* generated for LXD network lxdbr0 */
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps /* generated for LXD network lxdbr0 */

From container:
# apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Err:2 http://security.ubuntu.com/ubuntu bionic-security InRelease
Connection failed [IP: 91.189.91.38 80]
Err:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Connection failed [IP: 91.189.88.142 80]
Err:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Connection failed [IP: 91.189.88.142 80]
Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
Ign:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
Err:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
Connection failed [IP: 91.189.88.142 80]
Ign:6 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en
Ign:7 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages
Ign:8 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en
Reading package lists… Done
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease Connection failed [IP: 91.189.88.142 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease Connection failed [IP: 91.189.88.142 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease Connection failed [IP: 91.189.91.38 80]
E: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/bionic/universe/binary-amd64/Packages Connection failed [IP: 91.189.88.142 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.

Container:
# ping 91.189.88.142
PING 91.189.88.142 (91.189.88.142) 56(84) bytes of data.
64 bytes from 91.189.88.142: icmp_seq=1 ttl=49 time=386 ms
64 bytes from 91.189.88.142: icmp_seq=2 ttl=49 time=386 ms

Container:
# ping 91.189.91.38
PING 91.189.91.38 (91.189.91.38) 56(84) bytes of data.
64 bytes from 91.189.91.38: icmp_seq=1 ttl=45 time=394 ms
64 bytes from 91.189.91.38: icmp_seq=2 ttl=45 time=393 ms

I bet it is an MTU mismatch between your ppp0 interface and lxdbr0, combined with broken path MTU discovery.

You could try setting the bridge.mtu option on lxdbr0 to match your PPP interface’s MTU.

See https://linuxcontainers.org/lxd/docs/master/networks#network-bridge
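For example, using the 1396 MTU shown on ppp0 in your ip a output (a sketch; running containers may need a restart to pick up the new MTU):

    lxc network set lxdbr0 bridge.mtu 1396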

You could also try adding an iptables rule on your LXD host to clamp TCP MSS to the outgoing interface’s MTU; see https://lartc.org/howto/lartc.cookbook.mtu-mss.html
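Something like the classic clamping rule from that cookbook (a sketch, assuming ppp0 is the outgoing interface on the host):

    # Clamp TCP MSS to the path MTU for TCP SYN packets forwarded out via ppp0
    sudo iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu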


Hi Thomas,
Thank you very much for your assistance. I needed to use both suggestions to get the containers working. I do believe you were correct about the MTU size.
I will mark this as RESOLVED.

Thanks again,

Harlan…