Network routing died - iptables and docker

It was working, but now no packets get past lxdbr0. The iptables MASQUERADE rule is gone.
I ran Wireshark on both eth0 and lxdbr0, then started a ping from a container to another machine on the host network. The ICMP packets show up on lxdbr0 but not on eth0.
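
The command-line equivalent of that capture would be something like the following (a sketch using tcpdump instead of Wireshark; eth0 is the host's uplink as above):

sudo tcpdump -ni lxdbr0 icmp
sudo tcpdump -ni eth0 icmp

The echo requests from the container show up in the first capture but never in the second.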

I have attached the output of:
lxc info
lxc network list
lxc network info lxdbr0
lxc network show lxdbr0

Is there a command I could run to set up the iptables rules again?

$ lxc info
config:
  core.https_address: '[::]'
  core.trust_password: true
api_extensions:

  • storage_zfs_remove_snapshots
  • container_host_shutdown_timeout
  • container_stop_priority
  • container_syscall_filtering
  • auth_pki
  • container_last_used_at
  • etag
  • patch
  • usb_devices
  • https_allowed_credentials
  • image_compression_algorithm
  • directory_manipulation
  • container_cpu_time
  • storage_zfs_use_refquota
  • storage_lvm_mount_options
  • network
  • profile_usedby
  • container_push
  • container_exec_recording
  • certificate_update
  • container_exec_signal_handling
  • gpu_devices
  • container_image_properties
  • migration_progress
  • id_map
  • network_firewall_filtering
  • network_routes
  • storage
  • file_delete
  • file_append
  • network_dhcp_expiry
  • storage_lvm_vg_rename
  • storage_lvm_thinpool_rename
  • network_vlan
  • image_create_aliases
  • container_stateless_copy
  • container_only_migration
  • storage_zfs_clone_copy
  • unix_device_rename
  • storage_lvm_use_thinpool
  • storage_rsync_bwlimit
  • network_vxlan_interface
  • storage_btrfs_mount_options
  • entity_description
  • image_force_refresh
  • storage_lvm_lv_resizing
  • id_map_base
  • file_symlinks
  • container_push_target
  • network_vlan_physical
  • storage_images_delete
  • container_edit_metadata
  • container_snapshot_stateful_migration
  • storage_driver_ceph
  • storage_ceph_user_name
  • resource_limits
  • storage_volatile_initial_source
  • storage_ceph_force_osd_reuse
  • storage_block_filesystem_btrfs
  • resources
  • kernel_limits
  • storage_api_volume_rename
  • macaroon_authentication
  • network_sriov
  • console
  • restrict_devlxd
  • migration_pre_copy
  • infiniband
  • maas_network
  • devlxd_events
  • proxy
  • network_dhcp_gateway
  • file_get_symlink
  • network_leases
  • unix_device_hotplug
  • storage_api_local_volume_handling
  • operation_description
  • clustering
  • event_lifecycle
  • storage_api_remote_volume_handling
  • nvidia_runtime
  • container_mount_propagation
  • container_backup
  • devlxd_images
  • container_local_cross_pool_handling
  • proxy_unix
  • proxy_udp
  • clustering_join
  • proxy_tcp_udp_multi_port_handling
  • network_state
  • proxy_unix_dac_properties
  • container_protection_delete
  • unix_priv_drop
  • pprof_http
  • proxy_haproxy_protocol
  • network_hwaddr
  • proxy_nat
  • network_nat_order
  • container_full
  • candid_authentication
  • backup_compression
  • candid_config
  • nvidia_runtime_config
  • storage_api_volume_snapshots
  • storage_unmapped
  • projects
  • candid_config_key
  • network_vxlan_ttl
  • container_incremental_copy
  • usb_optional_vendorid
  • snapshot_scheduling
  • snapshot_schedule_aliases
  • container_copy_project
  • clustering_server_address
  • clustering_image_replication
  • container_protection_shift
  • snapshot_expiry
  • container_backup_override_pool
  • snapshot_expiry_creation
  • network_leases_location
  • resources_cpu_socket
  • resources_gpu
  • resources_numa
  • kernel_features
  • id_map_current
  • event_location
  • storage_api_remote_volume_snapshots
  • network_nat_address
  • container_nic_routes
  • rbac
  • cluster_internal_copy
  • seccomp_notify
  • lxc_features
  • container_nic_ipvlan
  • network_vlan_sriov
  • storage_cephfs
  • container_nic_ipfilter
  • resources_v2
  • container_exec_user_group_cwd
  • container_syscall_intercept
  • container_disk_shift
  • storage_shifted
  • resources_infiniband
  • daemon_storage
  • instances
  • image_types
  • resources_disk_sata
  • clustering_roles
  • images_expiry
  • resources_network_firmware
  • backup_compression_algorithm
  • ceph_data_pool_name
  • container_syscall_intercept_mount
  • compression_squashfs
  • container_raw_mount
  • container_nic_routed
  • container_syscall_intercept_mount_fuse
  • container_disk_ceph
  • virtual-machines
  • image_profiles
  • clustering_architecture
  • resources_disk_id
  • storage_lvm_stripes
  • vm_boot_priority
  • unix_hotplug_devices
  • api_filtering
  • instance_nic_network
  • clustering_sizing
  • firewall_driver
  • projects_limits
  • container_syscall_intercept_hugetlbfs
  • limits_hugepages
  • container_nic_routed_gateway
  • projects_restrictions
  • custom_volume_snapshot_expiry
  • volume_snapshot_scheduling
  • trust_ca_certificates
  • snapshot_disk_usage
  • clustering_edit_roles
  • container_nic_routed_host_address
  • container_nic_ipvlan_gateway
  • resources_usb_pci
  • resources_cpu_threads_numa
  • resources_cpu_core_die
  • api_os
  • container_nic_routed_host_table
  • container_nic_ipvlan_host_table
  • container_nic_ipvlan_mode
  • resources_system
  • images_push_relay
  • network_dns_search
  • container_nic_routed_limits
  • instance_nic_bridged_vlan
  • network_state_bond_bridge
  • usedby_consistency
  • custom_block_volumes
  • clustering_failure_domains
  • resources_gpu_mdev
  • console_vga_type
  • projects_limits_disk
  • network_type_macvlan
  • network_type_sriov
  • container_syscall_intercept_bpf_devices
  • network_type_ovn
  • projects_networks
  • projects_networks_restricted_uplinks
  • custom_volume_backup
  • backup_override_name
  • storage_rsync_compression
  • network_type_physical
  • network_ovn_external_subnets
  • network_ovn_nat
  • network_ovn_external_routes_remove
  • tpm_device_type
  • storage_zfs_clone_copy_rebase
  • gpu_mdev
  • resources_pci_iommu
  • resources_network_usb
  • resources_disk_address
  • network_physical_ovn_ingress_mode
  • network_ovn_dhcp
  • network_physical_routes_anycast
  • projects_limits_instances
  • network_state_vlan
  • instance_nic_bridged_port_isolation
  • instance_bulk_state_change
  • network_gvrp
  • instance_pool_move
  • gpu_sriov
  • pci_device_type
  • storage_volume_state
  • network_acl
  • migration_stateful
  • disk_state_quota
  • storage_ceph_features
  • projects_compression
  • projects_images_remote_cache_expiry
  • certificate_project
  • network_ovn_acl
  • projects_images_auto_update
  • projects_restricted_cluster_target
  • images_default_architecture
  • network_ovn_acl_defaults
  • gpu_mig
  • project_usage
  • network_bridge_acl
  • warnings
  • projects_restricted_backups_and_snapshots
  • clustering_join_token
  • clustering_description
  • server_trusted_proxy
  • clustering_update_cert
  • storage_api_project
  • server_instance_driver_operational
  • server_supported_storage_drivers
  • event_lifecycle_requestor_address
  • resources_gpu_usb
  • clustering_evacuation
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 192.168.42.5:8443
  - 172.17.0.1:8443
  - 10.66.42.1:8443
  - '[fd42:e66:1a5b:98b5::1]:8443'
  architectures:
  - x86_64
  - i686
  certificate: |
      -----BEGIN CERTIFICATE-----
      MIIB9zCCAX6gAwIBAgIRAPEiosaP658/fnA0QKDwlPEwCgYIKoZIzj0EAwMwMDEc
      MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEQMA4GA1UEAwwHcm9vdEBhOTAe
      Fw0yMTA4MjEyMTQwMzJaFw0zMTA4MTkyMTQwMzJaMDAxHDAaBgNVBAoTE2xpbnV4
      Y29udGFpbmVycy5vcmcxEDAOBgNVBAMMB3Jvb3RAYTkwdjAQBgcqhkjOPQIBBgUr
      gQQAIgNiAARvSlV17VJuBol2IT57Q2tdqjYxLczPfHuQSy5dj/ZPJZj6eCS0ai1e
      rzr1P8GqVMyb2mHsKpH4k5u2I1Jii1KS9luRk95ZShBQk5oOYSJYbNQ1scVwxbRU
      b3HzfU5pQR2jXDBaMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcD
      ATAMBgNVHRMBAf8EAjAAMCUGA1UdEQQeMByCAmE5hwR/AAABhxAAAAAAAAAAAAAA
      AAAAAAABMAoGCCqGSM49BAMDA2cAMGQCMGG7nn80gIOGnIZ7gFx2H7VMz2NhWVlW
      CO/a/4ZQnU69afdotcA5Py5y+HAKN1ZmVAIwa33cx27VfjgTXr0XgrVfmFsb68/K
      Pfcsk1Yx1Rxiq8gVziZ1kqLj+phlRmGi3Ae4
      -----END CERTIFICATE-----
  certificate_fingerprint: 525c1de4e49a36c5d528a7d96b226a6f3c0a5e855979ee9fffc98ff184e87d80
  driver: lxc
  driver_version: 4.0.10
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.11.0-31-generic
  lxc_features:
    cgroup2: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "21.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: a9
  server_pid: 7041
  server_version: "4.17"
  storage: btrfs
  storage_version: 5.4.1
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.43.0
    remote: false
  - name: zfs
    version: 2.0.2-1ubuntu5
    remote: false
  - name: ceph
    version: 15.2.13
    remote: true
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: cephfs
    version: 15.2.13
    remote: true

$ lxc network list
+---------+----------+---------+---------------+--------------------------+-------------+---------+
| NAME    | TYPE     | MANAGED | IPV4          | IPV6                     | DESCRIPTION | USED BY |
+---------+----------+---------+---------------+--------------------------+-------------+---------+
| docker0 | bridge   | NO      |               |                          |             | 0       |
+---------+----------+---------+---------------+--------------------------+-------------+---------+
| enp7s0  | physical | NO      |               |                          |             | 0       |
+---------+----------+---------+---------------+--------------------------+-------------+---------+
| lxdbr0  | bridge   | YES     | 10.66.42.1/24 | fd42:e66:1a5b:98b5::1/64 |             | 3       |
+---------+----------+---------+---------------+--------------------------+-------------+---------+

$ lxc network info lxdbr0
Name: lxdbr0
MAC address: 00:16:3e:3a:91:ef
MTU: 1500
State: up

IPs:
inet 10.66.42.1
inet6 fd42:e66:1a5b:98b5::1
inet6 fe80::216:3eff:fe3a:91ef

Network usage:
Bytes received: 104.97kB
Bytes sent: 26.88kB
Packets received: 1231
Packets sent: 237

$ lxc network show lxdbr0
config:
  ipv4.address: 10.66.42.1/24
  ipv4.firewall: "False"
  ipv4.nat: "true"
  ipv6.address: fd42:e66:1a5b:98b5::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/lxdgui
- /1.0/instances/wine
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

Try running:

sudo systemctl reload snap.lxd.daemon

This will cause LXD to reapply its firewall rules, in case something else on your system has messed with them.
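
Since lxc info above reports firewall: nftables, the reapplied rules would live in the nftables ruleset rather than in the iptables tables; a quick way to check for a masquerade rule covering the 10.66.42.0/24 subnet (just a sketch) is:

sudo nft list ruleset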

Thanks for the answer, that was the exact command I was looking for.
But that did not fix the problem.
I think the iptables command is not working.
I would expect an iptables rule that would MASQUERADE for
lxdbr0 @ 10.66.42.1,
but I do not find one.
I tried to manually enter a rule

iptables -t nat -A POSTROUTING -s 10.66.42.0/24 ! -d 10.66.42.0/24 -j MASQUERADE

but that did not work.
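
To see whether the manual rule was accepted and whether any packets actually hit it, the chains can be listed with counters, e.g.:

sudo iptables -t nat -L -v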

I have added the iptables dump of nat

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source          destination
   58  5861 DOCKER     all  --  any      any      anywhere        anywhere        ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source          destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source          destination
    0     0 DOCKER     all  --  any      any      anywhere        !localhost/8    ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in       out      source          destination
    0     0 MASQUERADE all  --  any      !docker0 172.17.0.0/16   anywhere

Chain DOCKER (2 references)
 pkts bytes target     prot opt in       out      source          destination
    0     0 RETURN     all  --  docker0  any      anywhere        anywhere

I just noticed that the lxc info YAML above has firewall set to nftables.
How do I set the firewall environment to use iptables instead of nftables?

You need to clear all nftables rules (using sudo nft flush ruleset) and then have at least one active rule in iptables/ip6tables before reloading LXD, which will then cause it to use the xtables driver.
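
In concrete terms that might look like the following (a sketch only; the INPUT rule is just a placeholder so the xtables ruleset is not empty, and note that flushing nftables also removes any rules Docker or the host firewall put there):

sudo nft flush ruleset
sudo iptables -I INPUT -i lo -j ACCEPT
sudo systemctl reload snap.lxd.daemon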

I have found something that may work.
I did a few Ubuntu 21.04 reinstalls.
The LXD reload trashes Docker so that it no longer works.

Start Docker networking after LXD.

Both Docker and LXD seem to work now.

This may break again once LXD updates. See

Can you expand on that? We've not had reports previously of LXD breaking Docker networking on reload, only the other way around.

Yes, I know. I put an "After snap.lxd.daemon" statement in the Docker systemd unit file. I have not tried it yet.
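
For the record, what I have in mind is a drop-in along these lines (untested sketch; the path and unit names are my assumption):

# /etc/systemd/system/docker.service.d/wait-for-lxd.conf
[Unit]
After=snap.lxd.daemon.service
Wants=snap.lxd.daemon.service

followed by sudo systemctl daemon-reload so Docker picks up the new ordering.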