No IP address in container after copying image

I copied a container image from LXD 3.0.3 to LXD 4.4.
When I start it, no IP address is set, unlike on the origin host.

lxc list
+-----------+---------+------+------+-----------+----------------+
|   NAME    | STATUS  | IPV4 | IPV6 |    TYP    | SCHNAPPSCHÜSSE |
+-----------+---------+------+------+-----------+----------------+
| webserver | RUNNING |      |      | CONTAINER | 0              |
+-----------+---------+------+------+-----------+----------------+

lxc network list
+-----------+----------+-----------+-----------------+------+--------------+-------------+
|   NAME    |   TYP    | VERWALTET |      IPV4       | IPV6 | BESCHREIBUNG | BENUTZT VON |
+-----------+----------+-----------+-----------------+------+--------------+-------------+
| enp0s31f6 | physical | NEIN      |                 |      |              | 0           |
+-----------+----------+-----------+-----------------+------+--------------+-------------+
| lxdbr0    | bridge   | JA        | 10.115.123.1/24 | none |              | 2           |
+-----------+----------+-----------+-----------------+------+--------------+-------------+
| wlp58s0   | physical | NEIN      |                 |      |              | 0           |
+-----------+----------+-----------+-----------------+------+--------------+-------------+

What can I do to make the container reachable again, e.g. via 10.115.123.60?
This is what ifconfig shows inside the container:

ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::216:3eff:fe0a:7e5b  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:0a:7e:5b  txqueuelen 1000  (Ethernet)
        RX packets 16  bytes 1460 (1.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47  bytes 12065 (12.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I'm really out of ideas :frowning:

Btw: why am I getting mixed languages in the lxc command output now? :weird:
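Side note to myself: presumably the lxc client just follows the shell locale, so a German LANG/LC_MESSAGES would explain the German column headers. A quick check, assuming a plain C locale is available, is to force English for one command:

LC_ALL=C lxc network list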

Does a new container on that target system get a working IP?
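For a quick test, something along these lines should do (the image alias is just an example):

lxc launch ubuntu:20.04 iptest
lxc list iptest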

I haven't created a new one yet.

But how does it work in principle - how does a container get its proper IP address?

Is it set with something like
lxc config set <container> ...
?

Really, I'm going crazy after spending so many frustrating hours on trial & error.
Sorry, but lxd/lxc is a jungle to me.

The container runs a DHCP client which gets an IP from the DHCP server that runs on the host (dnsmasq).
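If you want to verify both ends of that exchange, roughly (assuming a non-snap LXD like yours, where the dnsmasq files live under /var/lib/lxd):

# on the host: has dnsmasq handed out any lease yet?
cat /var/lib/lxd/networks/lxdbr0/dnsmasq.leases

# inside the container: does eth0 actually have an IPv4 address?
lxc exec webserver -- ip addr show eth0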

OK, if the DHCP client is in the origin container - and I always got the same IP address there - then I should also get one in the copied one.

How can I verify that this is the case? Do I have to configure dnsmasq somewhere?

I'm really exhausted. Hopefully this step of getting the proper IP address will be the last one, so that I can finally go ahead with my real task.

You’re not getting any address at all which suggests something else is wrong.

That’s why I asked if a clean container gets an address.

HELP!!

Now I've messed up my container configuration: I tried to change something under volatile.network while the container was still running.
I'm getting errors like these:

lxc start webserver
Error: Common start logic: Failed to start device “eth0”: Parent device “lxdbr0” doesn’t exist

Even when I try to delete the container, it doesn't work!

lxc delete webserver
Error: Failed to remove device ‘eth0’: route ip+net: no such network interface

Please, how can I fix this somehow with a

lxd sql global 'SQL statement'

here?
All my attempts at using a table name that I knew from my last post in GitHub issue #6661 failed with unknown table names.
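Maybe, instead of guessing table names, I can at least list them; this is just an idea and assumes the global database accepts plain SQLite queries (it should, being SQLite-based):

lxd sql global "SELECT name FROM sqlite_master WHERE type='table';"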

Btw: if it turns out I forgot to enable remote access during "lxd init", how could I check for / fix this afterwards?
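If I understand the docs right, the remote listener is controlled by the core.https_address server key, so checking and (re)setting it afterwards should be something like this (":8443" is just the usual default port):

lxc config get core.https_address
lxc config set core.https_address :8443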

As always, I'd appreciate any hints, because I'm really going crazy.

Edit:
I think the number '3' in the last column (USED BY) isn't correct here, because I have no container with ID 3:

lxc network list
+-----------+----------+-----------+----------------+------+--------------+-------------+
|   NAME    |   TYP    | VERWALTET |      IPV4      | IPV6 | BESCHREIBUNG | BENUTZT VON |
+-----------+----------+-----------+----------------+------+--------------+-------------+
| enp0s31f6 | physical | NEIN      |                |      |              | 0           |
+-----------+----------+-----------+----------------+------+--------------+-------------+
| lxdbr0    | bridge   | JA        | 10.172.23.1/24 | none |              | 3           |
+-----------+----------+-----------+----------------+------+--------------+-------------+
| wlp58s0   | physical | NEIN      |                |      |              | 0           |
+-----------+----------+-----------+----------------+------+--------------+-------------+

But if so, how do I fix it? I couldn't find a proper sqlite table for it.

I've done some more investigation.

The main problem seems to be DHCP.
A freshly installed, brand new container has the same issue of getting no IPv4.

I temporarily set a fixed IP on the host (ifconfig … up); afterwards I was able to get access within the lxdbr0 IP range. But unfortunately, even after setting a static route on the host, I'm not able to get outside this range. Maybe I also have to set a static route on the system for the return route.

But seriously, is this tested at all?
I have now spent two days investigating, still without success. Sorry, I'm NOT that much of a Linux expert - though I did manage lxd/lxc a year ago, on LXD 2 and LXD 3 on Ubuntu 18.04 boxes.

Maybe Ubuntu 20.04 (Linux Mint 20) and LXD 4.4 just don't work together yet as far as this dnsmasq DHCP stuff is concerned.

Can you show:

  • lxc network show lxdbr0
  • grep -i apparmor /var/log/kern.log
  • cat /var/snap/lxd/common/lxd/logs/lxd.log

Here we go:

lxc network show lxdbr0
config:
  ipv4.address: 10.172.23.1/24
  ipv4.nat: "true"
  ipv6.address: none
  volatile.bridge.hwaddr: 00:16:3e:ab:0e:20
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/webserver
- /1.0/profiles/default
managed: true
status: Created
locations:
- none

The next one shows too much, so I'll only post a snippet:

 grep -i apparmor /var/log/kern.log

> Aug  4 19:04:28 LenovoT470s kernel: [25456.347794] audit: type=1400 audit(1596560668.592:38): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=28876 comm="apparmor_parser"
> Aug  4 20:04:45 LenovoT470s kernel: [29072.769034] audit: type=1400 audit(1596564285.054:39): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxd-webserver_</var/lib/lxd>" pid=34628 comm="apparmor_parser"
> Aug  4 20:58:12 LenovoT470s kernel: [32280.403926] audit: type=1400 audit(1596567492.732:40): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=41172 comm="apparmor_parser"
> Aug  4 21:03:08 LenovoT470s kernel: [32575.993301] audit: type=1400 audit(1596567788.327:41): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd-webserver_</var/lib/lxd>" pid=41799 comm="apparmor_parser"
> Aug  4 21:58:48 LenovoT470s kernel: [35916.437635] audit: type=1400 audit(1596571128.811:43): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd-webserver_</var/lib/lxd>" pid=50473 comm="apparmor_parser"
> Aug  5 10:22:57 LenovoT470s kernel: [37638.375190] audit: type=1400 audit(1596615777.152:44): apparmor="DENIED" operation="capable" profile="/usr/sbin/cups-browsed" pid=55876 comm="cups-browsed" capability=23  capname="sys_nice"
> Aug  5 12:27:57 LenovoT470s kernel: [45139.437974] audit: type=1400 audit(1596623277.673:45): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=62092 comm="apparmor_parser"
> Aug  5 12:29:27 LenovoT470s kernel: [45228.864393] audit: type=1400 audit(1596623367.102:46): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="lxd_dnsmasq-lxdbr0_</var/lib/lxd>" pid=62191 comm="apparmor_parser"

The last one is located differently on my system:
cat /var/log/lxd/lxd.log
because I don't have the snap installed and compiled lxd-4.4 myself, as you probably remember from another thread here. It also prints too much, so again I only show the tail:

t=2020-08-05T12:59:30+0200 lvl=info msg="Creating container" ephemeral=false name=blah project=default
t=2020-08-05T12:59:30+0200 lvl=info msg="Created container" ephemeral=false name=blah project=default
t=2020-08-05T12:59:37+0200 lvl=info msg="Starting container" action=start created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default stateful=false used=1970-01-01T01:00:00+0100
t=2020-08-05T12:59:38+0200 lvl=info msg="Started container" action=start created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default stateful=false used=1970-01-01T01:00:00+0100
t=2020-08-05T12:59:49+0200 lvl=warn msg="Detected poll(POLLNVAL) event." 
t=2020-08-05T13:03:10+0200 lvl=info msg="Shutting down container" action=shutdown created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default timeout=-1s used=2020-08-05T12:59:38+0200
t=2020-08-05T13:03:11+0200 lvl=info msg="Shut down container" action=shutdown created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default timeout=-1s used=2020-08-05T12:59:38+0200
t=2020-08-05T13:03:13+0200 lvl=info msg="Deleting container" created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default used=2020-08-05T12:59:38+0200
t=2020-08-05T13:03:13+0200 lvl=info msg="Deleted container" created=2020-08-05T12:59:30+0200 ephemeral=false name=blah project=default used=2020-08-05T12:59:38+0200
t=2020-08-05T13:04:22+0200 lvl=warn msg="Detected poll(POLLNVAL) event." 
t=2020-08-05T13:04:52+0200 lvl=warn msg="Detected poll(POLLNVAL) event." 
t=2020-08-05T13:08:03+0200 lvl=warn msg="Detected poll(POLLNVAL) event." 
t=2020-08-05T23:04:12+0200 lvl=info msg="Pruning expired instance backups" 
t=2020-08-05T23:04:12+0200 lvl=info msg="Done pruning expired instance backups"

Please can you paste the full output of lxc info from the LXD host.

Also please can you post the full output of iptables-save from the LXD host.

Please can you confirm that dnsmasq is running on your host using ps aux | grep dnsmasq.

Finally please also provide the output of ss -ulpn.

Thanks

Ok, so here we go again:

lxc info
config:
  core.https_address: 192.168.0.155:8443
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 192.168.0.155:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICEzCCAZmgAwIBAgIRAKAvENPYpjmBfTZIvoHOCpYwCgYIKoZIzj0EAwMwOTEc
    MBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEZMBcGA1UEAwwQcm9vdEBMZW5v
    dm9UNDcwczAeFw0yMDA4MDMxNTU2MzNaFw0zMDA4MDExNTU2MzNaMDkxHDAaBgNV
    BAoTE2xpbnV4Y29udGFpbmVycy5vcmcxGTAXBgNVBAMMEHJvb3RATGVub3ZvVDQ3
    MHMwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAASbRi/Y69ne69RhGy/S04E2ceZk7O6f
    H8s8T3bc0a2oiYE/9xBmSDRnNMyPh4c0cNftljNLgSIFxjglcG6WXwuTRG4JBRFW
    R98FiPwl9PNR/WHkkrtFddlhhV6LS9VRz0yjZTBjMA4GA1UdDwEB/wQEAwIFoDAT
    BgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMC4GA1UdEQQnMCWCC0xl
    bm92b1Q0NzBzhwR/AAABhxAAAAAAAAAAAAAAAAAAAAABMAoGCCqGSM49BAMDA2gA
    MGUCMQCFyfO/svbDnjbRy6r7GqSDuxVdEykyWG74bkHp/vVmYgvwDUs8C2kzsCV
    dPrkVRACMD/tk+g9rBEbH/c2ZsFBSMRXlCpmXg+cw8ddlrxpg0TmUqbd7Wec9C3c
    R1Ae5fYSAw==
    -----END CERTIFICATE-----
  certificate_fingerprint: 6a22b9ffbf13b9261b0a48480e3431a75fa7cb0ff620f9123f72ef2f0921cf7
  driver: lxc
  driver_version: 4.0.2
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "true"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 5.4.0-42-generic
  lxc_features:
    cgroup2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "false"
    seccomp_allow_deny_syntax: "false"
    seccomp_notify: "true"
  os_name: Linux Mint
  os_version: "20"
  project: default
  server: lxd
  server_clustered: false
  server_name: LenovoT470s
  server_pid: 3345
  server_version: "4.4"
  storage: zfs
  storage_version: 0.8.3-1ubuntu12.2

The next one is also long:

sudo iptables-save
# Generated by iptables-save v1.8.4 on Thu Aug  6 11:02:02 2020
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [51:2972]
:ufw-after-forward - [0:0]
:ufw-after-input - [0:0]
:ufw-after-logging-forward - [0:0]
:ufw-after-logging-input - [0:0]
:ufw-after-logging-output - [0:0]
:ufw-after-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-before-input - [0:0]
:ufw-before-logging-forward - [0:0]
:ufw-before-logging-input - [0:0]
:ufw-before-logging-output - [0:0]
:ufw-before-output - [0:0]
:ufw-logging-allow - [0:0]
:ufw-logging-deny - [0:0]
:ufw-not-local - [0:0]
:ufw-reject-forward - [0:0]
:ufw-reject-input - [0:0]
:ufw-reject-output - [0:0]
:ufw-skip-to-policy-forward - [0:0]
:ufw-skip-to-policy-input - [0:0]
:ufw-skip-to-policy-output - [0:0]
:ufw-track-forward - [0:0]
:ufw-track-input - [0:0]
:ufw-track-output - [0:0]
:ufw-user-forward - [0:0]
:ufw-user-input - [0:0]
:ufw-user-limit - [0:0]
:ufw-user-limit-accept - [0:0]
:ufw-user-logging-forward - [0:0]
:ufw-user-logging-input - [0:0]
:ufw-user-logging-output - [0:0]
:ufw-user-output - [0:0]
-A INPUT -j ufw-before-logging-input
-A INPUT -j ufw-before-input
-A INPUT -j ufw-after-input
-A INPUT -j ufw-after-logging-input
-A INPUT -j ufw-reject-input
-A INPUT -j ufw-track-input
-A FORWARD -j ufw-before-logging-forward
-A FORWARD -j ufw-before-forward
-A FORWARD -j ufw-after-forward
-A FORWARD -j ufw-after-logging-forward
-A FORWARD -j ufw-reject-forward
-A FORWARD -j ufw-track-forward
-A OUTPUT -j ufw-before-logging-output
-A OUTPUT -j ufw-before-output
-A OUTPUT -j ufw-after-output
-A OUTPUT -j ufw-after-logging-output
-A OUTPUT -j ufw-reject-output
-A OUTPUT -j ufw-track-output
-A ufw-after-input -p udp -m udp --dport 137 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 138 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 139 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 445 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 67 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 68 -j ufw-skip-to-policy-input
-A ufw-after-input -m addrtype --dst-type BROADCAST -j ufw-skip-to-policy-input
-A ufw-after-logging-forward -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-after-logging-input -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-forward -j ufw-user-forward
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-input -m conntrack --ctstate INVALID -j ufw-logging-deny
-A ufw-before-input -m conntrack --ctstate INVALID -j DROP
-A ufw-before-input -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-input -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A ufw-before-input -j ufw-not-local
-A ufw-before-input -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT
-A ufw-before-input -d 239.255.255.250/32 -p udp -m udp --dport 1900 -j ACCEPT
-A ufw-before-input -j ufw-user-input
-A ufw-before-output -o lo -j ACCEPT
-A ufw-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -j ufw-user-output
-A ufw-logging-allow -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW ALLOW] "
-A ufw-logging-deny -m conntrack --ctstate INVALID -m limit --limit 3/min --limit-burst 10 -j RETURN
-A ufw-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-not-local -m addrtype --dst-type LOCAL -j RETURN
-A ufw-not-local -m addrtype --dst-type MULTICAST -j RETURN
-A ufw-not-local -m addrtype --dst-type BROADCAST -j RETURN
-A ufw-not-local -m limit --limit 3/min --limit-burst 10 -j ufw-logging-deny
-A ufw-not-local -j DROP
-A ufw-reject-input -j REJECT --reject-with icmp-port-unreachable
-A ufw-skip-to-policy-forward -j DROP
-A ufw-skip-to-policy-input -j REJECT --reject-with icmp-port-unreachable
-A ufw-skip-to-policy-output -j ACCEPT
-A ufw-track-output -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-user-input -s 192.168.0.221/32 -d 192.168.0.155/32 -p tcp -m tcp --dport 8443 -j ACCEPT
-A ufw-user-limit -m limit --limit 3/min -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT
COMMIT
# Completed on Thu Aug  6 11:02:02 2020

The next one: dnsmasq

ps aux | grep dnsmasq
lxd         3570  0.0  0.0  15192  4068 ?        Ss   Aug05   0:00 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdbr0 --dhcp-rapid-commit --quiet-dhcp --quiet-dhcp6 --quiet-ra --listen-address=10.172.23.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/lib/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/lib/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.172.23.2,10.172.23.254,1h -s lxd -S /lxd/ --conf-file=/var/lib/lxd/networks/lxdbr0/dnsmasq.raw -u lxd

and the last:

ss -ulpn
State         Recv-Q        Send-Q                Local Address:Port                Peer Address:Port        Process        
UNCONN        0             0                           0.0.0.0:4500                     0.0.0.0:*                          
UNCONN        0             0                           0.0.0.0:5353                     0.0.0.0:*                          
UNCONN        0             0                       10.172.23.1:53                       0.0.0.0:*                          
UNCONN        0             0                     127.0.0.53%lo:53                       0.0.0.0:*                          
UNCONN        0             0                    0.0.0.0%lxdbr0:67                       0.0.0.0:*                          
UNCONN        0             0                           0.0.0.0:500                      0.0.0.0:*                          
UNCONN        0             0                           0.0.0.0:631                      0.0.0.0:*                          
UNCONN        0             0                           0.0.0.0:50263                    0.0.0.0:*                          
UNCONN        0             0                           0.0.0.0:1701                     0.0.0.0:*                          
UNCONN        0             0                              [::]:52814                       [::]:*                          
UNCONN        0             0                                 *:4500                           *:*                          
UNCONN        0             0                              [::]:5353                        [::]:*                          
UNCONN        0             0                                 *:500                            *:*                  

Thanks and good luck.
(What a tough nut to crack!)

It looks like you ran the last command as non-root (I should've specified sudo ss -ulpn), so I can't see which processes are listening, but I can see a DHCP service listening specifically on lxdbr0, so I'll assume that is dnsmasq (as you've shown it is running).

From the lxc info output I can see that LXD has detected your firewall driver as nftables:

  firewall: nftables

However, your iptables-save output shows you are actively using iptables, and the chain names suggest you are using the ufw wrapper around iptables.

So this could suggest that you’ve got a mixture of firewall systems running on your host, and LXD has picked the more recent one to add the DHCP rules to, but they aren’t taking effect (because running iptables and nftables concurrently is going to cause issues).
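A quick side check of which backend the iptables front-end itself is using is its version banner (on Debian/Ubuntu-based hosts it names the variant):

iptables --version    # prints e.g. "iptables v1.8.4 (legacy)" or "... (nf_tables)"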

Can you show the output of sudo nft list ruleset, as I'd like to see if LXD has added the DHCP allow rules there instead, and whether you've got other rules in place that have triggered LXD to prefer nftables.

ok.

sudo nft list ruleset
table ip lxd {
	chain in.lxdbr0 {
		type filter hook input priority filter; policy accept;
		iifname "lxdbr0" tcp dport 53 accept
		iifname "lxdbr0" udp dport 53 accept
		iifname "lxdbr0" udp dport 67 accept
	}

	chain out.lxdbr0 {
		type filter hook output priority filter; policy accept;
		oifname "lxdbr0" tcp sport 53 accept
		oifname "lxdbr0" udp sport 53 accept
		oifname "lxdbr0" udp sport 67 accept
	}

	chain fwd.lxdbr0 {
		type filter hook forward priority filter; policy accept;
		oifname "lxdbr0" accept
		iifname "lxdbr0" accept
	}

	chain pstrt.lxdbr0 {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 10.172.23.0/24 ip daddr != 10.172.23.0/24 masquerade
	}
}

and again ss -ulpn as sudo:

sudo ss -ulpn
State    Recv-Q   Send-Q        Local Address:Port        Peer Address:Port   Process                                       
UNCONN   0        0                   0.0.0.0:4500             0.0.0.0:*       users:(("charon",pid=3318,fd=13))            
UNCONN   0        0                   0.0.0.0:5353             0.0.0.0:*       users:(("avahi-daemon",pid=2646,fd=12))      
UNCONN   0        0               10.172.23.1:53               0.0.0.0:*       users:(("dnsmasq",pid=3570,fd=6))            
UNCONN   0        0             127.0.0.53%lo:53               0.0.0.0:*       users:(("systemd-resolve",pid=2607,fd=12))   
UNCONN   0        0            0.0.0.0%lxdbr0:67               0.0.0.0:*       users:(("dnsmasq",pid=3570,fd=4))            
UNCONN   0        0                   0.0.0.0:500              0.0.0.0:*       users:(("charon",pid=3318,fd=12))            
UNCONN   0        0                   0.0.0.0:631              0.0.0.0:*       users:(("cups-browsed",pid=2769,fd=7))       
UNCONN   0        0                   0.0.0.0:50263            0.0.0.0:*       users:(("avahi-daemon",pid=2646,fd=14))      
UNCONN   0        0                   0.0.0.0:1701             0.0.0.0:*       users:(("xl2tpd",pid=3368,fd=3))             
UNCONN   0        0                      [::]:52814               [::]:*       users:(("avahi-daemon",pid=2646,fd=15))      
UNCONN   0        0                         *:4500                   *:*       users:(("charon",pid=3318,fd=11))            
UNCONN   0        0                      [::]:5353                [::]:*       users:(("avahi-daemon",pid=2646,fd=13))      
UNCONN   0        0                         *:500                    *:*       users:(("charon",pid=3318,fd=10))

OK, so that confirms it, and there are the DHCP rules.

So I'm not sure how LXD decided that nftables was in use (it's possible there was a rule in there at some point), but now that those LXD rules are there it becomes a self-fulfilling prophecy, as it will pick nftables every time.

So what I suggest is:

  1. Stop LXD
  2. Run nft flush ruleset
  3. Start LXD
  4. Run lxc info again and check if firewall detected is xtables.
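Roughly, as shell commands (assuming your source-built LXD runs as a systemd unit called lxd; adjust the stop/start to however you launch it):

sudo systemctl stop lxd       # 1. stop LXD
sudo nft flush ruleset        # 2. clear all nftables rules
sudo systemctl start lxd      # 3. start LXD again
lxc info | grep firewall      # 4. should now show "firewall: xtables"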

An additional step is to uninstall the nftables package to remove the nft command entirely.

The logic for firewall driver detection is here btw: https://github.com/lxc/lxd/blob/master/lxd/firewall/firewall_load.go

Thanks, I will try to do so.

I think the source of this issue lies here in this thread, in the 2nd post (which I followed):

Yes, it's a bit of a tricky situation. You don't have "proper" xtables installed (ebtables is the nftables wrapper, which doesn't support all of the features the legacy ebtables does, and LXD depends on those when using the xtables driver). But ufw is using iptables, so we can't use nftables properly.

So we’re left in a situation where you don’t have a fully functioning xtables system, but we can’t use nftables because iptables is in use (and can’t reliably mix rules concurrently on both systems).

Another solution may be to switch to the ebtables-legacy command; see https://wiki.debian.org/nftables

So just for my understanding (because firewalls are really unknown territory to me): it's better NOT to uninstall nftables in my case?

Better to only flush the ruleset and hope it works afterwards?

Well, let's try that first, but you'll probably end up back at your original problem: a partially operational xtables implementation.

So assuming ufw doesn't support nftables (I'm not familiar with ufw) and you want to continue using it, then you're stuck with iptables, in which case your best bet is to remove nftables and switch to ebtables-legacy to get a fully functioning xtables driver in LXD.
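On Debian/Ubuntu-based systems (so presumably also Mint 20) that would look roughly like the following; treat it as a sketch, since package and service names may differ on your machine:

sudo apt remove --purge nftables                                     # take away the nft command so LXD can't pick that driver
sudo update-alternatives --set ebtables /usr/sbin/ebtables-legacy
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart lxd                                           # or however you restart your source-built LXD
lxc info | grep firewall                                             # should now report xtables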