No name resolution for LXD containers using libvirt virbr0 as bridge

In my setup I want to use a common bridge shared by KVM/libvirt VMs and LXD containers. VMs and LXD containers should be able to communicate with each other.

I use CentOS 8, set up a libvirt virbr0, and configured this bridge in LXD. Everything works fine except DNS resolution. I can ping from any VM to any container and vice versa using the IP address. I can ping each VM from any other VM and from the host by name. And I can ping any VM from any container by name (either the FQDN vm01.intra or just the host name vm01).

But I can't ping any container by name from a VM or from another container (neither FQDN nor short host name; by IP address it works fine).

I found a lot of info on the net about setting up LXD to use virbr0. But none mentioned a DNS problem.

In the virbr0 configuration I found that /var/lib/libvirt/dnsmasq/default.addnhosts has a hostname entry for every VM, but none for the container:

  {
    "ip-address": "192.168.122.61",
    "mac-address": "52:54:00:be:63:b2",
    "hostname": "vmtest01",
    "client-id": "00:6f:75:72:61:6e:6f:73",
    "expiry-time": 1582628832
  },
  {
    "ip-address": "192.168.122.145",
    "mac-address": "52:54:00:c2:30:d3",
    "hostname": "vmtest02",
    "client-id": "01:52:54:00:c2:30:d3",
    "expiry-time": 1582629369
  },
  {
    "ip-address": "192.168.122.43",
    "mac-address": "00:16:3e:05:01:87",
    "client-id": "01:00:16:3e:05:01:87",
    "expiry-time": 1582629482
  }
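To spot which leases lack a hostname without reading the file by eye, a small script can help. This is only a sketch: the sample data is inlined so the script is self-contained, and on a real host the file to inspect would be the libvirt dnsmasq status file (the exact path, e.g. /var/lib/libvirt/dnsmasq/virbr0.status, is an assumption based on a default setup):

```shell
# Sketch: print the MAC address of every DHCP lease entry that was
# recorded without a hostname. A sample lease snippet is inlined to keep
# this self-contained; on a real host you would point it at the libvirt
# dnsmasq status file instead.
LEASES=$(mktemp)
cat > "$LEASES" <<'EOF'
  { "ip-address": "192.168.122.61", "mac-address": "52:54:00:be:63:b2", "hostname": "vmtest01" },
  { "ip-address": "192.168.122.145", "mac-address": "52:54:00:c2:30:d3", "hostname": "vmtest02" },
  { "ip-address": "192.168.122.43", "mac-address": "00:16:3e:05:01:87" }
EOF
# Treat each '}'-terminated entry as one record; print its MAC if it has
# a "mac-address" key but no "hostname" key.
nameless=$(awk -v RS='}' '/mac-address/ && !/hostname/ {
  match($0, /"mac-address": "[^"]*"/)
  s = substr($0, RSTART, RLENGTH)
  gsub(/"mac-address": "|"/, "", s)
  print s
}' "$LEASES")
echo "$nameless"
rm -f "$LEASES"
```

For the sample above this prints 00:16:3e:05:01:87, i.e. exactly the container's MAC, which matches the observation that only the container's lease is missing a name.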

The last entry is an LXD container according to the MAC:

# lxc config show contest-01 --expanded

architecture: x86_64
config:
  environment.LANG: de_DE.UTF-8
  environment.LC_ALL: de_DE.UTF-8
  image.architecture: amd64
  image.description: Centos 8 amd64 (20200222_07:41)
  image.os: Centos
  image.release: "8"
  image.serial: "20200222_07:41"
  image.type: squashfs
  volatile.base_image: c0d1c1964f0952b73ebf21c31bc726279570267e637a98d26d58d849f2dd506d
  volatile.eth0.host_name: vethc641281d
  volatile.eth0.hwaddr: 00:16:3e:05:01:87
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: virbr0
    type: nic
  root:
    path: /
    pool: lvmpool
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

My virbr0 configuration is:

# virsh net-dumpxml default

<network connections='2'>
  <name>default</name>
  <uuid>415529d9-f16a-4447-9354-3c4f8cc9d709</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' zone='trusted' stp='on' delay='0'/>
  <mac address='52:54:00:ea:1c:27'/>
  <domain name='intra'/>
  <dns forwardPlainNames='no'>
    <forwarder addr='127.0.0.1'/>
    <host ip='192.168.122.1'>
      <hostname>host</hostname>
      <hostname>host.intra</hostname>
    </host>
  </dns>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.20' end='192.168.122.200'/>
    </dhcp>
  </ip>
</network>

I'm also wondering about the connections='2' above. Obviously it doesn't count the container, just the VMs.
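Until the container registers its name via DHCP, one possible stopgap (not a fix, and assuming the address 192.168.122.43 stays leased to the container) would be a static host entry in the network's dns block, mirroring the existing entry for the host:

<dns forwardPlainNames='no'>
  <forwarder addr='127.0.0.1'/>
  <host ip='192.168.122.43'>
    <hostname>contest-01</hostname>
    <hostname>contest-01.intra</hostname>
  </host>
</dns>

libvirt's virsh net-update can add such a dns-host entry to the default network without editing the XML by hand.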

Any hint is greatly appreciated.

Found a solution.

Inside a CentOS 8 container you find in

/etc/sysconfig/network-scripts/ifcfg-eth0

the line

DHCP_HOSTNAME=`cat /proc/sys/kernel/hostname`

If you replace it with a static hostname like

DHCP_HOSTNAME="test02"

everything works fine.
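Something along these lines could apply that change without editing by hand. This is a sketch only: a temp copy stands in for the real file so it can run anywhere; inside an actual container the target would be /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
# Sketch: replace the backtick-based DHCP_HOSTNAME line with a static
# assignment derived from the running kernel hostname. A temp copy stands
# in for the real ifcfg-eth0 file here.
IFCFG=$(mktemp)
cat > "$IFCFG" <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
DHCP_HOSTNAME=`cat /proc/sys/kernel/hostname`
ONBOOT=yes
EOF
# Read the hostname once, then rewrite the whole DHCP_HOSTNAME line with
# the literal, quoted name.
NAME=$(cat /proc/sys/kernel/hostname)
sed -i "s/^DHCP_HOSTNAME=.*/DHCP_HOSTNAME=\"$NAME\"/" "$IFCFG"
line=$(grep '^DHCP_HOSTNAME=' "$IFCFG")
echo "$line"
rm -f "$IFCFG"
```

Running the equivalent inside the container (against the real path) leaves a static DHCP_HOSTNAME="..." line, which is exactly the manual fix described above.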

@stgraber could you tell me what to modify in my installation so that DHCP_HOSTNAME is generated the same way as HOSTNAME, or might it be advantageous to modify the CentOS 8 container image in general?

NB: In CentOS 7 the substitution worked fine, so this might be specific to version 8.

On the command line a cat /proc/... works fine, but not inside ifcfg-eth0.

@monstermunchkin can you take a look at this, since you'll be poking at CentOS 8 anyway for VM support? Looks like we can't call shell commands from ifcfg files anymore, so we need an alternative.

I just noticed that it is already fixed in the CentOS 8 images. Thanks a lot.