Issue with DNS in container

I’m having an issue with DNS lookups in my LXC container and I can’t work out what’s up.

I have set my container up to use an existing bridge on my system (br0), and it seems to work fine for networking; when the container starts up it gets an IP address on my local network from DHCP. I can ping any other machine on the network, and I can ping the container from any other machine on the network. I can also ping any internal or external IP from the container, e.g. one of Google’s:

ping 172.253.122.139
PING 172.253.122.139 (172.253.122.139) 56(84) bytes of data.
64 bytes from 172.253.122.139: icmp_seq=1 ttl=104 time=55.7 ms

but no matter what I put in /etc/resolv.conf, I cannot get it to look up an address via the nameserver, e.g.:

# nslookup google.com
;; communications error to 127.0.0.53#53: timed out
;; communications error to 127.0.0.53#53: timed out
;; Got SERVFAIL reply from 127.0.0.53
Server:		127.0.0.53
Address:	127.0.0.53#53

** server can't find google.com: SERVFAIL

or, using Google’s nameserver (8.8.8.8):

# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=34.1 ms
# nslookup google.com
;; communications error to 8.8.8.8#53: connection refused
;; communications error to 8.8.8.8#53: connection refused
;; communications error to 8.8.8.8#53: connection refused
;; no servers could be reached
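
For reference, the 127.0.0.53 address in the first run is just systemd-resolved’s local stub listener inside the container; the second run points straight at Google instead, which presumably means nothing more than a one-line /etc/resolv.conf of this form:

nameserver 8.8.8.8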

So I either get a timeout or a connection refused error, yet the main host has no issue. Both my host and the container are running Linux Mint 20.

Am I missing something here? Seems like I can’t get to port 53, yet the main host has no issue and I’m not running any firewall on either machine. Any ideas?

The problem appears to be a refused connection to port 53. On the host machine:

# nc -vz 8.8.8.8 53
Connection to 8.8.8.8 53 port [tcp/domain] succeeded!

In the container:

nc -vz 8.8.8.8 53
nc: connect to 8.8.8.8 port 53 (tcp) failed: Connection refused
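
(As an aside, nc -vz only tests TCP; DNS queries normally go out over UDP first. A rough UDP probe would be something like the line below, though with UDP a “succeeded” result only means that no ICMP refusal came back, since there is no real connection to establish.)

nc -vzu 8.8.8.8 53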

There is no firewall enabled on the host:

# ufw status
Status: inactive

and ufw is not even installed in the container. What am I missing here?

Is that container a DNS server?

If it is, are forwarders configured? In BIND it looks like this (it goes in the options section):

    forwarders {
        8.8.8.8;
        8.8.4.4;
    };

No, the container isn’t a DNS server. At this point, it’s just a barebones copy of Mint Uma that I created using:

# lxc-create --name test2 --template download
.
.
---

Distribution: 
mint
Release: 
uma
Architecture: 
amd64
.
.
---
You just created an Mint uma amd64 (20250422_08:51) container.

Nothing special here, except that it is using an existing bridge (br0) which was already set up on the host (I also use it for my KVM VMs). All the networking and ports seem to work fine except 53, e.g.:

root@test2:/etc/default# nc -vz 142.251.163.100 80
Connection to 142.251.163.100 80 port [tcp/http] succeeded!

If I try any IP on any other port, it works completely fine, but port 53 just doesn’t work to anywhere, not even my local router. From my container:

root@test2:/etc/default# nc -vz 192.168.86.1 53
nc: connect to 192.168.86.1 port 53 (tcp) failed: Connection refused

but from the main host itself:

# nc -vz 192.168.86.1 53
Connection to 192.168.86.1 53 port [tcp/domain] succeeded!

Something is stopping me from getting outside the container on port 53, yet every other port seems to be fine, and if I go to the main host, 53 works fine from there. No firewall on either machine.
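
For reference, this kind of bridged setup usually comes down to just a few lines in the container config (here that would be something like /var/lib/lxc/test2/config); the entries below are the generic form rather than an exact copy of my config:

lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
# the MAC is normally filled in from a template like this
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx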

I don’t know how to solve your problem, but it looks like your container’s name server is not working. So, please find out which name service application you are using (resolvconf, openresolv, or systemd-resolved) and restart or reconfigure it.

Does LXC even need a nameserver in this case? Remember, the container has direct access to the outside world via the bridge: it gets an IP on my local network from my router’s DHCP server when it starts up, but it cannot reach any external nameserver for name resolution.

In case it helps, from my container:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if2958: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:77:a3:aa brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.86.56/24 metric 100 brd 192.168.86.255 scope global dynamic eth0
       valid_lft 64219sec preferred_lft 64219sec
    inet6 fdb7:15eb:ab6b:169d:216:3eff:fe77:a3aa/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 1663sec preferred_lft 1663sec
    inet6 fe80::216:3eff:fe77:a3aa/64 scope link 
       valid_lft forever preferred_lft forever

It has IP address 192.168.86.56 (dynamically assigned).

From my main machine:

3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 88:51:fb:82:cb:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.86.99/24 brd 192.168.86.255 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet6 fdb7:15eb:ab6b:169d:4261:8c7d:8bdd:40ce/64 scope global temporary dynamic 
       valid_lft 1741sec preferred_lft 1741sec
    inet6 fdb7:15eb:ab6b:169d:794:4f1a:94f:bb6f/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:bcec:6640:a64a:2c17/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:729a:40ad:23ef:7a20/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:2ed4:e217:d1a:32a7/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:ae5b:1d59:5015:64e8/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:680d:86cb:49:3dae/64 scope global temporary deprecated dynamic 
       valid_lft 1741sec preferred_lft 0sec
    inet6 fdb7:15eb:ab6b:169d:8a51:fbff:fe82:cb83/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 1741sec preferred_lft 1741sec
    inet6 fe80::8a51:fbff:fe82:cb83/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

The bridge gets 192.168.86.99 (statically assigned because it’s my server machine).

Also:

2958: vethuzA86c@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether fe:11:d9:8e:4f:40 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::fc11:d9ff:fe8e:4f40/64 scope link 
       valid_lft forever preferred_lft forever

This looks like the interface for that container (2958) with master br0; all seems to be correct.
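
A quick sanity check that the veth really is enslaved to br0 is to list the bridge ports on the host with either of these (brctl needs bridge-utils installed); it only confirms membership, it wouldn’t explain the port 53 behaviour:

bridge link show
brctl show br0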

Anyone spot any issue? I’m wondering if it has something to do with the bridge configuration.

I was actually talking about “DNS servers”; I used the wrong word, my bad :sweat:
https://wiki.archlinux.org/title/Domain_name_resolution#DNS_servers

I saw this error; I guess you use systemd-resolved. I think it never forwards your container’s DNS queries.

Can you post the results of systemctl start systemd-resolved, resolvectl status, and resolvectl query discuss.linuxcontainers.org?
Or, if you don’t use systemd-resolved in the container, please tell us which DNS server you use.

So, I edited /etc/resolv.conf in the container to use 8.8.8.8 (Google’s nameserver), and:

# systemctl start systemd-resolved
root@test2:/etc/default# resolvectl status
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 

root@test2:/etc/default# resolvectl query discuss.linuxcontainers.org
discuss.linuxcontainers.org: resolve call failed: Invalid argument

The last command fails because it can’t resolve the hostname; the service startup itself works fine.

However, if I run the exact same command on my main host:

# resolvectl query discuss.linuxcontainers.org
discuss.linuxcontainers.org: 45.45.148.7       -- link: br0
                             2602:fc62:a:1::7  -- link: br0

-- Information acquired via protocol DNS in 74.2ms.
-- Data is authenticated: no

I didn’t catch all of the output the first time; here it is in full:

root@test2:/etc/default# resolvectl status
Global
       LLMNR setting: no                  
MulticastDNS setting: no                  
  DNSOverTLS setting: no                  
      DNSSEC setting: no                  
    DNSSEC supported: no                  
  Current DNS Server: 8.8.8.8             
         DNS Servers: 8.8.8.8             
          DNSSEC NTA: 10.in-addr.arpa     
                      16.172.in-addr.arpa 
                      168.192.in-addr.arpa
                      17.172.in-addr.arpa 
                      18.172.in-addr.arpa 
                      19.172.in-addr.arpa 
                      20.172.in-addr.arpa 
                      21.172.in-addr.arpa 
                      22.172.in-addr.arpa 
                      23.172.in-addr.arpa 
                      24.172.in-addr.arpa 
                      25.172.in-addr.arpa 
                      26.172.in-addr.arpa 
                      27.172.in-addr.arpa 
                      28.172.in-addr.arpa 
                      29.172.in-addr.arpa 
                      30.172.in-addr.arpa 
                      31.172.in-addr.arpa 
                      corp                
                      d.f.ip6.arpa        
                      home                
                      internal            
                      intranet            
                      lan                 
                      local               
                      private             
                      test                

Link 2 (eth0)
      Current Scopes: DNS         
DefaultRoute setting: yes         
       LLMNR setting: yes         
MulticastDNS setting: no          
  DNSOverTLS setting: no          
      DNSSEC setting: no          
    DNSSEC supported: no          
  Current DNS Server: 192.168.86.1
         DNS Servers: 192.168.86.1
          DNS Domain: lan

Yeah, your container’s systemd-resolved doesn’t work.
First, test your network: dig @8.8.8.8 discuss.linuxcontainers.org
If that succeeds, then it’s systemd-resolved’s fault; try editing /etc/systemd/resolved.conf (see resolved.conf(5)):

DNS=8.8.8.8 2001:4860:4860::8888
FallbackDNS=8.8.4.4 2001:4860:4860::8844
MulticastDNS=yes
DNSSEC=no

and then systemctl restart systemd-resolved.service.
If the dig fails, maybe your network is down; post the result of ip r.

You need to check the nftables filters:

nft list ruleset

OK, and here’s the issue: port 53 is not working. So:

root@test2:/etc/default# dig @8.8.8.8 discuss.linuxcontainers.org
;; communications error to 8.8.8.8#53: connection refused
;; communications error to 8.8.8.8#53: connection refused
;; communications error to 8.8.8.8#53: connection refused

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @8.8.8.8 discuss.linuxcontainers.org
; (1 server found)
;; global options: +cmd
;; no servers could be reached

Same connection refused error on port 53; this is the root of the issue.

Also:

root@test2:/etc/default# systemctl restart systemd-resolved.service
root@test2:/etc/default# ip r
default via 192.168.86.1 dev eth0 proto dhcp src 192.168.86.56 metric 100 
192.168.86.0/24 dev eth0 proto kernel scope link src 192.168.86.56 metric 100 
192.168.86.1 dev eth0 proto dhcp scope link src 192.168.86.56 metric 100

It works without any issue. There is nothing wrong with the network in general; something is just killing packets on port 53.

@iotapi322 The container doesn’t have nft installed, and I can’t install it with apt because it can’t resolve any addresses. This is the main reason I need this to work; right now, I can’t install any software because there is no name resolution.
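
(As a stop-gap, assuming the immediate goal is just to get apt working without DNS, the repository hostnames could be pinned in the container’s /etc/hosts. The hostname below is only an example; the address has to be looked up on the host, where resolution still works:)

getent hosts archive.ubuntu.com

and then a matching line goes into the container’s /etc/hosts, e.g.:

<address-from-the-host>   archive.ubuntu.com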

Just in case the output on the main host is useful:

# nft list ruleset
table bridge filter {
	chain INPUT {
		type filter hook input priority filter; policy accept;
	}

	chain FORWARD {
		type filter hook forward priority filter; policy accept;
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
	}
}

Well, your network is fine, the DNS server is working, 8.8.8.8 is up, and the container gets a nameserver from the router via DHCP. All I can think of is that either your container’s port 53 is being blocked or multiple DNS servers are conflicting. In your case it’s probably something blocking 53; find whatever is blocking it and unblock it.
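
Just in case: rules added through the legacy iptables backend (or ebtables) don’t show up in nft list ruleset, so they may be worth a separate look, e.g.:

iptables -S
iptables -t nat -S
ebtables -L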

Yup, that’s what I’ve been trying to do. Since there is no firewall, it’s not that. From what I’ve read about networking, “connection refused” doesn’t mean the packets are silently blocked; they got out, but something actively refused them instead of answering.

Since there is no firewall, the only logical answer is that somehow the bridge is blocking it, or the LXC configuration itself is either eating the packets or causing conflicts on that port. I don’t think it’s the bridge since I use the same bridge interface for KVM VMs and they all have no issue with the nameserver.

My current suspect is some LXC configuration, but I don’t know enough about LXC and how it does networking to work it out.
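
One thing I can still try, to pin down where the refusal comes from, is watching port 53 traffic on the host bridge while the container retries a lookup; something along these lines on the host:

tcpdump -ni br0 'port 53 or icmp'

If the container’s query never shows up on br0 at all, then the refusal is being generated before the traffic even reaches the bridge.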

I found 3 related posts:

1. permission error
2. iptables SNAT
3. systemd-resolved conflict with ifupdown

Linux Mint is based on Ubuntu, so I guess the third post may be your solution.