Yeah, if I set it to 1.1.1.1 it gets me outside connectivity to the WAN, but then I'm back to where I started: Alpine and OCI containers can't ping by hostname. Getting that working would make my life a lot easier, because I want to create some containers that connect to guacamole (guacd) via hostname.
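For what it's worth, the end state I'm after would look roughly like this once hostname resolution works (the docker.io:guacamole/guacd image path, the network name ovn, and the instance names are placeholders for illustration):

# launch guacd on the same OVN network as the desktops it will proxy
incus launch docker.io:guacamole/guacd guacd --network ovn

# guacamole would then point at the desktop containers by name,
# e.g. hostname desktop1.incus, instead of a hard-coded IP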
OK, so without changing anything on the uplink network, I get this on my Alpine non-OCI container:
Global
       Protocols: +LLMNR +mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: stub

Link 40 (eth0)
    Current Scopes: DNS LLMNR/IPv4 LLMNR/IPv6
         Protocols: +DefaultRoute +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: fd42:a685:fe63:91ea::1
       DNS Servers: 4.0.4.1 fd42:a685:fe63:91ea::1
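A quick sanity check from inside the container is to query both the bare name and the FQDN (web1 is a placeholder peer instance; nslookup and getent are the busybox/musl versions normally present on Alpine):

# bare instance name vs. fully-qualified name
nslookup web1
nslookup web1.incus
getent hosts web1.incus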
That looks correct to me. In case it's of interest, here are some more details about OVN and DNS:
As you can see, by default OVN will intercept DNS requests and try to resolve local names. I'm not sure if it resolves bare instance names by itself or requires .incus at the end; I would try both.
Alternatively, create your own DNS server, as mentioned and linked above.
From the output above it doesn't look like a bare instance name lookup will work, as there is no default search entry in the resolvectl output. Try adding .incus at the end and it should resolve.
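If the FQDN resolves but the bare name doesn't, a search entry inside the container should paper over it. A minimal sketch, assuming the network uses the default incus DNS domain and that /etc/resolv.conf is a plain file rather than a resolved-managed symlink:

# let bare instance names pick up the .incus suffix automatically
echo "search incus" >> /etc/resolv.conf
ping -c 1 web1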
It just won't resolve. I'm beginning to think this is an Alpine Linux issue.
Can anyone with a cluster try this out?
The image is:
incus launch images:641534674f48
Another interesting observation: I can't ping by hostname from the OCI container or the Alpine container, and I can't ping the Alpine container by hostname from another container where hostname pings otherwise work fine. I'm starting to believe this is an issue with Alpine.
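One way to narrow that down is to run the lookups from both directions (alpine1 and debian1 are hypothetical instance names):

# does the Debian container resolve the Alpine one?
incus exec debian1 -- getent hosts alpine1.incus
# does the Alpine container resolve the Debian one?
incus exec alpine1 -- nslookup debian1.incus

If only the lookups done from inside Alpine fail, its busybox/musl resolver is the suspect; if lookups for the Alpine container fail from everywhere, the record itself is likely missing on the network side.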
Here’s what I’m seeing with this issue.
The uplink network is configured; assume all the cluster members have a corresponding OVN network, and that the uplink's physical parent is a VLAN adapter on each host.
$ incus network show ovn-uplink
config:
  dns.nameservers: 192.168.1.254
  ipv4.gateway: 192.168.1.254/24
  ipv4.ovn.ranges: 192.168.1.1-192.168.1.126
  volatile.last_state.created: "false"
description: ""
name: ovn-uplink
type: physical
used_by: []
managed: true
status: Created
locations:
- node1
- node2
- node3
project: default
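For reference, an uplink like this can be created on a cluster roughly as follows (the parent interface eth0.100 is an assumption; each member gets a pending entry before the final create instantiates the network):

$ incus network create ovn-uplink --type=physical parent=eth0.100 --target=node1
$ incus network create ovn-uplink --type=physical parent=eth0.100 --target=node2
$ incus network create ovn-uplink --type=physical parent=eth0.100 --target=node3
$ incus network create ovn-uplink --type=physical \
    dns.nameservers=192.168.1.254 \
    ipv4.gateway=192.168.1.254/24 \
    ipv4.ovn.ranges=192.168.1.1-192.168.1.126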
Create the network and launch the containers.
$ incus network create ovn --type ovn
Network ovn created
$ incus launch docker.io:library/alpine c1 --network ovn -d root,size=1GiB
Launching c1
$ incus launch images:debian/12/cloud c2 --network ovn -d root,size=3GiB
Launching c2
$ incus launch docker.io:library/alpine c3 --network ovn -d root,size=1GiB
Launching c3
$ incus network show ovn
config:
  bridge.mtu: "1442"
  ipv4.address: 10.174.34.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:c762:c20d:fd8f::1/64
  ipv6.nat: "true"
  network: ovn-uplink
  volatile.network.ipv4.address: 192.168.1.1
description: ""
name: ovn
type: ovn
used_by:
- /1.0/instances/c1
- /1.0/instances/c2
- /1.0/instances/c3
managed: true
status: Created
locations:
- node2
- node3
- node1
project: default
DNS interception works okay once the containers are launched.
$ incus shell c1
c1:~# ping c2.incus
PING c2.incus (10.174.34.4): 56 data bytes
64 bytes from 10.174.34.4: seq=0 ttl=64 time=0.388 ms
64 bytes from 10.174.34.4: seq=1 ttl=64 time=0.131 ms
^C
--- c2.incus ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.131/0.259/0.388 ms
c1:~# nslookup c3.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
Non-authoritative answer:
Name: c3.incus
Address: 10.174.34.3
Non-authoritative answer:
Name: c3.incus
Address: fd42:c762:c20d:fd8f:1266:6aff:fea3:63cb
$ incus shell c2
root@c2:~# apt install dnsutils
[...]
Setting up dnsutils (1:9.18.33-1~deb12u2) ...
Processing triggers for libc-bin (2.36-9+deb12u10) ...
root@c2:~# nslookup c3.incus
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: c3.incus
Address: 10.174.34.3
Name: c3.incus
Address: fd42:c762:c20d:fd8f:1266:6aff:fea3:63cb
root@c2:~# nslookup c1.incus
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: c1.incus
Address: 10.174.34.2
Name: c1.incus
Address: fd42:c762:c20d:fd8f:1266:6aff:fe5d:bdb
$ incus shell c3
c3:~# nslookup c1.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
Non-authoritative answer:
Name: c1.incus
Address: 10.174.34.2
Non-authoritative answer:
Name: c1.incus
Address: fd42:c762:c20d:fd8f:1266:6aff:fe5d:bdb
c3:~# nslookup c2.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
Non-authoritative answer:
Name: c2.incus
Address: 10.174.34.4
Non-authoritative answer:
Name: c2.incus
Address: fd42:c762:c20d:fd8f:1266:6aff:feea:7520
Now restart the containers and check DNS again.
$ incus restart c1 c2 c3
$ incus shell c1
c1:~# nslookup c2.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
** server can't find c2.incus: NXDOMAIN
** server can't find c2.incus: NXDOMAIN
c1:~# nslookup c3.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
** server can't find c3.incus: NXDOMAIN
** server can't find c3.incus: NXDOMAIN
$ incus shell c2
root@c2:~# nslookup c3.incus
Server: 127.0.0.53
Address: 127.0.0.53#53
** server can't find c3.incus: NXDOMAIN
root@c2:~# nslookup c1.incus
Server: 127.0.0.53
Address: 127.0.0.53#53
** server can't find c1.incus: NXDOMAIN
$ incus shell c3
c3:~# nslookup c1.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
** server can't find c1.incus: NXDOMAIN
** server can't find c1.incus: NXDOMAIN
c3:~# nslookup c2.incus
Server: 192.168.1.254
Address: 192.168.1.254:53
** server can't find c2.incus: NXDOMAIN
** server can't find c2.incus: NXDOMAIN
After this I can delete the ovn network and re-create it, and it behaves just the same as above: DNS interception works after the first launch but eventually fails once the containers have been restarted. Lookups for other domains through the uplink DNS have always worked fine, so it seems to be just the interception. I have also deleted the uplink network and started from scratch, but with the same result.
This is running on Debian 12 with the following OVS/OVN versions:
ovn-nbctl 23.03.1
Open vSwitch Library 3.1.0
DB Schema 7.0.0
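One way to narrow this down further would be to compare the OVN northbound DNS records before and after the restart (run on a cluster member with access to the northbound DB):

# DNS records OVN uses for interception
ovn-nbctl list DNS
# logical switches they should be attached to
ovn-nbctl ls-list

If the records, or their association with the logical switch, disappear once the instances restart, that would point at the record management on the Incus/OVN side rather than at the guests.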
I'm surprised you got it to even work on launch; I can't get it to work at all. I wouldn't even care about a restart, because I can make them ephemeral. It's just that I can never get it to work.
@localgrp are you thinking this is a networking issue on the OVN side?
@stgraber any thoughts on this?
I am able to confirm this as well. Is this primarily an OCI issue?
Not if the DNS interception is the issue, since that's provided by OVN. I haven't had time to try and figure out more.
@mtheimpaler what OS and OVN version are you running?
I'm using Debian 12.11, with OVN version 23.03.1 and OVS 3.1.0.