DNS not created for containers created in user projects

Hi, a few weeks ago I posted questions related to a multi-user setup: LXD multi-user setup - Error: Failed instance creation: not authorized

Now I have a configured server that works for me, but I have one issue: DNS records are not created for containers that users create in their own projects.

This is the config:

$ lxc config show
config:
  core.dns_address: 192.168.11.250

$ lxc network zone show lxd2.private
description: ""
config:
  dns.nameservers: ns1.lxd2.private
  peers.serv1.address: 192.168.10.1
name: lxd2.private
used_by:
- /1.0/networks/lxdbr0

$ lxc network show lxdbr0
config:
  dns.zone.forward: lxd2.private
  dns.zone.reverse.ipv4: 11.168.192.in-addr.arpa
  ipv4.address: 192.168.11.252/23
  ipv4.dhcp.ranges: 192.168.11.101-192.168.11.199
  ipv4.nat: "false"
  ipv6.address: none
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/test0
- /1.0/instances/test1
- /1.0/instances/west1?project=user-1180
- /1.0/instances/west2?project=user-1180
- /1.0/instances/west3?project=user-1180
- /1.0/profiles/default
- /1.0/profiles/default?project=user-1180
- /1.0/profiles/default?project=user-2208
managed: true
status: Created
locations:
- none

On serv1, I can check zone transfers, and I can dig/ping my test0 and test1 containers:

$ dig test0.lxd2.private @192.168.10.1 +short
192.168.11.140

$ dig west3.lxd2.private @192.168.10.1 +short | wc -l
0

$ dig west3.lxd2.private @192.168.10.1
...
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;west3.lxd2.private.            IN      A
...
$ dig axfr lxd2.private @192.168.11.250

; <<>> DiG 9.10.3-P4-Debian <<>> axfr lxd2.private @192.168.11.250
;; global options: +cmd
lxd2.private.           3600    IN      SOA     lxd2.private. ns1.lxd2.private. 1668697290 120 60 86400 30
lxd2.private.           300     IN      NS      ns1.lxd2.private.
test1.lxd2.private.    300     IN      A       192.168.11.135
test0.lxd2.private.    300     IN      A       192.168.11.140
lxdbr0.gw.lxd2.private. 300     IN      A       192.168.11.250
lxd2.private.           3600    IN      SOA     lxd2.private. ns1.lxd2.private. 1668697290 120 60 86400 30
;; Query time: 9 msec
;; SERVER: 192.168.11.250#53(192.168.11.250)
;; WHEN: Thu Nov 17 15:01:30 GMT 2022
;; XFR size: 6 records (messages 1, bytes 332)

Please help, thank you.

Yes, this does appear to be an issue. Investigating…

@stgraber when you added the DNS zones feature, did you specifically want the DNS zone record to be created in the effective network project?

This means that creating a zone in a non-default project that doesn't have features.networks=true will end up creating the zone in the default project.

This then has the side effect that when querying for the leases on the assigned networks for that zone, the results end up being filtered down to the zone's project (always default for non-OVN networks).
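For example (a sketch of the behaviour described, with a hypothetical zone name):

lxc network zone create demo.example --project user-1180  # project without features.networks=true
lxc network zone list --project default                   # demo.example appears here instead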

I'm not sure what your thinking was around zones, projects and non-OVN networks.

Did you want each project to define its own zone, so that it only published leases for instances in the zone's project? Or did you expect a single zone on the default project to publish leases for all instances on those networks for all projects? The latter option introduces the possibility of DNS name conflicts across projects.

I do not mind where the records are created, as long as they are created. I just want my users to have DNS records created automatically for their containers.

I have tried setting features.networks: "false", but it did not help, and I cannot revert it to "true" anymore, as the "project is not empty".

I have also tried running lxc project set user-2208 restricted.networks.zones="lxd2.private", and that did not help either.

They will have DNS records created automatically for their containers on the lxdbr0 DNS server.

The network zones feature is for publishing those records to a different DNS server via AXFR.

I was monitoring the zone with $ watch dig axfr lxd2.private @192.168.11.250, and the only DNS records I see are from containers in the default project. The records appear quite quickly, so it is a fairly good way to monitor.

What is your use case for AXFR vs using the managed DNS server on lxdbr0?

As far as I understand this article

LXD only does zone transfers, so another DNS server must be used to serve DNS records to users (I have bind9 installed so not a problem).
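For reference, the bind9 side of such a zone transfer might look roughly like this (a sketch, assuming the core.dns_address of 192.168.11.250 shown above; older BIND releases spell these options slave/masters instead of secondary/primaries):

zone "lxd2.private" {
    type secondary;
    primaries { 192.168.11.250; };  # LXD's core.dns_address
    file "db.lxd2.private";
};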

That's not a use case :wink:

What are you trying to do?

Before, when a developer needed a dev container, I had to configure the IP address on the container manually and create a DNS record on our DNS server manually as well.

Now devs have their own projects, IP addresses are assigned by LXD automatically, and containers are reachable by IP, but DNS records are only created in the default project, not in the user (my developers') projects.

Edit: DNS records are only created for containers created in the default project (me, the admin), not for containers created in user projects (the developers).

I'm still not clear, I'm afraid.

LXD by default (without the use of the Network Zones feature) creates DNS records for all instances in all projects on the managed DNS server (the dnsmasq process running on the LXD host machine).

This allows DNS resolution between instances on the same LXD machine or LXD cluster.
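For example, you can query the bridge's dnsmasq directly (a sketch, assuming the default dns.domain of lxd and your lxdbr0 address of 192.168.11.252):

dig @192.168.11.252 test0.lxd +short
# expected to return 192.168.11.140, matching your earlier zone output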

Is this sufficient for your use case?

Or do you need to export those DNS records to an external DNS server (this is what the Network Zones feature is for) for devices external to the LXD server to use?

This is what I'm getting at by asking for a use-case. I was expecting you to say something like:

"We want our instance records to be published to an external server so our users can use their names to access them directly because we are using directly routable addresses and not using SNAT."

BTW, I do think there is a bug in LXD with the Network Zones feature, but I am not sure which way we should fix it (hence the questions to @stgraber). Although I'm not clear whether you actually need the Network Zones feature in this case; I think a network zone per project would make the most sense.

This is correct:

"We want our instance records to be published to an external server so our users can use their names to access them directly because we are using directly routable addresses and not using SNAT."

I was not aware that LXD creates those records by default. We use LXD hosts as standalone virtualisation environments and access them from other places.

What I am trying to achieve is self-serve infrastructure for developers, where they can log into the LXD server, have their own workspace where they only see their own containers, create containers, and have IP addresses and DNS records created automatically. For example, they can set up an Apache web server there and share the link with other users, e.g. http://something123.lxd2.private.
This should be reachable internally; we have a separate DNS server internally.

Since LXD has dnsmasq internally, a simple setup would have been sufficient for me. It all works for me so far, as long as I stay in my default project as a member of the local "lxd" group.

It all works great for my developers thanks to the multi-user setup, except the last bit does not work: containers created in user projects do not have DNS records created in LXD, and are therefore not exported.

Example:
Me, a member of the local lxd group, on the LXD host:

$ lxc project ls
# default (current)

$ lxc launch images:debian/bullseye/amd64 hello-admin1

On another computer in my local network:

$ ping hello-admin1.lxd2.private
PING hello-admin1.lxd2.private (192.168.11.114) 56(84) bytes of data.
64 bytes from 192.168.11.114 (192.168.11.114): icmp_seq=1 ttl=63 time=0.666 ms

$ dig hello-admin1.lxd2.private @192.168.10.1 +short
192.168.11.114
# 192.168.10.1 is our DNS server

Logged in as a dev:

$ lxc project ls
# user-2208 (current)
$ lxc launch images:debian/bullseye/amd64 hello-dev1

On another computer in my local network:

$ ping hello-dev1.lxd2.private
ping: hello-dev1.lxd2.private: Name or service not known

$ dig hello-dev1.lxd2.private @192.168.10.1 +short

$ dig hello-dev1.lxd2.private @192.168.10.1 +short | wc -l
0

So, since I see there is functionality for this in LXD, I would like to keep it simple and configure it for my use case.

But if it is not possible to make it work for multi-user projects, and if I am misusing LXD in this way, what are my options?

  1. Do not use this setup and create all containers in the default project.
  2. Look into OVN networks?

Right, got it, I understand now. It all makes sense.

In my view there is a bug in the Network Zones feature, in that it excludes instances that are connected to the network but are in a different project from the one where the network itself is defined (in this case a non-default project).

You could use OVN networks, as these networks can be defined inside each project, but that does bring with it the complexity of OVN.
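A rough sketch of what that could look like (hypothetical names throughout; assumes OVN is installed and an uplink network, here called UPLINK, already exists):

lxc project create devproj -c features.networks=true
lxc network create ovn1 --type=ovn --project devproj network=UPLINK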

The other possibility is that we modify LXD to allow network zones to be defined inside a project, even if the project doesn't have its own networks. This would then allow multiple zones to be defined, one per project, each exporting only the instance records from within that project.

The other alternative is to have the zone export all instances connected to the network from all projects. This would introduce the possibility of naming conflicts, although we've taken steps to prevent this via:

Discussion moved to Feature request: create DNS records for non-default projects (i.e. in multi user setup) · Issue #11145 · lxc/lxd · GitHub

Hi @tomp

Thank you for the hard work, really appreciated. This will be so good for our team.

I have updated my LXD to latest/edge, which now includes the commits you pushed. However, it is still not clear what I should do on my end to enable this. I have re-created the user projects and fiddled with the features.networks and features.networks.zones parameters, but no luck.

Should I create subdomains for user projects for this to work?

Thanks.

Thanks!

Here's an example setup sharing lxdbr0 (which has a subnet of 10.165.233.0/24) with multiple zones:

Create two projects and copy the default profile into them:

lxc project create p1
lxc project create p2
lxc profile show default | lxc profile edit default --project p1
lxc profile show default | lxc profile edit default --project p2

Launch instances in the default, p1 and p2 projects:

lxc launch images:alpine/3.16 c1 --project default
lxc launch images:alpine/3.16 c1p1 --project p1
lxc launch images:alpine/3.16 c1p2 --project p2

Enable network zones on the projects:

lxc project set p1 features.networks.zones=true
lxc project set p2 features.networks.zones=true

Create zones in the default, p1 and p2 projects:

lxc network zone create lxd.home --project=default \
    dns.nameservers=ns1.lxd.home \
    peers.test.address=127.0.0.1 

lxc network zone create 233.165.10.in-addr.arpa --project=default \
    dns.nameservers=ns1.233.165.10.in-addr.arpa \
    peers.test.address=127.0.0.1

lxc network zone create p1.lxd.home --project=p1 \
    dns.nameservers=ns1.lxd.home \
    peers.test.address=127.0.0.1 

lxc network zone create p2.lxd.home --project=p2 \
    dns.nameservers=ns1.lxd.home \
    peers.test.address=127.0.0.1

Assign the zones to the lxdbr0 network:

lxc network set lxdbr0 dns.zone.forward lxd.home,p1.lxd.home,p2.lxd.home
lxc network set lxdbr0 dns.zone.reverse.ipv4 233.165.10.in-addr.arpa

Now you can see the project zone views in action:

First, let's have a look at the instance IPs:

lxc list --all-projects
+---------+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| PROJECT | NAME |  STATE  |         IPV4          |                     IPV6                      |   TYPE    | SNAPSHOTS |
+---------+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| default | c1   | RUNNING | 10.165.233.117 (eth0) | fd42:5ba3:9d44:c230:216:3eff:fe31:649b (eth0) | CONTAINER | 0         |
+---------+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| p1      | c1p1 | RUNNING | 10.165.233.104 (eth0) | fd42:5ba3:9d44:c230:216:3eff:fe8c:1fb9 (eth0) | CONTAINER | 0         |
+---------+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+
| p2      | c1p2 | RUNNING | 10.165.233.99 (eth0)  | fd42:5ba3:9d44:c230:216:3eff:fe0c:a49c (eth0) | CONTAINER | 0         |
+---------+------+---------+-----------------------+-----------------------------------------------+-----------+-----------+

And let's look at the associated leases on the lxdbr0 network for each project (as this is where the zone content comes from):

lxc network list-leases lxdbr0 --project=default
+-----------+-------------------+----------------------------------------+---------+
| HOSTNAME  |    MAC ADDRESS    |               IP ADDRESS               |  TYPE   |
+-----------+-------------------+----------------------------------------+---------+
| c1        | 00:16:3e:31:64:9b | 10.165.233.117                         | DYNAMIC |
+-----------+-------------------+----------------------------------------+---------+
| c1        | 00:16:3e:31:64:9b | fd42:5ba3:9d44:c230:216:3eff:fe31:649b | DYNAMIC |
+-----------+-------------------+----------------------------------------+---------+
| lxdbr0.gw |                   | 10.165.233.1                           | GATEWAY |
+-----------+-------------------+----------------------------------------+---------+
| lxdbr0.gw |                   | fd42:5ba3:9d44:c230::1                 | GATEWAY |
+-----------+-------------------+----------------------------------------+---------+
lxc network list-leases lxdbr0 --project=p1
+----------+-------------------+----------------------------------------+---------+
| HOSTNAME |    MAC ADDRESS    |               IP ADDRESS               |  TYPE   |
+----------+-------------------+----------------------------------------+---------+
| c1p1     | 00:16:3e:8c:1f:b9 | 10.165.233.104                         | DYNAMIC |
+----------+-------------------+----------------------------------------+---------+
| c1p1     | 00:16:3e:8c:1f:b9 | fd42:5ba3:9d44:c230:216:3eff:fe8c:1fb9 | DYNAMIC |
+----------+-------------------+----------------------------------------+---------+
lxc network list-leases lxdbr0 --project=p2
+----------+-------------------+----------------------------------------+---------+
| HOSTNAME |    MAC ADDRESS    |               IP ADDRESS               |  TYPE   |
+----------+-------------------+----------------------------------------+---------+
| c1p2     | 00:16:3e:0c:a4:9c | 10.165.233.99                          | DYNAMIC |
+----------+-------------------+----------------------------------------+---------+
| c1p2     | 00:16:3e:0c:a4:9c | fd42:5ba3:9d44:c230:216:3eff:fe0c:a49c | DYNAMIC |
+----------+-------------------+----------------------------------------+---------+

Now let's look at the forward zone for lxd.home (which belongs to the default project) to get addresses in the default project:

dig @127.0.0.1 axfr lxd.home

; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> @127.0.0.1 axfr lxd.home
; (1 server found)
;; global options: +cmd
lxd.home.		3600	IN	SOA	lxd.home. ns1.lxd.home. 1669808419 120 60 86400 30
lxd.home.		300	IN	NS	ns1.lxd.home.
lxdbr0.gw.lxd.home.	300	IN	A	10.165.233.1
lxdbr0.gw.lxd.home.	300	IN	AAAA	fd42:5ba3:9d44:c230::1
c1.lxd.home.		300	IN	AAAA	fd42:5ba3:9d44:c230:216:3eff:fe31:649b
c1.lxd.home.		300	IN	A	10.165.233.117
lxd.home.		3600	IN	SOA	lxd.home. ns1.lxd.home. 1669808419 120 60 86400 30

Next, the forward zone for p1.lxd.home (which belongs to the p1 project) to get addresses in the p1 project:

dig @127.0.0.1 axfr p1.lxd.home

; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> @127.0.0.1 axfr p1.lxd.home
; (1 server found)
;; global options: +cmd
p1.lxd.home.		3600	IN	SOA	p1.lxd.home. ns1.lxd.home. 1669808525 120 60 86400 30
p1.lxd.home.		300	IN	NS	ns1.lxd.home.
c1p1.p1.lxd.home.	300	IN	AAAA	fd42:5ba3:9d44:c230:216:3eff:fe8c:1fb9
c1p1.p1.lxd.home.	300	IN	A	10.165.233.104
p1.lxd.home.		3600	IN	SOA	p1.lxd.home. ns1.lxd.home. 1669808525 120 60 86400 30

Next, the forward zone for p2.lxd.home (which belongs to the p2 project) to get addresses in the p2 project:

dig @127.0.0.1 axfr p2.lxd.home

; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> @127.0.0.1 axfr p2.lxd.home
; (1 server found)
;; global options: +cmd
p2.lxd.home.		3600	IN	SOA	p2.lxd.home. ns1.lxd.home. 1669808559 120 60 86400 30
p2.lxd.home.		300	IN	NS	ns1.lxd.home.
c1p2.p2.lxd.home.	300	IN	AAAA	fd42:5ba3:9d44:c230:216:3eff:fe0c:a49c
c1p2.p2.lxd.home.	300	IN	A	10.165.233.99
p2.lxd.home.		3600	IN	SOA	p2.lxd.home. ns1.lxd.home. 1669808559 120 60 86400 30

And finally, the reverse zone 233.165.10.in-addr.arpa, which belongs to the default project but will generate PTR records for all active addresses that have an associated forward zone (in all projects) on networks that have this zone set. The PTR target will use the address' associated forward zone name.

dig @127.0.0.1 axfr 233.165.10.in-addr.arpa

; <<>> DiG 9.18.1-1ubuntu1.2-Ubuntu <<>> @127.0.0.1 axfr 233.165.10.in-addr.arpa
; (1 server found)
;; global options: +cmd
233.165.10.in-addr.arpa. 3600	IN	SOA	233.165.10.in-addr.arpa. ns1.233.165.10.in-addr.arpa. 1669808750 120 60 86400 30
233.165.10.in-addr.arpa. 300	IN	NS	ns1.233.165.10.in-addr.arpa.
1.233.165.10.in-addr.arpa. 300	IN	PTR	lxdbr0.gw.lxd.home.
117.233.165.10.in-addr.arpa. 300 IN	PTR	c1.lxd.home.
104.233.165.10.in-addr.arpa. 300 IN	PTR	c1p1.p1.lxd.home.
99.233.165.10.in-addr.arpa. 300	IN	PTR	c1p2.p2.lxd.home.
233.165.10.in-addr.arpa. 3600	IN	SOA	233.165.10.in-addr.arpa. ns1.233.165.10.in-addr.arpa. 1669808750 120 60 86400 30

So now, in your upstream DNS server, you can set up delegated zones for each project.
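On a bind9 upstream, that could be as simple as one secondary zone per project (a sketch; replace 127.0.0.1 with your LXD server's core.dns_address and make sure each zone has a matching peers.NAME.address entry):

zone "lxd.home"    { type secondary; primaries { 127.0.0.1; }; file "db.lxd.home"; };
zone "p1.lxd.home" { type secondary; primaries { 127.0.0.1; }; file "db.p1.lxd.home"; };
zone "p2.lxd.home" { type secondary; primaries { 127.0.0.1; }; file "db.p2.lxd.home"; };
zone "233.165.10.in-addr.arpa" { type secondary; primaries { 127.0.0.1; }; file "db.233.165.10.rev"; };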


I just noticed a bug in lxc network zone list that was using the wrong effective project.

Fixed here:

Thanks, I will try that. I was hoping I could get by with one DNS zone across all projects, but multiple zones might work as well.

Yes, we discussed that in the GitHub issue (option 1), but it cannot work: although it's not possible to start instances with the same name in different projects, it is possible for them to exist (and thus any static DHCP assignments and their automatic SLAAC IPv6 addresses would be generated for them).

This would lead to name conflicts in the zone (or worse, unexpected load balancing between instances!).
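For example (hypothetical, reusing the p1/p2 projects from above):

lxc init images:alpine/3.16 web --project p1
lxc init images:alpine/3.16 web --project p2  # both can exist; only starting both is blocked
# With one shared forward zone, both would publish records for web.lxd.home.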