Hello,
I’m looking for a way to add a /64 prefix to a routed NIC as described here: Instances | LXD
But it doesn’t look like `ipv6.address` supports a prefix, only an individual IP. Is there anything I’m missing?
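For reference, here’s roughly what works today with a single address, and what I’d like to be able to do (instance name, parent interface, and addresses here are illustrative):

```
# Works today: routed NIC with an individual IPv6 address
lxc config device add c1 eth0 nic nictype=routed parent=enp5s0 ipv6.address=2001:db8::10

# What I'd like: hand the instance a whole /64 (rejected, since
# ipv6.address only accepts individual addresses)
lxc config device add c1 eth0 nic nictype=routed parent=enp5s0 ipv6.address=2001:db8:1::/64
```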
Thank you!
The way to do this would be to add support for `ipv4.routes` and `ipv6.routes` on the `routed` nictype.
@tomp would that work? If so, should be a pretty easy tweak.
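Usage could then look something like this (hypothetical: `ipv6.routes` doesn’t exist yet, so the option name and values are purely illustrative):

```
# Hypothetical future syntax, assuming ipv6.routes gets added to routed NICs:
lxc config device add c1 eth0 nic nictype=routed \
    ipv6.address=2001:db8::10 \
    ipv6.routes=2001:db8:1::/64
```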
Ah, I see.
I was also curious: could the routed nictype be extended to VMs as well? I tried it out and got the following error; the docs also seem to indicate that it is currently only supported for containers.
```
Config parsing error: Invalid devices: Device validation failed for "eth0": Unsupported device type
Press enter to open the editor again or ctrl+c to abort change
```
Thank you!
This isn’t currently supported as it’s quite tricky to do.
For routed mode, DHCP/SLAAC isn’t used, instead LXD pre-configures the network interface in the container to match what’s needed to get connectivity (assigns an address, sets up a default gateway, …).
In a VM, LXD cannot pre-configure a network interface the same way it does with a container. The earliest we can start setting something up like that would be when the LXD agent starts. However this may be too late and may conflict with the distribution’s own network management tooling (networkd, ifupdown, network-manager, …) and not all virtual machines will be running the agent either.
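Roughly speaking, for a container LXD does the equivalent of the following inside the instance, which it has no way to do for a VM before boot (a sketch; addresses illustrative):

```
# Approximately what routed pre-configures in a container's network namespace:
ip -6 addr add 2001:db8::10/128 dev eth0       # the statically assigned address
ip -6 route add fe80::1 dev eth0               # on-link route to the host-side gateway
ip -6 route add default via fe80::1 dev eth0   # default route via the veth pair
```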
Hmm, I see. Is it possible to have a “routed-lite” interface for VMs without the integrations?
We only use Ubuntu VMs and can tool around the LXD API to insert a generated netplan file into each VM ourselves with `lxc file push`, or something like that.
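Something like this, generated per VM (a sketch; the in-VM interface name and addresses are illustrative):

```
# Generate a netplan file and push it into the VM:
cat > 50-routed.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp5s0:
      addresses: [2001:db8::10/128]
      routes:
        - to: ::/0
          via: fe80::1
          on-link: true
EOF
lxc file push 50-routed.yaml vm1/etc/netplan/50-routed.yaml
```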
You could build your own equivalent solution on top of a `p2p` device, as that will provide you with an unconfigured device on the host; then you can add static routes over that host device to replicate what `routed` does (along with the matching network config in the VM).
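As a sketch (device names and the route are illustrative; `host_name` just pins the host-side interface name so it can be referenced):

```
# Unconfigured host-side interface via a p2p NIC:
lxc config device add vm1 eth1 nic nictype=p2p name=eth1 host_name=veth-vm1

# Then replicate what routed does with static routes on the host:
ip -6 route add 2001:db8:1::/64 dev veth-vm1
```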
Yes, adding `ipv{n}.routes` to the `routed` NIC type should work fine. As the `routed` NIC type already requires at least one statically allocated IP to operate (it adds the static routes and IP neighbour proxy entries to the host), we could have the static routes specified in this new setting route via the first static IP specified. That way the container wouldn’t need to respond to L2 neighbour requests for those routes (which would be useful when using nesting).
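On the host, that would boil down to something like this (a sketch, with 2001:db8::10 standing in for the NIC’s first static IP and veth-c1 for its host-side interface):

```
# Routes from ipv{n}.routes would be installed on the host via the first static IP:
ip -6 route add 2001:db8:1::/64 via 2001:db8::10 dev veth-c1
```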
One thing I would ask @gpl to confirm/understand, though, is that this wouldn’t be advertising the /64 on the parent interface at the L2 level (i.e. ARP and NDP) like it would with the statically defined addresses in the `ipv{n}.address` fields.
This is because Linux only allows advertising/proxying individual IPs, and a /64 is simply too many IPs to be adding statically.
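For context, the proxying that `routed` relies on works one address at a time, e.g.:

```
# Each statically defined address gets its own proxy NDP entry on the parent:
ip -6 neigh add proxy 2001:db8::10 dev enp5s0
# A /64 would need 2^64 such entries, so it can't be statically proxied.
```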
@stgraber I suppose we could also do what we do with the `ovn` networks and limit the size of L2-advertised external subnets, although this would then prevent the use of a /64 in this case.
As for VMs, we could add limited `routed` NIC support that performs only the host-side configuration, and make it clear in the docs that the user needs to configure the VM-side network settings themselves (either manually or via something like cloud-init).
As it stands, most users of `routed` NICs have to do this with containers anyway, as the default network config in our images often causes the addresses and routes added to the NIC to be wiped out when the container starts.
Hence this guide to use cloud-init: How to get LXD containers get IP from the LAN with routed network
So the experience wouldn’t be so different anyway.
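For the record, that guide’s approach boils down to something like this (a sketch; the config key, interface name, and addresses are illustrative):

```
# Persist the routed NIC's network config inside the instance via cloud-init:
lxc config set c1 user.network-config "$(cat <<'EOF'
version: 2
ethernets:
  eth0:
    addresses: [2001:db8::10/128]
    routes:
      - to: ::/0
        via: fe80::1
        on-link: true
EOF
)"
```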
So @gpl you wouldn’t need the Layer 2 ARP/NDP proxy for this, and you would route the /64 directly to the LXD host’s external IP?
Indeed, I wouldn’t need the proxy for this; I’m currently intending to use GoBGP with Zebra to pick up the routes from the kernel and advertise them to my upstream.
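In case it helps anyone else, checking that end of things is roughly (a sketch; assumes gobgpd is already wired to Zebra/FRR for kernel route redistribution):

```
# Confirm the /64 is present in the host's kernel routing table:
ip -6 route show 2001:db8:1::/64

# And that GoBGP has picked it up for advertisement:
gobgp global rib -a ipv6
```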
Thank you!
Sounds good. Issue created: