Regarding the unmanaged bridge interface: do you mean an IPv6 address, or the IPv4 address that would have to be assigned to the bridge? If it is just the IPv6, I have to assign that manually anyway, so it might not be much more complicated than what I already have to do. It is also the only approach I have found another tutorial for (http://www.makikiweb.com/Pi/lxc_on_the_pi.html).
If it really is that much more complicated, we can go with the simplest setup instead.
Regarding a step-by-step tutorial: it would need to cover everything you have to find out to choose the best option for your case, and then how to implement it.
With the unmanaged bridge option you have to move all of the host’s eth0 IPs to the newly created bridge interface (e.g. br0), because once eth0 is attached to br0, any IPs on eth0 stop working.
This can be tricky because people are often logged in via SSH over the very IPs they are trying to move, and can get locked out.
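If you do go the unmanaged bridge route, `netplan try` is worth knowing about, since it rolls back automatically if you lock yourself out. A rough sketch of such a bridge config (the file name, interface names, and use of DHCP are assumptions about your setup):

```shell
# Hypothetical netplan config moving eth0's addressing onto a bridge br0.
# 'netplan try' reverts after a timeout unless you confirm, which guards
# against the SSH lockout described above.
cat > /etc/netplan/01-br0.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: true      # or your static IPv4 config
      accept-ra: true  # let SLAAC configure the IPv6 on br0
EOF
netplan try
```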
You’d also need to confirm that your ISP runs a router advertisement daemon on their network; otherwise an unmanaged bridge or macvlan isn’t going to work with SLAAC, and you’d have to use static assignments (at which point it’d be easier to use routed or ipvlan).
You should also check whether the ISP allows multiple MAC addresses on the eth0 interface before you go down the unmanaged bridge or macvlan route.
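A quick check for the router advertisement prerequisite, assuming a Debian/Ubuntu host and that eth0 is the upstream interface:

```shell
# rdisc6 (from the ndisc6 package) sends a router solicitation and prints
# any router advertisement that comes back; no reply suggests SLAAC will
# not work over an unmanaged bridge or macvlan.
apt install ndisc6
rdisc6 eth0

# A default route already learned from an RA is also a good sign:
ip -6 route show default
```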
If you were going to use the routed NIC type, then the steps would be:
1. Remove lxdbr0 (or at least change its IP prefix so it doesn’t conflict with your public /64).
2. Ensure that your host has IPv6 connectivity.
3. Pick an IP in your /64 that isn’t being used.
4. Run `lxc config device add <container> eth0 nic nictype=routed ipv6.address=<your IPv6 address> parent=eth0`
This last step will check for the required sysctl settings and inform you if you need to tweak them. Remember to persist these if you do need to change any of them so a reboot doesn’t wipe them out.
This will then configure the IP inside your container, and a default gateway, as well as the proxy NDP and static routes on the host required to make it appear that your container is on the external network.
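Put together, the steps above might look like this (using the documentation prefix 2001:db8::/64 as a stand-in for your real /64, and c1 as the container name):

```shell
# 1. Stop lxdbr0 handing out addresses from a conflicting prefix:
lxc network set lxdbr0 ipv6.address none

# 2. Confirm the host itself has working IPv6:
ping -6 -c 1 2606:4700:4700::1111

# 3+4. Attach a routed NIC with an unused address from your /64:
lxc config device add c1 eth0 nic nictype=routed \
    parent=eth0 ipv6.address=2001:db8::2

# Persist the sysctls the last step asks for, so a reboot doesn't undo
# them (the exact keys may differ depending on what LXD reports):
cat > /etc/sysctl.d/99-lxd-routed.conf <<'EOF'
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
EOF
sysctl --system
```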
The image from my ISP seems to have included net.ipv6.conf.all.disable_ipv6 = 1 in /etc/sysctl.conf and made my life difficult.
How do I persist the routed NIC type changes that you mentioned?
Also, is there a way to have the host’s eth0 IPv6 addresses routed to the containers’ IPv6 addresses on lxdbr0, which would also allow automatic IPv6 assignment to the containers? I guess that needs the NDP proxy again, which is broken in netplan, which I now have to use since switching back to ifupdown has proven too difficult.
I tried different approaches, the last one being not to set up a bridge when initializing. I then added eth0 to the container with your command, but when trying to start it I get this error: `Error: Common start logic: Failed to start device "eth0": Routed mode requires sysctl net.ipv6.conf.all.forwarding=1`
I added this on the host and in the instance and ran sudo netplan apply, but no luck:
After rebooting and adding the eth0 NIC, the container comes up but has a totally different IPv6 address, and lxc list shows an empty IPv6 field. So something is wrong there as well.
```
root@container1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 56:40:d3:e8:91:e0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::5440:d3ff:fee8:91e0/64 scope link
       valid_lft forever preferred_lft forever
```
It may be that something inside the container is resetting the global IP that LXD set up before it started. Try disabling DHCP and any network config in the container. The IP you see is the randomly generated link-local address; that’s normal and will remain even with a static IP.
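A sketch of that cleanup (the netplan file name inside the image varies, so list the directory first; c1 stands in for your container name):

```shell
# See what network config the image ships with:
lxc exec c1 -- ls /etc/netplan/

# Remove it so it can't override the routed address (10-lxc.yaml is just
# an example name -- use whatever the listing above shows):
lxc exec c1 -- rm /etc/netplan/10-lxc.yaml
lxc restart c1
```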
Removing that YAML file from the container’s netplan config finally made it accept the IPv6 address!
Pinging outside sources also successful.
Now I guess I just need to set up DNS in the container and I am good to go. Unfortunately, that is the next wall I am hitting. I am trying to install lxdbr0 again so the container can at least resolve.
You have been very helpful and I thank you very much!
```
1111:aaaa:3004:9978::2 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 proto static metric 1024 pref medium
```
But I cannot resolve any domain names. Resolving IPv4 domain names should also be possible, as I will want to install programs and such.
(I have hence included gateway6: fe80::1 in the netplan config.)
What are the contents of /etc/resolv.conf and the output of systemd-resolve --status?
Can you still ping outside IPs?
As for resolving IPv4 domains, once you get DNS working, you will be able to resolve IPv4 domains, but they will be unreachable as you’ve not configured any IPv4 address (not even private ones that could be NATed by your host).
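One way to get name resolution going with only IPv6 connectivity is to point the container at a public resolver’s IPv6 address (Cloudflare’s shown here; any resolver you trust works). A sketch, staged in /tmp first so you can inspect it before replacing /etc/resolv.conf — note that systemd-resolved may regenerate that file:

```shell
# Minimal resolver config using Cloudflare's public IPv6 DNS servers:
cat > /tmp/resolv.conf.example <<'EOF'
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001
EOF

# Inspect it, then install it inside the container. If /etc/resolv.conf
# is a symlink managed by systemd-resolved, set the DNS there instead of
# overwriting the file:
cat /tmp/resolv.conf.example
# cp /tmp/resolv.conf.example /etc/resolv.conf
```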
Before the reset I just did for the 1000th time, I could ping outside IPv6 addresses, yes.
I assigned the container the address 192.168.1.2, but pinging outside domain names still didn’t work.
I had put the following contents in /etc/resolv.conf:
systemd-resolve --status had shown me the nameservers in one of the earlier setups, but there was still no name resolution, and I have no idea why not.
Right now I am trying a normal managed bridge on the containers’ eth0 and adding an eth1 with the static IPv6 address, in the hope that the managed bridge will handle DNS.
I also allowed port 53 in ufw on the host, but that does not seem to have been the issue either.
Ah, OK, so you’ve introduced two interfaces inside the container, plus the requirement to access IPv4 nameservers (before, you said you didn’t need IPv4, only IPv6), which changes things a fair bit.
With a second interface in the container connected to the managed bridge, it’s likely that SLAAC autoconfiguration for IPv6 on that interface will wipe out the default IPv6 gateway route of the routed interface and replace it with a default route out of the managed bridge interface instead.
So either disable IPv6 on the managed bridge (ipv6.dhcp=false and ipv6.address=none) so that your second interface is IPv4-only, or add a static private IPv4 address to the routed NIC:
`lxc config device set c1 eth0 ipv4.address=192.168.0.n`
Then you need to add an outbound masquerade firewall rule so that outbound traffic is translated to your host’s external IP. That way you’d only need 1 interface inside the container.
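A sketch of such a masquerade rule with iptables (the 192.168.0.0/24 subnet and outbound interface eth0 are assumptions; use the nftables equivalent if your host runs nft):

```shell
# NAT the container's private IPv4 source address to the host's external
# address on the way out:
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

# IPv4 forwarding must be enabled on the host as well:
sysctl -w net.ipv4.ip_forward=1
```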
```
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 82:a6:92:ed:12:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2a02:nnn:76f4:1::1234/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::80a6:92ff:feed:1267/64 scope link
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:b0:fb:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.87.57.6/24 brd 10.87.57.255 scope global dynamic eth0
       valid_lft 3274sec preferred_lft 3274sec
    inet6 fe80::216:3eff:feb0:fb9c/64 scope link
       valid_lft forever preferred_lft forever
```

```
ip -4 r
default via 10.87.57.1 dev eth0 proto dhcp src 10.87.57.6 metric 100
10.87.57.0/24 dev eth0 proto kernel scope link src 10.87.57.6
10.87.57.1 dev eth0 proto dhcp scope link src 10.87.57.6 metric 100
```

```
ip -6 r
2a02:nnn:76f4:1::1234 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
default via fe80::1 dev eth1 proto static metric 1024 pref medium
```

```
systemd-resolve --status
Global
       DNSSEC NTA: 10.in-addr.arpa
                   16.172.in-addr.arpa
                   168.192.in-addr.arpa
                   17.172.in-addr.arpa
                   18.172.in-addr.arpa
                   19.172.in-addr.arpa
                   20.172.in-addr.arpa
                   21.172.in-addr.arpa
                   22.172.in-addr.arpa
                   23.172.in-addr.arpa
                   24.172.in-addr.arpa
                   25.172.in-addr.arpa
                   26.172.in-addr.arpa
                   27.172.in-addr.arpa
                   28.172.in-addr.arpa
                   29.172.in-addr.arpa
                   30.172.in-addr.arpa
                   31.172.in-addr.arpa
                   corp
                   d.f.ip6.arpa
                   home
                   internal
                   intranet
                   lan
                   local
                   private
                   test

Link 9 (eth0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.87.57.1
          DNS Domain: lxd
```

```
lxc ls
+------+---------+-------------------+------------------------------+-----------+-----------+
| NAME | STATE   | IPV4              | IPV6                         | TYPE      | SNAPSHOTS |
+------+---------+-------------------+------------------------------+-----------+-----------+
| c1   | RUNNING | 10.87.57.6 (eth0) | 2a02:nnn:76f4:1::1234 (eth1) | CONTAINER | 0         |
+------+---------+-------------------+------------------------------+-----------+-----------+
```
I didn’t think I needed IPv4 in the container, but DNS would not work. And of course I need DNS so I can install things in the container. I will try the setup adjustments you have given me, thanks a lot!
I thought my config was fine since pinging the IPv6 address from the outside worked. But of course I don’t want any conflicts, and connecting to the containers on ports 80 and 443 from the outside is also proving difficult…
How do I do that?
(I will try to use Cloudflare to host a site inside the container and provide the ipv4 connectivity.)
PS: The config works for me without setting the netplan config inside the container.