LXC GRE Tunnel doesn't work

I’m following this tutorial: https://linuxacademy.com/blog/linux/multiple-lxd-hosts-can-share-a-discreet-layer-2-container-only-network/ . I built two servers and followed every single step. When I start containers on my alpha server they get addresses in the expected IP range, but when I do the same on my bravo server the containers don’t show any IP addresses. Also, when I test DNS from within the containers, none of the containers on the bravo host are resolvable. But when I move those containers over to alpha, they receive an IP and work normally.

Here is the ifconfig result from Host Alpha
contgre   Link encap:Ethernet  HWaddr 52:d7:71:72:a0:fe
      inet6 addr: fe80::50d7:71ff:fe72:a0fe/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:38 dropped:0 overruns:0 carrier:38
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 06:90:57:e2:9f:a2
      inet addr:172.31.28.248  Bcast:172.31.31.255  Mask:255.255.240.0
      inet6 addr: fe80::490:57ff:fee2:9fa2/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
      RX packets:136539 errors:0 dropped:0 overruns:0 frame:0
      TX packets:51218 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:198565472 (198.5 MB)  TX bytes:3815865 (3.8 MB)

lo        Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:192 errors:0 dropped:0 overruns:0 frame:0
      TX packets:192 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1
      RX bytes:14456 (14.4 KB)  TX bytes:14456 (14.4 KB)

lxdbr0    Link encap:Ethernet  HWaddr 52:d7:71:72:a0:fe
      inet addr:10.119.106.1  Bcast:0.0.0.0  Mask:255.255.255.0
      inet6 addr: fdc2:5811:bf6f:3e41::1/64 Scope:Global
      inet6 addr: fe80::a844:56ff:fe36:9f5c/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:132 errors:0 dropped:0 overruns:0 frame:0
      TX packets:223 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:9654 (9.6 KB)  TX bytes:264725 (264.7 KB)

vethS00AKH Link encap:Ethernet  HWaddr fe:73:49:1e:fa:61
      inet6 addr: fe80::fc73:49ff:fe1e:fa61/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:132 errors:0 dropped:0 overruns:0 frame:0
      TX packets:214 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:11502 (11.5 KB)  TX bytes:262955 (262.9 KB)

and from Host Bravo

contgre   Link encap:Ethernet  HWaddr 3a:c5:73:f5:f4:b6
      inet6 addr: fe80::38c5:73ff:fef5:f4b6/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:8 dropped:0 overruns:0 carrier:8
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 06:5a:e1:27:47:6a
      inet addr:172.31.21.195  Bcast:172.31.31.255  Mask:255.255.240.0
      inet6 addr: fe80::45a:e1ff:fe27:476a/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
      RX packets:135921 errors:0 dropped:0 overruns:0 frame:0
      TX packets:53664 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:198233389 (198.2 MB)  TX bytes:3898083 (3.8 MB)

lo        Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:192 errors:0 dropped:0 overruns:0 frame:0
      TX packets:192 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1
      RX bytes:14456 (14.4 KB)  TX bytes:14456 (14.4 KB)

vethPBKGEJ Link encap:Ethernet  HWaddr fe:7e:9c:f0:4a:7c
      inet6 addr: fe80::fc7e:9cff:fef0:4a7c/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:32 errors:0 dropped:0 overruns:0 frame:0
      TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:8856 (8.8 KB)  TX bytes:648 (648.0 B)

vethR5YHS6 Link encap:Ethernet  HWaddr fe:fe:ed:c1:64:dc
      inet6 addr: fe80::fcfe:edff:fec1:64dc/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1462  Metric:1
      RX packets:30 errors:0 dropped:0 overruns:0 frame:0
      TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:8172 (8.1 KB)  TX bytes:648 (648.0 B)

I’m running LXD/LXC 2.0.11 on each of them

Also, I would like to note that I’m running these instances on AWS and have configured two different network interfaces, one for each host. Please let me know what I can do to debug/fix this. I’m new to LXC/LXD, thanks.
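For reference, here is roughly what I plan to check next. This is only a sketch based on the tutorial’s setup, where contgre is a gretap tunnel attached to lxdbr0, and the IPs are my hosts’ private addresses; any suggestions welcome. The TX carrier errors on contgre make me suspect the tunnel itself isn’t passing traffic (I also notice lxdbr0 doesn’t appear in the bravo ifconfig output at all), and as far as I know AWS security groups drop GRE (IP protocol 47) unless it is explicitly allowed.

# On each host: confirm contgre is a gretap interface and that its
# local/remote endpoints mirror each other on alpha and bravo.
ip -d link show contgre

# Confirm contgre is actually attached to the lxdbr0 bridge.
brctl show lxdbr0        # or: ip link show master lxdbr0

# Check basic reachability between the tunnel endpoints (from bravo, ping alpha's private IP).
ping -c 3 172.31.28.248

# Watch for GRE traffic on alpha while a bravo container requests DHCP;
# if nothing arrives, the security group is probably dropping protocol 47.
sudo tcpdump -ni eth0 ip proto 47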

I have also been working on the problem of LXD and LXC communication across EC2 instances. There have been some posts about using a proxy ARP method, as described here and here, but so far I have not been able to get them to work on AWS EC2. Maybe you will have better luck with them and/or better insight into what is missing from these “solutions”. HTH
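Roughly, as I understand it, the proxy ARP idea is to have each host answer ARP on its VPC-facing interface for its containers’ addresses and route those addresses to the container bridge. Something along these lines (the interface name and CONTAINER_IP are placeholders), though as noted I have not been able to get this working on EC2:

# Enable proxy ARP on the VPC-facing interface and turn on IP forwarding.
sudo sysctl -w net.ipv4.conf.eth0.proxy_arp=1
sudo sysctl -w net.ipv4.ip_forward=1

# Route each container address to the bridge it sits behind.
sudo ip route add CONTAINER_IP/32 dev lxdbr0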

Orabuntu-LXC can do this on EC2 ( https://github.com/gstanden/orabuntu-lxc ). It uses OpenvSwitch SDN to span LXC container networks across multiple hosts.
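To give a rough idea of what the SDN layer does (Orabuntu-LXC automates all of this; the bridge name and remote IP below are only illustrative), an Open vSwitch GRE link between two hosts boils down to something like:

# On host A, create an OVS bridge and a GRE port pointing at host B's
# private IP; host B does the mirror image with host A's IP.
sudo ovs-vsctl add-br sw1
sudo ovs-vsctl add-port sw1 gre0 -- set interface gre0 type=gre options:remote_ip=172.31.21.195

Containers attached to that bridge on both hosts then share a single layer-2 segment.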

I’m just finishing up the beta release for Ubuntu 16.04 on EC2. The 6.10-beta AMIDE release should be out sometime later today (Friday March 23).

Orabuntu-LXC 6.10-beta AMIDE (support for Ubuntu 16.04 EC2 t2.micro and larger)
multi-host (span a single container subnet across multiple hosts)

https://github.com/gstanden/orabuntu-lxc/releases/tag/6.10-beta

Wow, this sounds cool. Will definitely give it a shot. Thank you for your support.

The documentation at the GitHub site needs some improvement.

Before doing the install, create an “orabuntu” user on each EC2 Ubuntu 16.04 host; as described below, the orabuntu-services-0.sh script is what creates the “orabuntu” user on each host.
In the orabuntu account, run “ssh-keygen -t rsa”, then in the usual way put both id_rsa.pub keys into the authorized_keys file and copy that authorized_keys file to both hosts (see the sketch below).
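As a concrete sketch of that key exchange (done on both hosts; the path to the other host’s public key is a placeholder for however you copy it over):

# As the orabuntu user, generate a key pair.
ssh-keygen -t rsa

# Build an authorized_keys file containing BOTH id_rsa.pub keys, then put
# the same file on both hosts (/tmp/other-host-id_rsa.pub is a placeholder).
cat ~/.ssh/id_rsa.pub /tmp/other-host-id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys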

Basically, connect to the EC2 instance (this will log you in as the “ubuntu” user):

mkdir Downloads
sudo apt-get install wget unzip net-tools bind9utils
sudo apt-get update
sudo apt-get upgrade
sudo reboot

After the reboot, connect as the default “ubuntu” user again. These steps are just to get the orabuntu-services-0.sh script, which creates the “orabuntu” user:

cd Downloads
pwd
/home/ubuntu/Downloads
wget https://github.com/gstanden/orabuntu-lxc/archive/6.10-beta.zip
unzip 6.10-beta.zip
cd orabuntu-lxc-6.10-beta/orabuntu
./orabuntu-services-0.sh (this creates the “orabuntu” user)
sudo su - orabuntu
password: orabuntu

Now, connected as the “orabuntu” user, download the zip distribution again and unzip it:

cd Downloads
pwd
/home/orabuntu/Downloads
wget https://github.com/gstanden/orabuntu-lxc/archive/6.10-beta.zip
unzip 6.10-beta.zip
cd orabuntu-lxc-6.10-beta/anylinux
./anylinux-services.HUB.HOST.sh new

This should build the entire infrastructure on the HUB host, which is always the first install in an Orabuntu-LXC deployment.
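As a quick sanity check after the HUB install (assuming the script has set up Open vSwitch and LXC as it does on my systems; bridge and container names will vary):

# List the Open vSwitch bridges and GRE ports the installer created.
sudo ovs-vsctl show

# List the LXC containers; the "ns1" DNS/DHCP container should appear here.
sudo lxc-ls --fancy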

Once that is done, do similar steps on the second EC2 host, but run “anylinux-services.GRE.HOST.sh” instead. Before you run this script, you have to set the following values in the script:

SPOKEIP=<private EC2 IP of the 2nd (GRE) EC2 host>
HUBIP=<private EC2 IP of the 1st (HUB) EC2 host>
HubUserAct=orabuntu
HubSudoPwd=orabuntu

After setting these variables, run “./anylinux-services.GRE.HOST.sh new” .
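For example, purely for illustration using the private IPs from the original post, the edited lines in anylinux-services.GRE.HOST.sh would look something like this:

# Illustrative values only -- substitute your own EC2 private IPs.
SPOKEIP=172.31.21.195    # private IP of the 2nd (GRE) EC2 host
HUBIP=172.31.28.248      # private IP of the 1st (HUB) EC2 host
HubUserAct=orabuntu      # user created by orabuntu-services-0.sh
HubSudoPwd=orabuntu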

If something goes wrong for any reason, don’t sweat it: there is a reinstaller feature, which takes “rei” as the parameter instead of “new”, and it will erase the install and let you start over.

Reach out if you have any issues. I’m glad to do a webex or google hangouts to help if needed.

Here are some recent videos showing the installs.

Finished product (containers on same network across multiple hosts can ssh to each other):

HUB install: https://youtu.be/Y7xaI4_qoE0

GRE install: https://youtu.be/2n6iqMUrhnY

I will post a short video later this afternoon with the setup steps described above (creating the orabuntu user, setting up the id_rsa.pub keys, etc.) for EC2.

It’s beta software just released for EC2 and it’s probably got a few bugs.

However, I’ve tested it for the 1 HUB and 1 GRE host case and it seems to be working OK.

I already found some issues with the sshpass step that goes out to the HUB host and does the lookups for the switch IP. It works OK for the first GRE host (one HUB and one GRE), but for the second GRE host and beyond it keeps returning “201” for the 4th IP octet, so that bug is being worked on now.

You should be OK for two hosts (1 HUB and 1 GRE host), but additional GRE hosts will need the next release, 6.10.1, where this is fixed.

Released v. 6.10.1-beta AMIDE.

Please use version 6.10.1-beta AMIDE, which has the fix for the remote nslookup of the next available fixed OpenvSwitch IP when adding additional GRE hosts.

Released v 6.10.2-beta AMIDE.

Just tested 6.10.2-beta fully on a 2nd GRE host. The software correctly assigned 202 (next available static IP) to the OpenvSwitches on the 2nd GRE host. This should fix the issue for Ubuntu hosts, including EC2.

Released v 6.10.6-beta AMIDE.

LXC containerized DNS/DHCP is replicated across all EC2 instances and configured automatically as part of the Orabuntu-LXC install (the “ns1” DNS/DHCP instances are updated from the running “ns1” DNS/DHCP every 5 minutes; the interval is user-settable). Note that Orabuntu-LXC also automatically ensures containers are named consistently, in a monotonically increasing sequence, across installs.

Wow, this is really exciting. I’m learning so much, thanks a lot @Gilbert_Standen.