AWS with LXD Container On Ubuntu 16.04 LTS

Hi Guys,

Just a quick question. I have one EC2 instance on AWS running Ubuntu 16.04, and I have set up LXD on it using “apt install lxd”. This EC2 instance has only 1 public IP address and one internal IP address, which starts with 172.x.x.x.

My question is: can I use this guide ===> https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/ to set up something similar on this AWS EC2 instance? Or should I use the macvlan approach from the same article?

I’ve tried the above guide on my local LAN and it works: each container I create gets an IP address in the same range via DHCP.

Is it possible to do the same on AWS? I can’t seem to find any direct reference for this type of project.

Appreciate the input from you guys.

Thanks.

My understanding of AWS’ network is that it doesn’t support broadcast packets, so using a bridge or macvlan will not work, as both rely on layer 2 ARP/NDP to resolve IP addresses to MAC addresses.

What do you want to use the AWS LXD container for?

If it’s for a website, then you can just use LXD’s proxy device to forward port traffic from the host to your container. You just have to remember to open that port in the AWS security group for the host.

I do this for multiple websites and it works well for me.

Example command on the host to forward port 80 to the container named CN1:

$ lxc config device add CN1 myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
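If you prefer the AWS CLI over the console for the security group part, a rough sketch of opening that port looks like this (the security group ID below is a placeholder for your host’s group):

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0

You can then check the proxy device with “lxc config device show CN1”, or remove it again with “lxc config device remove CN1 myport80”.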

However, on AWS you can get more than one IP address, but you have to pay for it:

EC2 gives you a public IP address paired with a corresponding internal IP address. You use the EC2 management console to map a public IP address to its corresponding internal IP address.
But since you are paying for a single public IP address, there is not much you can do; the network (either your LAN or EC2) must be able to grant you additional IP addresses. On your local LAN you get such addresses for free by default; with EC2 you would need to add them to your account.

Having said that, your goal (after getting more public IP addresses) is essentially to map different LXD containers to different 172.x.x.x internal IP addresses. If the containers cannot get their IP addresses automatically from the network (using bridge or macvlan), then you should be able to assign them manually. More on multiple IP addresses on EC2.
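For example, a rough sketch of assigning one of those extra internal addresses to a container by hand; the container name, address and gateway below are placeholders, and this assumes the container’s NIC can actually reach the EC2 network (either bridged, or combined with the proxy ARP trick described further down):

lxc exec web1 -- ip addr add 172.31.20.50/20 dev eth0
lxc exec web1 -- ip route replace default via 172.31.16.1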

Hi Guys,

Thanks a lot for the feedback and answers given.

Actually, my main intention is to see whether I can run Ubuntu 16.04 as the main host and, inside it, a number of LXD containers running CentOS and other distributions. Each of these containers would just need to expose services externally, such as HTTP, HTTPS, MySQL, Nagios, etc.

My other intention is to help my company save costs, because right now I have a few EC2 instances running: some with CentOS 6.10 and one with CentOS 7. This mixed environment has one instance for dev, a couple for UAT and a couple more for prod.

I do have a testing server running on my company’s local LAN, where I have managed to run a few LXD containers and it works well. I see this as a new challenge on a cloud service such as AWS, given the IP address limitations you guys have pointed out.

I believe this is not easy to achieve and requires careful planning, as there are UAT and PROD environments running on AWS.

Thank you once again.

Many instance types in EC2 support 2 NICs. The 2nd NIC can give you a 2nd private IPv4 address and a public IPv6 address (if you choose to enable IPv6), in the range(s) of the subnet and VPC you attach it to, which can be different from what the 1st NIC is attached to, though it must be in the same AZ. You can then associate another EIP with that 2nd NIC’s private IPv4 address (172.x.x.x, most likely 172.31.x.x if you are using a default network setup). Some big instance types can have as many as 8 NICs.
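A hedged sketch of that flow with the AWS CLI, where all IDs are placeholders for your own subnet, security group and instance:

# create a NIC in the target subnet, attach it as the instance's 2nd NIC (device index 1)
aws ec2 create-network-interface --subnet-id subnet-0123456789abcdef0 --groups sg-0123456789abcdef0
aws ec2 attach-network-interface --network-interface-id eni-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device-index 1
# allocate an Elastic IP and map it onto that NIC's private IPv4 address
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --network-interface-id eni-0123456789abcdef0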

It is usually best to create a separate AWS account for development testing, to avoid any risk of impacting production. AWS encourages this common practice, and it costs no more to run the same instance in another account. Here at my company, each developer gets one and each project gets one.

Are you trying to give the internal LXD containers the extra internally routable addresses given to your host inside AWS?
You might be able to get the host to respond on behalf of the containers for the AWS IPs by using proxy ARP and some interface routes internally on the host’s bridge.

You need to enable proxy ARP via sysctl on the interface in question, e.g.:
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
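To make that survive a reboot, and since the host will also be routing between eth0 and the LXD bridge, you probably want IP forwarding enabled as well (this sketch assumes eth0 is the EC2-facing interface):

sysctl -w net.ipv4.conf.eth0.proxy_arp=1
sysctl -w net.ipv4.ip_forward=1
printf 'net.ipv4.conf.eth0.proxy_arp = 1\nnet.ipv4.ip_forward = 1\n' > /etc/sysctl.d/99-lxd-proxy-arp.conf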

Give the container the new IP address you want to use, i.e. the one Amazon gave you:
e.g. 10.170.10.100/24

Then, on the container (as it will be in the wrong subnet), create an interface route to get out via the LXD bridge; in my case the LXD bridge gateway is 10.132.125.1:

ip route add 10.132.125.1/32 dev eth0 (this enables connectivity to the gateway even though it’s on the wrong subnet)

Now add a default route via the gateway above (this is like a recursive lookup):
ip route add 0.0.0.0/0 via 10.132.125.1

On the container host, create a /32 host route for the AWS IP to go internally via the LXD bridge:
ip route add 10.170.10.100/32 dev lxdbr0

I’ve done this, but not in public cloud, so there’s a good chance it will not work; some things just don’t work in public cloud because you’re bound to the strict rules of their overlay networking. As stated, broadcasts just don’t exist in the usual sense. I’m not sure about the ARP side.

Other options would be to just NAT the containers on egress from the host and port-forward inbound traffic through the host, or to forward the traffic to an internal HAProxy container which then forwards TCP connections or HTTP on to the containers.
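A rough sketch of the NAT/port-forward variant with iptables on the host; the container address and bridge subnet are placeholders, and note that LXD’s default lxdbr0 already masquerades egress, so the second rule may be redundant:

# forward inbound TCP 443 arriving on eth0 to a container at 10.132.125.10
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.132.125.10:443
# NAT the containers' outbound traffic behind the host's address
iptables -t nat -A POSTROUTING -s 10.132.125.0/24 -o eth0 -j MASQUERADE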

You could also run tunnels between the AWS VMs and create an overlay within their overlay, using VXLAN, IPsec tunnels, Fan bridges, etc., but you would likely have to NAT on egress to get to the outside world. You may have issues with MTU, though.
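For the VXLAN option, a minimal point-to-point sketch between two instances might look like this; the addresses, VNI and bridge name are assumptions, UDP 4789 has to be allowed in the security groups, and the lowered MTU leaves room for the VXLAN header:

# on host A (private IP 172.31.1.10), peer is host B (172.31.1.11)
ip link add vxlan42 type vxlan id 42 local 172.31.1.10 remote 172.31.1.11 dstport 4789 dev eth0
ip link set vxlan42 mtu 1450
ip link set vxlan42 up
ip link set vxlan42 master lxdbr0   # attach the tunnel to the LXD bridge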

Edit: I think you can route to your internal LXD ranges if needed, but you have to modify the routing tables in AWS so that the next hop for your lxdbrX network is via VM1 (or whichever instance hosts the containers), and you also have to disable the “source/destination check” on that instance (the source/destination routing verification), otherwise AWS drops the traffic.
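In AWS CLI terms that would be roughly as follows, with the instance ID, route table ID and bridge CIDR as placeholders:

# disable the source/destination check on the instance that hosts the containers
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# tell the VPC route table to send the lxdbr0 subnet to that instance
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.132.125.0/24 --instance-id i-0123456789abcdef0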

Hi @Skaperen & @bodleytunes,

Thanks again for all the information given. I will certainly test and see if all of this works, and I hope it will help reduce costs and make managing the environment on AWS better for me.
