Thanks for the prompt reply
windlassbr0_1 is a VLAN interface on the host connected to the windlassbr0 bridge. I'd like the container to be connected to the VLAN. What nictype does that make it then? And does any other part of the command change?
More importantly, as I'll eventually be doing this all through Go, how would I go about it programmatically, if not through key/value pairs via ContainerPut.Config?
Just a single container per VLAN. Thanks a mil for your help, I was having an awful time trying to figure this out. Will report back progress tomorrow evening.
13: windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
inet 10.69.1.0/16 scope global windlassbr0
valid_lft forever preferred_lft forever
14: windlassbr0_1@windlassbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
inet 10.69.1.1/32 scope global windlassbr0_1
valid_lft forever preferred_lft forever
15: windlassbr0_2@windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
inet 10.69.1.2/32 scope global windlassbr0_2
valid_lft forever preferred_lft forever
windlassbr0_1 disappears from ip addr show until I run lxc stop a1, and it returns without the subnet assigned. The a1 container also doesn't show an IP address in lxc list, nor can it reach any external hostnames or IP addresses.
My idea was to have a multitude of VLANs connected to a single bridge with VLAN filtering for isolation. Each VLAN would have one LXC container attached to it. A /32 seemed ideal for the lowest number of IP addresses "wasted", but I'm not sure if it's the right configuration for the job, as I want each LXC container to be externally accessible from a reverse proxy on the host. Maybe you can offer some advice on the matter and point me in the right direction for this using LXC.
lxc config device add a1 eth0 nic nictype=physical parent=windlassbr0_1 name=eth0 vlan=1
This command seemed to keep the VLAN visible on the host through ip addr show. I'm assuming this is "correct/good" behaviour. From here I'm not sure how to access the external network from the container; ping gives me ping: sendto: Network unreachable, and lxc list shows no IP address allocated.
Could you please describe your desired network setup in a bit more detail?
Are you trying to assign IPs from your external network to your containers (you mentioned a /32), or are you looking to use a private IP range on just your host?
Related to that, are you expecting your containers to get IPs from an external DHCP server, or rather one running on your host?
What is the purpose of using multiple VLANs inside your host? Are you looking to span these VLANs across a network trunk port to multiple hosts?
I'm not exactly sure what your aim is with this, but reading between the lines, I'm thinking you are looking to assign a single private IP to each container, with isolation between containers, but still reachable from a reverse proxy service running on the host. Is this correct?
Each VLAN would be connected to a VLAN-aware bridge to limit connectivity between containers/VLANs to having to go through the reverse proxy. Perhaps the VLAN alone provides that isolation.
At the same time, I would want some method through which I can probe for a container being up/down. Initially I'm thinking of having a service in each container that registers with a Consul cluster on the host.
I will also need to have each container accessible from the host network to interact with a Docker daemon that would be exposed on each container.
Re. IP allocation, I simply want to assign private IPs within the host. I chose a 10.x.x.x/32 to increase the IPs available to me, with the bridge each VLAN is connected to being on a 10.x.x.x/16, but this may not be the right choice for various reasons I'm unaware of.
The DHCP server would be running on the host, unless there is a way for me to easily assign IPs programmatically without the use of a DHCP server.
Ideally, setup of this system would be taken care of by my services, so third parties could grab the services from GitHub etc., run them, and it would all be self-configuring. I'm not sure whether routes need to be configured on the Linux host to facilitate this setup, or whether that would add much complexity.
Hopefully this answers your questions. Maybe you have a better network architecture suggestion than what I've come up with. @tomp
@Strum355 I haven't used a VLAN filtering bridge before, but I had a quick look at the docs.
But my understanding is that in order for the containers on the VLANs to communicate with the host you would need the host to have a VLAN interface in each VLAN. Is this your understanding too?
This would also mean you'd need the DHCP server to be listening on all of those host-side interfaces too.
In effect I believe you're proposing a peer-to-peer link between host and container using a single bridge and multiple VLANs?
If this is the case, I'd be tempted to explore using plain bridged mode with an LXD managed bridge, and then adding additional firewall rules to the bridge (using ebtables or bridge-aware iptables) to isolate container traffic so they can only talk to the host and vice versa.
We've recently added IP spoof protection to LXD (see LXD 3.15 has been released) which would do some of the job for you. We've talked about adding isolation mode too in the future.
An alternative to this would be to use the p2p NIC type, then set up IP addresses inside the containers manually, and set up host routes and proxy-ARP entries manually. This would provide a private peer-to-peer link between each container and the host (not bridged), and then you can control whether the host routes packets between interfaces or not.
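Since you mentioned doing this from Go, the host routes and proxy-ARP entries could be added with something like the vishvananda/netlink package rather than shelling out to ip. A rough, untested sketch; the interface names and address below are just placeholders:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth of the container's p2p NIC (placeholder name).
	veth, err := netlink.LinkByName("veth-c1")
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of "ip route add 10.69.1.5/32 dev veth-c1".
	_, dst, err := net.ParseCIDR("10.69.1.5/32")
	if err != nil {
		log.Fatal(err)
	}
	route := &netlink.Route{
		LinkIndex: veth.Attrs().Index,
		Dst:       dst,
		Scope:     netlink.SCOPE_LINK,
	}
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatal(err)
	}

	// Equivalent of "ip neigh add proxy 10.69.1.5 dev eth0", so the host
	// answers ARP for the container's IP on the external interface
	// (eth0 here is a placeholder for your external NIC).
	ext, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	neigh := &netlink.Neigh{
		LinkIndex: ext.Attrs().Index,
		Flags:     netlink.NTF_PROXY,
		IP:        net.ParseIP("10.69.1.5"),
	}
	if err := netlink.NeighAdd(neigh); err != nil {
		log.Fatal(err)
	}
}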
We are looking to automate this sort of setup in a future version of LXD, called "routed" mode, and we added initial support for it in LXC recently (Weekly Status #106).
This sounds like the better path for me to take; I'm trying to keep things as simple as I can while still providing important functionality. The p2p NIC type option sounds quite complex and manual.
Assuming I have the DHCP server running on the bridge to which the containers will be attached (I'll hopefully find out how to do that from the LXD codebase), is there a reliable way to get a container's IP on creation without a poll loop that waits for its IP to be set? I'll need to set a routing rule in the reverse proxy based on the container's IP. @tomp
If you have a fresh LXD instance, start the lxd server and then separately run "lxd init". If you accept all the defaults, you'll end up with an LXD-managed bridge and DHCP server with a private subnet.
e.g.
lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=43GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
New containers will then be automatically connected to this bridge, with dynamically allocated IPs, e.g.
lxc launch ubuntu:18.04 c1
We won't know what the container's IP is until it has finished booting and requested an IP from the DHCP server.
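So to your earlier question: for the DHCP case I don't think you can avoid some form of waiting. A rough poll-loop sketch with the LXD Go client (the container name, interface name, and timeout are placeholders):

package main

import (
	"fmt"
	"log"
	"time"

	lxd "github.com/lxc/lxd/client"
)

// waitForIPv4 polls the container state until eth0 has a global IPv4
// address, or the timeout expires.
func waitForIPv4(c lxd.ContainerServer, name string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, _, err := c.GetContainerState(name)
		if err != nil {
			return "", err
		}
		for _, addr := range state.Network["eth0"].Addresses {
			if addr.Family == "inet" && addr.Scope == "global" {
				return addr.Address, nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("timed out waiting for an IPv4 address on %s", name)
}

func main() {
	// Connect over the local unix socket.
	c, err := lxd.ConnectLXDUnix("", nil)
	if err != nil {
		log.Fatal(err)
	}
	ip, err := waitForIPv4(c, "c1", 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("c1 has IP", ip)
}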
However, if you can allocate the addresses yourself, then you can specify them statically:
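With the Go client, the static address goes in the NIC entry of Devices (rather than the Config key/value map you mentioned earlier). A sketch, assuming the default lxdbr0 managed bridge from lxd init; the container name, image source, and address are placeholders:

package main

import (
	"log"

	lxd "github.com/lxc/lxd/client"
	"github.com/lxc/lxd/shared/api"
)

func main() {
	c, err := lxd.ConnectLXDUnix("", nil)
	if err != nil {
		log.Fatal(err)
	}

	req := api.ContainersPost{
		Name: "c1",
		Source: api.ContainerSource{
			Type:     "image",
			Protocol: "simplestreams",
			Server:   "https://cloud-images.ubuntu.com/releases",
			Alias:    "18.04",
		},
		ContainerPut: api.ContainerPut{
			Devices: map[string]map[string]string{
				"eth0": {
					"type":    "nic",
					"nictype": "bridged",
					"parent":  "lxdbr0",
					// Static DHCP allocation; needs an LXD managed parent network.
					"ipv4.address": "10.69.1.5",
				},
			},
		},
	}

	// Create the container and wait for the operation to finish.
	op, err := c.CreateContainer(req)
	if err != nil {
		log.Fatal(err)
	}
	if err := op.Wait(); err != nil {
		log.Fatal(err)
	}

	// Start it (the equivalent of "lxc start c1").
	op, err = c.UpdateContainerState("c1", api.ContainerStatePut{Action: "start", Timeout: -1}, "")
	if err != nil {
		log.Fatal(err)
	}
	if err := op.Wait(); err != nil {
		log.Fatal(err)
	}
}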
Great, thanks for the info! I'll have to get around to it after work before I come back with more questions haha.
You say to run lxd init; is this something that can be done programmatically with the Go client library? Worst case, I'll have to shell out or have it as a prerequisite step for users to do manually. If yes, would this overwrite settings from a previous run of lxd init, in the case that users want to have different instances of lxd init, so to speak?
@Strum355 the lxd init example was just there to show you how to get a fresh known setup for testing.
In reality you can use the API to add/configure managed networks (i.e. bridge and DHCP server combinations managed by LXD) and to create containers and add interfaces with static IPs to them.
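For example, a rough sketch of creating a managed bridge (with its DHCP server) via the Go client; the bridge name and subnet below are just placeholders:

package main

import (
	"log"

	lxd "github.com/lxc/lxd/client"
	"github.com/lxc/lxd/shared/api"
)

func main() {
	c, err := lxd.ConnectLXDUnix("", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Creating a managed network makes LXD set up the bridge and run
	// dnsmasq on it for DHCP/DNS.
	err = c.CreateNetwork(api.NetworksPost{
		Name: "windlassbr0",
		NetworkPut: api.NetworkPut{
			Config: map[string]string{
				"ipv4.address": "10.69.0.1/16",
				"ipv4.nat":     "true",
				"ipv6.address": "none",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}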