Unknown configuration key: lxc.net.0.flags

I’ve got the following virtual network interfaces:

13: windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
   link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
   inet 10.69.1.0/16 scope global windlassbr0
      valid_lft forever preferred_lft forever
14: windlassbr0_1@windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
   link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
15: windlassbr0_2@windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
   link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff

and I get the following output when running this command:

>lxc launch windlass-alpine e2 -c net.[0].type=veth -c net.[0].link=windlassbr0_1 -c net.[0].name=eth0 -c net.[0].ipv4.address=10.69.1.1/32       
Creating e2
Error: Create container: Unknown configuration key: net.[0].link

Even when doing it from Go I get an unknown configuration key error, and generally a seemingly random one at that.

CreateContainer(api.ContainersPost{
    ContainerPut: api.ContainerPut{
        Config: map[string]string{
            "lxc.network.type": "veth",
            "lxc.network.link": "windlassbr0_1",
            "lxc.network.name": "eth0",
        },
    },
    Name: opts.Name,
    Source: api.ContainerSource{
        Type:        "image",
        Fingerprint: "be0f1def31be", // Alpine 3.9 Windlass Edition
    },
})

I’m not sure where to go from here.

lxc version
Client version: 3.15
Server version: 3.15

Those are liblxc config options you’re passing to LXD; that isn’t going to work.

Based on the above, you seem to want a peer-to-peer veth interface, named eth0 in the container and plugged into the windlassbr0_1 bridge on the host?

If so, the way to do that is:

  • lxc init windlass-alpine e2
  • lxc config device add e2 eth0 nic nictype=bridged parent=windlassbr0_1 name=eth0
  • lxc start e2

Or have a profile including that device, then pass the profile using -p.
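
If you later want to take the profile route from Go as well, here’s a rough sketch of what creating such a profile might look like with the Go client (a sketch only, assuming the github.com/lxc/lxd/client and shared/api packages; the package and profile names are made up):

// Sketch only: package and profile names are illustrative.
package windlass

import (
    lxd "github.com/lxc/lxd/client"
    "github.com/lxc/lxd/shared/api"
)

// createNetProfile creates a profile holding the bridged eth0 device from
// the commands above; containers can then reference it via their Profiles
// list, the API equivalent of passing -p on the command line.
func createNetProfile(c lxd.ContainerServer) error {
    return c.CreateProfile(api.ProfilesPost{
        Name: "windlass-net", // example profile name
        ProfilePut: api.ProfilePut{
            Devices: map[string]map[string]string{
                "eth0": {
                    "type":    "nic",
                    "nictype": "bridged",
                    "parent":  "windlassbr0_1",
                    "name":    "eth0",
                },
            },
        },
    })
}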

Email reply didn’t seem to work, so here goes.

Thanks for the prompt reply :slight_smile:
windlassbr0_1 is a VLAN interface on the host connected to the windlassbr0 bridge. I’d like the container to be connected to the VLAN. What nictype should I use then? And does any other part of the command change?

More importantly, as I’ll eventually be doing this all through Go, how would I go about it programmatically if not through key/value pairs via ContainerPut.Config?

Will you have more than one container using windlassbr0_1?
If not, you can use nictype=physical parent=windlassbr0_1

The equivalent in the REST API is to set ContainerPut.Devices.

You can pass --debug to the command line tool to see the REST query.
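
For example, a rough Go equivalent of the above (a sketch only, assuming the github.com/lxc/lxd/client and shared/api packages; the device values mirror the physical NIC suggested above and the fingerprint is the one from your original post):

// Sketch: NIC definitions go into Devices, not Config.
package windlass // illustrative package name

import (
    lxd "github.com/lxc/lxd/client"
    "github.com/lxc/lxd/shared/api"
)

func createContainer(name string) error {
    // Connect over the local unix socket ("" uses the default path).
    c, err := lxd.ConnectLXDUnix("", nil)
    if err != nil {
        return err
    }

    op, err := c.CreateContainer(api.ContainersPost{
        Name: name,
        Source: api.ContainerSource{
            Type:        "image",
            Fingerprint: "be0f1def31be", // Alpine 3.9 Windlass Edition
        },
        ContainerPut: api.ContainerPut{
            Devices: map[string]map[string]string{
                "eth0": {
                    "type":    "nic",
                    "nictype": "physical",
                    "parent":  "windlassbr0_1",
                    "name":    "eth0",
                },
            },
        },
    })
    if err != nil {
        return err
    }
    return op.Wait() // wait for the create operation to finish
}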

Just a single container per VLAN. Thanks a mil for your help :slight_smile: I was having an awful time trying to figure this out. Will report back with progress tomorrow evening.

I ran the following commands:

lxc init windlass-alpine a1 
lxc config device add a1 eth0 nic nictype=physical parent=windlassbr0_1 name=eth0
lxc start a1

and with the following network configuration:

13: windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
    inet 10.69.1.0/16 scope global windlassbr0
       valid_lft forever preferred_lft forever
14: windlassbr0_1@windlassbr0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
    inet 10.69.1.1/32 scope global windlassbr0_1
       valid_lft forever preferred_lft forever
15: windlassbr0_2@windlassbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 8e:15:10:56:52:35 brd ff:ff:ff:ff:ff:ff
    inet 10.69.1.2/32 scope global windlassbr0_2
       valid_lft forever preferred_lft forever

windlassbr0_1 disappears from ip addr show until I run lxc stop a1, and it returns without the subnet assigned. The a1 container also doesn’t show an IP address in lxc list, nor can it reach any external hostnames or IP addresses.

My idea was to have a multitude of VLANs connected to a single bridge with VLAN filtering for isolation. Each VLAN would have one LXC container attached to it. A /32 seemed ideal for the lowest amount of ‘wasted’ IP addresses, but I’m not sure if it’s the right configuration for the job, as I want each LXC container to be externally accessible from a reverse proxy on the host. Maybe you can offer some advice on the matter and point me in the right direction for this using LXC.

Running lxc config device add a1 eth0 nic nictype=physical parent=windlassbr0_1 name=eth0 vlan=1 seemed to keep the VLAN visible on the host through ip addr show. I’m assuming this is ‘correct/good’ behaviour. From here I’m not sure how to access the external network from the container: ping gives me ping: sendto: Network unreachable and lxc list shows no IP address allocated.

Hi @Strum355

Please could you describe a bit more about your desired network setup?

Are you trying to assign IPs from your external network to your containers (you mentioned a /32), or are you looking to use a private IP range on just your host?

Related to that, are you expecting your containers to get IPs from an external DHCP server, or rather one running on your host?

What is the purpose of using multiple VLANs inside your host? Are you looking to span these VLANs across a network trunk port to multiple hosts?

I’m not exactly sure what your aim is with this, but reading between the lines, I’m thinking you are looking to assign a single private IP to the container, with isolation between containers, but still reachable from a reverse proxy service running on the host. Is this correct?

What sort of isolation are you looking for?

Thanks
Tom

Hi Tom, this is absolutely correct. I have a diagram here: https://github.com/UCCNetworkingSociety/Windlass/blob/master/doc/windlass-network.png

Each VLAN would be connected to a VLAN-aware bridge so that connectivity between containers/VLANs is limited to going through the reverse proxy. Perhaps the VLAN alone provides that isolation.

At the same time, I would want some method through which I can probe whether a container is up or down. Initially I’m thinking of having a service in each container that registers with a Consul cluster on the host.

I will also need to have each container accessible from the host network to interact with a Docker daemon that would be exposed on each container.

Re. IP allocation, I simply want to assign private IPs within the host. I chose a 10.x.x.x/32 to increase the number of IPs available to me, with the bridge each VLAN is connected to being on a 10.x.x.x/16, but this may not be the right choice for various reasons I’m unaware of.

The DHCP server would be running on the host, unless there is a way for me to easily assign IPs programmatically without the use of a DHCP server.

Ideally, setup of this system would be taken care of by my services, so 3rd parties could grab the services from GitHub/etc., run them and it would all be self-configuring. I’m not sure whether routes need to be configured on the Linux host to facilitate this setup, and whether or not that would add much complexity.

Hopefully this answers your questions. Maybe you have a better network architecture suggestion than what I’ve come up with. @tomp

@Strum355 I haven’t used a VLAN filtering bridge before, but I had a quick look at the docs.

My understanding is that in order for the containers on the VLANs to communicate with the host, the host would need a VLAN interface in each VLAN. Is this your understanding too?

This would also mean you’d need the DHCP server to be listening on all of those host-side interfaces too.

In effect I believe you’re proposing a peer-to-peer link between host and container using a single bridge and multiple VLANs?

If this is the case, I’d be tempted to explore using plain bridged mode with an LXD-managed bridge, and then adding additional firewall rules to the bridge (using ebtables or bridge-aware iptables) to isolate container traffic so containers can only talk to the host and vice versa.

We’ve recently added IP spoof protection to LXD (see LXD 3.15 has been released), which would do some of the job for you. We’ve also talked about adding an isolation mode in the future.

An alternative to this would be to use the p2p NIC type, then set up IP addresses inside the containers manually, and set up host routes and ARP proxy entries manually. This would provide a private peer-to-peer link between each container and the host (not bridged), and you could then control whether the host routes packets between interfaces or not.

We are looking to automate this sort of setup in a future version of LXD, called “routed” mode, and we added initial support for it in LXC recently (Weekly Status #106).

This sounds like the better path for me to take; I’m trying to keep things as simple as I can while still providing the important functionality. The p2p NIC type option sounds quite complex and manual :smiley:

Assuming I have the DHCP server running on the bridge to which the containers will be attached (I’ll hopefully find out how to do that from the LXD codebase), is there a reliable way to get a container’s IP on creation without a poll loop that waits for its IP to be set? I’ll need to set a routing rule in the reverse proxy based on the container’s IP. @tomp

If you have a fresh LXD instance, start the lxd server and then separately run "lxd init". If you accept all the defaults you’ll end up with an LXD-managed bridge and DHCP server with a private subnet.

e.g.

lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=43GB]: 
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 

New containers will then be automatically connected to this bridge with automatically allocated IPs, e.g.

lxc launch ubuntu:18.04 c1

We won’t know what the container’s IP is until it has finished booting and requested an IP from the DHCP server.

However, if you can allocate them yourself, then you can specify them manually:

lxc init ubuntu:18.04 c1
lxc config device add c1 eth0 nic nictype=bridged name=eth0 parent=lxdbr0 ipv4.address=x.x.x.x
lxc start c1

Optionally, if you want to prevent containers from potentially using IPs other than those allocated to them, you can enable IP filtering:

lxc config device add c1 eth0 nic nictype=bridged name=eth0 parent=lxdbr0 ipv4.address=x.x.x.x security.ipv4_filtering=true

For more info see https://lxd.readthedocs.io/en/latest/containers/#nictype-bridged
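
If you let the DHCP server allocate the address instead, you can read it back over the API once the container has its lease. A rough sketch with the Go client ("eth0" is assumed to be the container-side interface name, and callers would still need to retry until the lease appears):

// Sketch: returns the container's global IPv4 address on eth0.
package windlass // illustrative package name

import (
    "fmt"

    lxd "github.com/lxc/lxd/client"
)

func containerIPv4(c lxd.ContainerServer, name string) (string, error) {
    state, _, err := c.GetContainerState(name)
    if err != nil {
        return "", err
    }
    for _, addr := range state.Network["eth0"].Addresses {
        if addr.Family == "inet" && addr.Scope == "global" {
            return addr.Address, nil
        }
    }
    return "", fmt.Errorf("no global IPv4 address on eth0 yet")
}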

Great, thanks for the info! I’ll have to get around to it after work before I come back with more questions haha

You say to run lxd init; is this something that can be done programmatically with the Go client library? Worst case I’ll have to shell out, or have it as a prerequisite step for users to do manually. If yes, would this overwrite settings from a previous run of lxd init, in the case that users want to have different instances of lxd init, so to speak?

@Strum355 The lxd init example was just there to show you how to get a fresh, known setup for testing.

In reality you can use the API to add/configure managed networks (i.e. bridge and DHCP server combinations managed by LXD), and to create containers and add interfaces with static IPs to them.
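
For instance, a rough sketch with the Go client (the bridge name, subnet and static IP are placeholders, and the device keys mirror the lxc commands earlier in the thread):

// Sketch: creates an LXD-managed bridge (with DHCP) and a container
// attached to it with a static IP and IP filtering enabled.
package windlass // illustrative package name

import (
    lxd "github.com/lxc/lxd/client"
    "github.com/lxc/lxd/shared/api"
)

func setupNetworkAndContainer(c lxd.ContainerServer) error {
    // Managed bridge, equivalent to the lxdbr0 that `lxd init` creates.
    err := c.CreateNetwork(api.NetworksPost{
        Name: "windlassbr1", // placeholder bridge name
        NetworkPut: api.NetworkPut{
            Config: map[string]string{
                "ipv4.address": "10.69.0.1/16", // placeholder subnet
                "ipv4.dhcp":    "true",
                "ipv4.nat":     "true",
                "ipv6.address": "none",
            },
        },
    })
    if err != nil {
        return err
    }

    // Container with a bridged NIC, static IP and IP filtering, the
    // equivalent of the `lxc config device add ...` example above.
    // It still needs to be started afterwards (e.g. UpdateContainerState).
    op, err := c.CreateContainer(api.ContainersPost{
        Name: "e2",
        Source: api.ContainerSource{
            Type:        "image",
            Fingerprint: "be0f1def31be", // Alpine 3.9 Windlass Edition
        },
        ContainerPut: api.ContainerPut{
            Devices: map[string]map[string]string{
                "eth0": {
                    "type":                    "nic",
                    "nictype":                 "bridged",
                    "parent":                  "windlassbr1",
                    "name":                    "eth0",
                    "ipv4.address":            "10.69.1.10", // placeholder static IP
                    "security.ipv4_filtering": "true",
                },
            },
        },
    })
    if err != nil {
        return err
    }
    return op.Wait()
}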
