Hello, I have a small question.
I guess I am doing something wrong, but I'm not sure what.
So I create a new network with
lxc network create lxd
Then I add that to a profile as lxd0 and restart the containers. They get an IPv6 address but not an IPv4 address. I tried enabling ipv4.dhcp and setting a DHCP range, but that didn't help either.
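Roughly, what I did was something like this (the profile and container names below are just placeholders, not my exact names):

lxc network create lxd
lxc network attach-profile lxd myprofile lxd0
lxc restart mycontainer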
When you create a managed network (as you did with lxc network create lxd0), you get a DHCP server from LXD (not a system DHCP server).
If you have some other DHCP server on your host, it may conflict with the one run by LXD.
I assume that you are using Ubuntu, and have either the deb package of LXD, or the snap package of LXD. If that is not the case, then tell us.
The command to show (on the host) which processes are listening on port 53 (domain) is the following.
sudo lsof -i :53
If you do not get anything in the output, then something is wrong (with a managed LXD network, LXD's dnsmasq should be listening there).
You might get a command not found error; if you do, install the tool with sudo apt install lsof.
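If you would rather not install anything extra, ss (already available on Ubuntu) can show the same thing; something like:

sudo ss -lntup | grep ':53 '

Here -l lists listening sockets, -n skips name resolution, -t and -u cover TCP and UDP, and -p shows the owning process (which needs sudo).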
I also tried my steps in the test area of LXD, but same result: cannot get an IP address. I'm pretty sure I am doing something wrong. What did I do: created a new network, added the network to a profile, added the profile to a new container. No IP. Set ipv4.dhcp, no IP. Edited the network and added parent eth0, no IP. What am I doing wrong?
From what you describe, your profile now has three network interfaces:
eth0 (macvlan), which must get an IP address from your host's network and cannot get it from an LXD managed network,
lxd0 (bridged to an interface lxd0; LXD should give an IP address here), and
test (which must already exist on the host and have its own dedicated DHCP server running, because there is no managed LXD network with the name test).
Therefore, for eth0 with macvlan to work, your computer should not be connected to the network over WiFi, because WiFi security (WPA) does not allow more than one MAC address to connect to the WiFi router. In addition, if you use LXD inside a virtual machine (KVM, VMware, VirtualBox), macvlan will probably not work, because by default these virtualization environments require specific configuration for it.
The interface lxd0 should work and get an IP address, but I do not know whether the container sticks to the first network interface (which cannot get an IP address) and ignores the rest. This is up to the container image, and whether it tries DHCP only on eth0 or on all available interfaces.
The interface test should give you the error Error: Common start logic: Failed to start device 'test': Parent device 'test' doesn't exist if there is no test network interface on the host.
I suggest cleaning up the list of network interfaces.
If you followed a guide on setting up network interfaces for LXD, which one was it? We might be able to help improve it by explaining more about the network interfaces.
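For example, to see which devices a profile currently has and to drop one you no longer need (the profile name default and the device name test below are just examples; adapt them to your case):

lxc profile show default
lxc profile device remove default test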
Thanks! I tried it on my production system with no macvlan, only a bridge, and it worked. I guess what you were saying is kind of the key. My only remaining question is why this still did not work in the testing environment of LXD, since there it gets an IP address over macvlan but not over a second connected bridge.
Also, can I forward ports to the IP address of the container? I have multiple game servers which I want to put into containers, and I don't want to mess with their configs. Like port forwarding in Docker?
For the first question, it is up to the container image whether it will try DHCP to configure an interface. In the Ubuntu images, the instructions are found in /etc/netplan/. I think the default is to configure just eth0.
You can affect this file and make Ubuntu get an IP address on additional interfaces by using cloud-init in your LXD profile. The configuration you put in the LXD profile ends up in the netplan configuration inside the container. It is really neat.
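Here is a minimal sketch of what that could look like in the profile (as you would see it in lxc profile edit; the second interface name eth1 is an assumption and must match the name of the NIC device in your profile):

config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        dhcp4: true
      eth1:
        dhcp4: true

cloud-init in the Ubuntu images reads user.network-config and writes the corresponding netplan file; as far as I know it is applied on the container's first boot, so you may need to create a fresh container to see the effect.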
On the second question, you want to specify the allocated IP of the container instead of localhost in the proxy device?
You can definitely do so, but I think it is cleaner to use localhost, because if the IP address changes, you do not need to edit the proxy device. It is a matter of personal preference in the end.
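For reference, a proxy device that forwards a TCP port from the host to the container over localhost looks roughly like this (the container name, device name and port number are only examples):

lxc config device add mycontainer game-tcp proxy listen=tcp:0.0.0.0:27015 connect=tcp:127.0.0.1:27015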
Thanks. What I was trying to do yesterday was to give a container a public IPv6 address, but that did not seem to work. I can't just add a proxy, since I want every container to have its own public IPv6 address. Is that currently possible?
As well as using the proxy, there are several options for getting public IPs into containers (which the proxy doesn’t really achieve, as the source address of outbound connections from the container will not use the public IP):
MACVLAN: This requires either manual config of IPs inside the container, or SLAAC/DHCPv6 to be running on the parent network. It also does not allow the container to communicate with the host, and will require the parent network port to allow multiple MAC addresses (which some ISPs do not allow).
IPVLAN: This requires static address configuration and cannot use SLAAC/DHCPv6. Like MACVLAN, it does not allow the container to talk to the host, but unlike MACVLAN, the containers will share the parent port's MAC address.
Routed: This requires static address configuration (like IPVLAN), but it does allow the container to talk to the host, and like IPVLAN it will share the parent port's MAC address (a rough example appears at the end of this reply).
Bridged: This requires some more complex setup, but if you create a new bridge and connect the parent port to it, then multiple containers can also be connected to the bridge which will act like a ‘switch’. This allows the containers to talk to the host, and will support SLAAC/DHCPv6. However like MACVLAN, the parent network port will need to support multiple MAC addresses.
The default bridged network created by LXD, called lxdbr0, works like the last option, but because it is not connected to the wider parent network, your containers will not get public IPs and will re-use the host's IP for outbound connections (NAT).
So you can see that there are various options, but the best option will depend on your requirements and any restrictions that your network provider enforces.
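As a rough illustration of the routed option mentioned above (the parent interface eth0 and the documentation-range address 2001:db8::10 are placeholders; you would use your host's external interface and an address your provider actually routes to it):

lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv6.address=2001:db8::10

The address is configured statically, since (as noted above) SLAAC/DHCPv6 is not used with routed NICs.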
I had a similar problem: an IPv6 address was allocated but an IPv4 address was not allocated reliably. I solved it by re-initializing the LXD snap and specifying not to use IPv6 addresses. The default is to enable both IPv4 and IPv6.
$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm, zfs, ceph, btrfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: no
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
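If you save that printed preseed to a file, you can re-apply the same answers non-interactively next time (the filename here is just an example):

sudo lxd init --preseed < preseed.yaml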
I think that lxc-dnsmasq is listening on the lxdbr0 network interface.
To verify, run something like the following. It will show the IP addresses of the interfaces; you want to check which process is listening on 10.46.233.1, port 53 (domain).
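For instance (assuming the bridge is lxdbr0 and 10.46.233.1 is the address shown in your lxc network list output):

ip addr show lxdbr0
sudo lsof -i @10.46.233.1:53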
I do not see here a reference to the network 10.46.233.1, which came from the output of the lxc network list command. Here, LXD (i.e. its dnsmasq) is listening on 10.20.110.1.
If this discrepancy is not due to a possible reinstall of LXD, then someone more knowledgeable should look into this.
If you do not use LXC (the separate low-level container tool, distinct from LXD), you can remove those packages. You would be using LXC if you run the command lxc-create to create containers.
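To check whether the LXC packages are installed, and to remove them if you do not need them, something like this should do (the exact package names vary a bit between Ubuntu releases):

apt list --installed 'lxc*'
sudo apt remove lxc lxc-utils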