Public IPv4 for a container using network bridge?

OK, can you show the output of ip a on the host, so I can see whether the 169.254.0.1 address was added? Also, is it pingable from inside the container?

I cannot ping 169.254.0.1 from inside the container.

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 90:1b:0e:8d:f8:70 brd ff:ff:ff:ff:ff:ff
    inet 138..16.132/26 brd 138..16.191 scope global eth0
       valid_lft forever preferred_lft forever
    inet 138..16.151/26 brd 138..16.191 scope global secondary eth0:0
       valid_lft forever preferred_lft forever
    inet6 2a01:4f8:171:2783::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::921b:eff:fe8d:f870/64 scope link
       valid_lft forever preferred_lft forever
3: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:18:8a:1a brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.10/16 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:1657:a2e5:b7e6::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe18:8a1a/64 scope link
       valid_lft forever preferred_lft forever
7: veth4161f417@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:72:02:00:18:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.0.1/32 scope global veth4161f417
       valid_lft forever preferred_lft forever
    inet6 fe80::fc72:2ff:fe00:1813/64 scope link
       valid_lft forever preferred_lft forever

Should I try enabling net.ipv4.ip_forward=1, just in case?
EDIT: no difference when I enable it.

EDIT 2: I tried to ping 10.10.10.10 and 169.254.0.1 from a new container that is only on lxdbr0, and I can ping both of those IPs.

Yeah, it won't hurt.
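If you want to flip it on for a quick test, the standard sysctl commands work (the drop-in file name below is just an example):

# enable IPv4 forwarding until the next reboot
sysctl -w net.ipv4.ip_forward=1

# check the current value
sysctl net.ipv4.ip_forward

# persist it across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf
sysctl --system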

What version of LXD and which host OS are you running? Is the container Debian Buster?

It's rather odd that you cannot ping the 169.254.0.1 address: the static default route is there, and you can see the address bound to the veth4161f417 interface on the host side. That path doesn't depend on forwarding; it's just straight veth communication.

Suffice to say I tested it here this morning on Debian and it worked fine.
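For comparison, the test setup looks roughly like this, assuming the routed NIC approach (which is what the 169.254.0.1 gateway and the static route suggest); the container name c1 and the address are just examples:

# attach a routed NIC carrying the public IP to the container
lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv4.address=198.51.100.10
lxc restart c1

# inside the container, the default route should point at the link-local gateway
lxc exec c1 -- ip route
# expected: default via 169.254.0.1 dev eth0

# and on the host, 169.254.0.1/32 sits on the container's veth,
# exactly as in your ip a output above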

Yes, I use Debian Buster as the host OS and as the container OS as well.

LXC info:

driver: lxc
driver_version: 4.0.4
firewall: xtables
kernel: Linux
kernel_architecture: x86_64
kernel_features:
  netnsid_getifaddrs: "false"
  seccomp_listener: "false"
  seccomp_listener_continue: "false"
  shiftfs: "false"
  uevent_injection: "true"
  unpriv_fscaps: "true"
kernel_version: 4.19.0-10-amd64
lxc_features:
  cgroup2: "true"
  devpts_fd: "false"
  mount_injection_file: "true"
  network_gateway_device_route: "true"
  network_ipvlan: "true"
  network_l2proxy: "true"
  network_phys_macvlan_mtu: "true"
  network_veth_router: "true"
  pidfd: "true"
  seccomp_allow_deny_syntax: "true"
  seccomp_notify: "true"
  seccomp_proxy_send_notify_fd: "false"
os_name: Debian GNU/Linux
os_version: "10"
project: default
server: lxd
server_clustered: false
server_name: Debian-83-jessie-64-minimal
server_pid: 15037
server_version: "4.6"
storage: dir
storage_version: "1"

Can I get access to the host?

Would you mind if that were via TeamViewer? I have a web server and a DB running there, so it's not a completely empty test server.

Sure

Issue resolved: the IP alias for the second public address was still bound on the host. Removing it allowed the static route to take effect.
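For anyone hitting the same thing: with this setup the public IP must not also be bound on the host uplink, or the host answers for it before the static route is used. The cleanup was along these lines (placeholder address, since the real one is redacted above):

# spot the stale secondary address on the uplink
ip addr show eth0

# remove it
ip addr del <second-public-ip>/26 dev eth0

# and drop the matching eth0:0 stanza from /etc/network/interfaces
# so the alias does not come back on reboot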

I have the same problem as you (two public IPs provided by the data center; I want to bind the second IP to an LXD container so the container can be reached from outside), but I still can't solve it. Could you write a tutorial on giving a container a public IP using the bridge?

My problem is also that the container cannot access the network.

I’m trying to get this to work but I can’t find the file /etc/network/interfaces to disable DHCP. I’m using the latest version of LXD on Amazon Linux. Is there a different way to disable DHCP for this setup? Thank you in advance for your help.

Please could you create a separate thread detailing your setup?
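In general, /etc/network/interfaces only exists on images that use ifupdown (e.g. Debian); many other images use systemd-networkd or another network manager instead. If yours uses systemd-networkd, disabling DHCP on eth0 would look roughly like this (a sketch; the file name is arbitrary, and I'm not sure what Amazon Linux ships):

# tell systemd-networkd not to run DHCP on eth0
cat > /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=no
EOF

systemctl restart systemd-networkd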