Incus proxy bind to host address issue

Hi, as expected I have an issue creating proxy connections when trying to bind to the host address (“expected” because I’d tried this in the past, hit problems, and ignored it because I didn’t need it).
Now I need it.

I have an instance called “Demo” with a network section that looks like this:

      "network": {
        "eth0": {
          "addresses": [
            {
              "family": "inet",
              "address": "10.103.0.3",
              "netmask": "24",
              "scope": "global"
            },
          ],

And when I try to set up a proxy I get:

# incus config device add demo proxy-to-web proxy \
    listen=tcp:192.168.234.4:85 \
    connect=tcp:10.103.0.3:80 \
    nat=true \
    bind=host
Error: Failed to start device "proxy-to-web": Failed to start device "proxy-to-web": Connect IP "10.103.0.3" must be one of the instance's static IPv4 addresses

I’m at a bit of a loss: the IP is not marked as dynamic (and even if it were, it would be pinned, which really “should” be Ok), the interfaces all exist and ping … and the ports aren’t in use elsewhere.

To be honest I have tried this in the past and failed, but this is a new machine with the current Zabbly install, and as far as I can see it all seems to be “in order”.

Any idea what I’m doing wrong?

(I’ve tried many combinations of this based on dhcp interfaces and “incus config device override demo eth-1 ipv4.address=10.103.0.3”, but this is specifically set as static inside the instance …)
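
(For what it’s worth, I believe the check is against the NIC device’s ipv4.address in the instance config rather than whatever is live inside the container, so this is the view that matters:)

# What Incus itself considers the instance's static addresses
incus config device show demo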

Gah, so if I set up a blank profile with no network, then:

incus config device add demo eth0 nic network=private ipv4.address=10.103.0.3 name=eth0
incus config device add demo proxy-to-web proxy listen=tcp:91.99.102.97:85 connect=tcp:10.103.0.3:80 nat=true bind=host

That works … so it’s looking like it doesn’t like overridden interfaces … (?!)

Spoke too soon … it set up Ok, but won’t start …

# incus config device add npm eth0 nic network=private ipv4.address=10.103.0.100 name=eth0
# incus config device add npm http proxy listen=tcp:91.99.102.97:80 connect=tcp:10.103.0.100:80 nat=true bind=host
# incus start npm

Error: Failed to start device "http": Connect IP "10.103.0.100" must be one of the instance's static IPv4 addresses
Try `incus info --show-log npm` for more info

incus config show npm
...
  eth0:
    ipv4.address: 10.103.0.100
    name: eth0
    network: private
    type: nic
...

Is private here an OVN network?

I think the error message is wrong but the failure is correct.

Basically we can only handle nat=true if the host is guaranteed to be the ingress and egress gateway and is the one handling that subnet itself.

If using OVN, that can’t be the case. The host doesn’t normally have an IP on the network itself and isn’t the one handling NAT. NAT may also be handled by another server entirely (active chassis).

Ahh, Ok. It is an OVN.
I guess in that case I need two interfaces / networks … thanks, will experiment.
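
(For reference, the network driver shows up straight in the network config:)

# "type: ovn" is the giveaway that the host isn't the gateway for this subnet
incus network show private | grep ^type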

Mmm, curiouser and curiouser said Alice … so I have it all configured such that it should be working, but it’s not opening the natted #443 port on the Incus host (but it is opening the non-nat #81):

# incus ls -cns4t npm
+------+---------+---------------------+-----------------+
| NAME |  STATE  |        IPV4         |      TYPE       |
+------+---------+---------------------+-----------------+
| npm  | RUNNING | 10.103.0.100 (eth0) | CONTAINER (APP) |
|      |         | 10.100.0.100 (eth1) |                 |
+------+---------+---------------------+-----------------+

# incus config show npm
...
  eth0:
    ipv4.address: 10.103.0.100
    name: eth0
    network: private
    type: nic
  eth1:
    ipv4.address: 10.100.0.100
    name: eth1
    network: public
    type: nic
...
  Admin:
    bind: host
    connect: tcp:127.0.0.1:81
    listen: tcp:0.0.0.0:81
    nat: "false"
    type: proxy
  https:
    bind: host
    connect: tcp:10.100.0.100:443
    listen: tcp:a.b.c.d:443
    nat: "true"
    type: proxy

# curl https://10.100.0.100/
curl: (35) OpenSSL/3.0.16: error:0A000458:SSL routines::tlsv1 unrecognized name
### So this IS working

# netstat -natp | grep LISTEN
tcp        0      0 0.0.0.0:660             0.0.0.0:*               LISTEN      1248/tincd          
tcp        0      0 10.100.0.1:53           0.0.0.0:*               LISTEN      32624/dnsmasq       
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1382/sshd: /usr/sbi 
tcp        0      0 192.168.234.4:8443      0.0.0.0:*               LISTEN      32548/incusd        
tcp6       0      0 :::660                  :::*                    LISTEN      1248/tincd          
tcp6       0      0 :::81                   :::*                    LISTEN      31731/incusd        
tcp6       0      0 :::22                   :::*                    LISTEN      1382/sshd: /usr/sbi 

Yet no open #443.
Nothing in the Incus log file …

# incus network list -c ntm4d | grep YES
| UPLINK         | physical | YES     |               |               |
| private        | ovn      | YES     | 10.103.0.1/24 | OVN Network   |
| public         | bridge   | YES     | 10.100.0.1/24 | Public Bridge |

If I set NAT to “NO” for #443, it works (!)

Ok, gottit. Many thanks for your help @stgraber, not sure I would have got there at all without you. :slight_smile:

if the host is guaranteed to be the ingress and egress gateway

Turns out I had asymmetrical routing on the listening port. It was accepting the config because the address the proxy connects to was on the same network as the host bridge, but the default route inside the instance was pointing back out via the interface connected to the OVN network.
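
For anyone else chasing the same thing, the giveaway is the instance’s default route: it needs to leave via the same network the proxy connects to (the host bridge in my case), not via the OVN interface. If the image ships iproute2 it’s a one-liner to check; the stock NPM image doesn’t (more on that below), but the kernel table is still readable via /proc:

# With iproute2 available inside the instance
incus exec npm -- ip route show default
# Without it, the interface shown on the 00000000 (default) destination row
# is the one carrying the default route
incus exec npm -- cat /proc/net/route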

So, I now have real IP logging inside Nginx Proxy Manager … :slight_smile:

Suggestions

  • The error message that’s displayed when you try to use an inappropriate IP address may, as you pointed out, not be optimal. A pointer in the error along the lines of your comment about the ingress and egress gateway would probably be a very useful aid.

  • The test itself seems to be flawed in that it assumes traffic to the port will be symmetrical, and as a result only tests the flow in one direction … so it can end up validating a configuration that is in effect “invalid” and will subsequently fail silently by not opening the host port.

Note (security)

  • I tried to set up a listener on port 443 on the VPN interface I use for management purposes. This fails in the same way. I’m not entirely sure “why” this is the case, but it does mean that I apparently can’t route incoming connections from my VPN into the instance, at least not with nat=true … which is a little confusing, as it is something I want to be able to do. The use-case here is access to the Proxy Manager UI, via the VPN, validated inside the container as requests coming only from the VPN IP range. (A possible non-NAT fallback is sketched below.)
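
One possible fallback (untested sketch; 10.8.0.1 is just a placeholder for a VPN address on the host) is a plain non-NAT proxy like the existing Admin one. The container then sees the proxy’s address rather than the real VPN client IP, so the IP-range check has to move to the host / VPN side:

# Non-NAT proxy bound to the VPN address on the host; traffic is relayed by
# the incus proxy process, so nat=true isn't needed
incus config device add npm admin-vpn proxy \
    listen=tcp:10.8.0.1:81 \
    connect=tcp:127.0.0.1:81 \
    nat=false bind=host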

Note (OCI containers)

  • Are great and I’m starting to use them everywhere, although the migration issue with attached volumes is a bit of a problem atm.
  • This issue with OVN, proxy and nat means that two interfaces inside the container seem to be required, yet stock docker images often seem to be set up for a single interface. I understand I can build my own images, but one of the attractions of docker images is that I can let someone else do the image maintenance, then pull standard images and just run them. If anybody is aware of a mechanism to automatically configure a second static IP inside a docker container, that information would be very useful. At the moment I seem to be stuck with a script that I run every time something touches the container:
# cat install_ip.sh 
#!/usr/bin/env bash
#
# Push the host's "ip" and "ping" binaries (plus the shared libraries they
# need) into the stock NPM container, then configure the second interface.
incus file push /bin/ip npm/bin/ip
incus file push /lib/aarch64-linux-gnu/libbpf.so.1 npm/lib/aarch64-linux-gnu/libbpf.so.1
incus file push /lib/aarch64-linux-gnu/libelf.so.1 npm/lib/aarch64-linux-gnu/libelf.so.1
incus file push /lib/aarch64-linux-gnu/libmnl.so.0 npm/lib/aarch64-linux-gnu/libmnl.so.0
incus file push /usr/bin/ping npm/usr/bin/ping
# Static address and link up on eth1, plus a route via the OVN gateway
incus exec npm -- ip addr add 10.103.0.100/24 dev eth1
incus exec npm -- ip link set dev eth1 up
incus exec npm -- ip route add 10.4.0.0/24 via 10.103.0.1
  • Alternatively, is there a way to run this automatically whenever Incus reconfigures the container’s networking? Or is it just the case that I need to set up my own docker repo and maintain my own images?
    (yes, you read that right, the stock NPM instance doesn’t include “ip” or “ping” :frowning: )

Just in case anyone else has the same or a similar problem: Nginx Proxy Manager (in particular) uses an init system called “s6”, which seems to be popular with lightweight docker images, but the image is missing some key tools like “ip” and “ping”, which makes setting up a second interface a little tricky. This is what I’ve ended up with for now. It’s a bit of a kludge, but it survives instance restarts and docker image rebuilds, so for now it’s “a” way to work with stock docker images.

This will not work “as-is” for you; you will need to tweak it.

First you need an instance, and we’re assuming that it has a persistent “/data” attachment. Then you need to assign it a static IP address for each of your interfaces; the first (eth0) needs to be on your host bridge network, the second on your OVN network. (Otherwise you won’t be able to do a proxy with nat bound to the host, which means you won’t see real client IPs in the container, which means you won’t have any IP-based access control or useful logging …)

# Assume "private" is attached to your OVN network range on 10.103.0.0/24
# Assume "public" is attached to your host network bridge on 10.100.0.0/24
incus config device add (instance) eth0 nic network=public ipv4.address=10.100.0.100 name=eth0
incus config device add (instance) eth1 nic network=private ipv4.address=10.103.0.100 name=eth1

The following script should:

  • blindly detach and delete volume “(instance)-cont-init.d”
  • (re)create that same volume
  • create a 10-config-eth1.sh script and copy it into this volume
  • copy binaries for “ip” and “ping” to the instance’s /data/s6/bin folder
  • copy the missing shared libraries to the instance’s /data/s6/lib folder

This makes the blind assumption that the host distro (and architecture) is the same as the container’s; if not, you may need to tweak the ip and ping binaries and the associated shared libraries.
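
If they don’t match, ldd on the host shows exactly which libraries each binary needs to travel with it:

# List the shared libraries the host binaries link against; on my aarch64
# host this is where libbpf, libelf and libmnl come from
ldd /usr/sbin/ip
ldd /usr/bin/ping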

make_eth1.sh

#!/usr/bin/env bash
#
ADDRESS="10.103.0.100/24"
GATEWAY="10.103.0.1"
ROUTING="10.4.0.0/24"
INSTANCE="npm"
#
# Blindly detach and delete any previous init volume for this instance
incus storage volume detach default ${INSTANCE}-cont-init.d ${INSTANCE} > /dev/null 2>&1
incus storage volume delete default ${INSTANCE}-cont-init.d > /dev/null 2>&1
#
# (Re)create it and attach it over /etc/cont-init.d inside the instance
incus storage volume create default ${INSTANCE}-cont-init.d
incus storage volume attach default ${INSTANCE}-cont-init.d ${INSTANCE} /etc/cont-init.d
#
# Generate the s6 cont-init.d script; $ADDRESS, $GATEWAY and $ROUTING expand
# now, while the escaped \$ variables are left for the generated script
cat - > 10-config-eth1.sh << EOF
#!/usr/bin/env bash
#
# Script for s6-init based systems to initialise
# a second interface with a static IP address
#
IP="/data/s6/bin/ip"
export LD_LIBRARY_PATH=/data/s6/lib

echo "\$(date) Attempting to configure eth1 with static IP $ADDRESS"

\${IP} link set eth1 up
\${IP} addr add "${ADDRESS}" dev eth1
if [ -n "${GATEWAY}" ]; then
  echo "\$(date) - Adding route ${ROUTING} via ${GATEWAY} on eth1"
  \${IP} route add ${ROUTING} via ${GATEWAY} dev eth1
fi

echo "\$(date) Static routing complete"

EOF
chmod +x 10-config-eth1.sh
incus file push 10-config-eth1.sh ${INSTANCE}/etc/cont-init.d/
# Create /data/s6/{bin,lib} inside the instance, then push the binaries and
# the shared libraries they depend on
incus file create --type=directory ${INSTANCE}/data/s6
incus file create --type=directory ${INSTANCE}/data/s6/bin
incus file create --type=directory ${INSTANCE}/data/s6/lib
incus file push /usr/sbin/ip ${INSTANCE}/data/s6/bin/ip
incus file push /usr/bin/ping ${INSTANCE}/data/s6/bin/ping
incus file push /lib/aarch64-linux-gnu/libbpf.so.1 ${INSTANCE}/data/s6/lib/libbpf.so.1
incus file push /lib/aarch64-linux-gnu/libelf.so.1 ${INSTANCE}/data/s6/lib/libelf.so.1
incus file push /lib/aarch64-linux-gnu/libmnl.so.0 ${INSTANCE}/data/s6/lib/libmnl.so.0

Then on the host:

bash make_eth1.sh

When you restart the container, the s6 init system should see this new script and run it “first” as part of the container boot sequence. The script uses the pushed “ip” binary to add the specified address to “eth1”, along with the specified OVN “routing” address (so you’ll get a default route on the host bridge and a route to your OVN network on eth1).
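
A quick sanity check after a restart is the same listing as earlier:

# eth1 should now show the static OVN address alongside eth0
incus ls -cns4 npm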

This relies on s6 apparently looking in /etc/cont-init.d for startup scripts, which it appears to do even for containers that don’t actually ship with an /etc/cont-init.d directory. We’re also assuming that nothing else is using this folder; if it is, again a tweak will be needed. One option would be to use a file path and bind to the actual file rather than the folder.
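
That would look something like this (untested sketch, the host-side path is just an example); Incus disk devices can bind individual files as well as directories:

# Untested: bind a single host-side file straight onto the s6 init path,
# instead of dedicating a whole volume to /etc/cont-init.d
incus config device add npm init-script disk \
    source=/srv/npm/10-config-eth1.sh \
    path=/etc/cont-init.d/10-config-eth1.sh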

It seems like a lot of hoop jumping, but from a long-term maintenance perspective using docker images seems like the way to go, and the OCI setup just has a nice consistent Incus feel to it.

What I’d REALLY like to do is push the init script into /data, scrap the cont-init.d volume, and instead be able to map a symlink from /etc/cont-init.d/10-config-eth1.sh to /data/10-config-eth1.sh; that would be a neater / more efficient solution. Not sure how doable virtual symlinks are tho’ … :slight_smile:

Thanks for sharing your road to success in adding two interfaces to OCI images.

I struggled with this a few times and took different approaches due to features that didn’t exist in Incus 6.3 at the time. Since then a lot of things have changed / improved. Your solution encourages me to revisit my setup the next time I rebuild my OCI images.

Nowadays I would probably play around with providing my own init script to perform all of this, using the OCI entrypoint configuration which came with 6.11:

incus config show nginx | grep oci\\.
  oci.cwd: /
  oci.entrypoint: /docker-entrypoint.sh nginx -g 'daemon off;'
  oci.gid: "0"
  oci.uid: "0"

I would use normal apt calls to install the required packages, clean up all temp files / caches etc., and bring the interface up like you did.
An alternative way might be to just mount the script into the right location. Yes, you can mount individual files from the Incus host into the instance.

It’s a bit cleaner, simplifies the whole process, and might not require a storage volume at all.
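
Roughly something like this, perhaps (untested, the wrapper path is only an example, and it assumes iproute2 has been installed as described):

# Untested sketch: point the entrypoint at a wrapper that brings up eth1
# and then execs the image's original entrypoint
incus config set nginx oci.entrypoint="/data/entrypoint-wrapper.sh"

# where /data/entrypoint-wrapper.sh could be as simple as:
#   #!/bin/sh
#   ip addr add 10.103.0.100/24 dev eth1
#   ip link set eth1 up
#   exec /docker-entrypoint.sh nginx -g 'daemon off;'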

Again, your mileage varies depending on what you want to achieve…

Thanks again for sharing

I just discovered (!) I can actually do what I wanted as per my comment about symlinks, i.e. I can effectively mount a path from within a volume; it just seems to be a feature not supported by the UI (unless I’m missing something).

When I came to configure the stock NGINX docker container, the recommended approach to configuration seems to be to override /etc/nginx/conf.d. What I’ve been able to do is create a volume called npm-data, into which I’m putting my configuration and some static web pages, then:

incus config device add nginx npm-data disk pool=default source=npm-data path=/data
incus config device add nginx npm-conf disk pool=default source=\
    npm-data/static-nginx/conf.d path=/etc/nginx/conf.d readonly=true

So npm-data contains a bunch of stuff, but I’m able to mount static-nginx/conf.d from within that volume onto /etc/nginx/conf.d inside the container, i.e. I can have one custom storage volume, then mount different paths from that volume at different points within the container.
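
And the same trick should work for any other sub-path in that volume, e.g. (the directory layout here is just my own):

# Another sub-path from the same custom volume, mounted at a different point
incus config device add nginx npm-html disk pool=default \
    source=npm-data/static-nginx/html path=/usr/share/nginx/html readonly=true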

This means I only need ONE custom volume per container, rather than one volume per custom mount point. This is WAY more efficient and much easier to manage. I’m getting the feeling that using OCI containers in anger is actually way easier and more flexible than it seems at first sight … maybe a docs issue :wink:

I’ve now managed to get NPM, GOACCESS and NGINX stock containers all working in harmony against a bunch of services accessed over the OVN network via the IC. It’s all looking pretty happy atm.

Incidentally (!) after 1+ years of struggling with Incus UI and server certificates, I’ve now moved to SSO with Nginx Proxy Manager on the front end (with real SSL certs), and it’s soooo much nicer … :slight_smile: