Proxy devices for VMs

I’ve been trying to find out whether proxy devices are supported for VMs yet, but with so much development happening and the information mostly living in release notes, it’s hard to figure out.

I don’t get any errors when adding a proxy device.

root@dream:~# lxc config device add vmtester aa-web proxy connect=tcp:0.0.0.0:4567 listen=tcp:0.0.0.0:25923
Device aa-web added to vmtester

However when querying the config it’s not in there.

root@dream:~# lxc config show vmtester | grep devices -A 20
devices:
  root:
    limits.max: 40iops
    path: /
    pool: test-pool
    size: 200GB
    type: disk

When I do the same with a container, it does show up in the config there.

My questions:

  • Is the return message from the initial add command simply wrong, and is the device not supported yet?
  • If so, what’s the recommended best practice for port-forwarding to VMs?

Not supported yet it seems.

You can check here:
https://linuxcontainers.org/lxd/docs/master/instances#type-proxy
Every category or config key will tell you what it supports:
Supported instance types: container

Follow the latest announcements in the forum; when a new version of LXD is released, the announcement will contain information like this.

@Maran yes, proxy devices are not supported yet for VMs. I’ll investigate why you were able to add a proxy device, as it should have resulted in an error.

We are probably going to end up adding limited proxy device support for VMs, at least when using nat=true mode, which avoids the need to pass any file handles into the instance.


When I try to add a proxy device to a VM I get an “Unsupported device type” error, as expected:

lxc init images:ubuntu/focal v1 --vm
lxc config device add v1 aa-web proxy connect=tcp:0.0.0.0:4567 listen=tcp:0.0.0.0:25923
Error: Invalid devices: Device validation failed "aa-web": Unsupported device type

Can you show me the output of lxc config show vmtester, as well as the version of LXD you are running, please?

Thanks

Sure thing, here is my output:

root@dream:~# lxd --version
4.4
root@dream:~# lxc config show vmtester
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200820_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200820_07:42"
  image.type: disk-kvm.img
  limits.cpu: "3"
  limits.memory: 4500MB
  volatile.base_image: ea80c860e1b4ac5a59a84ba34c7f489a6598b72efe0940f2173c60aaf76695cd
  volatile.eth0.host_name: tap20c0aaf2
  volatile.eth0.hwaddr: 00:16:3e:cf:ae:f8
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: e6c00fcf-b1d1-4078-9ac4-f5cae70d55ed
devices:
  root:
    limits.max: 40iops
    path: /
    pool: test-pool
    size: 200GB
    type: disk
ephemeral: false
profiles:
- bysh
stateful: false
description: ""

We are probably going to end up adding limited proxy device support for VMs, at least when using nat=true mode, which avoids the need to pass any file handles into the instance.

I’m sorry, but what does this mean exactly? How should I be port-forwarding into VMs, and why is this structurally different from containers? Isn’t the technique behind it the same? Aren’t you simply taking a packet destined for one IP and port and forwarding it on to another?

Sorry for all the questions, just trying to wrap my head around things :slight_smile:

I can’t see that the device has been added to your VM. Are you certain you added it to this VM and not to another container or profile? Can you also show the output of lxc config show vmtester --expanded, please?

As for the difference between proxy device modes: the ‘normal’ proxy mode creates a listening socket on the LXD host (or in the container, depending on the configuration), and then for each inbound connection to that socket it switches network namespace into the container and opens a new connection inside the container to the specified target address.

This has several advantages:

  • It doesn’t require the container’s listening socket to be reachable from the LXD host (i.e. the service can be listening on 127.0.0.1 inside the container).
  • It doesn’t require a static address (for the reason above).
  • It allows protocol translation (e.g. TCP to UDP).
  • It allows binding to the wildcard address on the listening socket, which NAT mode doesn’t allow.

However, the downside is that it runs a small helper process (forkproxy) per proxy device.
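
To make that concrete, here is a minimal sketch on a container (hypothetical name c1, with a web server bound to 127.0.0.1:80 inside it); the host listens on the wildcard address and forkproxy reaches the loopback-only service inside the container:

lxc config device add c1 web proxy listen=tcp:0.0.0.0:8080 connect=tcp:127.0.0.1:80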

Because LXD cannot switch network namespaces into a VM (a VM has no namespaces to enter), we cannot use forkproxy as-is.

The alternative is to do ‘true’ network level port forwarding, which the proxy device also supports when nat=true is specified as an option.

In NAT mode, the instance needs a static IP (so that the NAT rules can be added on the host), and the service inside the instance must be listening on that IP. It is this particular mode that we may be able to add for VMs, as it uses the VM’s existing network connection. However, it is not supported at this time.
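
For comparison, a rough sketch of NAT mode on a container (names and addresses are hypothetical): the NIC gets a static IP so the NAT rules have a fixed target, and the listen address is a specific host address rather than the wildcard:

lxc config device override c1 eth0 ipv4.address=10.144.53.10
lxc config device add c1 web-nat proxy nat=true listen=tcp:192.0.2.10:80 connect=tcp:10.144.53.10:80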

Thanks for the elaborate answer!

No, it’s not added; however, the message Device aa-web added to vmtester got me chasing ghosts for a while because it seemed to imply it was. Sorry if that was unclear.

root@dream:~# lxc config show vmtester --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20200820_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20200820_07:42"
  image.type: disk-kvm.img
  limits.cpu: "3"
  limits.memory: 4500MB
  limits.memory.enforce: hard
  limits.memory.swap: "false"
  security.nesting: "true"
  user.user-data: |
    #cloud-config
    phone_home:
      url: http://x/api/internal/containers
      tries: 15
      post:
        - hostname
        - instance_id
    #package_upgrade: true
    packages:
      - apache2
      - unzip
      - wget
      - ruby2.5
      - build-essential
      - libsqlite3-dev
      - sqlite3
      - ruby-dev
    timezone: Europe/Amsterdam
  volatile.base_image: ea80c860e1b4ac5a59a84ba34c7f489a6598b72efe0940f2173c60aaf76695cd
  volatile.eth0.host_name: tap20c0aaf2
  volatile.eth0.hwaddr: 00:16:3e:cf:ae:f8
  volatile.last_state.power: RUNNING
  volatile.vm.uuid: e6c00fcf-b1d1-4078-9ac4-f5cae70d55ed
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    limits.max: 40iops
    path: /
    pool: test-pool
    size: 200GB
    type: disk
ephemeral: false
profiles:
- bysh
stateful: false
description: ""

If the proxy device is not supported at this time, would adding iptables rules to forward incoming traffic to the VMs be a good idea, or is there some other trick I could use to get my traffic where it needs to go?

Yeah I can’t reproduce that output here, so something must have been different.

Yep, adding manual iptables DNAT rules would achieve the same effect that a NAT-mode VM proxy device should eventually provide.
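
As a minimal sketch, assuming the VM has a static address of 10.144.53.20 on lxdbr0 (the address is hypothetical) and reusing the ports from the original post:

iptables -t nat -A PREROUTING -p tcp --dport 25923 -j DNAT --to-destination 10.144.53.20:4567

Forwarding between the host’s external interface and lxdbr0 also has to be allowed for the DNAT’d traffic to reach the VM, and the rule needs to be persisted across reboots.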