Running Tailscale in LXD VMs

I have a question about running Tailscale in LXD VMs. I see the following piece of advice on the Tailscale website: https://tailscale.com/kb/1130/lxc-unprivileged/

I have a couple of questions about this:

a. Do I still need to add “lxc.cgroup2.devices.allow: c 10:200 rwm” and “lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file” if I am running an LXD VM as opposed to a container?
b. All the advice I see online is in the Proxmox context and asks me to modify /etc/pve/lxc/112.conf. I am using LXD directly (via its API). What should I be doing to apply the above configuration, and which part needs to be on the host versus in the guest LXD VM?


I was able to solve this, but as I started writing things down for this post to describe what I did, I noticed that the VM no longer has the config keys I applied to it. So it's possible that I messed things up, but since these steps took me from not working to working, I am sharing them anyway:

lxc config show <vm_name> > tmp_file
# edit tmp_file to add the following lines under the config: section
#   lxc.cgroup2.devices.allow: c 10:200 rwm
#   lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
lxc config edit <vm_name> < tmp_file
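
As an aside, on a container (not a VM, where raw LXC options are ignored) the same options can be injected without hand-editing the YAML via the raw.lxc key. A minimal sketch, assuming a hypothetical container named tsct:

# on the host; raw.lxc takes liblxc "key = value" lines and only applies to containers
lxc config set tsct raw.lxc "lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file"
lxc restart tsct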

I then rebooted the VM and checked that /dev/net/tun existed and that I could reach the LXD VM from another machine via its Tailscale IP.
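
For reference, a quick way to check both of those, assuming the VM is named <vm_name> and has already joined the tailnet:

# on the host: confirm the TUN device exists inside the guest
lxc exec <vm_name> -- ls -l /dev/net/tun
# on another tailnet machine: confirm the VM answers over Tailscale
tailscale ping <vm_tailscale_ip>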

I will try configuring another vm from scratch over the coming days and update this thread with what I find.

I just tried a new LXD VM completely from scratch and am happy to report that things work perfectly out of the box with Tailscale. I just set up nginx and was immediately able to reach it using the Tailscale IP. No configuration changes are required at all, and the steps I listed above are completely unnecessary.
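
For anyone repeating this from scratch, the sequence looks roughly like this (the Ubuntu 22.04 image and the VM name tsvm are just examples, adjust to taste):

# on the host: create and start a fresh LXD VM, then get a shell in it
lxc launch ubuntu:22.04 tsvm --vm
lxc exec tsvm -- bash

# inside the VM: install Tailscale (official install script) and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# inside the VM: install a test service, then browse to the VM's Tailscale IP from another tailnet machine
sudo apt-get install -y nginx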

My initial troubles were linked to the fact that I was trying to get HashiCorp tools working inside the VM, and the issue of things not being reachable was a result of me not having configured those tools correctly. This should serve as a reminder to me to change one thing at a time :slight_smile:


Good news: there isn't much on the net about Ubuntu KVM/virbr0/LXC/lxdbr0 containers with regard to Tailscale, so I found this info useful, thanks for sharing. For grins, I installed Tailscale on my juju controller's LXD container, with lxdbr0 NAT'd 10.x addresses.

ubuntu@vmi971095:~$ juju status
Model       Controller           Cloud/Region         Version  SLA          Timestamp
controller  localhost-localhost  localhost/localhost  2.9.42   unsupported  02:02:45-05:00

Machine  State    Address        Inst id        Series  AZ  Message
0        started  100.95.144.17  juju-4f918e-0  focal       Running

ubuntu@vmi971095:~$ lxc list
+---------------+---------+-------------------------+------------------------------------------------------+-----------+-----------+
|     NAME      |  STATE  |          IPV4           |                         IPV6                         |   TYPE    | SNAPSHOTS |
+---------------+---------+-------------------------+------------------------------------------------------+-----------+-----------+
| juju-4f918e-0 | RUNNING | 100.x.x.17 (tailscale0) | fd7a:115c:a1e0:ab12:4843:cd96:625f:9011 (tailscale0) | CONTAINER | 0         |
|               |         | 10.105.137.177 (eth0)   | fd42:a4b5:77b7:d273:216:3eff:fe12:de65 (eth0)        |           |           |
+---------------+---------+-------------------------+------------------------------------------------------+-----------+-----------+

The juju controller's primary IP defaulted to the Tailscale IP… the 10.x address is still present and all is working well.
In addition I enabled DNS and subnet routes with:
tailscale up --accept-routes --accept-dns=true --advertise-routes=10.x.x.0/24 --snat-subnet-routes=true
Tested on bare metal and a standard VPS…
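
One general Tailscale prerequisite worth noting for the subnet-routes part (from the Tailscale subnet router docs, not specific to LXD or juju): the node advertising routes needs IP forwarding enabled, and the advertised routes must be approved in the admin console. Roughly:

# on the node running `tailscale up --advertise-routes=...`
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf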

Sharing is caring, cheers :)
– juju deploy ubuntu
– juju debug-log
This is interesting: Tailscale was installed after bootstrap, but the controller API is showing on the Tailscale ip:port below.
machine-1: 03:07:21 INFO juju.api connection established to "wss://100.95.144.17:17070/model/35e07502-ddf7-4f8e-8cea-89f9e34f918e/api"
machine-0: 03:07:21 INFO juju.apiserver.connection agent login: unit-tupac-0 for 35e07502-ddf7-4f8e-8cea-89f9e34f918e
machine-0: 03:07:21 INFO juju.apiserver.common setting password for "unit-tupac-0"
unit-tupac-0: 03:07:21 INFO juju Starting unit workers for "tupac/0"
unit-tupac-0: 03:07:21 INFO juju.worker.apicaller [35e075] "unit-tupac-0" successfully connected to "100.95.144.17:17070"
unit-tupac-0: 03:07:21 INFO juju.worker.apicaller [35e075] password changed for "unit-tupac-0"
unit-tupac-0: 03:07:21 INFO juju.worker.apicaller [35e075] "unit-tupac-0" successfully connected to "100.95.144.17:17070"

Subnet routes (the 10.x's) should be available, in theory, via Tailscale.
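
A quick way to test that from another tailnet machine (assuming the advertised route was approved and that machine ran tailscale up --accept-routes):

tailscale status          # the subnet-router node should show up as a peer
ping -c 3 10.105.137.211  # the NAT'd 10.x address of machine 1, reached via the advertised route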

Model       Controller           Cloud/Region         Version  SLA          Timestamp
controller  localhost-localhost  localhost/localhost  2.9.42   unsupported  03:14:42-05:00

App    Version  Status  Scale  Charm   Channel  Rev  Exposed  Message
tupac  20.04    active  1      ubuntu  stable   22   no

Unit      Workload  Agent  Machine  Public address  Ports  Message
tupac/0*  active    idle   1        10.105.137.211

Machine  State    Address         Inst id        Series  AZ  Message
0        started  100.95.144.17   juju-4f918e-0  focal       Running
1        started  10.105.137.211  juju-4f918e-1  focal       Running

NOTE the "wss://100.95.144.17:17070/model/35e…" address: a WSS WebSocket with TLS… pretty cool out of the box. ~ cheers!
