First I just want to say thank you to everyone here, especially Stefan for all of your diligent work on both LXD as well as supporting it for the rest of us. I’ve found this site to be extremely important to understanding and working with LXD.
I have set my default profile as such:
lxc profile show default
config:
  limits.memory.swap: "false"
  linux.kernel_modules: overlay,nf_nat,ip_tables,ip6_tables,netlink_diag,br_netfilter,xt_conntrack,nf_conntrack,ip_vs,vxlan
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw\n"
  security.nesting: "true"
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: local
    type: disk
name: default
used_by:
- /1.0/instances/jjtest-cluster-master
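For reference, a profile like the one above can also be built up key by key from the CLI. This is a sketch on my part (the profile name docker-priv is a placeholder, not something from this thread):

```shell
# Sketch: setting the same keys as the profile above via the lxc CLI.
# "docker-priv" is a hypothetical profile name chosen for illustration.
lxc profile create docker-priv
lxc profile set docker-priv security.nesting true
lxc profile set docker-priv security.privileged true
lxc profile set docker-priv limits.memory.swap false
lxc profile set docker-priv linux.kernel_modules overlay,nf_nat,ip_tables,ip6_tables,netlink_diag,br_netfilter,xt_conntrack,nf_conntrack,ip_vs,vxlan
# raw.lxc holds a newline-separated block, so build it with printf:
lxc profile set docker-priv raw.lxc "$(printf 'lxc.apparmor.profile=unconfined\nlxc.cap.drop=\nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw sys:rw\n')"
```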
I’m having problems running Docker containers in privileged mode inside LXD containers AND virtual machines. The latter was a bit of a surprise to me, as I thought the issue was perhaps just with containers within containers.
For instance, I can run this command in a VMware VM without an issue.
If I run it in an LXD container or in a virtual machine I get:
> ERROR: Rancher must be ran with the --privileged flag when running outside of Kubernetes
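For context, the kind of invocation that triggers this check looks something like the following. This is my guess at the shape of the command, since the exact one isn’t shown above:

```shell
# Hypothetical example of running Rancher as a privileged Docker
# container; the actual command from the original post is not shown.
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```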
I’ll admit I’ve had issues formatting the appropriate profile parameters, and a lot of confusion between the documentation for various LXD versions as it pertains to raw.lxc.
I’m running LXD 4.18 on Ubuntu 20.04 with the latest patches applied.
If someone could guide me on this issue, I’d be very grateful…
Do you know what that error means? What is it checking for?
An LXD VM runs a full standard kernel separate from the host (so AppArmor and raw.lxc settings from the host do not apply), but it looks like that’s not what this command is checking for.
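One cheap way to see what a “privileged” check might be looking at is to inspect the effective capability set of the container’s init process. This is an assumption on my part about what such a check could test, not necessarily what Rancher’s entrypoint actually does:

```shell
# Print the effective capability mask of PID 1. In a fully privileged
# container this is typically all bits set (e.g. 0000003fffffffff on a
# 5.x kernel); in an unprivileged container many bits are cleared.
# Assumption: this illustrates one plausible privilege check, not the
# exact test Rancher performs.
cap_eff=$(awk '/^CapEff:/ {print $2}' /proc/1/status)
echo "CapEff=${cap_eff}"
```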
@tomp -
I can’t confirm 100%, but it appears it’s trying to create a bridged interface and a new subnet (presumably for transport communication). Not finding much in the logs, sadly.
Here’s a snippet of what repeats on the parent VM/container with every Docker container restart.
For instance, on the VMware VM I see this in the logs on launch:
Sep 27 15:23:10 lv-juju-01 kernel: [600098.363489] docker0: port 1(veth7b861ba) entered blocking state
Sep 27 15:23:10 lv-juju-01 kernel: [600098.363494] docker0: port 1(veth7b861ba) entered disabled state
Sep 27 15:23:10 lv-juju-01 kernel: [600098.363677] device veth7b861ba entered promiscuous mode
Sep 27 15:23:10 lv-juju-01 kernel: [600098.364740] IPv6: ADDRCONF(NETDEV_UP): veth7b861ba: link is not ready
Sep 27 15:23:10 lv-juju-01 networkd-dispatcher[1100]: WARNING:Unknown index 5 seen, reloading interface list
Sep 27 15:23:10 lv-juju-01 systemd-networkd[793]: veth7b861ba: Link UP
Sep 27 15:23:10 lv-juju-01 systemd-timesyncd[690]: Network configuration changed, trying to establish connection.
Sep 27 15:23:10 lv-juju-01 systemd-timesyncd[690]: Synchronized to time server IP_ADDRESS:123 (10.192.14.10).
Sep 27 15:23:10 lv-juju-01 systemd-udevd[31085]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Sep 27 15:23:10 lv-juju-01 systemd-udevd[31085]: Could not generate persistent MAC address for veth4d099f4: No such file or directory
Sep 27 15:23:10 lv-juju-01 systemd-timesyncd[690]: Network configuration changed, trying to establish connection.
Sep 27 15:23:10 lv-juju-01 systemd-timesyncd[690]: Synchronized to time server <IP_ADDRESS>:123 (<IP_ADDRESS>).
Sep 27 15:23:10 lv-juju-01 systemd-timesyncd[690]: Network configuration changed, trying to establish connection.
Hm, I just gave this a quick spin in an LXD ubuntu:20.04 VM (so not in a container), and it works OOTB…? I just installed Docker (with a quick "curl -L get.docker.com | sh") and then ran Rancher as a privileged Docker container (and checked the logs and GUI).
I have the same issue in LXD 4/5… using the --privileged flag and a proper Docker profile for lxc/lxd.
Rancher runs on k8s or k3s - you may need to use a MicroK8s profile. It used to work with a standard LXC Docker profile, but things may have changed.
That may be why it works in a VM and not a container. Try the MicroK8s LXC setup, then install Docker and try again. A VM is probably best with Longhorn.
→ https://microk8s.io/docs/install-lxd
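The general shape of the flow from that guide is sketched below. The profile contents themselves should be taken from the linked docs (I haven’t reproduced them here), and the instance name mk8s is my own placeholder:

```shell
# Sketch of the MicroK8s-on-LXD flow from the linked guide:
# create a dedicated profile, launch an instance with it, then
# install MicroK8s inside the instance.
lxc profile create microk8s
lxc profile edit microk8s   # paste the profile from the linked docs here
lxc launch ubuntu:20.04 mk8s -p default -p microk8s
lxc exec mk8s -- snap install microk8s --classic
```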