Missing /lib/modules in container

Hello, I’ve created two instances for testing purposes: one container and one VM, both on a new btrfs storage pool. The folder /lib/modules is missing from the container instance, and the container reports the host machine’s kernel even though I’ve launched a different image.

I also noticed that the container comes up with eth0 while the VM comes up with enp5s0.

Host:

LXC 4.13
LXD 4.13

Host Kernel:

Linux aurax 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux

Container Kernel:

Linux aurax 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux

VM Kernel:

Linux aurax 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux

Profiles:

VM:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu impish amd64 (20210501_07:42)
  image.os: Ubuntu
  image.release: impish
  image.serial: "20210501_07:42"
  image.type: disk-kvm.img
  image.variant: default
  limits.cpu: "2"
  limits.memory: 2048MB
  linux.kernel_modules: overlay
  raw.lxc: |-
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto = proc:rw sys:rw
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 301d8806435b2f7cbc7c8296e264fd9e2e4ac694315371b96363434e6c0fa25e
  volatile.eth0.host_name: tap086084d1
  volatile.eth0.hwaddr: 00:16:3e:62:72:cc
  volatile.last_state.power: RUNNING
  volatile.uuid: 0ef99603-26f4-44cd-bcfb-7b9af5896e3d
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rancher
    type: disk
ephemeral: false
profiles:
- default
- rancher
stateful: false
description: ""

Container:

architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu impish amd64 (20210501_07:42)
  image.os: Ubuntu
  image.release: impish
  image.serial: "20210501_07:42"
  image.type: squashfs
  image.variant: default
  limits.cpu: "2"
  limits.memory: 2048MB
  linux.kernel_modules: overlay
  raw.lxc: |-
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto = proc:rw sys:rw
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 5494af6cc35f1213aef97428006b1f6ee072728a42c243096fff7d6ca0133a51
  volatile.eth0.host_name: veth5f73417e
  volatile.eth0.hwaddr: 00:16:3e:18:fa:50
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 7d356b72-1d2e-4d09-93ff-3cce7f0d4df6
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: rancher
    type: disk
ephemeral: false
profiles:
- default
- rancher
stateful: false
description: ""

LXD init dump:

config: {}
networks:
- config:
    ipv4.address: 10.188.128.1/24
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: bridge
  project: default
storage_pools:
- config:
    size: 30GB
    source: /var/snap/lxd/common/lxd/disks/rancher.img
  description: ""
  name: rancher
  driver: btrfs
profiles:
- config:
    linux.kernel_modules: bridge,br_netfilter,ip_tables,ip6_tables,ip_vs,netlink_diag,nf_nat,overlay,xt_conntrack
    raw.lxc: "lxc.aa_profile = unconfined                                                                                       \nlxc.cgroup.devices.allow
      = a                                                                                      \nlxc.mount.auto=proc:rw
      sys:rw                                                                                     \nlxc.cap.drop
      =                                                                                                    "
    security.nesting: "true"
    security.privileged: "true"
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
  name: default
- config:
    limits.cpu: "2"
    limits.memory: 2048MB
    linux.kernel_modules: overlay
    raw.lxc: |-
      lxc.apparmor.profile = unconfined
      lxc.cgroup.devices.allow = a
      lxc.mount.auto = proc:rw sys:rw
      lxc.cap.drop=
    security.nesting: "true"
    security.privileged: "true"
  description: ""
  devices:
    root:
      path: /
      pool: rancher
      type: disk
  name: rancher
projects:
- config:
    features.images: "true"
    features.networks: "true"
    features.profiles: "true"
    features.storage.volumes: "true"
  description: Default LXD project
  name: default

That’s all perfectly normal.

The definition of a container is that it’s a virtual environment provided by the host kernel, so a container can never run a different kernel than the host and therefore never needs /lib/modules. Similarly, containers do not have a PCI bus, so they get the standard naming for ethernet devices (ethX).

Virtual machines are full virtual computers: they boot their own kernel and so have a /lib/modules. Because they are virtual computers, they also have a PCI bus, and their network devices get named based on that (enp5s0).
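
You can confirm this from the host with a quick check (a sketch; c1 and v1 stand in for your container and VM names):

uname -r                   # host kernel
lxc exec c1 -- uname -r    # container: always the same release as the host
lxc exec v1 -- uname -r    # VM: whatever kernel the guest image booted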

Thank you @stgraber, but some applications fail module validation.

How can I overcome that? By linking /lib/modules?

If you’re using software that needs to look at or load modules inside a container, you’re going to have a bad day down the line. If all they care about is /lib/modules existing, then create it as an empty directory.

If they need actual kernel modules in there, then that software is likely not a good fit for a container; use a VM instead (and/or report a bug against that software so it properly handles containers).
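
For the empty-directory case mentioned above, that’s a one-liner from the host (a sketch, assuming a container named ftkc):

lxc exec ftkc -- mkdir -p /lib/modules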

Good point @stgraber

I think my profile configuration is broken:

sudo lxc profile edit default < ../config/lxc-profile-default.yaml

cat ../config/lxc-profile-default.yaml
config:
  linux.kernel_modules: bridge,br_netfilter,ip_tables,ip6_tables,ip_vs,netlink_diag,nf_nat,overlay,xt_conntrack
  raw.lxc: |-
    lxc.kmsg=1
    lxc.aa_profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop =
  security.nesting: "true"
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: default

I have now used:

lxc config device add ftkc modules disk source=/lib/modules path=/lib/modules

which reported “Device modules added to ftkc” and mounts the host’s /lib/modules into the container.

But I’m still receiving errors.

Would it be possible to hardcode the /lib/modules mount in a profile?
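
(For reference, I assume the profile-level equivalent of the per-instance command above would be something like:

lxc profile device add default modules disk source=/lib/modules path=/lib/modules

which would apply the mount to every instance using the default profile.)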

The application logs (k3s distribution of Kubernetes) show:

May  3 04:39:21 ftkc k3s[6565]: time="2021-05-03T04:39:21.091569661Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: WARNING: Module ip_vs not found in directory /lib/modules/4.19.0-16-amd64`, error: exit status 1"
May  3 04:39:31 ftkc modprobe[6641]: modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/4.19.0-16-amd64
May  3 04:39:31 ftkc modprobe[6642]: modprobe: FATAL: Module overlay not found in directory /lib/modules/4.19.0-16-amd64
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.215777    6643 proxier.go:651] Failed to read file /lib/modules/4.19.0-16-amd64/modules.builtin with error open /lib/modules/4.19.0-16-amd64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.217197    6643 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.218442    6643 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.219649    6643 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.220843    6643 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: W0503 04:39:34.222034    6643 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
May  3 04:39:34 ftkc k3s[6643]: time="2021-05-03T04:39:34.250017919Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: WARNING: Module ip_vs not found in directory /lib/modules/4.19.0-16-amd64`, error: exit status 1"
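
Since the container shares the host kernel, these modules can only come from the host side. The linux.kernel_modules entries in the profiles above already ask LXD to load the listed modules on the host when the instance starts; the same can be done manually on the host (a sketch, taking the module names from the log above):

sudo modprobe -a br_netfilter overlay ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack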

@Tal_Hazan
I am facing this issue as well. Did you find any solution to this problem?