Problem creating a nested container

  • This is my container's info:

Name: prueba2
Location: none
Remote: unix://
Architecture: x86_64
Created: 2019/04/01 11:47 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 2645
Ips:
  eth0: inet    10.142.118.237    vethADP8BM
  eth0: inet6   fd42:3053:8b55:58c2:216:3eff:feed:6e27    vethADP8BM
  eth0: inet6   fe80::216:3eff:feed:6e27    vethADP8BM
  lo:   inet    127.0.0.1
  lo:   inet6   ::1
  lxdbr0: inet  10.245.84.1
  lxdbr0: inet6 fd87:9585:53a:c557::1
  lxdbr0: inet6 fe80::84da:a8ff:fea3:dfc7
Resources:
  Processes: 35
  CPU usage:
    CPU usage (in seconds): 37
  Memory usage:
    Memory (current): 457.59MB
    Memory (peak): 920.69MB
  Network usage:
    eth0:
      Bytes received: 191.68MB
      Bytes sent: 2.53MB
      Packets received: 76644
      Packets sent: 31911
    lo:
      Bytes received: 0B
      Bytes sent: 0B
      Packets received: 0
      Packets sent: 0
    lxdbr0:
      Bytes received: 0B
      Bytes sent: 1.68kB
      Packets received: 0
      Packets sent: 13

Do I need to do anything more than setting security.nesting to true?
Thanks in advance.

  • And this is my lxc info:

config:
  core.https_address: '[::]:8443'
  core.trust_password: true
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - 192.168.150.94:8443
  - 10.222.234.1:8443
  - 10.142.118.1:8443
  - '[fd42:3053:8b55:58c2::1]:8443'
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIICEDCCAZWgAwIBAgIQbAyVWEsOEJhzx6dFhGbeLzAKBggqhkjOPQQDAzA+MRww
    GgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMR4wHAYDVQQDDBVyb290QGRhbmll
    bC1CMjUwTS1EM0gwHhcNMTkwMzIyMDgwNjAzWhcNMjkwMzE5MDgwNjAzWjA+MRww
    GgYDVQQKExNsaW51eGNvbnRhaW5lcnMub3JnMR4wHAYDVQQDDBVyb290QGRhbmll
    bC1CMjUwTS1EM0gwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQq435aSRb2+6SUcQXn
    hZLF/O9XSi3/38/pxfIMcJpPJnscCZ0RGLqSBfvX14VBjlU2D70tR/ywABs9vbEK
    MG2Zfq6b1O000H7lopbsX7E9l3VOXqRnwoPplLa2MYoJ5h+jWDBWMA4GA1UdDwEB
    /wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCEGA1Ud
    EQQaMBiCEGRhbmllbC1CMjUwTS1EM0iHBMColl4wCgYIKoZIzj0EAwMDaQAwZgIx
    AIWc9L8+iE7X09Oai7zhifBZ+nmrYxxZkuJjiEiN7W11sFP+PGbEQN/X2CrfYqnI
    XgIxAME6bFvPoHxHJFFvC3AZUGwyf/cHCCgytV7UMR9JH8scBWhpoOdCU3J6cx27
    puJHUw==
    -----END CERTIFICATE-----
  certificate_fingerprint: 3bac98398cfa886fc3e8a2652cf328a474c471c90b8c9d9cba2edd44b351e217
  driver: lxc
  driver_version: 3.1.0
  kernel: Linux
  kernel_architecture: x86_64
  kernel_version: 4.15.0-46-generic
  server: lxd
  server_pid: 2019
  server_version: "3.11"
  storage: dir
  storage_version: "1"
  server_clustered: false
  server_name: daniel-B250M-D3H
  project: default
  • This is the message I get when trying to start the nested container:

error: Error calling 'lxd forkstart nested /var/lib/lxd/containers /var/log/lxd/nested/lxc.conf': err='Failed to run: /usr/bin/lxd forkstart nested /var/lib/lxd/containers /var/log/lxd/nested/lxc.conf: '
lxc 20190401120416.105 ERROR lxc_utils - utils.c:safe_mount:1739 - Operation not permitted - Failed to mount proc onto /usr/lib/x86_64-linux-gnu/lxc/proc
lxc 20190401120416.105 ERROR lxc_conf - conf.c:lxc_mount_auto_mounts:734 - Operation not permitted - error mounting proc on /usr/lib/x86_64-linux-gnu/lxc/proc flags 14
lxc 20190401120416.105 ERROR lxc_conf - conf.c:lxc_setup:4008 - failed to setup the automatic mounts for 'nested'
lxc 20190401120416.105 ERROR lxc_start - start.c:do_start:811 - Failed to setup container "nested".
lxc 20190401120416.105 ERROR lxc_sync - sync.c:__sync_wait:57 - An error occurred in another process (expected sequence number 3)
lxc 20190401120416.144 ERROR lxc_start - start.c:__lxc_start:1358 - Failed to spawn container "nested".
lxc 20190401120416.714 ERROR lxc_conf - conf.c:run_buffer:416 - Script exited with status 1.
lxc 20190401120416.714 ERROR lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "nested".

Did you restart the container after setting security.nesting to true?

It may be needed in this case as it appears the kernel is refusing to mount proc and sys, likely due to the overmounting protection.
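
For reference, a minimal sketch of the usual sequence (assuming the container is named prueba2, as in the info above):

lxc config set prueba2 security.nesting true
lxc restart prueba2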


Yes, I did, and now it is working. Thanks @stgraber, you are always supporting us :slight_smile: Have a great day!

Hello.

On the host OS (Ubuntu 18.04) I create a container named Server1:

lxc launch ubuntu:18.04 Server1

After that:

lxc stop Server1
lxc config set Server1 security.privileged true
lxc config set Server1 security.nesting true
lxc start Server1

Then I try to create a container inside that container:

root@Server1:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: no
Size in GB of the new loop device (1GB minimum) [default=15GB]: 8GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: auto
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
Error: Failed to create network 'lxdbr0': open /proc/sys/net/ipv6/conf/lxdbr0/autoconf: read-only file system

Any help with that? @stgraber @tomp

I suspect that you disabled IPv6 on the host, while in the container you are enabling IPv6 when initializing LXD. LXD therefore tries to enable IPv6 in the container (by writing to /proc/sys/net/ipv6/conf/lxdbr0/autoconf), cannot do so, and you get the error.

If the above is the case, then reply none to the following question:

What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
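
You can also check on the host whether IPv6 is disabled, and, if the bridge was already created inside the container, turn IPv6 off on it afterwards. A rough sketch:

sysctl net.ipv6.conf.all.disable_ipv6      # 1 means IPv6 is disabled on the host
lxc network set lxdbr0 ipv6.address none   # inside the container, after lxd init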

root@Server1:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: no
Size in GB of the new loop device (1GB minimum) [default=15GB]: 8GB
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: yes
What should the new bridge be called? [default=lxdbr0]: lxdbr0
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: none
Would you like LXD to be available over the network? (yes/no) [default=no]: no
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
Error: Failed to create storage pool 'default': Failed to prepare loop device for "/var/lib/lxd/disks/default.img": bad file descriptor

Now what?

Is there a specific procedure for this?
I mean, when is the appropriate moment to set security.nesting to true?
Before or after running lxd init inside the container?

Your first issue was not related to security.nesting.

For the second issue, the documentation says:

  • btrfs can be used as a storage backend inside a container (nesting), so long as the parent container is itself on btrfs. (But see notes about btrfs quota via qgroups.)

You can launch a container as follows:

$ lxc launch ubuntu: mycontainer -c security.nesting=true -c security.privileged=true
Creating mycontainer
Starting mycontainer
$ lxc shell mycontainer
root@mycontainer:~#
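
To check which storage driver the parent container's pool uses on the host, a quick sketch (assuming the pool is named default, as in your lxd init output):

lxc storage show default

The driver field in the output needs to say btrfs for nested btrfs to work.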

Alright @simos, my mistake. The parent container uses zfs.
I made the appropriate changes.
When I am inside the Server1 container and I run snap install lxd, this happens:

root@Server1:~# snap install lxd
error: cannot perform the following tasks:
- Setup snap "snapd" (10707) security profiles (cannot reload udev rules: exit status 2
udev output:
)
root@Server1:~#


I destroyed the Server1 container and recreated it with the nesting and privileged flags. Inside Server1
I ran lxd init and everything worked fine (btrfs everywhere).

Why does snapd fail?

I also want the nested container to be accessible from other containers. I tried the routed method, following the same steps as when I wanted a container to get an IP from the LAN, but I failed.


root@Server1:~# lxc list
+---------+---------+-------------------+------+------------+-----------+
|  NAME   |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-------------------+------+------------+-----------+
| Service | RUNNING | 10.34.2.45 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-------------------+------+------------+-----------+
root@Server1:~# lxc profile edit nested < nested.yaml
Error: Bad nic type: routed

@tomp Any help with that?

UPDATE

I fixed a few things:

On the host:

lxc launch ubuntu:18.04 Server1
root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~/Desktop/fog# lxc list
+---------+---------+-----------------------+------+-----------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+-----------+-----------+
| Server1 | RUNNING | 10.105.230.190 (eth0) |      | CONTAINER | 0         |
+---------+---------+-----------------------+------+-----------+-----------+
| vUAV    | RUNNING | 10.105.230.223 (eth0) |      | CONTAINER | 0         |
+---------+---------+-----------------------+------+-----------+-----------+

Inside Server1 I installed LXD with:

snap install lxd
systemctl start snap.lxd.daemon

Then I stopped Server1, enabled privileged mode and nesting, and started it again.

Inside Server1

lxd init

I selected not to create a new network bridge, and chose to have the containers inside Server1 obtain an IP from the eth0 of the Server1 container.
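
I believe that lxd init choice corresponds to a macvlan NIC attached to Server1's eth0; a rough sketch of the equivalent manual profile device (assuming the default profile inside Server1) would be:

lxc profile device add default eth0 nic nictype=macvlan parent=eth0 name=eth0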

So now we have:

root@Server1:~# lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| Service | RUNNING | 10.105.230.129 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+

The vUAV container can ping the Service container inside the Server1 container, and vice versa.

The PROBLEM is that the Server1 container CANNOT ping the nested Service container, and vice versa.

Any help with that?

@tomp @simos @stgraber @bmullan

I tried disabling the firewall rules with

iptables -t filter -F

but nothing changed.

Please can you show the output of ip a and ip r inside each container and on the host so I can get an idea of your network setup.

Also please show the ping command that isn’t working so I can see clearly which IP is not reachable from where.

Thanks

@tomp

HOST

root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# lxc list
+---------+---------+-----------------------+------+-----------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |   TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+-----------+-----------+
| Server1 | RUNNING | 10.105.230.50 (eth0)  |      | CONTAINER | 0         |
+---------+---------+-----------------------+------+-----------+-----------+
| vUAV    | RUNNING | 10.105.230.223 (eth0) |      | CONTAINER | 0         |
+---------+---------+-----------------------+------+-----------+-----------+

root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether e4:e7:49:82:2d:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.6/24 brd 192.168.2.255 scope global dynamic noprefixroute eno1
       valid_lft 84859sec preferred_lft 84859sec
    inet6 fe80::b123:b11b:a034:eae1/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:96:e6:02:c0:49 brd ff:ff:ff:ff:ff:ff
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 82:f2:2c:f5:3c:0f brd ff:ff:ff:ff:ff:ff
5: wifi_rawfie_tb: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 02:55:a8:e0:15:46 brd ff:ff:ff:ff:ff:ff
6: wifi_quad_a: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6a:a6:6a:8d:44:4f brd ff:ff:ff:ff:ff:ff
7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bc:78:12 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:bc:78:12 brd ff:ff:ff:ff:ff:ff
9: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:90:62:6b brd ff:ff:ff:ff:ff:ff
    inet 10.105.230.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe90:626b/64 scope link 
       valid_lft forever preferred_lft forever
11: veth5c8958c3@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether de:7e:93:9e:52:25 brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: veth1c7c325d@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether 6e:31:a6:60:5d:66 brd ff:ff:ff:ff:ff:ff link-netnsid 2

I want to delete interfaces 4, 5, 6, 7 and 8; I don't need them (in case one of them is causing the problem).

root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# ip r
default via 192.168.2.1 dev eno1 proto dhcp metric 100 
10.105.230.0/24 dev lxdbr0 proto kernel scope link src 10.105.230.1 
169.254.0.0/16 dev eno1 scope link metric 1000 
192.168.2.0/24 dev eno1 proto kernel scope link src 192.168.2.6 metric 100 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown 
root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# 



Inside the Server1 container:

root@Server1:~# lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| Service | RUNNING | 10.105.230.102 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+

root@Server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:f3:58:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.105.230.50/24 brd 10.105.230.255 scope global dynamic eth0
       valid_lft 2722sec preferred_lft 2722sec
    inet6 fe80::216:3eff:fef3:58c8/64 scope link 
       valid_lft forever preferred_lft forever


root@Server1:~# ip r
default via 10.105.230.1 dev eth0 proto dhcp src 10.105.230.50 metric 100 
10.105.230.0/24 dev eth0 proto kernel scope link src 10.105.230.50 
10.105.230.1 dev eth0 proto dhcp scope link src 10.105.230.50 metric 100 

Inside the Service container (nested inside the Server1 container):

root@Service:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:66:59:ab brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.105.230.102/24 brd 10.105.230.255 scope global dynamic eth0
       valid_lft 3093sec preferred_lft 3093sec
    inet6 fe80::216:3eff:fe66:59ab/64 scope link 
       valid_lft forever preferred_lft forever

root@Service:~# ip r
default via 10.105.230.1 dev eth0 proto dhcp src 10.105.230.102 metric 100 
10.105.230.0/24 dev eth0 proto kernel scope link src 10.105.230.102 
10.105.230.1 dev eth0 proto dhcp scope link src 10.105.230.102 metric 100 

Attempting to ping from Server1:


root@Server1:~# lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| Service | RUNNING | 10.105.230.102 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
root@Server1:~# ping 10.105.230.102
PING 10.105.230.102 (10.105.230.102) 56(84) bytes of data.
From 10.105.230.50 icmp_seq=1 Destination Host Unreachable
From 10.105.230.50 icmp_seq=2 Destination Host Unreachable
From 10.105.230.50 icmp_seq=3 Destination Host Unreachable
From 10.105.230.50 icmp_seq=4 Destination Host Unreachable
From 10.105.230.50 icmp_seq=5 Destination Host Unreachable
From 10.105.230.50 icmp_seq=6 Destination Host Unreachable

Inside vUAV, attempting to ping:

root@vUAV:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:fb:0f:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.105.230.223/24 brd 10.105.230.255 scope global dynamic eth0
       valid_lft 3304sec preferred_lft 3304sec
    inet6 fe80::216:3eff:fefb:fe1/64 scope link 
       valid_lft forever preferred_lft forever
root@vUAV:~# ip r
default via 10.105.230.1 dev eth0 proto dhcp src 10.105.230.223 metric 100 
10.105.230.0/24 dev eth0 proto kernel scope link src 10.105.230.223 
10.105.230.1 dev eth0 proto dhcp scope link src 10.105.230.223 metric 100 
root@vUAV:~# ping 10.105.230.102
PING 10.105.230.102 (10.105.230.102) 56(84) bytes of data.
64 bytes from 10.105.230.102: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 10.105.230.102: icmp_seq=2 ttl=64 time=0.088 ms
64 bytes from 10.105.230.102: icmp_seq=3 ttl=64 time=0.116 ms
64 bytes from 10.105.230.102: icmp_seq=4 ttl=64 time=0.091 ms
^C
--- 10.105.230.102 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3053ms
rtt min/avg/max/mdev = 0.088/0.100/0.116/0.013 ms

On the host, tcpdump for the IP of the nested Service container shows this:


root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# sudo tcpdump -i any -nn host 10.105.230.102
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
15:12:42.441167 ARP, Request who-has 10.105.230.102 tell 10.105.230.50, length 28
15:12:42.441130 ARP, Request who-has 10.105.230.102 tell 10.105.230.50, length 28
15:12:43.445965 ARP, Request who-has 10.105.230.102 tell 10.105.230.50, length 28
15:12:43.446097 ARP, Request who-has 10.105.230.102 tell 10.105.230.50, length 28
15:12:43.445965 ARP, Request who-has 10.105.230.102 tell 10.105.230.50, length 28

So the issue here is that you’re using the same subnet 10.105.230.0/24 for both the containers on the host and the nested containers. The ip r output on Server1 shows that to reach 10.105.230.0/24 it will go via eth0 which will then go back to the host, not into the nested container.
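
A quick way to confirm this from inside Server1 (just a sketch):

ip route get 10.105.230.102

This prints the device the kernel would use to reach that address (eth0 here, per the ip r output above).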

Please can you show lxc config show <instance> --expanded for both Server1 and Service.

Please can you also explain what you are trying to achieve and why (so we have some background information) - primarily what is the reason for nested containers?

root@tkasidakis-HP-Pavilion-Gaming-Laptop-15-cx0xxx:~# lxc config show Server1 --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20210129)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20210129"
  image.type: squashfs
  image.version: "18.04"
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: 3d8ba1d6c3fb411ddc1a7f13cc9b4652cb0b011cb7de6946c842407d132aa065
  volatile.eth0.host_name: veth0ac34f22
  volatile.eth0.hwaddr: 00:16:3e:f3:58:c8
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.uuid: a905d931-54a0-4aff-a990-98e0cf7c2c42
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""



root@Server1:~# lxc config show Service --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20210129)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20210129"
  image.version: "18.04"
  volatile.base_image: 3d8ba1d6c3fb411ddc1a7f13cc9b4652cb0b011cb7de6946c842407d132aa065
  volatile.eth0.hwaddr: 00:16:3e:66:59:ab
  volatile.idmap.base: "0"
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eth0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

@tomp What I am trying to achieve is a simulation environment with drones and servers (each of them in a separate container). Sometimes a drone sends a request to a server in order to load a service packaged as a container. The fact that the server is already a container creates the need for nested containers.

I imagined that eth0 would be a problem…

Right, so you are using macvlan inside the nested container to connect that container back to the host's lxdbr0. That should work fine. However, remember that macvlan does not allow communication between the parent device and the container device. So communication between Server1 and Service won't be allowed.

I see. Is there an alternative way to achieve the desired functionality?

@tomp

You could create a bridge inside Server1 called br0, move the IP config from eth0 to br0, and then add eth0 to br0 (this would link your Server1 container back to the host). You could use netplan to achieve this. See https://netplan.io/examples/#configuring-network-bridges

Then you can have the Service container use br0 as its parent, using the bridged NIC type rather than macvlan. This would then allow the container to communicate with both the host and the Server1 br0 interface.
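
A sketch of how the Service NIC could then be switched over (assuming the eth0 device is defined directly on the Service instance; if it comes from a profile, edit that profile with lxc profile edit instead):

lxc config device remove Service eth0
lxc config device add Service eth0 nic nictype=bridged parent=br0 name=eth0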


OK, thanks a lot. Is what the above link describes enough, or will it need something else?
@tomp

Simos' guide may also help.

Yep that should do it.

OK, I am starting on it. I hope to fix it. Thanks a lot. I will post when I finish.

@tomp Sorry to keep asking; I understand what I must do, but I am having difficulty achieving it.

The link says:

To create a very simple bridge consisting of a single device that uses DHCP, write:

network:
    version: 2
    renderer: networkd
    ethernets:
        enp3s0:
            dhcp4: no
    bridges:
        br0:
            dhcp4: yes
            interfaces:
                - enp3s0

So I will create a .yaml file which in my case will have the info below:

network:
    version: 2
    renderer: networkd
    ethernets:
        eth0:
            dhcp4: no
    bridges:
        br0:
            dhcp4: yes
            interfaces:
                - eth0

Where eth0 is the eth0 of the Server1 container.
"You could create a bridge inside Server1 called br0, move the IP config from eth0 to br0, and then add eth0 to br0."
This procedure confuses me in terms of implementation, not in terms of understanding.
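
If I understand correctly, the implementation inside Server1 would be roughly the following (the filename 99-bridge.yaml is just an example, not something prescribed):

# save the YAML above as /etc/netplan/99-bridge.yaml, then:
netplan try     # tests the config and rolls back unless you confirm it
# or apply it directly:
netplan apply

Is that the right idea?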