Container Networking

So I am trying to get 4 containers up and running on a VM, communicating with the LAN as well as the
Internet. Here is the diagram:

(network diagram image)

Containers 1-4 must be able to communicate over the Internet.
I only have Container1 created right now.

Container1 can ping the host VM (192.168.0.37), and the host VM can ping Container1 (10.80.42.182).
That much is good; however, since the containers are on the 10.80.42.xxx network, the rest of the LAN and the Internet obviously cannot reach them.

I am confused about why the containers are getting 10.80.42.xxx IP addresses. Does LXD have its own DHCP server?

So I am drawing a blank on what I need to do to get this working on my LAN, as well as on my VPS.

Thanks and I appreciate your comments.

Ray

Have you read through the blog posts on LXD networking and taken a look at the networking documentation?

You haven't shown your LXD profile, but it sounds like you created the containers using the default LXD bridge that was set up during lxd init. In that case, yes: in the default config LXD runs its own DHCP server (dnsmasq) on the bridge, which is where the 10.80.42.xxx addresses come from.
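You can confirm this yourself. A quick sketch (this assumes the bridge got the usual default name lxdbr0 -- substitute whatever lxc network list shows on your host):

```shell
# List the networks LXD manages, then inspect the default bridge.
lxc network list
lxc network show lxdbr0
# In the output, the ipv4.address and ipv4.dhcp config keys show the
# subnet LXD picked and confirm that LXD's built-in dnsmasq instance
# is the one handing out the 10.x.x.x addresses.
```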

If you can live without pinging the containers from the LXD host, take a look at using macvlan for the LXD network. You won't see the containers from the host, but they will appear on your local network (not the LXD bridge) as if each were just another interface on that network. For example, if the host is on 192.168.1.0/24 with the DHCP server at 192.168.1.1, and your containers are configured to obtain addresses via DHCP, they'll be handed addresses in that subnet.
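A minimal sketch of setting that up (the profile name lanprofile and parent NIC enp0s3 here are examples -- use your own names):

```shell
# Copy the default profile (which carries an eth0 nic device in LXD 2.x)
# and repoint its eth0 at the physical NIC via macvlan.
lxc profile copy default lanprofile
lxc profile device set lanprofile eth0 nictype macvlan
lxc profile device set lanprofile eth0 parent enp0s3

# Launch a container with the new profile; it should then DHCP from
# your LAN rather than from the LXD bridge.
lxc launch ubuntu:16.04 c1 -p lanprofile
```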

Thanks for the response, stemcc.

The profile I am using is:
ray@USN-LPC:/var/lib/lxd$ lxc profile show lanprofile
config:
  environment.http_proxy: ""
  user.network_mode: ""
description: Default LXD profile
devices:
  eth0:
    nictype: macvlan
    parent: enp0s3
    type: nic
  root:
    path: /
    pool: lxdpool
    type: disk
name: lanprofile
used_by:
- /1.0/containers/LPC2

When I try to start the container, I get this error:

ray@USN-LPC:~$ lxc start LPC2
error: Failed to run: /usr/bin/lxd forkstart LPC2 /var/lib/lxd/containers /var/log/lxd/LPC2/lxc.conf:

Ideas?

Ray

Try starting your container with the --debug flag and paste the output here.

Could you also paste the output of:

lxc info --show-log LPC2

It might be a permissions issue but you’ll need more info to know for sure.

ray@USN-LPC:~$ lxc start LPC2 --debug
DBUG[05-26|17:06:34] Connecting to a local LXD over a Unix socket
DBUG[05-26|17:06:34] Sending request to LXD etag= method=GET url=http://unix.socket/1.0
DBUG[05-26|17:06:34] Got response struct from LXD
DBUG[05-26|17:06:34]
{
  "config": {},
  "api_extensions": [
    "storage_zfs_remove_snapshots",
    "container_host_shutdown_timeout",
    "container_stop_priority",
    "container_syscall_filtering",
    "auth_pki",
    "container_last_used_at",
    "etag",
    "patch",
    "usb_devices",
    "https_allowed_credentials",
    "image_compression_algorithm",
    "directory_manipulation",
    "container_cpu_time",
    "storage_zfs_use_refquota",
    "storage_lvm_mount_options",
    "network",
    "profile_usedby",
    "container_push",
    "container_exec_recording",
    "certificate_update",
    "container_exec_signal_handling",
    "gpu_devices",
    "container_image_properties",
    "migration_progress",
    "id_map",
    "network_firewall_filtering",
    "network_routes",
    "storage",
    "file_delete",
    "file_append",
    "network_dhcp_expiry",
    "storage_lvm_vg_rename",
    "storage_lvm_thinpool_rename",
    "network_vlan",
    "image_create_aliases",
    "container_stateless_copy",
    "container_only_migration",
    "storage_zfs_clone_copy",
    "unix_device_rename",
    "storage_lvm_use_thinpool",
    "storage_rsync_bwlimit",
    "network_vxlan_interface",
    "storage_btrfs_mount_options",
    "entity_description",
    "image_force_refresh",
    "storage_lvm_lv_resizing",
    "id_map_base",
    "file_symlinks",
    "container_push_target",
    "network_vlan_physical",
    "storage_images_delete",
    "container_edit_metadata",
    "container_snapshot_stateful_migration",
    "storage_driver_ceph",
    "storage_ceph_user_name",
    "resource_limits",
    "storage_volatile_initial_source",
    "storage_ceph_force_osd_reuse",
    "storage_block_filesystem_btrfs",
    "resources",
    "kernel_limits",
    "storage_api_volume_rename",
    "macaroon_authentication",
    "network_sriov",
    "console",
    "restrict_devlxd",
    "migration_pre_copy",
    "infiniband",
    "maas_network"
  ],
  "api_status": "stable",
  "api_version": "1.0",
  "auth": "trusted",
  "public": false,
  "auth_methods": [
    "tls"
  ],
  "environment": {
    "addresses": [],
    "architectures": [
      "x86_64",
      "i686"
    ],
“certificate”: “-----BEGIN CERTIFICATE-----\nMIIFRDCCAyygAwIBAgIRAOPYIkYOAm0/lsSb2igUAe0wDQYJKoZIhvcNAQELBQAw\nNTEcMBoGA1UEChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBV\nU04tTFBDMB4XDTE4MDUyNTE2MjE0OFoXDTI4MDUyMjE2MjE0OFowNTEcMBoGA1UE\nChMTbGludXhjb250YWluZXJzLm9yZzEVMBMGA1UEAwwMcm9vdEBVU04tTFBDMIIC\nIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAmXUWn2J80ASYYsJA1QOzlNJz\nwBPb9W1bFtbx+nHlVdE9bzfULDyxRmP0lgmU2elESaBM0j7B1J+YB/zd80x3TONq\nXpQGOBw/+8x5s4u2F59E8WVPdQdJY9qSga3k2fdBwBf2Cl0Z01QGcBRWDktGvTkw\nxyEwnUBDuAZYDVCnrylBC4tfZ4/Z2qGv/7DfCR5qF7JFO0LuyNfBWNCqcs0wjmvU\n4A961jT6u0MMqG8Yl/JKa3b4NCxXpC3EqNRWHurpY1k9AHv0YnZbmt9gRhajB95p\n5x/6A9m5jW4oMDXa1Zbg7G+lBymSWYT0TiwoIr/RkQU4XKSw6yZJLx4jdAWt80JM\nE0vhoLr0kS5XoJFV2pxe/cy4PZI8wl42+wEI4lmLBcdzgWG7ffrNMguh70SiDD2j\nVNTo7AlFvS3AsgHpw5dYbcM1YtcHS2Db3s4B6YXcsRQwp4mUwgXCcSemf1ocxD0v\nt821hhNpctykOfI7KzdC5w44+dIaTotGngdNFjYcgqqFDPPHfuWvGznGt3vK+74q\nbl19nyoAmTzXeqgW8+ZkxaYyNl+UHqfKlmw9ViGkyP83SZlUnxh9kLhThb9PfXTC\nVeo3m/jq+nEjWzGroeNSxiyEIFSVx6eRXsfbNKGBmlVGPnTeCZIQa3nh1EqUooct\nxBI9SkhcNJGDTF5MZZMCAwEAAaNPME0wDgYDVR0PAQH/BAQDAgWgMBMGA1UdJQQM\nMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwGAYDVR0RBBEwD4IHVVNOLUxQQ4cE\nwKgAJTANBgkqhkiG9w0BAQsFAAOCAgEAN9yNWS4gGQ3uKUFHVzwaHy2JzPA+nUqz\nXlmKXIktfA4vbSCfhKzgOESLXTlhPghTt+LZ0X0rthsGfkJu9GzVu5ys7z3F8I05\nOH04lVu0mVyKn6oufjxwO7NhFt4RoogQPA77A4Kt3OlSa0yAxpGZ4jgLGSccaI5Y\nawRO0i4E2j90hl62uNgxaQErZUlHREj078okF++212/fn6I5949TB10o0mJm6yiD\nFILl6oqa+KuJ0eyqAyCAWxSELnoY4vlPCWhEfe7lmbSe8XdLD7BENrnOlCDX/TEE\n1j2PQosKw623j/AJ6KoPfOmfiWiCKHnuBZxcqD+O1PdB50BkXA1sOfmocHQsOmTT\nNnkgQIAgzL0B8YDH++qqilt0CzObMSY/CD2QTq+WHAJYhP0ekzyRjNnkFQtB7VnB\nvnKDjL+tPQjZk3Jnmpzv0Z4j5mQMduTutLNsw6w80Nfq/PschcRGE/FX4o5Y50Hl\nqlIcVe3IEDqs++Nc3wfRLBvy39TNyZ/ybJYiYu2e1ZDXoKPMMy2pPw99wEgNZNUK\npg7YlqHkzyAbaP6p4Fb4696ip1mRamccpcaoEtB4nLW8dIwrvm8wuZfYhLN9B3YO\niEWLy/80BhzFM7FxlruuHDSCbHoUrBY6h1BsHWkgxfBzNa9HZckgqiCKKpCPDWdt\nfIvvLgvA2p4=\n-----END CERTIFICATE-----\n”,
    "certificate_fingerprint": "09a8473ce1aceb590d75b76bc1f058ab22cd1df346c5057ad0db401ce5ea82c7",
    "driver": "lxc",
    "driver_version": "2.0.8",
    "kernel": "Linux",
    "kernel_architecture": "x86_64",
    "kernel_version": "4.4.0-116-generic",
    "server": "lxd",
    "server_pid": 1143,
    "server_version": "2.21",
    "storage": "btrfs",
    "storage_version": "4.4"
  }
}
DBUG[05-26|17:06:34] Sending request to LXD etag= method=GET url=http://unix.socket/1.0/containers/LPC2
DBUG[05-26|17:06:34] Got response struct from LXD
DBUG[05-26|17:06:34]
{
  "architecture": "x86_64",
  "config": {
    "image.architecture": "amd64",
    "image.description": "ubuntu 16.04 LTS amd64 (release) (20180522)",
    "image.label": "release",
    "image.os": "ubuntu",
    "image.release": "xenial",
    "image.serial": "20180522",
    "image.version": "16.04",
    "volatile.base_image": "08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074",
    "volatile.eth0.hwaddr": "00:16:3e:c9:81:d3",
    "volatile.eth0.name": "eth0",
    "volatile.idmap.base": "0",
    "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
    "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
    "volatile.last_state.power": "RUNNING"
  },
  "devices": {},
  "ephemeral": false,
  "profiles": [
    "lanprofile"
  ],
  "stateful": false,
  "description": "",
  "created_at": "2018-05-26T13:14:07Z",
  "expanded_config": {
    "environment.http_proxy": "",
    "image.architecture": "amd64",
    "image.description": "ubuntu 16.04 LTS amd64 (release) (20180522)",
    "image.label": "release",
    "image.os": "ubuntu",
    "image.release": "xenial",
    "image.serial": "20180522",
    "image.version": "16.04",
    "user.network_mode": "",
    "volatile.base_image": "08bbf441bb737097586e9f313b239cecbba96222e58457881b3718c45c17e074",
    "volatile.eth0.hwaddr": "00:16:3e:c9:81:d3",
    "volatile.eth0.name": "eth0",
    "volatile.idmap.base": "0",
    "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
    "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":100000,\"Nsid\":0,\"Maprange\":65536}]",
    "volatile.last_state.power": "RUNNING"
  },
  "expanded_devices": {
    "eth0": {
      "nictype": "macvlan",
      "parent": "enp0s3",
      "type": "nic"
    },
    "root": {
      "path": "/",
      "pool": "lxdpool",
      "type": "disk"
    }
  },
  "name": "LPC2",
  "status": "Stopped",
  "status_code": 102,
  "last_used_at": "2018-05-26T20:13:41.151744898Z"
}
DBUG[05-26|17:06:34] Connected to the websocket
DBUG[05-26|17:06:34] Sending request to LXD etag= method=PUT url=http://unix.socket/1.0/containers/LPC2/state
DBUG[05-26|17:06:34]
{
  "action": "start",
  "timeout": 0,
  "force": false,
  "stateful": false
}
DBUG[05-26|17:06:34] Got operation from LXD
DBUG[05-26|17:06:34]
{
  "id": "d9800938-0399-4f0c-8b7c-95646940abcd",
  "class": "task",
  "created_at": "2018-05-26T17:06:34.060727824-04:00",
  "updated_at": "2018-05-26T17:06:34.060727824-04:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
    "containers": [
      "/1.0/containers/LPC2"
    ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
}
DBUG[05-26|17:06:34] Sending request to LXD etag= method=GET url=http://unix.socket/1.0/operations/d9800938-0399-4f0c-8b7c-95646940abcd
DBUG[05-26|17:06:34] Got response struct from LXD
DBUG[05-26|17:06:34]
{
  "id": "d9800938-0399-4f0c-8b7c-95646940abcd",
  "class": "task",
  "created_at": "2018-05-26T17:06:34.060727824-04:00",
  "updated_at": "2018-05-26T17:06:34.060727824-04:00",
  "status": "Running",
  "status_code": 103,
  "resources": {
    "containers": [
      "/1.0/containers/LPC2"
    ]
  },
  "metadata": null,
  "may_cancel": false,
  "err": ""
}
error: Failed to run: /usr/bin/lxd forkstart LPC2 /var/lib/lxd/containers /var/log/lxd/LPC2/lxc.conf:
Try lxc info --show-log LPC2 for more info

And log:
ray@USN-LPC:~$ lxc info --show-log LPC2
Name: LPC2
Remote: unix://
Architecture: x86_64
Created: 2018/05/26 13:14 UTC
Status: Stopped
Type: persistent
Profiles: lanprofile

Log:

        lxc 20180526210634.149 ERROR    lxc_conf - conf.c:instantiate_macvlan:2811 - failed to create macvlan interface 'mcOE2S14' on 'enp0s3' : Device or resource busy
        lxc 20180526210634.150 ERROR    lxc_conf - conf.c:lxc_create_network:3029 - failed to create netdev
        lxc 20180526210634.150 ERROR    lxc_start - start.c:lxc_spawn:1103 - Failed to create the network.
        lxc 20180526210634.150 ERROR    lxc_start - start.c:__lxc_start:1358 - Failed to spawn container "LPC2".
        lxc 20180526210634.685 ERROR    lxc_conf - conf.c:run_buffer:416 - Script exited with status 1.
        lxc 20180526210634.685 ERROR    lxc_start - start.c:lxc_fini:546 - Failed to run lxc.hook.post-stop for container "LPC2".
        lxc 20180526210634.685 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:177 - Command get_cgroup failed to receive response: Connection reset by peer.
        lxc 20180526210634.685 WARN     lxc_commands - commands.c:lxc_cmd_rsp_recv:177 - Command get_cgroup failed to receive response: Connection reset by peer.

ray@USN-LPC:~$
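The failing step in the log ("failed to create macvlan interface ... on 'enp0s3': Device or resource busy") can be reproduced outside of LXD by asking the kernel for a macvlan sub-interface on the same parent by hand. A diagnostic sketch using iproute2 (mvtest0 is an arbitrary name; enp0s3 is the parent NIC from the profile):

```shell
# Try to create, then immediately remove, a macvlan link on the parent
# NIC. If this also fails with "Device or resource busy", the problem
# is between the kernel and enp0s3 (e.g. the NIC or its driver refusing
# macvlan), not LXD itself.
sudo ip link add link enp0s3 name mvtest0 type macvlan mode bridge
sudo ip link delete mvtest0
```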

I have posted what you recommended. Do you see anything wrong?

Thanks,

Ray