I run the following part of a script to install LXD and to feed it the preseed from a previous `lxd init --dump`:
# Enable Quota after initial RAMdisk Project Quota Setup
apt install quota -y
quotaon -Pv -F vfsv1 /
# Setup LXD
apt install snapd -y
snap install lxd
cat <<EOF | sudo lxd init --preseed
config:
  core.https_address: '[::]:8444'
  core.trust_password: true
networks:
- config:
    ipv4.address: 10.10.100.1/24
    ipv4.nat: "true"
    ipv6.address: 2222::1/64
    ipv6.dhcp: "false"
    ipv6.nat: "false"
    ipv6.routing: "true"
  description: ""
  name: lxdbr0
  type: bridge
storage_pools:
- config:
    source: /var/snap/lxd/common/lxd/storage-pools/default
  description: ""
  name: default
  driver: dir
profiles:
- config:
    security.nesting: "true"
  description: Default LXD profile
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
- config:
    limits.cpu: "4"
    limits.disk.priority: "5"
    limits.memory: 14336MB
    limits.memory.swap: "false"
    limits.network.priority: "5"
  description: ""
  devices:
    eth0:
      limits.max: 300Mbit
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      size: 160GiB
      type: disk
  name: l
- config:
    limits.cpu: "2"
    limits.disk.priority: "3"
    limits.memory: 7168MB
    limits.memory.swap: "false"
    limits.network.priority: "3"
  description: ""
  devices:
    eth0:
      limits.max: 150Mbit
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      size: 80GiB
      type: disk
  name: m
- config:
    limits.cpu: "1"
    limits.disk.priority: "2"
    limits.memory: 3584MB
    limits.memory.swap: "false"
    limits.network.priority: "2"
  description: ""
  devices:
    eth0:
      limits.max: 100Mbit
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      size: 40GiB
      type: disk
  name: s
- config:
    limits.cpu: "8"
    limits.disk.priority: "10"
    limits.memory: 28672MB
    limits.memory.swap: "false"
    limits.network.priority: "10"
  description: ""
  devices:
    eth0:
      limits.max: 600Mbit
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      size: 320GiB
      type: disk
  name: xl
- config:
    limits.cpu: "1"
    limits.cpu.allowance: 50%
    limits.disk.priority: "1"
    limits.memory: 1792MB
    limits.memory.swap: "false"
    limits.network.priority: "1"
  description: ""
  devices:
    eth0:
      limits.max: 50Mbit
      nictype: bridged
      parent: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      size: 20GiB
      type: disk
  name: xs
EOF
sudo lxc config set core.trust_password SOME-PASSWORD
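A quick sanity check that the quota setup and the preseed actually took effect might look like this (a sketch; `repquota -P` assumes a project-quota-aware quota-tools build):

# Optional sanity checks after the preseed (sketch)
sudo repquota -Pv /        # report project quota usage on /
sudo lxd init --dump       # should echo back the full preseed
sudo lxc profile list      # should list default, l, m, s, xl, xs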
Then I get this error when running `sudo lxc launch ubuntu:18.04 container2 -p default -p s`:
Error: websocket: close 1006 (abnormal closure): unexpected EOF
Try `lxc info --show-log local:container2` for more info
The output of `sudo lxc info --show-log local:container2` is:
Name: container2
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/04/19 18:24 UTC
Status: Running
Type: container
Profiles: default, s
Pid: 11043
Ips:
  eth0: inet 10.10.100.242 veth0ed253e1
  eth0: inet6 2222::fec3:43ad veth0ed253e1
  eth0: inet6 fe80::216:3eff:fec3:43ad veth0ed253e1
  lo: inet 127.0.0.1
  lo: inet6 ::1
Resources:
  Processes: 24
  Disk usage:
    root: 738.07MB
  CPU usage:
    CPU usage (in seconds): 7
  Memory usage:
    Memory (current): 56.61MB
  Network usage:
    eth0:
      Bytes received: 593.09kB
      Bytes sent: 12.79kB
      Packets received: 267
      Packets sent: 155
    lo:
      Bytes received: 1.24kB
      Bytes sent: 1.24kB
      Packets received: 15
      Packets sent: 15
Log:
lxc container2 20200419182456.584 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1143 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.container2"
lxc container2 20200419182456.586 ERROR cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1143 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.container2"
lxc container2 20200419182456.587 ERROR utils - utils.c:lxc_can_use_pidfd:1834 - Kernel does not support pidfds
The output of `sudo lxc config show` also seems broken:
user@vps:~$ sudo lxc config show
config:
  core.https_address: '[::]:8444'
  core.trust_password: true
The rest is missing, but it is shown when running `sudo lxd init --dump`.
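As far as I understand, `lxc config show` without an instance name only prints the server-level `config:` section, so this part may be expected; the networks, storage pools and profiles from the preseed are separate objects with their own show commands, for example:

sudo lxc network show lxdbr0     # the bridge from the preseed
sudo lxc storage show default    # the dir storage pool
sudo lxc profile show s          # one of the size profiles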
stgraber (Stéphane Graber), April 19, 2020, 6:56pm (#2)
journalctl -u snap.lxd.daemon -n 300
user@vps:~$ sudo journalctl -u snap.lxd.daemon -n 300
-- Logs begin at Sun 2020-04-12 17:00:14 CEST, end at Sun 2020-04-19 20:57:19 CEST. --
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 4: fd: 9: cpu,cpuacct
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 5: fd: 10: memory
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 6: fd: 11: pids
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 7: fd: 12: freezer
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 8: fd: 13: blkio
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 9: fd: 14: rdma
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 10: fd: 15: hugetlb
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 11: fd: 16: perf_event
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: 12: fd: 17: cpuset
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: api_extensions:
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - cgroups
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - sys_cpu_online
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_cpuinfo
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_diskstats
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_loadavg
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_meminfo
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_stat
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_swaps
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - proc_uptime
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - shared_pidns
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - cpuview_daemon
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - loadavg_daemon
Apr 19 20:49:05 vps.domain lxd.daemon[18516]: - pidfds
Apr 19 20:49:08 vps.domain systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
Apr 19 20:49:08 vps.domain systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Apr 19 20:49:08 vps.domain systemd[1]: snap.lxd.daemon.service: Service hold-off time over, scheduling restart.
Apr 19 20:49:08 vps.domain systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 1.
Apr 19 20:49:08 vps.domain systemd[1]: Stopped Service for snap application lxd.daemon.
Apr 19 20:49:08 vps.domain systemd[1]: Started Service for snap application lxd.daemon.
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: => Preparing the system (14709)
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Loading snap configuration
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Setting up mntns symlink (mnt:[4026532221])
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Setting up kmod wrapper
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Preparing /boot
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Preparing a clean copy of /run
Apr 19 20:49:08 vps.domain lxd.daemon[19156]: ==> Preparing a clean copy of /etc
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Setting up ceph configuration
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Setting up LVM configuration
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Rotating logs
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Setting up ZFS (0.7)
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Escaping the systemd cgroups
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ====> Detected cgroup V1
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Escaping the systemd process resource limits
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: ==> Disabling shiftfs on this kernel (auto)
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: => Re-using existing LXCFS
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: => Starting LXD
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: t=2020-04-19T20:49:09+0200 lvl=warn msg=" - Couldn't find the CGroup memory swap accounting, swap limits
Apr 19 20:49:09 vps.domain lxd.daemon[19156]: 2020/04/19 20:49:09 http: superfluous response.WriteHeader call from github.com/lxc/lxd/lxd/response.(*er
Apr 19 20:49:11 vps.domain systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
Apr 19 20:49:11 vps.domain systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
Apr 19 20:49:11 vps.domain systemd[1]: snap.lxd.daemon.service: Service hold-off time over, scheduling restart.
Apr 19 20:49:11 vps.domain systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 2.
stgraber (Stéphane Graber), April 19, 2020, 7:16pm (#4)
Interesting, so LXD is crashing.
Can you do:
systemctl stop snap.lxd.daemon.service snap.lxd.daemon.unix.socket
lxd --debug --group lxd
And then in another shell do your launch again?
That should give us a full trace of the crash.
I didn't have to run the launch again from another shell; it crashed right away.
user@vps:~$ sudo systemctl stop snap.lxd.daemon.service snap.lxd.daemon.unix.socket
user@vps:~$ sudo lxd --debug --group lxd
DBUG[04-19|21:18:56] Connecting to a local LXD over a Unix socket
DBUG[04-19|21:18:56] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
INFO[04-19|21:18:56] LXD 4.0.0 is starting in normal mode path=/var/snap/lxd/common/lxd
INFO[04-19|21:18:56] Kernel uid/gid map:
INFO[04-19|21:18:56] - u 0 0 4294967295
INFO[04-19|21:18:56] - g 0 0 4294967295
INFO[04-19|21:18:56] Configured LXD uid/gid map:
INFO[04-19|21:18:56] - u 0 1000000 1000000000
INFO[04-19|21:18:56] - g 0 1000000 1000000000
INFO[04-19|21:18:56] Kernel features:
INFO[04-19|21:18:56] - netnsid-based network retrieval: no
INFO[04-19|21:18:56] - uevent injection: no
INFO[04-19|21:18:56] - seccomp listener: no
INFO[04-19|21:18:56] - seccomp listener continue syscalls: no
INFO[04-19|21:18:56] - unprivileged file capabilities: yes
INFO[04-19|21:18:56] - cgroup layout: hybrid
WARN[04-19|21:18:56] - Couldn't find the CGroup memory swap accounting, swap limits will be ignored
INFO[04-19|21:18:56] - shiftfs support: no
INFO[04-19|21:18:56] Initializing local database
DBUG[04-19|21:18:56] Initializing database gateway
DBUG[04-19|21:18:56] Start database node id=1 address= role=voter
DBUG[04-19|21:18:56] Connecting to a local LXD over a Unix socket
DBUG[04-19|21:18:56] Sending request to LXD method=GET url=http://unix.socket/1.0 etag=
DBUG[04-19|21:18:56] Detected stale unix socket, deleting
INFO[04-19|21:18:56] Starting /dev/lxd handler:
INFO[04-19|21:18:56] - binding devlxd socket socket=/var/snap/lxd/common/lxd/devlxd/sock
INFO[04-19|21:18:56] REST API daemon:
INFO[04-19|21:18:56] - binding Unix socket socket=/var/snap/lxd/common/lxd/unix.socket
INFO[04-19|21:18:56] - binding TCP socket socket=[::]:8444
INFO[04-19|21:18:56] Initializing global database
DBUG[04-19|21:18:56] Dqlite: connected address=1 attempt=0
INFO[04-19|21:18:56] Firewall loaded driver "xtables"
INFO[04-19|21:18:56] Initializing storage pools
DBUG[04-19|21:18:56] Initializing and checking storage pool "default"
DBUG[04-19|21:18:56] Mount started driver=dir pool=default
DBUG[04-19|21:18:56] Mount finished driver=dir pool=default
INFO[04-19|21:18:57] Initializing daemon storage mounts
INFO[04-19|21:18:57] Initializing networks
DBUG[04-19|21:18:58] New task Operation: 61678f0a-71cd-4a27-9ffe-c22a333f0bb9
INFO[04-19|21:18:58] Pruning leftover image files
DBUG[04-19|21:18:58] Started task operation: 61678f0a-71cd-4a27-9ffe-c22a333f0bb9
INFO[04-19|21:18:58] Done pruning leftover image files
INFO[04-19|21:18:58] Loading daemon configuration
DBUG[04-19|21:18:58] Success for task operation: 61678f0a-71cd-4a27-9ffe-c22a333f0bb9
DBUG[04-19|21:18:58] Initialized inotify with file descriptor 24
DBUG[04-19|21:18:58] New task Operation: 0afced5f-df24-462c-b49a-a2cc320a11d7
INFO[04-19|21:18:58] Pruning expired images
DBUG[04-19|21:18:58] Started task operation: 0afced5f-df24-462c-b49a-a2cc320a11d7
INFO[04-19|21:18:58] Done pruning expired images
DBUG[04-19|21:18:58] New task Operation: 92fc8acf-5094-4cd3-9b6f-5905c4c861cb
DBUG[04-19|21:18:58] Success for task operation: 0afced5f-df24-462c-b49a-a2cc320a11d7
INFO[04-19|21:18:58] Pruning expired instance backups
DBUG[04-19|21:18:58] Started task operation: 92fc8acf-5094-4cd3-9b6f-5905c4c861cb
INFO[04-19|21:18:58] Done pruning expired instance backups
DBUG[04-19|21:18:58] Success for task operation: 92fc8acf-5094-4cd3-9b6f-5905c4c861cb
DBUG[04-19|21:18:58] New task Operation: 69b3b0dd-4902-476d-8127-29cf1e985b3c
DBUG[04-19|21:18:58] New task Operation: 2a84c6f5-2e86-4756-8c4d-374843f9e259
DBUG[04-19|21:18:58] New task Operation: b953a536-e9a7-4036-9058-c29c395c20c4
INFO[04-19|21:18:58] Expiring log files
DBUG[04-19|21:18:58] Started task operation: 69b3b0dd-4902-476d-8127-29cf1e985b3c
INFO[04-19|21:18:58] Updating instance types
DBUG[04-19|21:18:58] Started task operation: 2a84c6f5-2e86-4756-8c4d-374843f9e259
INFO[04-19|21:18:58] Updating images
DBUG[04-19|21:18:58] Started task operation: b953a536-e9a7-4036-9058-c29c395c20c4
INFO[04-19|21:18:58] Done updating instance types
INFO[04-19|21:18:58] Done updating images
INFO[04-19|21:18:58] Done expiring log files
DBUG[04-19|21:18:58] Success for task operation: 69b3b0dd-4902-476d-8127-29cf1e985b3c
DBUG[04-19|21:18:58] Processing image fp=2cfc5a5567b8d74c0986f3d8a77a2a78e58fe22ea9abd2693112031f85afa1a1 server=https://cloud-images.ubuntu.com/releases protocol=simplestreams alias=18.04
DBUG[04-19|21:18:58] Connecting to a remote simplestreams server
DBUG[04-19|21:18:58] Scheduler: network: veth82312eb3 has been added: updating network priorities
DBUG[04-19|21:18:58] Scheduler: network: vetha3e77654 has been added: updating network priorities
DBUG[04-19|21:18:58] MountInstance started driver=dir pool=default project=default instance=c1
DBUG[04-19|21:18:58] MountInstance finished driver=dir pool=default project=default instance=c1
DBUG[04-19|21:18:58] UpdateInstanceBackupFile started driver=dir pool=default project=default instance=c1
DBUG[04-19|21:18:58] UpdateInstanceBackupFile finished driver=dir pool=default project=default instance=c1
DBUG[04-19|21:18:58] MountInstance started driver=dir pool=default project=default instance=c1
DBUG[04-19|21:18:58] MountInstance finished driver=dir pool=default project=default instance=c1
INFO[04-19|21:18:58] Starting container project=default name=c1 action=start created=2020-04-19T20:08:52+0200 ephemeral=false used=2020-04-19T20:52:34+0200 stateful=false
DBUG[04-19|21:18:58] Handling ip=@ user= method=GET url=/internal/containers/3/onstart
DBUG[04-19|21:18:58] MountInstance started pool=default driver=dir project=default instance=c1
DBUG[04-19|21:18:58] MountInstance finished pool=default driver=dir project=default instance=c1
DBUG[04-19|21:18:58] Scheduler: container c1 started: re-balancing
INFO[04-19|21:18:58] Started container name=c1 action=start created=2020-04-19T20:08:52+0200 ephemeral=false used=2020-04-19T20:52:34+0200 stateful=false project=default
DBUG[04-19|21:18:58] Scheduler: network: veth5ce9b874 has been added: updating network priorities
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x90b96e]
goroutine 460 [running]:
gopkg.in/lxc/go-lxc%2ev2.(*Container).SetCgroupItem(0x0, 0x13e8d5b, 0x12, 0xc001a140b8, 0x4, 0x0, 0x0)
/build/lxd/parts/lxd/go/src/gopkg.in/lxc/go-lxc.v2/container.go:822 +0x3e
github.com/lxc/lxd/lxd/instance/drivers.(*lxcCgroupReadWriter).Set(0xc00075d410, 0x1, 0x13cfb7b, 0x8, 0x13e8d5b, 0x12, 0xc001a140b8, 0x4, 0xc001a140b8, 0xc001890000)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxd/instance/drivers/driver_lxc.go:6901 +0x21e
github.com/lxc/lxd/lxd/cgroup.(*CGroup).SetNetIfPrio(0xc00058e820, 0xc001a140b8, 0x4, 0x2, 0x2)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxd/cgroup/abstraction.go:302 +0x112
github.com/lxc/lxd/lxd/instance/drivers.(*lxc).setNetworkPriority(0xc0005a2140, 0x1, 0x783ec9)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxd/instance/drivers/driver_lxc.go:6408 +0x314
github.com/lxc/lxd/lxd/instance/drivers.(*lxc).onStart.func1(0xc0005a2140)
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxd/instance/drivers/driver_lxc.go:2498 +0x36
created by github.com/lxc/lxd/lxd/instance/drivers.(*lxc).onStart
/build/lxd/parts/lxd/go/src/github.com/lxc/lxd/lxd/instance/drivers/driver_lxc.go:2496 +0x364
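Worth noting: the cgroup controller list in the journal output above (cpu, memory, pids, freezer, blkio, rdma, hugetlb, perf_event, cpuset) contains no net_prio entry, which is the controller that SetNetIfPrio writes to. Whether or not that is the root cause of the panic, the host can be checked for it like this (a sketch, assuming the usual cgroup v1 layout):

# Does the kernel expose the net_prio cgroup controller?
grep net_prio /proc/cgroups || echo "net_prio controller not available"
# On cgroup v1 hosts it is normally mounted together with net_cls:
ls -d /sys/fs/cgroup/net_cls,net_prio 2>/dev/null || echo "net_prio hierarchy not mounted"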
I ran the script again after restoring the VPS to the last snapshot and got the following output when snapd was installed:
snapd.failure.service is a disabled or a static unit, not starting it.
snapd.snap-repair.service is a disabled or a static unit, not starting it.
So I am guessing there is something wrong with snap.
I will try to put some commands from this in the script.
Nope, that did not solve the problem…
I also get this warning during the script:
Warning: /snap/bin was not found in your $PATH. If you've not restarted your session since you
installed snapd, try doing that. Please see https://forum.snapcraft.io/t/9469 for more
details.
But I hope that restarting after the script is finished is enough for that.
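For what it's worth, the PATH warning shouldn't require a full restart; the script itself can extend PATH right after installing snapd (a sketch, assuming the standard /snap/bin location):

# Make snap-installed binaries visible to the rest of this script
export PATH="$PATH:/snap/bin"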
stgraber (Stéphane Graber), April 19, 2020, 8:19pm (#7)
Seems like there's something going on with one of the limits somehow, not sure which or why though.
Any chance you can create a normal container and progressively apply your limits to track down which one is causing this?
Seems to be the network priority:
user@vps:~$ sudo lxc launch ubuntu:18.04 c2
Creating c2
Starting c2
user@vps:~$ sudo lxc config set c2 limits.cpu 1
user@vps:~$ sudo lxc restart c2
user@vps:~$ sudo lxc config set c2 limits.memory 1792MB
user@vps:~$ sudo lxc restart c2
user@vps:~$ sudo lxc config set c2 limits.memory.swap false
user@vps:~$ sudo lxc restart c2
user@vps:~$ sudo lxc config set c2 limits.disk.priority 1
user@vps:~$ sudo lxc restart c2
user@vps:~$ sudo lxc config set c2 limits.network.priority 1
user@vps:~$ sudo lxc restart c2
Error: websocket: close 1006 (abnormal closure): unexpected EOF
Try `lxc info --show-log c2` for more info
But I do not like that `sudo lxc config show` just shows those first two lines of the config; there must be something wrong there too.
stgraber (Stéphane Graber), April 20, 2020, 1:51pm (#9)
Thanks, can you also show `lxc info`?
Sure.
user@vps:~$ sudo lxc info
config:
  core.https_address: '[::]:8444'
  core.trust_password: true  # Added by me: There is a lot missing after this line, right?
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- macaroon_authentication
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- candid_authentication
- backup_compression
- candid_config
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- candid_config_key
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- rbac
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- resources_system
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
environment:
  addresses:
  - public-ipv4:8444
  - '[ipv6]:8444'
  - 10.10.100.1:8444
  - '[ipv6]:8444'
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    CERTIFICATE-STUFF==
    -----END CERTIFICATE-----
  certificate_fingerprint: blablablabla124234235342535
  driver: lxc
  driver_version: 4.0.2
  firewall: xtables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    netnsid_getifaddrs: "false"
    seccomp_listener: "false"
    seccomp_listener_continue: "false"
    shiftfs: "false"
    uevent_injection: "false"
    unpriv_fscaps: "true"
  kernel_version: 4.15.0-96-generic
  lxc_features:
    cgroup2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    seccomp_notify: "true"
  os_name: Ubuntu
  os_version: "18.04"
  project: default
  server: lxd
  server_clustered: false
  server_name: vps.domain
  server_pid: 2920
  server_version: 4.0.0
  storage: dir
  storage_version: "1"
stgraber (Stéphane Graber), April 20, 2020, 4:59pm (#11)
Thanks, I'll try to reproduce and track it down here.
Thanks, eagerly awaiting a resolution. Wishing you all the best with the troubleshooting!
stgraber (Stéphane Graber), April 20, 2020, 6:07pm (#13)
Confirmed that there's something weird going on with limits.network.priority, looking closer at that one now.
Thanks! Should I be fine with all the other weird stuff, like the snap warnings during installation and the config only showing the first three lines? So if I leave out that limit, all is good?
stgraber (Stéphane Graber), April 20, 2020, 6:21pm (#16)
Only the network priority one is affected, all the others are fine.
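In the meantime, a minimal workaround sketch (using the profile names from the preseed above) is to unset the crashing key on each custom profile before starting containers:

# Drop limits.network.priority from each size profile until the fix lands
for p in l m s xl xs; do
    sudo lxc profile unset "$p" limits.network.priority
done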