limits.disk.priority prevents container from starting

On a fresh install of Ubuntu 19.04, with the latest LXD snap:

core 16-2.38.1 6818 stable canonical✓ core
lxd 3.14 10934 stable/… canonical✓ -

For some reason, if I apply limits.disk.priority (for example limits.disk.priority: "9"), the container fails to start with this error:

Config parsing error: Failed to set cgroup blkio.weight="900": setting cgroup item for the container failed.

Any ideas about what's happening?
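For context, LXD appears to scale the 0-10 priority into the cgroup's 0-1000 weight range before writing it, which is why priority "9" shows up as blkio.weight="900" in the error. A minimal sketch of that mapping; the multiply-by-100 scaling is inferred from the error message, not taken from the LXD source:

```shell
# Apparent limits.disk.priority -> blkio.weight mapping (inferred, not
# confirmed against LXD source): scale the 0-10 priority by 100.
priority=9
weight=$((priority * 100))
echo "limits.disk.priority=$priority -> blkio.weight=$weight"
```

So the failure is not about the value being out of range; 900 is a valid blkio.weight, and the write itself is what fails.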

Various configs and logs:

lxc.conf
lxc.log.file = /var/snap/lxd/common/lxd/logs/test/lxc.log
lxc.log.level = warn
lxc.console.buffer.size = auto
lxc.console.size = auto
lxc.console.logfile = /var/snap/lxd/common/lxd/logs/test/console.log
lxc.mount.auto = proc:rw sys:rw
lxc.autodev = 1
lxc.pty.max = 1024
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file,optional 0 0
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file,optional 0 0
lxc.mount.entry = /proc/sys/fs/binfmt_misc proc/sys/fs/binfmt_misc none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none rbind,create=dir,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none rbind,create=dir,optional 0 0
lxc.mount.entry = /dev/mqueue dev/mqueue none rbind,create=dir,optional 0 0
lxc.include = /snap/lxd/current/lxc/config//common.conf.d/
lxc.arch = linux64
lxc.hook.version = 1
lxc.hook.pre-start = /snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 5 start
lxc.hook.stop = /snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 5 stopns
lxc.hook.post-stop = /snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 5 stop
lxc.tty.max = 0
lxc.uts.name = test
lxc.mount.entry = /var/snap/lxd/common/lxd/devlxd dev/lxd none bind,create=dir 0 0
lxc.apparmor.profile = lxd-test_//&:lxd-test_:
lxc.seccomp.profile = /var/snap/lxd/common/lxd/security/seccomp/test
lxc.idmap = u 0 1000000 1000000000
lxc.idmap = g 0 1000000 1000000000
lxc.rootfs.path = dir:/var/snap/lxd/common/lxd/containers/test/rootfs
lxc.net.0.type = veth
lxc.net.0.script.up = /snap/lxd/current/bin/lxd callhook /var/snap/lxd/common/lxd 5 network-up eth0
lxc.net.0.flags = up
lxc.net.0.link = lxdbr0
lxc.net.0.hwaddr = 00:16:3e:32:16:57
lxc.net.0.name = eth0
lxc.mount.auto = shmounts:/var/snap/lxd/common/lxd/shmounts/test:/dev/.lxd-mounts
cat /var/snap/lxd/common/lxd/logs/test/lxc.log
lxc test 20190620140756.925 WARN     conf - conf.c:lxc_map_ids:2970 - newuidmap binary is missing
lxc test 20190620140756.926 WARN     conf - conf.c:lxc_map_ids:2976 - newgidmap binary is missing
lxc test 20190620140756.928 WARN     conf - conf.c:lxc_map_ids:2970 - newuidmap binary is missing
lxc test 20190620140756.928 WARN     conf - conf.c:lxc_map_ids:2976 - newgidmap binary is missing
lxc test 20190620140757.632 WARN     conf - conf.c:lxc_setup_devpts:1641 - Invalid argument - Failed to unmount old devpts instance
cat /var/snap/lxd/common/lxd/logs/lxd.log
t=2019-06-20T13:28:19+0000 lvl=info msg="LXD 3.14 is starting in normal mode" path=/var/snap/lxd/common/lxd
t=2019-06-20T13:28:19+0000 lvl=info msg="Kernel uid/gid map:" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - u 0 0 4294967295" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - g 0 0 4294967295" 
t=2019-06-20T13:28:19+0000 lvl=info msg="Configured LXD uid/gid map:" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - u 0 1000000 1000000000" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - g 0 1000000 1000000000" 
t=2019-06-20T13:28:19+0000 lvl=warn msg="CGroup memory swap accounting is disabled, swap limits will be ignored." 
t=2019-06-20T13:28:19+0000 lvl=info msg="Kernel features:" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - netnsid-based network retrieval: yes" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - uevent injection: yes" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - seccomp listener: yes" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - unprivileged file capabilities: yes" 
t=2019-06-20T13:28:19+0000 lvl=info msg=" - shiftfs support: no" 
t=2019-06-20T13:28:19+0000 lvl=info msg="Initializing local database" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Starting /dev/lxd handler:" 
t=2019-06-20T13:28:20+0000 lvl=info msg=" - binding devlxd socket" socket=/var/snap/lxd/common/lxd/devlxd/sock
t=2019-06-20T13:28:20+0000 lvl=info msg="REST API daemon:" 
t=2019-06-20T13:28:20+0000 lvl=info msg=" - binding Unix socket" inherited=true socket=/var/snap/lxd/common/lxd/unix.socket
t=2019-06-20T13:28:20+0000 lvl=info msg=" - binding TCP socket" socket=[::]:8443
t=2019-06-20T13:28:20+0000 lvl=info msg="Initializing global database" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Initializing storage pools" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Applying patch: storage_api_rename_container_snapshots_dir_again" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Applying patch: storage_api_rename_container_snapshots_links_again" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Applying patch: storage_api_rename_container_snapshots_dir_again_again" 
t=2019-06-20T13:28:20+0000 lvl=info msg="Initializing networks" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Pruning leftover image files" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done pruning leftover image files" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Loading daemon configuration" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Started seccomp handler" path=/var/snap/lxd/common/lxd/seccomp.socket
t=2019-06-20T13:28:21+0000 lvl=info msg="Pruning expired images" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done pruning expired images" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Pruning expired container backups" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done pruning expired container backups" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Updating instance types" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done updating instance types" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Updating images" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done updating images" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Expiring log files" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Done expiring log files" 
t=2019-06-20T13:28:21+0000 lvl=info msg="Starting container" action=start created=2019-06-20T13:22:20+0000 ephemeral=false name=test project=default stateful=false used=2019-06-20T13:26:17+0000
t=2019-06-20T13:28:22+0000 lvl=eror msg="Failed to apply network priority" container=test err="Can't set network priority on stopped container"
t=2019-06-20T13:28:22+0000 lvl=eror msg="Failed starting container" action=start created=2019-06-20T13:22:20+0000 ephemeral=false name=test project=default stateful=false used=2019-06-20T13:26:17+0000
t=2019-06-20T13:28:22+0000 lvl=eror msg="Failed to start container 'test': Failed to run: /snap/lxd/current/bin/lxd forkstart test /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/test/lxc.conf: " 
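Since the failing write is to blkio.weight, one thing worth checking is whether the host kernel exposes that cgroup file at all. Ubuntu 19.04 ships a 5.0 kernel, where the legacy CFQ I/O scheduler (which backs blkio.weight on cgroup v1) was removed in favour of blk-mq/BFQ, so the file can simply be absent; that would also fit the observation below that an 18.04 host (4.15 kernel, CFQ still available) works. A quick probe, assuming a cgroup v1 host with the standard mount point (paths are an assumption, not taken from this report):

```shell
# Probe for the cgroup v1 proportional-weight files. On kernels without
# CFQ, blkio.weight is typically absent; BFQ instead exposes
# blkio.bfq.weight. "status" collects one word per file checked.
status=""
for f in /sys/fs/cgroup/blkio/blkio.weight /sys/fs/cgroup/blkio/blkio.bfq.weight; do
    if [ -f "$f" ]; then
        echo "present: $f"
        status="$status present"
    else
        echo "missing: $f"
        status="$status missing"
    fi
done
```

If blkio.weight is missing on the 19.04 host but present on 18.04, that would explain why the same config starts fine on one and not the other.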

Essentially, I could previously define a config like the following and the container would start fine; I'm not sure what's changed.

architecture: x86_64
config:
  boot.autostart: "1"
  image.architecture: amd64
  image.description: ubuntu 19.04 amd64 (release) (20190619)
  image.label: release
  image.os: ubuntu
  image.release: disco
  image.serial: "20190619"
  image.version: "19.04"
  limits.cpu: "1"
  limits.cpu.allowance: 100%
  limits.cpu.priority: "9"
  limits.disk.priority: "9"
  limits.memory: 1024MB
  limits.memory.enforce: hard
  limits.memory.swap: "1"
  limits.memory.swap.priority: "10"
  limits.network.priority: "7"
  limits.processes: "1000"
  security.nesting: "1"
  security.privileged: "0"
  volatile.eth0.hwaddr: 00:16:3e:32:16:57
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices: {}
ephemeral: false
profiles:
- default
stateful: false
description: this is a description...

Thanks in advance to anyone who can see what's wrong; please ask for further info if you need it.

Issue resolved by using 18.04 on the host.
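For anyone who wants to stay on 19.04 instead of downgrading the host, a workaround sketch (untested here) is to drop the offending key so LXD never attempts the blkio.weight write; `lxc config unset` removes a key from the container's local config:

```shell
# Remove the key that triggers the failing cgroup write, then start
# the container normally. "test" is the container name from this report.
lxc config unset test limits.disk.priority
lxc start test
```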

I'm inclined to blame Digital Ocean's 19.04 image.