Hello Thomas,
Thank you for your response. I've been off sick for a few days, so I haven't had a chance to reply.
I have now reloaded LXD on the source server and rebooted, which I didn't think to do before! This made no difference, however. I then decided to run the following command against the source container before trying the transfer:
sudo lxc config device set c1 root size=50GB
This command would not work. Eventually I tracked the issue down to a bug that you provided the solution for in your response to another thread.
Using the following commands I was able to remove some rogue snapshots:
lxd sql global 'select * from storage_volumes where name like "%/%"'
lxd sql global 'delete from storage_volumes where id = <ID>'
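For anyone else hitting this, the <ID> in the delete command is the id column from the rows the select returns; after the delete, re-running the select should come back with no matching rows:
lxd sql global 'select * from storage_volumes where name like "%/%"'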
After correcting this issue I was able to set the size of the root disk on the source container to 50GB. Once I had done that, I was able to successfully transfer the container to the remote server using the lxc copy command.
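For reference, the copy itself was along these lines (remote2 is just a placeholder here for however the target server is registered with lxc remote):
sudo lxc copy c1 remote2:c1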
The problem I now have is that the container starts on the remote server, but I cannot stop it without using the force option:
sudo lxc stop c1 --force
Services in the container do not start automatically either. I have to bring up the networking inside the container by hand to get an IP address.
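My understanding is that a clean lxc stop just signals the container's init process and waits, so the next things I plan to check are what is actually running as PID 1 inside the container and what the console shows during boot, roughly along these lines (the last command assumes systemd really is PID 1 in there):
sudo lxc exec c1 -- ps -p 1 -o pid,comm           # what is init inside the container?
sudo lxc console c1 --show-log                    # console output from the container's boot
sudo lxc exec c1 -- systemctl list-units --failed # any failed units, assuming systemd is PID 1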
The container config on the remote server is as follows:
$ sudo lxc config show c1 --expanded
architecture: x86_64
config:
  boot.autostart: "false"
  limits.cpu.allowance: 100%
  limits.memory: 100%
  volatile.cloud-init.instance-id: c4143041-e2db-4dac-a063-0b32dde725b4
  volatile.eth0.host_name: veth6d4cd144
  volatile.eth0.hwaddr: 00:16:3e:04:7e:e3
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: e8d0d748-35ef-4307-bb71-5fd2ef4d6e51
  volatile.uuid.generation: e8d0d748-35ef-4307-bb71-5fd2ef4d6e51
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: vglxc
    size: 50GB
    type: disk
ephemeral: false
profiles:
- default
- bridge
stateful: false
description: ""
The container config on the source server before the transfer is:
$ sudo lxc config show c1 --expanded
architecture: x86_64
config:
  boot.autostart: "true"
  limits.cpu.allowance: 100%
  limits.memory: 100%
  volatile.eth0.host_name: vethda0561f8
  volatile.eth0.hwaddr: 00:16:3e:18:11:03
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 1448b7b7-0501-4617-8d24-03470ca017fd
  volatile.uuid.generation: 1448b7b7-0501-4617-8d24-03470ca017fd
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: vglxc
    size: 50GB
    type: disk
ephemeral: false
profiles:
- default
- bridge
stateful: false
description: ""
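In case it is easier to see the delta, this is roughly how I have been comparing the two dumps (the file names are just scratch files on my side, copied onto one machine):
sudo lxc config show c1 --expanded > c1-source.yaml   # run on the source server
sudo lxc config show c1 --expanded > c1-remote.yaml   # run on the remote server
diff c1-source.yaml c1-remote.yaml                    # after copying both files to one machine
The differences that stand out to me are boot.autostart ("true" on the source, "false" on the remote), the volatile idmap ranges (100000/65536 on the source against 1000000/1000000000 on the remote), and volatile.last_state.idmap being empty on the remote.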
It is an old Ubuntu 16.04 container that I need to keep running just a little longer! It was originally a VM running on CentOS before I used the P2C tool to convert it into an LXD container.
At this stage I am not sure why the networking is not starting or why the container cannot be stopped cleanly. Any thoughts would be welcome.