Cannot copy container to remote server

Sup Homies,

I am struggling to copy a container from one server to another.

I think it is because the container is 50GB in size.

A 10GB container copies over just fine.

I used to be able to use this command to start the transfer:

sudo lxc copy c1 $TARGET_HOST:c1 --verbose

Then I would ssh into the remote server and expand the root disk:

sudo lxc config device set c1 root size 50GB

This was working on Ubuntu 20.04 with both servers using classic LVM storage.

The remote server I am trying to copy to now is running Ubuntu 22.04 with an LVM thinpool as the storage.
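
To confirm the two pools really are configured differently, something like this should show the LVM driver settings on each server (vglxc is the pool name from my profile; I believe lvm.use_thinpool is the key that distinguishes thinpool from classic LVM):

sudo lxc storage show vglxc
sudo lxc storage get vglxc lvm.use_thinpool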

I guess this has been asked already, but my search could not find an answer.

Any advice would be appreciated.

As good as LXD is, I wish it were simpler to set the size of the container storage.

I have struggled with this since switching to LXD from Ubuntu 18.04.

From reading this website I understand that I should set the size in the default profile:

sudo lxc profile device set default root size=50GiB

Unfortunately, this has never worked for me. Every new container created still ends up being 10GB!

Today, as well as setting the default profile to 50GiB, I have also tried both of the following on the remote server:

$ sudo lxc config device override c1 root size=50GB
Error: The device already exists
$ sudo lxc config device set c1 root size=50GB
Error: Failed to create instance update operation: Instance is busy running a "create" operation
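
Given the "Instance is busy" error, my guess is that an in-flight operation is stuck. If I understand the CLI correctly, these commands should list the pending operations and cancel a stuck one (the UUID comes from the list output):

sudo lxc operation list
sudo lxc operation delete <UUID>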

With Ubuntu 20.04 and classic LVM, one of these commands would have worked. Now that I am on Ubuntu 22.04 with thinpool LVM, they don't work.

My next step is to try to work out how to make the size setting in the default profile take effect (which it currently does not).

$ sudo lxc profile show default
config: {}
description: ""
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: vglxc
    size: 50GiB
    type: disk
name: default
used_by:
- /1.0/instances/vcsuphommies
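
One theory I have (unconfirmed) is that the profile size only applies when an instance inherits its root device from the profile, and my earlier override attempts left the container with a local root device that shadows the profile entry. If that is right, removing the local device should let the profile's 50GiB setting apply, though I have not verified this is safe on a populated container:

sudo lxc config device remove c1 root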

Any advice would be appreciated.

I have tried pulling the container from the remote server and still no joy!

$ sudo lxc copy remote:c1 c1 --refresh --mode=pull -p default -p bridge
Error: Failed instance creation: Error transferring instance data: Failed migration on target: Error reading migration control source: websocket: close 1000 (normal)
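
If more detail would help, I can retry the copy while watching the daemon log on each end; I believe this is the right incantation to stream it in a second terminal:

sudo lxc monitor --type=logging --pretty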

Both servers are running LXD 5.15.

Can you do:

sudo systemctl reload snap.lxd.daemon

On both servers and then ensure the container in question is stopped.

Then can you show the output of lxc config show <instance> --expanded on both the source and target servers.

Hello Thomas,

Thank you for your response. I've been a sick note for a few days, so I haven't had a chance to reply.

I have reloaded LXD on the source server now and rebooted, which I didn't think to do before! This, however, made no difference. I then decided to run the following command on the source container before trying the transfer:

sudo lxc config device set c1 root size=50GB

This command would not work. Eventually, I tracked the issue to a bug that you provided the solution for in your response to another thread.

Using the following commands I was able to remove some rogue snapshots:

lxd sql global 'select * from storage_volumes where name like "%/%"'
lxd sql global 'delete from storage_volumes where id = <ID>'
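
For anyone else who finds this thread: these are raw edits to the LXD database, so I would check the select output carefully before deleting anything. My rough flow, with the ID taken from the first query:

# list snapshot volume rows (their names contain a "/")
lxd sql global 'select id, name from storage_volumes where name like "%/%"'
# then delete only the rogue row by its id
lxd sql global 'delete from storage_volumes where id = <ID>'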

After correcting this issue, I was able to set the size of the disk on the source container to 50GB. Once I had done this, I was able to successfully transfer the container to the remote server using the lxc copy command.

The problem I now have is that the container starts on the remote server but I cannot stop the container without having to use the force option.

sudo lxc stop c1 --force

Services on the container do not automatically start either. I have to start the networking inside the container to get an IP address.
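
To see how far the container's init gets before I poke at the networking, a quick check (assuming systemd is PID 1 inside the container) would be:

sudo lxc exec c1 -- systemctl is-system-running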

The container config on the remote server is as follows:

$ sudo lxc config show c1 --expanded
architecture: x86_64
config:
  boot.autostart: "false"
  limits.cpu.allowance: 100%
  limits.memory: 100%
  volatile.cloud-init.instance-id: c4143041-e2db-4dac-a063-0b32dde725b4
  volatile.eth0.host_name: veth6d4cd144
  volatile.eth0.hwaddr: 00:16:3e:04:7e:e3
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: e8d0d748-35ef-4307-bb71-5fd2ef4d6e51
  volatile.uuid.generation: e8d0d748-35ef-4307-bb71-5fd2ef4d6e51
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: vglxc
    size: 50GB
    type: disk
ephemeral: false
profiles:
- default
- bridge
stateful: false
description: ""

The container config on the source server before transfer is:

$ sudo lxc config show c1 --expanded
architecture: x86_64
config:
  boot.autostart: "true"
  limits.cpu.allowance: 100%
  limits.memory: 100%
  volatile.eth0.host_name: vethda0561f8
  volatile.eth0.hwaddr: 00:16:3e:18:11:03
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":100000,"Nsid":0,"Maprange":65536},{"Isuid":false,"Isgid":true,"Hostid":100000,"Nsid":0,"Maprange":65536}]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 1448b7b7-0501-4617-8d24-03470ca017fd
  volatile.uuid.generation: 1448b7b7-0501-4617-8d24-03470ca017fd
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: vglxc
    size: 50GB
    type: disk
ephemeral: false
profiles:
- default
- bridge
stateful: false
description: ""

It is an old Ubuntu 16.04 container that I need to keep running just a little longer! It was originally a VM running on CentOS before I used the P2C tool to convert it into an LXD container.

At this stage I am not sure why the networking is not starting or why the container cannot be stopped. Any thoughts would be welcome.

After creating a new blank Ubuntu 16.04 container and copying it between the two servers, as a test, I encountered the following error:

 Error: The image used by this instance requires a CGroupV1 host system
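
That made things click: Ubuntu 22.04 boots with the unified cgroup v2 hierarchy by default, while the old systemd in a 16.04 image needs cgroup v1. A quick check on the host shows which mode is active (it prints cgroup2fs when the unified hierarchy is in use):

stat -fc %T /sys/fs/cgroup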

After updating GRUB with the following kernel parameter and rebooting the host server, I was able to transfer my old Ubuntu 16.04 container to the new server:

systemd.unified_cgroup_hierarchy=false 
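
For completeness, this is roughly how I applied it; the existing contents of GRUB_CMDLINE_LINUX_DEFAULT will differ per machine:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=false"

# then regenerate the GRUB config and reboot
sudo update-grub
sudo reboot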

The container now starts and runs fine.