BTRFS Disk Usage incorrect

LXD: 4.0.5
I have two servers, both running Ubuntu: one with ZFS and one with BTRFS. (I don't think the two-server setup is relevant, since the issue is reproducible with just one server.)

BTRFS server setup: when installing Ubuntu Server, I created a separate partition and formatted it with BTRFS.
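For context, the equivalent manual setup would look roughly like this (a sketch; /dev/sdb1 stands in for whatever partition the installer actually created, and /btrfs is the mount point referenced below):

sudo mkfs.btrfs /dev/sdb1
sudo mkdir -p /btrfs
sudo mount /dev/sdb1 /btrfs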

During the LXD initialisation process, I answered the storage prompts as follows:

Name of the storage backend to use (ceph, btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]: no
Name of the existing BTRFS pool or dataset: /btrfs
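For anyone reproducing this non-interactively, the same pool can be created with a single command (a sketch; the pool name default matches the root device in the config below):

lxc storage create default btrfs source=/btrfs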

I created an Ubuntu container on my Ubuntu ZFS server and created a dummy file inside it using

head -c 20GB /dev/urandom > data.bin
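Note that GNU head treats the GB suffix as 10^9 bytes, so this writes a 20000000000-byte file; the size can be verified inside the container with:

ls -l data.bin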

I then took a snapshot and copied the container to the Ubuntu server with BTRFS. When I check the disk usage there, it is reported as only 14MB.
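For reference, the snapshot was taken with the standard command (snap0 is a placeholder snapshot name):

lxc snapshot dummy snap0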

Results from lxc info dummy (two screenshots, omitted here) show the disk usage as only 14MB.

Probably related to this: https://github.com/lxc/lxd/issues/8468

Correction: where I said copy above, I actually used migration.
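Concretely, the transfer was along these lines (a sketch; zfs-server is a placeholder remote name for the ZFS host, and pull mode matches the source block in the config below):

lxc move zfs-server:dummy dummy

The instance configuration after the migration: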

{
    "name": "dummy",
    "architecture": "aarch64",
    "type": "container",
    "profiles": [
        "custom-default",
        "custom-nat"
    ],
    "config": {
        "image.architecture": "arm64",
        "image.description": "Ubuntu focal arm64 (20210120_07:42)",
        "image.os": "Ubuntu",
        "image.release": "focal",
        "image.serial": "20210120_07:42",
        "image.type": "squashfs",
        "limits.cpu": "1",
        "limits.memory": "1GB",
        "volatile.base_image": "766788f3eb910d209469ccb48109d3236d1bf60897bb2bf52e5d14e12a5a2a3d",
        "volatile.eth0.hwaddr": "00:16:3e:aa:7c:60",
        "volatile.idmap.base": "0",
        "volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
        "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
        "volatile.last_state.idmap": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
        "volatile.last_state.power": "STOPPED",
        "volatile.uuid": "913ef2d2-8d29-4afd-a0c9-d10c3f5bc37a"
    },
    "source": {
        "type": "migration",
        "mode": "pull",
        "operation": "https://192.168.1.100:8443/1.0/operations/4a032839-ec1f-4b3e-af05-c04dfc985f05",
        "certificate": "<!-- removed -->",
        "secrets": {
            "control": "6f0f1882626dd85448db6f6f88ecfd6777ac080f8890b0144c0775002834db8b",
            "fs": "ad50c01be6410ab514c5174a0b93ffbf3e47960ad8d3dc9e8d7d24cb75a9efa9"
        },
        "instance_only": false
    },
    "devices": {
        "root": {
            "path": "/",
            "pool": "default",
            "size": "25GB",
            "type": "disk"
        }
    },
    "ephemeral": false,
    "stateful": false
} 

I migrated it back to the Ubuntu ZFS server, and now the disk usage reports 20512034816 bytes. That is the difference between the base image and what is on disk, which matches my understanding of what disk usage is meant to report.
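As a sanity check on the numbers: head -c 20GB writes 20 x 10^9 = 20000000000 bytes, so the 20512034816 bytes reported on ZFS is the dummy file plus roughly 512MB of other changes on top of the base image. The 14MB figure on the BTRFS server can be cross-checked outside of LXD (assuming the pool source is mounted at /btrfs and LXD's usual containers/<name> subvolume layout):

sudo btrfs filesystem du -s /btrfs/containers/dummy

which should report the full ~20GB that is actually on disk.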