Silly me, I just realised that last year I added custom code to my restore process which removes subsequent snapshots before restoring, to prevent this kind of issue when using ZFS, since at the time volume.zfs.remove_snapshots=true was not supported. It might be interesting to also add a version limitation to the docs when they are written.
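On LXD versions where that storage option is available, the custom pruning code should no longer be needed. A minimal sketch, assuming the pool is named "default" (as in the payload below):

```shell
# Hedged example: on LXD builds that support volume.zfs.remove_snapshots,
# this tells ZFS-backed restores to delete any newer snapshots
# automatically instead of failing.
lxc storage set default volume.zfs.remove_snapshots true
```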
As for the error when restoring a snapshot on the ORIGINAL container after running a copy:

Snapshot "container-test-20210203-02" cannot be restored due to subsequent internal snapshot(s) (from a copy)

This is what I sent to the API. As you can see, it does not copy snapshots, but now I can't restore using snapshots on the original container anymore due to the error above. This seems to be a problem; is there any way round this?
{
  "name": "container-clone",
  "architecture": "aarch64",
  "type": "container",
  "profiles": [
    "custom-default",
    "custom-nat"
  ],
  "config": {
    "image.architecture": "arm64",
    "image.description": "Ubuntu focal arm64 (20210120_07:42)",
    "image.os": "Ubuntu",
    "image.release": "focal",
    "image.serial": "20210120_07:42",
    "image.type": "squashfs",
    "limits.cpu": "1",
    "limits.memory": "1GB",
    "volatile.base_image": "766788f3eb910d209469ccb48109d3236d1bf60897bb2bf52e5d14e12a5a2a3d"
  },
  "source": {
    "type": "copy",
    "certificate": null,
    "base-image": "766788f3eb910d209469ccb48109d3236d1bf60897bb2bf52e5d14e12a5a2a3d",
    "source": "container-test",
    "live": false,
    "instance_only": true
  },
  "devices": {
    "root": {
      "path": "/",
      "pool": "default",
      "size": "5GB",
      "type": "disk"
    }
  },
  "ephemeral": false,
  "stateful": false
}
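For reference, the pruning step I mentioned is simple to reproduce. A minimal sketch (assuming snapshot names embed a sortable timestamp like "container-test-20210203-02", so lexicographic order matches creation order; the function name is mine, not part of any LXD API):

```python
from typing import List

def snapshots_to_prune(snapshots: List[str], restore_target: str) -> List[str]:
    """Return the snapshots that sort after the restore target.

    ZFS cannot roll back past newer snapshots, so anything created
    after the target must be deleted first. Assumes names embed a
    sortable timestamp, so string order matches creation order.
    """
    return [name for name in snapshots if name > restore_target]
```

Each returned snapshot can then be deleted through the snapshots endpoint of the LXD API before POSTing the restore request, which is effectively what my custom restore code does.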