Migrating Incus docker instances with attached volumes

Hi, I’ve started using docker containers on my cluster and generally it seems to work well. I am, however, having an issue that makes me think I’m missing something.

Here’s what I’m doing:

# incus storage volume create cluster nginxproxy-data
# incus storage volume create cluster nginxproxy-lets
# incus create docker:jc21/nginx-proxy-manager:latest nginxproxy
# incus storage volume attach cluster nginxproxy-data nginxproxy /data
# incus storage volume attach cluster nginxproxy-lets nginxproxy /etc/letsencrypt

Works great, I can go to the GUI, run the container, add a few proxy ports and I’m away with a running NginxProxy manager.
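
(For completeness: the docker: remote used above has to exist first; I’d added it beforehand with something along these lines, so adjust if your remote is named differently.)

# incus remote add docker https://docker.io --protocol=oci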

I have a few concerns:

  • “storage volume create” doesn’t seem to want to take a “target”, so I have to create everything “on” the target node
  • When I come to migrate the instance to another node, the docker instance itself migrates fine, but it doesn’t take the volumes with it
  • I can’t seem to move the volumes via the UI, only with a “move” command from the command line
  • I can only create such a docker setup on the CLI, is this something the UI might do in the future (i.e. attach volumes) [docker instances are of limited use (to me) without persistence]

Can anyone tell me if there’s something I’m missing, or is the UI just not caught up in terms of OCI image functionality?

I’m running the latest Zabbly on Raspberry Pi 5s, clustered on a ZFS filesystem. At the moment networking is just a simple bridge (although I’m looking at OVN).

incus storage volume create does support --target for use with local storage pools (basically anything except ceph or lvmcluster)
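
For example, to create the volume directly on a given cluster member (node2 here is just an example):

incus storage volume create cluster nginxproxy-data --target node2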

Because volumes can be shared with multiple instances, they don’t get automatically dragged along when an instance is moved. So you’d typically want to:

  • stop the instance
  • move the volumes
  • move the instance
  • start the instance again

incus storage volume move supports --target to move a volume around
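
For the example above that would look roughly like this (node2 as the destination member; depending on the version, check incus storage volume move --help for whether the destination member is passed as --target or --destination-target):

incus stop nginxproxy
incus storage volume move cluster/nginxproxy-data cluster/nginxproxy-data --target node2
incus storage volume move cluster/nginxproxy-lets cluster/nginxproxy-lets --target node2
incus move nginxproxy --target node2
incus start nginxproxy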

It sounds like it’s mostly a UI issue, where we’d want to make sure that we can:

  • support the creation of OCI instances (asking you for the registry and image name)
  • select a target when creating a volume
  • move a volume between servers

@presztak I think those would be useful UI additions (if not already handled somehow), so probably worth putting on the list of enhancements to make to it

Hmm, so it does, my apologies, I clearly did something wrong at my end on that one, which as you say just leaves me with UI issues :slight_smile:

I did post this somewhere else, but in terms of migration, it would be really nice if there was an option that did something like:

  • run “copy” for the instance (and volumes)
  • stop the instance
  • run a copy --refresh
  • start the instance
  • remove the old source container and images

For large instances this would reduce the downtime from potentially minutes or even tens of minutes to literally the time it takes a container to reboot. It would make a massive difference to uptime when reorganising … and it would make the downtime of a move predictable.
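
Hand-rolled, what I have in mind would be roughly the following (node2 and the temporary name are just placeholders, the attached volumes would need the same treatment, and I haven’t tested this exact sequence):

# incus copy nginxproxy nginxproxy-tmp --target node2
# incus stop nginxproxy
# incus copy nginxproxy nginxproxy-tmp --refresh
# incus delete nginxproxy
# incus rename nginxproxy-tmp nginxproxy
# incus start nginxproxy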

Just another follow-on: I have a script that takes an incremental backup of the entire cluster every hour, which takes about a minute to run. One thing I noticed was that the UI would migrate a docker instance (without its data volume), which would then refuse to run because there was no data volume on that node. However, when you do an “incus copy” from the command line, it actually fails with an error, because it sees it’s copying an instance without its dependent data … i.e. it seems to be doing validation that the UI is not (?)

My script, as a point of reference, amended to also copy custom (docker) volumes so that the docker instance copies work.

#!/usr/bin/env bash

self=rad
project=standby
echo '------------'
date

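# Refresh a copy of every custom (docker) volume in the 'cluster' pool onto this node, into the standby project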
for i in `incus storage volume ls cluster -c nL -fcsv type=custom`
do
	inst=`echo $i|cut -d"," -f1`
	host=`echo $i|cut -d"," -f2`
	echo "Copying custom (docker) Image :: ${insg} from ${host}"
	incus storage volume copy ${host}:cluster/${inst} cluster/${inst} --target-project=${project} --destination-target=${self} --refresh
done

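# Refresh a copy of every instance onto this node, into the standby project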
for i in `incus list -c nL -f csv,noheader|tr ' ' '_'`
do
	inst=`echo $i|cut -d"," -f1`
	host=`echo $i|cut -d"," -f2`
	echo "Copying standard (incus) Image :: ${inst} from ${host}"
	incus copy $inst $inst --target-project=${project} --refresh --target=${self}
done

Script to copy all instances to a project called “standby” on a spare machine
25 Instances, target consumes ~ 300G (compressed ZFS), runtime ~ 1m
(runs on node ‘rad’)
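
It’s just fired from cron on rad every hour, along these lines (the path is simply wherever the script lives):

0 * * * * /usr/local/bin/standby-sync.sh >> /var/log/standby-sync.log 2>&1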