An easy Backup Script for Incus :-)

Hello there, and first of all, thanks to Stephane for his great work!! :slight_smile:

I recently came to run a whole bunch of VMs across different clusters of servers using Incus, and faced a little problem regarding how to automate backups. I mean, something simple enough to be troubleshot by anyone, even me!!

I came up with this little script, which lets me back up one entire cluster member to another, automatically.

#!/bin/bash

#To back up an entire server, first export the list of instances it hosts:
incus list location="$1" -c n -f csv > savetodo.txt

#Then define the date format to use in the generated file names, and the path to the list file generated above:
bkpdate="$(date +%Y%m%d)"
input="./savetodo.txt"

#Finally, do the thing!
while read -r line
do
  incus export "$line" "$line.$bkpdate.bkp.xz"
done < "$input"

One can run this script on each server, with the next member's name as the argument, for example:

./save.sh <next-membername> 

I use XZ compression for backups, but as the compressor isn't really fast, I set the backup compression settings as follows. On our 2×12-core CPU servers, hardly anyone notices when it's backing up:

backups.compression_algorithm=xz -8 -T8
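This key is server-level configuration, so a hedged sketch of applying it (the key name and value are the ones quoted above; adjust the thread count to your hardware):

```shell
# Set the backup compressor server-wide: xz at level 8 with 8 threads.
# Run on each cluster member as appropriate for your setup.
incus config set backups.compression_algorithm "xz -8 -T8"
```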

Note that each backup runs in three steps:

1 - The server hosting the VM/CT performs the backup and compresses the data.
2 - When the tarball is complete, it is sent to the server where the backup command runs.
3 - When the upload is complete, the backup process moves on to the next VM/CT.

Hope it helps :slight_smile:

Best regards,


Cool. I think you should be able to avoid creating the temporary file, something like this:

#!/bin/bash
bkpdate="$(date "+%Y%m%d")"
incus list location="$1" -c n -f csv |
while read -r line
do
  incus export "$line" "$line.$bkpdate.bkp.xz"
done

Hello,

I recently updated my script to better fit my needs :slight_smile:

  • It backs up every instance in a cluster, regardless of project:
#!/bin/bash
bkpdate="$(date +%Y%m%d)"
incus project switch default
incus project list -c n -f csv |
while read -r project
do
  incus project switch "$project"
  sleep 2
  incus list -c n -f csv |
  while read -r instance
  do
    incus export "$instance" "$instance.$bkpdate.bkp.tar.gz" --instance-only #--optimized-storage
  done
done

Hope it helps,


What the script does is:

  1. for each project in your Incus installation,
    a. for each instance in that project,
       run the command incus export with the parameter --instance-only.

Here is the documentation on incus export: How to back up instances - Incus documentation

Thanks Simos for the explanations :slight_smile:

A little bit of context:

I had problems with multiple projects: 'incus export' only exports VMs it can 'access' on a per-project basis, so it is necessary to switch to a specific project to back up a VM in that project.

Not a big deal, as it can easily be done in a cron job, by first issuing a command like:

incus project switch my-awesome-project

BUT, when running multiple backup jobs for different projects, if one starts before another has completed, the switch changes the current project for the cron-backup user, stopping a partially done backup job while starting another in a different project. One could use different users to back up different projects; since projects are used to separate management, they could also be used for backups, not a big deal. But that means maintaining twice the number of accounts, one for managing a project and another for backing it up, with scheduled password changes... a bit of a hassle.
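For what it's worth, one way to sidestep the switching race entirely is the global --project flag, which scopes a single command without touching the caller's current project. A sketch under that assumption; the helper only composes the command string, so it can be sanity-checked without a live cluster, and vm01/web are placeholder names:

```shell
#!/bin/bash
# Sketch: scope each command with --project instead of "incus project switch",
# so concurrent cron jobs cannot clobber each other's current project.

# Compose the export command for one instance (pure string logic).
compose_export() {
  local instance="$1" project="$2" bkpdate="$3"
  printf 'incus export %s %s.%s.bkp.tar.gz --project %s --instance-only' \
    "$instance" "$instance" "$bkpdate" "$project"
}

compose_export vm01 web 20240101; echo
# The real loop would then be, per project:
#   incus list --project "$project" -c n -f csv | while read -r i; do ...; done
```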

We first thought about delegating backups to 'project owners' (department heads), but realized that not all of them had the foresight to handle it, so we went with a central backup solution for our Incus clusters.

So, a script that backs up every instance of every project was needed.
Inspired by Candlerb's suggested code, I came to this :slight_smile:

#!/bin/bash
#1st: define the date as a variable
bkpdate="$(date +%Y%m%d)"

#2nd: switch to default and list the Incus projects; "-c n" just the name please, "-f csv" no table
incus project switch default
incus project list -c n -f csv |

#3rd: read the output, and for each line (each project name) do "incus project switch <project-name>"
while read -r project
do
  incus project switch "$project"
  sleep 2

#4th: in the current project, list instances; "-c n" just the name please, "-f csv" no table please
  incus list -c n -f csv |

#5th: read the instance list, and for each instance name run the following command: incus export...
  while read -r instance
  do

#6th: export the instance to a tar.gz file; it can be /absolute/path/$instance.tar.gz, and accepts options such as --instance-only (to exclude snapshots from the backup) or --optimized-storage
    incus export "$instance" "$instance.$bkpdate.bkp.tar.gz" --instance-only #--optimized-storage

#7th: when it's done, switch to the next project and export every instance in it... and so on, until all lines are read and the script exits
  done
done

I was thinking about writing a little 'how to back up your Incus instances' guide, but my use cases are probably not 'the use case' :slight_smile:
(We actually run a bunch of small stretch clusters for educational needs, so export is an easy way to achieve data backup, then Borg to manage the retention policy and store encrypted backups offsite... REAR is used for bare-metal recovery, and BareOS for file-level backup across hosts (including Incus data). Far from 'state of the art', but fully open-source, functional, and Incus-based :slight_smile: Instances are used for student projects, so it's easier to back each one up as a whole...)

If some of you would like to share how your backups run, or how you would like them to run in an ideal world, it could be cool to talk about this subject :slight_smile:

Thanks for this.
The reason the CLI client incus needs you to specify the project and then perform the export is that it's designed that way.

I believe that it should be possible to use the API to perform the exports in a way that you do not need to specify the project first. This would require writing the appropriate script.
As is shown below, you need to get the fingerprints of the instances, and then you can export each one without specifying the project.

I've thought about that, as one can extract the UUIDs of instances by issuing a command like:

incus list -c volatile.uuid:UUID -f csv

The problem I faced next was: "OK, I could export VMs based on their UUID, but then, how do I keep track of which VM I'm backing up..." It requires exporting the VM name too, and treating both as arguments, the first as the backup source and the second to name the backup... it did not seem wise to go in that direction, but maybe I'm wrong :slight_smile:
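One middle ground, sketched below: list both the name and the UUID on one CSV line and keep both in the archive name, so the backup stays traceable without giving up readable names. The column syntax is the one quoted above; the naming helper is plain shell, and the sample values are hypothetical:

```shell
#!/bin/bash
# Sketch: derive an archive name from one "name,uuid" CSV line, keeping the
# human-readable name while recording the unique ID alongside it.
name_backup() {
  local line="$1"
  local name="${line%%,*}"   # text before the first comma
  local uuid="${line#*,}"    # text after the first comma
  printf '%s.%s.bkp.tar.gz' "$name" "$uuid"
}

name_backup "vm01,123e4567-e89b-12d3-a456-426614174000"; echo
# The CSV lines could come from:
#   incus list -c n,volatile.uuid:UUID -f csv
```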

I never thought about someone naming an instance in a way that could disrupt the backup process... but I'll keep an eye on it, even though I believe special characters are not allowed in the instance naming scheme :slight_smile:
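If odd names ever become a concern, a cheap guard before exporting is possible. A sketch; the allowed pattern here (letters, digits, and hyphens, no leading hyphen) is an assumption to be checked against the Incus naming rules:

```shell
#!/bin/bash
# Sketch: reject instance names containing anything beyond letters, digits,
# and hyphens, or starting with a hyphen. The pattern is an assumption; see
# the Incus documentation for the authoritative rules.
safe_name() {
  case "$1" in
    *[!A-Za-z0-9-]*|-*|"") return 1 ;;  # bad character, leading '-', or empty
    *) return 0 ;;
  esac
}

safe_name "vm01"        && echo "ok"
safe_name "vm 01; oops" || echo "rejected"
```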

--all-projects permits listing all instances across all projects, but using the incus CLI tools rather than the API would still result in problems finding the instances :slight_smile:

My actual backup script stores instances in per-project folders, so the workaround is already part of the process, using $project as:

incus export $instance /path/to/$project/$instance.$bkpdate.bkp.tar.gz
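A minimal sketch of that per-project layout, with the folder created up front; /srv/backups and the project/instance names are placeholders:

```shell
#!/bin/bash
# Sketch: build the per-project backup path described above and make sure the
# project folder exists before exporting. All concrete values are placeholders.
backup_path() {
  local root="$1" project="$2" instance="$3" bkpdate="$4"
  printf '%s/%s/%s.%s.bkp.tar.gz' "$root" "$project" "$instance" "$bkpdate"
}

root="$(mktemp -d)"                   # stand-in for the real backup root
path="$(backup_path "$root" web vm01 20240101)"
mkdir -p "$(dirname "$path")"         # create the per-project folder
echo "$path"
# The real export would then be:
#   incus export "$instance" "$path" --instance-only
```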

I did not think about saving the instance image only; is the overhead of a full instance backup a big deal? I believe --instance-only is related to this, as it won't export snapshots, so image only, but I might be wrong, I did not go through the code :slight_smile:

Thanks for the link to this discussion, I'll take a look :slight_smile:
I was thinking about a Python script to run these remotely; authenticating to the API is fairly simple... thanks for the idea :slight_smile: