Kopia backup vs Incus

Hello,

I use Kopia to back up my important files to a Hetzner storage box in Europe for €3.50.
What should I back up for Incus, considering I use custom volumes?

Following this guide, do I just need to tarball the /var/lib/incus directory and send it to my storage box? If that’s it, it’s magic! :sparkles:
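
Something along these lines is what I have in mind (the archive name and path are just placeholders):

# archive the Incus state directory, then let Kopia pick up the tarball
tar -czf /root/incus-backup.tar.gz -C /var/lib incus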

folder size

my containers

root@incus:~# incus list
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
|           NAME           |  STATE  |       IPV4        |                     IPV6                     |   TYPE    | SNAPSHOTS |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| adguardhome              | RUNNING | 10.0.0.12 (eth0)  | fd42:f836:2ae:e691:216:3eff:fee6:87a2 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| caddy                    | RUNNING | 10.0.0.31 (eth0)  | fd42:f836:2ae:e691:216:3eff:febe:ec5a (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| dns-technitium-authority | RUNNING | 10.0.0.17 (eth0)  | fd42:f836:2ae:e691:216:3eff:fe5d:6c84 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| dns-technitium-recursive | RUNNING | 10.0.0.15 (eth0)  | fd42:f836:2ae:e691:216:3eff:fead:c571 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| filebrowser              | RUNNING | 10.0.0.147 (eth0) | fd42:f836:2ae:e691:216:3eff:fe95:fc79 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| filebrowser-local        | RUNNING | 10.0.0.148 (eth0) | fd42:f836:2ae:e691:216:3eff:fe30:803b (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| goaccess                 | RUNNING | 10.0.0.178 (eth0) | fd42:f836:2ae:e691:216:3eff:fe49:4005 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| jump2-ssh                | RUNNING | 10.0.0.54 (eth0)  | fd42:f836:2ae:e691:216:3eff:fe34:56d4 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| mkdocs                   | RUNNING | 10.0.0.14 (eth0)  | fd42:f836:2ae:e691:216:3eff:fe3f:c170 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| myspeed                  | RUNNING | 10.0.0.13 (eth0)  | fd42:f836:2ae:e691:216:3eff:fec4:fab6 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| nextjs                   | RUNNING | 10.0.0.170 (eth0) | fd42:f836:2ae:e691:216:3eff:fe2b:8c74 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| pagefind                 | STOPPED |                   |                                              | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| prestashop               | RUNNING | 10.0.0.248 (eth0) | fd42:f836:2ae:e691:216:3eff:fe31:cbcf (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+
| sftp-publii              | RUNNING | 10.0.0.77 (eth0)  | fd42:f836:2ae:e691:216:3eff:fedb:7292 (eth0) | CONTAINER | 0         |
+--------------------------+---------+-------------------+----------------------------------------------+-----------+-----------+

my storage volumes

root@incus:~# incus storage volume list tank
+-----------+--------------------------+-------------+--------------+---------+
|   TYPE    |           NAME           | DESCRIPTION | CONTENT-TYPE | USED BY |
+-----------+--------------------------+-------------+--------------+---------+
| container | adguardhome              |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | caddy                    |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | dns-technitium-authority |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | dns-technitium-recursive |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | filebrowser              |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | filebrowser-local        |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | goaccess                 |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | jump2-ssh                |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | mkdocs                   |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | myspeed                  |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | nextjs                   |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | pagefind                 |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | prestashop               |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| container | sftp-publii              |             | filesystem   | 1       |
+-----------+--------------------------+-------------+--------------+---------+
| custom    | filebrowser              |             | filesystem   | 3       |
+-----------+--------------------------+-------------+--------------+---------+
| custom    | publii                   |             | filesystem   | 3       |
+-----------+--------------------------+-------------+--------------+---------+
| custom    | sftpbox                  |             | filesystem   | 0       |
+-----------+--------------------------+-------------+--------------+---------+
| custom    | webdata                  |             | filesystem   | 3       |
+-----------+--------------------------+-------------+--------------+---------+

It depends on the type of storage you are using.

incus storage list
incus storage show <name>

It is possible that all of your data is in that directory. It is also possible that it is not.

root@incus:/var/lib# incus storage show tank
config:
  source: tank
  volatile.initial_source: tank
  zfs.pool_name: tank
description: ""
name: tank
driver: zfs
used_by:
- /1.0/instances/adguardhome
- /1.0/instances/caddy
- /1.0/instances/dns-technitium-authority
- /1.0/instances/dns-technitium-recursive
- /1.0/instances/filebrowser
- /1.0/instances/filebrowser-local
- /1.0/instances/goaccess
- /1.0/instances/jump2-ssh
- /1.0/instances/mkdocs
- /1.0/instances/myspeed
- /1.0/instances/nextjs
- /1.0/instances/pagefind
- /1.0/instances/prestashop
- /1.0/instances/sftp-publii
- /1.0/profiles/default
- /1.0/profiles/macvlan
- /1.0/storage-pools/tank/volumes/custom/filebrowser
- /1.0/storage-pools/tank/volumes/custom/publii
- /1.0/storage-pools/tank/volumes/custom/sftpbox
- /1.0/storage-pools/tank/volumes/custom/webdata
status: Created
locations:
- none
root@incus:/var/lib#
--- /var/lib/incus/storage-pools -----------------------------------------------------------------------------------------------------
                                  /..
   11.4 GiB [###################] /tank
--- /var/lib/incus/storage-pools/tank ------------------------------------------------------------------------------------------------
                                  /..
    8.8 GiB [###################] /containers
    2.6 GiB [#####              ] /custom
e   4.0 KiB [                   ] /virtual-machines-snapshots
--- /var/lib/incus/storage-pools/tank/custom -----------------------------------------------------------------------------------------
                                  /..
    2.3 GiB [###################] /default_publii
  164.3 MiB [#                  ] /default_webdata
  107.2 MiB [                   ] /default_filebrowser
e   4.0 KiB [                   ] /default_sftpbox

My ZFS storage pools are backed by disk partitions:

incus storage show <my-pool-name>
config:
  source: nvme
  volatile.initial_source: /dev/nvme0n1
  zfs.pool_name: nvme
...

You can see that our volatile.initial_source values are different. How did you configure your storage pool? It is possible that it is file-based. I am still playing around and trying to figure out a way to tell.
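
Maybe something like this shows it, though I have not verified that it covers every setup:

# the vdevs listed under "config" are either block devices (disks/partitions)
# or file paths if the pool is file-backed
zpool status tank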

I used a partition labeled ‘tank’ on /dev/sda4

zpool create tank /dev/sda4 -o ashift=12
root@incus:/var/lib# lsblk -fs
NAME  FSTYPE     FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda1  vfat       FAT32       7833-864F                             505.1M     1% /boot/efi
`-sda
sda2  ext4       1.0         d9e548de-06f9-4546-8dfd-6a6efff2be87   20.8G    20% /
`-sda
sda3  swap       1           918b1001-8323-459c-b630-f351a9e37ee4                [SWAP]
`-sda
sda4  zfs_member 5000  tank  10667009160202454661
`-sda
root@incus:/var/lib# df -h
Filesystem                                Size  Used Avail Use% Mounted on
udev                                      7.7G     0  7.7G   0% /dev
tmpfs                                     1.6G  1.5M  1.6G   1% /run
/dev/sda2                                  28G  5.6G   21G  22% /
tmpfs                                     7.7G     0  7.7G   0% /dev/shm
tmpfs                                     5.0M     0  5.0M   0% /run/lock
/dev/sda1                                 511M  5.9M  506M   2% /boot/efi
tmpfs                                     100K     0  100K   0% /var/lib/incus/shmounts
tmpfs                                     100K     0  100K   0% /var/lib/incus/guestapi
tank/containers/adguardhome               420G  393M  419G   1% /var/lib/incus/storage-pools/tank/containers/adguardhome
tank/containers/caddy                     420G  605M  419G   1% /var/lib/incus/storage-pools/tank/containers/caddy
tank/custom/default_publii                422G  2.3G  419G   1% /var/lib/incus/storage-pools/tank/custom/default_publii
tank/custom/default_webdata               420G  168M  419G   1% /var/lib/incus/storage-pools/tank/custom/default_webdata
tank/containers/dns-technitium-authority  420G  719M  419G   1% /var/lib/incus/storage-pools/tank/containers/dns-technitium-authority
tank/containers/dns-technitium-recursive  420G  434M  419G   1% /var/lib/incus/storage-pools/tank/containers/dns-technitium-recursive
tank/containers/filebrowser               420G  344M  419G   1% /var/lib/incus/storage-pools/tank/containers/filebrowser
tank/containers/filebrowser-local         420G  340M  419G   1% /var/lib/incus/storage-pools/tank/containers/filebrowser-local
tank/custom/default_filebrowser           420G  108M  419G   1% /var/lib/incus/storage-pools/tank/custom/default_filebrowser
tank/containers/jump2-ssh                 419G   14M  419G   1% /var/lib/incus/storage-pools/tank/containers/jump2-ssh
tank/containers/mkdocs                    421G  1.9G  419G   1% /var/lib/incus/storage-pools/tank/containers/mkdocs
tank/containers/myspeed                   420G  694M  419G   1% /var/lib/incus/storage-pools/tank/containers/myspeed
tank/containers/nextjs                    421G  1.5G  419G   1% /var/lib/incus/storage-pools/tank/containers/nextjs
tank/containers/prestashop                421G  1.3G  419G   1% /var/lib/incus/storage-pools/tank/containers/prestashop
tank/containers/sftp-publii               419G   14M  419G   1% /var/lib/incus/storage-pools/tank/containers/sftp-publii
tank/containers/goaccess                  420G  724M  419G   1% /var/lib/incus/storage-pools/tank/containers/goaccess
root@incus:/var/lib#
root@incus:/var/lib# blkid
/dev/sda4: LABEL="tank" UUID="10667009160202454661" UUID_SUB="2351329608076347181" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="7ab8be92-e786-48b7-b267-f73f278c6532"
/dev/sda2: UUID="d9e548de-06f9-4546-8dfd-6a6efff2be87" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="08591982-7f75-4869-b0ae-1622a454d191"
/dev/sda3: UUID="918b1001-8323-459c-b630-f351a9e37ee4" TYPE="swap" PARTUUID="0050c448-3602-4d98-acec-89f0513ef006"
/dev/sda1: UUID="7833-864F" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="795f2275-9463-458f-8064-58389bc6e120"
Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes !!
Name of the new storage pool [default=default]: tank
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing LVM pool or dataset: tank

Okay, I see the difference. You created the pool directly with the ZFS tooling, whereas I used Incus commands.
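
For reference, letting Incus create the pool itself would have looked something like this (the device path here is only an example), instead of running zpool create first and then pointing the init wizard at the existing pool:

# Incus creates and manages the ZFS pool on the given device
incus storage create tank zfs source=/dev/sda4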

I don’t think I know enough at the moment to give good advice about backing up your system.

Don’t worry, though. You posted enough information for another community member to help out.

Of course, whatever you do, please test your backups and keep us updated with your progress. :slight_smile:


For Incus data itself, /var/lib/incus/ will do, but if you want your instances too (and you likely do), then you need to also include the whole tank ZFS pool in your backup.

Backup systems that rely on zfs send/receive can do that pretty cheaply.
The very costly alternative is to basically do a zfs send -R tank into a file and back that up, but it will be as large as the entire pool.
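
For illustration, the stream-to-a-file route (plus a cheaper incremental follow-up) could look something like this; the snapshot names and target paths are made up:

# snapshot the whole pool recursively, then dump a full replication
# stream into a file that Kopia can back up like any other file
zfs snapshot -r tank@backup-20240101
zfs send -R tank@backup-20240101 > /root/backups/tank-full.zfs

# later runs only need the changes since the previous snapshot
zfs snapshot -r tank@backup-20240108
zfs send -R -i tank@backup-20240101 tank@backup-20240108 > /root/backups/tank-incr.zfs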

An alternative may be to do an incus export --optimized-storage --instance-only NAME of the instances you do care about, as that will result in a tarball that you can pretty easily include in a traditional backup.
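
A rough sketch of that per-instance route, including the custom volumes from the original question (instance and volume names are taken from the listings above, destination paths are made up):

# export an instance to a tarball; repeat for each instance you care about
incus export --optimized-storage --instance-only caddy /root/backups/caddy.tar.gz

# custom volumes can be exported to tarballs as well
incus storage volume export tank publii /root/backups/publii.tar.gz
incus storage volume export tank webdata /root/backups/webdata.tar.gz

# restore later with "incus import" and "incus storage volume import"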