Exclude an attached disk device from snapshot and backup

How can I exclude a multi-TB disk device attached to a container instance from backup, copy and snapshot operations? I have a container with a large disk mounted from the host, like:
lxc config device add container1 device-name disk source=/share/c1 path=/data
Backing up or copying this container instance always fails, since the disk holds more than 10 TB of data. The lxc export and copy operations do not have a disk exclusion option.

Hmm, that's odd, because snapshots, backups and copies do not read from disks attached to the container at all.

So lxc export for that container should only include whatever you get from du -schx / inside the container.
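You can double-check from the host what the export will actually contain, for example:

lxc exec container1 -- du -schx /   # size of the container's own root filesystem

The -x flag keeps du on one filesystem, so disk devices mounted into the container are not counted.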

Sorry, I had not checked the documentation on the details of container snapshot, backup and copy operations.
Today, I removed the attached disk with:
lxc config device remove container1 device-name
After that, the snapshot and container copy operations succeeded for the first time.
I then reattached the disk to the copied container, and it works perfectly as a backup container:
lxc config device add container1 device-name disk source=/xxxx/xxxx path=/backup
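So the full detach/backup/reattach cycle looks roughly like this (container1-backup is just a placeholder name; source and path are from my original setup):

lxc config device remove container1 device-name   # detach the large disk first
lxc snapshot container1                           # snapshot now succeeds
lxc copy container1 container1-backup             # so does the copy
lxc config device add container1 device-name disk source=/share/c1 path=/data   # reattach afterwards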

For reference, here are the container's details:

$ du -schx /
820G /
820G total

$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
c-pool/containers/container1       2.1T  820G  1.3T  40% /
none                               492K  4.0K  488K   1% /dev
udev                               126G     0  126G   0% /dev/fuse
tmpfs                              100K     0  100K   0% /dev/lxd
192.168.10.12:/s-pool/backup/lxd1   67T   23T   45T  34% /backup
tmpfs                              100K     0  100K   0% /dev/.lxd-mounts
tmpfs                              126G     0  126G   0% /dev/shm
tmpfs                              126G  188K  126G   1% /run
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              126G     0  126G   0% /sys/fs/cgroup

@stgraber Could it be related to which path one uses?

@sbayasgalan
Do you have snapshots? If so, you can exclude them from the backup and copy with the --instance-only flag.
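For example, roughly (container1-copy and the archive name are placeholders):

lxc copy container1 container1-copy --instance-only
lxc export container1 container1-backup.tar.gz --instance-only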

Otherwise, could you try the initial configuration again and show us your container config with:
lxc config show containername -e
Also include any logs or error messages.

lxc config show container1 -e
architecture: x86_64
config:
  boot.autostart: "true"
  boot.autostart.delay: "10"
  image.architecture: amd64
  image.description: ubuntu 16.04 LTS amd64 (release) (20200318)
  image.label: release
  image.os: ubuntu
  image.release: xenial
  image.serial: "20200318"
  image.type: squashfs
  image.version: "16.04"
  security.privileged: "true"
  volatile.base_image: a840c9970bf470624cb176b6b7561b7df1143e9d44de734df19f664584035336
  volatile.eth0.host_name: veth6f11b5c7
  volatile.eth0.hwaddr: 00:16:3e:5d:5f:9b
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices:
  disk1:
    path: /backup
    source: /backup/lxd1
    type: disk
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- dmz
stateful: false
description: ""

Note that the container config only shows the original image (Ubuntu 16.04), not the current upgraded one. The original image was upgraded to 18.04 from inside the instance via: sudo do-release-upgrade

Yes, I did use the --instance-only flag, since I have multiple snapshots!

My backup command is (I will just run it, and if it fails I will provide logs and additional info):
# lxc export container1 /backup/lxd/container1/17-10-2020-container1.tar.xz --instance-only --optimized-storage

Could you use a code element for log outputs and commands:
Either use preformatted text, via the editor button with the </> symbol.
Or use code tags: [code] content [/code].

  1. The backup export command without --instance-only ran endlessly (more than 10 hours before I killed it)! That is why I earlier thought the backup was pulling in the attached disks' data.
  2. The following command with --instance-only generated an error (the command was started at 06:30 UTC):
    # lxc export container1 /backup/lxd/container1/18-10-2020-container1.tar.xz --instance-only --optimized-storage
    Error: Create backup: Backup create: Failed to run: zfs send c-pool/containers/container1@backup-1f52ddd9-babc-4c1c-a4d8-0606f7c80319: warning: cannot send 'c-pool/containers/container1@backup-1f52ddd9-babc-4c1c-a4d8-0606f7c80319': signal received
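To isolate whether the failure is in the ZFS layer rather than in LXD itself, the send could be tried by hand on a throwaway snapshot, roughly (sendtest is a placeholder snapshot name):

zfs snapshot c-pool/containers/container1@sendtest
zfs send c-pool/containers/container1@sendtest > /dev/null   # discard the stream; we only care whether it completes
zfs destroy c-pool/containers/container1@sendtest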

Syslog does not have much information:

Oct 18 06:30:00 lxd1 zed: eid=37 class=history_event pool_guid=0x847E9C333ED3FF23
Oct 18 06:40:52 lxd1 systemd[1]: Starting Daily apt upgrade and clean activities...
Oct 18 06:40:55 lxd1 systemd[1]: Started Daily apt upgrade and clean activities.
Oct 18 06:47:01 lxd1 CRON[25419]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ))
Oct 18 06:47:02 lxd1 CRON[25418]: (CRON) info (No MTA installed, discarding output)
Oct 18 06:58:47 lxd1 systemd[1]: Started Session 13394 of user sbayasa.
Oct 18 06:59:48 lxd1 zed: eid=38 class=history_event pool_guid=0x847E9C333ED3FF23
Oct 18 07:00:11 lxd1 zed: eid=39 class=history_event pool_guid=0x847E9C333ED3FF23
Oct 18 07:17:01 lxd1 CRON[75812]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Oct 18 08:17:01 lxd1 CRON[13417]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Oct 18 08:27:01 lxd1 systemd[1]: Starting Daily apt download activities...
Oct 18 08:27:31 lxd1 systemd-networkd-wait-online[16545]: Event loop failed: Connection timed out
Oct 18 08:27:31 lxd1 apt-helper[16527]: E: Sub-process /lib/systemd/systemd-networkd-wait-online returned an error code (1)
Oct 18 08:27:32 lxd1 systemd[1]: Started Daily apt download activities.
Oct 18 08:31:01 lxd1 systemd[1]: Starting Message of the Day...
Oct 18 08:31:03 lxd1 50-motd-news[17765]:  * Introducing autonomous high availability clustering for MicroK8s
Oct 18 08:31:03 lxd1 50-motd-news[17765]:    production environments! Super simple clustering, hardened Kubernetes,
Oct 18 08:31:03 lxd1 50-motd-news[17765]:    with automatic data store operations. A zero-ops HA K8s for anywhere.
Oct 18 08:31:03 lxd1 50-motd-news[17765]:      https://microk8s.io/high-availability
Oct 18 08:31:03 lxd1 systemd[1]: Started Message of the Day.
Oct 18 08:52:05 lxd1 snapd[66810]: stateengine.go:150: state ensure error: cannot sections: got unexpected HTTP status code 403 via GET to "https://api.snapcraft.io/api/v1/snaps/sections"
Oct 18 09:04:28 lxd1 systemd[1]: Stopping User Manager for UID 1000...
Oct 18 09:04:28 lxd1 systemd[28817]: Stopped target Default.
Oct 18 09:04:28 lxd1 systemd[28817]: Stopped target Basic System.
Oct 18 09:04:28 lxd1 systemd[28817]: Stopped target Timers.
Oct 18 09:04:28 lxd1 systemd[28817]: Stopped target Sockets.
Oct 18 09:04:28 lxd1 systemd[28817]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Oct 18 09:04:28 lxd1 systemd[28817]: Closed GnuPG cryptographic agent and passphrase cache.
Oct 18 09:04:28 lxd1 systemd[28817]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Oct 18 09:04:28 lxd1 systemd[28817]: Closed REST API socket for snapd user session agent.
Oct 18 09:04:28 lxd1 systemd[28817]: Closed GnuPG network certificate management daemon.
Oct 18 09:04:28 lxd1 systemd[28817]: Stopped target Paths.