[LXD 4.2] how to create & attach volumes automatically (profile & lxd init preseed)?

Hi :slight_smile:

I am trying to write my own LXD preseed file and container cloud-init files so that my volume is created and restored automatically. My test here assumes that a volume named ‘myvolume’ is attached to the /home folder of the container. The commands I used to create and attach the volume are lxc storage volume create default myvolume and lxc storage volume attach default myvolume mycontainer /home, as shown below.
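
For reference, the full sequence on the host looks like this (the last command is only there to verify the result):

lxc storage volume create default myvolume
lxc storage volume attach default myvolume mycontainer /home
# optional check that the volume exists and is attached:
lxc storage volume show default myvolume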

For the moment, my preseed file for the LXD default config is:

config:
  core.https_address: '[::]:8443'
  core.trust_password: true
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: 10.111.174.1/24
    ipv4.nat: "true"
    ipv6.address: none
  description: "My Network"
  name: lxdbr0
  type: bridge
storage_pools:
- config:
    source: /var/snap/lxd/common/lxd/storage-pools/default
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: Default LXD profile used by virtbazx
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: virtbazx-default
cluster:
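
For completeness, I apply this preseed on a fresh host like this (assuming the file is saved as preseed.yaml):

cat preseed.yaml | lxd init --preseed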

And my container cloud-init file is:

#cloud-config
timezone: Europe/Paris
hostname: builder-vhost
groups:
 - ansible
users:
 - name: myuser
   sudo: ['ALL=(ALL) NOPASSWD:ALL']
   groups: sudo
   shell: /bin/bash
   ssh_authorized_keys:
     - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGchBXUfJluldcK9FqIA887w/YPSPOL8m+A9TI5GleTL myuser@builder-vhost
   ssh_import_id: myuser
   ssh_redirect_user: true
 - name: ansible
   groups: ansible
   ssh_authorized_keys:
     - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGchBXUfJluldcK9FqIA887w/YPSPOL8m+A9TI5GleTL ansible@builder-vhost
   ssh_import_id: ansible
   ssh_redirect_user: true
packages:
 - openssh-client
 - openssh-server
 - python3
package_update: false
package_upgrade: false
ssh_deletekeys: true
ssh_keys:
 ed25519_private: |
   -----BEGIN OPENSSH PRIVATE KEY-----
   b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
   QyNTUxOQAAACAXN0tY8HbDIK2lM7jZz0IEMpZny3bGW2lEOfWOU8hTxAAAAJDSXdY+0l3W
   PgAAAAtzc2gtZWQyNTUxOQAAACAXN0tY8HbDIK2lM7jZz0IEMpZny3bGW2lEOfWOU8hTxA
   AAAEAhCZQjRFqPlsQle97+P5pMO2lNp1t20E12gX657mozgBc3S1jwdsMgraUzuNnPQgQy
   lmfLdsZbaUQ59Y5TyFPEAAAADXJvb3RAbHhtYXN0ZXI=
   -----END OPENSSH PRIVATE KEY-----
 ed25519_public: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBc3S1jwdsMgraUzuNnPQgQylmfLdsZbaUQ59Y5TyFPE root@builder-vhost
write_files:
 - path: /etc/ssh/sshd_config.tmp
   content: |
     #ListenAddress ::
     ListenAddress 0.0.0.0
     Port 8822
     Protocol 2
     HostKey /etc/ssh/ssh_host_ecdsa_key
     
     PermitRootLogin no
     
     # To disable tunneled clear text passwords, change to no here!
     PasswordAuthentication no
     PermitEmptyPasswords no
     
     # Change to yes to enable challenge-response passwords (beware issues with
     # some PAM modules and threads)
     ChallengeResponseAuthentication no
     # Set this to 'yes' to enable PAM authentication, account processing,
     # and session processing. If this is enabled, PAM authentication will
     # be allowed through the ChallengeResponseAuthentication and
     # PasswordAuthentication.  Depending on your PAM configuration,
     # PAM authentication via ChallengeResponseAuthentication may bypass
     # the setting of "PermitRootLogin without-password".
     # If you just want the PAM account and session checks to run without
     # PAM authentication, then enable this but set PasswordAuthentication
     # and ChallengeResponseAuthentication to 'no'.
     UsePAM yes
     
     #Don't read the user's ~/.rhosts and ~/.shosts files
     IgnoreRhosts yes
     
     HostbasedAuthentication no
     LoginGraceTime 120
     MaxStartups 2
     AllowTcpForwarding no
     X11Forwarding no
     LogLevel VERBOSE
     ClientAliveInterval 300
     ClientAliveCountMax 0
     
     PrintMotd no
     
     # Allow client to pass locale environment variables
     AcceptEnv LANG LC_*
     
     # override default of no subsystems
     Subsystem       sftp    /usr/lib/openssh/sftp-server
     
     AllowUsers myuser
   owner: root:root
   permissions: 0400
   encoding: text/plain
   append: false
 - path: /etc/hosts.allow.tmp
   content: |
     sshd: 192.168.1.
   owner: root:root
   permissions: 0600
   encoding: text/plain
   append: true
runcmd:
 - cp -f /etc/hosts.allow /etc/hosts.allow.old
 - cp -f /etc/ssh/sshd_config /etc/ssh/sshd_config.old
 - mv -f /etc/hosts.allow.tmp /etc/hosts.allow
 - mv -f /etc/ssh/sshd_config.tmp /etc/ssh/sshd_config
 - systemctl restart ssh
 - curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
 - python3 get-pip.py --user
 - python3 -m pip install --user ansible
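
For context, this is how the cloud-init file can be passed to a container at launch; the file name cloud-init.yaml and the image are just examples:

lxc launch images:debian/buster mycontainer \
  -c user.user-data="$(cat cloud-init.yaml)"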

It would be great if someone could help me add the volume information to these two files :wink:

To make this question clearer: I am looking for a way to avoid having to manually recreate the volume and reattach it to the container with each new installation of the system.

I suppose this is possible using the preseed and cloud-init files, but I didn’t find any information about it.

Afaik:

  1. cloud-init is used inside containers and works with Linux systems in general (so it is not specific to LXD). Volumes are something you attach from the outside via LXD, so creating or attaching them is not a use case for cloud-init.
    Using an already attached volume with cloud-init is probably possible though, for example to copy a file from the volume into your system.
  2. preseed: I guess you are referring to lxd init preseeds, which cover the server configuration.
    You could probably add the volume to your default profile (see the sketch after this list), but that would mean that every container on that server gets the volume attached.
    For volume creation see below.
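
A blind sketch of what that could look like in the preseed (untested; I am guessing the device format from the disk device docs):

profiles:
- name: virtbazx-default
  devices:
    myvolume:
      path: /home
      pool: default
      source: myvolume
      type: disk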

Regarding attaching volumes:
I think a better solution for you might be to use container profiles (https://linuxcontainers.org/lxd/advanced-guide/#profiles).
:thinking: Even though I don’t really know how to attach volumes with them (because I never tried).
My guess would be to try attaching it as a disk device:
https://linuxcontainers.org/lxd/docs/master/instances#type-disk
(Because regular storage pools are also attached as disk devices).
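
As a command, that guess would be something like (untested):

lxc profile device add virtbazx-default myvolume disk pool=default source=myvolume path=/home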

Regarding the automatic creation of volumes:
Sadly I know nothing about volumes, because I don’t use them.
So I blindly assume one of the following two options applies:

  1. Volumes are just like other disk devices, so they are created during attachment.
    This would lead to the solution above.
  2. Volumes need to be created first and then attached.

Assuming that number 2 is correct, you would need a config key in the server config preseed similar to the one for storage pools, e.g.:

storage_pools:
- config:
    source: /dev/sda1
  description: ""
  name: default
  driver: btrfs

But I don’t see an option for storage volumes there…
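
So as a workaround you could probably just create the volume right after the preseed run, e.g.:

lxd init --preseed < preseed.yaml
lxc storage volume create default myvolume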

@BeRoots
Could you post the configuration of a container with a volume attached to it, so I can see what the config looks like :smile:?
You can do this with:
lxc config show containername -e

@stgraber Is there an option to create storage volumes during lxd init with a preseed?
(And is that even necessary?)

Thanks toby63. I don’t have my computer here; I’ll show you the config a bit later.

@toby63 Here is the log of my shell:

💎 ~ $ ❱❯❭ lxc config show test1 --expanded
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian buster amd64 (20200825_05:24)
  image.os: Debian
  image.release: buster
  image.serial: "20200825_05:24"
  image.type: squashfs
  volatile.base_image: d88174bbe1fab0b4797bb5094390bb46acfa85a6eaba52b09827cc1e9d998dd4
  volatile.eth0.hwaddr: 00:16:3e:70:79:c7
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  myvolume:
    path: /home
    pool: default
    source: myvolume
    type: disk
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
💎 ~ $ ❱❯❭ lxc storage show default
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
config:
  source: /var/snap/lxd/common/lxd/storage-pools/default
description: ""
name: default
driver: dir
used_by:
- /1.0/instances/deb10-base
- /1.0/instances/test1
- /1.0/profiles/default
- /1.0/storage-pools/default/volumes/custom/myvolume
status: Created
locations:
- none
💎 ~ $ ❱❯❭ lxc storage volume show default myvolume
WARNING: cgroup v2 is not fully supported yet, proceeding with partial confinement
config:
  volatile.idmap.last: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
description: ""
name: myvolume
type: custom
used_by:
- /1.0/instances/test1
location: none
content_type: filesystem
💎 ~ $ ❱❯❭ sudo cat /var/snap/lxd/common/lxd/storage-pools/default/containers/test1/backup.yaml
[sudo] password for anode1: 
container:
  architecture: x86_64
  config:
    image.architecture: amd64
    image.description: Debian buster amd64 (20200825_05:24)
    image.os: Debian
    image.release: buster
    image.serial: "20200825_05:24"
    image.type: squashfs
    volatile.apply_template: create
    volatile.base_image: d88174bbe1fab0b4797bb5094390bb46acfa85a6eaba52b09827cc1e9d998dd4
    volatile.eth0.host_name: veth5edc4adb
    volatile.eth0.hwaddr: 00:16:3e:70:79:c7
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: STOPPED
  devices:
    myvolume:
      path: /home
      pool: default
      source: myvolume
      type: disk
  ephemeral: false
  profiles:
  - default
  stateful: false
  description: ""
  created_at: 2020-09-02T13:00:58.47408787+02:00
  expanded_config:
    image.architecture: amd64
    image.description: Debian buster amd64 (20200825_05:24)
    image.os: Debian
    image.release: buster
    image.serial: "20200825_05:24"
    image.type: squashfs
    volatile.apply_template: create
    volatile.base_image: d88174bbe1fab0b4797bb5094390bb46acfa85a6eaba52b09827cc1e9d998dd4
    volatile.eth0.host_name: veth5edc4adb
    volatile.eth0.hwaddr: 00:16:3e:70:79:c7
    volatile.idmap.base: "0"
    volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
    volatile.last_state.power: STOPPED
  expanded_devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    myvolume:
      path: /home
      pool: default
      source: myvolume
      type: disk
    root:
      path: /
      pool: default
      type: disk
  name: test1
  status: Stopped
  status_code: 102
  last_used_at: 1970-01-01T01:00:00+01:00
  location: none
  type: container
snapshots: []
pool:
  config:
    source: /var/snap/lxd/common/lxd/storage-pools/default
  description: ""
  name: default
  driver: dir
  used_by: []
  status: Created
  locations:
  - none
volume:
  config: {}
  description: ""
  name: test1
  type: container
  used_by: []
  location: none
  content_type: filesystem

Thank you for providing the config.

Just as I thought, the volume is attached as a disk device:

devices:
  myvolume:
    path: /home
    pool: default
    source: myvolume
    type: disk

So you can add the “myvolume” part to a profile (under devices:) and add that profile to any container you want.
The volume will then be attached to every container that uses that profile.
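
In commands that would look roughly like this (untested; the profile name home-volume is just an example):

lxc profile create home-volume
lxc profile device add home-volume myvolume disk pool=default source=myvolume path=/home
lxc profile add test1 home-volume
# or attach it at creation time:
lxc launch images:debian/buster test2 -p default -p home-volume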

Yes, I will try this with profiles; I think it is a good approach. First, I want to investigate another profile property to fill in the attachment information about the target path.
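
For example, something like this might let me override the target path per container while keeping the device in the profile (untested; assuming lxc config device override is available in this LXD version, and /srv/home is just an example path):

lxc config device override test1 myvolume path=/srv/home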