So, storage and network questions

If one wants to put containers directly on the LAN, what is the current way to do so? In particular, with a cluster? I’ve searched around and the information I’ve found for non-cluster setups doesn’t seem to work for some reason.

On the subject of storage, what’s the general recommendation for shared storage? Ceph? Gluster? Something else?

macvlan works fine so long as you don’t need host-to-container traffic.
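For example, attaching a macvlan NIC to an existing container looks roughly like this (ens34 standing in for the host’s physical uplink):

```
# attach a macvlan NIC tied to the host's physical interface
lxc config device add <container> eth0 nic nictype=macvlan parent=ens34
```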

Otherwise, manually created bridges (defined through your host OS) are the other way to go and don’t have that limitation.
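A rough, non-persistent sketch of that approach (ens34 standing in for the host uplink, br0 for the bridge):

```
# create the bridge and enslave the physical NIC (for a permanent setup,
# put the equivalent into netplan or /etc/network/interfaces instead)
ip link add name br0 type bridge
ip link set ens34 master br0
ip link set br0 up

# then point the container's NIC at the bridge
lxc config device add <container> eth0 nic nictype=bridged parent=br0
```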

For shared storage, LXD only supports Ceph and CephFS; using Ceph to back instances and CephFS to back custom volumes is a good combination I’ve found.
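A rough sketch of that combination (pool and filesystem names are placeholders; in a cluster you’d also do a per-node `lxc storage create ... --target <node>` pass before the final global create):

```
# instances on Ceph RBD
lxc storage create remote ceph ceph.osd.pool_name=lxd

# custom volumes on CephFS (assumes a CephFS filesystem called "lxd-fs" already exists)
lxc storage create remote-fs cephfs source=lxd-fs
lxc storage volume create remote-fs shared-data
lxc storage volume attach remote-fs shared-data <container> /mnt/shared
```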

I’ve actually been having trouble getting macvlan to work. That’s what I had been trying. I’ll take another look at it in the morning though. :slight_smile:

Can you show the config of the container you’re trying to use (`lxc config show <container> --expanded`) and explain what issues you’re having so we can try and help? Thanks

I can/will as soon as the cluster is working again. I keep having it go down and freak out with “cannot fetch node config from database: driver: bad connection”

lxc config show test1 --expanded:

```
config:
  image.architecture: x86_64
  image.description: Ubuntu 18.04 LTS server (20190918)
  image.os: ubuntu
  image.release: bionic
  volatile.base_image: b8408cd20d5952552ddcb863ac4f641c56e5b046c08c3b516c0531d6c45e1065
  volatile.eth0.host_name: macf4a0b966
  volatile.eth0.hwaddr: 00:16:3e:8f:4a:6e
  volatile.eth0.last_state.created: "false"
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: lanbridge
    type: nic
  root:
    path: /
    pool: local
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""```

lxc network show lanbridge:

```
config:
  ipv4.address: 10.163.53.1/24
  ipv4.nat: "false"
  ipv6.address: fd42:1af6:796f:9999::1/64
  ipv6.nat: "false"
description: ""
name: lanbridge
type: bridge
used_by:
- /1.0/instances/test1
managed: true
status: Created
locations:
- starbase6
- lxdnode1
- lxdnode2
- lxdnode3
```

lxc network show lanbridge --target lxdnode1:

```
config:
  bridge.external_interfaces: ens34
  ipv4.address: 10.163.53.1/24
  ipv4.nat: "false"
  ipv6.address: fd42:1af6:796f:9999::1/64
  ipv6.nat: "false"
description: ""
name: lanbridge
type: bridge
used_by:
- /1.0/instances/test1
managed: true
status: Created
locations:
- starbase6
- lxdnode1
- lxdnode2
- lxdnode3
```

Well, the formatting got a bit messed up there, but I included what you asked for as well as the global network config and one of the nodes

Huh, for some reason the shared network didn’t come up on the actual host. O.o

I hit it with an “ifconfig up” and now it is working. Weird

Well, I say that…

I didn’t want the containers picking up an IP from dnsmasq, so I removed the ipv4 and ipv6 entries from the network config. Now, they won’t go on the network at all, even after putting those entries back. They can’t reach any destination, nor pick up a DHCP address. Even if I manually assign an IP/gateway, nothing :frowning:
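For anyone following along, that kind of change is done with something like this (a sketch, assuming the managed network is lanbridge):

```
# remove the addresses entirely:
lxc network unset lanbridge ipv4.address
lxc network unset lanbridge ipv6.address
# or just stop dnsmasq handing out leases while keeping the addresses:
lxc network set lanbridge ipv4.dhcp false
lxc network set lanbridge ipv6.dhcp false
```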

Nuked the network, recreated it. Now it’s fine :man_shrugging:

Forgive the ignorance, but how do things like snapshots, backups, and such work if the backing storage is Ceph instead of ZFS?

Pretty much the same: whichever server the instance is on will handle creating the snapshots on Ceph. Same for backups; when you make a backup or export a container, whichever server hosts the instance will mount Ceph, generate the backup tarball, and hand it over to you.
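Day to day it’s still just the usual commands, e.g. (names here are placeholders):

```
lxc snapshot test1 snap0              # stored as an RBD snapshot on the ceph pool
lxc restore test1 snap0               # roll back to it
lxc export test1 test1-backup.tar.gz  # tarball generated on whichever node hosts test1
```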

That makes sense. I may just have to see all this in operation to really get it into my head, though. Well, I’m off to make a cluster and such again :smiley:

So, I have Ceph working, or at least I think I do. However, upon trying my first `lxd init` with 4.0.0, I’m running into this:
Error: Failed to create storage pool 'remote': Failed to run: ceph --name client.admin --cluster ceph osd pool create lxd 32: unable to parse addrs in '[v2:XXX.XXX.XXX.XXX:3300/0,v1:XXX.XXX.XXX.XXX:6789/0]'

Where “XXX.XXX.XXX.XXX” is the IP of the Ceph monitor. I’m kinda stuck at this point and not sure where to go

Hmm, yeah, we may have an issue parsing the hybrid network configs (v2&v1) in such cases.

You could probably modify your ceph config to only list one of the two (not sure if we’d parse that better).
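Something along these lines in /etc/ceph/ceph.conf, for example (keeping the placeholder monitor address from your error):

```
[global]
# hybrid v2/v1 form that trips the parser:
#   mon_host = [v2:XXX.XXX.XXX.XXX:3300/0,v1:XXX.XXX.XXX.XXX:6789/0]
# single legacy-style entry to try instead:
mon_host = XXX.XXX.XXX.XXX:6789
```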

Can you file a bug about it at https://github.com/lxc/lxd/issues so we can have the parser support that syntax?

I’ll give it a try as well and see if that makes any difference with the single entry

It fails in the same manner with just a single entry. I updated the issue to reflect this, but I’m wondering if Ceph Octopus is just too new for LXD 4.0.0?