Problem removing bridge

I made a backup of my lxc container using:

I then created a new container to test the backup using the above link. The only problem is that I can't start the new container because the bridge conflicts with the 1st container. I tried to remove the bridge, with no luck so far. This is how I added the bridge in the first place:
//add bridge to server
lxc config device add ubuntu-vm myport80 proxy listen=tcp::8004 connect=tcp:127.0.0.1:80

//to take away bridge
lxc config device remove ubuntu-vm myport80

This looks like it works, but
lxc network show lxdbr4
still shows a used_by link to that virtual machine.

I also tried
lxc network edit lxdbr4

This does open a YAML file where you can delete the offending link, but it changes nothing. If you reopen the file, the exact same values are back in the used_by section.

So, the magic question in all this is: how do you restore a backup, change the bridge, and copy the memory pool to a new one, so that when the container instantiates it can stand on its own resources? This is useful if you need a developer's copy of the server you are standing up, so that you don't stomp on the release version. The installation of Python tools can be no small feat sometimes, and it's nice not to have to do that twice. It's well beyond pip install requirements, and you have to get out your bat'leth for the conflict-resolution contortions; the path seems to never be the same twice. I'm wondering if it's easier to stand up the new container and then copy the memory pool to a new space?

Best regards, and thanks in advance for your time,
lisa

Hi @lisa

I’m a little confused by your post as you’ve mixed some terminology up, so I’m unclear what the actual problem is you’re encountering.

A bridge is a type of LXD managed network. Bridges don't belong to any instance/container, so you cannot remove a bridge from an instance. Instances connect to bridge networks via bridged NIC devices. These bridged NIC devices can be removed from an instance, which then "disconnects" the instance from the bridge. However, the bridge itself will remain unless you delete the actual network.
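For example, roughly (assuming a managed bridge called lxdbr4 and an instance called c1; substitute your own names):

lxc config device add c1 eth0 nic nictype=bridged parent=lxdbr4 name=eth0   # attach c1 to the bridge
lxc config device remove c1 eth0   # disconnect c1 from the bridge again
lxc network delete lxdbr4   # delete the bridge itself (only possible once nothing uses it)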

Your example also mentions proxy devices, which have nothing to do with either bridge networks or bridged NIC devices.
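If in doubt, you can list an instance's devices and see whether each one is a proxy or a nic (a sketch, using your ubuntu-vm name):

lxc config show ubuntu-vm   # devices added directly to the instance
lxc config show ubuntu-vm --expanded   # also shows devices inherited from profiles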

To help clarify what the issue is, please can you show an example of the full command you are running along with the error you are getting.

Thanks

Thanks for getting back to me, Thomas. I put my replies inline.

I can’t see what you posted I’m afraid.

Objective: copy a working installation, OS and all, to another container with a different memory pool image and a different network bridge. I can set up the container, no problem; I have a profile set up and I work off of that. I just want to copy the installation of all the tools, OS, and software onto another container. How do I do that?

Thanks,

lisa

You can copy an instance using:

lxc copy <source instance name> <target instance name>
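For example (assuming your existing container is called ubuntu-vm; the target name is up to you):

lxc copy ubuntu-vm ubuntu-vm-dev
lxc start ubuntu-vm-dev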

Will this copy the network though, and then conflict with the existing container?

Lisa

No, it will copy the NIC device but generate a new MAC address to avoid conflicts.
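You can check this after the copy, e.g. (assuming the NIC device is named eth0):

lxc config get ubuntu-vm volatile.eth0.hwaddr
lxc config get ubuntu-vm-dev volatile.eth0.hwaddr

The two values should differ.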

I did not find that it worked that way; I got conflict error messages when I tried to start the second container. Here are some notes in case this helps you. I got it working. I don't see this type of thing posted, so… we may want to put this up.

Objective: after we get an image setup, create another container exactly the same so you don’t have to take the release server up and down as you test software. You work on a separate dev server.

Notes:

  1. lxc profiles help you set up containers exactly the same, pointing to the same network and the same image. This is not good for the above objective.

  2. lxc storage pools contain the image of the operating system you are executing on the container

  3. images are the OS you put on the container, stored in the storage pool

  4. I used a network bridge to get the 2 way communication from the outside world to the hardware server to the container.

  5. to get the release server copied over to the development server, I tried a bunch of things here:

  • back up the container and then restore to a new container. This didn't work because they point to the same storage and network resources, and the second container will not start
  • lxc copy <container2>. Same problem as above.
  • lxc publish a new image (this actually just copies the storage image of container 1), then launch a new container with that image and the profile you already created. Then do the network bridge command, and now you have it.

Gory details that might help the next guy:

#create profile

lxc profile copy default <new profile>

lxc profile edit <new profile>

#publish working server and os and installed tools

lxc stop <container>

lxc publish <container> --public --alias=<new image name, T>

lxc start <container>

#launch container with profile and newly created image with all the tools installed

#this gives you a container off a standard ubuntu:18.04 image

lxc launch ubuntu:18.04 <container> -p <profile>

#this gave me a container off a non-standard image I created called T

#Note: I suspect .ssh and keys and git will be messed up and need to be reworked on this new container.

lxc launch T ubuntu-vm-dev-2 -p ubuntu-dev-profile-2

Method for backing up lxc containers:

https://www.cyberciti.biz/faq/how-to-backup-and-restore-lxd-containers/

#other stuff

lxc storage list

lxc storage create <pool#> btrfs size=600GB

lxc list
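#(sketch, not something I ran) a new container can also be pointed at the new pool directly at launch time with -s

lxc launch T ubuntu-vm-dev-2 -p ubuntu-dev-profile-2 -s <pool#>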

//need to change the bridge binding before you start it.

//add bridge to server

lxc config device add ubuntu-vm-dev myport8004 proxy listen=tcp::8004 connect=tcp:127.0.0.1:8004

#test with simple python server

python -m http.server 8004

#then go outside and see if you can see something at the host's address on port 8004
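#quick sanity check from the LXD host itself first (assuming the proxy device above is in place)

curl http://127.0.0.1:8004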

#then try T server

//to take away bridge

lxc config device remove ubuntu-vm-REL2 myport80

//lxc network edit lxdbr4

//to edit connections of bridges. This seems to fail.

//how to take a snapshot… how to restore to a second machine… how to restore to the current machine

https://openschoolsolutions.org/how-to-backup-lxd-containers/
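#rough snapshot sketch (untested here, names are placeholders)

lxc snapshot ubuntu-vm snap0   # take a snapshot
lxc restore ubuntu-vm snap0   # roll the same container back to it
lxc copy ubuntu-vm/snap0 ubuntu-vm-dev   # create a second container from the snapshot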

What error did you get and what command were you running?

Well, my last attempt, copying the img file in the var directory from the release container to the dev container, leaves my dev container in a state where it cannot be stopped and cannot be deleted.

lxc list gives me this.

Here are some more commands and errors:

asl@ASL-PC:~$ lxc stop ubuntu-vm-dev

Error: The instance cannot be cleanly shutdown as in Error status

asl@ASL-PC:~$ lxc exec ubuntu-vm-dev bash

Error: Instance is not running

asl@ASL-PC:~$ lxc delete ubuntu-vm-dev

Error: The instance is currently running, stop it first or pass --force

asl@ASL-PC:~$ lxc delete ubuntu-vm-dev --force

Error: Stopping the instance failed: The instance is already stopped

This happened after I did a cp /var/snap/lxd/common/lxd/disks/pool.img /var/snap/lxd/common/lxd/disks/pool.img

I did this to try to copy all the installed tools and setup to a second container.

Just to remind you, publish did what I needed. But for cleanup, this container, ubuntu-vm-dev, is stuck.

Please advise,

lisa

OK, so you shouldn't try to copy the underlying storage pool files, as LXD won't know about them.

Let's back up to your objective for a moment.

Objective: after we get an image setup, create another container exactly the same so you don’t have to take the release server up and down as you test software. You work on a separate dev server.

When you say "image setup", what do you mean by that? Are you creating an image from an existing instance using lxc publish? Is this so you can create multiple instances from the same image?

I'd like to know more about what you're trying to achieve, so I can suggest the best approach; creating an image from an instance is valid when you want to create lots of instances from a single base one.

But if you just want to copy a single instance to take a backup of it, you can use lxc copy or lxc export.
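For example (a sketch, assuming your container is called ubuntu-vm):

lxc export ubuntu-vm ubuntu-vm-backup.tar.gz
lxc import ubuntu-vm-backup.tar.gz

Note that import recreates the instance under its original name, so restoring onto the same host means removing or renaming the original first.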

Also, I think it's really important that we get our terminology correct. If I'm reading this correctly, you are considering a storage pool to house only one instance, but that is incorrect: a storage pool can house many instances.
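For example, to see everything a given pool currently houses:

lxc storage volume list <pool name>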

Yeah, I know the copy image didn’t work. It crashed the 2nd container. So, how do I remove that dev container now?

Image setup: in this case we have an Ubuntu image with server tools all set up. This takes a while to do. I just want to use this in the next development server. lxc publish worked to copy this image after it was set up, and then I could use it in multiple containers. This actually worked.

lxc copy didn't work on the same hardware; it conflicts and will not start the second container. If you export it, I suspect it works, but I don't know; I haven't tried it.

lisa

This is good to know. I don’t think I want to stick several instances on the same pool though. It seems like your memory management could become a problem.

lisa

Have you rebooted since you copied the storage pool over itself? You may need to do this in order to ensure there are no running containers. Then try force deleting again.

That is what storage pools are designed for. They don't take up memory; they take up disk storage.
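For example, to see the actual disk usage of a pool:

lxc storage info <pool name>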

Great. So what caused the problem?

Ok, rebooted. Now lxc list doesn’t work. I get an error message that looks like this.

Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

asl@ASL-PC:~/backups$ sudo lxd

WARN[08-26|18:58:08] - Couldn’t find the CGroup blkio.weight, disk priority will be ignored

WARN[08-26|18:58:08] - Couldn’t find the CGroup memory swap accounting, swap limits will be ignored

WARN[08-26|18:58:08] Instance type not operational type=virtual-machine driver=qemu err=“KVM support is missing”

WARN[08-26|18:58:08] Firewall failed to detect any compatible driver, falling back to “xtables” (but some features may not work as expected due to: Backend command “ebtables” is an nftables shim)

EROR[08-26|18:58:18] Failed to start the daemon: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

Error: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists
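My guess at a next step (just from reading the error, not something I have run yet):

mount | grep pool5   # see whether pool5 is already or half mounted
sudo umount /var/snap/lxd/common/lxd/storage-pools/pool5   # clear the stale mount if so
sudo systemctl restart snap.lxd.daemon   # let LXD try mounting the pool again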

Some more information, clipped from:

sudo lxd --debug --group lxd

work as expected due to: Backend command “ebtables” is an nftables shim)

INFO[08-26|19:09:18] Firewall loaded driver driver=xtables

INFO[08-26|19:09:18] Initializing storage pools

DBUG[08-26|19:09:19] Initializing and checking storage pool pool=pool4

DBUG[08-26|19:09:19] Mount started driver=btrfs pool=pool4

DBUG[08-26|19:09:19] Mount finished driver=btrfs pool=pool4

DBUG[08-26|19:09:19] Initializing and checking storage pool pool=pool5

DBUG[08-26|19:09:19] Mount started driver=btrfs pool=pool5

DBUG[08-26|19:09:29] Mount finished driver=btrfs pool=pool5

EROR[08-26|19:09:29] Failed to start the daemon: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

INFO[08-26|19:09:29] Starting shutdown sequence

INFO[08-26|19:09:29] Closing the database

INFO[08-26|19:09:29] Stop database gateway

INFO[08-26|19:09:29] Stopping REST API handler:

INFO[08-26|19:09:29] - closing socket socket=/var/snap/lxd/common/lxd/unix.socket

INFO[08-26|19:09:29] Stopping /dev/lxd handler:

INFO[08-26|19:09:29] - closing socket socket=/var/snap/lxd/common/lxd/devlxd/sock

INFO[08-26|19:09:29] Unmounting temporary filesystems

INFO[08-26|19:09:29] Done unmounting temporary filesystems

Error: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

asl@ASL-PC:~/backups$ lxc version

Client version: 4.17

Server version: unreachable

lisa