Problem removing bridge

You can copy an instance using:

lxc copy <source instance name> <target instance name>
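
For example, to duplicate a release container into a dev container on the same host (the names here are just the ones that come up later in this thread):

lxc copy ubuntu-vm-REL ubuntu-vm-dev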

Will this copy the network though, and then conflict with the existing container?

Lisa

No, it will copy the NIC device but generate a new MAC address to avoid conflicts.
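
If you want to double-check that on the copy, the generated address ends up in the instance's volatile keys (eth0 assumed as the NIC device name here):

lxc config get <target instance name> volatile.eth0.hwaddr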

I did not find that it worked that way. I got conflict errors when I tried to start the second container. Here are some notes in case they help; I got it working. I don’t see this type of thing posted, so we may want to put this up.

Objective: after we get an image setup, create another container exactly the same so you don’t have to take the release server up and down as you test software. You work on a separate dev server.

Notes:

  1. lxc profiles help you set up containers exactly the same, pointing to the same network and the same image. This is not good for the above objective.

  2. lxc storage pools contain the image of the operating system you are running in the container.

  3. images are the OS you put on the container; they live in the storage pool.

  4. I used a network bridge to get two-way communication from the outside world, through the hardware server, to the container.

  5. to get the release server copied over to the development server, I tried a bunch of things:

  • back up the container and then restore to a new container. This didn’t work because both containers point to the same storage and network resources, and the second container will not start.
  • lxc copy <container2>. Same problem as above.
  • lxc publish a new image (this actually just copies the storage image of container 1), then launch a new container with that image and the profile you already created. Then do the network bridge command and you have it.

Gory details that might help the next guy:

#create profile

lxc profile copy default <new profile>

lxc profile edit <new profile>
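
#for reference, this is roughly the part of the profile you end up adjusting in the editor
#(a minimal sketch only; the pool and bridge names below are just placeholders)

devices:
  eth0:
    name: eth0
    network: lxdbr4
    type: nic
  root:
    path: /
    pool: pool4
    type: disk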

#publish working server with OS and installed tools

lxc stop <container>

lxc publish <container> --public --alias=<new image name, e.g. T>

lxc start <container>

#launch container with profile and newly created image with all the tools installed

#this gives you a container off a standard ubuntu:18.04 image

lxc launch ubuntu:18.04 <container> -p <profile>

#this gave me a container off a non-standard image I created called T

#Note: I suspect .ssh and keys and git will be messed up and need to be reworked on this new container.

lxc launch T ubuntu-vm-dev-2 -p ubuntu-dev-profile-2

Method for backing up lxc containers:

https://www.cyberciti.biz/faq/how-to-backup-and-restore-lxd-containers/

#other stuff

lxc storage list

lxc storage create <pool#> btrfs size=600GB

lxc list

//need to change the bridge binding before you start the container

//add bridge to server

lxc config device add ubuntu-vm-dev myport8004 proxy listen=tcp:0.0.0.0:8004 connect=tcp:127.0.0.1:8004

#test with simple python server

python -m http.server 8004

#then go outside and see if you can see something at <server IP>:8004
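
#one way to check from another machine (<server IP> is a placeholder for the host's address):

curl http://<server IP>:8004/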

#then try T server

//to take away bridge

lxc config device remove ubuntu-vm-REL2 myport80

//lxc network edit lxdbr4

//to edit the connections of a bridge. This seems to fail.
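
//if editing the bridge keeps failing, an alternative I have not verified in this setup is to re-attach the container's NIC to the bridge explicitly:

lxc network attach lxdbr4 <container> eth0 eth0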

//how to take a snapshot… how to restore to a second machine… how to restore to the current machine

https://openschoolsolutions.org/how-to-backup-lxd-containers/
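
//the basic commands behind that article, as I understand them (instance and snapshot names are placeholders):

//take a snapshot
lxc snapshot <container> snap0

//roll the same container back to it
lxc restore <container> snap0

//create a second container from the snapshot
lxc copy <container>/snap0 <new container>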

What error did you get and what command were you running?

Well, the last attempt at copying the img file in the var directory from the release container to the dev container leaves my dev container in a state where it cannot be stopped and cannot be deleted.

lxc list gives me this.

Here are some more commands and errors:

asl@ASL-PC:~$ lxc stop ubuntu-vm-dev

Error: The instance cannot be cleanly shutdown as in Error status

asl@ASL-PC:~$ lxc exec ubuntu-vm-dev bash

Error: Instance is not running

asl@ASL-PC:~$ lxc delete ubuntu-vm-dev

Error: The instance is currently running, stop it first or pass --force

asl@ASL-PC:~$ lxc delete ubuntu-vm-dev --force

Error: Stopping the instance failed: The instance is already stopped

This happened after I did a cp /var/snap/lxd/common/lxd/disks/pool.img /var/snap/lxd/common/lxd/disks/pool.img

I did this to try to copy all the installed tools and setup to a second container.

Just to remind you, publish did what I needed. But for cleanup, this container, ubuntu-vm-dev, is stuck.

Please advise,

lisa

OK, so you shouldn’t try to copy the underlying storage pool files, as LXD won’t know about them.

Let’s back up to your objective for a moment.

Objective: after we get an image setup, create another container exactly the same so you don’t have to take the release server up and down as you test software. You work on a separate dev server.

When you say “image setup”, what do you mean by that? Are you creating an image from an existing instance using lxc publish? Is this so you can create multiple instances from the same image?

I’d like to know more about what you’re trying to achieve so I can give you the best approach; creating an image from an instance is valid when you want to create lots of instances from a single base one.

But if you just want to copy a single instance to take a backup of it you can use lxc copy or lxc export.
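
For example, something along these lines (the target name and file path are up to you):

lxc copy <instance> <instance>-backup

lxc export <instance> /path/to/<instance>-backup.tar.gz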

Also, I think it’s really important that we get our terminology correct: if I’m reading this correctly, you are assuming a storage pool houses only one instance, but that is incorrect; a storage pool can house many instances.

Yeah, I know copying the image didn’t work; it crashed the 2nd container. So, how do I remove that dev container now?

Image setup: in this case we have an Ubuntu image with server tools all set up. This takes a while to do. I just want to reuse this on the next development server. lxc publish worked to copy this image after it was set up, and then I could use it in multiple containers. This actually worked.

lxc copy didn’t work on the same hardware. It conflicts and will not start the second container. If you export it, I suspect it works, but I don’t know; I haven’t tried it.

lisa

This is good to know. I don’t think I want to stick several instances on the same pool though. It seems like your memory management could become a problem.

lisa

Have you rebooted since you copied the storage pool over itself? You may need to do this in order to ensure there are no running containers. Then try force deleting again.

That is what storage pools are designed for. They don’t take up memory; they take up disk storage.
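
If you want to see how much of a pool's disk space is actually in use, you can check it directly (pool name is a placeholder):

lxc storage info <pool>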

Great. So what caused the problem?

OK, rebooted. Now lxc list doesn’t work. I get an error message that looks like this:

Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory

asl@ASL-PC:~/backups$ sudo lxd

WARN[08-26|18:58:08] - Couldn’t find the CGroup blkio.weight, disk priority will be ignored

WARN[08-26|18:58:08] - Couldn’t find the CGroup memory swap accounting, swap limits will be ignored

WARN[08-26|18:58:08] Instance type not operational type=virtual-machine driver=qemu err=“KVM support is missing”

WARN[08-26|18:58:08] Firewall failed to detect any compatible driver, falling back to “xtables” (but some features may not work as expected due to: Backend command “ebtables” is an nftables shim)

EROR[08-26|18:58:18] Failed to start the daemon: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

Error: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

Some more information, clipped from:

sudo lxd --debug --group lxd

work as expected due to: Backend command “ebtables” is an nftables shim)

INFO[08-26|19:09:18] Firewall loaded driver driver=xtables

INFO[08-26|19:09:18] Initializing storage pools

DBUG[08-26|19:09:19] Initializing and checking storage pool pool=pool4

DBUG[08-26|19:09:19] Mount started driver=btrfs pool=pool4

DBUG[08-26|19:09:19] Mount finished driver=btrfs pool=pool4

DBUG[08-26|19:09:19] Initializing and checking storage pool pool=pool5

DBUG[08-26|19:09:19] Mount started driver=btrfs pool=pool5

DBUG[08-26|19:09:29] Mount finished driver=btrfs pool=pool5

EROR[08-26|19:09:29] Failed to start the daemon: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

INFO[08-26|19:09:29] Starting shutdown sequence

INFO[08-26|19:09:29] Closing the database

INFO[08-26|19:09:29] Stop database gateway

INFO[08-26|19:09:29] Stopping REST API handler:

INFO[08-26|19:09:29] - closing socket socket=/var/snap/lxd/common/lxd/unix.socket

INFO[08-26|19:09:29] Stopping /dev/lxd handler:

INFO[08-26|19:09:29] - closing socket socket=/var/snap/lxd/common/lxd/devlxd/sock

INFO[08-26|19:09:29] Unmounting temporary filesystems

INFO[08-26|19:09:29] Done unmounting temporary filesystems

Error: Failed initializing storage pool “pool5”: Failed to mount “/dev/loop25” on “/var/snap/lxd/common/lxd/storage-pools/pool5” using “btrfs”: file exists

asl@ASL-PC:~/backups$ lxc version

Client version: 4.17

Server version: unreachable

lisa

I see this.

https://discuss.linuxcontainers.org/t/lxd-daemon-startup-problem/3302

truncate -s 1G /var/lib/lxd/disks/

If you don’t mind losing your instances and images, I would suggest starting afresh with the following (a rough command sketch follows the list):

  1. Disable LXD on start
  2. Reboot the machine (ensures all mounts and networks are cleaned up)
  3. rm -rvf /var/lib/lxd
  4. Start LXD again and enable on boot
  5. sudo lxd init - run through configuring LXD which will create a storage pool for use with all of your instances (not just one of them).
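
A rough sketch of those steps as commands, assuming the snap package (your paths earlier were under /var/snap/lxd; the /var/lib/lxd path above is the one a deb install would use):

sudo snap stop --disable lxd                # 1. stop LXD and keep it from starting on boot
sudo reboot                                 # 2. clears any leftover mounts and networks
sudo rm -rvf /var/snap/lxd/common/lxd       # 3. remove all LXD state (destroys instances, images, pools)
sudo snap start --enable lxd                # 4. start LXD and re-enable it on boot
sudo lxd init                               # 5. reconfigure; creates one pool for all instances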

This is a terrible solution to dump containers.

OK, so I did your list below. I reinstalled LXD. I have a release server regenerated from a backup I had made just a few days ago. But… the network connection is not working.

#this is the command issued

lxc config device add ubuntu-vm-REL myport80 proxy listen=tcp:0.0.0.0:8002 connect=tcp:127.0.0.1:80002

Device myport80 added to ubuntu-vm-REL

Then I go into the container and I have python 3.

python -m http.server 8002

This site can’t be reached.

asl@ASL-PC:~$ lxc config device add ubuntu-vm-REL myport8002 proxy listen=tcp:0.0.0.0:8002 connect=tcp:127.0.0.1:8002

Device myport8002 added to ubuntu-vm-REL

asl@ASL-PC:~$ lxc config device show ubuntu-vm-REL

myport8002:
  connect: tcp:127.0.0.1:8002
  listen: tcp:0.0.0.0:8002
  type: proxy

asl@ASL-PC:~$

This worked in the past; not sure why it’s not working now. I don’t have any conflicting ports or containers set up at this point.

lisa

OK, got the network working. Two servers restored, one to go. I had a backup from lxc export.

Now on to backups… working off the instructions here:

https://www.cyberciti.biz/faq/how-to-backup-and-restore-lxd-containers/

See this error

asl@ASL-PC:~/backups/ubuntu-vm $ lxc import

Error: unknown command “import” for “lxc”

Run ‘lxc --help’ for usage.

There is no lxc import???

Thanks,

lisa

That suggests you’re on LXD 3.0.x or something. What does lxc version show you?