How to move containers from apt LXD to snap LXD (import/export)


(Tony Anytime) #1

I can’t stand around and do nothing, so I was able to create a new snap LXD on one of the servers, running alongside the broken LXD, and I can mount my non-running containers via ZFS, so I have access to them.

My new snap LXD is not a cluster and is not using ZFS. The new snap is working fine: I can create containers, exec bash, etc. The new LXD has a neat container import command; unfortunately, the old 3.0.3 doesn’t. I tried creating a tar file manually, but when I try to import it:

/snap/bin/lxc import Q-PODCAST.tar.gz
Importing container: 100% (818.77MB/s)
Error: Backup is missing index.yaml

If I could import my containers, that would solve half the problem. How can I create this missing index.yaml?

Also, if I run lxd init in non-cluster mode, can I later upgrade to a cluster, or should I set it up as a cluster from the beginning? I don’t remember whether converting from standalone to clustered erases your data.

thanks


(Turtle0x1) #2

I’m not sure about the clustering question, but instead of importing from backups, can’t you lxc copy to the new server?


(Tony Anytime) #3

I would love to use lxc copy, but my LXD won’t start and I can’t even do an lxc list. I don’t think I can get lxc copy to work.


(Stéphane Graber) #4

You’re trying to import a container tarball as a backup tarball. This will not work, as your hand-generated tarball does not contain the index file and extra metadata that a real backup would.
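For reference, the layout a real backup uses can be inspected on the working snap LXD: exporting any container there (e.g. `lxc export some-container ref.tar.gz`) produces a tarball whose listing shows where index.yaml sits. A small sketch for checking a tarball for that entry — the path pattern is an assumption, so compare against your own exported backup:

```shell
# Sketch: check a tarball for the index.yaml entry that `lxc import` requires.
# Assumption: index.yaml appears as a top-level or nested entry in the listing;
# verify against a tarball produced by `lxc export` on the working snap LXD.
has_index() {
  tar tzf "$1" | grep -qE '(^|/)index\.yaml$'
}

# Example (not run): has_index Q-PODCAST.tar.gz || echo "missing index.yaml"
```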


(Tony Anytime) #5

So I have my cluster working well again, minus the ZFS fragmentation. But I still need to move the containers to the snap to keep them on current software, right? What is the best way to move the cluster over? I don’t think lxd.migrate is a good idea; I’d rather move one container at a time if possible.
What are best practices here?


(Stéphane Graber) #6

Indeed, lxd.migrate for clusters is still pretty new and, because clusters have to stay consistent, may be a bit annoying: you’d need to run it on all of the nodes within a one-hour window so that all nodes get onto the new version. lxd.migrate will effectively hold until that’s the case.

Note that lxd.migrate needs to shut down containers as part of the migration, and that containers will be offline until the cluster is back online (all nodes upgraded to the snap).

In your case, if you have a bit of spare hardware, it’s likely easier to either set up a new cluster and move your containers over to it with lxc move, or at least set up one system that can temporarily run your containers while you wipe your old systems and set up a cluster with the snap, then move the containers back from that standalone system onto your new cluster.
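The per-container path can be sketched like this — a dry-run sketch, where the remote name `snapnew` and the target hostname are placeholders, not anything from this thread:

```shell
# Hedged sketch of moving one container at a time onto a new snap LXD host.
# "snapnew" and "new-server.example.com" are placeholder names.
# DRY_RUN=1 (the default here) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

migrate_one() {
  name=$1
  run lxc stop "$name"                  # lxc move wants the container stopped (no live migration without CRIU)
  run lxc move "$name" "snapnew:$name"  # copies rootfs + config, then removes the source copy
  run lxc start "snapnew:$name"
}

# One-time setup on the source host; the target must accept the trust handshake first:
run lxc remote add snapnew new-server.example.com

migrate_one Q-PODCAST
```

With DRY_RUN=0 the same script would actually run the commands, one container at a time.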


(Tony Anytime) #7

It may not be possible in the short term; I have more than a few containers and some are very large. I may do them one by one as I create new ones for new clients on new servers.
I had to get another server first, for drive capacity and the ZFS fragmentation issues. The two latest servers are performing well in the cluster, but in lxc cluster list they show DATABASE: NO. What does that mean? How do I get them to YES?


(Stéphane Graber) #8

That would normally mean that they do not replicate the database, which is a bit surprising given that your cluster is working. Maybe it’s just the database state being wrong, or maybe there’s an actual replication issue.

Can you show:

  • lxd sql global "SELECT * FROM nodes;" (on any of the machines)
  • lxd sql local "SELECT * FROM raft_nodes;" (this one on both machines)

@freeekanayaka may be able to help there too
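The two lists can also be compared mechanically. A sketch, assuming `lxd sql` prints ASCII tables like the ones shown in this thread (the helper name and file names are placeholders):

```shell
# Hedged sketch: pull just the ip:port column out of `lxd sql` table output
# so the global node list and the local raft list are easy to diff.
addresses() { grep -oE '[0-9]+(\.[0-9]+){3}:[0-9]+' "$1" | sort -u; }

# On a cluster member (not executed here; file names are placeholders):
#   lxd sql global "SELECT * FROM nodes;"      > global.txt
#   lxd sql local  "SELECT * FROM raft_nodes;" > raft.txt
#   comm -13 <(addresses raft.txt) <(addresses global.txt)   # members not in the raft list
```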


(Tony Anytime) #9

+----+----------+-------------+------------------+--------+----------------+-------------------------------------+---------+
| id | name     | description | address          | schema | api_extensions | heartbeat                           | pending |
+----+----------+-------------+------------------+--------+----------------+-------------------------------------+---------+
| 1  | curlyjoe |             | 64.71.77.29:8443 | 7      | 85             | 2019-02-17T12:59:49.702929044-05:00 | 0       |
| 2  | moe      |             | 64.71.77.32:8443 | 7      | 85             | 2019-02-17T12:59:49.702964883-05:00 | 0       |
| 3  | larry    |             | 64.71.77.80:8443 | 7      | 85             | 2019-02-17T12:59:49.804742719-05:00 | 0       |
| 4  | joe      |             | 64.71.77.13:8443 | 7      | 85             | 2019-02-17T12:59:49.931148147-05:00 | 0       |
| 5  | chemp    |             | 64.71.77.18:8443 | 7      | 85             | 2019-02-17T12:59:50.027983564-05:00 | 0       |
+----+----------+-------------+------------------+--------+----------------+-------------------------------------+---------+


(Tony Anytime) #10

joe:
+----+------------------+
| id | address          |
+----+------------------+
| 1  | 64.71.77.29:8443 |
| 2  | 64.71.77.32:8443 |
| 4  | 64.71.77.80:8443 |
+----+------------------+

CHEMP:/home/ic2000# lxd sql local 'SELECT * FROM raft_nodes;'
+----+------------------+
| id | address          |
+----+------------------+
| 1  | 64.71.77.29:8443 |
| 2  | 64.71.77.32:8443 |
| 4  | 64.71.77.80:8443 |
+----+------------------+


(Stéphane Graber) #11

Hmm, that’s odd. Your configuration seems fine: you have 3 database servers, things look consistent, and they all exist in the global table too.

@freeekanayaka any idea what’s going on?