Cannot enter my container, but the container is running

I cannot enter my container, but the container is running. Also, the container does not have any IP addresses.

P.S. I did get into the container, but everything works very slowly and almost all services do not work.
root@lxd0x:/home/quersys# lxc exec test001 -- /bin/bash
root@test001:~#
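
If you can get in but everything is slow, it may help to check load and failed units from inside the container first; a minimal sketch, assuming a systemd-based container named test001:

# Load average and how long the container has been up
lxc exec test001 -- uptime
# List systemd units that failed to start (assumes a systemd-based image)
lxc exec test001 -- systemctl --failed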

lxc list

lxc cluster list

quersys@lxd02:~$ systemctl status snap.lxd.daemon
● snap.lxd.daemon.service - Service for snap application lxd.daemon
Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; static; vendor preset: enabled)
Active: active (running) since Tue 2020-12-01 10:34:09 UTC; 47min ago
TriggeredBy: ● snap.lxd.daemon.unix.socket
Main PID: 1294 (daemon.start)
Tasks: 0 (limit: 4620)
Memory: 25.9M
CGroup: /system.slice/snap.lxd.daemon.service
‣ 1294 /bin/sh /snap/lxd/18402/commands/daemon.start

Dec 01 10:34:21 lxd02 lxd.daemon[1581]: t=2020-12-01T10:34:21+0000 lvl=warn msg="Dqlite: attempt 6: server 192.168.100.54:8443: dial: Failed to connect to HTTP endpoint: dial tcp 192.168.100.54:8443: connect: connection refused"
Dec 01 10:34:22 lxd02 lxd.daemon[1581]: t=2020-12-01T10:34:22+0000 lvl=warn msg="Dqlite: attempt 7: server 192.168.100.53:8443: no known leader"
Dec 01 10:34:22 lxd02 lxd.daemon[1581]: t=2020-12-01T10:34:22+0000 lvl=warn msg="Dqlite: attempt 7: server 192.168.100.54:8443: dial: Failed to connect to HTTP endpoint: dial tcp 192.168.100.54:8443: connect: connection refused"
Dec 01 10:34:25 lxd02 lxd.daemon[1581]: t=2020-12-01T10:34:25+0000 lvl=warn msg="Dqlite: attempt 0: server 192.168.100.53:8443: no known leader"
Dec 01 10:48:05 lxd02 lxd.daemon[1581]: t=2020-12-01T10:48:05+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 Raft:true LastHeartbeat:2020-12-01 10:33:25.822460464 +000>
Dec 01 10:48:05 lxd02 lxd.daemon[1581]: 2020/12/01 10:48:05 http: superfluous response.WriteHeader call from github.com/lxc/lxd/lxd/response.(*errorResponse).Render (response.go:224)
Dec 01 10:48:05 lxd02 lxd.daemon[1581]: 2020/12/01 10:48:05 http: superfluous response.WriteHeader call from github.com/lxc/lxd/lxd/response.(*errorResponse).Render (response.go:224)
Dec 01 10:48:05 lxd02 lxd.daemon[1581]: 2020/12/01 10:48:05 http: superfluous response.WriteHeader call from github.com/lxc/lxd/lxd/response.(*errorResponse).Render (response.go:224)
Dec 01 10:48:05 lxd02 lxd.daemon[1581]: 2020/12/01 10:48:05 http: superfluous response.WriteHeader call from github.com/lxc/lxd/lxd/response.(*errorResponse).Render (response.go:224)
Dec 01 10:48:06 lxd02 lxd.daemon[1294]: => LXD is ready
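
The "connection refused" and "no known leader" warnings suggest this daemon cannot reach the other cluster members on the dqlite/API port. A quick way to check reachability, assuming the member addresses shown in the log above:

# Test whether each cluster member answers on the LXD API port
nc -zv 192.168.100.53 8443
nc -zv 192.168.100.54 8443
# Then compare with what LXD itself thinks of the members
lxc cluster list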

root@lxd0x:~# nano /etc/netplan/00-installer-config.yaml

network:
  bridges:
    br0:
      addresses:
        - 192.168.100.54/24
      gateway4: 192.168.100.11
      interfaces:
        - ens3
      nameservers:
        addresses:
          - 208.67.222.222
          - 208.67.220.220
      mtu: 1400
      parameters:
        stp: false
        forward-delay: 0
  ethernets:
    ens3: {}
  version: 2
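
Before trusting a bridge change like this, it is safer to test it with netplan's rollback mode; a minimal sketch:

# Try the config; it reverts automatically after 120 s unless you confirm
sudo netplan try
# Apply it permanently once confirmed
sudo netplan apply
# Verify that br0 came up with the expected address
ip addr show br0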

This text file contains the lxd.log output:
cat /var/snap/lxd/common/lxd/logs/lxd.log

Please post the output of:

  • lxd sql global "SELECT * FROM nodes;"

  • lxd sql local "SELECT * FROM raft_nodes;"

quersys@lxd0x:~$ lxd sql global "SELECT * FROM config;"
+----+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id |         key         |                                                                                              value                                                                                               |
+----+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 1  | core.trust_password | 0fabe8346ffe62cd5f105e5e34504ba4ac94e337b6b340d1bccad41bbc4d947084abe87807c70e07b4bdf26094a33860db8b03398472321c4be740b2072e0515d9fd4900a8461027a5ce3ca9cdddd90a68ed7e5b7c8ca1fc644c24d1c0030d4c |
+----+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
quersys@lxd0x:~$ lxd sql local "SELECT * FROM config;"
+----+-----------------------+---------------------+
| id |          key          |        value        |
+----+-----------------------+---------------------+
| 1  | core.https_address    | 192.168.100.54:8443 |
| 2  | cluster.https_address | 192.168.100.54:8443 |
+----+-----------------------+---------------------+

Out of disk space or out of RAM?
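
Either is easy to rule out on the host; for example:

# Disk usage of all mounted filesystems
df -h
# Memory and swap usage in megabytes
free -m
# If the storage pool is loop-backed, check the pool itself too
# ("default" is assumed here; substitute your pool name)
lxc storage info default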

I found a solution :slight_smile: I do not know whether this is correct or not, but it is working.

When my container stops and is not working, I use the "snapshot" and "restore" commands, then just use the "start" command. I cannot understand why this works :slight_smile:

root@lxd02:~# lxc snapshot grafanaDB snap-20201201
root@lxd02:~# lxc restore grafanaDB snap-20201201
root@lxd02:~# lxc start grafanaDB
root@lxd02:~# lxc list

+-----------+---------+------------------------+------+-----------+-----------+----------+
|   NAME    |  STATE  |          IPV4          | IPV6 |   TYPE    | SNAPSHOTS | LOCATION |
+-----------+---------+------------------------+------+-----------+-----------+----------+
| grafanaDB | RUNNING | 192.168.100.226 (eth0) |      | CONTAINER | 4         | lxd02    |
+-----------+---------+------------------------+------+-----------+-----------+----------+
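
For what it is worth, that snapshot/restore/start workaround can be wrapped in a small script; a minimal sketch, assuming the container name is passed as the first argument (a hypothetical helper, not part of LXD):

#!/bin/sh
# revive.sh <container> - snapshot, restore, start, then show state
set -e
NAME="$1"
SNAP="snap-$(date +%Y%m%d)"
lxc snapshot "$NAME" "$SNAP"   # take a snapshot of the current state
lxc restore "$NAME" "$SNAP"    # immediately restore that snapshot
lxc start "$NAME"              # start the container
lxc list "$NAME"               # confirm it is RUNNING and has an IP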

Hello!

Today, when I came to work and tried to enter my container (test001), I could not, but lxc list shows that this container is "running".

root@lxd0x:/home/quersys# systemctl status snap.lxd.daemon

● snap.lxd.daemon.service - Service for snap application lxd.daemon
Loaded: loaded (/etc/systemd/system/snap.lxd.daemon.service; static; vendor preset: enabled)
Active: active (running) since Tue 2020-12-01 14:59:58 UTC; 17h ago
TriggeredBy: ● snap.lxd.daemon.unix.socket
Main PID: 103292 (daemon.start)
Tasks: 0 (limit: 4620)
Memory: 1.2M
CGroup: /system.slice/snap.lxd.daemon.service
‣ 103292 /bin/sh /snap/lxd/18402/commands/daemon.start

Dec 01 17:13:05 lxd0x lxd.daemon[103422]: t=2020-12-01T17:13:05+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:12:11.841829435 +0000 UTC Online:false updated:false}"
Dec 01 17:13:15 lxd0x lxd.daemon[103422]: t=2020-12-01T17:13:15+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:12:11.841829435 +0000 UTC Online:false updated:false}"
Dec 01 17:13:22 lxd0x lxd.daemon[103422]: t=2020-12-01T17:13:22+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:16:48 lxd0x lxd.daemon[103422]: t=2020-12-01T17:16:48+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:16:56 lxd0x lxd.daemon[103422]: t=2020-12-01T17:16:56+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:17:01 lxd0x lxd.daemon[103422]: t=2020-12-01T17:17:01+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:17:01 lxd0x lxd.daemon[103422]: t=2020-12-01T17:17:01+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:17:04 lxd0x lxd.daemon[103422]: t=2020-12-01T17:17:04+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-01 17:16:11.318504037 +0000 UTC Online:false updated:false}"
Dec 01 17:17:15 lxd0x lxd.daemon[103422]: t=2020-12-01T17:17:15+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443
Dec 02 08:10:44 lxd0x lxd.daemon[103422]: t=2020-12-02T08:10:44+0000 lvl=warn msg="Detected poll(POLLNVAL) event."

I cannot enter the container, could you please help with some advice?

quersys@lxd0x:~$ sudo cat /var/snap/lxd/common/lxd/logs/lxd.log


    t=2020-12-02T11:02:50+0000 lvl=info msg="Creating container" ephemeral=false name=test002 project=default
    t=2020-12-02T11:02:50+0000 lvl=info msg="Created container" ephemeral=false name=test002 project=default
    t=2020-12-02T11:02:50+0000 lvl=info msg="Creating container" ephemeral=false name=test002/snap-test001-20201014 project=default
    t=2020-12-02T11:02:51+0000 lvl=info msg="Created container" ephemeral=false name=test002/snap-test001-20201014 project=default
    t=2020-12-02T11:02:51+0000 lvl=info msg="Creating container" ephemeral=false name=test002/snap-test001-20201118-upgrade project=default
    t=2020-12-02T11:02:51+0000 lvl=info msg="Created container" ephemeral=false name=test002/snap-test001-20201118-upgrade project=default
    t=2020-12-02T11:02:51+0000 lvl=info msg="Creating container" ephemeral=false name=test002/snap-test001-20201202 project=default
    t=2020-12-02T11:02:51+0000 lvl=info msg="Created container" ephemeral=false name=test002/snap-test001-20201202 project=default
    t=2020-12-02T11:09:06+0000 lvl=info msg="Pruning expired instance backups"
    t=2020-12-02T11:09:06+0000 lvl=info msg="Done pruning expired instance backups"
    t=2020-12-02T11:17:48+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-02 11:17:18.095031523 +0000 UTC Online:false updated:false}"
    t=2020-12-02T11:18:32+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-02 11:17:58.390759077 +0000 UTC Online:false updated:false}"
    t=2020-12-02T11:18:43+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-02 11:17:58.390759077 +0000 UTC Online:false updated:false}"
    t=2020-12-02T11:18:48+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-02 11:17:58.390759077 +0000 UTC Online:false updated:false}"
    t=2020-12-02T11:19:02+0000 lvl=warn msg="Excluding offline node from refresh: {ID:2 Address:192.168.100.53:8443 RaftID:2 RaftRole:1 Raft:true LastHeartbeat:2020-12-02 11:17:58.390759077 +0000 UTC Online:false updated:false}"
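
To catch these warnings as they happen instead of re-reading the file, you can follow the log or LXD's event stream:

# Follow the daemon log live on the affected host
sudo tail -f /var/snap/lxd/common/lxd/logs/lxd.log
# Or stream logging events straight from LXD
lxc monitor --type=logging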

What does lxc exec say when you try to do it?

quersys@lxd0x:~$ lxc exec
Description:
  Execute commands in instances

  The command is executed directly using exec, so there is no shell and
  shell patterns (variables, file redirects, ...) won't be understood.
  If you need a shell environment you need to execute the shell
  executable, passing the shell commands as arguments, for example:

    lxc exec <instance> -- sh -c "cd /tmp && pwd"

  Mode defaults to non-interactive, interactive mode is selected if both stdin AND stdout are terminals (stderr is ignored).

Usage:
  lxc exec [<remote>:]<instance> [flags] [--] <command line>

Flags:
      --cwd                    Directory to run the command in (default /root)
  -n, --disable-stdin          Disable stdin (reads from /dev/null)
      --env                    Environment variable to set (e.g. HOME=/home/foo)
  -t, --force-interactive      Force pseudo-terminal allocation
  -T, --force-noninteractive   Disable pseudo-terminal allocation
      --group                  Group ID to run the command as (default 0)
      --mode                   Override the terminal mode (auto, interactive or non-interactive) (default "auto")
      --user                   User ID to run the command as (default 0)

Global Flags:
      --debug            Show all debug messages
      --force-local      Force using the local unix socket
  -h, --help             Print help
      --project string   Override the source project
  -q, --quiet            Don't show progress information
  -v, --verbose          Show all information messages
      --version          Print version number

lxc exec NAME bash
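
Based on the help text above, you can also force the terminal mode explicitly, which sometimes helps when exec appears to hang (test001 is assumed here as the instance name):

# Force an interactive pseudo-terminal
lxc exec test001 -t -- bash
# Force non-interactive mode for a one-off command
lxc exec test001 -T -- uptime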

Does it still hang? If it does, then please share ps fauxww and dmesg output from the host which is running that container.
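
For example, captured to files so they can be attached here (the file names are just illustrative):

# Full process tree on the host
ps fauxww > ps-output.txt
# Kernel ring buffer (sudo may be required)
sudo dmesg > dmesg-output.txt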

quersys@lxd0x:~$ lxc exec test002 bash
root@test002:~#

At this moment it is working; I did nothing, just waited for your advice. But the same situation happened yesterday: it was working, but in the morning when I came to work it did not work.

It also happened when I restarted the cluster… And one more thing I cannot understand: how can it be that containers are running but do not have an IP?
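
When an instance shows RUNNING but has no IP, checking both ends of the bridge usually narrows it down; a minimal sketch, assuming the br0 bridge from the netplan config above and the test001 container:

# Inside the container: did eth0 come up and get an address?
lxc exec test001 -- ip addr show eth0
# On the host: is the bridge up, and is the container's veth attached to it?
ip link show br0
bridge link show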