LXD is down and I cannot see my container list.
I get this error:
lxc list
Error: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd/unix.socket: connect: no such file or directory
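A quick way to confirm whether the snap's LXD daemon and its socket unit are running at all (assuming the snap packaging, which the socket path above suggests):
snap services lxd
sudo systemctl status snap.lxd.daemon snap.lxd.daemon.unix.socket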
Can you please show the contents of /var/snap/lxd/common/lxd/logs/lxd.log?
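(For example, something like sudo tail -n 50 /var/snap/lxd/common/lxd/logs/lxd.log will print the most recent entries.)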
Here it is. It seems LXD is not up.
t=2022-09-09T02:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T03:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T03:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T04:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T04:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T05:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T05:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T06:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T06:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T07:48:29+0430 lvl=info msg="Updating images"
t=2022-09-09T07:48:29+0430 lvl=info msg="Done updating images"
t=2022-09-09T07:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T07:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T08:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T08:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T09:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T09:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T10:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T10:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T11:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T11:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T12:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T12:48:29+0430 lvl=info msg="Done pruning expired instance backups"
t=2022-09-09T13:48:29+0430 lvl=info msg="Updating images"
t=2022-09-09T13:48:29+0430 lvl=info msg="Done updating images"
t=2022-09-09T13:48:29+0430 lvl=info msg="Pruning expired instance backups"
t=2022-09-09T13:48:29+0430 lvl=info msg="Done pruning expired instance backups"
Can you show the output of sudo ls -la /var/snap/lxd/common/lxd/unix.socket
and sudo snap info lxd?
Snap info:
Images are available for all Ubuntu releases and architectures as well
as for a wide number of other Linux distributions. Existing
integrations with many deployment and operation tools, makes it work
just like a public cloud, except everything is under your control.
LXD containers are lightweight, secure by default and a great
alternative to virtual machines when running Linux on Linux.
LXD virtual machines are modern and secure, using UEFI and secure-boot
by default and a great choice when a different kernel or operating
system is needed.
With clustering, up to 50 LXD servers can be easily joined and managed
together with the same tools and APIs and without needing any external
dependencies.
Supported configuration options for the snap (snap set lxd [<key>=<value>]):
- ceph.builtin: Use snap-specific Ceph configuration [default=false]
- ceph.external: Use the system’s ceph tools (ignores ceph.builtin) [default=false]
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increase logging to debug level [default=false]
- daemon.group: Set group of users that have full control over LXD [default=lxd]
- daemon.user.group: Set group of users that have restricted LXD access [default=lxd]
- daemon.preseed: Pass a YAML configuration to lxd init
on initial start
- daemon.syslog: Send LXD log events to syslog [default=false]
- daemon.verbose: Increase logging to verbose level [default=false]
- lvm.external: Use the system’s LVM tools [default=false]
- lxcfs.pidfd: Start per-container process tracking [default=false]
- lxcfs.loadavg: Start tracking per-container load average [default=false]
- lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
- openvswitch.external: Use the system’s OVS tools (ignores openvswitch.builtin) [default=false]
- ovn.builtin: Use snap-specific OVN configuration [default=false]
- shiftfs.enable: Enable shiftfs support [default=auto]
For system-wide configuration of the CLI, place your configuration in
/var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
─➤ sudo ls -la /var/snap/lxd/common/lxd/unix.socket
ls: cannot access ‘/var/snap/lxd/common/lxd/unix.socket’: No such file or directory
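As a side note on the configuration options listed above: they are applied with snap set and then picked up by the daemon, typically like this (daemon.debug is just an example; a full restart is needed instead if the daemon is not running at all):
sudo snap set lxd daemon.debug=true
sudo systemctl reload snap.lxd.daemon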
Please show sudo ps aux | grep lxd
sudo ps aux | grep lxd
[sudo] password for farbod:
lxd 67342 0.0 0.0 7200 1084 ? Ss 17:09 0:00 dnsmasq --keep-in-foreground --strict-order --bind-interfaces --except-interface=lo --pid-file= --no-ping --interface=lxdbr0 --dhcp-rapid-commit --listen-address=10.125.208.1 --dhcp-no-override --dhcp-authoritative --dhcp-leasefile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.leases --dhcp-hostsfile=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.hosts --dhcp-range 10.125.208.2,10.125.208.254,1h --listen-address=fd42:26f1:148a:843::1 --enable-ra --dhcp-range ::,constructor:lxdbr0,ra-stateless,ra-names -s lxd --interface-name _gateway.lxd,lxdbr0 -S /lxd/ --conf-file=/var/snap/lxd/common/lxd/networks/lxdbr0/dnsmasq.raw -u lxd -g lxd
farbod 109306 0.0 0.0 6432 724 pts/1 S+ 18:53 0:00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn --exclude-dir=.idea --exclude-dir=.tox lxd
OK, so LXD itself is not running.
What happens if you run sudo snap refresh lxd?
Yes, it seems it's not running.
sudo snap refresh lxd
snap "lxd" has no updates available
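As an aside, snap refresh only checks for updates; starting the snap's services directly would normally be done with:
sudo snap start lxd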
Can you run:
sudo systemctl start snap.lxd.daemon.service snap.lxd.daemon.unix.socket
sudo systemctl start snap.lxd.daemon.service snap.lxd.daemon.unix.socket
Job failed. See "journalctl -xe" for details.
Can you try sudo journalctl -n 300 and paste the output?
Here you are:
Sep 09 19:22:43 ubuntu sshd[121241]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.173.12 user=root
Sep 09 19:22:44 ubuntu sshd[121241]: Failed password for root from 61.177.173.12 port 33812 ssh2
Sep 09 19:22:46 ubuntu sudo[120861]: pam_unix(sudo:session): session closed for user root
Sep 09 19:22:48 ubuntu sudo[121255]: farbod : TTY=pts/2 ; PWD=/home/farbod ; USER=root ; COMMAND=/usr/bin/systemctl daemon-reload
Sep 09 19:22:48 ubuntu sudo[121255]: pam_unix(sudo:session): session opened for user root by farbod(uid=0)
Sep 09 19:22:48 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:22:48 ubuntu systemd[1]: Reloading.
Sep 09 19:22:48 ubuntu sshd[121241]: Failed password for root from 61.177.173.12 port 33812 ssh2
Sep 09 19:22:48 ubuntu sudo[121255]: pam_unix(sudo:session): session closed for user root
Sep 09 19:22:51 ubuntu sshd[121241]: Failed password for root from 61.177.173.12 port 33812 ssh2
Sep 09 19:22:52 ubuntu sshd[121241]: Received disconnect from 61.177.173.12 port 33812:11: [preauth]
Sep 09 19:22:52 ubuntu sshd[121241]: Disconnected from authenticating user root 61.177.173.12 port 33812 [preauth]
Sep 09 19:22:52 ubuntu sshd[121241]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.173.12 user=root
Sep 09 19:22:59 ubuntu sudo[121288]: farbod : TTY=pts/2 ; PWD=/home/farbod ; USER=root ; COMMAND=/usr/bin/systemctl start snap.lxd.daemon snap.lxd.daemon.unix.socket
Sep 09 19:22:59 ubuntu sudo[121288]: pam_unix(sudo:session): session opened for user root by farbod(uid=0)
Sep 09 19:22:59 ubuntu systemd[1]: snap.lxd.daemon.unix.socket: Socket service snap.lxd.daemon.service already active, refusing.
Sep 09 19:22:59 ubuntu systemd[1]: Failed to listen on Socket unix for snap application lxd.daemon.
Sep 09 19:23:04 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=116.87.162.39 DST=195.110.38.200 LEN=44 TOS=0x00 PREC=0x00 TTL=43 ID=23095 PROTO=T>
Sep 09 19:23:05 ubuntu sudo[121589]: farbod : TTY=pts/2 ; PWD=/home/farbod ; USER=root ; COMMAND=/usr/bin/journalctl -n 300
Sep 09 19:23:05 ubuntu sudo[121589]: pam_unix(sudo:session): session opened for user root by farbod(uid=0)
Sep 09 19:23:06 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=192.241.205.51 DST=195.110.38.200 LEN=40 TOS=0x00 PREC=0x00 TTL=234 ID=54321 PROTO>
Sep 09 19:23:08 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:23:09 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=92.63.197.83 DST=195.110.38.200 LEN=40 TOS=0x00 PREC=0x00 TTL=240 ID=35062 PROTO=T>
Sep 09 19:23:14 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=167.94.138.64 DST=195.110.38.200 LEN=44 TOS=0x00 PREC=0x00 TTL=35 ID=28288 PROTO=T>
Sep 09 19:23:28 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:23:48 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:23:57 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=154.89.5.47 DST=195.110.38.200 LEN=44 TOS=0x00 PREC=0x00 TTL=235 ID=40358 PROTO=TC>
Sep 09 19:23:58 ubuntu sudo[121589]: pam_unix(sudo:session):
Sep 09 19:24:08 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:24:14 ubuntu sudo[122040]: farbod : TTY=pts/2 ; PWD=/home/farbod ; USER=root ; COMMAND=/usr/bin/journalctl -n 300
Sep 09 19:24:14 ubuntu sudo[122040]: pam_unix(sudo:session): session opened for user root by farbod(uid=0)
Sep 09 19:24:14 ubuntu sudo[122040]: pam_unix(sudo:session): session closed for user root
Sep 09 19:24:22 ubuntu kernel: [UFW BLOCK] IN=eno1 OUT= MAC=2c:44:fd:7f:b7:e8:00:81:c4:f7:0f:57:08:00 SRC=154.89.5.92 DST=195.110.38.200 LEN=44 TOS=0x00 PREC=0x00 TTL=235 ID=27845 PROTO=TC>
Sep 09 19:24:28 ubuntu kernel: [UFW BLOCK] IN=lxdbr0 OUT= PHYSIN=tapca85acb4 MAC=00:16:3e:c0:6a:e7:00:16:3e:c0:5f:c6:86:dd SRC=fe80:0000:0000:0000:0216:3eff:fec0:5fc6 DST=fe80:0000:0000:00>
Sep 09 19:24:33 ubuntu sudo[122096]: farbod : TTY=pts/2 ; PWD=/home/farbod ; USER=root ; COMMAND=/usr/bin/journalctl -n 300
Sep 09 19:24:33 ubuntu sudo[122096]: pam_unix(sudo:session): session opened for user root by farbod(uid=0)
It seems the restart worked.
Now my node is up.
It says the service is already active; that's odd.
Can you show systemctl -a | grep snap.lxd
and systemctl status snap.lxd.daemon?
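If anything still looks stuck, restarting both units usually clears the "already active, refusing" state:
sudo systemctl restart snap.lxd.daemon.unix.socket snap.lxd.daemon
or simply:
sudo snap restart lxd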
Yes… I checked again to make sure LXD is okay, and my nodes are available now. Everything is fine. Thanks a lot for your help.
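For anyone hitting the same issue, a quick sanity check once things are back is:
lxc list
and, if this is a clustered setup, lxc cluster list should show every member as ONLINE.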