LXD on CentOS 7

howto
snap

(Stéphane Graber) #22

Your PATH is likely limited for some reason. Spawning a new shell may refresh that.
Otherwise you’ll need to add /snap/bin to your PATH
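As a sketch of that advice, the snap bin directories can be appended to the current shell's PATH (the exact directory depends on how snapd was packaged; on CentOS the packaged path is /var/lib/snapd/snap/bin, with /snap/bin usually a symlink):

```shell
# Append the usual snap bin directories to PATH for the current shell
# (adjust to your install; make it persistent via ~/.bashrc if it works)
export PATH="$PATH:/snap/bin:/var/lib/snapd/snap/bin"

# Verify the shell can now find lxd (prints its path if found)
command -v lxd || echo "lxd still not on PATH"
```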


(Adam) #23

Checked the path, but still the same problem:
[root@localhost ~]# echo $PATH
/usr/local/sbin:/sbin:/bin:/usr/sbin:/usr/bin:/var/lib/snapd/snap/bin:/root/bin
[root@localhost ~]# lxd init
-bash: lxd: command not found
[root@localhost ~]#


(Tatrasiel R) #24

@stgraber Is CentOS still supported? And should I open a new thread for this?

I have this problem. I am going to continue working through it as much as I can.

Error: Failed to run: /snap/lxd/current/bin/lxd forkstart c1 /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/c1/lxc.conf: 
Try `lxc info --show-log c1` for more info

The thing that sticks out is that /snapd isn’t an existing directory. /root/snapd/ is, but I am not sure whether this is related.

Log:

lxc test 20180927140717.978 ERROR    start - start.c:lxc_spawn:1650 - Invalid argument - Failed to clone a new set of namespaces
lxc test 20180927140718.336 WARN     network - network.c:lxc_delete_network_priv:2597 - Invalid argument - Failed to remove interface "vethVT8YRF" from "lxdbr0"
lxc test 20180927140718.347 ERROR    start - start.c:__lxc_start:1910 - Failed to spawn container "test"
lxc test 20180927140718.349 ERROR    lxccontainer - lxccontainer.c:wait_on_daemonized_start:840 - Received container state "ABORTING" instead of "RUNNING"
lxc test 20180927140718.354 ERROR    conf - conf.c:userns_exec_1:4333 - Failed to clone process in new user namespace
lxc test 20180927140718.358 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_destroy:1119 - Failed to destroy cgroups
lxc 20180927140718.369 WARN     commands - commands.c:lxc_cmd_rsp_recv:130 - Connection reset by peer - Failed to receive response for command "get_state"



[root@slot02 ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-862.11.6.el7.x86_64 root=/dev/mapper/centos_slot02-root ro crashkernel=auto rd.lvm.lv=centos_slot02/root rd.lvm.lv=centos_slot02/swap rhgb quiet LANG=en_US.UTF-8 user_namespace.enable=1 namespace.unpriv_enable=1
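For reference, the two user-namespace boot parameters shown in that cmdline can be added with grubby on CentOS 7; this is a sketch, assuming a standard grub2/grubby setup:

```shell
# Add the user-namespace kernel arguments to the default boot entry
# (run as root; takes effect after the next reboot)
grubby --update-kernel="$(grubby --default-kernel)" \
       --args="user_namespace.enable=1 namespace.unpriv_enable=1"

# After rebooting, confirm with:
#   cat /proc/cmdline
```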

(Stéphane Graber) #25

What does cat /proc/sys/user/max_user_namespaces show?


(Tatrasiel R) #26
[root@slot02 ~]#  cat /proc/sys/user/max_user_namespaces 
0

Also, when I cat /etc/sysctl.d/99-userns.conf
I see

user.max_user_namespaces=3883

What should this be? I assume not zero.


(Stéphane Graber) #27

It should be 3883 per that sysctl.d file; I suspect that’s your issue.

Try running (as root):

echo 3883 > /proc/sys/user/max_user_namespaces

Assuming this doesn’t fail, it should fix LXD, at least until the next reboot.
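To make the setting survive reboots, the same value can go through the sysctl machinery instead of a raw write to /proc; a sketch, assuming the /etc/sysctl.d/99-userns.conf drop-in mentioned earlier in the thread:

```shell
# Reload the existing drop-in so user.max_user_namespaces is applied (as root)
sysctl -p /etc/sysctl.d/99-userns.conf

# Equivalent one-shot form:
#   sysctl -w user.max_user_namespaces=3883

# Verify:
#   cat /proc/sys/user/max_user_namespaces
```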


#28

[ This question has been answered :slight_smile: ]


(Tatrasiel R) #29

OK, this worked for me. I tried to reply earlier but the site seemed down. Thanks. We should probably add this to the troubleshooting steps.

I would like to make a YouTube video for this if you think it will help.


(HSB) #30

@stgraber Nice topic. Thanks. I tried this on CentOS 7.6 and am getting a squashfs error. Any thoughts on a resolution?

snap install lxd
error: system does not fully support snapd: cannot mount squashfs image using "squashfs": mount:
       unknown filesystem type 'squashfs'

#31

how about

sudo yum install squashfs-tools
sudo modprobe squashfs
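If the module loads but is gone again after a reboot, it can be listed in a modules-load.d drop-in; a sketch, assuming systemd's standard /etc/modules-load.d directory (the file name squashfs.conf is just a convention):

```shell
# Load squashfs now (as root)
modprobe squashfs

# Have systemd load it at every boot
echo squashfs > /etc/modules-load.d/squashfs.conf

# Confirm it is loaded
lsmod | grep squashfs
```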

(HSB) #32

Thanks @gpatel-fr, I have already tried what you suggested:
squashfs-tools is already installed.
After modprobe squashfs, I tried running the snap command again and got the same error (including after a reboot).


(Stéphane Graber) #33

Are you running a non-standard kernel?

CentOS 7 is supposed to have the squashfs module available for its kernel, and it does in our test environment at least.


#34

I tried it with KVM and it installed fine with the 7.6.1810 minimal ISO image. I can launch an Ubuntu 18 image.

I see a strange message with sudo snap services; it seems that the server returns an error 500. I can’t be 100% certain it is a CentOS-specific problem, since I had a crash because I underestimated the necessary disk space; I managed to extend everything, and after a bit of btrfs cleaning on the lxd volume LXD was working again. Everything seems to work except this snap services command.

FTR,

sudo lsmod | grep sq

shows squashfs.

You can see your kernel config with the usual

cat /boot/config-$(uname -r) | grep SQUASH


(HSB) #35

@stgraber @gpatel-fr Thanks.
I am using an internal cloud, so I have limited control over the image. lsmod doesn’t show any squashfs entries, but the boot config does have some.


#36

@stgraber talked about a ‘clean image’ in his first post. That’s about it: if you have no control over the kernel (corporate policy and the like) and the people setting up VMs have definite ideas about what can run on the company’s computers, you can be blocked from doing some things; that’s life.
Here is the config on a ‘clean’ instance (installed from the DVD image, standard security profile):

CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3

If you have the same, it would be difficult to understand why it does not work.
As you can see, squashfs is compiled as a module (m), while on Ubuntu it is built into the kernel statically (y).
If you have CONFIG_SQUASHFS=n, you will have to get someone to provide a different kernel.


(HSB) #37

CONFIG_SQUASHFS=m, so it seems like the module is missing … lsmod does not show any squashfs modules.


#38

m means that SQUASHFS is available as a module.

In a minimal install, squashfs is NOT loaded in memory BEFORE snap installation, and it IS loaded (permanently, at boot) when snap is installed and working.

So the first thing to do is to test if snap is really working, using

sudo snap install --edge test-snapd-hello-multi-arch

and running that application to see if it displays ‘hello world’.

It this does not succeed, there is no point in fiddling with lxd, that would be a compatibility problem between your particular installation and snap (since snap works with a vanilla install and the procedure explicited in the first post of this thread)