Nested containers issues (permissions, zfs, possibly something else)

Aloha,

(Edit… removing a lot of long-winded stuff talking about how awesome LXD is, several people here probably already know that. Skipping right to the main points in this message…)

I am currently running into some issues with nested containers.

Here are some notes:

Running everything as root, setting security.nesting to true, and expanding subgid & subuid along the way.

  • Ubuntu VM with uid/gid update in place, created VM snapshot
  • lxd init manually ran for a generic setup, calling it host001
  • Answered yes to print the preseed YAML, pushed the yaml to a private github repo
  • Revert VM to snapshot to test the preseed
  • Cloned github repo, lxd init with new preseed, success
  • Launched org001 (Ubuntu 20.04 container) within host001, success
  • Bashed into org001, updated/upgraded, installed git, snapd, snap installed lxd
  • Updated uid/gid to prep for nesting, restarted container
  • Bashed back into org001, lxd init manually ran for a generic setup
  • Same approach as with host001, pushed yaml to private repo
  • Reverted to recent VM snapshot to test new preseed
  • Hopped back into host001 then org001
  • Attempted to lxd init the preseed and launch web001
  • Error, permission denied on /dev/stdin

Every manual run of lxd init without the preseed succeeds, including within the nested container.

However, whenever a preseed is used, it successfully initializes LXD on the host, but not in the first nested layer’s container.

The preseed yaml itself is very generic, auto, lxdbr0, default, etc.

After running:
lxd init --preseed < ./preseed_file.yaml

It resulted in:
Error: Failed to read from stdin: read /dev/stdin: permission denied

After running:
cat ./preseed_file.yaml | lxd init --preseed

It resulted in:
Error: Failed to create storage pool 'default': Failed to run: zpool create -f -m none -O compression=on default /var/snap/lxd/common/lxd/disks/default.img: /dev/zfs and /proc/self/mounts are required.

Tried running:
udevadm trigger
mount -t proc proc /proc
Although /proc was already mounted.
Tried umount -l /proc then mount again.
Same error occurred.

There’s the option of changing from zfs to another storage type, although I would like to keep using ZFS, unless these days LXD has a better option.

Curious what might be preventing lxd init with preseed and launching nested containers from succeeding.

ZFS can’t work nested, so that part is somewhat expected.

If your outer storage is ZFS, then your only option for nested storage driver is dir.
The only exception to this is btrfs which allows it to be used in a container if that container is itself on btrfs.

I’m hoping that nested ZFS will eventually be a thing, but that’s not currently the case.

The /dev/stdin error is interesting; that’s likely coming from AppArmor flagging the source of the file. Going through cat + pipe avoids this issue.

Changed zfs to dir, preseed init succeeded, and successfully launched nested container. Thank you!

Next is an error with snapd and apparmor.

Using a very basic yaml:

config:
  core.https_address: '[::]:8443'
  core.trust_password: ""
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbr0
  type: ""
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null

The layout so far:

  1. host001 (test VM host on physical host) is now running the following:
  2. org001 (main container), which is now running the following:
  3. web001 (nested container)

Created local image with LXD preinstalled to avoid downloading new image every time.

Overview of steps so far:

  1. VM host: update, upgrade, install snapd, snap install lxd (4.0/stable)
  2. Update /etc/subgid, /etc/subuid, then restart VM
  3. Reconnect: retrieve yaml, lxd init, launch org001 (security nesting true)
  4. Connect to org001, update with new subgid & subuid, snap restart lxd, unable to use snap

error: system does not fully support snapd: apparmor detected but insufficient permissions to use it

Would like to launch another nested LXD container within this one.

Looked for possible cause of insufficient permissions.

From root@web001:
# which snap
/usr/bin/snap
# apparmor_status | grep snap
You do not have enough privilege to read the profile set.

Found /etc/apparmor.d/usr.lib.snapd.snap-confine.real and went through it; it seemed okay, everything default, and it would seem odd to need to edit it manually.

Maybe it needed to be reinstalled directly within web001.
# apt-get install --reinstall snapd
Attempted to restart LXD.
# snap restart lxd
Same error, apparmor detected insufficient permissions to use snapd.

Installed apparmor-utils, still unable to restart lxd, reinstalled snapd again, new error.
# apt-get install apparmor-utils
# snap restart lxd
Same error, apparmor detected insufficient permissions to use snapd.
# apt-get install --reinstall snapd
apparmor_parser: Unable to replace "mount-namespace-capture-helper". Permission denied; attempted to load a profile while confined?
apparmor_parser: Unable to replace "/usr/lib/snapd/snap-confine". Permission denied; attempted to load a profile while confined?
snapd.failure.service is a disabled or a static unit, not starting it.
snapd.snap-repair.service is a disabled or a static unit, not starting it.
Job for snapd.apparmor.service failed because the control process exited with error code.
See "systemctl status snapd.apparmor.service" and "journalctl -xe" for details.
Processing triggers for mime-support (3.64ubuntu1) ...

# systemctl status snapd.apparmor.service

● snapd.apparmor.service - Load AppArmor profiles managed internally by snapd
     Loaded: loaded (/lib/systemd/system/snapd.apparmor.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2020-06-23 18:05:26 UTC; 7min ago
    Process: 994 ExecStart=/usr/lib/snapd/snapd-apparmor start (code=exited, status=123)
   Main PID: 994 (code=exited, status=123)

The errors say it could not replace profile snap.lxd.lxc-to-lxd (and many more): permission denied, asking whether a profile load was attempted while confined. Maybe there is a need for a configuration change in AppArmor.

snapd.apparmor.service: Main process exited, code=exited, status=123/n/a
web001 systemd[1]: snapd.apparmor.service: Failed with result 'exit-code'.
web001 systemd[1]: Failed to start Load AppArmor profiles managed internally by snapd.

# journalctl -xe highlights:
apparmor.systemd[109]: Error: Could not replace profile /var/cache/apparmor/26b63962.0/lsb_release: Permission denied
snapd-apparmor[158]: Error: Could not replace profile /var/cache/apparmor/26b63962.0/snap-update-ns.lxd: Permission denied

I might delete the web001 container, create a new one and try fresh without the reinstalls, or maybe try a different image in case the error is due to an image created within another container. However, I would like to be able to use one image as a general LXD template; there will be 2 more layers of containers after web001, including 3 containers within it (for 3 environments), and each of those will contain the actual applications.

Update:

This error:
error: system does not fully support snapd: apparmor detected but insufficient permissions to use it
Also occurs with normal images. Tried spinning up from images:ubuntu/focal.

Some googling may find something; from the error output alone it looks like a setting in AppArmor is disallowing snap/snapd from doing its thing.

On second thought, maybe it’s what it says about the system not supporting it. Maybe the parent container (or the host beyond that) needs to be updated to allow child/nested containers to have more permissions.

From a quick search this may resolve it:

# lxc profile edit default

Add the following:

config:
  security.nesting: "true"
  security.privileged: "true"

Possibly also:

raw.lxc: |-
  lxc.cgroup.devices.allow=a
  lxc.mount.auto=proc:rw sys:rw

Going to give that a try.

Okay, so far it may only need the config: update in the profile; I have not tried the raw.lxc part yet. snap refresh works now, but I am still unable to restart lxd.

# snap refresh
All snaps up to date.

# snap restart lxd
error: cannot perform the following tasks:
- restart of [lxd.activate lxd.daemon] (# systemctl restart snap.lxd.activate.service snap.lxd.daemon.service
Job for snap.lxd.activate.service failed because the control process exited with error code.
See "systemctl status snap.lxd.activate.service" and "journalctl -xe" for details.
)
- restart of [lxd.activate lxd.daemon] (exit status 1)

# systemctl status snap.lxd.activate.service
● snap.lxd.activate.service - Service for snap application lxd.activate
     Loaded: loaded (/etc/systemd/system/snap.lxd.activate.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2020-06-23 21:56:39 UTC; 36s ago
    Process: 306 ExecStart=/usr/bin/snap run lxd.activate (code=exited, status=1/FAILURE)
   Main PID: 306 (code=exited, status=1/FAILURE)

systemd[1]: Starting Service for snap application lxd.activate...
lxd.activate[339]: cannot change apparmor hat: No child processes
lxd.activate[340]: cannot change profile for the next exec call: Permission denied
systemd[1]: snap.lxd.activate.service: Main process exited, code=exited, status=1/FAILURE
lxd.activate[306]: snap-update-ns failed with code 1: File exists
systemd[1]: snap.lxd.activate.service: Failed with result 'exit-code'.
systemd[1]: Failed to start Service for snap application lxd.activate.

journalctl

systemd-logind.service: Unexpected error response from GetNameOwner(): Connection terminated
systemd[1]: systemd-logind.service: start operation timed out. Terminating.
systemd[1]: systemd-logind.service: Unexpected error response from GetNameOwner(): Connection terminated
systemd[1]: systemd-logind.service: Failed with result 'timeout'.

Maybe the raw.lxc change will resolve it.

Update 2:

Actually, I had to reboot the parent container.

# systemctl status snap.lxd.activate.service
● snap.lxd.activate.service - Service for snap application lxd.activate
     Loaded: loaded (/etc/systemd/system/snap.lxd.activate.service; enabled; vendor preset: enabled)
     Active: activating (start) since Tue 2020-06-23 22:04:10 UTC; 326ms ago
   Main PID: 184 (snap)
      Tasks: 1 (limit: 14276)
     Memory: 7.4M
     CGroup: /system.slice/snap.lxd.activate.service
             └─184 /usr/bin/snap run lxd.activate

systemd[1]: Starting Service for snap application lxd.activate...

The weird thing is, I used the security.nesting=true option when creating the containers and didn’t think it would also be needed in the profile. Likewise, I don’t know if it’s necessary to set security.privileged=true at the profile level. Going to try a fresh parent container, reboot it, then spin up a child container and see if the permissions go through. If not, I may just need to go ahead with a revised profile.

I would like to avoid setting security.privileged to true; I have already created new ranges for uid/gid and would prefer keeping container IDs separate from the host. I’m pretty sure this worked in the past without setting that value to true, although security.nesting has required true.

Update 3:

Tried various things. At first I thought it might require security.privileged: "true" in the profile (unless somewhere else is better), although I would rather not use that. Maybe it’s an issue with /etc/subgid and /etc/subuid, although I have a larger range for each parent-level container, with the largest on the host.

With security privileged true, I am able to successfully run snap refresh and snap refresh lxd, although when I run snap restart lxd it fails:

error: cannot perform the following tasks:
- restart of [lxd.activate lxd.daemon] (# systemctl restart snap.lxd.activate.service snap.lxd.daemon.service
Job for snap.lxd.activate.service failed because the control process exited with error code.
See "systemctl status snap.lxd.activate.service" and "journalctl -xe" for details.
)
- restart of [lxd.activate lxd.daemon] (exit status 1)

Logs show same output as before.

Status shows not running. I restarted the container before checking.

# systemctl status snap.lxd.activate.service
● snap.lxd.activate.service - Service for snap application lxd.activate
     Loaded: loaded (/etc/systemd/system/snap.lxd.activate.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2020-06-24 01:02:14 UTC; 8s ago
    Process: 172 ExecStart=/usr/bin/snap run lxd.activate (code=exited, status=1/FAILURE)
   Main PID: 172 (code=exited, status=1/FAILURE)

systemd[1]: Starting Service for snap application lxd.activate...
lxd.activate[210]: cannot change apparmor hat: No child processes
lxd.activate[213]: cannot change profile for the next exec call: Permission denied
lxd.activate[172]: snap-update-ns failed with code 1: No such file or directory
systemd[1]: snap.lxd.activate.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: snap.lxd.activate.service: Failed with result 'exit-code'.
systemd[1]: Failed to start Service for snap application lxd.activate.

To avoid going in circles, going to step away, do some extra digging into this later.

It’s probably a really simple fix to permissions and I’m not seeing it.

For some additional context, here are the subgid & subuid numbers I’m using:

host001 (vm with base lxd):

echo "root:2345678901:2345678901" | tee -a /etc/subuid /etc/subgid && echo "lxd:1234567890:1234567890" | tee -a /etc/subuid /etc/subgid

org001 (lxd within host001):

echo "root:234567890:234567890" | tee -a /etc/subuid /etc/subgid && echo "lxd:123456789:123456789" | tee -a /etc/subuid /etc/subgid

web001 (lxd within org001) would be set up with the following, but I am currently unable to use lxd in that node:

echo "root:23456789:23456789" | tee -a /etc/subuid /etc/subgid && echo "lxd:12345678:12345678" | tee -a /etc/subuid /etc/subgid

Each time I update the subgid & subuid, I snap restart lxd.

Maybe my numbers are off.
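As a quick sanity check on those numbers (a sketch, assuming the usual 32-bit ID space with a maximum of 4294967295): an entry’s start plus count has to stay inside that space, or the map cannot be applied. check_idmap below is just a throwaway helper, not an LXD tool:

```shell
#!/bin/sh
# Check that a subuid/subgid entry "name:start:count" fits in the
# 32-bit ID space; the last mapped ID is start + count - 1.
check_idmap() {
    start=$(printf '%s' "$1" | cut -d: -f2)
    count=$(printf '%s' "$1" | cut -d: -f3)
    last=$((start + count - 1))
    if [ "$last" -le 4294967295 ]; then
        echo "ok: $1 (last id $last)"
    else
        echo "overflow: $1 (last id $last exceeds 4294967295)"
    fi
}

check_idmap "root:2345678901:2345678901"   # host001 entry above
check_idmap "root:234567890:234567890"     # org001 entry above
```

If I have the arithmetic right, the host001 root range above (last id 2345678901 + 2345678901 - 1 = 4691357801) actually overflows the 32-bit space, so the numbers may indeed be off at that level; the org001 and deeper ranges fit.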

I would like to add one more…

dev001 (lxd within web001):

echo "root:2345678:2345678" | tee -a /etc/subuid /etc/subgid && echo "lxd:1234567:1234567" | tee -a /etc/subuid /etc/subgid

Here’s an overview of what’s going on:

  • install Ubuntu Server 20.04 VM (“host001”), su into root
  • update, upgrade, install git snapd, snap install lxd (4.0/stable)
  • add to /etc/subgid & /etc/subuid: root:2345678901:2345678901 and lxd:1234567890:1234567890
  • reboot vm, ssh & su into root again
  • git clone from private repo a generic yaml for preseed
  • lxc profile edit default, added security.nesting: "true", reboot, ssh, su root
  • lxc launch images:ubuntu/focal org001 -c security.nesting=true
  • bash into org001, update, upgrade, install git snapd, snap install lxd (4.0/stable), reboot org001
  • create snapshot then publish it, edit new image to set “public: true”
  • bash into org001
  • add to /etc/subgid & /etc/subuid: root:234567890:234567890 and lxd:123456789:123456789
  • snap restart lxd, reboot org001
  • bash into org001
  • git clone from private repo a generic yaml for preseed
  • lxc remote add the local repo and new image
  • lxc profile edit default, added security.nesting: "true", reboot, bash back into org001
  • lxc launch local_repo:new_image web001 -c security.nesting=true
  • bash into web001
  • verify snap & lxd are functioning as intended, they are not

error: system does not fully support snapd: apparmor detected but insufficient permissions to use it

Ok, so you have:

  • host (host001)
  • first level (org001)
  • second level (web001)

And you’re trying to install snaps/lxd inside of web001, correct?

If so, this indeed will not work. The problem boils down to AppArmor namespacing/nesting. AppArmor only supports a single level of namespacing.

In this case, org001 gets its own AppArmor namespace which snapd and lxd then happily use.

web001 cannot get its own namespace so LXD does the best it can which in this case means generating a regular apparmor profile and applying that to the container (as we were doing pre-apparmor namespacing). This is usually mostly fine but it does mean that anything running in web001 which requires the creation and loading of apparmor profiles will just fail. This is the case of snapd.

You could probably trick snapd into thinking that apparmor isn’t used at all on the system, which would then let you run snaps (including lxd) inside that nested container but without any of the confinement that would normally come from apparmor.

Looking at the snapd code, it looks like running the following inside the container before restarting snapd may be enough to have it disable apparmor completely:

mount -t tmpfs tmpfs /sys/kernel/security/apparmor

Tried that within the 2nd level container (web001):

root@org001:~# lxc exec web001 bash
root@web001:~# mount -t tmpfs tmpfs /sys/kernel/security/apparmor
root@web001:~# systemctl restart snapd
root@web001:~# systemctl status snap.lxd.activate.service
● snap.lxd.activate.service - Service for snap application lxd.activate
     Loaded: loaded (/etc/systemd/system/snap.lxd.activate.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2020-06-25 00:11:46 UTC; 1min 14s ago
    Process: 178 ExecStart=/usr/bin/snap run lxd.activate (code=exited, status=1/FAILURE)
   Main PID: 178 (code=exited, status=1/FAILURE)

Jun 25 00:11:43 web001 systemd[1]: Starting Service for snap application lxd.activate...
Jun 25 00:11:46 web001 lxd.activate[320]: cannot change apparmor hat: No child processes
Jun 25 00:11:46 web001 lxd.activate[321]: cannot change profile for the next exec call: Permission denied
Jun 25 00:11:46 web001 lxd.activate[178]: snap-update-ns failed with code 1: No such file or directory
Jun 25 00:11:46 web001 systemd[1]: snap.lxd.activate.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 00:11:46 web001 systemd[1]: snap.lxd.activate.service: Failed with result 'exit-code'.
Jun 25 00:11:46 web001 systemd[1]: Failed to start Service for snap application lxd.activate.
root@web001:~#

Also tried backing out to the 1st level container (org001) and doing the same thing there, then hopping back into 2nd level, same thing.

Double checking services:

root@web001:~# snap services lxd
Service       Startup  Current   Notes
lxd.activate  enabled  inactive  -
lxd.daemon    enabled  inactive  socket-activated

I might be incorrectly running some of that, which would explain why it’s not bypassing apparmor.

I considered setting snapd and lxd unconfined starting at the 2nd level, although it will probably be okay to go ahead with disabling apparmor instead.

Tried disabling apparmor and checking lxd status with a start and it still failed:

root@web001:~# systemctl disable apparmor
Synchronizing state of apparmor.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable apparmor
Removed /etc/systemd/system/sysinit.target.wants/apparmor.service.
root@web001:~# systemctl restart snapd
root@web001:~# systemctl status snapd
● snapd.service - Snap Daemon
     Loaded: loaded (/lib/systemd/system/snapd.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2020-06-25 00:31:08 UTC; 5s ago
TriggeredBy: ● snapd.socket
   Main PID: 1158 (snapd)
      Tasks: 11 (limit: 14276)
     Memory: 13.3M
     CGroup: /system.slice/snapd.service
             └─1158 /usr/lib/snapd/snapd

Jun 25 00:31:08 web001 systemd[1]: Starting Snap Daemon...
Jun 25 00:31:08 web001 snapd[1158]: AppArmor status: apparmor not enabled
Jun 25 00:31:08 web001 snapd[1158]: AppArmor status: apparmor not enabled
Jun 25 00:31:08 web001 snapd[1158]: daemon.go:343: started snapd/2.45.1 (series 16; classic; devmode) ubuntu/20.04 (amd64) linux/5.4.0-37-generic.
Jun 25 00:31:08 web001 snapd[1158]: daemon.go:436: adjusting startup timeout by 45s (pessimistic estimate of 30s plus 5s per snap)
Jun 25 00:31:08 web001 systemd[1]: Started Snap Daemon.
root@web001:~# systemctl status snap.lxd.activate.service
● snap.lxd.activate.service - Service for snap application lxd.activate
     Loaded: loaded (/etc/systemd/system/snap.lxd.activate.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2020-06-25 00:29:07 UTC; 2min 18s ago
   Main PID: 886 (code=exited, status=1/FAILURE)

Jun 25 00:29:07 web001 systemd[1]: Starting Service for snap application lxd.activate...
Jun 25 00:29:07 web001 lxd.activate[901]: cannot change apparmor hat: No child processes
Jun 25 00:29:07 web001 lxd.activate[902]: cannot change profile for the next exec call: Permission denied
Jun 25 00:29:07 web001 lxd.activate[886]: snap-update-ns failed with code 1: File exists
Jun 25 00:29:07 web001 systemd[1]: snap.lxd.activate.service: Main process exited, code=exited, status=1/FAILURE
Jun 25 00:29:07 web001 systemd[1]: snap.lxd.activate.service: Failed with result 'exit-code'.
Jun 25 00:29:07 web001 systemd[1]: Failed to start Service for snap application lxd.activate.
root@web001:~# systemctl start snap.lxd.activate.service
Job for snap.lxd.activate.service failed because the control process exited with error code.
See "systemctl status snap.lxd.activate.service" and "journalctl -xe" for details.

Also tried that on the 1st level container to see if it would help the 2nd level container; same result on the 2nd level container. I realized that I had only disabled AppArmor, so to make sure it was stopped I went back in, ran systemctl stop apparmor, and double-checked the status at each level.

root@web001:~# systemctl start snap.lxd.activate.service
Job for snap.lxd.activate.service failed because the control process exited with error code.
See "systemctl status snap.lxd.activate.service" and "journalctl -xe" for details.

Logs are showing the same thing as mentioned before. I’m probably just doing something incorrectly. I can try messing directly with AppArmor to unconfine snapd and lxd, although I don’t know if that will make a difference, or whether it would be required at the 1st container level instead of, or in addition to, the 2nd level.

Apparently masking all of /sys/kernel/security/ with mount -t tmpfs tmpfs /sys/kernel/security/ works.

in web001 you should be able to get things working with:

  • mount -t tmpfs tmpfs /sys/kernel/security/
  • systemctl restart snapd
  • snap install lxd
  • lxc profile set default security.privileged true
  • lxc profile set default raw.lxc lxc.apparmor.profile=unchanged

That last one is needed as LXC also cannot access the profiles after everything got masked under /sys/kernel/security, so telling it to not change anything will have it behave as wanted.
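For reference, after those two lxc profile set commands the relevant part of web001’s default profile should end up roughly as follows (a sketch of the expected result, not captured output):

```yaml
config:
  security.privileged: "true"
  raw.lxc: lxc.apparmor.profile=unchanged
```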

Thanks again for the input. I reset to try again with those changes.

Before trying them, I attempted to set the image to public during publish, although it did not work.

Used this page as reference:
https://ubuntu.com/server/docs/containers-lxd

Example from that page:
lxc publish u1/YYYY-MM-DD --alias foo-2.0 public=true

Using public=true with publish resulted in the following:

auto_update: false
properties:
  public: "true"
public: false
expires_at: 0001-01-01T00:00:00Z
profiles:
- default

That required an lxc image edit to remove the properties: block and set the top-level public: true.

Getting back to the 2nd level container, the changes you provided were successful.

While inside the 1st level container, instead of including the lxd installation during the image creation, I only included git and snapd, launched the 2nd level container, then ran the steps, installing lxd separately.

I am guessing I could probably still leave lxd installed on the image and it might still work on the 2nd level container as long as I run the mount, restart snapd, and lxc profile set changes.

The raw.lxc change makes sense as what I am guessing is an AppArmor override.

I wish I did not have to set security.privileged to true. I expanded subgid & subuid to prepare each level of containers.

Getting closer! :slight_smile: Next up:

  • Add 3rd level containers: 3 environments within web001 (basically, dev through prod)
  • Add 4th level containers: the actual applications (and final layer of containers)

It will be interesting to see how all of this handles with that many levels of nested containers. I may separate the environments, which would also partially flatten the nesting levels, although curious to try this out.

You may not need security.privileged=true, in my case I needed to since it was a quick hack and I didn’t set large maps for every layer :slight_smile:

Making progress! :sunglasses: Thank you! I was able to launch nested containers with large maps without having to set security privileged true.

At the moment, I am creating an image from the 1st level container that includes git and snapd, then using that with the 2nd and 3rd level containers. Attempted to include lxd in that image; it ran into issues again, but nothing major. I was able to successfully build everything out up to the 4th nested level by manually installing lxd at each level.

Also currently retrieving preseed yaml from github private repo, requiring credentials in the middle of the setup process, which is not the smoothest workflow. I might change that to temporarily use a basic script from the repo so the credentials can be used in the beginning, then let the script do the rest.

I am looking into options to set it up to allow pushing directly from git merges to a deployment tool.

I am curious if it is possible to include lxd in an image when launching multiple levels of nested containers. The extra steps with mount, snapd restart, lxd install, lxc profile set at each new level still accomplish the intended end result.

Also curious if it’s possible to set an image to public at the time it is published, tried public=true but that didn’t work.

lxc publish --public should do the trick.

Of course, makes sense. :smile: Thanks again for the help!

Each time the deeper levels of nested containers are rebooted, it requires running:
mount -t tmpfs tmpfs /sys/kernel/security/

Although it does not appear to require restarting snapd each time.

It may be easier to insert that mount into /etc/fstab, although on an initial glance, it is only showing:

# UNCONFIGURED FSTAB FOR BASE SYSTEM

The only concern with adding the mount there is whether it will be overwritten at next boot.

There’s the option of appending /etc/rc.local (I’ve used that in RHEL, it’d probably be somewhere similar to that in Ubuntu) with a line that has the mount command, although that has been seen as an “unclean” way to mount something.

/etc/fstab should be perfectly reasonable for this
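Assuming the same mount as run manually above, the /etc/fstab entry would be something like:

```
# mask /sys/kernel/security so snapd treats AppArmor as unavailable
tmpfs  /sys/kernel/security  tmpfs  defaults  0  0
```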
