Container refuses to start with another profile

Can you run lxc monitor --type=logging --pretty in a separate terminal while running lxc start again, then post the output from that monitor command?
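That is, something like this in two terminals (container name taken from the logs below):

# terminal 1: stream the daemon log
lxc monitor --type=logging --pretty

# terminal 2: reproduce the failure
lxc start mycontainer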

@stgraber Thanks, sure. Please find the logs from the above command:

DBUG[12-03|15:15:39] New event listener: 91b18509-0790-402c-821e-a1f81ecb4a93 
DBUG[12-03|15:16:04] Handling                                 ip=@ method=GET protocol=unix url=/1.0 username=afridi
DBUG[12-03|15:16:04] Handling                                 protocol=unix url=/1.0/profiles/x11 username=afridi ip=@ method=GET
DBUG[12-03|15:16:06] Handling                                 ip=@ method=PUT protocol=unix url=/1.0/profiles/x11 username=afridi
DBUG[12-03|15:16:14] Handling                                 username=afridi ip=@ method=GET protocol=unix url=/1.0
DBUG[12-03|15:16:14] Handling                                 ip=@ method=GET protocol=unix url=/1.0/events username=afridi
DBUG[12-03|15:16:14] New event listener: c9431c5f-5dcb-4ae2-a99c-62cba251f16a 
DBUG[12-03|15:16:14] Handling                                 ip=@ method=POST protocol=unix url=/1.0/instances username=afridi
DBUG[12-03|15:16:14] Connecting to a remote simplestreams server 
DBUG[12-03|15:16:14] Responding to instance create 
DBUG[12-03|15:16:14] New task Operation: a255ef42-0984-4e4f-a78c-5d23aa87af60 
DBUG[12-03|15:16:14] Started task operation: a255ef42-0984-4e4f-a78c-5d23aa87af60 
DBUG[12-03|15:16:14] Connecting to a remote simplestreams server 
DBUG[12-03|15:16:14] Handling                                 ip=@ method=GET protocol=unix url=/1.0/operations/a255ef42-0984-4e4f-a78c-5d23aa87af60 username=afridi
DBUG[12-03|15:16:14] Image already exists in the DB           fingerprint=f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989
INFO[12-03|15:16:14] Creating container                       ephemeral=false name=mycontainer project=default
DBUG[12-03|15:16:14] FillInstanceConfig started               driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:14] FillInstanceConfig finished              driver=zfs instance=mycontainer pool=default project=default
INFO[12-03|15:16:14] Created container                        ephemeral=false name=mycontainer project=default
DBUG[12-03|15:16:14] CreateInstanceFromImage started          driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:14] EnsureImage started                      driver=zfs fingerprint=f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989 pool=default
DBUG[12-03|15:16:14] Checking image volume size               driver=zfs fingerprint=f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989 pool=default
DBUG[12-03|15:16:14] Setting image volume size                size= driver=zfs fingerprint=f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989 pool=default
DBUG[12-03|15:16:14] EnsureImage finished                     driver=zfs fingerprint=f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989 pool=default
DBUG[12-03|15:16:14] Checking volume size                     driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:14] Mounted ZFS dataset                      dev=default/containers/mycontainer driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default
DBUG[12-03|15:16:14] Unmounted ZFS dataset                    dev=default/containers/mycontainer driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default
DBUG[12-03|15:16:14] CreateInstanceFromImage finished         driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:14] UpdateInstanceBackupFile started         project=default driver=zfs instance=mycontainer pool=default
DBUG[12-03|15:16:15] Mounted ZFS dataset                      driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default dev=default/containers/mycontainer
DBUG[12-03|15:16:15] UpdateInstanceBackupFile finished        project=default driver=zfs instance=mycontainer pool=default
DBUG[12-03|15:16:15] Success for task operation: a255ef42-0984-4e4f-a78c-5d23aa87af60 
DBUG[12-03|15:16:15] Unmounted ZFS dataset                    dev=default/containers/mycontainer driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default
DBUG[12-03|15:16:15] Handling                                 url=/1.0/instances/mycontainer username=afridi ip=@ method=GET protocol=unix
DBUG[12-03|15:16:15] Handling                                 ip=@ method=PUT protocol=unix url=/1.0/instances/mycontainer/state username=afridi
DBUG[12-03|15:16:15] New task Operation: 29861b8e-09a0-4648-84db-22fcc65a729e 
DBUG[12-03|15:16:15] Started task operation: 29861b8e-09a0-4648-84db-22fcc65a729e 
DBUG[12-03|15:16:15] Handling                                 ip=@ method=GET protocol=unix url=/1.0/operations/29861b8e-09a0-4648-84db-22fcc65a729e username=afridi
DBUG[12-03|15:16:15] MountInstance started                    driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:15] Container idmap changed, remapping 
DBUG[12-03|15:16:15] Updated metadata for task Operation: 29861b8e-09a0-4648-84db-22fcc65a729e 
DBUG[12-03|15:16:15] MountInstance finished                   driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:15] Mounted ZFS dataset                      path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default dev=default/containers/mycontainer driver=zfs
DBUG[12-03|15:16:18] Updated metadata for task Operation: 29861b8e-09a0-4648-84db-22fcc65a729e 
DBUG[12-03|15:16:18] Starting device                          device=eth0 instance=mycontainer project=default type=nic
DBUG[12-03|15:16:18] Scheduler: network: veth9ec462d2 has been added: updating network priorities 
DBUG[12-03|15:16:18] Scheduler: network: veth7427ee22 has been added: updating network priorities 
DBUG[12-03|15:16:18] Starting device                          device=root instance=mycontainer project=default type=disk
DBUG[12-03|15:16:18] Starting device                          project=default type=proxy device=PASocket1 instance=mycontainer
DBUG[12-03|15:16:18] Starting device                          instance=mycontainer project=default type=proxy device=X0
DBUG[12-03|15:16:18] Starting device                          instance=mycontainer project=default type=gpu device=mygpu
DBUG[12-03|15:16:18] UpdateInstanceBackupFile started         driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:18] Skipping unmount as in use               driver=zfs pool=default refCount=1
DBUG[12-03|15:16:18] UpdateInstanceBackupFile finished        driver=zfs instance=mycontainer pool=default project=default
INFO[12-03|15:16:18] Starting container                       project=default stateful=false used="1970-01-01 01:00:00 +0100 CET" action=start created="2020-12-03 15:16:14.805515395 +0100 CET" ephemeral=false name=mycontainer
DBUG[12-03|15:16:18] Handling                                 url="/internal/containers/mycontainer/onstart?project=default" username=root ip=@ method=GET protocol=unix
DBUG[12-03|15:16:18] Scheduler: container mycontainer started: re-balancing 
DBUG[12-03|15:16:18] Handling                                 protocol=unix url="/internal/containers/mycontainer/onstopns?netns=%2Fproc%2F22349%2Ffd%2F4&project=default&target=stop" username=root ip=@ method=GET
DBUG[12-03|15:16:18] Stopping device                          device=eth0 instance=mycontainer project=default type=nic
DBUG[12-03|15:16:18] Clearing instance firewall static filters ipv4=0.0.0.0 ipv6=:: parent=lxdbr0 project=default dev=eth0 host_name=veth7427ee22 hwaddr=00:16:3e:31:7b:ce instance=mycontainer
DBUG[12-03|15:16:18] Clearing instance firewall dynamic filters hwaddr=00:16:3e:31:7b:ce instance=mycontainer ipv4=<nil> ipv6=<nil> parent=lxdbr0 project=default dev=eth0 host_name=veth7427ee22
DBUG[12-03|15:16:18] Failure for task operation: 29861b8e-09a0-4648-84db-22fcc65a729e: Failed to run: /snap/lxd/current/bin/lxd forkstart mycontainer /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/mycontainer/lxc.conf:  
EROR[12-03|15:16:18] Failed starting container                action=start created="2020-12-03 15:16:14.805515395 +0100 CET" ephemeral=false name=mycontainer project=default stateful=false used="1970-01-01 01:00:00 +0100 CET"
DBUG[12-03|15:16:18] Event listener finished: c9431c5f-5dcb-4ae2-a99c-62cba251f16a 
DBUG[12-03|15:16:18] Disconnected event listener: c9431c5f-5dcb-4ae2-a99c-62cba251f16a 
DBUG[12-03|15:16:19] Handling                                 ip=@ method=GET protocol=unix url="/internal/containers/mycontainer/onstop?project=default&target=stop" username=root
INFO[12-03|15:16:19] Container initiated stop                 stateful=false used="2020-12-03 15:16:18.617314147 +0100 CET" action=stop created="2020-12-03 15:16:14.805515395 +0100 CET" ephemeral=false name=mycontainer project=default
DBUG[12-03|15:16:19] Container stopped, starting storage cleanup container=mycontainer
DBUG[12-03|15:16:19] Stopping device                          type=gpu device=mygpu instance=mycontainer project=default
DBUG[12-03|15:16:19] Stopping device                          device=X0 instance=mycontainer project=default type=proxy
DBUG[12-03|15:16:19] Stopping device                          device=PASocket1 instance=mycontainer project=default type=proxy
DBUG[12-03|15:16:19] Stopping device                          instance=mycontainer project=default type=disk device=root
DBUG[12-03|15:16:19] UnmountInstance started                  driver=zfs instance=mycontainer pool=default project=default
DBUG[12-03|15:16:19] Unmounted ZFS dataset                    dev=default/containers/mycontainer driver=zfs path=/var/snap/lxd/common/lxd/storage-pools/default/containers/mycontainer pool=default
DBUG[12-03|15:16:19] UnmountInstance finished                 driver=zfs instance=mycontainer pool=default project=default
INFO[12-03|15:16:20] Shut down container                      project=default stateful=false used="2020-12-03 15:16:18.617314147 +0100 CET" action=stop created="2020-12-03 15:16:14.805515395 +0100 CET" ephemeral=false name=mycontainer
DBUG[12-03|15:16:20] Scheduler: container mycontainer stopped: re-balancing

Can you try setting nvidia.runtime to false and see if that unblocks it?

The behavior seems consistent with nvidia-container failing.
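A minimal sketch of that change (assuming the key lives on the x11 profile; use the second form if it is set directly on the instance):

lxc profile set x11 nvidia.runtime false
# or, if the key is set on the instance itself:
lxc config set mycontainer nvidia.runtime false
lxc start mycontainer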

I did as instructed but am now getting the following error, @stgraber:

Error: Error occurred when starting proxy device: Error: Failed to listen on /home/ubuntu/pulse-native: listen unix /home/ubuntu/pulse-native: bind: no such file or directory
Try `lxc info --show-log local:mycontainer` for more info
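For context, the failing device is the PulseAudio proxy from the x11 profile, which looks roughly like this (a sketch based on the usual x11 profile guide; uid/gid and paths may differ in your profile):

PASocket1:
  bind: container
  connect: unix:/run/user/1000/pulse/native
  listen: unix:/home/ubuntu/pulse-native
  security.gid: "1000"
  security.uid: "1000"
  uid: "1000"
  gid: "1000"
  mode: "0777"
  type: proxy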

Here is the log:

Name: mycontainer
Location: none
Remote: unix://
Architecture: x86_64
Created: 2020/12/03 16:15 UTC
Status: Stopped
Type: container
Profiles: default, x11

Log:

lxc mycontainer 20201203161542.554 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.monitor.mycontainer"
lxc mycontainer 20201203161542.555 WARN     cgfsng - cgroups/cgfsng.c:mkdir_eexist_on_last:1152 - File exists - Failed to create directory "/sys/fs/cgroup/cpuset//lxc.payload.mycontainer"
lxc mycontainer 20201203161542.560 WARN     cgfsng - cgroups/cgfsng.c:fchowmodat:1573 - No such file or directory - Failed to fchownat(17, memory.oom.group, 1000000000, 0, AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW )

Does /home/ubuntu exist inside the container?

Hi @tomp, the container is stopped.

I also tried the solution mentioned here by @duckhook.

OK, can you remove the proxy device too then, please?
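For example (assuming the device is the PASocket1 proxy defined on the x11 profile):

lxc profile device remove x11 PASocket1
# or, if it was added directly to the instance:
lxc config device remove mycontainer PASocket1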

@tomp Thanks, I did as you instructed and now the container is running! Can you briefly explain what the reason for that error was?

In short, what I want to achieve is to set up an NVIDIA development environment along with ROS and use it for development.

Thanks for your valuable time :slight_smile:

Does the /home/ubuntu directory exist in your container?

Yes, it exists

Can you show the output of ls -la /home/ubuntu please?

Here it is:

drwxr-xr-x 5 ubuntu ubuntu   10 Dec  3 17:08 .
drwxr-xr-x 3 root   root      3 Dec  3 17:01 ..
-rw------- 1 ubuntu ubuntu  389 Dec  3 17:11 .bash_history
-rw-r--r-- 1 ubuntu ubuntu  220 Apr  4  2018 .bash_logout
-rw-r--r-- 1 ubuntu ubuntu 3771 Apr  4  2018 .bashrc
drwx------ 2 ubuntu ubuntu    2 Dec  3 17:08 .cache
drwx------ 3 ubuntu ubuntu    3 Dec  3 17:08 .mozilla
-rw-r--r-- 1 ubuntu ubuntu  807 Apr  4  2018 .profile
drwx------ 2 ubuntu ubuntu    3 Dec  3 17:01 .ssh
-rw-r--r-- 1 ubuntu ubuntu    0 Dec  3 17:01 .sudo_as_admin_successful

I just installed Firefox to test X11 forwarding.

Can you show the output of lxc config show <instance> --expanded please?

Please find the output:

architecture: x86_64
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  image.architecture: amd64
  image.description: ubuntu 18.04 LTS amd64 (release) (20201125)
  image.label: release
  image.os: ubuntu
  image.release: bionic
  image.serial: "20201125"
  image.type: squashfs
  image.version: "18.04"
  nvidia.driver.capabilities: all
  nvidia.runtime: "false"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
  volatile.base_image: f42eab18aa248c4caeebc4dd053c8fccd5830589dff4365612b96931275a2989
  volatile.eth0.host_name: veth9a284d58
  volatile.eth0.hwaddr: 00:16:3e:ef:89:a6
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 47dced0a-7e63-41c5-8801-deaecb830a2b
devices:
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X1
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  mygpu:
    type: gpu
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
- x11
stateful: false
description: ""

And what is your host OS?

It's:

Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic

I just tried the same proxy config on my Ubuntu 20.04 system and it worked fine; it created the unix socket inside the container as expected.

Does /run/user/1000/pulse/native exist on your host?
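You can check with:

ls -la /run/user/1000/pulse/native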

No, the following entries are there:

tmpfs           1,5G   24K  1,5G   1% /run/user/121
tmpfs           1,5G   72K  1,5G   1% /run/user/1000

So that will likely be the issue: you don't have a host-side PulseAudio listener. On my system I see:

srw-rw-rw- 1 user user 0 Dec  3 08:56 /run/user/1000/pulse/native
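If PulseAudio is installed but not running for your user, starting it should create that socket. A rough sketch (how PulseAudio is launched varies by distribution):

pulseaudio --start   # spawn a per-user PulseAudio daemon if none is running
pactl info           # confirm the daemon is reachable
ls -la /run/user/1000/pulse/native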

Okay, I will try to run it on another host and check these configurations.

Thank you for your time :slight_smile:

But I might disturb you once again.

Thanks