Installing Chromium in Ubuntu container fails - /dev/pts: Permission denied

I have an LXC container (managed with Incus) that I’m using for JS development (node, npm, angular, etc.). I want to install Chromium (or Chrome) into this container and run it in headless mode for web application testing.

Since I’m running Ubuntu inside the container, I’ve tried installing the apt package chromium-browser, which merely pulls in the chromium snap. I’ve also tried installing the snap directly. Either way, I ultimately get the same error message. Here are the commands that I’ve tried, and the resulting message:

$ sudo snap install chromium
[...]
$ sudo apt install chromium-browser
[...]
- Run configure hook of "chromium" snap if present (run hook "configure": cannot perform operation: mount -t devpts --make-slave --make-private -o acl,relatime,kernmount,iversion,active,nouser,0xffffffff00000000 devpts /dev/pts: Permission denied)

Someone else brought up a similar topic about 4 years ago: Installing Chromium in container fails - sudo snap install chromium. But the root cause back then (lzo compression) seemed quite different.

Searching the web for the error message doesn’t bring up anything useful, so I thought I’d first ask here. I might also try over at snapcraft, and find out what the chromium snap is trying to do with /dev/pts here.
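For reference, snapd itself can show the failing step in more detail (these are standard snapd commands; <change-id> below is a placeholder for whatever ID snap changes reports for the failed install):

$ snap changes            # list recent changes and their status
$ snap tasks <change-id>  # per-task breakdown, including the configure hook’s error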

That said, the chromium snap works like a charm when I run it directly on my laptop (i.e. my Incus host) or in an Incus-managed VM, both also running Ubuntu 24.10 (oracular). In contrast, I’ve tried installing the chromium snap in several other Incus-managed LXC containers, and I ran into the same error for all of them. So could it have to do with the way that LXC manages /dev/pts?
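One crude way to test that theory might be to create a fresh devpts instance by hand inside the container and see whether that fails the same way (a throwaway mount point; newinstance and ptmxmode are standard devpts mount options):

$ sudo mkdir -p /tmp/pts-test
$ sudo mount -t devpts -o newinstance,ptmxmode=0666 devpts /tmp/pts-test
$ sudo umount /tmp/pts-test && sudo rmdir /tmp/pts-test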

Here are some technical details about my host and the container:

$ incus version 
Client version: 6.0.1
Server version: 6.0.1

$ incus config show container-js-dev
architecture: x86_64
config:
  boot.autostart: "true"
  cloud-init.user-data: |
    [**** omitted ****]
  image.architecture: amd64
  image.description: Ubuntu oracular amd64 (20241124_07:42)
  image.os: Ubuntu
  image.release: oracular
  image.serial: "20241124_07:42"
  image.type: squashfs
  image.variant: cloud
  limits.cpu: "2"
  limits.memory: 2GiB
  raw.idmap: both 40001 40001
  volatile.base_image: 64b3c49eae9b7874467871efbbe6541fbf9a8801925a0ec15f1370794ec16011
  volatile.bridge-nic.host_name: vethec4ba243
  volatile.bridge-nic.hwaddr: 00:16:3e:06:c8:3b
  volatile.bridge-nic.name: eth0
  volatile.cloud-init.instance-id: 83fbbb47-b286-4426-b934-2feba2505f83
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":40001},{"Isuid":true,"Isgid":true,"Hostid":40001,"Nsid":40001,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1040002,"Nsid":40002,"Maprange":999959998},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":40001},{"Isuid":false,"Isgid":true,"Hostid":1040002,"Nsid":40002,"Maprange":999959998}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":40001},{"Isuid":true,"Isgid":true,"Hostid":40001,"Nsid":40001,"Maprange":1},{"Isuid":true,"Isgid":false,"Hostid":1040002,"Nsid":40002,"Maprange":999959998},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":40001},{"Isuid":false,"Isgid":true,"Hostid":1040002,"Nsid":40002,"Maprange":999959998}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
  volatile.last_state.ready: "false"
  volatile.uuid: 1d75ac8e-4830-4012-8d8e-851962c72df7
  volatile.uuid.generation: 1d75ac8e-4830-4012-8d8e-851962c72df7
devices:
  bridge-nic:
    ipv4.address: 192.168.66.34
    network: incusbr0
    type: nic
  disk-0:
    path: /mnt/meeque/dev/web/xss-demo-app
    shift: "false"
    source: /home/michael/Programmierung/web/xss-demo-app
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""

$ incus project show user-40001
config:
  features.images: "true"
  features.networks: "false"
  features.networks.zones: "true"
  features.profiles: "true"
  features.storage.buckets: "true"
  features.storage.volumes: "false"
  restricted: "true"
  restricted.containers.nesting: block
  restricted.devices.disk: allow
  restricted.devices.disk.paths: /home/michael/,/tmp/.X11-unix/
  restricted.devices.gpu: allow
  restricted.devices.nic: allow
  restricted.devices.proxy: allow
  restricted.idmap.gid: "40001"
  restricted.idmap.uid: "40001"
  restricted.networks.access: incusbr0,incusbr-40001
  user.mq_config_hash: 3997d7e1f0e0f314feec84c2dccdec9c48c42b57
description: ""
name: user-40001
used_by:
- [**** omitted ****]
- /1.0/instances/container-js-dev?project=user-40001
- [**** omitted ****]
- /1.0/profiles/default?project=user-40001

FYI, 40001 (not 1000) is my uid/gid (for user michael), both on the host and inside the container.

Here’s additional info from inside the container:

$ ls -la /dev/pts/
total 0
drwxr-xr-x 2 root    root      0 Jan 20 21:23 .
drwxr-xr-x 8 root    root    500 Jan 20 21:23 ..
crw--w---- 1 root    tty  136, 0 Jan 20 21:23 0
crw--w---- 1 michael tty  136, 2 Jan 26 22:01 2
crw-rw-rw- 1 root    root   5, 2 Jan 26 22:01 ptmx

$ mount | grep /dev/pts
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=1000005,mode=620,ptmxmode=666,max=1024)

That does not look suspicious to me. Any idea how to fix this or investigate further?

I know I should try with the latest Incus version instead, but I’d need some time to set that up. I can also try a different approach to installing Chromium/Chrome in the container… but I’d still like to know why I’m running into this error.
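For instance, one snap-free alternative would be the upstream Google Chrome .deb (the URL below is Google’s long-standing direct download link; I haven’t tried this in the container yet):

$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
$ sudo apt install ./google-chrome-stable_current_amd64.deb  # apt resolves the .deb’s dependencies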

You don’t need the latest, but the LTS branch is currently on 6.0.3. I suggest you start with that, in case this is an issue that’s already been found and fixed.


Sorry for leaving this stale for so long. I haven’t made much progress on the original problem, but I did find a workaround a while back:

I’m now using the chromium package from ppa:xtradeb/apps.

This seems to be a native .deb package that does not pull in any snap. In any case, Chromium installs and runs just fine in my container. (As mentioned, I’m only using it in headless mode, though.)
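For anyone else landing here, the steps were roughly the following (the package in that PPA is named chromium, if I remember correctly; double-check before relying on this):

$ sudo add-apt-repository ppa:xtradeb/apps
$ sudo apt update
$ sudo apt install chromium
$ chromium --headless --disable-gpu --dump-dom https://example.com  # quick headless smoke test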

Indeed, the package chromium-browser on recent versions of Ubuntu is a snap package in disguise.
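You can see this from the package metadata alone (standard apt tooling; the description should identify it as a transitional package that installs the snap):

$ apt show chromium-browser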

If you are using Puppeteer or Playwright, you will notice that they pull in a tar.gz package of Chromium. These are packages of specific versions that have been tested. I suggest visiting Chrome for Testing: reliable downloads for browser automation | Blog | Chrome for Developers and getting the appropriate package from there.
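For example (assuming node/npm are already set up in the container, as in the original post), Puppeteer’s browser-fetching CLI can download one of those tested builds for you:

$ npx @puppeteer/browsers install chrome@stable  # prints the path of the downloaded binary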


Getting back to the original problem:

I’ve re-tested with Incus 6.0.3 on the host. It looks even weirder now: I don’t even get to the original pts problem anymore. Instead, apt fails to set up snapd:

$ sudo apt install chromium-browser
Installing:
  chromium-browser

Installing dependencies:
  apparmor  liblzo2-2  snapd  squashfs-tools

Suggested packages:
  apparmor-profiles-extra  apparmor-utils  zenity  | kdialog

Summary:
  Upgrading: 0, Installing: 5, Removing: 0, Not Upgrading: 4
  Download size: 32.8 MB
  Space needed: 125 MB / 940 GB available

Continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu oracular/main amd64 apparmor amd64 4.1.0~beta1-0ubuntu3 [663 kB]
Get:2 http://archive.ubuntu.com/ubuntu oracular/main amd64 liblzo2-2 amd64 2.10-3 [54.2 kB]
Get:3 http://archive.ubuntu.com/ubuntu oracular/main amd64 squashfs-tools amd64 1:4.6.1-1build1 [189 kB]
Get:4 http://archive.ubuntu.com/ubuntu oracular-updates/main amd64 snapd amd64 2.67.1+24.10 [31.9 MB]
Get:5 http://archive.ubuntu.com/ubuntu oracular/universe amd64 chromium-browser amd64 2:1snap1-0ubuntu2 [50.0 kB]
Fetched 32.8 MB in 60s (550 kB/s)
Preconfiguring packages ...
Selecting previously unselected package apparmor.
(Reading database ... 27543 files and directories currently installed.)
Preparing to unpack .../apparmor_4.1.0~beta1-0ubuntu3_amd64.deb ...
Unpacking apparmor (4.1.0~beta1-0ubuntu3) ...
Selecting previously unselected package liblzo2-2:amd64.
Preparing to unpack .../liblzo2-2_2.10-3_amd64.deb ...
Unpacking liblzo2-2:amd64 (2.10-3) ...
Selecting previously unselected package squashfs-tools.
Preparing to unpack .../squashfs-tools_1%3a4.6.1-1build1_amd64.deb ...
Unpacking squashfs-tools (1:4.6.1-1build1) ...
Selecting previously unselected package snapd.
Preparing to unpack .../snapd_2.67.1+24.10_amd64.deb ...
Unpacking snapd (2.67.1+24.10) ...
Setting up apparmor (4.1.0~beta1-0ubuntu3) ...
Created symlink '/etc/systemd/system/sysinit.target.wants/apparmor.service' → '/usr/lib/systemd/system/apparmor.service'.
Reloading AppArmor profiles 
Setting up liblzo2-2:amd64 (2.10-3) ...
Setting up squashfs-tools (1:4.6.1-1build1) ...
Setting up snapd (2.67.1+24.10) ...
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.apparmor.service' → '/usr/lib/systemd/system/snapd.apparmor.service'.
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.autoimport.service' → '/usr/lib/systemd/system/snapd.autoimport.service'.
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.core-fixup.service' → '/usr/lib/systemd/system/snapd.core-fixup.service'.
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.recovery-chooser-trigger.service' → '/usr/lib/systemd/system/snapd.recovery-chooser-trigger.service'.
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.seeded.service' → '/usr/lib/systemd/system/snapd.seeded.service'.
Created symlink '/etc/systemd/system/cloud-final.service.wants/snapd.seeded.service' → '/usr/lib/systemd/system/snapd.seeded.service'.
Created symlink '/etc/systemd/system/multi-user.target.wants/snapd.service' → '/usr/lib/systemd/system/snapd.service'.
Created symlink '/etc/systemd/system/timers.target.wants/snapd.snap-repair.timer' → '/usr/lib/systemd/system/snapd.snap-repair.timer'.
Created symlink '/etc/systemd/system/sockets.target.wants/snapd.socket' → '/usr/lib/systemd/system/snapd.socket'.
Created symlink '/etc/systemd/system/final.target.wants/snapd.system-shutdown.service' → '/usr/lib/systemd/system/snapd.system-shutdown.service'.

Progress: [ 71%]

It does not even fail properly. It just hangs there at 71%, repeatedly, for hours if I let it.

Also, I cannot kill the apt process from inside the container. Not even as root. However, I can get rid of it by stopping the whole container from the Incus host.

I guess it has something to do with apparmor (which also gets pulled in as a dependency). Is this a general constraint? Are AppArmor and snapd supposed to work in user-confined Incus projects? I could not find clues on this here or in the docs.
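In case it helps with diagnosis, here is what I plan to check next (standard systemd and kernel-log tooling, nothing Incus-specific):

# inside the container: which systemd job is the snapd postinst waiting on?
$ systemctl list-jobs
$ journalctl -u snapd.service -u snapd.seeded.service --no-pager | tail -n 50

# on the Incus host: any AppArmor denials around the time of the hang?
$ sudo dmesg | grep -i apparmor | grep -i denied | tail -n 20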

If this is supposed to work in general, what am I doing wrong? Except for the newer Incus version, my setup hasn’t changed much since the initial post. Oh, and I’ve upgraded the host to Ubuntu 25.04 (plucky). Guest is still Ubuntu 24.10 (oracular).

Could the new behavior be due to changes of Ubuntu-specific apparmor configuration on the Incus host?

You’d be better off sticking to the LTS versions of Ubuntu. The latest is 24.04; the next will be 26.04 (in April 2026). The intermediate versions are dev only, and they are supported for just 9 months after release. You are supposed to upgrade, or you are stuck.

There are a few reports of AppArmor issues with 25.04.


Thanks for the feedback. And sorry to hear about these AppArmor issues.

I used to stick to Ubuntu LTS for Desktop in the past. But last fall, I felt adventurous and tried the interims.

Hadn’t considered interim releases “dev only”. Ubuntu release cycle docs say “These are production-quality releases”. But they do feel slightly more glitchy. Guess I’ll stick to LTS next time…

I do not know what they mean by production-quality. These are interim releases to test things that will eventually lead to the LTS version. As a solely desktop user you may use those interim versions, but not for production.

Every six months between LTS versions, Canonical publishes an interim release of Ubuntu, with 25.04 being the latest example. These are production-quality releases and are supported for 9 months, with sufficient time provided for users to update, but these releases do not receive the long-term commitment of LTS releases.
