Just curious if anyone has gotten Incus 0.2-rc1 to work on Gentoo. I’ve yet to get it to compile, as it seems to depend on cowsql, which isn’t building.
Was that using the ebuild?
Hey,
please open a new bug at https://bugs.gentoo.org/ with emerge --info and your build.log file attached. Or post them here - but bugs.gentoo.org gives us Gentoo maintainers a better chance of noticing it.
I can confirm that cowsql and incus build and work on Gentoo.
Yes, from ebuilds. Incus 0.2-rc1 was masked and, I think, requires testing versions of its dependencies, so it’s not a simple emerge to get it loaded. And probably not a problem for the maintainers. I’m sure I just need to figure out the correct packages. Gotta love Gentoo.
Thanks for the info.
So, I did get it to build using test packages for the dependencies cowsql and raft, but when running …
$ incus admin init
got …
$ sudo incus admin init
Would you like to use clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (dir, lvm) [default=dir]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=incusbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:
Error: Failed to create local member network "incusbr0" in project "default": Failed to setup firewall: Failed to run: iptables -w -t nat -I POSTROUTING -s 10.60.239.0/24 ! -d 10.60.239.0/24 -j MASQUERADE -m comment --comment generated for Incus network incusbr0: exit status 1 (Warning: Extension comment revision 0 not supported, missing kernel module?
iptables: No chain/target/match by that name.)
Looks like I’m running into some sort of iptables issue, and I’m getting 403s trying to reach the Incus documentation.
https://linuxcontainers.org/incus/docs/main/
Got that part fixed at least.
The error you’re getting seems to suggest that your kernel doesn’t have the module necessary to attach comments to firewall rules. I believe that is CONFIG_NETFILTER_XT_MATCH_COMMENT.
Apparently that’s no longer an option as I look through a 6.1.19 kernel.
root@castiana:~# grep -i comment /boot/config-6.6.1-zabbly+
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
It’s still a thing in 6.6.1
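For anyone else hitting this, a quick way to check is to grep your kernel config for the option. Here’s a small sketch of a helper (the real config path varies per system - /proc/config.gz, /boot/config-$(uname -r), or /usr/src/linux/.config on Gentoo - so the demo below runs against a temp file standing in for it):

```shell
# Sketch: report whether a kernel config option is builtin (=y), a module
# (=m), or absent. Demoed on a temp file standing in for the real kernel
# config, e.g. /usr/src/linux/.config on Gentoo.
check_opt() {  # usage: check_opt CONFIG_FILE CONFIG_OPTION
  val=$(grep -E "^$2=" "$1" | cut -d= -f2)
  case "$val" in
    y) echo "$2: builtin" ;;
    m) echo "$2: module (modprobe it if not loaded)" ;;
    *) echo "$2: not enabled" ;;
  esac
}

cfg=$(mktemp)
printf 'CONFIG_NETFILTER_XT_MATCH_COMMENT=m\n' > "$cfg"
check_opt "$cfg" CONFIG_NETFILTER_XT_MATCH_COMMENT
# → CONFIG_NETFILTER_XT_MATCH_COMMENT: module (modprobe it if not loaded)
```

On a real system you’d point it at the actual config, and if the option is built as a module, `sudo modprobe xt_comment` should make the iptables comment extension available without a reboot.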
I’m not having much luck with Incus on Gentoo and can’t provide much feedback off of the stale ebuilds, so I’ve cloned the source, but I have no experience with Go. What does it take to build the tree after cloning?
Basically, make sure you have a recent version of Go installed, make sure you have the various dependencies installed, then run `make deps` and `make` to get things built.
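If you’ve never touched Go, the version of the toolchain is the usual first stumbling block. A small sketch for checking it up front (the 1.20 floor here is an assumption - the authoritative minimum is in the go.mod file of the Incus tree):

```shell
# Sketch: parse `go version` output and check it against a minimum version.
# The 1.20 floor is an assumption -- check go.mod in the Incus source for
# the real requirement.
go_ok() {  # usage: go_ok "go version go1.21.4 linux/amd64"
  set -- $(printf '%s\n' "$1" | sed -n 's/.*go\([0-9][0-9]*\)\.\([0-9][0-9]*\).*/\1 \2/p')
  if [ "${1:-0}" -gt 1 ] || { [ "${1:-0}" -eq 1 ] && [ "${2:-0}" -ge 20 ]; }; then
    echo "ok"
  else
    echo "too old"
  fi
}

go_ok "go version go1.21.4 linux/amd64"   # → ok
go_ok "go version go1.19.13 linux/amd64"  # → too old
```

On a real host you’d feed it `"$(go version)"`. After that, at least in my understanding of the LXD-derived build system, `make deps` builds the bundled raft and cowsql and prints `export CGO_CFLAGS=…` style lines that you paste into your shell before running `make`.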
Any thoughts as to what this error means?
Creating node-1
Starting node-1
Error: Failed to start device "enp5s0": Failed to create the veth interfaces "veth106f5498" and "vethef05dbf7": Failed adding link: Failed to run: ip link add name veth106f5498 mtu 1500 txqueuelen 1000 up type veth peer name vethef05dbf7 mtu 1500 address 00:16:3e:77:d3:99 txqueuelen 1000: exit status 2 (Error: Unknown device type.)
Try `incus info --show-log local:node-1` for more info
This happens when I launch my instance.
incus launch --profile p-node images:gentoo/openrc test
My profile is:
$ incus profile show p-node
config:
  user.network-config: |
    network:
      ethernets:
        eth0:
          dhcp4: yes
          dhcp6: no
          nameservers:
            addresses: [8.8.8.8, 8.8.4.4]
        enp5s0:
          addresses: [192.168.100.20/24]
          gateway4: 192.168.100.5
description:
devices:
  enp5s0:
    name: enp5s0
    network: my_lan
    type: nic
  eth0:
    name: eth0
    nictype: bridged
    parent: incusbr0
    type: nic
  root:
    path: /
    pool: default
    size: 20GB
    type: disk
name: p-node
used_by:
Sounds like your kernel is lacking the veth device driver (CONFIG_VETH) or the module can’t be found on the system.
Thank you for that last.
Now I’ve got the following error …
Error: Failed to run: /usr/sbin/incusd forkstart node-1 /var/lib/incus/containers /var/log/incus/node-1/lxc.conf: exit status 1
Try `incus info --show-log local:node-1` for more info
… so I then run the following to see …
$ incus info --show-log node-1
Name: node-1
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2023/12/02 19:04 CST
Last Used: 2023/12/02 19:04 CST
Log:
lxc node-1 20231203010440.679 ERROR conf - ../lxc-5.0.3/src/lxc/conf.c:lxc_map_ids:3701 - newuidmap failed to write mapping "newuidmap: uid range [0-1000000000) -> [1000000-1001000000) not allowed": newuidmap 5242 0 1000000 1000000000
lxc node-1 20231203010440.679 ERROR start - ../lxc-5.0.3/src/lxc/start.c:lxc_spawn:1788 - Failed to set up id mapping.
lxc node-1 20231203010440.679 ERROR lxccontainer - ../lxc-5.0.3/src/lxc/lxccontainer.c:wait_on_daemonized_start:878 - Received container state "ABORTING" instead of "RUNNING"
lxc node-1 20231203010440.679 ERROR start - ../lxc-5.0.3/src/lxc/start.c:__lxc_start:2107 - Failed to spawn container "node-1"
lxc node-1 20231203010440.679 WARN start - ../lxc-5.0.3/src/lxc/start.c:lxc_abort:1036 - No such process - Failed to send SIGKILL via pidfd 29 for process 5242
lxc 20231203010440.689 ERROR af_unix - ../lxc-5.0.3/src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20231203010440.689 ERROR commands - ../lxc-5.0.3/src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"
And I’m guessing it’s something with shiftfs. I know I’ve installed shiftfs on Ubuntu, but on my Gentoo box I can’t seem to find any ebuilds for it. The /etc/subuid and /etc/subgid files exist, but when I run `incus info`, there are no lines containing `shiftfs=true`.
Anyone aware of how this might be done on Gentoo?
Update:
Just found a post on it in the Gentoo forums, and though it’s not clear, it appears to be something in the kernel? FWIW, I’m running 6.1.57, which is an LTS kernel.
Nope, the one you’re hitting now is about your host /etc/subuid and /etc/subgid not having been properly configured which is causing that particular container’s idmap to not be allowed.
- cat /etc/subuid
- cat /etc/subgid
- incus config show --expanded node-1
Would show your system configuration and the instance configuration.
I thought I had that one figured out, but … here’s what I’ve got …
$ cat subgid
user1:100000:65536
$ cat subuid
user1:100000:65536
$ incus config show --expanded node-1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Gentoo current amd64 (20231202_16:07)
  image.os: Gentoo
  image.release: current
  image.requirements.secureboot: "false"
  image.serial: "20231202_16:07"
  image.type: squashfs
  image.variant: openrc
  user.network-config: |
    network:
      version: 1
      config:
        - type: bridge
          name: incusbr0
        - type: physical
          name: enp5s0
          mac_address: '00:11:22:33:44:55'
          subnets:
            - type: static
              address: 192.168.100.20
              netmask: 255.255.255.0
  user.vendor-data: |
    local: us_US.UTF-8
    timezone: America/Chicago
    users:
      - name: user1
        gecos: Development
        primary_group: user1
        groups: users
        expiredate: '2030-01-01'
        lock_passwd: false
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        passwd: xxxxxxxxxxqQBaumRPs
  volatile.base_image: 5d379e2b5e8279ce42fb7a0ab617f84b051a24b98b35b8aeed2f411bf7879f4a
  volatile.cloud-init.instance-id: bb8d8177-7451-4c30-a7d8-011cad1ba9b3
  volatile.enp5s0.host_name: vethe65d76a0
  volatile.enp5s0.hwaddr: 00:16:3e:ec:e6:db
  volatile.eth0.host_name: veth9d759186
  volatile.eth0.hwaddr: 00:16:3e:b5:ae:38
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: STOPPED
  volatile.last_state.ready: "false"
  volatile.uuid: cd6e7bca-88c2-4f8a-a339-d4956d7289d0
  volatile.uuid.generation: cd6e7bca-88c2-4f8a-a339-d4956d7289d0
devices:
  enp5s0:
    name: enp5s0
    network: dtn_lan
    type: nic
  eth0:
    name: eth0
    nictype: bridged
    parent: incusbr0
    type: nic
  opt_dir:
    path: /opt/test
    shift: "true"
    source: /opt/test
    type: disk
  root:
    path: /
    pool: default
    size: 20GB
    type: disk
ephemeral: false
profiles:
- p-test-node
stateful: false
description: ""
You need:
root:1000000:1000000000
In both files.
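For what it’s worth, the numbers line up with the earlier newuidmap error: a subuid/subgid entry of the form NAME:START:COUNT allows host ids [START, START+COUNT), and the failing map asked for [1000000, 1001000000). A quick sanity check of the arithmetic:

```shell
# The failing map was: newuidmap 5242 0 1000000 1000000000, i.e. container
# ids [0, 1000000000) onto host ids starting at 1000000. A subuid line
# START:COUNT permits host ids [START, START+COUNT), so root needs a range
# whose upper bound reaches 1001000000.
start=1000000
count=1000000000
echo "root:${start}:${count} allows host ids [${start}, $((start + count)))"
# → root:1000000:1000000000 allows host ids [1000000, 1001000000)
```

The 65536-sized ranges for user1 were fine for that user, but too small (and for the wrong name) for the root-owned daemon’s map.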
Thanks again for that last.
Now at the point of creating a virtual machine using the following command:
incus launch --profile p-test images:gentoo/openrc node-1 --vm
Results in the following error:
Error: Failed instance creation: Failed creating instance record: Instance type "virtual-machine" is not supported on this server: vhost_vsock kernel module not loaded
It would appear more LKMs need to be loaded, but … if I can’t load the machine, how can I tweak it for this support? Do the images on a server not already support this virtualization, or am I grabbing the wrong ones?
Update:
I ran the following command and see that there is no Gentoo option of type VIRTUAL-MACHINE.
incus image list images: | grep -i desktop
I’ve also found a list of the kernel options needed for a VM to operate. When I run QEMU (which I’m trying to replace), I provide a kernel and pass it in on the command line. Is there some way to do the same here?
Update:
Maybe I’ve got the wrong image since I see a VM image here on this page.
https://images.linuxcontainers.org/
However, I’m unable to get the image.
I’ve tried
images:gentoo/current/amd64
images:gentoo/current/openrc
Do I have those image names correct? Because I get the following:
Error: Failed instance creation: Failed getting remote image info: Failed getting image: The requested image couldn't be found
Only container images are available for Gentoo; there are no VM images built. You can see that on https://images.linuxcontainers.org as the Incus (VM) column shows NO for Gentoo.
Canonical still hosts VM images for Gentoo. In theory it should be possible to just add their image server, since the images should be compatible, but in practice it’s not.
I don’t know which is faster: building your own VM image using distrobuilder, or “converting” one from LXD.
https://wiki.gentoo.org/wiki/Distrobuilder
(We could use an update on the Gentoo wiki on how to create VM images…)