I have become curious about confidential computing.
Does anyone have much idea about the current status of this feature? I found security.sev.session.dh which looks very promising, and some threads here and there in this forum.
Is it viable at this point to give people the capability to run snoop/tamper-proof VMs (preferably with Incus, but I can use something else) in a way where they can validate those properties?
I think full confidential computing is still a bit of a pipe dream at this stage, not just with Incus but in general. That’s not to say that features like AMD SEV aren’t valuable on their own though!
Incus indeed has basic AMD SEV and SEV-ES support which allows for using memory encryption and seeding some specific keys as part of the instance configuration.
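For anyone wanting to try it, it's just a couple of instance config keys (a sketch; key names are from the Incus instance options, and the host needs SEV enabled in firmware and in KVM):

```shell
# Launch a VM with SEV memory encryption; the policy key opts into SEV-ES.
# Assumes an SEV-capable AMD host with kernel KVM SEV support enabled.
incus launch images:debian/12 sev-vm --vm \
  -c security.sev=true \
  -c security.sev.policy.es=true

# Session keys for the guest owner can be seeded at creation time:
#   security.sev.session.dh   (guest owner's base64-encoded DH key)
#   security.sev.session.data (base64-encoded session blob)
```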
When I say that confidential computing is still a bit of a pipe dream, I'm mostly referring to the extra complexity involved. On the consumer side, you need to prepare a suitable image and know exactly how to verify it when running it in an untrusted environment. On the technical side, you really need to ensure that your guest OS doesn't react to the variety of events and data that may be coming from a compromised hypervisor.
I've attended quite a few talks at various conferences where, time and time again, it's shown just how easy it is to miss a tiny bit of attack surface here and there in the kernel, with potentially disastrous consequences if actually running in a hostile environment.
Ah, so you mean things like incus exec from an administrator could operate on an encrypted VM? (Maybe not that, but similar things, I guess.)
Yeah, that makes sense.
I guess the image encryption could be automated in whatever image preparation tools you want to use. And hopefully at some point there would be an `incus verify` that could give some certainty.
In any case, that's the input I needed. I'll shelve this for now, thanks!
Right, the guest agent is something you’d definitely want to disable inside the guest if running in an untrusted environment.
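Concretely, inside the guest that means stopping and masking the agent service so the host can no longer `incus exec` into the VM (the unit name here is an assumption; check what your image actually ships):

```shell
# Disable the Incus guest agent so the hypervisor can't exec into the VM.
# The unit name may differ per image; verify with: systemctl list-units '*agent*'
systemctl disable --now incus-agent.service
systemctl mask incus-agent.service
```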
But the more problematic issues are things like ACPI events sent from the hypervisor, data injected through the CPUID tables, … basically a bunch of things the kernel reads at various times which can be altered by the hypervisor and used to attack an otherwise protected guest.
You CAN do it yourself, but not, as far as I have been able to determine, with Incus.
I had to use regular libvirt (virt-manager/virt-install) with some custom XML, but when you do, you can get to this:
```
andrew@gateway-obiwan:~$ sudo journalctl -b | grep -i sev
Mar 10 11:08:44 gateway-obiwan kernel: Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
Mar 10 11:08:44 gateway-obiwan kernel: SEV: Status: SEV SEV-ES SEV-SNP
Mar 10 11:08:44 gateway-obiwan kernel: SEV: APIC: wakeup_secondary_cpu() replaced with wakeup_cpu_via_vmgexit()
Mar 10 11:08:44 gateway-obiwan kernel: SEV: Using SNP CPUID table, 28 entries present.
Mar 10 11:08:44 gateway-obiwan kernel: SEV: SNP running at VMPL0.
Mar 10 11:08:44 gateway-obiwan kernel: SEV: SNP guest platform device initialized.
Mar 10 11:08:45 gateway-obiwan kernel: sev-guest sev-guest: Initialized SEV guest driver (using VMPCK0 communication key)
Mar 10 11:09:10 gateway-obiwan kernel: Modules linked in: veth nft_masq nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 bridge nf_tables libcrc32c cfg80211 rfkill 8021q garp stp mrp llc binfmt_misc zfs(POE) spl(OE) nls_ascii nls_cp437 vfat fat intel_rapl_msr intel_rapl_common virtio_gpu virtio_dma_buf sev_guest drm_shmem_helper tsm pcspkr drm_kms_helper virtio_balloon button evdev drm efi_pstore configfs nfnetlink vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock vmw_vmci efivarfs qemu_fw_cfg virtio_rng ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 crc32c_generic dm_crypt dm_mod crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 sha256_ssse3 xhci_pci sha1_ssse3 xhci_hcd iTCO_wdt aesni_intel intel_pmc_bxt usbcore psmouse ahci iTCO_vendor_support watchdog gf128mul libahci libata crypto_simd usb_common virtio_net scsi_mod cryptd i2c_i801 scsi_common net_failover virtio_blk failover serio_raw i2c_smbus lpc_ich

andrew@gateway-obiwan:~$ ls -l /dev/sev-guest
crw------- 1 root root 10, 262 Mar 10 11:08 /dev/sev-guest

andrew@gateway-obiwan:~$ ls -l /dev/tpm0
crw-rw---- 1 tss root 10, 224 Mar 10 11:08 /dev/tpm0

andrew@gateway-obiwan:~$ lsblk
NAME                           MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
vda                            254:0    0   20G  0 disk
├─vda1                         254:1    0  952M  0 part  /boot/efi
├─vda2                         254:2    0  953M  0 part  /boot
└─vda3                         254:3    0 18.1G  0 part
  └─vda3_crypt                 253:0    0 18.1G  0 crypt
    ├─deb13--master--vg-root   253:1    0 17.1G  0 lvm   /
    └─deb13--master--vg-swap_1 253:2    0 1008M  0 lvm   [SWAP]
vdb                            254:16   0   50G  0 disk
└─vdb_crypt                    253:3    0   50G  0 crypt

andrew@gateway-obiwan:~$ incus list c1
+------+---------+---------------------+------+-----------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE      | SNAPSHOTS |
+------+---------+---------------------+------+-----------+-----------+
| c1   | RUNNING | 172.16.16.84 (eth0) |      | CONTAINER | 1         |
+------+---------+---------------------+------+-----------+-----------+
```
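For anyone trying to reproduce this, the relevant part of the custom domain XML is libvirt's `<launchSecurity>` element. Roughly like this (a sketch: the `cbitpos`/`reducedPhysBits` values are host-specific and should be taken from `virsh domcapabilities`, and the `sev-snp` type needs a reasonably recent libvirt/QEMU):

```xml
<!-- Host-specific values; query them with `virsh domcapabilities` -->
<launchSecurity type='sev-snp'>
  <cbitpos>51</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
  <policy>0x00030000</policy>
</launchSecurity>
```

With plain SEV or SEV-ES you'd use `type='sev'` and the older 32-bit policy format instead.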
I tried for ages to get Incus to work. This is as close as I have got: a LUKS-encrypted VM running, essentially, "AES-encrypted" live, with Incus containers (not VMs, which isn't possible) inside it, benefiting from the extra privacy protection.
EPYC Milan gets you SEV-SNP, EPYC Rome gets you SEV-ES, and Naples gets you SEV (the latter is not fully tested with my setup, but Rome and Milan have been).
I believe Incus itself is a sniff away (an OVMF upgrade, which I had to do somewhat manually) from achieving this natively, but a good second best is running libvirt VMs with Incus running containers inside them.
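If you want to check programmatically from inside the guest which SEV features are actually active, grepping the kernel log as above works fine; a tiny helper along these lines (a sketch, matching the `SEV: Status:` line format from my journal output):

```shell
# Extract the active SEV feature set from kernel log output containing
# a line like: "kernel: SEV: Status: SEV SEV-ES SEV-SNP"
sev_status() {
  sed -n 's/.*SEV: Status: //p' | head -n1
}

# Typical use inside the guest (journalctl -b needs root):
#   sudo journalctl -b -k | sev_status
```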