LXD: NFSv4 client mount inside container uses the host's request-key/nfsidmap instead of the container's

I’m mounting some NFS filesystems within an LXD container, using NFSv4 with Kerberos so that I can map usernames. This all works fine except that the client-side ID mapping uses the host’s kernel upcall mechanism (request-key + nfsidmap) instead of the container’s. My container is on its own VLAN with its own Kerberos realm, so I really need the ID mapping to happen within the container’s namespaces.
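For context, the client-side lookup goes through the kernel's `id_resolver` key type: the kernel spawns `/sbin/request-key`, which consults its configuration and runs `nfsidmap` to translate `user@REALM` names to UIDs. Because that usermode helper is launched by the kernel in the init (host) namespaces, it's the host's configuration that answers, no matter which container triggered the mount. On Ubuntu the relevant host-side config is typically something like the following (the exact options may differ between nfs-common versions, so treat this as a sketch):

```
# /etc/request-key.d/id_resolver.conf (on the host)
# The kernel invokes request-key for id_resolver keys; request-key then
# runs the host's nfsidmap, which is why the container's mapping is bypassed.
create	id_resolver	*	*	/usr/sbin/nfsidmap -t 600 %k %d
```

This is why the mapping appears to ignore the container entirely: the upcall never enters the container's mount, user, or network namespaces.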

Is this something that should be expected to work? The host is running Ubuntu 22.04 with kernel 5.19.17. The client LXD container is running in privileged mode, as that’s the only way I could get NFS mounting to work at all.

The NFS server is also running in an LXD container on another host, and krb5 identity mapping works well with it when using other, non-LXD clients. The server side uses rpc.idmapd with the rpc_pipefs filesystem mounted in the container to map identities. That’s a different kernel interface from the client’s request-key upcall, which may explain why the server-side idmapd works inside the container’s namespaces while the client lookup mechanism doesn’t.
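The server-side path supports that theory: rpc.idmapd is a long-running daemon that the admin starts inside the container, reading upcalls from the rpc_pipefs mount there, so it naturally sees the container's config and name services. A minimal sketch of the server container's setup (the domain value is a placeholder, not from my actual setup):

```
# /etc/idmapd.conf inside the server container
# rpc.idmapd runs as a daemon in the container and reads upcalls from
# rpc_pipefs, so this config (and the container's NSS) is what applies.
[General]
Domain = example.com

[Translation]
Method = nsswitch
```

In other words, rpc.idmapd inherits the container's namespaces because it is started there, whereas the client's request-key helper is spawned by the kernel in the host's namespaces.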