LXCFS can't be upgraded - does not have a Release file

lxcfs --version
4.0.3

That’s really old and has known issues.

I’ve tried both of the following:

sudo add-apt-repository ppa:ubuntu-lxc/lxcfs-stable
sudo apt update

add-apt-repository ppa:ubuntu-lxc/lxcfs-stable
 This PPA contains the latest stable release of LXCFS as well as the latest stable version of any of its dependencies.
 More info: https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxcfs-stable
Press [ENTER] to continue or Ctrl-c to cancel adding it.

Hit:1 http://nova.clouds.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]  
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]                         
Ign:4 http://ppa.launchpad.net/ubuntu-lxc/lxcfs-stable/ubuntu focal InRelease                               
Err:5 http://ppa.launchpad.net/ubuntu-lxc/lxcfs-stable/ubuntu focal Release               
  404  Not Found [IP: 185.125.190.52 80]
Get:6 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:7 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [1685 kB]
Get:8 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [2072 kB]
Get:9 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main Translation-en [368 kB]
Get:10 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 c-n-f Metadata [15.8 kB]
Get:11 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [1251 kB]
Get:12 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/restricted Translation-en [177 kB]
Get:13 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [943 kB]
Get:14 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/universe Translation-en [213 kB]
Reading package lists... Done                                    
E: The repository 'http://ppa.launchpad.net/ubuntu-lxc/lxcfs-stable/ubuntu focal Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

How do I upgrade it? There are no clear instructions.

The same error occurs with:

add-apt-repository https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxcfs-stable

@stgraber are we only providing LXCFS 5.x as source or bundled in the LXD snap package nowadays?

@ben LXCFS 5.0 is bundled as a deb with Ubuntu Jammy (see the Ubuntu package search results for lxcfs).

But you only need to install it if using LXC, not LXD.

So I have to uninstall the whole system and go back to square one?

If Jammy is 20.04 then that’s what I have, and apt upgrade lxcfs shows 4.0.3:

apt upgrade lxcfs
Reading package lists... Done
Building dependency tree       
Reading state information... Done
lxcfs is already the newest version (4.0.3-0ubuntu1).
Calculating upgrade... Done
The following packages will be upgraded:
  ubuntu-advantage-tools
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 146 kB of archives.
After this operation, 2625 kB disk space will be freed.

Jammy is 22.04.

Are you trying to use LXCFS with LXD?

Because if you are using the snap package there is no need to install LXCFS directly.
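
(If in doubt, snap list will confirm whether LXD is installed from the snap and which channel it is tracking; this is standard snap usage, nothing LXD-specific:)

snap list lxd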

Swap is not behaving properly: it’s not being used inside the containers, everything is using tonnes of RES in top, and I’m reading that early versions of LXCFS are problematic and that this was fixed in newer versions. Everything is grinding to a halt in my containers even though each one is only running a single thing.

What LXD version?

5.3

OK, so you don’t need LXCFS installed. For clarity I would suggest apt remove lxcfs, and then ensure you’re running the latest supported version of LXD (as LXD 5.3 is out of support now):

sudo snap refresh lxd --channel=latest/stable

LXD 5.5 is the current supported latest/stable release.
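
For completeness, the cleanup side of that would look something like the following; this is just standard apt/add-apt-repository usage, with the PPA name taken from the one added earlier in this thread:

sudo add-apt-repository --remove ppa:ubuntu-lxc/lxcfs-stable
sudo apt update
sudo apt remove lxcfs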

Once you’ve done that, can you please provide full details of the issue you are experiencing, with examples? Thanks

snap refresh lxd --channel=latest/stable
lxd 5.5-37534be from Canonical✓ refreshed

root@h:~# lvs
  LV                                                                      VG  Attr       LSize  Pool        Origin Data%  Meta%  Move Log Cpy%Sync Convert
  LXDThinPool                                                             ALL twi-aotz-- <7.25t                    0.58   10.66                           
  containers_asafe                                                        ALL Vwi-aotz-k 10.00g LXDThinPool        92.54                                  
  containers_bir                                                          ALL Vwi-aotz-k 10.00g LXDThinPool        94.18                                  
  containers_faucex                                                       ALL Vwi-aotz-k 10.00g LXDThinPool        34.41                                  
  containers_holdip                                                       ALL Vwi-aotz-k 10.00g LXDThinPool        33.87                                  
  containers_pny                                                          ALL Vwi-aotz-k 10.00g LXDThinPool        63.83                                  
  containers_sapp                                                         ALL Vwi-aotz-k 10.00g LXDThinPool        99.77                                  
  images_e299296138c256b79dda4e61ac7454cf4ac134b43f5521f1ac894f49a9421d00 ALL Vwi---tz-k 10.00g LXDThinPool

Running top on the host (filtered to just the processes inside my containers), RES keeps going up and up and up:

top - 15:25:32 up 1 day, 23:22,  5 users,  load average: 5.52, 4.68, 3.08
Tasks:   6 total,   0 running,   6 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.2 us,  6.4 sy,  7.4 ni, 80.0 id,  3.8 wa,  0.0 hi,  1.3 si,  0.0 st
MiB Mem :  64214.8 total,  51722.8 free,   5398.5 used,   7093.5 buff/cache
MiB Swap:  10752.0 total,  10749.0 free,      3.0 used.  58010.1 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                    
 901138 root      20   0  890872  61824  27148 S   0.3   0.1   0:05.29 relay                                                                      
 901041 root      20   0  919144  91024  28740 S   0.0   0.1   0:11.33 faucex                                                                     
 901586 root      34  14 2850744 510764 152780 S  66.3   0.8   5:56.35 SAPP.daemon                                                                
 901814 root      39  19 2791840 443136  83708 S  18.3   0.7   1:12.45 PNY.daemon                                                                 
 901694 root      39  19 3403632 991604 137604 S  12.3   1.5   1:02.80 BIR.daemon                                                                 
 901470 root      34  14 4543480   2.2g 137512 S  36.0   3.4   6:14.83 ASAFE.daemon

Notice the 2.2g RES for one process (and more will start next alongside it), even though I’m using both nice and cpulimit on each process inside the container, and I have set

limits.cpu.allowance: 7ms/10ms
limits.cpu.priority: "10"
limits.memory: 10%
limits.memory.enforce: hard
limits.memory.swap.priority: "10"

on each container
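
(For reference, these are standard LXD instance config keys and can be applied per container with lxc config set; the sapp name here is just the container from the outputs below:)

lxc config set sapp limits.cpu.allowance 7ms/10ms
lxc config set sapp limits.cpu.priority 10
lxc config set sapp limits.memory 10%
lxc config set sapp limits.memory.enforce hard
lxc config set sapp limits.memory.swap.priority 10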

One of the processes, for example, hasn’t done anything for 8 minutes:

tail /home/SAPP/data/debug.log
2022-08-31 15:22:22 UpdateTip: new best=bc39f923cd206f6fb0682f97b8c05b493255393a3504b64cc3f933b7cd183999  height=884574 version=7  log2_work=68.6262293348501657  tx=8237975  date=2021-01-02 05:41:07 progress=0.525336  cache=9.1MiB(59501txo)
2022-08-31 15:22:22 ProcessNewBlock : ACCEPTED Block 884574 in 312571 milliseconds with size=147180
2022-08-31 15:22:22 receive version message: /Sapphire Core:1.4.2.3/: version 70929, blocks=1741676, us=167.114.211.121:46382, peer=13
2022-08-31 15:22:22 timeOffset (-312 seconds) too large. Disconnecting node 209.145.49.8:45328
2022-08-31 15:22:22 CheckBlock: Masternode/Budget payment checks skipped on sync

strace on the process shows:

sapp t
strace: Process 394 attached
restart_syscall(<... resuming interrupted read ...>) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170975, tv_nsec=143569148}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170975, tv_nsec=350256423}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170975, tv_nsec=550503722}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170975, tv_nsec=750820321}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170975, tv_nsec=951103035}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170976, tv_nsec=151353984}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170976, tv_nsec=351636217}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170976, tv_nsec=551851948}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170976, tv_nsec=752255501}, FUTEX_BITSET_MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
futex(0x7ffd548ca480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7ffd548ca4d0, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=170976, tv_nsec=952530679}, FUTEX_BITSET_MATCH_ANY^Cstrace: Process 394 detached
 <detached ...>

You can see swap on the host has 10 GB:

root@h:~# swapon -s
Filename				Type		Size	Used	Priority
/dev/sda3                              	partition	524284	3084	-2
/rttswapfile                           	file    	10485756	0	-3

And swap in the container:

root@sapp:~# swapon -s
root@sapp:~#

I see each container is not using swap

Tasks:  26 total,   1 running,  25 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.3 us,  0.0 sy,  0.0 ni, 95.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   6271.0 total,   5135.0 free,    502.3 used,    633.7 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   5768.7 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                    
    394 root      34  14 2849576 513200 151940 S  65.6   8.0  12:04.77 SAPP.daemon 

In my old setup, before learning LXD, I used SolusVM and had a VM for each daemon: 14 different daemons, each with a Node.js app running side-by-side in its VM.

I moved to LXD because I wanted to do more; now I can’t even run a few!

These containers actually appear to slow each other down and step on each other.

When I had Raspberry Pis I could run 4 of these same daemons and 4 Node.js apps per Pi.

The daemons are on the lowest config settings, just as if they were on a Pi.

OK, so first things first: LXCFS is only responsible for displaying memory limits and usage; it doesn’t actually enforce any limits. That is done by the kernel’s cgroups (which are configured by LXD).
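
As a quick sanity check that the cgroup limit is actually applied, you can read it from inside the instance. This sketch assumes cgroup v1 (the Ubuntu 20.04 default), where the file is memory.limit_in_bytes (under cgroup v2 it is memory.max), and uses your sapp container as an example:

lxc exec sapp -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes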

With regards to how swap is displayed inside the container, please see https://github.com/lxc/lxcfs/blob/master/README.md#swap-handling. It sounds like your host doesn’t have swap accounting enabled, which is why you don’t see any swap usage.

Please note this doesn’t mean your containers aren’t using swap; it just means the kernel doesn’t provide the mechanism for LXCFS to display how much swap the container is using.
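
A quick way to check whether swap accounting is enabled on the host (assuming cgroup v1, where it requires booting with swapaccount=1 and exposes the memory.memsw.* files) is something like:

grep -o 'swapaccount=[^ ]*' /proc/cmdline
ls /sys/fs/cgroup/memory/memory.memsw.limit_in_bytes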

So the question is why are you seeing high load, and is the memory limit working?

Can you show output of cat /proc/meminfo on the host please?

root@h:~# cat /proc/meminfo
MemTotal: 65755916 kB
MemFree: 52706324 kB
MemAvailable: 59319108 kB
Buffers: 582764 kB
Cached: 6327140 kB
SwapCached: 0 kB
Active: 6746028 kB
Inactive: 4745528 kB
Active(anon): 4526852 kB
Inactive(anon): 149408 kB
Active(file): 2219176 kB
Inactive(file): 4596120 kB
Unevictable: 36108 kB
Mlocked: 36108 kB
SwapTotal: 11010040 kB
SwapFree: 11006956 kB
Dirty: 1292 kB
Writeback: 0 kB
AnonPages: 4603800 kB
Mapped: 1113180 kB
Shmem: 151716 kB
KReclaimable: 528036 kB
Slab: 1171788 kB
SReclaimable: 528036 kB
SUnreclaim: 643752 kB
KernelStack: 20288 kB
PageTables: 33448 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 43887996 kB
Committed_AS: 13257796 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 176496 kB
VmallocChunk: 0 kB
Percpu: 61824 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 1653612 kB
DirectMap2M: 28649472 kB
DirectMap1G: 38797312 kB

So you have 64GB of RAM on the host, with each instance hard-limited to 10%, so approx 6.5GB of RAM each (roughly in line with the 6271 MiB MemTotal shown inside the container earlier).

The host has:

root@h:~# sysctl -a | grep swappiness
vm.swappiness = 60

You could try setting that to 1 if you are looking to reduce swapping.

See Understanding vm.swappiness
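
For example (this is plain sysctl usage rather than anything LXD-specific, and the drop-in file name is only an illustration):

sudo sysctl vm.swappiness=1
echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf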

So that process with 2.2GB resident is allowed under the limits.memory setting currently used.

Where was the memory.enforce = true that you put earlier? I was just about to try it but I don’t see it here anymore.

Could you also show lxc info <instance> and lxc config show <instance> --expanded for the instances, as that shows memory usage and the configured limits.
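
For example, using the sapp instance from your earlier output:

lxc info sapp
lxc config show sapp --expanded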