Strange continuous read IO burst when a container is "low" on memory

Hi,

I have seen continuous read IO bursts on containers, which then become unavailable.
The only way to fix it is to restart the container.

I have been able to reproduce it on an AWS Ubuntu Jammy instance with an Ubuntu Jammy container running clamd, with 1GB of container memory.

I have seen it with all of the following combinations:
lxd 5.0.0 / kernel 5.15.0-1011-aws
lxd 5.1 / kernel 5.15.0-1005-aws
lxd 5.2 / kernel 5.15.0-1011-aws

Can anyone confirm this is a bug?

Regards,

Justin

It’s not a bug, it’s normal Linux behavior.

When you run out of memory, even in a container, the kernel runs out of VFS cache space.
So instead of being able to hold the content of your open files in cache memory, it has to re-fetch the data over and over again.

Technically, Linux does what you asked it to do: it’s not exceeding the memory limit and is trying not to trigger the OOM killer, but this comes at the cost of having no cache space for data and so constantly re-reading it from disk.
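A quick way to check whether that is what’s happening is to watch the reclaim/refault counters of the container’s memory cgroup while the burst is going on. A rough sketch (assuming cgroup2 and a container named burst1, as used later in this thread):

#!/bin/bash
# Watch the page-cache refault/reclaim counters for the container's cgroup.
# Steadily climbing workingset_refault_file, pgscan and pgmajfault values while
# the memory limit is being hit are the signature of the cache thrashing described above.
STAT=/sys/fs/cgroup/lxc.payload.burst1/memory.stat
while true; do
    date
    grep -E '^(workingset_refault_file|pgscan|pgmajfault) ' "$STAT"
    sleep 5
done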

@stgraber thank you very much for your reply.

As I mentioned in the github issue, running out of VFS cache would certainly partly explain the issue seen.

But some things make me wonder:

  • The container is doing nothing. It only has the initial system processes and clamd running. No virus scanning is taking place. So what are these processes reading at maximum throughput (128MB/s) without ever stopping? (init/@dbus-daemon/clamd/systemd-hostnamed)

  • If memory is so critically low that a container is becoming unavailable, why is OOM not intervening?

  • I don’t see swap being used.

  • Before, when all containers shared a disk, the whole server became unavailable when this issue occurred.

  • After 10 years on OpenVZ I migrated to LXD; I have never seen containers in OpenVZ be so self-destructive (no pun intended).

If it is, as you say, by design, what can I do to either make the OOM killer more aggressive or improve the behavior/impact in general?
Sure, I could remove all memory limits on my containers, but I think I would then just be covering up the issue.

Regards, Justin

The OOM killer will only kick in if flushing all caches/buffers doesn’t yield enough memory to serve the allocation.

You may want to strace the different processes to see if something odd is going on.

Another interesting user of memory which isn’t super visible is tmpfs, so you may want to check that you don’t have a near-full tmpfs in that instance too.

@stgraber again thank you for your response, very much appreciated.

tmpfs in the container does not look like a possible source of the issue:

root@burst1:~# df -h |grep tmpfs
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 774M 156K 773M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock

I have now tried two things.

  1. Try to catch something with strace.

iotop output:

TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                                                                                                      

21902 be/4 1000000 25.25 M/s 0.00 B/s ?unavailable? init
21980 be/4 1000000 25.67 M/s 0.00 B/s ?unavailable? systemd-journald
22049 be/4 1000102 26.84 M/s 0.00 B/s ?unavailable? @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
22057 be/4 1000106 23.30 M/s 0.00 B/s ?unavailable? clamd --foreground=true
22070 be/4 1000000 26.57 M/s 0.00 B/s ?unavailable? polkitd --no-debug [gdbus]

Running strace on each of these PIDs, stopping after 10 seconds:

strace -f -p 21902

strace: Process 21902 attached
strace: Process 21902 detached

strace -f -p 21980

strace: Process 21980 attached
gettid() = 71
epoll_wait(6,
strace: Process 21980 detached
<detached …>

strace -f -p 22049

strace: Process 22049 attached
strace: Process 22049 detached

strace -f -p 22057

strace: Process 22057 attached
strace: Process 22057 detached

strace -f -p 22070

strace: Process 22070 attached with 3 threads
[pid 22065] restart_syscall(<… resuming interrupted read …> <unfinished …>
[pid 22067] restart_syscall(<… resuming interrupted read …>
strace: Process 22070 detached
strace: Process 22065 detached
strace: Process 22067 detached
<detached …>

According to strace, these processes are not doing much.

  2. Try to prove that a shortage of (or non-existent) VFS cache could cause this behavior in this specific situation.

lxc stop -f burst1
lxc config set burst1 limits.memory 1500MB (1500MB mem does not trigger the io read burst)

Start this script in a screen on the server to mimic having no VFS cache:

#!/bin/bash
while true; do
    sync; echo 3 > /proc/sys/vm/drop_caches
    sleep 1
done

lxc start burst1

This does not trigger the read IO burst.
The server has a steady 1MB/s of read IO after the container has started.
Nothing compared to the 128MB/s seen when the IO burst occurs.

I went back to OpenVZ to see how this is handled there.

I installed a CentOS 7 container on kernel 2.6.32-042stab145.3:

vzctl set burst1 --ram 1G --swap 1G --dcachesize 256M --save (not sure if this is fully comparable to lxc config set burst1 limits.memory 1GB)
vzctl exec burst1 yum -y install epel-release
vzctl exec burst1 yum -y install clamd
vzctl exec burst1 freshclam
vzctl exec burst1 sed -i s/^#LocalSocket/LocalSocket/g /etc/clamd.d/scan.conf
vzctl exec burst1 systemctl enable clamd@scan
vzctl exec burst1 systemctl start clamd@scan.service

I don’t seem to be getting any read IO burst. I have seen clamd being killed when trying to start other things in the container.

@stgraber please let me know if providing access to an ec2 instance to reproduce this io burst would help, instead of following my reproduction steps.

I’m not really sure how to debug this further to be honest.

LXD/LXC configure your memory limit in the kernel through the memory cgroup controller. How that’s enforced and the effect you’ll see when getting close to the limit isn’t something that we have any control over unfortunately.
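For reference, here is roughly where that limit ends up in the kernel (a sketch; the exact paths depend on whether the host is on cgroup1 or cgroup2, and assume a container named burst1):

# cgroup2 (unified hierarchy): the limit set with
#   lxc config set burst1 limits.memory 1GB
# shows up in memory.max of the container's payload cgroup:
cat /sys/fs/cgroup/lxc.payload.burst1/memory.max
# cgroup1 (legacy hierarchy): the equivalent file is memory.limit_in_bytes,
# typically under the memory controller mount (exact path may vary):
cat /sys/fs/cgroup/memory/lxc.payload.burst1/memory.limit_in_bytes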

It’s possible that the OpenVZ patchset (which at least used to heavily modify some of the resource tracking logic) is behaving differently than stock Linux, but it’s also possible that cgroup2 and the whole memory pressure (PSI) work that’s happened over the past few years has changed how Linux handles such cases.

Unfortunately comparing behavior between a heavily patched 2.6.32 and mostly stock 5.15 isn’t going to be very helpful here.

Can you show what you have in ls -lh /sys/fs/cgroup? I just want to see if we’re dealing with cgroup1 or cgroup2, as the behavior could differ between the two, as does the information available about high memory pressure.

@stgraber thank you again for your reply.

I was not expecting any debugging yet.

Firstly, I was looking for someone with extensive LXD/LXC knowledge to reproduce what I see,
and then to agree or disagree on whether the behavior is expected, desirable, or possibly a bug.

But maybe you already reproduced what I see?

If there is agreement that this does not look like desirable behavior, the next steps would be to pinpoint where this is controlled, bring the reproduction steps closer to the source of the problem, and open a report upstream.

The only thing I tried to show with OpenVZ, and with the test scenario in which I “disabled” the VFS cache, is that this does not seem to be a case of, as you said before, “Linux does what you asked it to do”. Processes endlessly reading from disk at the highest possible throughput, without showing anything in an strace, does not look like Linux’s normal way of dealing with a shortage of memory.

Regards, Justin

I’m not sure it is a cgroupv1 vs v2 issue,
as I think Amazon Linux 2 uses cgroupv1 and I tested Ubuntu with both cgroupv1 and cgroupv2.
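For completeness, a quick way to check which hierarchy a given host is on:

# Prints the filesystem type mounted at /sys/fs/cgroup:
# "cgroup2fs" means a pure cgroup2 (unified) hierarchy, "tmpfs" means cgroup1 or hybrid.
stat -fc %T /sys/fs/cgroup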

Here is the output you requested:

Amazon Linux 2, lxd 5.3 / kernel 5.10.112-108.499, on which I can reproduce the issue with an Amazon Linux 2 container:

ls -lh /sys/fs/cgroup /proc/cgroups
-r--r--r-- 1 root root 0 Jul 14 10:55 /proc/cgroups

/sys/fs/cgroup:
total 0
dr-xr-xr-x 6 root root 0 Jul 14 10:55 blkio
lrwxrwxrwx 1 root root 11 Jul 14 10:55 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Jul 14 10:55 cpuacct -> cpu,cpuacct
dr-xr-xr-x 6 root root 0 Jul 14 10:55 cpu,cpuacct
dr-xr-xr-x 4 root root 0 Jul 14 10:55 cpuset
dr-xr-xr-x 6 root root 0 Jul 14 10:55 devices
dr-xr-xr-x 5 root root 0 Jul 14 10:55 freezer
dr-xr-xr-x 4 root root 0 Jul 14 10:55 hugetlb
dr-xr-xr-x 6 root root 0 Jul 14 10:55 memory
lrwxrwxrwx 1 root root 16 Jul 14 10:55 net_cls -> net_cls,net_prio
dr-xr-xr-x 4 root root 0 Jul 14 10:55 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Jul 14 10:55 net_prio -> net_cls,net_prio
dr-xr-xr-x 4 root root 0 Jul 14 10:55 perf_event
dr-xr-xr-x 6 root root 0 Jul 14 10:55 pids
dr-xr-xr-x 6 root root 0 Jul 14 10:55 systemd

On Ubuntu jammy-22.04, lxd 5.3 / kernel 5.15.0-1011-aws, on which I can reproduce the issue with an Amazon Linux 2 container (I assume cgroup1, as I had to set the kernel parameter GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false" to get the Amazon container to start):
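For reference, setting that parameter is roughly the usual GRUB route:

# In /etc/default/grub, set:
#   GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false"
# then regenerate the GRUB config and reboot:
update-grub
reboot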

ls -lh /sys/fs/cgroup /proc/cgroup
ls: cannot access ‘/proc/cgroup’: No such file or directory
/sys/fs/cgroup:
total 0
dr-xr-xr-x 12 root root 0 Jul 14 08:55 blkio
lrwxrwxrwx 1 root root 11 Jul 14 08:55 cpu -> cpu,cpuacct
dr-xr-xr-x 12 root root 0 Jul 14 08:55 cpu,cpuacct
lrwxrwxrwx 1 root root 11 Jul 14 08:55 cpuacct -> cpu,cpuacct
dr-xr-xr-x 3 root root 0 Jul 14 08:55 cpuset
dr-xr-xr-x 12 root root 0 Jul 14 08:55 devices
dr-xr-xr-x 4 root root 0 Jul 14 08:55 freezer
dr-xr-xr-x 3 root root 0 Jul 14 08:55 hugetlb
dr-xr-xr-x 12 root root 0 Jul 14 08:55 memory
dr-xr-xr-x 3 root root 0 Jul 14 08:55 misc
lrwxrwxrwx 1 root root 16 Jul 14 08:55 net_cls -> net_cls,net_prio
dr-xr-xr-x 3 root root 0 Jul 14 08:55 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Jul 14 08:55 net_prio -> net_cls,net_prio
dr-xr-xr-x 3 root root 0 Jul 14 08:55 perf_event
dr-xr-xr-x 12 root root 0 Jul 14 08:55 pids
dr-xr-xr-x 3 root root 0 Jul 14 08:55 rdma
dr-xr-xr-x 13 root root 0 Jul 14 08:55 systemd
dr-xr-xr-x 13 root root 0 Jul 14 09:11 unified

On Ubuntu jammy-22.04, lxd 5.3 / kernel 5.15.0-1011-aws, on which I can reproduce the issue with an Ubuntu jammy-22.04 container (I assume cgroup2):

ls -lh /sys/fs/cgroup /proc/cgroups
-r--r--r-- 1 root root 0 Jul 14 08:30 /proc/cgroups

/sys/fs/cgroup:
total 0
-r--r--r-- 1 root root 0 Jul 14 08:30 cgroup.controllers
-rw-r--r-- 1 root root 0 Jul 14 08:31 cgroup.max.depth
-rw-r--r-- 1 root root 0 Jul 14 08:31 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Jul 14 08:31 cgroup.procs
-r--r--r-- 1 root root 0 Jul 14 08:31 cgroup.stat
-rw-r--r-- 1 root root 0 Jul 14 08:31 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Jul 14 08:31 cgroup.threads
-rw-r--r-- 1 root root 0 Jul 14 08:31 cpu.pressure
-r--r--r-- 1 root root 0 Jul 14 08:31 cpu.stat
-r--r--r-- 1 root root 0 Jul 14 08:31 cpuset.cpus.effective
-r--r--r-- 1 root root 0 Jul 14 08:31 cpuset.mems.effective
drwxr-xr-x 2 root root 0 Jul 14 08:31 dev-hugepages.mount
drwxr-xr-x 2 root root 0 Jul 14 08:31 dev-mqueue.mount
drwxr-xr-x 2 root root 0 Jul 14 08:30 init.scope
-rw-r--r-- 1 root root 0 Jul 14 08:31 io.cost.model
-rw-r--r-- 1 root root 0 Jul 14 08:31 io.cost.qos
-rw-r--r-- 1 root root 0 Jul 14 08:31 io.pressure
-rw-r--r-- 1 root root 0 Jul 14 08:31 io.prio.class
-r--r--r-- 1 root root 0 Jul 14 08:31 io.stat
drwxr-xr-x 2 root root 0 Jul 14 08:34 lxc.pivot
-r--r--r-- 1 root root 0 Jul 14 08:31 memory.numa_stat
-rw-r--r-- 1 root root 0 Jul 14 08:31 memory.pressure
-r--r--r-- 1 root root 0 Jul 14 08:31 memory.stat
-r--r--r-- 1 root root 0 Jul 14 08:31 misc.capacity
drwxr-xr-x 2 root root 0 Jul 14 08:31 proc-sys-fs-binfmt_misc.mount
drwxr-xr-x 2 root root 0 Jul 14 08:31 sys-fs-fuse-connections.mount
drwxr-xr-x 2 root root 0 Jul 14 08:31 sys-kernel-config.mount
drwxr-xr-x 2 root root 0 Jul 14 08:31 sys-kernel-debug.mount
drwxr-xr-x 2 root root 0 Jul 14 08:31 sys-kernel-tracing.mount
drwxr-xr-x 40 root root 0 Jul 14 08:38 system.slice
drwxr-xr-x 3 root root 0 Jul 14 08:34 user.slice

CentOS/OpenVZ 6, kernel 2.6.32-042stab145.3, on which I cannot reproduce the issue:

ls -lh /sys/fs/cgroup /proc/cgroups
ls: cannot access /sys/fs/cgroup: No such file or directory
-r--r--r-- 1 root root 0 Jul 14 10:45 /proc/cgroups

Regards, Justin

It may be interesting to look at /sys/fs/cgroup/lxc.payload.CONTAINER-NAME/, specifically the files:

  • memory.events
  • memory.events.local
  • memory.pressure
  • memory.stat

Those can give you some hints as to why the kernel is behaving the way it is.

@stgraber thank you for your suggestion.

I made a small script that starts the container and then collects the read throughput of the container’s dedicated disk and the cgroup values you mentioned, 10 times.

I tested on the Ubuntu jammy-22.04 server with an Ubuntu jammy-22.04 container on kernel 5.15.0-1011-aws.
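The script itself is not shown here, but it boils down to something like this rough sketch (the disk device name and the exact iostat invocation are assumptions, not the real script):

#!/bin/bash
# Start the container, then take 10 measurements: the read throughput of the
# container's dedicated disk plus the cgroup memory files mentioned above.
CT=burst1
CG=/sys/fs/cgroup/lxc.payload.$CT

echo "starting container $CT $(date)"
lxc start "$CT"

for i in $(seq 1 10); do
    echo "-- measurement $i $(date)"
    # 10-second read throughput sample of the container's disk (device name assumed)
    iostat -m -d nvme1n1 10 2 | awk '/nvme1n1/ {v=$3} END {print v" MB_read/s"}'
    for f in memory.events memory.events.local memory.pressure memory.stat; do
        echo "-- $CG/$f"
        cat "$CG/$f"
    done
done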

Here is the output:

starting container burst1 Fri Jul 15 09:12:36 UTC 2022

– measurement 1 Fri Jul 15 09:12:36 UTC 2022
25 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 407
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 407
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=0.00 avg60=0.00 avg300=0.00 total=22448
full avg10=0.00 avg60=0.00 avg300=0.00 total=22448

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 921116672
file 555839488
kernel_stack 360448
pagetables 2965504
percpu 250960
sock 4096
shmem 167936
file_mapped 41738240
file_dirty 909312
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 921149440
active_anon 135168
inactive_file 339181568
active_file 216489984
unevictable 0
slab_reclaimable 16465104
slab_unreclaimable 1854104
slab 18319208
workingset_refault_anon 0
workingset_refault_file 0
workingset_activate_anon 0
workingset_activate_file 0
workingset_restore_anon 0
workingset_restore_file 0
workingset_nodereclaim 0
pgfault 313029
pgmajfault 384
pgrefill 2
pgscan 23410
pgsteal 21840
pgactivate 51705
pgdeactivate 2
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 2 Fri Jul 15 09:12:47 UTC 2022
108 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 9332
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 9332
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=10.62 avg60=2.39 avg300=0.52 total=1621622
full avg10=10.05 avg60=2.27 avg300=0.49 total=1560154

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489858560
file 1138688
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 679936
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489891328
active_anon 135168
inactive_file 565248
active_file 405504
unevictable 0
slab_reclaimable 1543744
slab_unreclaimable 1844760
slab 3388504
workingset_refault_anon 0
workingset_refault_file 240267
workingset_activate_anon 0
workingset_activate_file 22680
workingset_restore_anon 0
workingset_restore_file 11930
workingset_nodereclaim 494
pgfault 472278
pgmajfault 8785
pgrefill 542365
pgscan 959406
pgsteal 436618
pgactivate 491720
pgdeactivate 515443
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 3 Fri Jul 15 09:12:57 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 16749
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 16749
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=18.05 avg60=5.49 avg300=1.26 total=3990903
full avg10=17.39 avg60=5.26 avg300=1.21 total=3830513

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489932288
file 757760
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 86016
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489965056
active_anon 135168
inactive_file 192512
active_file 266240
unevictable 0
slab_reclaimable 1554976
slab_unreclaimable 1844760
slab 3399736
workingset_refault_anon 0
workingset_refault_file 569285
workingset_activate_anon 0
workingset_activate_file 47149
workingset_restore_anon 0
workingset_restore_file 26548
workingset_nodereclaim 496
pgfault 500858
pgmajfault 20780
pgrefill 906890
pgscan 1788933
pgsteal 765798
pgactivate 806939
pgdeactivate 855295
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 4 Fri Jul 15 09:13:07 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 30551
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 30551
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=22.43 avg60=8.48 avg300=2.06 total=6553415
full avg10=21.89 avg60=8.20 avg300=1.99 total=6297730

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 905216
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 16384
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 557056
active_file 180224
unevictable 0
slab_reclaimable 1551152
slab_unreclaimable 1844760
slab 3395912
workingset_refault_anon 0
workingset_refault_file 898388
workingset_activate_anon 0
workingset_activate_file 56004
workingset_restore_anon 0
workingset_restore_file 32521
workingset_nodereclaim 496
pgfault 524801
pgmajfault 31696
pgrefill 1393683
pgscan 6049803
pgsteal 1095033
pgactivate 1277976
pgdeactivate 1335208
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 5 Fri Jul 15 09:13:17 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 48506
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 48506
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=23.09 avg60=10.82 avg300=2.80 total=8997331
full avg10=22.80 avg60=10.55 avg300=2.72 total=8703356

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 860160
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 8192
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 503808
active_file 151552
unevictable 0
slab_reclaimable 1551744
slab_unreclaimable 1844760
slab 3396504
workingset_refault_anon 0
workingset_refault_file 1227271
workingset_activate_anon 0
workingset_activate_file 57080
workingset_restore_anon 0
workingset_restore_file 33046
workingset_nodereclaim 496
pgfault 545967
pgmajfault 42131
pgrefill 1914061
pgscan 12821287
pgsteal 1423946
pgactivate 1796667
pgdeactivate 1854982
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 6 Fri Jul 15 09:13:27 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 66851
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 66851
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=23.61 avg60=12.83 avg300=3.51 total=11428288
full avg10=23.27 avg60=12.55 avg300=3.43 total=11113295

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 1036288
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 0
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 688128
active_file 180224
unevictable 0
slab_reclaimable 1551744
slab_unreclaimable 1844760
slab 3396504
workingset_refault_anon 0
workingset_refault_file 1555768
workingset_activate_anon 0
workingset_activate_file 57533
workingset_restore_anon 0
workingset_restore_file 33191
workingset_nodereclaim 496
pgfault 566763
pgmajfault 52489
pgrefill 2455322
pgscan 20105745
pgsteal 1752445
pgactivate 2337404
pgdeactivate 2396165
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 7 Fri Jul 15 09:13:37 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 95473
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 95473
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=52.25 avg60=21.01 avg300=5.61 total=18166296
full avg10=50.39 avg60=20.37 avg300=5.44 total=17570050

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 1044480
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 4096
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 733184
active_file 94208
unevictable 0
slab_reclaimable 1549904
slab_unreclaimable 1843536
slab 3393440
workingset_refault_anon 0
workingset_refault_file 1884316
workingset_activate_anon 0
workingset_activate_file 58773
workingset_restore_anon 0
workingset_restore_file 33307
workingset_nodereclaim 496
pgfault 587292
pgmajfault 62840
pgrefill 3655215
pgscan 33569473
pgsteal 2080991
pgactivate 3535994
pgdeactivate 3596013
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 8 Fri Jul 15 09:13:47 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 124556
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 124556
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=62.22 avg60=28.19 avg300=7.71 total=24971288
full avg10=59.74 avg60=27.21 avg300=7.45 total=24110139

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 1044480
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 65536
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 630784
active_file 245760
unevictable 0
slab_reclaimable 1549904
slab_unreclaimable 1843536
slab 3393440
workingset_refault_anon 0
workingset_refault_file 2213087
workingset_activate_anon 0
workingset_activate_file 60240
workingset_restore_anon 0
workingset_restore_file 33516
workingset_nodereclaim 496
pgfault 608744
pgmajfault 73476
pgrefill 4859003
pgscan 47129241
pgsteal 2409762
pgactivate 4737672
pgdeactivate 4799124
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 9 Fri Jul 15 09:13:57 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 150729
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 150729
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=63.56 avg60=33.74 avg300=9.62 total=31468963
full avg10=59.85 avg60=32.24 avg300=9.22 total=30149246

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 1044480
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 65536
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 622592
active_file 253952
unevictable 0
slab_reclaimable 1550128
slab_unreclaimable 1846464
slab 3396592
workingset_refault_anon 0
workingset_refault_file 2541878
workingset_activate_anon 0
workingset_activate_file 62844
workingset_restore_anon 0
workingset_restore_file 34019
workingset_nodereclaim 496
pgfault 631941
pgmajfault 84562
pgrefill 5851979
pgscan 58396971
pgsteal 2738553
pgactivate 5725774
pgdeactivate 5789927
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

– measurement 10 Fri Jul 15 09:14:07 UTC 2022
128 MB_read/s

– /sys/fs/cgroup/lxc.payload.burst1/memory.events
low 0
high 0
max 172472
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.events.local
low 0
high 0
max 172472
oom 0
oom_kill 0

– /sys/fs/cgroup/lxc.payload.burst1/memory.pressure
some avg10=46.12 avg60=34.62 avg300=10.65 total=35507940
full avg10=44.01 avg60=33.14 avg300=10.21 total=34024471

– /sys/fs/cgroup/lxc.payload.burst1/memory.stat
anon 1489952768
file 999424
kernel_stack 344064
pagetables 4087808
percpu 157520
sock 4096
shmem 167936
file_mapped 0
file_dirty 0
file_writeback 0
swapcached 0
anon_thp 0
file_thp 0
shmem_thp 0
inactive_anon 1489985536
active_anon 135168
inactive_file 643072
active_file 188416
unevictable 0
slab_reclaimable 1550128
slab_unreclaimable 1848344
slab 3398472
workingset_refault_anon 0
workingset_refault_file 2870294
workingset_activate_anon 0
workingset_activate_file 65391
workingset_restore_anon 0
workingset_restore_file 35387
workingset_nodereclaim 496
pgfault 653764
pgmajfault 95170
pgrefill 6624339
pgscan 67260257
pgsteal 3067022
pgactivate 6493981
pgdeactivate 6560697
pglazyfree 0
pglazyfreed 0
thp_fault_alloc 0
thp_collapse_alloc 0

Has this issue been resolved?