
Recently (like 1-2 months ago) my laptop has started to become sluggish for short periods (~1 minute).

It is due to kswapd. I would love it if kswapd preemptively freed up RAM slowly in the background, but it is a pain when it makes the computer sluggish.
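For reference, kswapd's reclaim activity can be watched with something like this (counter names as on this 5.4 kernel):

$ grep -E 'pgscan_kswapd|pgsteal_kswapd|pswpin|pswpout' /proc/vmstat
$ vmstat 1    # watch the si/so columns while the sluggishness happens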

As you can see, there is 15.5 GiB of buff/cache and hardly any I/O wait.

top - 22:08:55 up 5 days, 10:48, 47 users,  load average: 2.97, 1.60, 1.12
Tasks: 663 total,   3 running, 640 sleeping,   3 stopped,  17 zombie
%Cpu(s): 17.5 us, 26.4 sy,  0.0 ni, 55.1 id,  0.7 wa,  0.0 hi,  0.3 si,  0.0 st
GiB Mem :     31.2 total,      0.3 free,     15.4 used,     15.5 buff/cache
GiB Swap:    158.3 total,    139.6 free,     18.6 used.      0.2 avail Mem 
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND            
1647145 tange     20   0   16.8g   3.6g 266672 R 189.8  11.4   1453:59 GeckoMain          
    125 root      20   0       0      0      0 S  54.8   0.0  27:56.87 kswapd0            
   9508 tange     20   0 4040616  76224   3436 S   7.2   0.2 238:50.24 lbrynet            
 416426 root      20   0 1101072  99296  82680 S   7.2   0.3 108:39.06 Xorg               
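
For what it is worth, /proc/meminfo shows how much of that buff/cache is shmem/tmpfs (which cannot simply be dropped, only swapped out) versus reclaimable page cache:

$ grep -E '^(MemAvailable|Buffers|Cached|Shmem|SReclaimable):' /proc/meminfo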

What causes kswapd to go crazy when there is 50% RAM available? How can I change this?

Earlier, kswapd would only become active when around 1-2 GB was available, and I would like to go back to that behaviour.
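If something has changed the zone watermarks, that would explain kswapd waking earlier; they can be inspected with:

$ cat /proc/sys/vm/min_free_kbytes
$ cat /proc/sys/vm/watermark_scale_factor   # default 10; larger values make kswapd start earlier and reclaim more before sleeping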

As a (poor) workaround I have enabled zswap. It helps a little: the system still swaps, but now it swaps to compressed RAM.
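(zswap can be checked and toggled at runtime via its module parameters:)

$ grep -r . /sys/module/zswap/parameters/
$ echo 1 | sudo tee /sys/module/zswap/parameters/enabled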

I have changed /proc/sys/vm/swappiness from 60 to 20, but I still see it happening.
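(For completeness, this is how such a change can be applied and made persistent; the file name under /etc/sysctl.d is arbitrary:)

$ sudo sysctl -w vm.swappiness=20
$ echo 'vm.swappiness = 20' | sudo tee /etc/sysctl.d/99-swappiness.conf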

$ uname -a
Linux aspire 5.4.0-88-generic #99-Ubuntu SMP Thu Sep 23 17:29:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue.net
Ubuntu 20.04.3 LTS
$ mount |grep tmpfs|field 3|parallel -Xj1 df
Filesystem     1K-blocks   Used Available Use% Mounted on
udev            16303656      0  16303656   0% /dev
tmpfs            3270404   2464   3267940   1% /run
tmpfs           16352020 534520  15817500   4% /dev/shm
tmpfs               5120      4      5116   1% /run/lock
tmpfs           16352020      0  16352020   0% /sys/fs/cgroup
tmpfs            1000000      0   1000000   0% /Mnt/ram
tmpfs            3270404   2464   3267940   1% /run/snapd/ns
tmpfs            3270404    284   3270120   1% /run/user/1000
none             3270404      0   3270404   0% /tmp/shm/parallel
$ cat /proc/sys/vm/vfs_cache_pressure
100
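
(field prints the given column of its input; without it, the same overview of tmpfs usage can be had with plain df:)

$ df -t tmpfs -t devtmpfs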
Ole Tange
    AFAICT, "0.2 avail Mem" in top's output says that your system has almost no available memory, despite all the buffer/cache. I would look at tmpfs-backed (possibly open and deleted) files (e.g. in /tmp or /run, I don't know what happens to be backed by tmpfs on Ubuntu), which count against buffer/cache and can't be evicted from memory without swapping. – fra-san Nov 24 '21 at 22:00
  • @fra-san It does not seem to be the case (see edit). – Ole Tange Nov 25 '21 at 19:51
  • What's the vfs_cache_pressure ? – symcbean Nov 25 '21 at 20:00
  • @symcbean See edit. – Ole Tange Nov 25 '21 at 20:18
  • I wonder if this is related to the behaviour I saw here where the system started swapping heavily even at < 40% usage. That was caused by high IO reading a lot of directories. Focussing on "what changed 2 months ago?": is it possible that something is now indexing your files? The task may be gone by the time you start investigating ... leading to 15GB cache/buffer and a lot of swap usage but no IO. If you can find a process group causing it then maybe it's as simple as using cgroups. – Philip Couling Jan 18 '22 at 12:58
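(Regarding the cgroup suggestion: if a culprit process group can be identified, on a systemd-based system it could be capped with something like the following, so reclaim pressure stays inside that scope. The command name is just a placeholder, and the exact behaviour depends on whether cgroup v2 is in use:)

$ systemd-run --user --scope -p MemoryHigh=4G indexing-command   # placeholder command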

0 Answers