Minimal test case for a Linux system that does not have swap (or run sudo swapoff -a before testing). Run the following bash one-liner as a normal user:

while true; do date; nice -20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; sleep 5s; done

and run the following bash one-liner in a high-priority root shell (e.g. sudo nice -n -19 bash):

while true; do NS=$(date '+%N'); S=$(( 998000000 - 10#$NS )); S=$(( S > 0 ? S : 0 )); LC_ALL=C sleep "$(printf '0.%09d' "$S")"; date --iso-8601=ns; done

The high-priority process is supposed to run date every second as accurately as possible. However, even though this process runs at niceness -19, the background process running at niceness 20 is able to cause major delays. There seems to be no limit to the latency induced by the low-priority background process: even higher delays can be triggered by increasing the stress --timeout value.

Is there a way to limit the maximum latency, automatically killing stress if that is what it takes? Increasing /proc/sys/vm/user_reserve_kbytes, /proc/sys/vm/admin_reserve_kbytes or /proc/sys/vm/min_free_kbytes does not seem to help.
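
To make the goal concrete, a rough sketch of the kind of userspace watchdog being asked about (purely illustrative: it measures how late each one-second tick fires and kills stress once the lag exceeds a limit, but near OOM even this loop can stall while forking date or pkill):

#!/bin/bash
# Illustrative watchdog, not a guaranteed fix: measure how late each
# one-second tick fires and kill stress once the lag exceeds MAX_LAG_MS.
# Best started beforehand from a high-priority root shell.
MAX_LAG_MS=500
while true; do
    before=$(date '+%s%3N')            # milliseconds since the epoch (GNU date)
    sleep 1
    after=$(date '+%s%3N')
    lag=$(( after - before - 1000 ))   # how much later than 1 s we woke up
    if (( lag > MAX_LAG_MS )); then
        echo "lag ${lag} ms > ${MAX_LAG_MS} ms, killing stress" >&2
        pkill -KILL -x stress
    fi
done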

  • CPU pinning, at least for the highest-priority process, might somewhat mitigate it; I have used it with some success in the past for similar real-world situations (a pinning sketch follows after these comments). Have a look at https://unix.stackexchange.com/questions/417672/how-to-disable-cpu/419555#419555 – Rui F Ribeiro Feb 10 '18 at 18:01
  • I believe that the latency is caused by a near-OOM situation while the high-priority process still needs to launch small new processes. Pinning to another CPU does not help if there is not enough RAM to start even a small new process such as date. As far as I can see, the problem is memory starvation, not CPU starvation. – Mikko Rantalainen Feb 10 '18 at 19:15
  • When you have one, you usually end up having the other. Granted, there are situations where it would not help. Depending on the situation, having a controlled reboot under a watchdog might be preferable to starting to kill things remotely. https://unix.stackexchange.com/questions/366973/restart-system-if-it-runs-out-of-memory/366983#366983 – Rui F Ribeiro Feb 10 '18 at 19:17
  • I think I'm hitting some kernel bug. https://lkml.org/lkml/2017/5/20/186 – Mikko Rantalainen Feb 11 '18 at 18:19
  • See also: https://elinux.org/images/a/a9/ELC2017-Effectively_Measure_and_Reduce_Kernel_Latencies_for_Real-time_Constraints%281%29.pdf – Mikko Rantalainen Apr 02 '20 at 14:05
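
For completeness, the CPU pinning suggested in the first comment can be tried with something like the following; the CPU number and real-time priority are arbitrary example values, and as noted above this addresses CPU contention only, not memory starvation:

# Example only: run a high-priority shell pinned to CPU 3 with SCHED_FIFO priority 50.
# For real isolation that CPU would also have to be kept free of other tasks
# (e.g. via the isolcpus= kernel parameter or cpusets), which is not shown here.
sudo taskset -c 3 chrt --fifo 50 bash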

2 Answers

Please consider trying* the kernel patch from this question, as it seems to do the job (avoid high latency near OOM) for me so far, even when using your code from the question to test it. I am also avoiding a ton of disk thrashing (for example when I compile Firefox, which usually caused the OS to freeze due to running out of memory).
The patch avoids evicting Active(file) pages, thus keeping (at least) the executable code pages in RAM, so that context switches don't cause kswapd0(?) to re-read them from disk (which would cause lots of disk reading and a frozen OS).

* or even suggesting a better way?
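
To see whether the patch is having the intended effect, the file-backed page counters in /proc/meminfo can be watched while the stress test from the question runs: Active(file) holding steady near OOM suggests the executable pages are staying resident, while a collapse towards zero usually coincides with the disk thrashing described above. A simple observation loop (nothing here is specific to the patch, it only reads /proc/meminfo):

# Print the relevant /proc/meminfo counters once a second while testing.
while true; do
    date '+%T'
    grep -E 'MemAvailable|Active\(file\)|Inactive\(file\)' /proc/meminfo
    sleep 1
done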

  • Interesting patch. I think it's a bit heavy-handed, but triggering the OOM Killer sooner is definitely the correct behavior. I guess triggering on increased mm allocation latency is better than avoiding kswapd, but real-time benchmarking could be different. – Mikko Rantalainen Aug 30 '18 at 09:52
  • Were you able to get any dmesg output using the patch that you mentioned in a comment on your question ( lkml.org/lkml/2017/5/20/186 ), which I adapted to 4.18.5 here: https://github.com/constantoverride/qubes-linux-kernel/blob/acd686a5019c7ab6ec10dc457bdee4830e2d741f/patches.addon/malloc_watchdog.patch ? I ask because I got no output from it (unless I missed it), and according to your previous comment I should get some output if mm allocation latency is at play. –  Aug 30 '18 at 13:03
  • Perhaps I should've used a 1 sec timeout? I previously used CONFIG_DETECT_MEMALLOC_STALL_TASK=y and CONFIG_DEFAULT_MEMALLOC_TASK_TIMEOUT=10. I will try 1 sec next, without my patch, then trigger OOM using your stress commands. I should get some dmesg output from the patch if mm allocation latency is detected, right? –  Aug 30 '18 at 13:09
  • I stand corrected: I do get output on dmesg from that Memory allocation stall watchdog patch. –  Aug 30 '18 at 13:48
  • If I also apply my patch, then I don't see any output from the malloc stall watchdog patch, which seems to suggest that allocation latency is either 1. no longer detected, or 2. no longer actually happening (or I'm missing something). –  Aug 30 '18 at 15:22
  • @MikkoRantalainen It is the same Memory allocation stall watchdog patch that you mentioned in a comment on the question, and that's where I originally found it. But the one you just linked is older (15 March 2017, vs 20 May 2017 in the question's comment). It is a good patch and I'm currently keeping both (it and mine) applied. Cheers! –  Aug 31 '18 at 11:03
  • Have you tried to automatically trigger OOM Killer whenever your adapted malloc_watchdog.patch detects a stalling task? I would adapt that patch to set timeout in ms but otherwise I agree with the idea of that patch. – Mikko Rantalainen Aug 31 '18 at 11:04
  • @MikkoRantalainen I have not tried that. I'm not actually a programmer, so my skills are puny :D The idea sounds good (even though I don't know, at this moment, how to implement it). However, I'm thinking that by the time stalling is detected (unless the timeout is set to 1 second?), enough executable pages have been evicted that disk thrashing is in effect, and even the oom-killer might take a while to trigger because of that. –  Aug 31 '18 at 11:12
  • I have a correction to my previous comment, where I stated that if I also apply my patch (in addition to the malloc stall watchdog patch) there are no reports of stalling: I was wrong, because I had mistakenly used a previous kernel build which didn't have the stall patch applied! I've re-run the stress test on the correct kernel and stalling is indeed detected, even for a stress run that was not killed by the OOM-killer; outputs here: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830#gistcomment-2694173 –  Aug 31 '18 at 11:34
  • @MikkoRantalainen Do you think there could be a case where stalling is detected (by that patch) even though plenty (let's say half) of RAM is still freely available? If so, triggering the OOM-killer would kill the program that is using the most RAM at the time (or that has the highest oom_score?), which would be bad because we presumably still have plenty of RAM available in this scenario. –  Aug 31 '18 at 11:58
  • @MikkoRantalainen How about I modify my patch to disallow evicting Active(file) pages ONLY while the malloc stall patch has detected that stalling is currently happening (as opposed to disallowing eviction all the time, as it currently does)? I don't know how to do that yet, though. But why can't the executable code of all active processes simply never be evicted? Sure, evict it if they're inactive, but if they're active, context switching will cause their code to be re-read from disk, so what the kernel currently does (without patches) doesn't make sense to me. –  Aug 31 '18 at 12:05
  • The kernel does not keep enough history per page to do really clever stuff. As far as I know, it basically knows whether a page has been loaded back from swap at some point, but it has no idea how long ago that happened. And I'm not sure it even remembers that it had to re-read the file from disk (in the case of an executable file). – Mikko Rantalainen Sep 02 '18 at 15:36
  • You're right that if one sets up a hard malloc timeout to trigger the OOM Killer, the system may end up killing a process even with half the memory still free. It should not happen in the normal case, but if you're not running a PREEMPT or RT kernel, I guess it could happen because of locking between different kernel threads when multiple user processes use lots of CPU. However, if you're looking for guaranteed latency, killing processes even with 50% of memory free may be exactly what you want! – Mikko Rantalainen Sep 02 '18 at 15:46
  • And by the way, you're doing superb work considering you're not a programmer. I've done user-mode programming for a living since around the year 2000, and kernel-mode programming is still hard for me, too. – Mikko Rantalainen Sep 02 '18 at 15:48
  • I ran a version with custom patches (https://github.com/mikkorantalainen/linux/commits/WIP) for around 2 months and hit some high-latency issues even when the memory allocation stall watchdog did not report anything. Overall the system was pretty stable, but Xorg hung pretty hard after 71 days of uptime. Even in that case, I was able to recover the system by logging in remotely over SSH and running kill -9 <pid-of-xorg-process>. In the end, I think those custom patches did slightly improve system behavior near OOM situations, but they were not able to detect all cases of high latency. – Mikko Rantalainen Nov 18 '18 at 11:58
  • (cont.) I ran the system with a 1-second timeout for reporting high latency during allocations and still got reports only during artificial benchmarks, never in actual real-world high-latency cases. I would guess the real cause behind those high-latency cases is not memory allocation stalling but some kind of near-livelock situation, where forward progress is slow enough to look like stalling to a human, but the kernel does not (yet?) consider it a problem or an actual OOM situation. – Mikko Rantalainen Nov 18 '18 at 12:01

There are a few tools designed to avoid this particular issue, listed with increasing complexity/configurability:

  • earlyoom, probably good enough for desktop/laptop computers (an example invocation follows this list)
  • nohang, a more configurable solution
  • oomd, Facebook's solution for their own servers.
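
For example, a minimal earlyoom setup might look like the line below; the thresholds are illustrative values for its -m (minimum available memory, percent) and -s (minimum free swap, percent) options:

# Illustrative thresholds: start killing the largest process when less than
# 5% of RAM and less than 5% of swap remain available.
sudo earlyoom -m 5 -s 5

Most distributions that package earlyoom also ship a systemd unit, so it can simply be enabled as a service instead of being started by hand.
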
– nat chouf
  • Thanks. I've been running earlyoom, but when I overcommit memory a lot it starts to be too trigger-happy. (I often run with MemTotal: 32GB and Committed_AS: 45-55 GB, which makes MemAvailable often read zero even though the system keeps running fine.) I cannot run oomd due to its dependencies. I guess I need to check out nohang when I have time. – Mikko Rantalainen Mar 19 '19 at 14:17
  • I also use zram to extend the available amount of memory. It works pretty well. I set up a zram device with 75% of my total RAM, with the lz4 algorithm, and I observe compression factors around 4 to 6. This means that when the zram device is full, it takes less than 20% of my total RAM, effectively adding more than 50% of RAM... (when the zram device is empty, it consumes almost no memory). A setup sketch follows below. – nat chouf Mar 29 '19 at 22:43
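
A sketch of that kind of zram setup (the size is illustrative, e.g. 24G is roughly 75% of a 32 GB machine; zramctl picks and prints the device name):

# Create a zram swap device of about 75% of RAM using the lz4 algorithm.
sudo modprobe zram
dev=$(sudo zramctl --find --size 24G --algorithm lz4)
sudo mkswap "$dev"
sudo swapon --priority 100 "$dev"   # prefer zram over any disk-based swap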