I reckon it's not such an uncommon problem: one process allocates massive amounts of memory (be it due to a memory-leak bug, because you try to process an infeasibly large input file, or whatever). The RAM fills up, and at some point Linux has to switch over to swap. Sometimes that is exactly what I want as a last resort: if I have an expensive computation running, I do not want to lose data just because I run out of RAM towards the end.
Rather more often, however (in my experience), the memory consumption is unbounded, caused by a rogue, perhaps buggy process. I.e., I do not just end up with some less urgently needed data moved to swap; the OS is forced to frantically swap out loads of data. And that unfortunately does not just cripple the offending process, it can bring the whole system to almost a standstill (it's not quite as bad anymore on machines with an SSD, but OTOH it makes me worry whether writing gigabytes and gigabytes of garbage data may in the long term harm the flash cells).
Until I notice the problem and manually kill the process (once it actually took minutes until I even got myself logged into a virtual terminal!), half my running session is in swap, and I need to wait quite a while until the system runs smoothly again.
There is one draconian solution to the problem: enforce a hard memory limit. But doing this system-wide would sometimes kill processes that I still need, and if I have to manually run `ulimit` before starting an offending process... well, I'll often forget until it's too late.
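For concreteness, this is the kind of manual step I mean; the 2 GiB figure and the program name are just made-up examples:

```
# Cap the virtual memory of this shell (and everything started from it)
# at roughly 2 GiB; bash's ulimit -v takes the value in KiB.
ulimit -v 2097152
./possibly-leaky-program huge-input.dat
```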
Possible kinds of solution I'd be happier with:
- If any process exceeds a certain memory usage, it's artificially throttled down so the rest of the system stays responsive.
- If any process exceeds a certain memory usage, it's `SIGSTOP`ped so I have time to figure out what to do next (a rough sketch of what I have in mind follows below this list).
- If a process approaches the RAM limit, I get a warning, before the great swapping starts.
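To make the `SIGSTOP` idea concrete, here is a minimal watchdog sketch; the PID and the threshold are values I would have to supply myself, and it only uses plain `ps` and `kill`:

```
#!/bin/bash
# Watch one process (given by PID) and freeze it with SIGSTOP once its
# resident set size exceeds a threshold. ps reports RSS in KiB.
PID=$1
LIMIT_KB=${2:-4194304}   # default threshold: 4 GiB, purely illustrative

while kill -0 "$PID" 2>/dev/null; do            # loop while the process exists
    rss=$(ps -o rss= -p "$PID" | tr -d '[:space:]')
    if [ "${rss:-0}" -gt "$LIMIT_KB" ]; then
        echo "process $PID reached ${rss} KiB RSS, stopping it" >&2
        kill -STOP "$PID"                       # freeze it so I can decide what to do
        break
    fi
    sleep 5
done
```

I could save this as, say, `watchdog.sh` and run `./watchdog.sh <pid>` next to a long-running job, but that still requires me to remember to start it, which is exactly the problem.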
Is there any way to get such behaviour, or something similar?