If I type ``x=`yes` `` in my shell, I eventually get `cannot allocate 18446744071562067968 bytes (4295032832 bytes allocated)`, because `yes` keeps writing into `x` until the shell runs out of memory. As I understand it, I get the `cannot allocate <memory>` message because the kernel's OOM-killer told `xrealloc` there were no more bytes to allocate and that it should exit immediately.
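That failure can be reproduced quickly without actually exhausting RAM. This is a sketch assuming a Linux system where `ulimit -v` (the `RLIMIT_AS` address-space limit) is enforced; the `65536` figure is just an illustrative cap:

```shell
# Cap this subshell's virtual address space (in KiB) so the
# allocation fails after ~50 MB instead of after all of RAM.
# It is bash's own xrealloc that prints "cannot allocate ... bytes"
# when the kernel refuses the request and exits with a nonzero status.
(
  ulimit -v 65536   # 64 MiB cap; dies with the subshell
  x=$(yes)          # fails in well under a second
) 2>/dev/null
echo "capture failed with status $?"
```

Because the limit is set inside a subshell, the interactive shell you ran this from is unaffected.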
But when I ask `game_engine` to allocate more graphics memory than my GPU actually has, it falls back on my system RAM (and CPU time) to provide the requested memory instead.
Why doesn't the kernel's OOM-killer ever catch `game_engine` when it tries to allocate tons of memory, the way it does with ``x=`yes` ``?
That is, if I'm running `game_engine` and haven't spawned any new processes since starting that memory hog, why does it always succeed in bringing my system to its unresponsive, unrecoverable knees without the OOM-killer killing it?
I use game engines as an example because they tend to allocate tons and tons of memory on my poor little integrated graphics card, but this seems to happen with many resource-intensive X processes.
Are there cases in which the OOM-killer is ineffective or unable to reclaim a process's memory?
`ulimit` does not work for the generic case because it only limits memory per process. The system can still be taken down by a misbehaving process that both forks and consumes lots of memory. One needs to use `cgroups` to really force the system to never go down because of misbehaving processes, or to rely on the OOM killer. The OOM killer requires zero configuration, so that is what most people try to deal with. In pretty much all cases you're still hosed: if `cgroups` or the OOM killer needs to kill your misbehaving but important process, how good is your system after that process has been killed? – Mikko Rantalainen Feb 03 '18 at 12:02
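The `cgroups` approach the comment mentions can be sketched with the cgroup v2 filesystem interface. This is a configuration sketch only, not from the original post: it assumes root, a cgroup2 mount at `/sys/fs/cgroup`, and a hypothetical group name `games`; the 2G figure is arbitrary.

```shell
# Create a cgroup with a hard memory cap that covers the whole
# process tree, forks included -- unlike a per-process ulimit.
mkdir /sys/fs/cgroup/games
echo 2G > /sys/fs/cgroup/games/memory.max       # hard cap for the whole group
echo 0  > /sys/fs/cgroup/games/memory.swap.max  # don't let it thrash swap instead
echo $$ > /sys/fs/cgroup/games/cgroup.procs     # move this shell (and children) in
./game_engine                                    # inherits the limit
```

On a systemd machine, `systemd-run --user --scope -p MemoryMax=2G ./game_engine` achieves a similar confinement without touching cgroupfs by hand.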