On my server, I have a long-running batch process, process_a, which has been running for several days now. I don't mind how long the process takes. The problem is that process_a always hogs ALL (99%) of the memory (RAM) to itself. I have process_b, which I really need to run now. However, process_b always hangs, presumably because of insufficient RAM.
Is there a way to limit a process' memory usage while it is running?
I do not want to restart process_a because all the progress made could be lost. I am not the owner of the program that runs process_a, so I cannot modify process_a to save progress checkpoints at regular intervals. I am thinking of maybe somehow forcing half the memory of process_a to be dumped to swap in order to regain some memory for process_b.
All the answers to this question and this question do not address the fact that the process is running.
You can use cgclassify to move an existing process to a cgroup, so you can use this answer: https://unix.stackexchange.com/a/125024/260978. Example for cgclassify: https://unix.stackexchange.com/a/40247/260978 – Olorin Mar 06 '19 at 05:38
The first answer uses cgexec -g memory:myGroup pdftoppm, which means that I need to start the process with cgexec in the first place. The second answer also does not mention anything about limiting the process' memory usage while it is running. Do you have any reference to support that cgclassify can be used to reduce a process' RAM usage while the process is running? – krismath Mar 06 '19 at 06:04
If process_a is running with full RAM, when I set a limit using cgclassify, what happens to the used memory that is over the limit? – krismath Mar 06 '19 at 06:28
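A minimal sketch of the cgclassify approach mentioned in the comments above, assuming cgroup v1 and the cgroup-tools (libcgroup) utilities are installed; the group name limited, the 4G cap, and the pgrep lookup of process_a are illustrative placeholders, not values taken from the thread:

    # create a cgroup with the memory controller attached (group name is illustrative)
    sudo cgcreate -g memory:limited
    # cap how much RAM the group may use (placeholder value)
    sudo cgset -r memory.limit_in_bytes=4G limited
    # move the already-running process into the group by its PID
    sudo cgclassify -g memory:limited "$(pgrep -o process_a)"

On a cgroup v2 system the equivalent would be to create a directory under /sys/fs/cgroup, write the cap to its memory.max file, and write the PID to its cgroup.procs file. How memory that process_a has already allocated is treated once the limit applies depends on the kernel's reclaim behaviour, which is what the last comment is asking about.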