
On my server, I have a long-running batch process, process_a, which has been running for several days now. I don't mind how long the process takes. The problem is that process_a always hogs nearly all (99%) of the memory (RAM) to itself. I have process_b which I really need to run now. However, process_b always hangs, presumably because of insufficient RAM.

Is there a way to limit a process' memory usage while it is running?

I do not want to restart process_a because all the progress made could be lost. I am not the owner of the program that runs process_a, so I cannot modify process_a to save progress checkpoints at regular intervals. I am thinking of maybe somehow forcing half of process_a's memory to be dumped to swap in order to free some memory for process_b.

All the answers to this question and this question do not address the fact that the process is already running.

krismath
  • You can use cgclassify to move an existing process to a cgroup, so you can use this answer https://unix.stackexchange.com/a/125024/260978. Example for cgclassify: https://unix.stackexchange.com/a/40247/260978 – Olorin Mar 06 '19 at 05:38
  • Possible duplicate of Limit memory usage for a single Linux process and https://unix.stackexchange.com/questions/19111/use-cgroup-to-limit-the-memory-usage-of-virtualbox – Olorin Mar 06 '19 at 05:39
  • @Olorin The first answer you refer to specifies that the process is executed by cgexec -g memory:myGroup pdftoppm, which means that I need to start the process with cgexec in the first place. The second answer also does not mention anything about limiting a process's memory usage while it is running. Do you have any reference to support that cgclassify can be used to reduce a process's RAM usage while the process is running? – krismath Mar 06 '19 at 06:04
  • That's literally what the other post (example for cgclassify) is about. – Olorin Mar 06 '19 at 06:19
  • So if process_a is running with full RAM, when I set a limit using cgclassify, what happens to the used memory that is over the limit? – krismath Mar 06 '19 at 06:28
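The cgclassify approach discussed in the comments above can be sketched with the libcgroup tools (cgroup v1; requires root). The group name and the 2G limit here are illustrative, and $pid_a is a placeholder for process_a's PID. When the limit is set below the process's current usage, the kernel reclaims pages, pushing them to swap where possible, or invokes the OOM killer if it cannot reclaim enough.

```shell
# Illustrative sketch only (cgroup v1, libcgroup tools, requires root).
cgcreate -g memory:limited                  # create a memory cgroup named "limited"
cgset -r memory.limit_in_bytes=2G limited   # cap RAM; excess pages get reclaimed/swapped
cgclassify -g memory:limited "$pid_a"       # move the already-running process_a into it
```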

2 Answers


If process_a does not use the memory actively, then it will be swapped out when process_b starts.

So if you do not see process_a's memory being swapped out, it is probably because process_a is using the memory actively.
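On Linux you can check how much of a process's memory has actually moved to swap by reading its status file under /proc. As a sketch (using the current shell's own PID as a stand-in for process_a's PID):

```shell
# Show how much of a process's memory currently sits in swap (Linux).
pid=$$                                      # stand-in for process_a's real PID
vmswap=$(grep VmSwap "/proc/$pid/status")   # e.g. "VmSwap:     0 kB"
echo "$vmswap"
```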

So how can you make process_a inactive for a while?

You suspend it.

kill -TSTP $pid

Then you run process_b and let process_a move to swap.

If you want to push more memory out to swap, check out: https://gitlab.com/ole.tange/tangetools/tree/master/swapout

Finally, when process_b is done, you release the brake on process_a:

kill -CONT $pid
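Put together, the suspend/resume cycle can be demonstrated end to end with a harmless stand-in process (a background sleep here; on the real system you would use process_a's PID). This assumes Linux, since it reads the process state from /proc:

```shell
# Demonstrate suspend/resume with a stand-in for process_a.
sleep 60 &
pid=$!
kill -TSTP "$pid"                           # suspend (like SIGSTOP, but catchable)
sleep 1                                     # give the state change a moment
state=$(awk '{print $3}' "/proc/$pid/stat")
echo "after TSTP: $state"                   # T = stopped
kill -CONT "$pid"                           # resume
sleep 1
state2=$(awk '{print $3}' "/proc/$pid/stat")
echo "after CONT: $state2"                  # S = sleeping
kill "$pid"                                 # clean up the stand-in
```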
Ole Tange
Check the priority of the process.

Highest priority is a nice value of -20
Lowest priority is +19
Neutral (default) is 0

If process_a has a higher priority than your requirements call for, reduce it (nice values range from -20 to +19). The higher a process's priority, the larger its share of CPU time.
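If you do want to lower the priority of a running process, renice can change its nice value in place. A small demonstration on a throwaway sleep process (note that this affects CPU scheduling only, not memory usage):

```shell
# Lower the priority (raise the nice value) of a running process.
sleep 30 &
pid=$!
renice -n 10 -p "$pid" >/dev/null           # raising niceness needs no root
ni=$(ps -o ni= -p "$pid" | tr -d ' ')       # read the nice value back
echo "nice value: $ni"                      # 10
kill "$pid"                                 # clean up the stand-in
```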


You can also try the ulimit command and its options.
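Note that ulimit only affects commands started afterwards in the same shell; it cannot cap an already-running process like process_a. A sketch (the 2 GiB figure is illustrative):

```shell
# ulimit applies to commands started later in the same shell session;
# run it in a subshell so the rest of the session is unaffected.
out=$( (ulimit -v 2097152; ulimit -v) )     # cap virtual memory at 2 GiB (value in KiB)
echo "$out"                                 # 2097152
```

In practice you would start the new process inside such a subshell, e.g. `(ulimit -v 2097152; ./process_b)`.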