24

There are plenty of questions and answers about constraining the resources of a single process; e.g., RLIMIT_AS can be used to constrain the maximum virtual memory allocated by a process, which is what shows up as VIRT in the likes of top. More on the topic can be found, e.g., in Is there a way to limit the amount of memory a particular process can use in Unix?
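
For a single process this is easy to try from a shell: bash's ulimit builtin sets RLIMIT_AS via -v, in KiB. A minimal illustration (some_program is a placeholder):

(ulimit -v 1048576; exec some_program)   # cap RLIMIT_AS at 1 GiB for this one command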

setrlimit(2) documentation says:

A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2).

It should be understood in the following way:

If a process has an RLIMIT_AS of e.g. 2 GB, then it cannot allocate more memory than 2 GB. When it spawns a child, the 2 GB address space limit is passed on to the child, but the child's usage is counted from zero. The two processes together can take up to 4 GB of memory.
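
A sketch of those semantics in shell, assuming bash; both program names are placeholders:

(
  ulimit -v 2097152    # 2 GiB RLIMIT_AS for this subshell
  child_program &      # inherits the same 2 GiB limit, counted from zero
  parent_program       # each process is checked individually,
)                      # so together they may reach 4 GiB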

But what would be a useful way to constrain the sum total of memory allocated by a whole tree of processes?

jpe
  • Related: http://unix.stackexchange.com/questions/1424/is-there-a-way-to-limit-the-amount-of-memory-a-particular-process-can-use-in-uni – slm Jun 04 '14 at 12:37
  • I'd take a look at cgroups. – slm Jun 04 '14 at 12:38
  • @slm Thanks! Sounds like cgroups is something to try. The only solution found so far that might work (besides an ugly hack of summing memory usage with ps and killing the parent process if it exceeds the limit) is using some form of container (lxc or the like). – jpe Jun 04 '14 at 12:55
  • Yeah - the tools I'm aware of do not do a group, just single processes, but given how cgroups work for VM technologies like LXC and Docker I'd expect it to do what you want. – slm Jun 04 '14 at 13:00
  • Under which Unix variant? – Gilles 'SO- stop being evil' Jun 04 '14 at 21:56
  • @Gilles It would be good to know how to do it in Linux (the environment where I encountered the problem), but answers for OpenSolaris/Illumos, OSX, BSD are welcome too (e.g. in (Open)Solaris/Illumos it should be easy, right?). – jpe Jun 05 '14 at 06:23
  • @jpe Given that different unix variants are likely to do this in very different ways, it would be better to have one question per variant. – Gilles 'SO- stop being evil' Jun 05 '14 at 11:15
  • @Gilles OK, let the current question be about Linux as the man page excerpt is from Linux. – jpe Jun 05 '14 at 12:21
  • If it's Linux, put the parent PID in its own namespace and control it and all its children that way. Here's an introductory answer to that concept: http://unix.stackexchange.com/a/124194/52934 – mikeserv Jun 07 '14 at 20:53
  • @mikeserv looks like something in the right direction too. But which would be the way that would work in most up-to-date distributions, cgroups or containers/namespaces? – jpe Jun 08 '14 at 09:31
  • namespaces are containers - just native and handled fully in kernel. And much of the control in control groups is what makes that possible. namespaces finally rolled out production ready circa kernel 3.8. If that last was a small intro - here's the inside out: http://lwn.net/Articles/531114/ – mikeserv Jun 08 '14 at 11:34
  • @mikeserv It seems the chat is converging to something constructive: namespaces are a solution and probably the solution. What remains to be said is how to use them in a user-friendly way that would work across most distros with a recent enough kernel. – jpe Jun 08 '14 at 12:00
  • I completely agree - but I doubt very seriously if I can help you much more - I don't have any practical experience with them. I'm kind of hoping you'll dig into that 7 part series at Linux Weekly News and share your own... That's why - for my part at least - this chat is in the comments block of the question and not an answer... – mikeserv Jun 08 '14 at 12:08
  • What you are trying to achieve may be impossible and dangerous, because you may kill or crash the process tree anyway once you run out of your 2 GB allocation. That's why a spawned process is a copy of the parent process. – Tasos Jun 10 '14 at 19:14

3 Answers

15

I am not sure if this answers your question, but I found this Perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits: it wakes up periodically and checks the resource usage of the process and its children. It seems to be well documented and explained, and it has been updated recently.
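
I haven't reproduced the script here, but the general idea can be sketched in plain shell; everything below (the limit value, the polling interval, the signal choice) is illustrative only:

#!/bin/sh
# Illustrative sketch: every second, sum the RSS of a process and all of
# its descendants, and kill the whole tree if the sum exceeds a limit.
LIMIT_KB=2097152    # 2 GiB, in KiB as reported by ps
PID=$1

descendants() {     # recursively list all children of $1 (uses pgrep from procps)
    for c in $(pgrep -P "$1"); do
        echo "$c"
        descendants "$c"
    done
}

while kill -0 "$PID" 2>/dev/null; do
    pids="$PID $(descendants "$PID")"
    total=$(ps -o rss= -p "$(echo $pids | tr ' ' ',')" | awk '{s+=$1} END {print s+0}')
    if [ "$total" -gt "$LIMIT_KB" ]; then
        kill -TERM $pids
        break
    fi
    sleep 1
done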

As slm said in his comment, cgroups can also be used for this. You might have to install the utilities for managing cgroups; assuming you are on Linux, look for the libcgroup package (often shipped as cgroup-tools or libcgroup-tools).

sudo cgcreate -t $USER:$USER -a $USER:$USER -g memory:myGroup

Make sure $USER is your user.

Your user should then have access to the cgroup memory settings in /sys/fs/cgroup/memory/myGroup.

You can then set the limit to, let's say, 500 MB by doing this:

echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
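
Reading the file back confirms the setting took; note the kernel rounds the value to a multiple of the page size, so the stored number may differ slightly from what you wrote:

cat /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes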

Now let's run Vim:

cgexec -g memory:myGroup vim

The vim process and all its children should now be limited to 500 MB of RAM. However, I think this limit applies only to RAM and not to swap: once the processes reach the limit, they will start swapping. I am not sure if you can get around this; I cannot find a way to limit swap usage using cgroups.
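
That said, if your kernel has swap accounting enabled (the CONFIG_MEMCG_SWAP option mentioned in a comment below; some distributions additionally require booting with swapaccount=1), the v1 memory controller exposes a combined RAM-plus-swap limit. A sketch, assuming that support is present:

# memory.memsw.limit_in_bytes caps RAM + swap together and must be set
# at least as high as memory.limit_in_bytes
echo 500000000 > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes

Also, if the process tree is already running, the same cgroup utilities provide cgclassify for moving an existing process into the group (children it forks after the move inherit the group); the PID here is a placeholder:

sudo cgclassify -g memory:myGroup 1234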

techraf
arnefm
  • The proposed solution does make it possible to limit the resident set size of a tree of processes. The behaviour seems to be different from RLIMIT_AS: it is possible to malloc more memory than the limit, but it seems not to be possible to actually use more. – jpe Jun 13 '14 at 17:03
  • By default, the cgroup memory limit applies only to (approximately) the physical RAM use. There's a kernel option (CONFIG_MEMCG_SWAP) to enable swap accounting; see the kernel docs for details. – Søren Løvborg Jan 23 '15 at 15:47
  • On Fedora, sudo yum install libcgroup-tools – jozxyqk Mar 10 '15 at 11:46
  • Note that if your OS is running systemd (pretty much all Linux distros these days) you're not supposed to use cgmanager or cgcreate, as far as I know. I think the officially supported systemd way is to use systemd-run --scope -p MemoryLimit=500M ... but it has been buggy in many distros, so make sure to test whether it actually works with your distro. In my experience, some versions will silently fail – they will run the command but will not limit the memory usage. – Mikko Rantalainen May 09 '22 at 09:29
13

https://unix.stackexchange.com/a/536046/4319:

On any systemd-based distro you can also use cgroups indirectly through systemd-run. E.g. for your case of limiting pdftoppm to 500M of RAM, use:

systemd-run --scope -p MemoryLimit=500M pdftoppm

...
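
On newer distributions using the unified cgroup v2 hierarchy, the property has been renamed: as the comments elsewhere on this page note, MemoryMax replaces the deprecated MemoryLimit. A hedged equivalent:

systemd-run --scope -p MemoryMax=500M pdftoppm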

3

I created a script that does this, using commands from cgroup-tools to run the target process in a cgroup with limited memory. See this answer for details and the script.
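
As a rough illustration of what such a wrapper does (the group name and limit are placeholders, and this assumes the v1 memory controller plus the cgroup-tools commands):

#!/bin/sh
# Sketch: create a memory cgroup, cap it, run the given command tree
# inside it, then remove the group again.
GROUP=limit_$$
sudo cgcreate -t "$USER:$USER" -a "$USER:$USER" -g "memory:$GROUP"
cgset -r memory.limit_in_bytes=500M "$GROUP"
cgexec -g "memory:$GROUP" "$@"
sudo cgdelete -g "memory:$GROUP"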

JanKanis
  • Thanks for the short cgm guide, it was useful – Vitaly Isaev Sep 18 '17 at 16:20
  • According to Poettering (the creator of systemd) you should not run cgmanager on a system that's running systemd (that is, any modern Linux distro). Your distro is supposed to use cgroupv2 and you can run systemd-run --user -p MemoryMax=42M ... – however, if your system is not cgroupv2 compatible, that command will appear to work but the memory usage is not actually limited in practice. – Mikko Rantalainen Sep 04 '21 at 15:34