
So I have 4 GB RAM + 4 GB swap. I want to create a user with limited RAM and swap: 3 GB RAM and 1 GB swap. Is such a thing possible? Is it possible to start applications with limited RAM and swap available to them without creating a separate user (and without installing any special apps - having just a default Debian/CentOS server configuration, and not using sudo)?

Update:

So I opened a terminal and typed the ulimit command: ulimit -v 1000000, which should be about a 976.6 MiB limit. Next I ran ulimit -a and saw that the limitation was "on". Then I started a bash script that compiles and starts my app in nohup, a long one: nohup ./cloud-updater-linux.sh >& /dev/null &... but after some time I saw:

[screenshot of the build output]

(which would be ok if no limitations were applied - it downloaded some large lib, and started to compile it.)

But I thought I had applied limitations to the shell and all processes launched with/from it with ulimit -v 1000000? What did I get wrong? How do I make a terminal, and all subprocesses it launches, limited in RAM usage?

myWallJSON
    You can't put memory restrictions on a user as a whole, only on each process. And you can't distinguish between RAM and swap usage. If you want finer control, run the user's processes in a virtual machine. – Gilles 'SO- stop being evil' Mar 16 '12 at 23:30
  • @Gilles pretty sure that virtual machines just use cgroups and namespaces, or derivatives of – RapidWebs Aug 15 '14 at 00:38
  • @RapidWebs no they don't. They just emulate the predefined amount of RAM, and the guest OS then decides how to allocate it to the processes. – Ruslan Aug 10 '16 at 16:18
  • Containers (not virtual machines) use cgroups, to limit memory usage. Limiting virtual memory is not a good idea; A process can use a lot of virtual memory, but may only use a little RAM. For example my system has 34359738367 kB of virtual memory allocated, but much less ram. – ctrl-alt-delor Dec 10 '18 at 20:12

4 Answers


ulimit is made for this. You can set up defaults for ulimit on a per-user or per-group basis in

/etc/security/limits.conf

ulimit -v KBYTES sets the maximum virtual memory size. I don't think you can set a maximum amount of swap; it's just a limit on the amount of virtual memory each of the user's processes can use.

So your limits.conf would have the line (for a maximum of roughly 4 GB of memory)

luser  hard  as   4000000
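The per-process nature of this limit is easy to check from a shell (the 1000000 kB figure is just an example): the cap is inherited by child processes, but each child gets its own allowance rather than sharing a pool.

```shell
# Set a virtual-memory cap in a subshell and read it back (units: kilobytes).
# Every process forked from that subshell inherits the same per-process cap.
bash -c 'ulimit -v 1000000; ulimit -v'    # prints 1000000
```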

UPDATE - CGroups

The limits imposed by ulimit and limits.conf are per-process. I definitely wasn't clear on that point.

If you want to limit the total amount of memory a user uses (which is what you asked), you want to use cgroups.

In /etc/cgconfig.conf:

group memlimit {
    memory {
        memory.limit_in_bytes = 4294967296;
    }
}

This creates a cgroup that has a max memory limit of 4GiB.

In /etc/cgrules.conf:

luser   memory   memlimit/

This will cause all processes run by luser to be run inside the memlimit cgroup created in cgconfig.conf.
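On many distributions these files are not picked up automatically. As a sketch (package and service names vary; on Debian the tools come from the cgroup-tools/cgroup-bin package), you would parse the config and start the rules daemon:

```
cgconfigparser -l /etc/cgconfig.conf   # create the cgroup defined in cgconfig.conf
cgrulesengd                            # daemon that moves processes into cgroups per cgrules.conf
```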

utopiabound
    is such thing settable on useradd? – myWallJSON Mar 16 '12 at 13:41
    @myWallJSON Not directly, but you can immediately add it to limits.conf, or you can setup a group with certain limits in limits.conf and add user to that group. – utopiabound Mar 16 '12 at 14:56
    That's awesome! I didn't know you could do this! Great answer +1 – Yanick Girouard Mar 16 '12 at 16:50
    @utopiabound: Updated my Q with some data I got trying to use ulimit. – myWallJSON Mar 16 '12 at 22:24
  • cat /proc/cgroups and look up for memory to see if it's supported by your kernel version. – Daniel C. Sobral Oct 22 '12 at 14:26
  • @DanielC.Sobral: And in my case, it's not (Debian 2.6.32). Any tips? – f.ardelian Feb 11 '13 at 00:09
    @f.ardelian Upgrade the kernel. Here's an article about how to do just that! – Daniel C. Sobral Feb 11 '13 at 01:05
  • @utopiabound to make /etc/cgrules.conf, do we need to start cgred service ? – Jigar Dec 03 '14 at 18:22
  • Note that to make this work, on Debian, you should run cgconfigparser -l /etc/cgconfig.conf and start the daemon cgrulesengd. – a3nm Mar 11 '15 at 23:41
  • Is there a way to simply test if the user limit is effective? – n1000 Apr 29 '15 at 07:10
  • I found a tutorial based on the current answer: https://enotacoes.wordpress.com/2014/05/06/limiting-memory-and-cpu-per-user/ – GreenRover Nov 19 '15 at 13:04
  • dd if=/dev/zero of=/dev/shm/crap bs=1M count=9999999999999 Now, I ate up all your RAM. – Nehal J Wani Aug 06 '16 at 19:20
  • Important also to consider limiting swap space: "Processes in a cgroup that does not have the memory.memsw.limit_in_bytes parameter set can potentially use up all the available swap (after exhausting the set memory limitation) and trigger an Out Of Memory situation caused by the lack of available swap. " https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-memory.html – kentt Jan 11 '17 at 09:17
  • Suppose we used this method to assign a group (say @luser) rather than a single user to the memlimit cgroup. Would each member of the group have the limit_in_bytes individually or would they all have to be sharing that amount between each other? – lampShadesDrifter Mar 15 '18 at 21:11
  • so simple? The official documentation for Ubuntu 16.04 of cgroups is way more complex... Is what is indicated enough? No restart of services needed ? – Antonello Apr 13 '18 at 12:47
  • Do we need to update grub to make this work? Is it safe updating grub? Enable memory management; edit /etc/default/grub GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1" – Abolfazl Nov 16 '19 at 10:56

cgroups are the right way to do this, as other answers have pointed out. Unfortunately there is no perfect solution to the problem, as we'll get into below. There are a bunch of different ways to set cgroup memory usage limits. How one goes about making a user's login session automatically part of a cgroup varies from system to system. Red Hat has some tools, and so does systemd.

memory.memsw.limit_in_bytes and memory.limit_in_bytes set limits including and not including swap, respectively. The downside of memory.limit_in_bytes is that it counts files cached by the kernel on behalf of processes in the cgroup against the group's quota. Less caching means more disk access, so you're potentially giving up some performance if the system otherwise had some memory available.

On the other hand, memory.soft_limit_in_bytes allows the cgroup to go over-quota, but if the kernel OOM killer gets invoked then those cgroups which are over their quotas get killed first, logically. The downside of that, however, is that there are situations where some memory is needed immediately and there isn't time for the OOM killer to look around for processes to kill, in which case something might fail before the over-quota user's processes are killed.
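In cgconfig.conf terms, the knobs above could be combined like this (a cgroups v1 sketch; the 4 GiB/2 GiB values are examples, and memory.memsw.* requires swap accounting to be enabled in the kernel):

```
group memlimit {
    memory {
        # RAM (plus page cache charged to the group)
        memory.limit_in_bytes = 4294967296;
        # RAM + swap combined; also caps swap usage
        memory.memsw.limit_in_bytes = 4294967296;
        # soft limit: may be exceeded, but makes the group
        # a preferred target under memory pressure
        memory.soft_limit_in_bytes = 2147483648;
    }
}
```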

ulimit, however, is absolutely the wrong tool for this. ulimit places limits on virtual memory usage, which is almost certainly not what you want. Many real-world applications use far more virtual memory than physical memory. Most garbage-collected runtimes (Java, Go) work this way to avoid fragmentation. A trivial "hello world" program in C, if compiled with address sanitizer, can use 20TB of virtual memory. Allocators which do not rely on sbrk, such as jemalloc (which is the default allocator for Rust) or tcmalloc, will also have virtual memory usage substantially in excess of their physical usage. For efficiency, many tools will mmap files, which increases virtual usage but not necessarily physical usage. All of my Chrome processes are using 2TB of virtual memory each. I'm on a laptop with 8GB of physical memory. Any way one tried to set up virtual memory quotas here would either break Chrome, force Chrome to disable some security features which rely on allocating (but not using) large amounts of virtual memory, or be completely ineffective at preventing a user from abusing the system.
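The gap between virtual and physical memory is easy to demonstrate (a minimal sketch; the 1 GiB size is arbitrary): an anonymous mapping inflates a process's virtual size immediately, but physical pages are only committed as they are touched.

```python
import mmap

# Reserve 1 GiB of anonymous virtual memory. This counts fully against
# `ulimit -v`, yet almost none of it is backed by RAM until written to.
m = mmap.mmap(-1, 1 << 30)
m[0] = 1          # touching one byte commits only a single physical page
print(len(m))     # prints 1073741824
m.close()
```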

  • Some parts of the last paragraph of this answer are misinformed. The golang runtime reserves a 1TB+ of contiguous virtual memory address space by calling mmap with PROT_NONE. Reserving address space PROT_NONE does not create any page table entries and does not count toward virtual memory use. Later, as needed, the application creates the page table entries by calling mmap with MAP_FIXED and an address inside the reserved range. That mapped memory will count toward the limit. – Andrew Thaddeus Martin Jan 26 '21 at 17:35
  • You are partially correct, however a simple go "hello world" executable will still consume ~1gb of virtual "memory". Not 1TB+, but still a lot compared to physical usage. If you're running a lot of go processes (or similar), your total virtual memory usage can easily exceed physical memory by significant margins. – Adam Azarchs Aug 10 '21 at 23:56

You cannot cap memory usage at the user level; ulimit can do that, but only for a single process.

Even with per-user limits in /etc/security/limits.conf, a user can use all memory by running multiple processes.

Should you really want to cap resources, you need to use a resource management tool, like rcapd used by projects and zones under Solaris.

There is something that seems to provide similar features on Linux that you might investigate: cgroups.

jlliagre

On systems with systemd (e.g. the Ubuntu 22.04 I am using), the simplest way to constrain CPU/memory is with cgroups via systemd config files, e.g. to constrain users to 4 GB RAM and 2 threads:

nano /etc/systemd/system/user-.slice.d/50-memory.conf

And write in that file:

[Slice]
MemoryMax=4G
CPUQuota=200%

(then run systemctl daemon-reload to apply)

This applies to all users, but you can override individual users in /etc/systemd/system/user-[uid].slice.d/50-memory.conf

(I think the name of the conf file doesn't matter but I am not sure)
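A related trick for a one-off command rather than a whole user session (a sketch; it requires a running systemd user session): systemd-run can apply the same resource-control properties to a single process as a transient scope.

```
systemd-run --user --scope -p MemoryMax=1G -p CPUQuota=200% ./cloud-updater-linux.sh
```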

Antonello
  • 1,023