As far as I'm concerned, cgroups would be overkill here. However, I tend to use ulimit whenever I run something with a fork system call in it (bad experiences made it a habit...):
$ ulimit -u 2500
$ ./mypotentiallydeadlyprogram
This way, I put a 2500-process limit on my current shell. Thanks to this, my fork calls will end up failing if they get too numerous, preventing the system from going down and allowing me to furiously hit Ctrl + C.
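To see what that failure looks like, here is a minimal sketch; the limit of 50 and the loop count are arbitrary values chosen just for the demonstration, and the exact error message depends on your shell:
$ ulimit -u 50
$ for i in $(seq 1 100); do sleep 60 & done
bash: fork: retry: Resource temporarily unavailable
(bash keeps retrying until processes free up; other shells word the error differently.)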
On my machine, I find 2500 to be a good limit, but you might want to increase/decrease this value according to what your machine can take, and how far you want your fork bomb to go. Also remember that your machine needs to spawn things to survive; don't suffocate it. I have seen people put this in their ~/.bashrc, thereby restricting even their session's main bash. While this was very funny to the sysadmin, the user was very unhappy to freeze after login.
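If you want this safety net without typing ulimit every time (and without crippling your login shell through ~/.bashrc), a small wrapper function does the trick. This is just a sketch; run_limited is a hypothetical name, and the parentheses spawn a subshell so the limit never touches your interactive session:
run_limited() {
    ( ulimit -u 2500; "$@" )   # limit applies to the subshell only
}
$ run_limited ./mypotentiallydeadlyprogram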
While ulimit can be used to set up a temporary limit, you can set something more permanent if you have root access (and want to enforce the limit on specific users). This can be done through /etc/security/limits.conf:
# <domain> <type> <item> <value>
youruser soft nproc 2500
youruser hard nproc 2750
In the above setup, youruser has a soft limit of 2500 processes and a hard limit of 2750. This file allows you to set up various kinds of limits for various entities on your system (users, groups, ...). Have a look at its documentation if you need more information. Note however that this is system-wide configuration: the limit applies to youruser's processes as a whole, not per shell.
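Assuming pam_limits is enabled (it is on most mainstream distributions; limits.conf is applied at login through PAM), you can verify that a fresh session picks the limits up, for instance:
$ su - youruser -c 'ulimit -Su; ulimit -Hu'
2500
2750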
By the way, /proc/sys/kernel/pid_max contains the maximum PID your kernel can grant. Since PIDs are reused, you can consider this very close to your maximum number of processes.
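You can read it directly; 32768 is a common default on Linux, and on 64-bit systems the kernel caps it at 4194304 (2^22):
$ cat /proc/sys/kernel/pid_max
32768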
ulimits are in place that prevent a user from spawning more than 128 processes. You'll just need some help from root to kill them afterwards. – mirabilos Nov 13 '14 at 17:25
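For the cleanup step the comment mentions, something along these lines should work from a root shell (pkill -u matches processes by their owner):
# pkill -KILL -u youruser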