
In our company, we use a headless Linux machine as a development machine. However, users sometimes consume all of its resources (CPU, RAM), which disrupts the work of others. We therefore want to cap the amount of resources a user can consume with a single process.

Is there a utility on Linux that can limit the total amount of resources a user may use, or that automatically kills processes that consume too many resources?
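For the per-process part of the question, the classic mechanism is `setrlimit`, exposed in the shell as the `ulimit` builtin. A minimal sketch (the limit values are illustrative, not recommendations):

```shell
# Run one command under per-process caps using the shell's ulimit builtin
# (setrlimit under the hood). The subshell keeps the limits from leaking
# into the interactive shell.
(
  ulimit -v $((1024 * 1024))  # -v takes KiB, so this caps address space at ~1 GiB
  ulimit -t 60                # the kernel signals the process after 60 CPU-seconds
  python3 -c 'print("running under limits")'
)
```

Note that `ulimit` limits apply per process, not per user in total, so a user can still start many such processes; that is the gap the comments below address with cgroups.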

  • This does not quite answer my question. I'd ideally like to set this up on a per-user basis, and I don't know in advance which processes they will run. Of course, I could try to limit the resources of the python process with cgroups, which would cover most cases, but not all of them. – Green绿色 Mar 04 '22 at 08:49
  • Is there any easy way to categorise the different kinds of processes (e.g. compiling software, running heavy calculations, transcoding videos, etc)? If so, maybe the easiest & best solution is to just build another headless box for stuff that interferes with the primary work of your server. Also, split dev, testing, and production. Depending on what is being run, it may even be worthwhile looking into setting up a cluster with slurm or torque or something, but this is only useful for the kinds of tasks that are amenable to batch scheduling and/or parallel processing with MPI or similar. – cas Mar 04 '22 at 12:16
  • i.e. this may be a problem better solved with policy and/or social pressure than with scripting. – cas Mar 04 '22 at 12:17
  • @Green绿色 cgroups works perfectly for restricting resources per user. – Marcus Müller Mar 04 '22 at 13:49
  • In that case, I should have a closer look at cgroups. Thanks for the recommendation. – Green绿色 Mar 06 '22 at 10:20
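The per-user cgroups approach the comments converge on can be sketched with systemd, which already places each logged-in user's processes in a `user-$UID.slice`. A drop-in file can cap the whole slice; `user-1000.slice` and the specific values here are illustrative assumptions, while `CPUQuota`, `MemoryMax`, and `TasksMax` are standard `systemd.resource-control(5)` directives:

```ini
# /etc/systemd/system/user-1000.slice.d/50-limits.conf
# Caps everything user 1000 runs, across all their processes combined.
[Slice]
CPUQuota=200%      # at most two CPUs' worth of time in total
MemoryMax=8G       # hard memory cap; the OOM killer reclaims beyond this
TasksMax=500       # bound the number of tasks (fork-bomb protection)
```

After creating the drop-in, run `systemctl daemon-reload`; the same properties can also be set at runtime with `systemctl set-property user-1000.slice CPUQuota=200%`.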

0 Answers