I use Firefox 83 on my Devuan GNU/Linux Beowulf desktop.
Unfortunately, for reasons which are not clear to me, FF has memory issues which cause it to gradually take up more and more of my system's memory. It's tolerable as long as I'm using the system, but when I leave it alone for several hours, I find it has swapped everything else out, and my system takes a good several minutes of disk I/O to get back into shape (usually after I've run `killall firefox-bin` in a text VT).
I've decided I want to hard-cap FF's physical and/or swap memory usage. I've read the ServerFault post "Limit memory usage for a single linux process", and it suggests lots of ways to do this:
- Wrap process execution with the `timeout` Perl script
- Define a memory-limited process control group (the cgroups mechanism), then wrap process execution with `cgexec` (see the sketch just after this list)
- Use a complex cgroup-based wrapper script
- Wrap process execution in a script which sets `ulimit` (a rough wrapper is sketched further down)
- Use a `monit` daemon to kill Firefox under certain conditions, e.g. when it goes beyond a certain amount of memory while the machine is idle (though idleness might be difficult to detect; a rough config stanza is at the end of this post)
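For concreteness, here's roughly what I understand the `cgexec` variant to involve (just a sketch; it assumes cgroup v1 with the `cgroup-tools` package, and the group name `ffcap` and the 4 GiB cap are placeholders I made up):

```sh
# Create a memory-limited cgroup owned by my user
# ("ffcap" and the 4 GiB figure are arbitrary):
sudo cgcreate -t "$USER":"$USER" -a "$USER":"$USER" -g memory:ffcap
echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/ffcap/memory.limit_in_bytes
# (memory.memsw.limit_in_bytes would cap RAM+swap together, but as far
# as I know only if the kernel was booted with swapaccount=1.)

# Launch Firefox inside the group; child processes inherit the cgroup,
# so the cap should cover Firefox's children as well.
cgexec -g memory:ffcap firefox
```

One thing I can't tell from the docs alone is what happens when the cap is hit - does the kernel OOM-kill something inside the group, or does Firefox degrade gracefully?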
But for the life of me I can't decide which one to try. Can I get some pros and cons of the different methods?
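For reference, the simplest variant I'm picturing is the `ulimit` wrapper, something like the following (the 4 GiB value is arbitrary, I've assumed the real binary lives at /usr/bin/firefox, and I'm aware `ulimit -v` caps virtual address space rather than resident memory):

```sh
#!/bin/sh
# Cap the virtual address space at 4 GiB (ulimit -v takes KiB).
# Caveat: the limit applies per process, so each Firefox child
# process gets its own 4 GiB rather than sharing one pool.
ulimit -v $((4 * 1024 * 1024))
exec /usr/bin/firefox "$@"
```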
Notes:
- Remember that Firefox is often already launched via a wrapper script; also, it launches child processes.
- The machine is an Intel i5-7600K with 16 GB of physical RAM.
- I occasionally use other significant memory consumers (e.g. an in-memory DB I play with); but the machine is not a dedicated server or anything - just my desktop.
- If you need more information about my usage profile, ask.
- If you have another alternative to those listed above, you can add pros and cons for that too.
- You can cover just one or two alternatives you have experience with, no need to discuss all options.
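And for completeness, the monit variant I have in mind would be something along these lines (pieced together from monit's process-check examples; the match pattern, the 8 GiB threshold and the cycle count are all guesses on my part):

```
check process firefox matching "firefox"
  # If Firefox (including children - "totalmem" should cover them)
  # stays above 8 GiB for 5 monitoring cycles, kill it.
  if totalmem > 8192 MB for 5 cycles then exec "/usr/bin/killall firefox-bin"
```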