Q. How do you think about how much swap space to allocate?
Does Linux (Debian 10 in this case) need enough swap space to swap out some VM pages, plus extra room to swap in other pages, so that RAM + swap size must be considerably larger than the total working set to avoid thrashing badly?
Assumptions:
- Consider worker machines on Google Compute Engine, each running one payload process at a time and needing only one CPU. Rules of thumb for desktop machines don't apply.
- We can choose how many CPUs, disk GB, and RAM GB to allocate, but we pay for all that.
- Compute Engine N1 machines max out at 6.5 GB of RAM per CPU, so to get more memory we must either pay for more CPUs or enable swapping.
- Most of the payload processes fit in RAM, but when a payload process needs more memory, we don't want it to fail or thrash badly.
- We can configure the swap space size (at the cost of more disk space) and the vm.swappiness parameter (see the sketch after this list).
- Ignore the contribution of memory-mapped files towards saving swap space.
- Hibernation doesn't apply.
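
To make the knobs concrete, here's a minimal sketch of what I look at when tuning, using only the standard /proc interfaces (`/proc/meminfo` and `/proc/sys/vm/swappiness`, both present on Debian 10). `PEAK_WORKING_SET_GB` is a made-up number standing in for our largest expected payload, not anything the kernel reports:

```python
#!/usr/bin/env python3
"""Minimal sketch: read the RAM/swap/swappiness knobs discussed above."""

PEAK_WORKING_SET_GB = 10.0  # hypothetical: largest payload we expect to run


def meminfo_kb():
    """Parse /proc/meminfo into {field: kB} (the kernel reports values in kB)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            field, value = line.split(":", 1)
            info[field] = int(value.strip().split()[0])
    return info


info = meminfo_kb()
ram_gb = info["MemTotal"] / 1024 / 1024
swap_gb = info["SwapTotal"] / 1024 / 1024
with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read())

print(f"RAM {ram_gb:.1f} GB + swap {swap_gb:.1f} GB, swappiness={swappiness}")
# Crude check: if the "RAM + swap must exceed the working set" model is right,
# this margin is what keeps the worst-case payload from failing outright.
print(f"headroom vs. peak working set: "
      f"{ram_gb + swap_gb - PEAK_WORKING_SET_GB:+.1f} GB")
```
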
What's a conceptual model for how Linux uses swap space, so I can pick a size, debug problems, and tune it? Specifically, which of these is closest:
- Is VM size limited to the swap file size, so that every page has a reserved slot to swap out to and clean pages never need to be written out?
- Does VM size approach RAM + swap size?
- Or somewhere in between, where the kernel needs a significant amount of free swap to write some pages out before it can swap others in?
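
Part of why I ask: the kernel's own commit accounting seems to sit at the "RAM + swap" end of that spectrum, at least under strict overcommit. As far as I understand, with vm.overcommit_memory=2 the kernel enforces CommitLimit = swap + RAM × overcommit_ratio/100, while under the Debian default (mode 0, heuristic overcommit) CommitLimit is reported in /proc/meminfo but not enforced. This sketch just reads the standard fields and sysctls to compare (it ignores hugepages and assumes vm.overcommit_kbytes is 0), and it's roughly how I've been poking at the question:

```python
#!/usr/bin/env python3
"""Sketch: compare the kernel's VM commit accounting against RAM + swap."""


def meminfo_kb():
    """Parse /proc/meminfo into {field: kB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            field, value = line.split(":", 1)
            info[field] = int(value.strip().split()[0])
    return info


info = meminfo_kb()
with open("/proc/sys/vm/overcommit_memory") as f:
    mode = int(f.read())
with open("/proc/sys/vm/overcommit_ratio") as f:
    ratio = int(f.read())

# Under mode 2, CommitLimit = swap + RAM * overcommit_ratio/100 (ignoring
# hugepages), i.e. the "VM size approaches RAM + swap" end of the spectrum.
# Under the default mode 0 it's informational only.
expected = info["SwapTotal"] + info["MemTotal"] * ratio // 100
print(f"overcommit_memory={mode} overcommit_ratio={ratio}")
print(f"CommitLimit   {info['CommitLimit']:>12} kB (kernel's figure)")
print(f"swap+RAM*r%   {expected:>12} kB (reconstructed)")
print(f"Committed_AS  {info['Committed_AS']:>12} kB (VM currently committed)")
```
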