100

From what I understand, the purpose of a swap partition in Linux is to move "not as frequently accessed" information out of RAM to a dedicated partition on your hard drive (at the cost of making it slower to read from or write to), essentially allowing active applications more of the "high speed memory".

This is great for when you are on a machine with a small amount of RAM and don't want to run into problems if you run out. However, if your system has 16 GB or 32 GB of RAM, and assuming you aren't running a MySQL database for StackExchange or editing a 1080p full length movie in Linux, should a swap partition be used?

Braiam
  • 35,991
IQAndreas
  • 10,345
  • 1
    @IQAndreas, I have no experience with 16 GB RAM, whether that would be different from lower values. When I had 1 GB it was good to have a multiple of that as swap, when I had 4 GB it was also good, now I have 8 GB and feel that my 8 GB swap is (occasionally) still too small. So, yes, I'd extrapolate that also with 16 GB it is good to have 1-2 times as much swap. And it doesn't cost anything on the TB disks we currently have. – Janis Mar 16 '15 at 04:33
  • 3
    @Janis - it costs a lot. And you could easily have done without swap on even the 1GB machine if you implemented sound memory management. Swap costs performance - when you have it the kernel will inevitably use it. So on a 1TB or whatever size disk making a swap partition is an invitation to the kernel to put memory pages on disk, rather than keeping them in RAM or dropping them entirely. With 16gb a typical user will never approach using it all - I've got 24gb RAM w/ 2gb used and 10gb cached (because I dl'd a torrent to /tmp) after 3 days uptime. – mikeserv Mar 16 '15 at 04:58
  • 5
    @mikeserv, you're wrong I fear; I have constantly observed the displayed disk metrics. As long as there is free memory the swap has not been used, only when memory was filled swapping started. I think it depends on how you use your computer; a desktop system that is shutdown every evening would rarely encounter memory problems, my system had uptimes of months, though. Yes, swap costs performance, but if there's no RAM space left your system can continue to work. What is the option you have? Maybe you can elaborate on the memory management argument. I use my Linux system as preconfigured. – Janis Mar 16 '15 at 05:14
  • 2
    @Janis - zRAM, zSwap would be better than using real swap. my own machine is used very like your own, probably. And sound memory management is OOM - a linux kernel kills apps based on their OOM score before not continuing to work. – mikeserv Mar 16 '15 at 05:26
  • @mikeserv, thanks for the keywords; I'll have a look into those features. From a brief peek I've got the impression that 'z' stands for compression. I'm not sure that's relevant in my case since I just installed ZFS a week ago (which supports compression). Maybe those z-tools are also not supported by my kernel version. But it's too early to judge; I'll see. Thanks, again. – Janis Mar 16 '15 at 05:36
  • @Janis - did you look any harder at zRAM, zSwap? Neither one would have anything to do w/ ZFS, both are methods of partitioning off then compressing a portion of RAM (like a tmpfs, kind of) and using it to expand available memory (as swap does, but without having to use disk space). The tradeoff is CPU-cycles for available mem rather than disk I/O for available mem. In general, the performance impact for the former is less than the latter - (which is why, by the way, ZFS supports compression). – mikeserv Mar 16 '15 at 17:46
  • 1
    @mikeserv, yes, I've looked at that already yesterday. (I had just been referring to the compression available in both.) The mutual tradeoffs had been well perceived already. My above point, that there were no tradeoffs with my existing swap space on my system still holds; and meanwhile that seems to be also confirmed by some other repliers in this thread. Anyway, your pointers to those z-tools were informative, and it's nice to know about them. Thanks again. – Janis Mar 16 '15 at 20:33
  • 3
    Have you thought about using swap files? The advantage is that it is much easier to modify the size of a file than of a partition. You can test with different sizes and check performance. You don't even need to reboot to increase your swap size. Just add a file (see the sketch after these comments). – Robert Jacobs Mar 16 '15 at 20:50
  • @RobertJacobs - yours is the best suggestion here, I think. For vms I will sometimes dynamically allocate a flash volume w/ bcache for use as their swap file. It can be helpful in those cases. For a baremetal system it is better, in my opinion, to handle load according to hardware specs and tuning config toward best performance as much as can be than it is to rely on swap partitions. But if special cases require it, then loop-mounting a swap file is better (and more easily managed) than partitioning for it. – mikeserv Mar 16 '15 at 21:21
  • 3
    The "create a partition sized about twice the amount of RAM" rule has been wrong since computers started having 1GB of RAM and higher. It could be good for computers with 32 or 64 MB, but nowadays (and IMHO), 2 or 4 GB should be enough for all but corner cases. – rsuarez Mar 17 '15 at 09:30
  • 2
    Maybe not what you're looking for, but a swap partition (not smaller than ram) may be required for hibernation. – basic6 Mar 17 '15 at 10:11
  • I think we could ask ourselves the opposite question: would there be any drawbacks in using a swap partition along with 16GB of RAM? We can't really use the "I don't want to waste disk space" excuse anymore, can we? And it's not like it's really difficult to set it up... – John WH Smith Mar 17 '15 at 14:35
  • 2
    See also http://askubuntu.com/questions/49109/i-have-16gb-ram-do-i-need-32gb-swap --- it's on askubuntu, but it is valid for any Unix. – Rmano Mar 17 '15 at 18:58
  • @JohnWHSmith perhaps for VMs, or installations on smaller-capacity SSDs... – h.j.k. Mar 18 '15 at 09:59
  • I have 4GB ram with normally 2GB used, but my (64-bit) system still uses some swap; I normally use a browser and watch videos or listen to songs – Alex Jones Mar 18 '15 at 10:56
  • Reading advice for "what to do after installing" Mint or Ubuntu, it mentions the "swappiness" parameter that can be set much lower to prevent pointless swapping (see the sketch after these comments). – JDługosz Mar 18 '15 at 18:15
  • 2
    @mikeserv Claiming that swap costs performance is strongly misleading. Here is an experiment I have performed on multiple Linux systems. First configure it without swap, then start some programs using lots of memory. Eventually performance will suffer. Once performance does start suffering enable a swap file or swap partition. As soon as the swap is enabled, performance improves. If you did not add quite enough swap, the swap may fill up. Then performance will suffer again until you add even more swap space. – kasperd Mar 18 '15 at 20:08
  • 1
    @kasperd - it does cost performance, though. You're comparing apples to oranges - the kernel does not come preconfigured for you - it just defaults to generic. This means that the kernel does not have any specialized OOM rules defined - it just behaves as miserly as it might in low-mem situations because the alternative is to kill an app by default - which is not a good one. However - if you spec the killer according to your workflow I think you'll find that your test results change - it can free rather than preserve unimportant data to the point that it needs even disk space to work. – mikeserv Mar 20 '15 at 02:36
  • 1
    @mikeserv I am comparing identical workloads on identical hardware with the only difference being whether swap is enabled or not. Everything else being equal, a Linux system performs better with swap than without swap. If you want to ignore that fact and keep believing your misconception, then feel free to do so. But please stop spreading it to people trying to learn. – kasperd Mar 20 '15 at 05:47
  • 1
    @kasperd - you are comparing identical workloads on systems not configured to prune them intelligently. All things are not equal, and you'd know why if you read this then afterward compared your default system settings to the huge amount of knobs you can turn per process, per mem control cgroup, or even just via ulimit. Please don't try to generalize such a complicated topic. The fact is swap is probably nothing more than a nuisance if you rein in memory over-commitment, allocation, and out-of-memory killing to your specs. – mikeserv Mar 20 '15 at 12:43
  • @mikeserv I know for a fact, that I did not change any of those settings. Hence they were identical before and after. The only change between the before and after scenario were, that I enabled swap. And the effect was the performance improved when enabling swap. By continuing your arguments you are not going to convince me that I did something differently from what I really did. However you might convince me, that you haven't read what you are replying to. – kasperd Mar 20 '15 at 12:54
  • 1
    @kasperd - I didn't know, but assumed, that you changed none of those settings when performing your tests. That is the problem - you're comparing system performance on systems not configured to kill processes to custom spec and claiming that your tests prove that swap is a better alternative to doing so. That doesn't make any sense to me. Also, I have no quarrel w/ you - and am discussing this only in the (selfish) hope that I may learn from it - which is typical of me. Perhaps you could adopt a similar approach? – mikeserv Mar 20 '15 at 13:06
  • @mikeserv I am comparing two identical scenarios except from one having swap and the other not having swap. The scenario with swap was the one producing better performance. And this is when all other settings are left with their default values. Even if settings could theoretically be tweaked to produce a situation in which swap hurts performance, this won't be the case for the majority of users. The majority of users will experience better performance with swap than without. – kasperd Mar 20 '15 at 13:17
  • @kasperd - swapping does hurt performance unless you can somehow swap to a faster medium than RAM. The assumption I'm making about your tests is that your systems are allowing for memory overcommitment by relying on swap whether it exists or not - this is the default (and fairly sane considering fork() and CoW implications) config for most linux systems to the best of my knowledge. It is not in my experience, however, an optimal config - I've found that explicit caching to tmpfs for processes that need it and killing ancillary others is better overall than overcommitting defacto. – mikeserv Mar 20 '15 at 13:32
  • @kasperd - and by the way, without setting swappiness=[01] or something along those lines, the performance will only be positively affected in low memory situations - the opposite is true for all other cases, if to a far (approaching infinitely) lesser extent because all memory is always in RAM. kswapd is best left sleeping as much as possible - and when swappiness >= 1 the kernel will swap old pages even when memory allows for them to remain. This, though, also depends on stuff like cache pressure and which is most important in your application. – mikeserv Mar 20 '15 at 13:50
  • 1
    @mikeserv If the machine does have more than enough RAM, then swap won't be used at all. I have a server on which swappiness is 60. It has 64GB of swap, of which it is using absolutely nothing. – kasperd Mar 20 '15 at 15:33
  • @kasperd - interesting? What is the kernel version? I believe you, of course, but I think what you say is either true of kernels +/- 3.12 - (though I can't remember whether the change at that time fixed it so swap would not be used or vice versa). But for either case, the kernel will swap for caches depending on cache pressure - which is what I meant. You must not need to cache enough to push into swap, basically. And in that case, swap basically serves no purpose at all if you just do vm.overcommit=2. – mikeserv Mar 20 '15 at 15:42
  • 1
    @mikeserv That is a Debian system running 3.2.0-4-amd64. 32GB of RAM, 28GB used of which 22GB is cache. That's a server which was chosen to have room to grow, and at the time being 32GB of RAM does qualify as more than enough. – kasperd Mar 20 '15 at 16:12
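
A minimal sketch of the swap-file approach Robert Jacobs suggests above, together with the swappiness knob JDługosz mentions (the 2 GiB size is an arbitrary example; fallocate-backed swap files assume a filesystem such as ext4, otherwise use the dd variant):

    # Create and enable a 2 GiB swap file without repartitioning or rebooting
    sudo fallocate -l 2G /swapfile      # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
    sudo chmod 600 /swapfile            # swap must not be world-readable
    sudo mkswap /swapfile
    sudo swapon /swapfile
    swapon --show                       # verify it is active

    # Make the kernel less eager to swap (the default vm.swappiness is 60)
    sudo sysctl vm.swappiness=10

To resize later, swapoff the file, recreate it at the new size, and swapon again; nothing else on the disk has to move.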

13 Answers

108

Yes.

You should most definitely always have swap enabled, except if there is a very compelling, forbidding reason (like, no disk at all, or only network disk present). Should you have a swap on the order of the often recommended ridiculous sizes (such as, twice the amount of RAM)? Well, no.

The reason is that swap is not primarily useful for when your applications consume more memory than there is physical RAM (in that case, swap is actually not very useful at all, because it seriously impacts performance). The main incentive for swap nowadays is not to magically turn 16GiB of RAM into 32GiB, but to make more efficient use of the installed, available RAM.

On a modern computer, RAM does not go unused. Unused RAM is something that you could just as well not have bought and saved the money instead. Therefore, anything you load or anything that is otherwise memory-mapped, anything that could possibly be reused by anyone any time later (limited by security constraints) is being cached. Very soon after the machine has booted, all physical RAM will have been used for something.

Whenever you ask for a new memory page from the operating system, the memory manager has to make an educated decision:

  1. Purge a page from the buffer cache
  2. Purge a page from a mapping (effectively the same as #1, on most systems)
  3. Move a page that has not been accessed for a long time -- preferably never -- to swap (this could in fact even happen proactively, not necessarily at the very last moment)
  4. Kill your process, or kill a random process (OOM)
  5. Kernel panic

Options #4 and #5 are very undesirable and will only happen if the operating system has absolutely no other choice. Options #1 and #2 mean that you throw something away that you will possibly be needing soon again. This negatively impacts performance.

Option #3 means you move something that you (probably) don't need any time soon onto slow storage. That's fine because now something that you do need can use the fast RAM.

By removing option #3, you have effectively limited the operating system to doing either #1 or #2. Reloading a page from disk is the same as reloading it from swap, except having to reload from swap is usually less likely (due to making proper paging decisions).

In other words, by disabling swap you gain nothing, but you limit the operating system's useful options for dealing with a memory request. Which may well be a disadvantage (and will never be an advantage).

[EDIT]

The careful reader of the mmap manpage, specifically the description of MAP_NORESERVE, will notice another good reason why swap is somewhat of a necessity even on a system with "enough" physical memory:

"When swap space is not reserved one might get SIGSEGV upon a write if no physical memory is available."

-- Wait a moment, what does that mean?

If you map a file, you can access the file's contents directly as if the file was somehow, by magic, in your program's address space. For read-only access, the operating system needs in principle no more than a single page of physical memory which it can repopulate with different data every time you access a different virtual page (for efficiency reasons, that's of course not what is done, but in principle you could access terabytes worth of data with a single page of physical memory).

Now what if you also write to a file mapping? In this case, the operating system must have a physical page -- or swap space -- ready for every page written to. There's no other way to keep the data around until the dirty pages writeback process has done its work (which can be several seconds). For this reason, the OS reserves (but doesn't necessarily ever commit) swap space, so in case you are writing to a mapping while there happens to be no physical page unused (that's a quite possible, and normal condition), you're guaranteed that it will still work.

Now what if there is no swap? It means that no swap can be reserved (duh!), and this means that as soon as there are no free physical pages left, and you're writing to a page, you are getting a pleasant surprise in the form of your process receiving a segmentation fault, and probably being killed.

[/EDIT]

However, the traditional recommendation of making swap twice the size of RAM is nonsensical. Although disk space is cheap, it does not make sense to assign that much swap. Wasting something that is cheap is still wasteful, and you absolutely don't want to be continually swapping in and out working sets several hundreds of megabytes (or larger) in size.

There is no single "correct" swap size (there are as many "correct" sizes as there are users and opinions). I usually assign a fixed 512MiB, regardless of RAM size, which works very well for me. The reasoning behind that is that 512MiB is something that you can always afford nowadays, even on a small disk. On the other hand, adding several gigabytes of swap is no better. You are not going to use them, except if something is going seriously wrong.

Even on a SSD, swap is orders of magnitude slower than RAM (due to bus bandwidth and latency), and while it is very acceptable to move something to swap that probably won't be needed again (i.e. you most likely won't be swapping it in again, so your pool of available pages is effectively enlarged for free), if you really need considerable amounts of swap (that is, you have an application that uses e.g. a 50GiB dataset), you're pretty much lost.

Once your computer starts swapping in and out gigabytes worth of pages, everything goes to a crawl. So, for most people (including me) this is not an option, and having that much swap therefore makes no sense.

Damon
  • 1,685
  • 7
    Completely untrue: it could be an advantage for the kernel to not use the disk, especially if you have configured OOM to your specs. If the OOM killer is configured to handle your cleanup, then having it do so rather than wasting disk space and slowing your machine down is advantageous. – mikeserv Mar 16 '15 at 16:22
  • 28
    What's the difference between having 8 GB of RAM and 8 GB swap and 16 GB of RAM and no swap? If your computer decides it needs 16.001 GB of memory, won't it start purging/killing things just the same (but the performance will crater before it starts happening)? – Nick T Mar 16 '15 at 19:10
  • 6
    @NickT: The swap isn't for more RAM so much as a red flag that something is going to get killed soon. I like having a red flag before a kill, rather than having a process "randomly" disappear before my eyes. – Mooing Duck Mar 16 '15 at 19:34
  • @MooingDuck: so you're saying it's a human benefit rather than a machine benefit – user541686 Mar 16 '15 at 19:36
  • 12
    -1 this answer makes no sense. Why the heck would having slower memory (swap) have better performance than the same amount of faster memory (RAM)?? at some point you have to acknowledge enough RAM means no swap is necessary.. – user541686 Mar 16 '15 at 20:23
  • 18
    @Mehrdad: It certainly does make sense. Slower memory (swap) improves performance insofar as "slower" does not matter for things that you access rarely or never. Swap effectively increases the amount of memory that is available for "hot" data by moving "cold" data out. Daemons which only execute something once per hour or memory allocated by a kernel module that is loaded by default but never used are an example of that. You can swap out those, or you can instead drop pages from the cache. Which one is better? – Damon Mar 16 '15 at 20:32
  • 4
    If slower does not matter for things rarely/never accessed, then how is it beneficial to swap it to some other disk location vs reloading it from the original when needed? And I would be most interested to see your reply to @NickT's comment here - at some point this logic must give way to numbers - either swap is infinite and the OOM killer is never exercised, or swap only adds a finite amount of space to some finitely sized RAM pool and the OOM killer is exercised anyway. If the latter, then is it not better that the killer jump in before the system slows to a crawl? – mikeserv Mar 16 '15 at 20:47
  • 2
    @mikeserv: You have an opinion ("completely untrue!") which is debatable and contrary to the opinion of noteworthy experts, yet you are of course entitled to that opinion. Also, you make a lot of assumptions that are not true. For example, reloading from the original is simply not possible for data that has no "original" backed by a file. Also, the system does not slow to a crawl when swapping occurs, this only happens when you have an active dataset of considerable size (as I've pointed out in the answer). Under normal conditions, swapping will be hardly noticeable, if at all, for example... – Damon Mar 16 '15 at 21:48
  • 1
    @mikeserv:"how is it beneficial to swap it to some other disk location vs reloading it from the original when needed?" You can only reload there is an original. Yes, for (eg) executable images it's best to just ditch the page; for data that has been written to, but is then rarely/never used again, it can be beneficial to write that page to disk, and free up a page for other uses. It's got nothing to do with the OOM killer. If you told the kernel to use zero disk cache, performance would suck; caches help performance. Flushing that unused page to disk to make more cache can be a good trade. – psmears Mar 16 '15 at 21:48
  • 1
    ... when you switch to a different desktop, the system can swap out the terminal and you will not notice a difference at all. If you have a dozen programs open, the OS can (it does not need to, but it can) swap out the program you're not using. It will take 50-100 milliseconds to swap it in again at a later time, but you will usually hardly notice that this is happening. Overall, it is a huge improvement, since you effectively have more RAM available for the stuff you use, when you're using it. – Damon Mar 16 '15 at 21:50
  • 1
    The reason why I advocate a comparatively small swap (half a gigabyte) is precisely that swap is not meant to turn 16GiB into 32GiB -- this would indeed cause things to come to a crawl. It's meant to swap out a few things that aren't needed right now, which is usually a few dozen megabytes or less (so 512 MiB is plenty more than you'll need). Actually using several gigabytes of swap is a severely unhealthy condition. If you have 64GiB of live data to deal with, you need 64GiB of RAM, throwing swap at the problem is the wrong way. – Damon Mar 16 '15 at 21:53
  • Damon - but the small swap gains nothing and only costs performance (albeit an amount proportional to its usage) if the system is configured to use the RAM it has. The problem can still be boiled down to @NickT.'s question - how is 16.5GB more effective than 16GB if less than 16GB is needed, and how is it also more effective if 16.26GB is needed? The swap is only effective if it should be needed, and it is only ever needed if the system is not configured to use only the amount that is there. – mikeserv Mar 16 '15 at 21:58
  • @psmears - the "if there is an original" point is a good one. As is the caches point. I would personally prefer, though, (and have configured it so) that an application which is rarely/never used should be killed. If it has cached some page that is rarely/never used then it won't matter - because the executable will be killed anyway. I also configure common caches (browser, /tmp) to use tmpfs as opposed to on-disk cache - and would much prefer that the kernel do that by default anyway. I suppose that this could be a taste thing, but when there is a mem surplus, swap is a hindrance. – mikeserv Mar 16 '15 at 22:03
  • 2
    @mikeserv: You're assuming that there is one amount that is "needed". The kernel can schedule disk I/O in more than one way, trading off memory usage with performance - generally bigger cache => better performance (because it can avoid having to read/write the same block multiple times). If rarely-used pages are put in swap, the kernel has more flexibility scheduling I/O. The right tradeoff will depend on exactly what you're optimising for (data throughput? Requests/second? Latency? Responsive feel? etc), but if your disk is larger than memory, there is no such thing as a "mem surplus". – psmears Mar 16 '15 at 22:09
  • 1
    @psmears - all very good points - especially that last sentence. I tend to think in terms of my own machine - which I have configured to cache in long-term > SSD w/ bcache and in short-term as mentioned to tmpfs. Yes, the configuration is all-important. On my phones I typically do zRAM rather than swap because - even on those little things - the CPU cycles for handling the compression are usually less-costly to me than the i/o hit for flash-based swap. But I also charge them every day - the battery-life is noticeably negatively affected w/ a zRAM setup as well. – mikeserv Mar 16 '15 at 22:15
  • 1
    Where does the swappiness parameter come into play in this regard? I guess it's a parameter to guide the kernel to choose between 3. and 4.? – AF7 Mar 17 '15 at 07:32
  • 1
    @AF7, swappiness controls how the kernel is prone to swap things to preserve RAM, so it'd be more like choosing between 2 and 3 (and not exactly). With a high swappiness, the kernel will tend to swap things even if there isn't a dearth of RAM. With a low swappiness, the kernel will avoid swapping as much as it can. – rsuarez Mar 17 '15 at 09:22
  • 4
    @AF7 The swappiness setting will never affect whether the OOM killer runs (unless you set it to 0). Swappiness controls the pre-emptive swapping out of process pages to free up memory for caching / write buffers. e.g. you haven't touched firefox for several minutes while extracting some big tar files. With swappiness = 60 (default), it's probably slow every time you do something in firefox until the relevant pages are all paged back in. With swappiness = 10, you probably won't notice anything. But a recursive grep might have to hit the disk because the data set is too big to cache. – Peter Cordes Mar 17 '15 at 11:04
  • 3
    Anyway, swappiness = 10 or so is one of the key tuning settings for a desktop, since it's all about responsiveness, not really throughput. A small swap partition, like 512MB to 1GB is useful. It will allow paging out of data that goes untouched for a REALLY long time, giving your machine some more RAM to work with. No sense keeping physical memory tied up holding some data structure gnome-terminal built while starting up, but won't need to touch until it exits, or until you use some menu item that you never use. – Peter Cordes Mar 17 '15 at 11:08
  • Note that the kernel could well purge a page from a mapping that hasn't been used for a long time. – user253751 Mar 18 '15 at 05:54
  • 2
    @mikeserv How is killing a (semi-)random process the answer to running out of DRAM? If a program is running, it's probably for a reason. The last time I went through that, a lot of things started breaking really quickly, and data loss ensued. It would be quite terrible for a moderately quick memory leak to get a hold of such a system, as you would get little warning before it started mass killing A LOT of programs. With swap, you would probably notice a slow down where you could still see what ate all the RAM. – jbo5112 Mar 19 '15 at 18:00
  • 1
    @jbo5112 - because there's nothing random about it if you configure it not to be. The killer targets processes according to their OOM score - which is adjustable explicitly per process (see the sketch after this thread) and/or applied automatically based on definable rules (read: cgroups). Again - this is what Android systems have done by default from day one. It makes obvious sense in that market because the devs put some work into defining sane rules - as it also makes for any other market so long as the admin defines rules that make sense. – mikeserv Mar 21 '15 at 18:51
  • 2
    I can't think of any reason why a computer that would run fine with 8GB memory and 8GB swap would have trouble running with 16GB memory and no swap. – RemcoGerlich Mar 22 '15 at 09:44
  • @RemcoGerlich: It's not that a computer with 16GB RAM and no swap has any trouble running fine. It's that a computer with 16GB RAM and swap runs better. Swap does not cost you anything (well, an insignificant portion of your harddisk, but that hardly counts), but it allows you to make better use of the available RAM for the stuff that matters by paging the stuff that isn't needed out. – Damon Mar 22 '15 at 17:06
  • 1
    @mikeserv Are you assuming his system with 16 or 32 GB of RAM is running Android or that desktops should be modeled after Android? Are you suggesting that taking the time to continually manage the OOM adjustments of my sometimes 50-100+ Chrome processes will make me more productive than having swap running to handle running out of RAM? What if I have rogue JavaScript eating all my RAM in Firefox while I'm reading an important webpage and creating an important document? Swap provides a nice warning before something dies off, so I don't have to monitor my RAM or live in fear. – jbo5112 Mar 22 '15 at 20:40
  • @jbo5112 - Neither of those. And no - you needn't take time. You would configure cgroups once. How many times did you configure your swap? chrome, by the way, is a particularly bad example as a case to the contrary because it is already sandboxed in similar ways and for similar reasons. – mikeserv Mar 22 '15 at 21:41
  • @mikeserv According to Chrome's help, processes for tabs can be killed by the OS if the OS runs out of memory, but it the example doesn't have to be Chrome (browser). I'm sure there are other applications that can run as a number of processes that can be independently killed and can eat through all your memory. Chrome is just a common example. How would cgroups differentiate between 50+ processes with the same name, same user, same group, same ppid, same executable, different importance, and even tabs of different importance in the same process? I've not seen any rules that complex. – jbo5112 Mar 23 '15 at 00:37
  • @jbo5112 - well of course they can - anything can be killed by the os. The process per tab is also a configurable thing - you can also do a process per domain. Regardless, though, if you didn't want them killed, then you would just up the parent process's OOM immunity - or start it in the first place from a shell or parent process of some of other kind already assigned said immunity. And chrome caches those pages anyway - so if one is killed, clicking reload restores it first from cache, or, failing that, from the source. You can also be notified before the OOM killer is executed. – mikeserv Mar 23 '15 at 00:47
  • @mikeserv Chrome is usually the easiest place to reclaim RAM by closing things, but it's not a uniform solution that you're trying to cram it into. The issue is that some tabs I wouldn't care if they got killed, and some tabs I'd want to chuck the computer out the window for killing. Chrome does not cache all the work I might have done in a web page (e.g. typing in a long post), making it a problem to kill. Notification, while helpful, wouldn't be of much use if I have a page open with a slow memory leak or something else that triggers the OOM killer when I'm not at the computer. – jbo5112 Mar 23 '15 at 04:53
  • @jbo5112 - you're trying to cram it in - I'm not. And it still doesn't matter - if there was a leak it would trigger OOM when you're not at the computer regardless of swap. If you want a solution for handling chrome tabs programmatically you use chrome: there are many ways of accessing the various sandboxes via IPC and the chrome setuid executable. chrome, by the way, comes with its own task manager, and so that's probably where you should start. chrome also can cache all of the work you might have done in a webpage as well - but none of this has to do with OOM or swap. – mikeserv Mar 23 '15 at 05:03
  • You all have to think about targets - we had a SonarQube server stuck in swap, and that was really bad for performance. Nobody would solve it, so it was much better to turn the swap off. But I have swap on my notebook; I would be crazy to disable it. OOM is always possible on both systems, but I would restart the notebook (or swapoff -a && swapon -a) much sooner. But nobody would care about the server, only some Jenkins build would fail because of a timeout to SonarQube. – dmatej Jul 16 '18 at 11:46
  • @Damon - Can you recommend some values for vm.swappiness, vm.vfs_cache_pressure, vm.dirty_background_ratio and vm.dirty_ratio ? – Vahid Pazirandeh Mar 21 '20 at 19:46
  • @VahidPazirandeh: I would leave these as they are. Tuning them is a bit of black magic, and what is "correct" heftily depends on what you want, there is always a trade-off. For example, setting anything containing the word "dirty" to a larger value will be advantageous for batching writes, so in a scenario where pages are continuously modified in small increments, that will help (both for speed and wear). On the other hand, it will reduce safety (think power failure). The default values "mostly work", and it's uncertain whether you can tune them to be much better without sacrificing something. – Damon Mar 22 '20 at 10:03
  • @Damon - thanks so much. Regarding vm.swappiness, why is it so many webpages recommend a value of "10" ? – Vahid Pazirandeh Mar 22 '20 at 19:52
  • @VahidPazirandeh: I don't know why, I wouldn't recommend that. But of course it depends, there is no single one truth. With a value of 10, you basically tell the OS to only start swapping if RAM is 90% allocated. If you have lots of RAM so that basically is never an issue anyway, that may work as a kind of "optimization". It saves you some disk IO and write cycles. But other than that it really isn't a thing. Think what happens when the kernel swaps. The memory page is still there, only now it can be discarded if memory is needed. It's still accessible almost instantly otherwise. Writes... – Damon Mar 22 '20 at 20:27
  • ... happen asynchronously, so apart from maybe taking away a bit of bandwidth, they are not really noticeable (might be if you do super hefty IO, of course). So basically... it's none worse if you swap earlier. But, if you actually do need RAM as today's special surprise, you have it ready. No need to start swapping only then, and no huge delay which you didn't anticipate. I personally deem this preferrable, it's more "steady and reliable" overall, so there's no need in my opinion to go away from the typical 60 setting, which is really not so bad as compromise. – Damon Mar 22 '20 at 20:31
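
A minimal sketch of the per-process OOM-killer interface this thread keeps referring to (the PID and the bias values are hypothetical examples):

    pid=1234                                        # hypothetical PID of a low-priority process
    cat /proc/$pid/oom_score                        # the "badness" score the killer ranks by
    echo 500   | sudo tee /proc/$pid/oom_score_adj  # bias it toward being killed first
    echo -1000 | sudo tee /proc/$pid/oom_score_adj  # or make it effectively immune

The adjustment range is -1000 to 1000 and is inherited across fork, which is how per-workload policies of the kind mikeserv describes are usually assembled (often via cgroups or the service manager rather than by hand).
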
50

I'm going to disagree with a few of the opinions that I see stated here. I'd still create a swap partition, especially in a production environment. I do it for my home machines and VMs as well.

These days I'm sizing them around 1-1.5 times memory. 2 times memory used to be the rule of thumb. The swap disk is "cheap" in that it does not need to be backed up or protected.

Should you run low on memory, then your swap space gives you a little time and cushion to address the problem.

Realize that things like /tmp (when mounted as tmpfs) can reside in the swap space.

The swap area can hold a partial kernel dump so that it can be recovered on the next reboot. This might be nice for some future debugging emergency that you are called to do.

mdpc
  • 6,834
  • I'm with you. I've had /tmp in memory for many years. I imagine it could wind up eating enough memory that it starts swapping (particularly if I'm also running a virtual machine or two). And I imagine that having swap space is as cheap while it is not used, as it is useful when you need it. – The Sidhekin Mar 16 '15 at 07:59
  • 19
    +1. Having a swap means the difference between hurt performance under low memory and a hard crash. – Davidmh Mar 16 '15 at 08:35
  • 1
    @Davidmh - no it doesn't. It only makes that difference if you have irresponsibly ignored OOM tuning. And kernel dumps go to system flash nowadays. – mikeserv Mar 16 '15 at 16:25
  • 6
    @mikeserv no matter how you configure it, if your program tries to allocate more memory than there is available, something is going to crash, or the OS will start killing them. – Davidmh Mar 16 '15 at 16:29
  • 1
    @Davidmh - no nothing crashes - something is killed. It's the OOM killer. That's what it's for. It's how Android has worked from day one. In my opinion killing old, unused apps is a better alternative than wasting disk space and slowing the entire computer down. – mikeserv Mar 16 '15 at 16:30
  • 4
    @mikeserv: In my opinion having apps not killed is preferable to having them killed. – Mooing Duck Mar 16 '15 at 19:32
  • 7
    @Davidmh: By that reasoning your swap file must be infinitely big, otherwise your programs will still crash when they exhaust the swap file. – user541686 Mar 16 '15 at 19:35
  • 1
    @Mehrdad it is of course a cost balance. Dedicating a few GB of my 2 TB HD costs very little. And in the meantime, if I am sitting at it, I have time to notice the performance and hopefully correct it. – Davidmh Mar 16 '15 at 19:44
  • 1
    @mikeserv right, wrong terminology. The problem there is that the OS decides what to kill, not me; and usually is the worst offender (that, presumably, is the process I am actually most interested in). The performance hit of the swap kicking in serves as a warning and hopefully gives me time to decide what to kill. – Davidmh Mar 16 '15 at 19:50
  • @Davidmh - that is reasonable. But if you have taken some time and tuned the OOM in the first place, you can do without even that initial performance hit. It is configurable, but it can be tedious to setup as well. – mikeserv Mar 16 '15 at 19:52
  • 1
    @Davidmh: My point is that if you have X GB of RAM and Y GB of swap, it is strictly worse than X + Y GB of RAM and 0 GB of swap. What is X + Y in your case? – user541686 Mar 16 '15 at 19:52
  • 6
    @Mehrdad but it is also more expensive. Given that you have X GB of RAM, because that is as much as you can afford, it is strictly worse than having X GB of RAM and Y of swap, for some reasonable values of Y (that will depend on your usage and size of your HD). – Davidmh Mar 16 '15 at 19:58
  • 1
    You can add that swap is also used for hibernation (so too small swap can sometimes prevent hibernation). – user31389 Mar 16 '15 at 20:05
  • @Davidmh - your last comment is most reasonable as well - it is the price vs performance tradeoff which makes swap most useful in my opinion. But I don't think it applies so well to the question as asked. – mikeserv Mar 16 '15 at 20:07
  • @Davidmh: sure, but price is not part of the question here. The question is if you have enough money to buy whatever you want, do you still need swap if you have 16 GB of RAM? You're avoiding the question. – user541686 Mar 16 '15 at 20:18
  • @Mehrdad - I think the question is actually 'if you have enough money to buy 16GB of RAM, do you still need swap?' Nobody ever claimed infinite money... in that case the solution would obviously always be 'just more RAM!' :P – Commander Coriander Salamander Mar 16 '15 at 20:57
  • 11
    "your swap space gives you a little time and cushion to address the problem" -- It also means I have to sit that much longer in front of an unresponsive machine until a memory leak "solves" itself. – Raphael Mar 17 '15 at 06:26
  • 3
    "1-1.5 times memory" - and what will you do while your machine with 16 GB RAM swaps 23 GB? Wait some hours? – Martin Schröder Mar 17 '15 at 23:22
  • 1
  • What if cost constraints have led to a single OS installed on a 256 GB SSD with 64 GB of RAM (the actual situation on one of the servers I have) - is it really suggested to make up to 64*1.5 of swap space? Even 128 seems a bit large – user-2147482637 Mar 18 '15 at 09:30
  • 2
    One downside - imagine that your disk storage is slow. I'm thinking of a case where I had a vmware installation with VM storage on an ISCSI SAN. SAN was connected via 1GE. We had plenty of IOPS but not a ton of bandwidth. If a VM started swapping heavily due to a runaway application, the only reasonable course of action was usually a hard reboot - it was so slow to get logged in, and the server was hosed anyway. We changed things so that we had small swap partitions, and let the OOM killer take over much sooner. – Dan Pritts Mar 18 '15 at 16:13
  • Swap is compressed so with large amounts of RAM you only need a swapfile size of about 2/3rds of installed memory size. – JamesRyan Mar 19 '15 at 11:28
  • @user1938107 if you can afford 64GB of RAM then you can afford a bigger SSD! What is the point in buying so much RAM and then setting up your server in such a way that it can't be used effectively? – JamesRyan Mar 19 '15 at 11:30
  • 1
    @JamesRyan the main function of it is for computation, requiring large allocations in memory, but not necessarily disk space. Having 32 gig of ram and a 512 gig ssd is not as useful as 64 gig of ram and 256 ssd, or even 128ssd – user-2147482637 Mar 19 '15 at 12:26
  • @user1938107 yes but the point of the swapfile is not as an overflow but to allow pages to be removed from physical memory asap. (vast majority still allocated but won't actually be used again) Having a swapfile always means there will be more of your 64Gb available for your computation than without one. – JamesRyan Mar 19 '15 at 12:31
26

Maybe:

I've given a lot of thought to this topic and seen opinions landing on both sides of the argument more times than I can count. My approach was to develop a way to find out.

Start with an active swap partition of what you think is a sufficient size.

Then, open a terminal in a workspace and issue the command free -hs 1 which will report usage once every second.

Optionally switch to other workspaces.

Do everything you are likely to ever do and then some. Run all your common apps at once, browse multiple tabs, and try desperately to give the system a real workout. For you this might mean re-encoding a half dozen videos while running a compile operation and checking your email, or whatever. Let's face facts: this is all about how you use your system.

When you feel you have the system under as high a load as you're ever likely to get (and then some), look at the terminal and examine the results. Or, better yet, redirect output to a file by adding >output.txt to the command so that you can examine the full run. If your Swap used never exceeds Mem free, you don't need swap. If it does, you do.

[screenshot: free output showing a small amount of swap in use while free memory remains]

I don't need swap. Maybe you do. Why not find out?

As far as sizing swap is concerned, rules of thumb are typically over-rated, as this is a usage-based question.
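
If you would rather not eyeball the samples, a rough way to track the peak (a sketch; assumes procps free plus stdbuf for line buffering):

    # Print a running peak of "Swap used" in MiB once per second; Ctrl-C to stop
    stdbuf -oL free -m -s 1 |
      awk '/^Swap:/ { if ($3 > max) max = $3; print "peak swap used:", max, "MiB" }'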

  • 2
    So, are you saying that swap is only ever used when your system absolutely requires it? In which case, is there ever a penalty in having swap enabled - just in case? Other comments seem to suggest that even the presence of swap can be detrimental to performance? – MrWhite Mar 17 '15 at 10:35
  • 3
    @w3d No, I'm not saying that. As you can see from my output above, swap is used even if not required. This can be adjusted somewhat with the swappiness factor. I'm talking about the necessity of having swap or not. – Elder Geek Mar 17 '15 at 14:25
  • Can you script this to increment swap upwards any time swap goes over free? – jfa Mar 18 '15 at 18:59
  • @JFA I haven't seen this done. Personally I have difficulty with the concept of reserving an indeterminate amount of swap space for this purpose. Theoretically all things are possible. It's the implementation phase that gets interesting. – Elder Geek Mar 19 '15 at 13:22
  • My problem is that in interactive numeric computing, I sometimes allocate too much memory by mistake, which leads to thrashing so hard it is sometimes impossible to kill the offending process. – I did the same, doing just about anything I ever do all at once, and was able to push it up to 590 MiB. From which I don't take the lesson that I don't need swap at all, but I'll limit it to 1 GiB. That should fill up quickly enough, hopefully leading to the numerical process being killed. – A. Donda Aug 04 '21 at 20:25
18

NOTE: This happened to me, in a specific, unusual situation. If you are troubleshooting a problem, this might be useful. I do not mean to imply that ALL machines MUST have swap.

MAYBE!

I have run into problems in the past with an "appliance" I built, running Linux - running on a compact flash device, I did not want to wear my CF by using swap, and there was enough memory for the application.

Most of these appliances worked fine, but on a particularly busy box, I ran into a problem:

MEMORY FRAGMENTATION

Without swap space, the memory gradually became more and more fragmented, especially with long running processes (even though I had lots of free memory, it was all in very small bits). I put some swap space in, and told Linux not to use it unless it had to; this solved the problem completely.

In addition to everything else, swap space allows memory to be moved around, defragmenting it. If you have fragmented memory, and you need a single large chunk, the fragments will be swapped out; as they are swapped back in, they are effectively defragmented.

Check out /proc/buddyinfo - mine looks like this right now:

Node 0, zone      DMA      9      5      3      4      2      3      2      2      3      3      1 
Node 0, zone    DMA32  33901   1149      0      0      0      0      0      0      0      0      1 
Node 0, zone   Normal   2414   1632    259     22      3      0      2      0      1      1      0 

The numbers represent free blocks of different sizes; each column is twice the size of the previous one, from 4kb blocks (order 0) on the left to 4mb blocks (order 10) on the right. A newly booted machine should have most of its free memory in the large blocks on the right, very few on the left (= not fragmented). Remember also that the same amount of memory (e.g. 4mb) will be represented as different numbers across the columns - 1024 blocks in the left-most column, 1 block in the right-most column.

Memory is allocated in power-of-two blocks from the smallest pool that can satisfy the request; e.g. if your program wants 12kb of memory (in one go), the request is rounded up and served from the 16kb column. If there are no free 16kb blocks, the kernel splits a 32kb block into two 16kb buddies, hands one out and returns the other to the 16kb column, and so forth.

If there are no memory blocks large enough, AND YOU HAVE SWAP SPACE, then e.g. if you want 16kb of memory, it will find the least-used block of 16kb (which might, e.g. contain a 4kb used block, a 4kb available block, and 2 more used 4kb blocks), move the USED portions only to swap, and allocate the freed memory to the new application.

In the box that crashed, I had hundreds of thousands of 4kb and 8kb blocks, and not much else.

AS FAR AS I CAN TELL (going by the crashed machines!) the kernel will move from memory to swap, and swap to memory, but will never move from memory to memory.
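
A quick way to watch this develop on a long-running box (a sketch; the 5-second interval is arbitrary):

    # Column N counts free blocks of 2^N pages: order 0 (4kb) leftmost, order 10 (4mb) rightmost.
    # Counts piling up on the left while the right empties out indicates fragmentation.
    watch -n 5 cat /proc/buddyinfo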

  • 3
    Your case looks like a good one for allocating/implementing hugepages. It would avoid fragmentation issues (because a hugepage of a certain size, once allocated, will henceforth and forever be allocated again only in that size). – mikeserv Mar 16 '15 at 20:39
  • 2
    That's the best reason I've ever read in favour of using a swap partition. I tried a quick search and found no reference about it, I wonder why this feature isn't more often documented. – To마SE Mar 18 '15 at 10:47
  • 1
    (non-transparent) hugepages may not be usable by your app. Typically apps that need tons of memory can use them; e.g., database servers, java jvm. They must be explicitly enabled, and they become a separate memory pool that can't be used for other purposes. This may be good or may be bad, depending on your circumstances. Also, learn about Transparent hugepages. These attempt to move things around so that you're using hugepages (for their performance improvement) even though your apps don't know how. If memory is fragmented things can go south as the hugepage sweeper has more to do. – Dan Pritts Mar 18 '15 at 16:19
  • I'll have to do some testing, but I see similar fragmentation on a system with swap on so I'm a bit sceptical. – Elder Geek Mar 19 '15 at 12:34
  • What version of the kernel were you running when you had this result? Have you had this problem since the anti-fragmentation patches? ~ kernel 2.6.24? Everything I can find on this topic is while not as old as me, still old in tech terms.. http://www.shocksolution.com/2013/11/memory-fragmentation-degrades-performance-in-linux-kernels-2-6-24-rhel-5-4/ – Elder Geek Mar 19 '15 at 13:34
  • Kernel 2.6.38. @ElderGeek, it's not just that I got fragmentation, the system actually crashed due to insufficient memory. I can't prove that the fragmentation was the cause, but I have proved that adding swap stopped the crashes. On the other hand, other systems (less busy) have run for years without problems (or fragmentation). Same (binary identical) kernel. Running asterisk, web server, postgres. – AMADANON Inc. Mar 19 '15 at 19:45
  • 1
    I don't doubt your experience with your flash appliance, I'm just not convinced that the single use case you presented equates to probably needing swap and I remain unconvinced that memory fragmentation is related to having swap or not. It does raise an interesting question that could easily be tested and proved or disproved. – Elder Geek Mar 20 '15 at 01:34
  • 1
    Follow-up question. What do you mean by “memory fragmentation”? The MMU doesn't care whether the physical pages used for consecutive pages in some virtual address space are consecutive or not. Physical pages are fragmented, but this doesn't matter (as long as we're talking ordinary application memory on a single-cluster physical machine and not e.g. memory pages used by a peripheral or a hypervisor). When a program asks for 16kB, it gets four pages, which may or may not be anywhere close together in physical memory. – Gilles 'SO- stop being evil' Mar 20 '15 at 21:17
  • @mikeserv - I didn't write the applications, I just wrapped them in an OS. – AMADANON Inc. Aug 04 '17 at 04:53
  • @ElderGeek - I would expect a system with swap to have fragmentation too, but not to crash from it; I was (and am) running 2.6.38-generic. I do not have the knowledge to assert that this is a general case, but it helped in my case. – AMADANON Inc. Aug 04 '17 at 04:54
  • @Gilles it ran out of memory (sorry, don't have the error message anymore), and showed the contents of buddyinfo as part of the log. This showed that there was lots of memory available, in 4kb blocks. Adding swap solved the problem. I am perfectly willing to accept other explanations for these symptoms. – AMADANON Inc. Aug 04 '17 at 04:56
17

You should never have swap larger than the maximum size you'd be able to tolerate waiting for the kernel to swap in/out; otherwise, you're just creating a new failure mode for your system (becoming unrecoverably bogged down in swapping). Note that, despite modern drives being able to transfer on the order of GB/sec, Linux is typically only able to move swap at rates more along the lines of hundreds of kB, or at best some MB, per second. So huge swap can leave your system unusable for minutes, hours, or even days.

If you have sufficient physical memory for what you're doing, the ideal size for swap is to match it to the amount of "junk data" processes are keeping around but never using. This is probably in the range of a few to a few hundred megabytes. This strategy allows all of your physical memory to be utilized for caching useful information rather than as permanent store for data that will likely never be used again.

If you don't have sufficient physical memory, you need to evaluate whether you can tolerate severe slowdown from heavy swapping. If so, having up to 1-2 GB of swap might make sense, and perhaps up to 4 GB if you have extremely fast drives. But any more than that is just going to make your system's failure modes worse, and you should consider just buying more RAM instead.
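
To check whether a system is actually moving swap at the painful rates described above (a minimal check; the 1-second interval and five samples are arbitrary):

    # si/so are KiB swapped in/out per second; sustained non-zero values mean real swap traffic
    vmstat 1 5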

  • 2
    I think this is the best answer. If you have too much swap space it can make the machine totally unresponsive due to all the swapping. It means you can't kill the offending process. If you reduce the swap space the OOM killer will automatically do your job for you. The arguments in other answers about file buffer caching are totally unconvincing. You should have swap so the kernel can move file buffers to swap... which is on disk? Please. – Timmmm Aug 07 '18 at 15:48
  • The only argument against swap space I can find is this: "some program leaks memory".

    And? If the OOM killer kills something important that you didn't save, you have effectively lost your work. Which means that this argument becomes completely irrelevant; I can just force shutdown my machine to have the exact same effect while still having a lot of swap space for low priority tasks that use a lot of memory and often are swapped out. If your machine gets stuck in such a state it's your own fault. The only time it happened to me was when I myself wrote a program to fill the entire working memory.

    – Sahsahae Oct 01 '19 at 19:49
15

A swap partition has significant value above and beyond simply acting as some extra RAM when you run out.

For one, Linux uses as much memory as possible to cache files and IO operations. If you have some swap, you may find that more memory goes into caching IO, making it faster (by minimizing disk access and also lowering wear on SSDs), as opposed to holding data which some program has allocated but only uses once every 12 hours, as may be the case for some daemons.

In addition, Linux uses an optimistic memory allocation strategy by which it will allow pages to be nominally allocated even if it is not sure it has real memory to fill them. This is more efficient than doing a proper check and mapping for every allocation, and usually causes no problems. However, the heuristics which the kernel uses to determine if allowing an allocation is sensible include the level of swap available on the system, therefore allocations may be faster if the system has plenty of swap, even if it is not used much.

These factors together bring me to personally believe that it is better to have some swap on almost every normal system. However, for large RAM sizes I ignore the RAM * 2 rule and simply cap my swap at 4-8GB (depending on the size of the disk).
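
The overcommit heuristics mentioned above are visible (and tunable) through sysctl; a minimal sketch of where swap enters the arithmetic:

    cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default), 1 = always allow, 2 = strict
    cat /proc/sys/vm/overcommit_ratio    # only consulted in mode 2
    # In strict mode, CommitLimit = swap + RAM * overcommit_ratio / 100:
    grep -i commit /proc/meminfo         # compare CommitLimit against Committed_AS

With no swap at all, strict mode caps allocations at a fraction of RAM, which is one concrete way the "level of swap available" shows up in allocation decisions.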

Vality
  • 489
  • 12
    Even 4GB is too big for a desktop. I'd rather have the OOM killer activate before waiting for that much swap to fill up if something goes wrong. (If I ever ran into a case where I needed more, I could dd and mkswap a swap FILE before running whatever huge job needed more virtual memory than my machine had RAM. That is, if exiting my web browser didn't free up enough...) – Peter Cordes Mar 17 '15 at 11:14
  • @PeterCordes That probably depends on what kind of tasks you are doing, I entirely understand. Personally, I find I benefit from high levels of swap as I often do long compiles at low priority in the background while using my machine, these then get swapped out when memory is otherwise tight (usually when the machine is too busy to do any compiling anyway). Still, I agree it is a very personal work load based thing, one should always think about their own usage before deciding. – Vality Mar 18 '15 at 02:09
  • 1
    funny story: just today, on my laptop with 4GB of RAM, 0.5G swap (SSD), firefox triggered the OOM killer (which picked firefox). I hadn't had that happen before. (Although I was using my laptop more than usual). Pretty much nothing else was running (just gnome-terminal in xfce). So I think I'm ok with it, since firefox should know better than to tie up so much RAM. I saw someone say a while ago "web browsers exist to stress-test the virtual memory subsystem", or something like that. Firefox is pretty bad at freeing caches for tabs that haven't been used for days, etc. – Peter Cordes Mar 18 '15 at 07:06
  • 3
    One case where Linux considers available swap space is when a large process attempts to launch a child: the fork()+exec() system calls begin by roughly duplicating the parent's allocation - the kernel can't guarantee that the new child will be smaller than the parent. The swap space is typically not used, but unless it's available the fork() may fail.

    A typical example on a client is a browser starting a plug-in, or on a server when a large application container invokes a helper program. 1/2 or 1GiB of swap is usually enough to solve the problem.

    Google "java exec cannot allocate memory"

    – James Mar 19 '15 at 11:56
11

Only if you want to be able to hibernate to swap (this feature is also called "suspend to disk" and involves saving the entire contents of RAM to disk before turning off the power). Typically this is only used on laptops and other mobile devices, so it depends.
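
If you do want hibernation, the swap area generally needs to be at least as large as RAM in the worst case. A minimal sketch to check whether your current setup qualifies (assumes the procps free and standard awk):

# compare total RAM against total swap, in bytes
free -b | awk '/^Mem:/  { ram = $2 }
               /^Swap:/ { swap = $2 }
               END {
                 if (swap >= ram) print "swap can hold all of RAM"
                 else             print "swap too small for reliable hibernation"
               }'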

Random832
  • 10,666
hanetzer
  • 488
6

There's no universal, clear answer because it depends on the tasks you're about to perform. If you're about to run a DB, HTTP, virtualization, or cache server, you should never enable any kind of swap, regardless of the amount of RAM you have. If you have a desktop or mixed-task host and 16+ GB of fast RAM, take a look here: zRam
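
For reference, manually setting up a zRam swap device looks roughly like this (a sketch assuming your kernel ships the zram module; the 4G size is an arbitrary example):

# load the zram module and create one device
sudo modprobe zram num_devices=1

# size the compressed block device (backed by RAM, not disk)
echo 4G | sudo tee /sys/block/zram0/disksize

# format it as swap and enable it
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0

Giving the zram device a higher priority (-p 100) than any disk-backed swap makes the kernel fill it first, so the disk is only touched once the compressed area is exhausted.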

  • zRam is a great choice. I've used it in conjunction with tiny flash drive systems to good effect. – Elder Geek Mar 20 '15 at 01:39
  • 1
    @ElderGeek I've also used it in a diskless environment, e.g. network boot. It works great, but there are some caveats: first, you don't just need more RAM, you need fast RAM. Slow and cheap RAM modules, which would probably suit you in the usual classical case, can cause a problem. The CPU front-side bus must also be taken into account: even with a slower CPU frequency, a faster FSB frequency will improve your performance greatly. The second concern is the number of zRam swap partitions: try numbers from 1 up to your number of CPU cores. Do not use it on single-core systems! – Alexey Vesnin Mar 28 '15 at 13:05
  • Good valid points all. As I'm a performance geek and always design and build my own systems, I'm not personally plagued with such concerns – Elder Geek Mar 28 '15 at 14:13
    @ElderGeek me too :) nice to meet you! Also I've noticed one strange thing in zRam's behaviour: it appears and disappears across different kernel versions and configs, but it is highly reproducible at kernel HZ 100 and 1000. zRam works fine with 1, 2, or 4 partitions - but not more, even on 8 and 16 physical cores on top hardware running almost-idle tasks. So keep it in mind when playing with tweaks! Greetings from Russia! – Alexey Vesnin Mar 31 '15 at 09:49
    I disagree with "if you run a db, ... you should never enable any kind of swap, regardless of the ram amount you have". Swapping is usually preferable to the OOM killer killing random processes. Yes, you should have enough memory for the applications you are running, and yes, using large amounts of swap may slow down your system. I envy you living in an ideal world. – AMADANON Inc. Mar 12 '19 at 01:09
    @AMADANONInc. Nowadays Google Perftools' tcmalloc is quite ubiquitous - and even if the software doesn't support it out of the box, there are at least two ways to hook it in and literally kill memory leaks. Try it! And - yes - I do remember times when 128 megabytes of RAM on a server was unthinkably huge; nowadays even a desktop CPU supports 512 gigabytes and that's pretty normal, so having 32 or 64 gigabytes for a moderately-loaded database is not a problem at all - it's cheap and available. – Alexey Vesnin Mar 15 '19 at 23:17
5

There is no way to tell if you need swap space or not if the only parameter we know is the amount of RAM installed.

In any case, there is a common misconception that having swap space negatively affects system performance. This is incorrect. As long as you have enough RAM, having a swap area, whatever its size, doesn't hurt performance at all. What affects performance is being short of RAM and actually using the swap space.

  • case 1: If you have no swap space and happen to be out of RAM, the Linux kernel will pick one or more processes which it thinks are good candidates and kill them.

  • case 2: If you have a swap space and are out of RAM, the kernel will pick less used memory pages and put them on the swap area to free RAM. This will slow down the system but your applications won't be affected otherwise.

I always prefer case 2, as I feel uncomfortable losing part or all of my work because the kernel thinks my applications are worth killing. Moreover, with the current size of an average disk being in the TB range, reserving a few percent for swap shouldn't be an issue.
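
Reserving that space doesn't even require repartitioning: a swap file works too. A minimal sketch, with 4 GiB as an arbitrary example size:

# create a 4 GiB file with no holes (dd rather than fallocate, since
# some filesystems don't support swap files with unwritten extents)
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096 status=progress

# swap files must not be world-readable
sudo chmod 600 /swapfile

# format and enable
sudo mkswap /swapfile
sudo swapon /swapfile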

jlliagre
  • 61,204
  • 6
    I prefer case 1, because the only time I run out of RAM is when I've got a runaway program. The OOM killer is generally pretty good at identifying runaways, and I'd really rather have it killed immediately than after a few hours of heavy swapping. – Mark Mar 16 '15 at 22:33
  • 1
    @Mark, That's a valid point. I still try to avoid delegating to an algorithm the shoot first, think later approach. – jlliagre Mar 16 '15 at 22:40
  • 2
    case 2 is valid only if you have something which warns you about this situation, and have enough time to manually "do something" before the system runs out of swap. But you could instead monitor the percentage of used RAM for the same results. – Totor Mar 17 '15 at 12:51
  • @Totor (Massive) memory leaks are not the only cause of RAM exhaustion situations. The issue with no swap is even very small RAM deficits will trigger the OOM killer. In any case, I agree monitoring RAM and swap usage is useful. – jlliagre Mar 17 '15 at 13:10
  • 1
    Based on your reasoning, monitoring RAM usage is enough. – Totor Mar 17 '15 at 13:14
  • 1
    @Totor I prefer not to be woken in the middle of the night by my laptop monitoring app because I launched a batch job that processes a bunch of files and exhausted the RAM. I prefer the job to get done, even if it takes a couple of hours longer because of RAM undersizing, rather than having the whole thing interrupted. – jlliagre Mar 17 '15 at 13:21
  • 1
    I find there is a common misconception that having a swap space is negatively affecting system performance to be a very mildly put statement. It is a fact that having no swap space is likely to negatively affect system performance. Having swap space means the memory you have can be used more efficiently. There will be situations where the kernel has to choose between swapping out a page of data which is never used or removing a page of a frequently used library from RAM. If you have no swap the kernel will be forced to choose the less beneficial of those two options. – kasperd Mar 18 '15 at 20:17
    @kasperd Indeed, thanks. I also didn't mention memory reservations, because the question is about Linux, which is generally configured to overcommit memory allocations. Not overcommitting memory would also be a case where having no swap area can degrade performance and reliability by making a portion of RAM unusable. – jlliagre Mar 18 '15 at 20:28
  • @jlliagre Running with no overcommitment and no swap can certainly make it difficult (if not impossible) to use all of the RAM for running applications. It does however mean that you'll practically always have plenty of RAM available for caching, so as long as the applications don't fail due to failing memory allocations, they will run with a good speed. – kasperd Mar 18 '15 at 20:35
  • @kasperd unless you use ZFS which I guess cannot use commited still unused memory for its ARC cache. – jlliagre Mar 18 '15 at 20:42
3

My rule for sizing swap on any system is to first answer these questions:

  • What is the purpose of the system?
  • How much memory will the applications consume?
  • Is this a critical system?
  • Do I need temporary disk space for file transfers?
  • What is the predicted growth rate of the applications?

Once I have the answers, I size the system accordingly. In previous years I followed the rule of thumb from Sun Microsystems: up to 16 GB of RAM, twice the RAM for swap; from 16 GB up, the same amount as RAM. On the other hand, if you have enough RAM to spare and your applications don't force the use of swap, you can omit it; if you later find you need it, it's just a matter of adding a new disk or LUN and configuring the swap. The Sun rule applied mostly because, on Solaris, in case of a "kernel panic" the memory would be dumped entirely to swap for further analysis.

BitsOfNix
  • 5,117
3

The short answer:

Yes, you always need some swap, if only for the unlikely case that an application doesn't even bother mapping memory but maps virtual memory directly.

Set your swap file to:

  • RAM+round(sqrt(RAM)) if you use hibernation
  • round(sqrt(RAM)) if you don't

Set your swappiness to 10 on a desktop, but not on a server!

The long answer:

In the past:

The rule of thumb in use for the last 25 years has been a minimum of 1x RAM and a maximum of 2x RAM, so that is what you'll see quoted all the time.

That minimum was set back in the stone age when I was a teenager and dinosaurs still roamed the Earth and because RAM was just too expensive and you absolutely needed that swap space to be able to accomplish anything.

The maximum was set at that time because of diminishing returns: it's just too slow to have to swap that much memory, as HDD access is a factor of 1000 slower than RAM: good in an emergency, but not really good for everyday use! At the time, when you ran out of swap space, it was time to add more RAM (which is still true today).

In the present:

  1. If you do not use hibernation and your memory is in excess of 1 GByte, the new rule of thumb is round(sqrt(RAM)), where RAM is your RAM size in GB and sqrt the square root. :-)

  2. If you use hibernation, you need to be able to swap the entire amount of RAM plus the already-swapped RAM to disk, thus the formula becomes: RAM + round(sqrt(RAM))

  3. The rule of diminishing returns still holds today for the maximum, but unless you test your actual usage, taking 2xRAM is just a waste of disk space, so don't use the maximum unless you run out of swap space using the other methodologies.

All of these together give you the following table: (last 3 columns denoting swap space)

    RAM   No hibernation    With Hibernation    Maximum
    1GB              1GB                 2GB        2GB
    2GB              1GB                 3GB        4GB
    3GB              2GB                 5GB        6GB
    4GB              2GB                 6GB        8GB
    5GB              2GB                 7GB       10GB
    6GB              2GB                 8GB       12GB
    8GB              3GB                11GB       16GB
   12GB              3GB                15GB       24GB
   16GB              4GB                20GB       32GB
   24GB              5GB                29GB       48GB
   32GB              6GB                38GB       64GB
   64GB              8GB                72GB      128GB
  128GB             11GB               139GB      256GB
  256GB             16GB               272GB      512GB
  512GB             23GB               535GB        1TB
    1TB             32GB                 1TB        2TB
    2TB             45GB                 2TB        4TB
    4TB             64GB                 4TB        8TB
    8TB             91GB                 8TB       16TB
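
If your RAM size isn't in the table, the same numbers can be computed directly. A small sketch of the formulas above (set ram to your RAM size in GB):

# rule-of-thumb swap sizes for a given amount of RAM (in GB)
ram=16
noswap=$(awk -v r="$ram" 'BEGIN { printf "%d", sqrt(r) + 0.5 }')  # round(sqrt(RAM))
echo "no hibernation:   ${noswap}GB"
echo "with hibernation: $((ram + noswap))GB"
echo "maximum:          $((ram * 2))GB"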

The above is just a rule of thumb; it's not the law of gravity!
You can break this rule (unlike the law of gravity) if your particular use case is different!

Pro tip: On a spinning HDD, allocate swap at the start of the drive: the first sectors sit on the outer tracks, which offer the highest transfer rates.
On SSDs it no longer matters where you locate the swap area, as there are no heads to move, and modern SSD controllers spread writes across all of their memory cells (even unallocated space) to even out wear.

How to test if your usage of swap is different from the "generic" rule:

Just execute:

# list each process with its swapped-out size (in kB), largest first
for szFile in /proc/*/status ; do
  awk '/^(VmSwap|Name)/ { printf "%s\t%s", $2, $3 }
       END              { print "" }' "$szFile"
done | sort --key 2 --numeric --reverse | more

which will give you a list of all running programs that are swapped out, with the one using the most swap space on top.

If you're using more than a few KB, resize to more than the minimum; otherwise, don't bother...

If you're on a server, stop reading now: you're all set!


If you're on a desktop/laptop client (not a server), you want your GUI to be as responsive as possible and to swap only when you really need to. Ubuntu has been optimised to swap early for server use, but on a client you want editing that huge 250 mega-pixel raw picture in gimp to be speedy. Setting your swappiness to 10 will keep the kernel from swapping too early, while ensuring it doesn't swap too late.
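
First check the current value (a quick sanity check; 60 is the typical default):

# show the running value of vm.swappiness
sysctl vm.swappiness

Then make the change permanent: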

If you have a sysctl.conf file,

sudo nano /etc/sysctl.conf

OR

If you have a sysctl.d directory but no sysctl.conf file, create a new file:

sudo nano /etc/sysctl.d/35_swap.conf 

and in both cases add:

# change "swappiness" from default 60 to 10 
# (theoretically only swap when RAM usage reaches around 80 or 90 percent)
vm.swappiness = 10

to the end of the file, save the file (Ctrl+X, then Y, then Enter in nano) and execute:

sysctl --system

to reload the parameters, or take the Window$ approach and reboot... :-)

Fabby
  • 5,384
  • +1 because of the script to test swap usage. But oh boy, that part about using double the RAM when you have so much RAM is scary for whoever reads it. – Emerson Rocha May 31 '20 at 10:58
  • 1
    @EmersonRocha You probably missed this bit: *taking 2xRAM is just a waste of disk space* (and thus the absolute maximum as I've seen people allocate 10* the RAM) – Fabby Jun 02 '20 at 07:22
1

Swap will be needed if you don't have enough RAM to run all your programs.

You say you're not doing anything which requires a lot of RAM. So, you do have enough RAM.

Then, you don't need swap space.

But, if you think that at some point, despite what you imply in your question, your programs will use, say, more than half (or two thirds) of your RAM (rule of thumb), then please read the other "pro-swap" answers. You will not need swap, but it could enhance your system's performance (by allowing your system to make better use of extra RAM, e.g. for caching or buffers).

Totor
  • 20,040
  • Using swap can help avoid a catastrophic crash but will never enhance performance, any more than a parachute will speed up a Ferrari. ;-) – Elder Geek Mar 20 '15 at 01:39
  • @ElderGeek It might be an edge case, but swap can help. There are times where you have just enough RAM for everything, but idle programs could be swapped out to make better use of the RAM (e.g. some useful disk cache). I used to see this a lot in the late 90's. I see this sometimes now when I have <1GB of free RAM (including disk cache), without actually needing swap space. – jbo5112 Mar 22 '15 at 22:49
  • @jbo5112 I won't rule out the possibility entirely if you say so, my experience has shown otherwise thus far. This isn't too surprising as it's a case of how the system is used more than a hard and fast rule (that everyone seems to seek). – Elder Geek Mar 23 '15 at 12:26
  • @ElderGeek I may only see it now because my disks are a decade old and in desperate need of cache. Usually there is an additional problem of swappiness being set way too high for almost any case, which causes swap space to be used way too aggressively. Dropping the value from 60 to 10 seems a common recommendation. You can go even lower, but at some point you'll dump too much disk cache, causing a lot of extra I/O while swapping. – jbo5112 Mar 24 '15 at 02:49
  • @jbo5112 Good point. It's always a balancing act between the different subsystems. I remember back in the day adjusting thimgs like hard disk interleave and RAM refresh rate to squeeze max performance out of 80286 systems with Seagate ST225 and ST238 drives. We always have to approach things on a case by case basis to squeeze out the maximum performance and reliability. Congratulations BTW! The last time I was able to squeeze a decade out of a drive it was a Micropolis that held a grand total of 600MB. – Elder Geek Mar 24 '15 at 13:58
-1

Compromise answer: it depends on the meaning of "should".

Do you need a swap partition in the sense that something bad will happen if you don't have one under the operating conditions you describe? No.

Is it wise to have a swap partition just in case you accidentally spawn an army of memory hogs, so that you have a chance to kill them before the OOM killer kicks in? Yes.

If your physical RAM "greatly" exceeds the data memory usage of all programs you will run simultaneously at all times, then there is no performance benefit to having swap. If it exceeds it, but not "greatly", there may be a performance benefit if the OS is able to swap out rarely-used memory to keep more frequently-accessed file data in memory.

In summary, it's great that you have 16GB RAM. But, if you also have a 1TB disk, can't you reserve 16GB of it to swap? It's only 1.5% of the disk.
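
If you decide to reserve that space, it's worth checking first what (if anything) is already configured; both commands below are standard util-linux tools:

# list active swap areas and their sizes
swapon --show

# see how the disk is currently partitioned
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT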

Atsby
  • 394