
In the process of optimizing the Android system, the configuration of the 'swappiness' value is crucial. The value lives at /proc/sys/vm/swappiness. Some sources suggest setting it to 60, while others recommend 120. Some state that its maximum value is 100, while others claim it is 200, which confuses me.

How can I confirm the maximum value supported in my system?

Links I can find on Stack Overflow:

  • As a general rule, you can guess that if something is configurable, there's no one "right" answer to what's best or simply "better". – Philip Couling Feb 06 '24 at 12:19
  • 0 or no swap at all is the best value. SWAP makes the system behavior unpredictable, introduces lags and increases latency. – Artem S. Tashkinov Feb 06 '24 at 13:03
  • @ArtemS.Tashkinov if what you say is true, why did the kernel developers add this feature? Why did they decide to set the default value to 60? Why didn't they set the default to 0, if indeed it's the "best value"? Either they just generated a random number, or they gave it some thought beforehand and there is some reasoning behind it. Maybe you're right for some cases, but you shouldn't be so unequivocal. – aviro Feb 06 '24 at 13:50
  • @ArtemS.Tashkinov It's a bit like saying "McDonald's is the best restaurant" because it's predictable and consistent the world over. If consistency is all you value (as is common for sysadmins running K8s clusters) then that's fine. But it's certainly not true for all of computing, just as there are clearly better restaurants than McDonald's. – Philip Couling Feb 06 '24 at 22:53
  • @aviro The whole swap thing was created decades ago when RAM was expensive as hell and computers needed to swap out to run the tasks they needed to run. I've had all my PCs and servers (over two hundred, executing hundreds of HTTP/DB requests every second) swapless for over 25 years now with zero issues and more performance to boot. Ha-ha, so funny. I really wanted to say something brutally offensive but I'll restrain myself. – Artem S. Tashkinov Feb 08 '24 at 15:52
  • @PhilipCouling We've had this conversation a couple of times already. Not a single Unix SE user has been able to provide a single scientific reproducible test which shows that SWAP actually increases performance. In the vast majority of cases it cannot and will not. It may allow more applications to run, and in some corner cases it may allow you to run something which would otherwise be impossible to run. But in the vast majority of cases adding more RAM is a million times more preferable than having SWAP enabled. It's deprecated tech with a ton of drawbacks. – Artem S. Tashkinov Feb 08 '24 at 15:56
  • @ArtemS.Tashkinov I don't recall having such a conversation; perhaps you could link it. But I've personally worked in contexts where swap does improve performance, particularly with low-memory embedded devices. What you are describing is fine for web servers and database servers in particular, but it's wrong to claim that it's true for all computing contexts. – Philip Couling Feb 08 '24 at 17:37

1 Answer


The maximum value of swappiness is 100 on kernel versions before 5.8; it was increased to 200 in kernel 5.8.
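If you want to confirm the limit on your own device rather than rely on the version number alone, something like the following Python sketch should work. It's a minimal illustration assuming a standard /proc layout; the optional probe_max() helper is a hypothetical approach that needs root, briefly rewrites the current value, and relies on the kernel rejecting out-of-range writes to this sysctl.

```python
#!/usr/bin/env python3
# Minimal sketch: infer the maximum supported vm.swappiness from the kernel
# version, and optionally confirm it by probing /proc/sys/vm/swappiness.
import platform

SWAPPINESS = "/proc/sys/vm/swappiness"

def kernel_version():
    # platform.release() gives e.g. "5.15.0-91-generic"; keep (major, minor)
    major, minor = platform.release().split(".")[:2]
    return int(major), int(minor.split("-")[0])

def max_by_version():
    # the cap was 100 before 5.8 and was raised to 200 in 5.8
    return 200 if kernel_version() >= (5, 8) else 100

def probe_max():
    """Try writing 200 (needs root); pre-5.8 kernels reject it as out of range."""
    with open(SWAPPINESS) as f:
        old = f.read().strip()
    try:
        with open(SWAPPINESS, "w") as f:
            f.write("200")
    except PermissionError:
        raise SystemExit("the probe needs to run as root")
    except OSError:
        return 100          # write rejected -> the cap is still 100
    with open(SWAPPINESS, "w") as f:
        f.write(old)        # restore the original setting
    return 200

if __name__ == "__main__":
    with open(SWAPPINESS) as f:
        print("current swappiness:", f.read().strip())
    print("max (by kernel version):", max_by_version())
```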

If you carefully read the answers in the questions you linked (and there are tons of other similar questions and articles to be found on Google), you'll see that none of them provides a single "best" value. It depends on the exact workload and its specific behavior on the machine you're running the OS on, and finding it simply requires some benchmarking.

Usually the default value (60) is the most balanced one.

Though the answers in your links explain how swappiness affects the system, I'll try to make it simpler, without the calculations and numbers that others add to their answers, which make them more precise but maybe a little more difficult to read.

If you want the gist of it in an extremely simplified manner (just for the sake of making it more intuitive), you could say your RAM holds two primary types of pages:

  1. Anonymous pages, which are just the "regular" memory used by processes. This is the memory used for a process's dynamic data, such as its variables.
  2. File-backed memory, or page cache (I won't get into the difference). This is used to cache file data from disk in memory. The cost of reading a file directly from disk is pretty high, so the kernel usually keeps the data it has read in its page cache; the next time you read from that file, the request is satisfied directly from RAM and you don't need to access the disk again. This dramatically accelerates reading files from disk. (The sketch right after this list shows how to see both kinds of memory on your system.)
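
To get a feel for how much of your RAM currently falls into each of these two categories, you can read the counters in /proc/meminfo. Here is a minimal sketch; the field names are the standard ones exposed by recent kernels, and the values are reported in kB:

```python
#!/usr/bin/env python3
# Minimal sketch: show how much RAM is anonymous memory vs. file-backed
# page cache, using the counters in /proc/meminfo (values are in kB).
FIELDS = ("MemTotal", "AnonPages", "Cached", "Buffers", "SwapTotal", "SwapFree")

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            if key in FIELDS:
                info[key] = int(rest.split()[0])   # drop the trailing "kB"
    return info

if __name__ == "__main__":
    m = meminfo()
    print(f"total RAM       : {m['MemTotal'] / 1024:9.1f} MiB")
    print(f"anonymous pages : {m['AnonPages'] / 1024:9.1f} MiB")
    print(f"page cache      : {m['Cached'] / 1024:9.1f} MiB")
    print(f"swap in use     : {(m['SwapTotal'] - m['SwapFree']) / 1024:9.1f} MiB")
```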

When there's high memory pressure and the kernel needs to free some space in RAM, it faces a difficult decision:

  1. Should it clear the file-backed memory? This frees space for new pages really fast, but then if you access the file again, it has to be read back from disk, which is costly.
  2. Should it swap out some anonymous pages? Maybe there are some inactive pages that haven't been used for a long time and won't be needed by the process for a while, so it's better to write them to swap and keep the file cache in memory to save additional I/O.

The swappiness value is what hints to the kernel which it should prefer. A value of 0 tells the kernel to avoid swapping out anonymous pages as much as possible and to always reclaim the file-backed memory/cache to make more space. The more you increase it, the higher the chances that your anonymous memory will be swapped out and the file-backed memory kept. On kernels >= 5.8, a value of 200 means the kernel should always prefer to swap out anonymous pages and keep the file-backed memory in RAM.

That's why the recommended value depends on many different factors, such as the workload type, the I/O speed and the swap speed.

  • For servers that run databases, it's commonly recommended to reduce the swappiness to 0, because it's crucial that the DB is able to keep all of its data in RAM to ensure high performance.
  • If you perform a lot of I/O and your disk is relatively slow, you might prefer increasing the swappiness to possibly reduce I/O, if your processes have a lot of inactive memory pages that they probably won't use.
    • On machines that use zswap, where swapped-out pages are compressed and kept in a dedicated pool in RAM (instead of being written straight to disk), the cost of swapping is even lower, so it may be fine to favor swapping over clearing file data from RAM. (See the sketch below for a quick way to check whether zswap is in use.)
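
If you're not sure whether zswap (or a zram swap device) is actually in play on your machine, a quick check along these lines may help. Note that /sys/module/zswap/parameters/enabled only exists when zswap is built into the kernel, so treat the paths as assumptions to verify on your system:

```python
#!/usr/bin/env python3
# Minimal sketch: check whether zswap is enabled and which swap devices exist.
# /sys/module/zswap/... is only present when zswap is built into the kernel.
import os

def zswap_enabled():
    path = "/sys/module/zswap/parameters/enabled"
    if not os.path.exists(path):
        return False
    with open(path) as f:
        return f.read().strip() in ("Y", "1")

def swap_devices():
    with open("/proc/swaps") as f:
        lines = f.read().splitlines()[1:]   # skip the header line
    return [line.split()[0] for line in lines]

if __name__ == "__main__":
    print("zswap enabled:", zswap_enabled())
    print("swap devices :", swap_devices() or "none")
```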

But the bottom line is: there's no generally recommended value. For standard workloads, the default value should be fine. If you see too much swapping, or too much I/O, you might want to tune the value carefully, but you'll have to test it: change the value and check how it impacts the performance of the machine until you reach a setting that suits your specific machine and workload best.
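
As a starting point for that testing, here's a rough sketch that samples the system-wide swap-in/swap-out counters from /proc/vmstat and, optionally, the VmSwap line from /proc/<pid>/status for a single process (pass a PID as the first argument). It's only an illustration of where the numbers live, not a replacement for proper tools like vmstat:

```python
#!/usr/bin/env python3
# Minimal sketch: sample swap-in/swap-out activity over a short interval and,
# optionally, how much of one process is currently swapped out.
import sys
import time

def swap_counters():
    # pswpin/pswpout in /proc/vmstat count pages swapped in/out since boot
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

def process_swap_kb(pid):
    # VmSwap in /proc/<pid>/status is the swapped-out part of that process
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmSwap:"):
                return int(line.split()[1])   # kB
    return 0

if __name__ == "__main__":
    pid = sys.argv[1] if len(sys.argv) > 1 else None
    before = swap_counters()
    time.sleep(5)                             # sample interval in seconds
    after = swap_counters()
    print("pages swapped in  (5s):", after["pswpin"] - before["pswpin"])
    print("pages swapped out (5s):", after["pswpout"] - before["pswpout"])
    if pid:
        print(f"VmSwap of pid {pid}:", process_swap_kb(pid), "kB")
```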

aviro
  • There's another good description of this process here: https://unix.stackexchange.com/a/680928/20140 – Philip Couling Feb 06 '24 at 12:26
  • I'm not 100% sure I agree with your wording "Should it clear the file-backed memory?". Surely, if a writable file page has been written to and exists on the same HD as swap, then reclaiming that filesystem page would have the same cost as swapping anonymous pages. Does this just need a couple of extra words of clarification? – Philip Couling Feb 06 '24 at 12:29
  • @PhilipCouling I tried to make it as simple as possible and not get into the gory details. There are other answers that are more accurate and get into the details. My goal in this answer was to prefer clarity and ease of understanding over accuracy and depth. Except for the change in the max value of swappiness, I don't really add anything new to the existing answers for the questions in the OP. They say exactly the same as I wrote, but the additional details there make them less clear for new users. I just tried to show the forest instead of the trees. – aviro Feb 06 '24 at 12:38
  • Thanks for writing this up, I at least think it’s helpful. Unfortunately with swappiness, not only isn’t there a general recommended value, but a single parameter can’t adequately describe all the constraints involved — if configurability is needed, it should really allow I/O costs to be described for each storage and swap device (including zswap). I had an interesting discussion about this a few months ago with Chris Down, and it’s somewhere on various people’s todo lists, but I doubt it will be significantly improved in the medium term. – Stephen Kitt Feb 06 '24 at 12:54
  • @aviro Thank you very much for your response. As you mentioned 'you'll have to test it,' are there any tools to view the debugging results? I only know about using vmstat for analysis; are there any other better tools or methods? For example, is there a way to only check the swap situation for a specific process? – morty morty Feb 07 '24 at 02:16
  • There are different system requirements to meet.

    Sometimes I want smoother app switching, not easily killed; and sometimes I need as fast cold start speed as possible.

    In fact, this is another sub-issue regarding system optimization (https://stackoverflow.com/questions/77938444/how-to-configure-the-performance-system-parameters-of-android-to-achieve-a-balan). This question, though, has a definitive answer, which comes down to the kernel version (5.8).

    Thanks again @aviro

    – morty morty Feb 07 '24 at 02:21