
I'm reading this article on clearing buffers/cache on my new Linux VMs, since they're taking a full third of my RAM, so I ran this:

sync; echo 1 > /proc/sys/vm/drop_caches

This immediately fixed the issue, so I'm going to add it to my cronjob. The article says to create a clearcache.sh file with the contents:

#!/bin/bash
# Note, we are using "echo 3", but it is not recommended in production instead use "echo 1"
echo "echo 3 > /proc/sys/vm/drop_caches"

What happened to sync? Didn't that need to be run for this to work? And why would echoing an echo inside a cronjob run that command? I tried this on the command line and, as predicted, it just echoed the command.

# echo "echo 3 > /proc/sys/vm/drop_caches"
echo 3 > /proc/sys/vm/drop_caches

Is this some sort of magical cron functionality that I'm not privy to?

PatPeter
    Free RAM is wasted RAM. You don't need to do this. – Mikel Jul 12 '18 at 02:52
  • @Mikel I have two programs that take up between 1.3-2 GB of RAM on a 3 GB virtual machine. Now, 2 GB is the internal limit of said programs, but I've configured them so that they'll crash and restart after reaching 1.2 GB of memory usage. This means I need my extra 1 GB of RAM free for these servers to take up the space, crash, and then start again. – PatPeter Jul 12 '18 at 02:56

1 Answer


You're completely missing the point/philosophy of how Linux uses RAM.

RAM sitting completely free is a wasted resource if the OS then has to go to the HDD for files, so Linux aggressively uses spare RAM for buffers and cache to improve performance.

This RAM can be used by a process at any time (it's effectively cache).

Reading from a disk is very slow compared to accessing (real) memory. In addition, it is common to read the same part of a disk several times during relatively short periods of time. For example, one might first read an e-mail message, then read the letter into an editor when replying to it, then make the mail program read it again when copying it to a folder. Or, consider how often the command ls might be run on a system with many users. By reading the information from disk only once and then keeping it in memory until no longer needed, one can speed up all but the first read. This is called disk buffering, and the memory used for the purpose is called the buffer cache.

Reference: http://www.tldp.org/LDP/sag/html/buffer-cache.html
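You can see the disk buffering described above with a quick timing test (a minimal sketch; the cache-dropping line needs root, and the exact timings will vary by system):

```shell
# Create a throwaway 64MB file, then read it twice.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null

sync                                                    # flush dirty pages
echo 1 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # start with a cold cache (needs root)

time cat "$f" > /dev/null   # first read: comes off the disk
time cat "$f" > /dev/null   # second read: served from the buffer cache, noticeably faster

rm -f "$f"
```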

Example

$ free -hw
              total        used        free      shared     buffers       cache   available
Mem:           992M         76M        202M         12M         68M        645M        739M
Swap:          2.0G          0B        2.0G

In this output my VM has ~992MB of RAM, and at first glance it appears to have only 202MB free. But this is where many people get confused or misled.

This Linux system actually has 739MB free (available column).

How is this possible?

Simple. Linux is using RAM to improve performance by holding files, libraries, etc. in RAM (buffers & cache) rather than having to reach out to the slow HDD to retrieve these files each time.

The buffers & cache are used by the Linux kernel in this manner. At any point, if the memory manager (part of the kernel) perceives pressure because processes require more and more RAM, the kernel can literally drop any of the data being cached here to immediately give itself more RAM.
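You can see where that available figure comes from by reading /proc/meminfo directly, which is where free(1) gets its numbers (assuming a Linux system, as here):

```shell
# free(1) reads /proc/meminfo; MemAvailable is the kernel's estimate of
# memory a new workload could use without swapping, which counts
# reclaimable buffers and cache -- not just MemFree.
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached):' /proc/meminfo
```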

Your question

As far as your question about the clearcache.sh script is concerned, the answer is simple: that's a typo by whoever wrote the article. It should be like this:

$ echo "echo 3 > /proc/sys/vm/drop_caches" | sudo sh

They probably copy/pasted incorrectly from my U&L Q&A answer to this question: How do you empty the buffers and cache on a Linux system?
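And to answer the other part of your question: sync did not disappear, it just got dropped in the article's copy/paste. It is worth keeping, because it flushes dirty pages to disk first so that drop_caches can actually reclaim them. A corrected sketch of clearcache.sh might look like this (root's cron runs it as root, so no sudo pipeline is needed; and as the comments above note, you almost certainly shouldn't run this routinely):

```shell
#!/bin/bash
# clearcache.sh -- corrected sketch of the article's script.
# Flush dirty pages to disk first so drop_caches can reclaim them.
sync
# 3 = drop page cache plus dentries and inodes.
echo 3 > /proc/sys/vm/drop_caches
```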


slm
  • Thank you so much for the answer @slm!!! I've been dealing with what I initially thought was a DDoS when I bought my virtual servers earlier in the week (could not shell into the server at all), and both my users and I have been losing total access to the server every hour or so for 15 minutes. My host confirmed that the host machine isn't under heavy load so my newest lead as to the cause of this was memory issues. – PatPeter Jul 12 '18 at 03:41