
Here's the output of free -m:

              total        used        free      shared  buff/cache   available
Mem:            421         158         153          39         109         195
Swap:             0           0           0

I executed echo 3 > /proc/sys/vm/drop_caches to drop all possible caches, but the buff/cache value still stays at 109 MB. What is holding those caches? Can I drop them somehow?

The system is Xubuntu 16.04.

Some of those caches (43 MB) are probably used by tmpfs:

tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=43188k,mode=700,uid=1000,gid=1000)

That still leaves more than 50 MB to account for.

Output of df -mt tmpfs:

Filesystem     1M-blocks  Used Available Use% Mounted on
tmpfs                 43     3        40   7% /run
tmpfs                211     1       211   1% /dev/shm
tmpfs                  5     1         5   1% /run/lock
tmpfs                211     0       211   0% /sys/fs/cgroup
tmpfs                 43     1        43   1% /run/user/1000
    tmpfs? sync? – frostschutz Nov 09 '16 at 20:40
  • @frostschutz - sync was done already, tmpfs is listed at size 43mb - which still leaves more than 50mb to account for. – Rogach Nov 09 '16 at 20:45
  • Well it only drops clean caches so if any are dirty they'll remain. And also there's always stuff going on e.g. kern.log output and such. – Jason C Nov 09 '16 at 20:52
  • @JasonC - the system is idle, not much should be going on - why should that activity consume 50mb? It's not just idle interest on my part - I need to fight for each megabyte here, so those caches are definitely bothering me =) – Rogach Nov 09 '16 at 20:55
  • tmpfs space is allocated on-demand. use df -mt tmpfs. – sourcejedi Nov 09 '16 at 20:57
  • @sourcejedi - added output to question. It seems to confirm 43 MB (but the actual size of files stored in /tmp is about 50 KB). – Rogach Nov 09 '16 at 21:08
  • You might want to rethink. Linux is good about cache management and reclaiming memory as needed, it doesn't really matter if a lot looks like it's in use by caches at any given time. If you're running into real problems, a better approach would be to decrease the max size of various caches and let it do its thing. You want it to be using as much memory as possible for caches at any time, it's smart about releasing it on demand. While it's possible your usage pattern isn't compatible with its decisions, Linux memory management has been finely tuned over a very long time, and works very well. – Jason C Nov 09 '16 at 23:17
  • you can easily confirm that tmpfs space is allocated on-demand by creating a 50MB file in /dev/shm, buff/cache should increase by about 50M (provided you have space for it in terms of free). – sourcejedi Nov 10 '16 at 08:06
  • dd if=/dev/zero bs=1M count=50 of=/dev/shm/test – sourcejedi Nov 10 '16 at 08:15

1 Answer


The tmpfs would only use that 43MB if you filled it. It does not reserve the memory in advance. However:

Believe it or not, the 39 MB "shared" figure cannot be dropped, and all of it is counted inside "buff/cache". It includes all your tmpfs files. It also includes "shared" memory, which is allocated from an internal kernel tmpfs :-). That covers System V shared memory, and also some types of graphics buffers.
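You can confirm that tmpfs and shared memory land in this counter yourself. The sketch below (assuming a writable /dev/shm; the filename shmem-demo is just an illustration) watches the Shmem line of /proc/meminfo while a file is written into tmpfs:

```shell
# Shmem in /proc/meminfo covers tmpfs files and shared memory;
# it is part of buff/cache, and drop_caches cannot reclaim it.
before=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
dd if=/dev/zero of=/dev/shm/shmem-demo bs=1M count=10 2>/dev/null
after=$(awk '/^Shmem:/ {print $2}' /proc/meminfo)
rm /dev/shm/shmem-demo
echo "Shmem grew by $(( (after - before) / 1024 )) MB"
```

Deleting the file returns the memory immediately; writing to drop_caches while the file exists would not.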

Anyway, those two corrections roughly cancel out: the ~43 MB you attributed to tmpfs is not actually reserved, while the ~39 MB of shared memory you did not count is included in buff/cache. So what about the rest of the memory?

When you drop caches in Linux, it chooses not to drop any cache which is mapped by a currently running program. Many of these mappings will be program/library code files.

Some data files may also be mapped. For example when you run journalctl to browse the systemd log, it accesses the log files using mmap() (as opposed to read()).
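To see which files a given process has mapped, you can read its maps file under /proc. As a quick sketch, this lists the file-backed mappings of the process itself (the exact paths shown depend on your system):

```shell
# Column 6 of /proc/PID/maps is the backing file, when there is one.
# Pages of these files count as "Mapped" page cache, which
# drop_caches skips.
awk '$6 ~ /^\// {print $6}' /proc/self/maps | sort -u
```

Run against a long-lived daemon's PID instead of self, this shows exactly which cached files that process is pinning.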

You can check what the remaining caches are with sudo smem -t -m. I expect they will mostly be currently running programs, and the libraries they use.
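If smem is not available, /proc/meminfo gives a coarser system-wide view of the same accounting (field names as documented in proc(5)):

```shell
# Cached  = page cache, including tmpfs/shared memory (Shmem)
# Mapped  = the part of the cache currently mmap()ed by processes
# Neither Mapped nor Shmem pages are freed by drop_caches.
grep -E '^(Cached|Mapped|Shmem):' /proc/meminfo
```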

Proof

In case you want to verify this, here are links to the kernel code:

drop_caches works by calling invalidate_mapping_pages() for each cached "inode" (file).

invalidate_mapping_pages() - Invalidate all the unlocked pages of one inode

[...]

invalidate_mapping_pages() will not block on IO activity. It will not invalidate pages which are dirty, locked, under writeback or mapped into pagetables.

If you had any "dirty" pages - cached writes - or in-progress writes, those would also not be dropped or waited for. This was also mentioned in Documentation/sysctl/vm.txt.
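To check whether dirty or in-flight writeback pages are part of what remains, compare the relevant /proc/meminfo counters before and after a sync (a sketch; on an idle system both figures are usually small):

```shell
# Dirty pages and pages under writeback cannot be invalidated;
# sync flushes them so that a later drop_caches can reclaim more.
grep -E '^(Dirty|Writeback):' /proc/meminfo
sync
grep -E '^(Dirty|Writeback):' /proc/meminfo
```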

In the case of a dirty/writeback page, invalidate_mapping_pages() "[tries] to speed up its reclaim" by calling deactivate_file_page(). I did not check exactly what this means :-).
