
/!\ Current state: Update 4 /!\

Some /proc/meminfo values are a sum or a difference of other values. However, not much is said about how they are calculated in these two links (just Ctrl-F "meminfo" to get there):

I've also dug around here and there, and here's what I found so far:

MemFree:              LowFree + HighFree
Active:               Active(anon) + Active(file)
Inactive:             Inactive(anon) + Inactive(file)
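
Those three do in fact match the kernel source (see Update 1 below for where the code lives): in fs/proc/meminfo.c (v4.15), the sums are computed explicitly. An abridged excerpt:

    static int meminfo_proc_show(struct seq_file *m, void *v)
    {
        struct sysinfo i;
        unsigned long pages[NR_LRU_LISTS];
        int lru;

        si_meminfo(&i);   /* fills i.totalram, i.freeram, i.freehigh, ... */

        /* per-LRU-list page counts, maintained by the VM as pages move */
        for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
            pages[lru] = global_node_page_state(NR_LRU_BASE + lru);

        show_val_kb(m, "MemFree:        ", i.freeram);
        /* ... */
        show_val_kb(m, "Active:         ",
                    pages[LRU_ACTIVE_ANON] + pages[LRU_ACTIVE_FILE]);
        show_val_kb(m, "Inactive:       ",
                    pages[LRU_INACTIVE_ANON] + pages[LRU_INACTIVE_FILE]);
        /* ... */
    }

On CONFIG_HIGHMEM kernels, LowFree is printed as i.freeram - i.freehigh and HighFree as i.freehigh, so MemFree = LowFree + HighFree holds by construction.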

I have not found much about the other fields, and where I have, the results don't match, like in these Stack Overflow posts:

Are the calculations given there correct? Or is there some variability due to external factors?

Also, some values obviously can't be derived from the other fields alone, but I'm still interested in how those are obtained.

How are /proc/meminfo values calculated?


If that helps, here's an example of /proc/meminfo:

MemTotal:         501400 kB
MemFree:           38072 kB
MemAvailable:     217652 kB
Buffers:               0 kB
Cached:           223508 kB
SwapCached:        11200 kB
Active:           179280 kB
Inactive:         181680 kB
Active(anon):      69032 kB
Inactive(anon):    70908 kB
Active(file):     110248 kB
Inactive(file):   110772 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:
HighFree:
LowTotal:
LowFree:
MmapCopy:
SwapTotal:        839676 kB
SwapFree:         785552 kB
Dirty:                 4 kB
Writeback:             0 kB
AnonPages:        128964 kB
Mapped:            21840 kB
Shmem:              2488 kB
Slab:              71940 kB
SReclaimable:      41372 kB
SUnreclaim:        30568 kB
KernelStack:        2736 kB
PageTables:         5196 kB
Quicklists:
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1090376 kB
Committed_AS:     486916 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        4904 kB
VmallocChunk:   34359721736 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:
ShmemPmdMapped:
CmaTotal:
CmaFree:
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       36800 kB
DirectMap2M:      487424 kB
DirectMap4M:
DirectMap1G:

Update 1:

Here's the code used by /proc/meminfo to fill its data:

http://elixir.free-electrons.com/linux/v4.15/source/fs/proc/meminfo.c#L46

However, since I'm not much of a coder, I'm having a hard time figuring out where these enums (e.g. NR_LRU_LISTS) and global variables (e.g. totalram_pages, used by si_meminfo in page_alloc.c#L4673) are filled in.

Update 2:

The enums part is now solved, and NR_LRU_LISTS equals 5.
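
For the record, the enum is defined in include/linux/mmzone.h (v4.15); abridged, with the resulting values as comments:

    #define LRU_BASE 0
    #define LRU_ACTIVE 1
    #define LRU_FILE 2

    enum lru_list {
        LRU_INACTIVE_ANON = LRU_BASE,                         /* 0 */
        LRU_ACTIVE_ANON   = LRU_BASE + LRU_ACTIVE,            /* 1 */
        LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,              /* 2 */
        LRU_ACTIVE_FILE   = LRU_BASE + LRU_FILE + LRU_ACTIVE, /* 3 */
        LRU_UNEVICTABLE,                                      /* 4 */
        NR_LRU_LISTS                                          /* 5 */
    };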

But the totalram_pages part seems to be harder to find out...
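
The furthest I got (my reading of v4.15, so treat this as a sketch): totalram_pages is a plain global in mm/page_alloc.c that gets incremented at boot as memory is handed over from the early boot allocator to the buddy allocator, e.g. in mm/nobootmem.c on kernels built with CONFIG_NO_BOOTMEM:

    unsigned long __init free_all_bootmem(void)
    {
        unsigned long pages;

        reset_all_zones_managed_pages();

        /* release everything the early allocator still holds ... */
        pages = free_low_memory_core_early();
        /* ... and account it as usable RAM */
        totalram_pages += pages;

        return pages;
    }

It is later adjusted by things like free_reserved_area() (freeing initmem) and adjust_managed_page_count() (memory hotplug, ballooning), which is why MemTotal doesn't exactly match the physical RAM size.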

Update 3:

It looks like I won't be able to read the code myself, since it looks very complex. If someone manages to work through it and shows how the /proc/meminfo values are calculated, they can have the bounty.

The more detailed the answer is, the higher the bounty will be.

Update 4:

A year and a half later, I learned that one of the reasons behind this very question is in fact related to the very infamous OOM (Out Of Memory) bug, which was finally acknowledged in August 2019 after AT LEAST 16 YEARS of "wontfix", once a famous Linux guy (thank you again, Artem S Tashkinov! :) ) finally got the non-elitist Linux community's voices heard: "Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop".

Also, most Linux distributions have been calculating the really available RAM more precisely since around 2017 (I hadn't updated my distro at the time of this question), even though the kernel fix landed in 3.14 (March 2014). The commit also gives a few more clues: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773
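
For what it's worth, that commit introduced si_mem_available() in mm/page_alloc.c; the estimate it computes boils down to the following (abridged from v4.15, comments mine):

    long si_mem_available(void)
    {
        unsigned long pagecache, wmark_low = 0;
        unsigned long pages[NR_LRU_LISTS];
        struct zone *zone;
        long available;
        int lru;

        for (lru = LRU_BASE; lru < NR_LRU_LISTS; lru++)
            pages[lru] = global_node_page_state(NR_LRU_BASE + lru);

        for_each_zone(zone)
            wmark_low += zone->watermark[WMARK_LOW];

        /* free pages userspace can take without hitting the reserves */
        available = global_zone_page_state(NR_FREE_PAGES) - totalreserve_pages;

        /* page cache is reclaimable, but keep half of it (or wmark_low) */
        pagecache = pages[LRU_ACTIVE_FILE] + pages[LRU_INACTIVE_FILE];
        pagecache -= min(pagecache / 2, wmark_low);
        available += pagecache;

        /* same idea for the reclaimable part of the slab caches */
        available += global_node_page_state(NR_SLAB_RECLAIMABLE) -
                     min(global_node_page_state(NR_SLAB_RECLAIMABLE) / 2,
                         wmark_low);

        if (available < 0)
            available = 0;
        return available;
    }

In short, MemAvailable = free pages above the reserves, plus the part of the page cache and of the reclaimable slab that the kernel guesses it could drop without starting to swap. It is an estimate, which is one reason it can disagree with the "real" usable RAM.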

But the OOM problem is still here in 2021, even if it happens less often thanks to somewhat stopgap fixes (earlyoom and systemd-oomd), while the calculated available RAM still doesn't correctly reflect the RAM actually in use.

Also, these related questions might have some answers:

So, my point in "Update 3" about how /proc/meminfo gets its values still stands.

However, there are more insights about the OOM issue at the next link, which also talks about a very promising project against it, one that even comes with a bit of a GUI!: https://github.com/hakavlad/nohang

The first tests I did seem to show that nohang really does what it promises, and even better than earlyoom.

X.LINK

1 Answer


The contents of /proc/meminfo are determined by meminfo_proc_show in fs/proc/meminfo.c in the kernel.

The calculations are all relatively straightforward, but the sources of information used aren’t necessarily so obvious. For example, MemTotal is the totalram value from the sysinfo structure; that’s filled in by si_meminfo in mm/page_alloc.c.
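
Abridged, si_meminfo looks like this in v4.15 (show_val_kb in meminfo.c then converts the page counts into the kB figures you see):

    /* mm/page_alloc.c */
    void si_meminfo(struct sysinfo *val)
    {
        val->totalram  = totalram_pages;                        /* MemTotal  */
        val->sharedram = global_node_page_state(NR_SHMEM);      /* Shmem     */
        val->freeram   = global_zone_page_state(NR_FREE_PAGES); /* MemFree   */
        val->bufferram = nr_blockdev_pages();                   /* Buffers   */
        val->totalhigh = totalhigh_pages;                       /* HighTotal */
        val->freehigh  = nr_free_highpages();                   /* HighFree  */
        val->mem_unit  = PAGE_SIZE;
    }

So nothing is scanned when /proc/meminfo is read: the file just reports counters (totalram_pages, the NR_* vmstat counters, and so on) that the memory-management code keeps up to date as pages are allocated and freed.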

Stephen Kitt
  • That's pretty straightforward indeed. However, I'm having a hard time (I'm not that much of a programmer) finding where those enum members are defined (e.g. NR_LRU_LISTS, NR_LRU_BASE, LRU_ACTIVE_ANON, etc.). I did find this website: http://elixir.free-electrons.com/linux/v4.15/source/fs/proc/meminfo.c ; but clicking on these values seems to point to a lot of possibilities, still without knowing what was called right before meminfo_proc_show(). – X.LINK Jan 31 '18 at 15:02
  • 1
    I’ll expand my answer to detail where the values come from, but it will take me a little while to write it all up. – Stephen Kitt Feb 01 '18 at 08:17
  • Any news about how /proc/meminfo values are calculated? – X.LINK Feb 24 '18 at 03:30
  • 1
    I haven’t forgotten you, it’s taking me longer than I hoped to write everything up... – Stephen Kitt Feb 24 '18 at 10:04
  • In fact, this question is linked to the infamous oomd situation. But even with how Linux changed its way of calculating used RAM since mid-2017 at the latest (I hadn't had time to upgrade my distro back then), by adding the "available" memory column in top, and with fixes like earlyoom and systemd-oomd, the main question and the one about where the values come from still apply (I still get OOM issues with "5GB" used out of 8GB on a recent distro, even when I count buff/cache in). – X.LINK Jan 20 '21 at 21:44
  • Was this ever formally answered somewhere? I would love to understand just HOW those values about RAM are gathered. Is the kernel traversing the entire address space or something and counting in some increment? I see the code, but it's all so difficult for me to follow since I'm not a programmer. – Kahn Sep 27 '21 at 14:56
  • 1
    @Kahn thanks for the reminder, I’ll bump this up my to-do list. – Stephen Kitt Sep 27 '21 at 15:00