
I ran free -m on a Debian VM running on Hyper-V:

             total       used       free     shared    buffers     cached
Mem:         10017       9475        541        147         34        909
-/+ buffers/cache:       8531       1485
Swap:         1905          0       1905

So out of my 10GB of memory, 8.5GB is in use and only 1500MB is free (excluding cache).

But I struggle to find what is using the memory. The output of ps aux | awk '{sum+=$6} END {print sum / 1024}', which is supposed to add up the RSS utilisation (in MB), is:

1005.2

In other words, my processes only use 1GB of memory, but the system as a whole (excluding cache) uses 8.5GB.

What could be using the other 7.5GB?

PS: I have another server with a similar configuration that shows used memory of 1200MB (free memory = 8.8GB), and the sum of RSS usage from ps is 900MB, which is closer to what I would expect...
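
For what it's worth, summing RSS over-counts rather than under-counts, because pages shared between processes (e.g. libraries) are counted once per process. A rough cross-check that sums the proportional set size (Pss) from /proc/<pid>/smaps instead, which needs root to read every process's smaps, might look like this:

    # Sum Pss (kB) across all processes and print MB; Pss apportions shared
    # pages between the processes that map them, unlike RSS.
    sudo grep -h '^Pss:' /proc/[0-9]*/smaps 2>/dev/null | awk '{sum += $2} END {print sum / 1024 " MB"}'

If that figure is also around 1GB, the missing memory is not held by userspace processes at all.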


EDIT

cat /proc/meminfo on machine 1 (low memory):

MemTotal:       10257656 kB
MemFree:          395840 kB
MemAvailable:    1428508 kB
Buffers:          162640 kB
Cached:          1173040 kB
SwapCached:          176 kB
Active:          1810200 kB
Inactive:         476668 kB
Active(anon):     942816 kB
Inactive(anon):   176184 kB
Active(file):     867384 kB
Inactive(file):   300484 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       1951740 kB
SwapFree:        1951528 kB
Dirty:                16 kB
Writeback:             0 kB
AnonPages:        951016 kB
Mapped:           224388 kB
Shmem:            167820 kB
Slab:              86464 kB
SReclaimable:      67488 kB
SUnreclaim:        18976 kB
KernelStack:        6736 kB
PageTables:        13728 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     7080568 kB
Committed_AS:    1893156 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       62284 kB
VmallocChunk:   34359672552 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       67520 kB
DirectMap2M:    10418176 kB

cat /proc/meminfo on machine 2 (normal memory usage):

MemTotal:       12326128 kB
MemFree:         8895188 kB
MemAvailable:   10947592 kB
Buffers:          191548 kB
Cached:          2188088 kB
SwapCached:            0 kB
Active:          2890128 kB
Inactive:         350360 kB
Active(anon):    1018116 kB
Inactive(anon):    33320 kB
Active(file):    1872012 kB
Inactive(file):   317040 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       3442684 kB
SwapFree:        3442684 kB
Dirty:                44 kB
Writeback:             0 kB
AnonPages:        860880 kB
Mapped:           204680 kB
Shmem:            190588 kB
Slab:              86812 kB
SReclaimable:      64556 kB
SUnreclaim:        22256 kB
KernelStack:       10576 kB
PageTables:        11924 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     9605748 kB
Committed_AS:    1753476 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       62708 kB
VmallocChunk:   34359671804 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       63424 kB
DirectMap2M:    12519424 kB
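
For reference, adding up the major consumers reported in /proc/meminfo and comparing the total against MemTotal gives an idea of how much memory is not accounted for anywhere in the output above (for example memory grabbed by a driver that meminfo does not itemise). A rough sketch; the field list is a simplification, so expect some slack:

    # Rough reconciliation of /proc/meminfo: whatever is left over is memory
    # that none of the usual userspace/cache/kernel counters explain.
    awk '/^(MemFree|Buffers|Cached|SwapCached|AnonPages|Slab|KernelStack|PageTables):/ {sum += $2}
         /^MemTotal:/ {total = $2}
         END {printf "accounted: %d MB, unaccounted: %d MB\n", sum/1024, (total-sum)/1024}' /proc/meminfo

On machine 1 this leaves roughly 7GB unaccounted for; on machine 2 it is well under 100MB.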
assylias

2 Answers


I understand you're using Hyper-V, but the concepts are similar. Maybe this will set you on the right track.

Your issue is likely due to virtual memory ballooning, a technique the hypervisor uses to optimize memory. See this link for a description.

I observed exactly the same symptoms with my VMs in vSphere: a 4G machine with nothing running on it would report 30M used by cache, but over 3G "used" in the "-/+ buffers/cache" line.

Here's sample output from VMware's statistics command, which shows that close to 3G is being tacked on to my "used" amount:

vmware-toolbox-cmd stat balloon
3264 MB

In my case, somewhat obviously, my balloon driver was using ~3G.

I'm not sure what the equivalent command in Hyper-V is to get your balloon stats, but I'm sure you'll see similar results.
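
As a rough first check from inside the guest (just a sketch; I could not find a Hyper-V equivalent of the VMware command above), you can at least confirm whether the Hyper-V balloon driver is active and see what it has been doing:

    # Is the Hyper-V dynamic memory / balloon driver loaded?
    lsmod | grep hv_balloon

    # Kernel messages from the balloon driver (exact wording varies by kernel)
    dmesg | grep -i balloon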

Matt
  • Thanks - you are definitely onto something. lsmod | grep hv_ shows hv_balloon on the low memory machine but not on the other - so the balloon module is loaded on one and not the other. And the behaviour looks very much like this description. – assylias Feb 11 '16 at 05:06
  • Not sure what the equivalent to vmware-toolbox-cmd is on Hyper V though. – assylias Feb 11 '16 at 05:06
  • @assylias I know, sorry. I looked myself while writing this answer and came up empty. However, if you write a program that quickly allocates a lot of memory, that may convince the hypervisor that your VM needs the resources. Similar to the disk cache eviction test case, but with a different root cause. – Matt Feb 11 '16 at 13:20
  • You can turn off the Dynamic Memory feature in Hyper-V to solve this issue. – Ashish Negi Aug 23 '17 at 11:06
  • I don't really see the solution here I'm afraid. – Jamie Hutber Nov 02 '18 at 11:51

https://serverfault.com/questions/85470/meaning-of-the-buffers-cache-line-in-the-output-of-free

Short answer: the kernel uses the buffers/cache memory for various tasks, such as caching files. This memory is available to applications if it is needed, so you are correct in saying you have 944 MB used.
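
For reference, free derives those numbers from /proc/meminfo: the "used" in the Mem: row is MemTotal - MemFree, and the -/+ buffers/cache row simply moves Buffers and Cached from used to free. A sketch of the same arithmetic for older versions of free like the one above (some versions also fold reclaimable slab into the cache column):

    # Reproduce the "-/+ buffers/cache" line (in MB) from /proc/meminfo:
    # used = MemTotal - MemFree - Buffers - Cached, free = MemFree + Buffers + Cached
    awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2}
         END {printf "used: %d MB, free: %d MB\n", (t-f-b-c)/1024, (f+b+c)/1024}' /proc/meminfo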

vik
  • According to that link, 944MB is the amount of cache – assylias Feb 03 '16 at 19:15
  • 2
    No, 944MB is the amount of RAM actually in use by applications and not available to other applications. Reread that post: "Linux (like most modern OS) will always try to use free RAM for caching stuff, so Mem: free will almost always be very low. Therefore the line -/+ buffers/cache: is shown, because it shows how much memory is free when ignoring caches; caches will be freed automatically if memory gets scarce, so they do not really matter." – vik Feb 04 '16 at 19:33
  • Yes, and the -/+ buffers/cache line shows 1.5GB of free memory... – assylias Feb 04 '16 at 19:58
  • Please understand the 1485 free in the -/+ buffers/cache line is NOT the amount of memory available for applications on the system. The amount of memory actually available to your applications is: (10017 - (9475 - 8531)) = 9073. Does this make sense? – vik Feb 04 '16 at 20:13
  • I think you are wrong: the second line excludes the cache & buffers and is the actual memory utilisation of the applications (and that's what your link says...). – assylias Feb 04 '16 at 20:39
  • That's not what the quote says. Try this test, though: turn off all of your swap with swapoff, and run processes that hog memory. Opening multiple firefox tabs with youtube videos autoplaying is a good way to accomplish this, or java -Xms1024M (some process). Now, watch what happens to your memory. If I'm right, the first thing that will happen is free will go down to 0 and used will go to 100%. Next, the used value in buffers/cache will go down as the kernel gives cached/buffered memory to your applications. Cache/buffers will continue to decrease until you run out of ram. – vik Feb 04 '16 at 21:08
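
A rough sketch of the test described in the last comment, assuming the stress package is installed (any program that allocates and keeps touching a few GB would do):

    # Disable swap so allocations can only be satisfied from RAM (run as root)
    swapoff -a

    # One worker that allocates 2GB and keeps re-dirtying it (needs the "stress" package)
    stress --vm 1 --vm-bytes 2048M --vm-keep &

    # Watch "free" drop first, then the buffers/cache figures shrink as caches are evicted
    watch -n 1 free -m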