263

I want to do some low-resource testing and for that I need to have 90% of the free memory full.

How can I do this on a *nix system?

  • 3
    Does it really have to work on any *nix system? – user Nov 08 '13 at 12:31
  • 37
Instead of just filling memory, could you instead create a VM (using docker, or vagrant, or something similar) that has a limited amount of memory? – abendigo Nov 08 '13 at 13:27
  • 4
@abendigo For a QA many of the solutions presented here are useful: for a general-purpose OS without a specific platform the VM or kernel boot parameters could be useful, but for an embedded system where you know the memory specification of the targeted system I would go for filling the free memory. – Eduard Florinescu Nov 09 '13 at 17:40
  • 2
    In case anyone else is a little shocked by the scoring here: http://meta.unix.stackexchange.com/questions/1513/what-causes-questions-like-these-to-have-such-a-high-rate-of-views? – goldilocks Nov 13 '13 at 14:46
  • See also: http://unix.stackexchange.com/a/1368/52956 – Wilf Jun 18 '15 at 18:42

16 Answers

209

stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:

stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:

stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples.

tkrennwa
  • 3
    stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.097;}' < /proc/meminfo)k --vm-keep -m 10 – Robert Oct 23 '15 at 16:47
  • 2
Most of MemFree is kept by the OS, so I used MemAvailable instead. This gave me 92% usage on CentOS 7. stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.98;}' < /proc/meminfo)k --vm-keep -m 1 – kujiy Feb 08 '18 at 00:36
  • good to know, MemAvailable was added to "estimate of how much memory is available for starting new applications, without swapping", see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/proc.txt and https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=34e431b0ae398fc54ea69ff85ec700722c9da773 – tkrennwa Feb 08 '18 at 09:11
  • 2
Just as an added note, providing both --vm 1 and --vm-keep is very important. Simply --vm-bytes does nothing and you might be misled into thinking you can allocate as much memory as you need/want. I got bit by this until I tried to sanity-check myself by allocating 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution. – ffledgling Mar 26 '19 at 12:56
  • 1
    This is why there is -m 1. According to the stress manpage, -m N is short for --vm N: spawn N workers spinning on malloc()/free() – tkrennwa Mar 27 '19 at 03:03
  • I get an allocation error:

    stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1

    stress-ng: info: [28129] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor

    stress-ng: info: [28129] dispatching hogs: 1 vm

    stress-ng: error: [28148] stress-ng-vm: gave up trying to mmap, no available memory

    stress-ng: info: [28129] successful run completed in 10.02s

    I also notice frequent crashes due to alloc errors. What can it be?

    – cipper Apr 16 '20 at 13:33
  • @cipper if you don't understand the behavior of something you can always create a new question using the button in the top right corner of this site. That said, it sounds like you have something else eating the memory of the system. Try multipliers lower than 0.9 to reduce memory usage. The "no available memory" error should be a pretty good hint about the problem. – Mikko Rantalainen Jun 29 '21 at 20:30
157

If you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:

head -c BYTES /dev/zero | tail
head -c 5000m /dev/zero | tail #~5GB, portable
head -c 5G    /dev/zero | tail #5GiB on GNU (not busybox)

This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only about that much memory. For a more precise amount, you will need to check how much RAM head and tail themselves use on your system and subtract that.

To just quickly run out of RAM completely, you can remove the limiting head part:

tail /dev/zero

If you want to also add a duration, this can be done quite easily in bash (will not work in sh):

cat <(head -c 500m /dev/zero) <(sleep SECONDS) | tail

<(command) tells the interpreter to run command and make its output appear as a file, hence echo <(true) will print a file descriptor path, e.g. /dev/fd/63, so to cat it seems like it is being passed two files; more info on it here: http://tldp.org/LDP/abs/html/process-sub.html

The cat command will wait for all of its inputs to complete before exiting, and by keeping one of the pipes open, it will keep tail alive.

If you have pv and want to slowly increase RAM use:

head -c TOTAL /dev/zero | pv -L BYTES_PER_SEC | tail
head -c 1000m /dev/zero | pv -L 10m | tail

The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:

head -c 500m /dev/zero | pv | tail

Just inserting the | pv | part will show you the current status (throughput and total by default).

Compatibility hints and alternatives

  • If you do not have a /dev/zero device, the standard yes and tr tools can substitute: yes | tr \\n x | head -c BYTES | tail (yes outputs an endless stream of "y" lines, tr replaces the newlines so that everything becomes one huge line, and tail has to keep all of that in memory).
  • Another, simpler alternative is using dd: dd if=/dev/zero bs=1G of=/dev/null uses 1 GB of memory on GNU and BusyBox, but also 100% of one CPU core.
  • Finally, if your head does not accept a size suffix, you can calculate the number of bytes inline, for example 50 megabytes: head -c $((1024*1024*50))


Credit to falstaff for contributing a variant that is even simpler and more broadly compatible (e.g. with BusyBox).


Why another answer? The accepted answer recommends installing a package (I bet there's a release for every chipset without needing a package manager); the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed to compile for your target platform); the second top voted answer recommends running the application in a VM (yeah let me just dd this phone's internal sdcard over usb or something and create a virtualbox image); the third suggests modifying something in the boot sequence which does not fill the RAM as desired; the fourth only works in so far as the /dev/shm mountpoint (1) exists and (2) is large (remounting needs root); the fifth combines many of the above without sample code; the sixth is a great answer but I did not see this answer before coming up with my own approach, so I thought I'd add my own, also because it's shorter to remember or type over if you don't see that the memblob line is actually the crux of the matter; the seventh again does not answer the question (uses ulimit to limit a process instead); the eighth tries to get you to install python; the ninth thinks we're all very uncreative and finally the tenth wrote his own C++ program which causes the same issue as the top voted answer.

Luc
  • lovely solution. Only glitch is that the exit code of the construct is 1 because grep does not find a match. None of the solutions from http://stackoverflow.com/questions/6550484/avoid-grep-returning-error-when-input-doesnt-match seem to fix it. – Holger Brandl May 05 '16 at 18:50
  • @HolgerBrandl Good point, I wouldn't know how to fix that. This is the first time I heard of set -e, so I just learned something :) – Luc May 05 '16 at 19:51
  • $SECONDS does not seem a good choice since it's a built in variable reflecting the time since the shell was started. see http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_02.html – Holger Brandl May 10 '16 at 09:42
  • @HolgerBrandl Good catch, I didn't know that. Kinda cool to find a terminal that's open for >3 million seconds currently :D. I updated the post. – Luc May 11 '16 at 07:40
  • 1
    Cool technique! time yes | tr \\n x | head -c $((1024*1024*1024*10)) | grep n (use 10 GiB memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at https://github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue... and very quickly... on my RHEL6.4 machine. Still, I like this solution. Why reinvent the wheel? – Mike S Apr 07 '17 at 20:53
  • Good idea, but the grep seems to allocate in huge chunks. On my 1GB test system with 920MB free, counter stops at 512 MB. With damio's solution, 848 are allocated. Uh, to fix @HolgerBrandl's problem, just add || true at the end. – Otheus Sep 26 '17 at 20:06
  • Works greatly also on OpenWRT, thank you so much! – nemesisdesign Apr 28 '21 at 00:42
  • @Luc, I tried cat /dev/zero | head -c 5G | tail in a redis:6.2.1 container and it didn't work - according to docker stats it was using around 7 MB of RAM.

    But when I executed </dev/zero head -c 5G | tail the RAM used was 5.037G, so obviously it worked. Do you have an idea why the one works while the other doesn't?

    Thanks in advance!

    – DPM Jul 06 '22 at 11:43
  • @DPM Don't really know. I am wondering if it also works without tail, so </dev/zero head -c 5G. It would almost seem like your tail drops null bytes but your head does not, from the examples given. Might it be that this memory usage was too short/brief and it didn't show up on the graph when using the cat ... method? – Luc Jul 06 '22 at 12:22
  • Hi @Luc, this one </dev/zero head -c 5G also doesn't work. I don't think the problem is that the memory usage was too short, because I tried to use 100G of memory, which will close the container and kill it (because I don't have that much memory). The things that I try are on a redis:6.2.1 docker container. You can check them if you are interested. – DPM Aug 30 '22 at 19:02
  • Thanks for the feedback @DPM, in that case I'd say it works as intended: the memory is indeed being used. For a time duration, see some of the other examples (but those tools may not be installed in the contained by default) :) – Luc Aug 30 '22 at 23:06
  • Great answer! One thing I wonder though - is it possible to stop the cat /dev/zero | head | tail pipeline once all the bytes were read? I want to test whether it's possible or not to allocate some amount of memory in a loop without using pv – synapse Nov 04 '22 at 12:05
  • @synapse I don't understand your question. If the head is limited to some amount of bytes, then yes this pipeline construction will (once those bytes are read) output the result and then exit. If you just type cat /dev/zero | head -c 1 | tail does that not exit for you, does it hang forever? – Luc Nov 04 '22 at 23:13
  • This is exactly what I've been looking for. tacobell programming is really the way to go – 0xF4D3C0D3 Nov 07 '22 at 11:16
  • This is a great technique, but I would suggest piping the result to /dev/null. Writing all those null bytes to the terminal costs a huge amount of CPU. Instead of head -c 500m /dev/zero | tail, use head -c 500m /dev/zero | tail > /dev/null. For me, this reduced runtime from about 12s to about 0.27s. – James Scriven Feb 20 '24 at 17:42
  • @JamesScriven Thanks for the suggestion, it is a good one because it's a more clean solution (not filling up the terminal with null bytes). I am not sure if it's better though, because of two aspects: more code makes it harder to remember, and the effect is more observable by taking a few seconds as compared to when it immediately disappears again. Not all environments have the <() syntax available to make the sleep time work. Perhaps this hint for a cleaner solution should be added to the "alternatives" section? If you think that's a good idea, feel free to add an edit and get the credit :) – Luc Feb 20 '24 at 17:52
97

You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out.

Then just let the program wait for keyboard input, and then unlock the memory, free it, and exit.
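
In code, a minimal sketch of this approach might look like the following (my own illustration, not a specific existing program; the size argument in MiB, and the memset to defeat overcommit, discussed in the comments below, are assumptions of the sketch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
        return 1;
    }
    size_t bytes = (size_t)atol(argv[1]) * 1024 * 1024;

    char *mem = malloc(bytes);
    if (mem == NULL) {
        perror("malloc");
        return 1;
    }
    memset(mem, 1, bytes);       /* touch every page so it is really backed by RAM */

    if (mlock(mem, bytes) != 0)  /* may require root or a raised RLIMIT_MEMLOCK */
        perror("mlock");

    printf("Holding %zu bytes; press Enter to release.\n", bytes);
    getchar();

    munlock(mem, bytes);
    free(mem);
    return 0;
}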

heemayl
Chris
  • 31
Long time back I had to test a similar use case. I observed that until you write something to that memory, it is not actually allocated (i.e. until a page fault happens). I am not sure whether mlock() takes care of that. – Sirish Kumar Bethala Nov 08 '13 at 13:31
  • 2
I concur with @siri; however, it depends on which UNIX variant you are using. – Anthony Nov 08 '13 at 13:34
  • 3
    Some inspiration for the code. Furthermore, I think you don't need to unlock/free the memory. The OS is going to do that for you when your process has ended. – Sebastian Nov 08 '13 at 13:44
  • 12
    You probably have to actually write to the memory, the kernel might just overcommit if you only malloc it. If configured to, e.g. Linux will let malloc return successfully without actually having the memory free, and only actually allocate the memory when it is being written to. See http://www.win.tue.nl/~aeb/linux/lk/lk-9.html – Bjarke Freund-Hansen Nov 08 '13 at 14:32
  • @bjarkef then just use calloc. – Sebastian Nov 08 '13 at 15:32
  • 10
    @Sebastian: calloc will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page. It won't actually get allocated until you try to write to it (which won't work since it is read-only). The only way of being really sure that I know is to do a memset of the whole buffer. See the following answer for more info http://stackoverflow.com/a/2688522/713554 – Leo Nov 08 '13 at 16:43
  • 1
    Here is a C program that mallocs and does mlock. https://github.com/Damienkatz/memhog –  Nov 08 '13 at 16:17
49

I would suggest that running a VM with limited memory and testing the software in that would be a more efficient test than trying to fill memory on the host machine.

That method also has the advantage that if the low-memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in, not the machine you might have other useful processes running on.

Also if your testing is not CPU or IO intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.

39

From this HN comment: https://news.ycombinator.com/item?id=6695581

Just fill /dev/shm via dd or similar.

swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
damio
  • 9
    Not all *nixes have /dev/shm. Any more portable idea? – Tadeusz A. Kadłubowski Nov 08 '13 at 12:24
  • 2
    If pv is installed, it helps to see the count: dd if=/dev/zero bs=1024 |pv -b -B 1024 | dd of=/dev/shm/fill bs=1024 – Otheus Sep 26 '17 at 20:01
  • 1
If you want speed, this method is the right choice! It allocates the desired amount of RAM in a matter of seconds. Don't rely on /dev/urandom; it will use 100% of a CPU core and take several minutes if your RAM is big.

    YET, /dev/shm has a relative size in modern Ubuntu/Debian distros: it defaults to 50% of physical RAM.

    Hopefully you can remount /dev/shm or maybe create a new mount point. Just make sure it has the actual size you want to allocate.

    – develCuy Dec 08 '17 at 19:25
  • note to self: don't yes > /dev/shm/asdf, as it will crash your system (even with swapping enabled) – phil294 Jul 25 '20 at 00:21
32
  1. run linux;
  2. boot with mem=nn[KMG] kernel boot parameter

(look in linux/Documentation/kernel-parameters.txt for details).

Benibr
Anon
25

I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248

function malloc() {
  if [[ $# -eq 0 || $1 == "-h" || $1 -lt 0 ]] ; then
    echo -e "usage: malloc N\n\nAllocate N MB, wait, then release it."
  else
    N=$(free -m | grep Mem: | awk '{print int($2/10)}')  # 10% of total RAM
    if [[ $N -gt $1 ]] ;then
      N=$1    # use the requested amount if it is smaller than the cap
    fi
    sh -c "MEMBLOB=\$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
  fi
}
valadil
  • 1
This is the nicest solution IMHO, as it essentially only needs dd to work; all the other stuff can be worked around in any shell. Note that it actually claims twice as much memory as the data dd produces, at least temporarily. Tested on debian 9, dash 0.5.8-2.4. If you use bash for running the MEMBLOB part, it becomes really slow and uses four times the amount that dd produces. – P.Péter Oct 16 '18 at 07:46
  • Indeed a nice idea. Link to Github example is broken. Consider updating it – Eldad Assis Jul 10 '22 at 07:16
22

How about a simple Python solution?

#!/usr/bin/env python

import sys
import time

if len(sys.argv) != 2:
    print("usage: fillmem <number-of-megabytes>")
    sys.exit(1)

count = int(sys.argv[1])

# one megabyte of pointers: 128K tuple slots of 8 bytes each (64-bit)
megabyte = (0,) * (1024 * 1024 // 8)

data = megabyte * count

while True:
    time.sleep(1)
swiftcoder
  • 8
    That will probably quickly be swapped out, having very little actual impact on memory pressure (unless you fill up all the swap as well, which will take a while, usually) – Joachim Sauer Nov 08 '13 at 13:22
  • 1
    Why would a unix swap while there is available RAM? This is actually a plausible way to evict disk cache when need be. – Alexander Shcheblikin Nov 08 '13 at 23:04
  • @AlexanderShcheblikin This question isn't about evicting disk cache (which is useful for performance testing but not for low resources testing). – Gilles 'SO- stop being evil' Nov 09 '13 at 14:40
  • 1
    This solution worked to cobble up a Gig or two in my tests, though I didn't try to stress my memory. But, @JoachimSauer, one could set sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See https://www.kernel.org/doc/Documentation/sysctl/vm.txt and https://www.kernel.org/doc/gorman/html/understand/understand005.html – Mike S Apr 04 '17 at 20:03
  • simple one-liner for 1 GB: python -c "x=(1*1024*1024*1024/8)*(0,); raw_input()" – adrianlzt Sep 19 '19 at 10:51
  • A python would digest about 2Gb per second on my system. To speed things up I made this: MB=600000; let PWMB=$MB/10; for i in {1..10}; do python -c "x=($PWMB*1024*1024/8)*(0,); import time; time.sleep(10*3600*24)" & echo "started" $i ; done – y.selivonchyk Jul 08 '20 at 17:18
12

How about ramfs, if it exists? Mount it and copy over a large file. If there's no /dev/shm and no ramfs, I guess a tiny C program that does a large malloc based on some input value could do it. You might have to run it a few times at once on a 32-bit system with a lot of memory.

Anthon
nemo
9

If you want to test a particular process with limited memory, you might be better off using ulimit to restrict the amount of allocatable memory.
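
From C, the equivalent of the shell's ulimit is setrlimit(2). A hedged sketch of a wrapper that caps a child program's address space before exec'ing it (the 64 MiB figure is arbitrary; RLIMIT_AS is used rather than RLIMIT_RSS, which is ineffective on modern Linux, as the comment below notes):

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
        return 1;
    }
    /* Cap the total address space (malloc, mmap, stack...) at 64 MiB. */
    struct rlimit lim = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0)
        perror("setrlimit");

    execvp(argv[1], &argv[1]);
    perror("execvp");  /* only reached if exec fails */
    return 1;
}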

GAD3R
sj26
  • 2
    Actually this does not work on linux (dunno about other *nixes). man setrlimit: RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED. – phemmer Nov 08 '13 at 13:46
7

I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.
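
To make the idea concrete, a hypothetical sketch (test_malloc and its counter are illustrative, not SQLite's actual mechanism):

#include <stdlib.h>

/* Fail after a configurable number of successful allocations, so the
 * program's out-of-memory handling can be exercised deterministically
 * without actually exhausting the machine. */
static long allocs_remaining = 100;

void *test_malloc(size_t size)
{
    if (allocs_remaining <= 0)
        return NULL;  /* simulated out-of-memory */
    allocs_remaining--;
    return malloc(size);
}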

6

I need to have 90% of the free memory full

In case there are not enough answers already, one I did not see is a ramdisk, or technically a tmpfs. This maps RAM to a folder in Linux; you then just create or dump however many files of whatever size in there, to take up however much RAM you want. The one downside is that you need to be root to use the mount command.

# first, as root, make the folder where the tmpfs is going to be mounted
mkdir /ramdisk
chmod 777 /ramdisk
mount -t tmpfs -o size=500G tmpfs /ramdisk

Change 500G to whatever size makes sense; in my case, my server has 512 GB of RAM installed.

Obtain, copy, or create a file of reasonable size; for example, create a 1 GB file, then:

cp my1gbfile /ramdisk/file001
cp my1gbfile /ramdisk/file002

Do that 450 times; 450 GB of 512 GB is approximately 90%.

Use free -g to observe how much RAM is allocated.

Note: with, for example, 512 GB of physical RAM, making the tmpfs larger than 512 GB will still work, and allows you to freeze/crash the system by allocating 100% of the RAM. For that reason it is advisable to size the tmpfs so that a reasonable amount of RAM is left free for the system.

To create a single file of a given size:

truncate -s 450G my450gbfile

(see man truncate; note that it creates a sparse file, so on tmpfs the RAM may not actually be consumed until the blocks are written)

dd also works well, and actually writes the data:

dd if=/dev/zero of=my456gbfile bs=1GB count=456

ron
4

I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner

The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.

1

With just dd. This continuously reads and keeps 10 GB allocated (resident, RES):

dd if=/dev/zero of=/dev/null iflag=fullblock bs=10G 

To allocate just once, add count=1. The downside is that it is CPU-heavy.

sivann
1

This expands @tkrennwa's answer:

You may not want to spin the CPU at 100% during the test, which stress-ng does by default.

This invocation will not spin the CPU, but it will allocate 4g of RAM, page-lock it (so it can't be swapped out), and then wait forever (i.e., until Ctrl-C):

stress-ng --vm-bytes 4g --vm-keep -m 1 --vm-ops 1 --vm-hang 0 --vm-locked
  • --vm-ops N - stop vm workers after N bogo operations.
  • --vm-hang N - sleep N seconds before unmapping memory, the default is zero seconds. Specifying 0 will do an infinite wait.
  • --vm-locked - Lock the pages of the mapped region into memory using mmap MAP_LOCKED (since Linux 2.5.37).

Also, since you are just eating memory, you might want to add --vm-madvise hugepage to use "huge pages" (typically 2 MB instead of 4K). This is notably faster when freeing pages after Ctrl-C because far fewer pages occupy the page table:

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1  --vm-locked --vm-madvise hugepage
stress-ng: info:  [3107579] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3107579] dispatching hogs: 1 vm
stress-ng: info:  [3107579] successful run completed in 17.15s

real    0m17.186s    <<<<<< with huge pages
user    0m2.481s
sys     0m14.453s

]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked
stress-ng: info:  [3108342] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info:  [3108342] dispatching hogs: 1 vm
stress-ng: info:  [3108342] successful run completed in 36.52s

real    0m36.555s    <<<<<< without huge pages
user    0m2.598s
sys     0m33.538s

KJ7LNW
-2

This program works very well for allocating a fixed amount of memory:

https://github.com/julman99/eatmemory

  • 1
    Please don't post link-only answers. Also, the source code just does malloc so it's a duplicate of Chris' and nemo's answer which were both posted in 2013 and recommend making a C program that does malloc. Or arguably of tkrennwa's (also 2013) that recommends using a tool. Why do we need another tool? – Luc Nov 22 '21 at 09:29