I want to do some low-resource testing and for that I need to have 90% of the free memory full. How can I do this on a *nix system?
stress-ng is a workload generator that simulates cpu/mem/io/hdd stress on POSIX systems. This call should do the trick on Linux < 3.14:
stress-ng --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1
For Linux >= 3.14, you may use MemAvailable instead to estimate available memory for new processes without swapping:
stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1
Adapt the /proc/meminfo call with free(1)/vm_stat(1)/etc. if you need it portable. See also the reference wiki for stress-ng for further usage examples.
stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.98;}' < /proc/meminfo)k --vm-keep -m 1
– kujiy Feb 08 '18 at 00:36
--vm 1 and --vm-keep are very important. Simply --vm-bytes does nothing and you might be misled into thinking you can allocate as much memory as you need/want. I got bit by this until I tried to sanity check myself by allocating 256G of memory. This is not a flaw in the answer, it provides the correct flags, just an additional caution.
– ffledgling Mar 26 '19 at 12:56
Note the -m 1. According to the stress manpage, -m N is short for --vm N: spawn N workers spinning on malloc()/free().
– tkrennwa Mar 27 '19 at 03:03
stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1
stress-ng: info: [28129] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [28129] dispatching hogs: 1 vm
stress-ng: error: [28148] stress-ng-vm: gave up trying to mmap, no available memory
stress-ng: info: [28129] successful run completed in 10.02s
I also notice frequent crashes due to alloc errors. What could the cause be?
– cipper Apr 16 '20 at 13:33
Lower the 0.9 multiplier to reduce memory usage. The no available memory error should be a pretty good hint about the problem.
– Mikko Rantalainen Jun 29 '21 at 20:30
If you have basic GNU tools (head and tail) or BusyBox on Linux, you can do this to fill a certain amount of free memory:
head -c BYTES /dev/zero | tail
head -c 5000m /dev/zero | tail #~5GB, portable
head -c 5G /dev/zero | tail #5GiB on GNU (not busybox)
This works because tail needs to keep the current line in memory, in case it turns out to be the last line. The line, read from /dev/zero which outputs only null bytes and no newlines, will be infinitely long, but is limited by head to BYTES bytes, thus tail will use only that much memory. For a more precise amount, you will need to check how much RAM head and tail themselves use on your system and subtract that.
To just quickly run out of RAM completely, you can remove the limiting head part:
tail /dev/zero
If you want to also add a duration, this can be done quite easily in bash (will not work in sh):
cat <(head -c 500m /dev/zero) <(sleep SECONDS) | tail
<(command) tells the interpreter to run command and make its output appear as a file, hence echo <(true) will output a file descriptor path, e.g. /dev/fd/63, so to cat it will seem like it gets passed two files; more info on it here: http://tldp.org/LDP/abs/html/process-sub.html
The cat command will wait for inputs to complete until exiting, and by keeping one of the pipes open, it will keep tail alive.
If you have pv and want to slowly increase RAM use:
head -c TOTAL /dev/zero | pv -L BYTES_PER_SEC | tail
head -c 1000m /dev/zero | pv -L 10m | tail
The latter will use up to one gigabyte at a rate of ten megabytes per second. As an added bonus, pv will show the current rate of use and the total use so far. Of course this can also be done with previous variants:
head -c 500m /dev/zero | pv | tail
Just inserting the | pv | part will show you the current status (throughput and total by default).
Compatibility hints and alternatives
If you do not have a /dev/zero device, the standard yes and tr tools might substitute: yes | tr \\n x | head -c BYTES | tail (yes outputs an infinite amount of "yes"es, tr substitutes the newline such that everything becomes one huge line and tail needs to keep all that in memory).
Another, simpler alternative is using dd: dd if=/dev/zero bs=1G of=/dev/null uses 1GB of memory on GNU and BusyBox, but also 100% CPU on one core.
Finally, if your head does not accept a suffix, you can calculate an amount of bytes inline, for example 50 megabytes: head -c $((1024*1024*50))
Credits to falstaff for contributing a variant that is even simpler and more broadly compatible (like with BusyBox).
Why another answer?
- The accepted answer recommends installing a package (I bet there's a release for every chipset without needing a package manager);
- the top voted answer recommends compiling a C program (I did not have a compiler or toolchain installed to compile for your target platform);
- the second top voted answer recommends running the application in a VM (yeah let me just dd this phone's internal sdcard over usb or something and create a virtualbox image);
- the third suggests modifying something in the boot sequence, which does not fill the RAM as desired;
- the fourth only works insofar as the /dev/shm mountpoint (1) exists and (2) is large (remounting needs root);
- the fifth combines many of the above without sample code;
- the sixth is a great answer, but I did not see it before coming up with my own approach, so I thought I'd add my own, also because it's shorter to remember or type over if you don't see that the memblob line is actually the crux of the matter;
- the seventh again does not answer the question (it uses ulimit to limit a process instead);
- the eighth tries to get you to install python;
- the ninth thinks we're all very uncreative;
- and finally the tenth wrote his own C++ program, which causes the same issue as the top voted answer.
set -e, so I just learned something :)
– Luc May 05 '16 at 19:51
time yes | tr \\n x | head -c $((1024*1024*1024*10)) | grep n (uses 10 GiB of memory) takes 1 minute 46 seconds. Running julman99's eatmemory program at https://github.com/julman99/eatmemory takes 6 seconds. ...Well, plus the download and compile time, but it compiled with no issue... and very quickly... on my RHEL6.4 machine. Still, I like this solution. Why reinvent the wheel?
– Mike S Apr 07 '17 at 20:53
|| true at the end.
– Otheus Sep 26 '17 at 20:06
I executed cat /dev/zero | head -c 5G | tail in a redis 6.2.1 container and it didn't work - according to docker stats it was using around 7 mb of RAM. But when I executed </dev/zero head -c 5G | tail the RAM used was 5.037G, so obviously it worked. Do you have an idea why the one works while the other doesn't? Thanks in advance!
– DPM Jul 06 '22 at 11:43
Odd, since cat /dev/zero | head -c 5G should feed tail the same stream as </dev/zero head -c 5G. It would almost seem like your tail drops null bytes but your head does not, from the examples given. Might it be that this memory usage was too short/brief and it didn't show up on the graph when using the cat ... method?
– Luc Jul 06 '22 at 12:22
</dev/zero head -c 5G also doesn't work. I don't think the problem is that the memory usage was too short, because I tried to use 100G of memory, which closes the container and kills it (because I don't have that much memory). The things that I try are on a redis:6.2.1 docker container. You can check them if you are interested.
– DPM Aug 30 '22 at 19:02
What happens to the cat /dev/zero | head | tail pipeline once all the bytes were read? I want to test whether it's possible or not to allocate some amount of memory in a loop without using pv.
– synapse Nov 04 '22 at 12:05
If head is limited to some amount of bytes, then yes, this pipeline construction will (once those bytes are read) output the result and then exit. If you just type cat /dev/zero | head -c 1 | tail, does that not exit for you, does it hang forever?
– Luc Nov 04 '22 at 23:13
Consider redirecting the output to /dev/null. Writing all those null bytes to the terminal costs a huge amount of CPU. Instead of head -c 500m /dev/zero | tail, use head -c 500m /dev/zero | tail > /dev/null. For me, this reduced runtime from about 12s to about 0.27s.
– James Scriven Feb 20 '24 at 17:42
Good point! Note that you still need the <() syntax available to make the sleep time work. Perhaps this hint for a cleaner solution should be added to the "alternatives" section? If you think that's a good idea, feel free to add an edit and get the credit :)
– Luc Feb 20 '24 at 17:52
You can write a C program to malloc() the required memory and then use mlock() to prevent the memory from being swapped out. Then just let the program wait for keyboard input; unlock the memory, free it, and exit.
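For reference, a minimal sketch of that approach (the size argument, the memset to actually commit the pages, and the error handling are illustrative; mlock may fail without a sufficient ulimit -l or CAP_IPC_LOCK):

/* Sketch: allocate, touch, and lock N megabytes, then hold them
 * until Enter is pressed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
        return 1;
    }
    size_t bytes = (size_t)atol(argv[1]) * 1024 * 1024;

    char *buf = malloc(bytes);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    /* Touch every page so the kernel actually backs the allocation
     * (see the calloc/zero-page caveat in the comment below). */
    memset(buf, 1, bytes);

    if (mlock(buf, bytes) != 0)
        perror("mlock");      /* non-fatal here: memory is still held */

    printf("holding %zu bytes; press Enter to release\n", bytes);
    getchar();

    munlock(buf, bytes);
    free(buf);
    return 0;
}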
calloc will run into the same problem IIRC. All the memory will just point to the same read-only zeroed page. It won't actually get allocated until you try to write to it (which won't work since it is read-only). The only way of being really sure that I know of is to do a memset of the whole buffer. See the following answer for more info: http://stackoverflow.com/a/2688522/713554
– Leo Nov 08 '13 at 16:43
Here is a small program that mallocs and does mlock: https://github.com/Damienkatz/memhog
– Nov 08 '13 at 16:17
I would suggest that running a VM with limited memory and testing the software in that would be a more efficient test than trying to fill memory on the host machine.
That method also has the advantage that if the low memory situation causes OOM errors elsewhere and hangs the whole OS, you only hang the VM you are testing in not your machine that you might have other useful processes running on.
Also if your testing is not CPU or IO intensive, you could concurrently run instances of the tests on a family of VMs with a variety of low memory sizes.
From this HN comment: https://news.ycombinator.com/item?id=6695581
Just fill /dev/shm via dd or similar.
swapoff -a
dd if=/dev/zero of=/dev/shm/fill bs=1k count=1024k
If pv is installed, it helps to see the count: dd if=/dev/zero bs=1024 | pv -b -B 1024 | dd of=/dev/shm/fill bs=1024
– Otheus Sep 26 '17 at 20:01
YET, /dev/shm has a relative size in modern Ubuntu/Debian distros: it defaults to 50% of physical RAM. Hopefully you can remount /dev/shm or maybe create a new mount point. Just make sure it has the actual size you want to allocate.
– develCuy Dec 08 '17 at 19:25
Do not run yes > /dev/shm/asdf, as it will crash your system (even with swapping enabled).
– phil294 Jul 25 '20 at 00:21
You can limit the memory available to the system with the mem=nn[KMG] kernel boot parameter (look in linux/Documentation/kernel-parameters.txt for details).
I keep a function to do something similar in my dotfiles. https://github.com/sagotsky/.dotfiles/blob/master/.functions#L248
function malloc() {
    if [[ $# -eq 0 || $1 == '-h' || $1 -lt 0 ]] ; then
        echo -e "usage: malloc N\n\nAllocate N mb, wait, then release it."
    else
        N=$(free -m | grep Mem: | awk '{print int($2/10)}')  # cap at 10% of total RAM
        if [[ $N -gt $1 ]] ;then
            N=$1
        fi
        # hold N MB read from /dev/urandom in a shell variable, then release it
        sh -c "MEMBLOB=\$(dd if=/dev/urandom bs=1MB count=$N) ; sleep 1"
    fi
}
How about a simple python solution?
#!/usr/bin/env python
import sys
import time

if len(sys.argv) != 2:
    print("usage: fillmem <number-of-megabytes>")
    sys.exit()

count = int(sys.argv[1])

# one megabyte as a tuple of pointer-sized elements (8 bytes each on 64-bit)
megabyte = (0,) * (1024 * 1024 // 8)

data = megabyte * count
while True:
    time.sleep(1)
You could run sysctl vm.swappiness=0 and furthermore set vm.min_free_kbytes to a small number, maybe 1024. I haven't tried it, but the docs say that this is how you control the quickness of swapping out... you should be able to make it quite slow indeed, to the point of causing an OOM condition on your machine. See https://www.kernel.org/doc/Documentation/sysctl/vm.txt and https://www.kernel.org/doc/gorman/html/understand/understand005.html
– Mike S Apr 04 '17 at 20:03
MB=600000; let PWMB=$MB/10; for i in {1..10}; do python -c "x=($PWMB*1024*1024//8)*(0,); import time; time.sleep(10*3600*24)" & echo "started" $i ; done
– y.selivonchyk Jul 08 '20 at 17:18
If you want to test a particular process with limited memory, you might be better off using ulimit to restrict the amount of allocatable memory.
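For example, a rough sketch in C of the same idea via setrlimit, which is what ulimit uses under the hood (the 64 MiB figure is illustrative; RLIMIT_AS, the address-space limit that ulimit -v sets, is the one that reliably takes effect on modern Linux, unlike RLIMIT_RSS, as noted in the comment below):

/* Sketch: cap the address space (what "ulimit -v" sets), then
 * watch an oversized malloc() fail. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* soft and hard limit: 64 MiB of address space (illustrative) */
    struct rlimit lim = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }
    void *p = malloc(128UL * 1024 * 1024);  /* more than the limit */
    printf("malloc(128 MiB) %s\n", p ? "succeeded" : "failed, as expected");
    free(p);
    return 0;
}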
man setrlimit: "RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit only has effect in Linux 2.4.x, x < 30, and there only affects calls to madvise(2) specifying MADV_WILLNEED."
– phemmer Nov 08 '13 at 13:46
I think this is a case of asking the wrong question and sanity being drowned out by people competing for the most creative answer. If you only need to simulate OOM conditions, you don't need to fill memory. Just use a custom allocator and have it fail after a certain number of allocations. This approach seems to work well enough for SQLite.
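As a hypothetical sketch of that idea (test_malloc and the failure threshold are made up for illustration, not SQLite's actual API):

/* Fault-injecting allocator: behaves like malloc() until a set
 * number of allocations, then returns NULL to simulate OOM. */
#include <stdio.h>
#include <stdlib.h>

static int allocs_left = 100;   /* fail after 100 allocations */

static void *test_malloc(size_t n)
{
    if (allocs_left-- <= 0)
        return NULL;            /* simulated out-of-memory */
    return malloc(n);
}

int main(void)
{
    for (int i = 0; ; i++) {
        void *p = test_malloc(4096);
        if (p == NULL) {
            printf("allocation %d failed: simulated OOM\n", i);
            break;
        }
        free(p);
    }
    return 0;
}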
I need to have 90% of the free memory full
In case there are not enough answers already, one I did not see is doing a ramdisk, or technically a tmpfs. This will map RAM to a folder in Linux; you then just create or dump however many files of whatever size in there to take up however much RAM you want. The one downside is that you need to be root to use the mount command.
# first, as root, make the folder where the tmpfs is going to be mounted; name it however you like.
mkdir /ramdisk
chmod 777 /ramdisk
mount -t tmpfs -o size=500G tmpfs /ramdisk
Change 500G to whatever size makes sense; in my case my server has 512GB of RAM installed.
Obtain or copy or create a file of reasonable size; create a 1GB file for example, then:
cp my1gbfile /ramdisk/file001
cp my1gbfile /ramdisk/file002
Do this 450 times; 450 GB of 512 GB is approximately 90%.
Use free -g to observe how much RAM is allocated.
Note: with 512GB of physical RAM, for example, a tmpfs mounted with a size over 512GB will still work, and will allow you to freeze/crash the system by allocating 100% of the RAM. For that reason it is advisable to only give the tmpfs so much RAM that you leave a reasonable amount free for the system.
To create a single file of a given size:
truncate -s 450G my450gbfile
man truncate
dd also works well:
dd if=/dev/zero of=my456gbfile bs=1GB count=456
I wrote this little C++ program for that: https://github.com/rmetzger/dynamic-ballooner
The advantage of this implementation is that it periodically checks whether it needs to free or re-allocate memory.
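For illustration only, a rough sketch of that ballooning idea (this is not the linked program; STEP, TARGET_KIB, and the once-per-second cadence are made-up numbers): grow or shrink one big allocation so that MemAvailable stays near a target.

/* Sketch of a memory balloon: once per second, inflate or deflate
 * one allocation so MemAvailable stays near a target. Runs until killed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define STEP       (64UL * 1024 * 1024)  /* grow/shrink in 64 MiB steps */
#define TARGET_KIB (512L * 1024)         /* keep ~512 MiB available */

/* Read MemAvailable (in KiB) from /proc/meminfo; -1 on failure. */
static long mem_available_kib(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long kib = -1;

    if (f == NULL)
        return -1;
    while (fgets(line, sizeof line, f) != NULL)
        if (sscanf(line, "MemAvailable: %ld", &kib) == 1)
            break;
    fclose(f);
    return kib;
}

int main(void)
{
    char *balloon = NULL;
    size_t held = 0;

    for (;;) {
        long avail = mem_available_kib();
        if (avail < 0)
            return 1;  /* no MemAvailable (kernel < 3.14): give up */
        if (avail > TARGET_KIB + (long)(STEP / 1024)) {
            /* too much memory free: inflate */
            char *tmp = realloc(balloon, held + STEP);
            if (tmp != NULL) {
                balloon = tmp;
                memset(balloon + held, 1, STEP);  /* commit the new pages */
                held += STEP;
            }
        } else if (avail < TARGET_KIB && held >= STEP) {
            /* system under pressure: deflate */
            char *tmp = realloc(balloon, held - STEP);
            if (tmp != NULL || held - STEP == 0) {
                balloon = tmp;
                held -= STEP;
            }
        }
        sleep(1);  /* re-check once per second */
    }
}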
You can do this with just dd. The following continuously reads and allocates 10GB RES:
dd if=/dev/zero of=/dev/null iflag=fullblock bs=10G
To just allocate once, add count=1. The downside is that it is CPU heavy.
This expands @tkrennwa's answer:
You may not wish to spin 100% CPU during the test, which stress-ng does by default. This invocation will not spin the CPU, but it will allocate 4g of RAM, page lock it (so it can't swap), and then wait forever (i.e., until ctrl-c):
stress-ng --vm-bytes 4g --vm-keep -m 1 --vm-ops 1 --vm-hang 0 --vm-locked
--vm-ops N: stop vm workers after N bogo operations.
--vm-hang N: sleep N seconds before unmapping memory; the default is zero seconds. Specifying 0 will do an infinite wait.
--vm-locked: lock the pages of the mapped region into memory using mmap MAP_LOCKED (since Linux 2.5.37).
Also, since you are just eating memory, you might want to add --vm-madvise hugepage to use "huge pages" (typically 2MB instead of 4k). This is notably faster when freeing pages after CTRL-C because far fewer pages occupy the pagetable:
]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked --vm-madvise hugepage
stress-ng: info: [3107579] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3107579] dispatching hogs: 1 vm
stress-ng: info: [3107579] successful run completed in 17.15s
real 0m17.186s <<<<<< with huge pages
user 0m2.481s
sys 0m14.453s
]# time stress-ng --vm-bytes 16g --vm-keep -m 1 --vm-ops 1 --vm-locked
stress-ng: info: [3108342] defaulting to a 86400 second (1 day, 0.00 secs) run per stressor
stress-ng: info: [3108342] dispatching hogs: 1 vm
stress-ng: info: [3108342] successful run completed in 36.52s
real 0m36.555s <<<<<< without huge pages
user 0m2.598s
sys 0m33.538s
This program works very well for allocating a fixed amount of memory:
This seems like a duplicate of the 2013 answer that recommends malloc. Or arguably of tkrennwa's (also 2013) that recommends using a tool. Why do we need another tool?
– Luc Nov 22 '21 at 09:29