I was testing writing data to a RAM drive on my Linux machine, and I'm seeing much lower numbers than the RAM speed would suggest. So I would like to ask: why am I seeing slower speeds? It could be that I'm misunderstanding the speed rating, or how RAM drives work, or maybe there is a bottleneck somewhere else. This particular test isn't super important, but understanding the unexpected results here will help me get a better idea of which system resources will bottleneck which operations in the future.
For the test, I mounted a RAM drive, then gave the system two seconds to write as many zeroes to that disk as possible:
mkdir ramdisk
mount -t tmpfs -o size=16G tmpfs ramdisk
timeout 2s bash -c "cat /dev/zero > ramdisk/testfile"
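A roughly equivalent one-shot test, in case cat's write size matters, might be the following; the 1 MiB block size and 8 GiB total here are arbitrary choices, and dd reports its own throughput:

dd if=/dev/zero of=ramdisk/testfile bs=1M count=8192 status=progress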
I end up with a file that's about 11 GB (averaged over several runs). However, my computer is running DDR4-3200 RAM, which I've read has a peak transfer rate of 25.6 GB/s, so it should theoretically be able to write about 51 GB in two seconds. By contrast, when I run the same test on an SSD, I see speeds pretty close to the rated maximum sequential write speed.
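(My arithmetic, in case I got it wrong: DDR4-3200 performs 3200 MT/s over a 64-bit, i.e. 8-byte, channel, so 3200 × 8 = 25.6 GB/s, and 25.6 GB/s × 2 s = 51.2 GB.)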
cat is not writing directly to RAM in this case; it's going through the Virtual File System (VFS) layer of the kernel, and ending up in RAM. That may be the bottleneck, especially if the numbers you see on the SSD are larger than those you see here. – Andy Dalton Jun 23 '21 at 01:22
You could check with /usr/bin/time -v and/or strace -c on the RAM vs SSD writes, i.e. by head -c 4G /dev/zero > ... – ibuprofen Jun 23 '21 at 01:52
Also check free -h between tests (i.e. the shared column for tmpfs, and also keep an eye on swap). https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html – ibuprofen Jun 23 '21 at 02:02
tmpfs could be backed by swap, so you might be seeing the effects of your large file being written to the SSD swap space when using tmpfs. – muru Jun 23 '21 at 06:21
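Taken together, the checks suggested in the comments might look like this (the 4G size and file paths are just examples):

/usr/bin/time -v sh -c 'head -c 4G /dev/zero > ramdisk/testfile'   # resource usage for the RAM-drive write
strace -c sh -c 'head -c 4G /dev/zero > ramdisk/testfile'          # per-syscall counts and times
free -h   # compare the shared column (tmpfs usage) and swap before and after each run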