I read from here that I could load a file into RAM for faster access using the command below.
cat filename > /dev/null
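Any sequential read warms the page cache the same way; here is a minimal sketch using a small throwaway file (the `/tmp` path and sizes are just for illustration):

```shell
# Create a small throwaway file, then read it once so the kernel pulls
# its pages into the page cache. Reading with dd has the same warming
# effect as cat; only the data has to pass through a read() path.
dd if=/dev/zero of=/tmp/warm_demo.bin bs=1M count=5 2>/dev/null
size=$(stat -c %s /tmp/warm_demo.bin)           # 5 MiB = 5242880 bytes
cat /tmp/warm_demo.bin > /dev/null              # warms the cache
dd if=/tmp/warm_demo.bin of=/dev/null bs=1M 2>/dev/null  # equivalent read
rm -f /tmp/warm_demo.bin
```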
However, I wanted to test whether the above statement is actually true, so I ran the following test.
First, create a test file (the command below writes 10 × 100 MiB, i.e. roughly 1 GB):
dd if=/dev/zero of=demo.txt bs=100M count=10
Then I measured the time to read the file:
mytime="$(time ( cat demo.txt ) 2>&1 1>/dev/null )"
echo $mytime
real 0m19.191s user 0m0.007s sys 0m1.295s
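As an aside, `time` (the bash keyword) writes to the shell's stderr, so the timing can also be captured by redirecting a brace group; quoting the variable when echoing keeps the `real`/`user`/`sys` lines separate instead of word-splitting them onto one line. A sketch, assuming bash and an illustrative temp file:

```shell
# Make a small file to time (illustrative; the question uses demo.txt).
dd if=/dev/zero of=/tmp/time_demo.bin bs=1M count=5 2>/dev/null

# cat's stdout goes to /dev/null; the brace group's stderr (where the
# bash `time` keyword prints) is captured by the command substitution.
mytime="$( { time cat /tmp/time_demo.bin > /dev/null ; } 2>&1 )"
echo "$mytime"    # quoted, so real/user/sys stay on separate lines

rm -f /tmp/time_demo.bin
```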
As the statement suggests, I now needed to load the file into the cache, so I ran:
cat demo.txt > /dev/null
Now I assume the file is loaded into the cache, so I measured the time to read the file again. This is the value I got:
mytime="$(time ( cat demo.txt ) 2>&1 1>/dev/null )"
echo $mytime
real 0m18.701s user 0m0.010s sys 0m1.275s
I repeated the timing measurement five more times, and these are the values I got:
real 0m18.574s user 0m0.007s sys 0m1.279s
real 0m18.584s user 0m0.012s sys 0m1.267s
real 0m19.017s user 0m0.009s sys 0m1.268s
real 0m18.533s user 0m0.012s sys 0m1.263s
real 0m18.757s user 0m0.005s sys 0m1.274s
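The repeated measurements can be scripted in one loop. A sketch, using a smaller 10 MiB file under `/tmp` for brevity (the question used a file of roughly 1 GB):

```shell
# Create a small test file and read it once to warm the page cache.
dd if=/dev/zero of=/tmp/demo_small.txt bs=1M count=10 2>/dev/null
cat /tmp/demo_small.txt > /dev/null      # first read populates the cache

# Time five (hopefully cached) reads; keep only the "real" lines.
runs=0
for i in 1 2 3 4 5; do
    ( time cat /tmp/demo_small.txt > /dev/null ) 2>&1 | grep real
    runs=$((runs + 1))
done

rm -f /tmp/demo_small.txt
```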
So my question is: why does the time vary even when the file is loaded into the cache? I was expecting that, since the file is in the cache, the time would come down with each iteration, but that doesn't seem to be the case.