background
I have written a collection of bash scripts (most of them available in German only so far; if you are interested, download the archive, not the single scripts) which help users create high-quality OpenPGP keys. These scripts are typically used in a "secure" environment (a Linux live CD/DVD), which leads to the problem that such systems have hardly any entropy.
For obvious reasons gpg reads a lot of data from /dev/random, which means that my poor users (worst of all those with an SSD) have to type a lot on the keyboard in order to generate the required entropy.
I have written a simple script which shows the users the current size of the entropy pool (which fluctuates quickly between 0 and 64 while gpg reads data). I would also like to show a kind of progress bar so that the users can see that they have generated, say, about 50% of the needed entropy. The required amount should always be (nearly) the same (until I change the key size).
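A minimal sketch of such a pool display, assuming the standard Linux procfs counters (`/proc/sys/kernel/random/entropy_avail` and `poolsize`); the 50-column bar width and the function name are illustrative choices, not taken from the actual scripts:

```shell
#!/bin/sh
# Sketch: render the current kernel entropy pool level as a text bar.
# entropy_bar AVAIL TOTAL prints one bar line; with no arguments it reads
# the usual Linux procfs counters (values are in bits).
entropy_bar() {
    avail=${1:-$(cat /proc/sys/kernel/random/entropy_avail)}
    total=${2:-$(cat /proc/sys/kernel/random/poolsize)}
    filled=$(( avail * 50 / total ))
    bar=$(printf '%*s' "$filled" '' | tr ' ' '#')
    printf '[%-50s] %d/%d bits\n' "$bar" "$avail" "$total"
}

# Hypothetical use: refresh once a second while gpg is generating the key.
#   while sleep 1; do entropy_bar; done
```

Note that this only shows the instantaneous pool level, not cumulative consumption, which is exactly why it bounces around instead of progressing.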
question
So the question is: How can I (easily) measure the amount of data that has been read from /dev/random (by a certain process or by the whole system)? The only idea I have had so far is attaching strace to gpg and tracing the read()s on the respective file descriptor. But maybe there is a much better solution.
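The strace idea could be sketched roughly as follows. The fd number (3) is an assumption, as is the exact strace invocation; the parsing relies on strace's usual `read(fd, "...", n) = n` output lines:

```shell
#!/bin/sh
# Sketch of the strace approach: sum the return values of read() calls
# on a given file descriptor, printing a running byte total.
# sum_reads FD reads strace output on stdin.
sum_reads() {
    awk -v fd="$1" '
        # strace prints lines like: read(3, "\x42..."..., 64) = 64
        $0 ~ ("^read\\(" fd ",") && / = [0-9]+$/ {
            total += $NF
            printf "read %d bytes, %d total\n", $NF, total
        }'
}

# Hypothetical use against a running gpg, assuming it holds /dev/random
# open on fd 3 (check /proc/PID/fd to find the real one):
#   strace -p "$(pidof gpg)" -e trace=read 2>&1 | sum_reads 3
```

This only counts one process's reads; a system-wide figure would need something else entirely.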
Comments:

/proc for getting the size of the entropy pool? /proc/sys/kernel/random/entropy_avail. – slm Mar 01 '14 at 04:16

/dev/urandom used (in security applications too), which would make your concern moot (I think). Gilles' answer here kind of backs me up on this: http://unix.stackexchange.com/questions/32988/why-does-dd-from-dev-random-give-different-file-sizes. Just wondering if you were aware of this. So is /dev/random even needed here? – slm Mar 01 '14 at 04:27

/dev/random. You don't even need strace to find that out: GnuPG often blocks (on systems with little entropy at least), and it would not block if it used /dev/urandom. They use urandom, too, but for less important data only. These people have a very clear attitude towards entropy quality... Gilles' answers don't help me. – Hauke Laging Mar 01 '14 at 13:17

/dev/random does not give you “better quality” entropy than /dev/urandom. Either the machine is freshly installed and they might not have enough entropy (and then /dev/random tends to block), or the machine has sufficient entropy and then /dev/urandom is as good as /dev/random. What Linux calls the “size of the entropy pool” is not relevant to the security of keys generated from /dev/urandom. – Gilles 'SO- stop being evil' Mar 01 '14 at 15:37