6

background

I have written a collection of bash scripts (most of them are available in German only so far, but if you are interested, download the archive, not the single scripts) which help users create high-quality OpenPGP keys. These scripts are typically used in a "secure" environment (a Linux live CD/DVD). This leads to the problem that such systems have hardly any entropy.

For obvious reasons gpg reads a lot of data from /dev/random, which means that my poor users (at worst, those with an SSD) have to type a lot on the keyboard in order to generate the required entropy.

I have written a simple script which shows users the current size of the entropy pool (which fluctuates quickly between 0 and 64 while gpg reads data). I would also like to show a kind of progress bar so that users can see that they have generated, say, about 50% of the needed entropy. The required amount should always be (nearly) the same (until I change the key size).
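A minimal sketch of such a pool monitor (not the actual script mentioned above; it only polls the kernel's counter and assumes a Linux system with procfs mounted):

```shell
#!/bin/sh
# Poll the kernel's entropy counter a few times and print it.
# /proc/sys/kernel/random/entropy_avail reports the pool size in bits;
# the 0-64 range described above reflects gpg draining the pool.
for i in 1 2 3; do
    avail=$(cat /proc/sys/kernel/random/entropy_avail)
    printf 'entropy pool: %s bits\n' "$avail"
    sleep 1
done
```

Note that this only shows the *current* pool size, not how much has been read so far, which is exactly why a separate measurement is needed for a progress bar.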

question

So the question is: how can I (easily) measure the amount of data that has been read from /dev/random (by a certain process or by the whole system)? The only idea I have had so far is attaching strace to gpg and tracing the read()s on the respective file descriptor. But maybe there is a much better solution.
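The strace idea could be sketched roughly like this (a hedged illustration, assuming strace is installed and the process opens /dev/random via openat(); the helper name random_bytes_read and the trace-parsing details are my own, and the gpg invocation is only indicated):

```shell
#!/bin/sh
# Run a command under strace and report how many bytes it read from
# /dev/random, by finding the fd returned for /dev/random and summing
# the return values of read() calls on that fd.
random_bytes_read() {
    trace=$(mktemp)
    strace -f -e trace=openat,read -o "$trace" "$@" >/dev/null 2>&1
    # fd that /dev/random was opened on (strace prints it after "= "):
    fd=$(awk -F'= ' '/\/dev\/random/ {print $2; exit}' "$trace")
    # sum the return values of read() calls on that fd:
    awk -F'= ' -v fd="$fd" '$0 ~ "read\\(" fd "," {sum += $2} END {print sum+0}' "$trace"
    rm -f "$trace"
}

# Demonstrated with head(1); for the real use case, substitute the
# actual gpg invocation, e.g.: random_bytes_read gpg --gen-key ...
random_bytes_read head -c 32 /dev/random
```

This is fragile (it ignores fd reuse and dup(), and gpg's key generation is interactive), so it is best seen as a starting point.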

Hauke Laging
  • 90,279
  • Did you see this in /proc for getting the size of the entropy pool? /proc/sys/kernel/random/entropy_avail. – slm Mar 01 '14 at 04:16
  • Also just curious, I've only typically seen /dev/urandom used (in security applications too), which would make your concern moot (I think). Gilles' answer here kind of backs me up on this: http://unix.stackexchange.com/questions/32988/why-does-dd-from-dev-random-give-different-file-sizes. Just wondering if you were aware of this. So is /dev/random even needed here? – slm Mar 01 '14 at 04:27
  • This looked related too: http://unix.stackexchange.com/questions/94206/how-to-determine-which-processes-are-using-how-much-entropy-from-dev-urandom – slm Mar 01 '14 at 04:40
  • @slm [comment 1] "I have written a simple script which shows the users the current size of the entropy pool" Where do you think I get this data from? [comment 2] GnuPG makes heavy use of /dev/random. You don't even need strace to find that out: GnuPG often blocks (on systems with little entropy at least) and it would not block if it used /dev/urandom. They use urandom, too, but for less important data only. These people have a very clear attitude towards entropy quality... Gilles' answers don't help me. – Hauke Laging Mar 01 '14 at 13:17
  • 1
    Fair enough, just directing these as potential leads for you. Will keep looking 8-) – slm Mar 01 '14 at 13:26
  • @HaukeLaging /dev/random does not give you “better quality” entropy than /dev/urandom. Either the machine is freshly installed and they might not have enough entropy (and then /dev/random tends to block), or the machine has sufficient entropy and then /dev/urandom is as good as /dev/random. What Linux calls the “size of the entropy pool” is not relevant to the security of keys generated from /dev/urandom. – Gilles 'SO- stop being evil' Mar 01 '14 at 15:37
  • @Gilles That is a strong statement (and discussing that would be OT here...) and I am absolutely sure that the GnuPG development community would not accept that (of course, that alone makes it neither true nor wrong). They even use an algorithm which needs about three bytes of random input for every random byte they actually use. – Hauke Laging Mar 01 '14 at 15:44

1 Answer

4

If a little redirection is acceptable, then pv is generally a good way to achieve this type of thing, but GPG (unsurprisingly) has /dev/random hard-coded into it, so that's not going to work here without some hackery. On Linux, using unshare to temporarily overlay /dev/random is probably the least disagreeable approach, though it requires root permissions:

mkfifo $HOME/rngfifo
pv -s 300 /dev/random > $HOME/rngfifo

pv will block until there's a reader on the fifo. Then as root or via sudo:

unshare -m -- sh -c "mount --bind $HOME/rngfifo /dev/random && gpg --gen-key [...]"

One obviously useful source of data is the random device driver itself (drivers/char/random.c). It supports a "debug" parameter, but sadly in the versions I've checked it's if-defined out (#if 0, 2.6.x and 3.4.x), and it has been removed completely in recent kernels in favour of ftrace support. The driver makes an ftrace call (trace_extract_entropy()) each time data is read. For this task, ftrace seems overkill to me, as do systemtap and the other tracing and debugging options (PDF).
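For completeness, enabling that tracepoint would look roughly like this (a sketch only: it requires root, assumes debugfs is mounted at /sys/kernel/debug, and only works on kernel versions that still carry the random tracepoints):

```shell
# Enable the extract_entropy tracepoint and watch reads as they happen.
cd /sys/kernel/debug/tracing
echo 1 > events/random/extract_entropy/enable
cat trace_pipe    # each event line includes the pool name and byte count
```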

A simple (but unappealing to most) option is to use an injected library to wrap the relevant open() and read() calls at the libc interface, similar to the solution to this question: Dynamic file content generation: Satisfying a 'file open' by a 'process execution' . If you wrap open64() and arrange for it to cache the descriptor when /dev/random is opened, you can log the size of each read().

To help get the entropy rolling in, I highly recommend asciipacman ;-)

mr.spuratic
  • 9,901