
The ArchLinux Wiki on dm-crypt advises overwriting new storage devices or partitions with random data before using them for encrypted volumes. I have used two methods to achieve this, and I find that one (using dd) has been much faster than the other (a LUKS format with erase via Gnome Disks).

Are there any known conditions under which dd if=/dev/urandom is less secure than the format-erase method, on a fairly modern (for 2013-2023) and typical x86 UEFI PC?

Thanks in advance for your advice.

Repeating my Procedure

The dd command I used was:

sudo dd if=/dev/urandom of=$NEW_BLKFILE status=progress bs=1M

Where $NEW_BLKFILE is the block device file, like /dev/sdb or /dev/sdb1.

Using the status=progress option on GNU/Linux causes progress information to be output to the terminal, including an estimated write rate.
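For reference, a variant I have not benchmarked here (the oflag=direct part is my own assumption, not something from the runs above) bypasses the page cache, so the reported rate tracks what actually reaches the device:

# untested variant: oflag=direct bypasses the page cache
sudo dd if=/dev/urandom of=$NEW_BLKFILE status=progress bs=1M oflag=direct

GNU dd also accepts conv=fsync, which flushes to the device once at the end of the copy instead of on every write.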

The method of initiating LUKS format jobs was:

  1. Enter Gnome Disks
  2. Select the "device" from the list on the left
  3. "Format" the disk (creating the partition table) if required (Drive Options > Format Disk), acknowledge the warning
  4. Press the Create Partition button (the plus sign) in the volume layout diagram, set the partition size
  5. Enable the Erase option; choose Ext4 and enable "Password Protect Volume (LUKS)" in the Type options
  6. Enter the password
  7. Run udisksctl info --object-path $JOB_OBJ, where $JOB_OBJ is the format-erase operation job object path, like jobs/42

The Rate attribute from the job object shows the estimated write rate of the process in bytes per second.
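In case it helps anyone reproduce this, a rough sketch for finding and polling the job object (the job number 42 is only illustrative, and I am assuming udisksctl dump lists running jobs on your system):

# list running UDisks2 jobs to find the job object path
udisksctl dump | grep -A 8 /org/freedesktop/UDisks2/jobs
# jobs/42 is only an example path
watch -n 1 udisksctl info --object-path jobs/42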

Here are my observations of write speeds on a low-end desktop SSD (1TB, 4-bit "QLC") and a low-end server HDD (4TB, "5400RPM class", CLV?):

SSD: >100MB/s for dd, <=33MB/s for format-erase
HDD: >150MB/s for dd, <=14MB/s for format-erase

The format-erase operation was a lot noisier on the HDD, indicating a large number of seeks.
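A suggestion rather than something I measured: running the following (from the sysstat package) alongside each job would show average request sizes and device utilisation, which could confirm the seek-heavy pattern:

iostat -x 1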

EDIT: Added HDD stats, disclosed the dd block size.

Links

ArchLinux Wiki. dm-crypt/Drive Preparation.

  • advises overwriting new storage devices or partitions with random data before using them for encrypted volumes - this is 100% redundant and unnecessary, especially if the disk is new and filled with 0x00 zeroes – Artem S. Tashkinov Jun 11 '23 at 08:26
  • no idea what gnome disks does. urandom is fine and fast enough in recent kernels. if you want to go the extra mile, you could verify that writing data was actually successful. on SSD if you intend to enable TRIM you might not need/want it at all, just blkdiscard should do... older answer, Pre-encryption wipe, why? – frostschutz Jun 11 '23 at 10:14

1 Answer


As Artem correctly says in his comment, that overwriting is totally redundant.

And you're using an SSD – pre-filling it with anything means wear levelling has less unused space to work with, so that's bad for performance and SSD lifetime.

So, ignore the Arch wiki on that.

Why is one method slower than the other? Your dd method might simply be using more appropriate block sizes, or the right system calls to let the copying happen in kernel space rather than through user land, leading to fewer context switches and thus higher performance. (If people need higher performance, they tend to do things like pv < /dev/urandom > /dev/outdev or the same with cat in place of pv; dd is rarely really useful.)
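For example (just a sketch; /dev/sdX stands in for the actual target device, and pv is available in most distributions' repositories):

# /dev/sdX is a placeholder for the target device
pv < /dev/urandom > /dev/sdX

By default pv prints the bytes written, elapsed time and current throughput, so you get roughly the same feedback as dd's status=progress.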

  • I intended to compare cryptographic weaknesses and strengths between methods of preparing encrypted volumes, regardless of the storage medium. Sorry if my question was not clear on that.

    Still, thanks for pointing out a potential performance pitfall for SSDs.

    I'm still convinced random-filling unused space does have cryptographic advantages, and a NAND Flash-friendly method of achieving this is definitely worth its own discussion.

    – Mounaiban Jun 15 '23 at 05:52
  • There's not much to discuss in that respect - NAND memory physically deteriorates with every write, and has to be zeroed first. Hence the wear leveling, which depends on unused blocks being marked as such and then reading all-zeros when read. – Marcus Müller Jun 15 '23 at 09:18
  • Regarding the general cryptographic advantages: what you leak is exactly where and how much unused space you have. Since that contains, by the definition of good ciphering, zero mutual information on the keys or plaintext of the encrypted blocks, you can quantify the detrimental effect on the cryptography mathematically: there is none. But if the size of the encrypted data is a relevant secret, you give that away. It's just that to me that is a very questionable model of what needs to be protected. You could ask the hard drive/SSD to tell you its write usage, and it'd tell you the same. – Marcus Müller Jun 15 '23 at 09:22
  • What I was thinking was a NAND flash device that can be configured to keep unused blocks random-filled, only zeroing such blocks right before writing. This certainly has performance and longevity implications, so it'll be reserved for specific applications (and will very likely use 1-bit cells). – Mounaiban Jul 04 '23 at 04:06
  • Yeah, that would require hardware changes, though, to defer parts of the wear leveling. – Marcus Müller Jul 04 '23 at 08:17