Has the disk been zeroed?
Yes. The output of your dd command shows that it has written 5000981077504 bytes. Your cmp command says that it reached EOF (end of file) after 5000981077504 bytes, which is the same.
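For reference, the verification step being described usually looks something like this (a sketch; /dev/sdb is taken from your commands and the exact invocation may differ):
cmp /dev/sdb /dev/zero
Since /dev/zero never runs out, cmp hits EOF on the device once every byte has been compared; if it reaches EOF without having reported a difference, every byte read from the disk was zero.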
Be aware that this only works well with hard drives. For solid-state devices, features such as wear leveling and overprovisioning space may result in some data not being erased. Furthermore, your drive must not have any damaged sectors, as they will not be erased.
Note that cmp will not be very efficient for this task. You would be better off with badblocks:
badblocks -svt 0x00 /dev/sdb
From badblocks(8), the -t option can be used to verify a pattern on the disk. If you do not specify -w (write) or -n (non-destructive write), then it will assume the pattern is already present:
-t test_pattern
Specify a test pattern to be read (and written) to disk blocks.
The test_pattern may either be a numeric value between 0 and
ULONG_MAX-1 inclusive, or the word "random", which specifies
that the block should be filled with a random bit pattern. For
read/write (-w) and non-destructive (-n) modes, one or more test
patterns may be specified by specifying the -t option for each
test pattern desired. For read-only mode only a single pattern
may be specified and it may not be "random". Read-only testing
with a pattern assumes that the specified pattern has previously
been written to the disk - if not, large numbers of blocks will
fail verification. If multiple patterns are specified then all
blocks will be tested with one pattern before proceeding to the
next pattern.
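If you would rather write the zeros and verify them in a single step, badblocks can do that as well; a minimal sketch (note that -w is destructive and will overwrite everything on /dev/sdb):
badblocks -svw -t 0x00 /dev/sdb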
Also, using dd with the default block size (512) is not very efficient either. You can drastically speed it up by specifying bs=256k. This causes it to transfer data in chunks of 262,144 bytes rather than 512, which reduces the number of context switches that need to occur. Depending on the system, you can speed it up even more by using iflag=direct, which bypasses the page cache. This can improve read performance on block devices in some situations.
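For example, a faster read-back check could look roughly like this (a sketch under the assumptions above; iflag=direct may not be supported on every device):
dd if=/dev/sdb bs=256k iflag=direct | cmp - /dev/zero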
Although you didn't ask, it should be pointed out that shred overwrites a target using three passes by default. This is unnecessary. The myth that multiple overwrites are necessary on hard disks comes from an old recommendation by Peter Gutmann. On ancient MFM and RLL hard drives, specific overwrite patterns were required to avoid theoretical data remanence issues. In order to ensure that all types of disks could be overwritten, he recommended using 35 patterns so that at least one of them would be right for your disk. On modern hard drives using modern data encoding techniques such as EPRML and NPML, there is no need to use multiple patterns. According to Gutmann himself:
In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes.
In your position, I would recommend something along these lines instead:
dd if=/dev/urandom of=/dev/sdb bs=256k oflag=direct conv=fsync
When it finishes, just make sure the byte count it reports after saying "no space left on device" matches the size of the drive.
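One way to get the figure to compare against, assuming the drive is still /dev/sdb, is blockdev (part of util-linux); a sketch:
blockdev --getsize64 /dev/sdb
This prints the device size in bytes, which should equal the total that dd reports having written.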
You can also use ATA Secure Erase, which initiates firmware-level data erasure. I would not use it on its own because you would be relying on the firmware authors to have implemented the standard securely. Instead, use it in addition to the above in order to make sure dd didn't miss anything (such as bad sectors and the HPA). ATA Secure Erase can be managed by the command hdparm:
hdparm --user-master u --security-set-pass yadayada /dev/sdb
hdparm --user-master u --security-erase yadayada /dev/sdb
Note that this doesn't work on all devices. Your external drive may not support it.
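You can check whether the drive advertises support, and whether it is currently frozen, before attempting it; something along these lines (a sketch, again assuming /dev/sdb):
hdparm -I /dev/sdb
Look at the Security section of the output: it should report "supported" and "not frozen" for the erase commands above to work.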