The original comment about 'not check for or report errors' probably came about because by default 'dd' does not pad out bad reads, so on a block-oriented device not only the bad block but all subsequent blocks will be incorrect (because they are no longer aligned). As others have said, this is fixable with the 'conv=noerror,sync' option, which tells dd to keep going after a read error ('noerror') and to pad short reads with zeros ('sync') so blocks remain on block boundaries. It should be combined with a block-size setting matching the filesystem block size, which is often 4096 bytes but can be lower.
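For example, a minimal sketch (the device name /dev/sdX and the image path are placeholders, and bs=4096 assumes a 4096-byte filesystem block size):

    # copy the whole device, continuing past read errors and zero-padding bad blocks
    dd if=/dev/sdX of=/path/to/disk.img bs=4096 conv=noerror,sync status=progress

The 'status=progress' flag is optional (GNU dd) but useful on long-running copies. Keeping the block size small here limits how much data gets zero-filled around each bad sector.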
I would agree with Johan Myréen's comment about using file-based backup, because the granularity of the backup is much smaller: an error backing up one file doesn't necessarily affect the others. You could also use a file system that checksums the file data and, in redundant configurations, can correct errors (such as zfs or btrfs), so at the very least you know when errors happen and can hopefully fix them.
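As a sketch of the detection side on btrfs (assuming the filesystem is mounted at /mnt/data, which is a placeholder):

    # read and verify all data and metadata checksums on the filesystem
    sudo btrfs scrub start /mnt/data
    # check progress and any error counts afterwards
    sudo btrfs scrub status /mnt/data

On zfs the rough equivalent would be 'zpool scrub' on the pool followed by 'zpool status'.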
Another way to detect bad backups would be to use a message digest, e.g. a SHA-256 hash ('sha256sum'), on the raw device (unmounted!) and on the dd backup file... they should of course be the same.
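A minimal sketch, assuming the source device is /dev/sdX and the image was written to /path/to/disk.img (both placeholders):

    # hash the source device and the image; the two digests should match
    sudo sha256sum /dev/sdX /path/to/disk.img

Note this only holds if nothing writes to the device between the copy and the check, which is one reason the device should be unmounted.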
Finally, best practice in backups is never to rely on only one backup... keep a minimum of 2!
"I think I read somewhere that dd does not check for or report errors" Really!!?? Looks like fake news about a coreutils piece of software. dd works well. But unless you are trying to build an exact image, I wouldn't call it a backup tool. – Eduardo Trápani May 17 '21 at 03:29
'dd if=/dev/sda bs=10M | sha1sum', and prefer using dd when interacting with block devices (versus reading/writing a stream to a block device directly, 'sha1sum </dev/sda'); there should be no difference in the information copied, but there may be a difference in the performance of the hardware. This is more obvious on very slow-to-react storage devices; on SSDs it's somewhat moot. A ddrescue variant is helpful when the hardware is degraded/failed. – ThorSummoner May 19 '21 at 00:12