4

Using find, it is easy to shred a directory's contents recursively (as discussed in this question). However, sometimes the filenames on their own carry sensitive information already. Is there a way to shred everything associated with some file/directory, i.e. it overwrites both file contents and file/directory names in each cycle?

David
  • 215

2 Answers

4

shred is mostly useless. To remove the content of a deleted file from the disk image, it isn't enough to overwrite the places where the file once was: you need to remove all copies of the file, and there can be several. With many types of files, editing leaves deleted backup copies with mostly the same content lying around, and journaling or copy-on-write filesystems can keep old data in places shred never touches.

Additionally, if the disk becomes damaged, it may be impossible to read the data by software means, but still possible to recover it by hardware means, or by letting the disk cool down (putting a hard disk in a freezer makes it less error-prone for a little while, until it finally gives up the ghost).

The safe way to shred a file is to store it from the start inside an encrypted container protected by a strong password (generate a long enough random password and write it down; when you're finished with the file, burn the piece of paper). Scrub your filesystem completely (back up the data, then overwrite it with zeroes), then create an ecryptfs container and restore your files. Don't forget to scrub the backup once you're sure the restoration was successful, or stow the backup in a secure place.
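For instance, a password of the kind described could be generated like this (32 characters is an arbitrary choice for illustration, not a recommendation from the answer):

```shell
# Draw 32 alphanumeric characters from the kernel's CSPRNG.
# Write the result down on paper rather than saving it to a file.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo
```

head exits after 32 characters, which terminates tr via SIGPIPE, so the pipeline doesn't run forever.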

3

shred is useful as a fast random-data source on Linux for overwriting a whole block device in a single pass (unlike /dev/*random, which is too slow), but it is not particularly useful for overwriting single files. The filesystem doesn't expose where old copies of data or names may be stored, so overwriting them in place simply isn't possible.
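For the whole-device case, a single shred pass might look like the sketch below. The device name is a placeholder, and this destroys everything on the target, so triple-check it first:

```shell
# DANGER: overwrites the entire device with one pass of pseudorandom data.
# /dev/sdX is a hypothetical placeholder -- substitute your real target.
shred -v -n 1 /dev/sdX
```

The same invocation works on a regular file, which is a safe way to see what shred actually does before pointing it at a device.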

I just tried it with ext4 in a 1 GB image: I created and deleted 3 files, filled the filesystem up with a zero file, then extracted a Linux source tree onto it, with a sync/remount between every step, and the original file names were still there.

So instead I came up with this command:

mkdir filenamescrubbing
cd filenamescrubbing
dd if=/dev/zero bs=1M | split -b 4096 -a 254

This creates lots and lots of files with very long names (with split's default one-character prefix, -a 254 yields 255-character names, the usual filesystem limit; if your filesystem allows longer names, use a bigger value for -a). It keeps going until it runs out of either space (use a smaller value for -b) or inodes.
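Scaled down to a throwaway directory, the same idea looks like this (the sizes are shrunk here so it finishes instantly; for a real scrub you would use -a 254 and let it run until the filesystem is full):

```shell
mkdir -p filenamescrubbing && cd filenamescrubbing
# 64 KiB of zeroes split into 4 KiB files, each named with a
# 32-character generated suffix (a small stand-in for -a 254).
dd if=/dev/zero bs=1K count=64 2>/dev/null | split -b 4096 -a 32
ls | wc -l   # 16 files created
```

Each created file occupies one inode and one directory entry, which is what overwrites the leftover names.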

That got rid of the filenames for me, because wherever those filenames are stored, creating a new one for every inode available scrubbed all of them.

Of course this method is just as stupid as filling up free space with zeroes: there is never a 100% guarantee that it actually worked, even with a large number of files.

What you can do to verify, for a single file name, is grep -a -b --only-matching yourfilename /dev/yourdevice, assuming the filesystem stores names in plain text. I guess you could also do a replacement with sed or similar directly on the device, if you're willing to risk corrupting it.
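As a harmless illustration of that check, you can run the same grep against a throwaway image file instead of a real device (the file name and contents here are made up):

```shell
# Build a fake "device image": a leftover file name among binary junk.
printf 'junk\0secretfile.txt\0more junk' > fake-image.bin
grep -a -b --only-matching 'secretfile.txt' fake-image.bin
# prints "5:secretfile.txt" -- the byte offset where the name survives
```

-a forces grep to treat the binary data as text, -b prepends the byte offset, and --only-matching prints just the matched name rather than the surrounding garbage.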

Full disk encryption can be a blessing.

frostschutz
  • 48,978