
Filling a drive with /dev/urandom seems to be very slow, so I created a file filled with FF:

dd if=/dev/zero ibs=1k count=1000 | tr "\000" "\377" >ff.bin

I'd like to fill the drive with copies of this file but the following command only writes once:

dd if=ff.bin of=/dev/sdb count=10000

How do I fill the drive with copies of the file, or is there a faster way to fill the drive with 1's?

Manuel Jordan
linuxfix
  • Why not use zeros? 1-bits don't erase a disk better than 0-bits. – Gilles 'SO- stop being evil' Aug 19 '14 at 23:08
  • @Gilles I guess zero is special enough that the disk driver could cheat and not really write anything to disk, only marking blocks as empty; I think some virtual hard disks do so. Still, it depends on why he's filling the drive. If it is for security, neither 0 nor 1 is safe enough, and filling with random data wouldn't be good either if the drive is an SSD. – pqnet Aug 21 '14 at 21:27
  • @pqnet Zero isn't special for physical storage. With a virtual hard disk it might be, but filling with anything is unsafe. For SSDs, there are specific issues with reallocated blocks, but writing nonzero values doesn't help with that. – Gilles 'SO- stop being evil' Aug 21 '14 at 21:35
  • @Gilles well, the reason he wants to write to his disk is not explained, so it would be great if the question were clarified. – pqnet Aug 21 '14 at 21:38
  • The venerable dd has the option seek=N to skip N obs-sized blocks at the start of output, so you can write a loop that seeks to the correct place k*M on the output block device before each repeated write of the M-sized file. – David Tonhofer Apr 20 '19 at 13:50
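The seek= loop described in the last comment might look like the sketch below. This is only an illustration: target.img stands in for /dev/sdb so it is safe to run, and the 1 KiB pattern size is an arbitrary choice.

```shell
# Hypothetical sketch: write an M-byte pattern file at offsets 0, M, 2M, ...
cd "$(mktemp -d)"                                  # scratch dir (assumption)
head -c 1024 /dev/zero | tr '\0' '\377' > ff.bin   # 1 KiB pattern of 0xff
M=$(stat -c %s ff.bin)                             # pattern size in bytes
for k in 0 1 2 3; do
  # seek=$k skips k obs-sized blocks; conv=notrunc keeps earlier writes
  dd if=ff.bin of=target.img bs="$M" seek="$k" conv=notrunc status=none
done
```

For a real device you would replace target.img with the device node and extend the k range to cover the whole drive.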

8 Answers

18

Simply do:

tr '\0' '\377' < /dev/zero > /dev/sdb

It will abort with an error when the drive is full.

Using dd does not make sense here. You use dd to make sure reads and writes are made with a specific size, and there's no reason to do that here. tr will do reads/writes of 4 or 8 KiB, which should be good enough.
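The pipeline can be tried risk-free first; in this sketch a mktemp scratch file replaces /dev/sdb (an assumption for the demo) and only 16 bytes are generated:

```shell
# Safe demo of the same tr pipeline against a scratch file
out=$(mktemp)
head -c 16 /dev/zero | tr '\0' '\377' > "$out"
od -An -tx1 "$out"   # every byte should read as ff
```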

  • you might even get a boost with tr ... | tee - - - > /dev/disk, but the likelihood that tr alone doesn't already meet and exceed the write bottleneck is pretty slim. – mikeserv Aug 19 '14 at 23:47
  • @mikeserv, I think the only way to increase performance from that is to reduce the number of system calls made. tee won't help for that. stdbuf -o1M tr... may help. – Stéphane Chazelas Aug 20 '14 at 05:41
  • Agreed. I sometimes use /dev/zero as a bucket with tr - like for newlines or whatever - and it always consumes a whole cpu. I think the pull from the kernel pumping out all of those bytes is pretty heavy. I was thinking that perhaps the file could just be duped. Maybe it was dumb. – mikeserv Aug 20 '14 at 05:52
  • @mikeserv, actually, using tee - - - does increase throughput significantly in my testing. That's going from 800MiB/s for tr (950MiB/s with stdbuf) to 2.3GiB/s with tee, which, like you say, is in any case way above the rate any current drive can sustain. – Stéphane Chazelas Aug 20 '14 at 06:01
  • Maybe true, but that 2.3G flood will still wind up trickling as soon as your pipeline steps outside of job control. stdbuf is smart - I never use it and probably should, and this is the second time in as many days someone's brought it to my attention. I think the other thing is CentOS - I read somewhere about a 64kB kernel buffer. – mikeserv Aug 20 '14 at 06:09
  • @mikeserv, the pipes are 64kB large on current versions of Linux, that may be where you got that 64kB from. – Stéphane Chazelas Aug 20 '14 at 09:06
  • you're a funny guy sometimes – mikeserv Aug 20 '14 at 09:52
  • sorry, how come you pass in '\0' '\377'? Is it necessary? Thanks! – HCSF Mar 19 '20 at 12:33
  • @HCSF Those are the parameters to the tr command. tr translates characters - in this case \0 to \377 - both are octal numbers. /dev/zero spews out a constant stream of \0 and tr is used to translate that stream into constant \377. 377 in octal is the same as 0xff in hexadecimal. – Majenko Apr 28 '21 at 11:30
  • Dear @mikeserv @StéphaneChazelas, since we are only writing to the single file /dev/sdb and stdout is otherwise unused, I do not understand why we need tee, or why tee would boost the speed. – midnite May 12 '22 at 10:04
  • @midnite, the point would be that some implementations of tr are slow to translate 0s to 1s, and tee would reuse the output of tr several times so tr would have to translate fewer 0s into 1s. The extreme version of that would likely be the fastest one: compute one large buffer full of 1s in memory once, and write that over and over. – Stéphane Chazelas May 12 '22 at 10:11
  • @StéphaneChazelas, thank you very much for the explanations. May I ask whether the complete script would be tr '\0' '\377' < /dev/zero | tee - - - - - - > /dev/sdb? And would more dashes in tee theoretically be faster? How did you measure the speeds? I would like to add pv to show progress; should I just add pv in front of the script? On the other hand, I did suspect the tr translation might be slow. If we could emit \377 directly, we would not need the translation at all, perhaps printf '\377' | tee - - - - - - - > /dev/sdb in concept? – midnite May 12 '22 at 10:35
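The "one large buffer, written over and over" idea from the last comment might be sketched as follows. The 1 MiB buffer size is an assumption, and scratch files replace /dev/sdb so the demo terminates:

```shell
# Translate zeros to 0xff once, into a buffer, then reuse the buffer.
buf=$(mktemp); target=$(mktemp)
head -c 1048576 /dev/zero | tr '\0' '\377' > "$buf"   # 1 MiB of 0xff
# Against a real device you would loop until the write fails (disk full):
#   while cat "$buf"; do :; done > /dev/sdb
# Here we write three copies so the demo ends:
for _ in 1 2 3; do cat "$buf"; done > "$target"
```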
5

For a faster /dev/urandom alternative, there is shred -v -n 1 (if pseudorandom is OK), or using cryptsetup with random key and zeroing that (for encrypted zeroes). Even without AES acceleration, it easily beats /dev/urandom speeds.

Not sure how fast tr is, otherwise you could just dd if= | tr | dd of=.

Using a file as a pattern source could be done like this:

(while true; do cat file; done) | dd of=...

Although the file should be reasonably large for that to be remotely efficient.

If the count= is important to you, add iflag=fullblock to the dd command. Partial reads are possible and would otherwise result in partial blocks being counted as full blocks. This is especially likely when using larger block sizes (like bs=1M), which you should use if you want speed.
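Putting the repeat-loop, count= and iflag=fullblock together might look like the sketch below; the file names and the 10 MiB total are illustrative assumptions. Checking cat's exit status makes the loop stop once dd has closed the pipe.

```shell
cd "$(mktemp -d)"                                       # scratch dir (assumption)
head -c 4096 /dev/zero | tr '\0' '\377' > pattern.bin   # small pattern file
# iflag=fullblock makes dd assemble full 1 MiB blocks from the pipe,
# so count=10 really means 10 MiB. When dd exits, cat's next write
# fails (SIGPIPE) and the loop ends.
while cat pattern.bin; do :; done |
  dd of=target.img bs=1M count=10 iflag=fullblock status=none
```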

frostschutz
4

I am editing my answer here as I came across a boot disc that didn't even have awk available (dd and tr were the only familiar tools on hand):

dd if=/dev/zero bs=65536 | tr '\0' '\377' | dd of=/dev/sda bs=65536

I found it necessary for speed/performance reasons to choose a 64 kB block size for my disk drive.

The performance of the above command was equal to that of my first pass of dd if=/dev/zero of=/dev/sda bs=65536 (it took about 37 minutes to fill a 75 GiB ATA disk drive, ~35 MiB/s).

4
badblocks -v -w -s -b <physical block size> -t 0xff /dev/<device>

Example writing 0xff to a 4K-native HDD at /dev/sdb, then reading it back to verify that 0xff was actually written to every block:

$ sudo badblocks -v -w -s -b 4096 -t 0xff /dev/sdb
AdminBee
0

To test the SSD speed, I wrote a little Perl program to write "zeros", "ones" or alphanumerics to a block device given on the command line. It doesn't invoke the dd golem, just does direct writes and syncs. May be of help here:

https://github.com/dtonhofer/wild_block_device_filler

Just run it as

wildly_fill_block_device.pl --dev=sdX1 --fillpat 1 --chunksize=1024P --sync

And it will write 0xFFs to /dev/sdX1 (unbuffered, using Perl's syswrite()) in chunks of (in this case) "1024 physical blocks", syncing the data to disk after every write using fdatasync(). On my SSD, this runs at ~70 MiB/s.

It will ask you whether you are SURE before it proceeds to nuke the partition or disk.

0

You can do it pretty efficiently in Bash without external files, except the device to write to, or external commands, except dd.

  1. Use printf to generate a byte with all one bits, and put it in a Bash variable.
  2. Concatenate the Bash variable to itself multiple times.
  3. Use Bash process substitution to fake an infinitely long input file that repeatedly echoes the Bash variable with no trailing newline.
  4. Use that as the input file to dd.

If you have GNU dd, get a nice progress bar with:

sudo -i
bash
ones="$( printf '\xff' )"
for _ in {1..16}; do ones="$ones$ones"; done
dd status=progress bs=65536 if=<( while true; do echo -n "$ones"; done ) of=/dev/whatever

Remove the status=progress if it doesn't work for you. The 65536 is 2^16, because the initial byte with all one bits was doubled 16 times.
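The doubling step can be sanity-checked on its own, without touching any device:

```shell
# One 0xff byte doubled 16 times yields 2^16 = 65536 bytes.
ones="$( printf '\xff' )"
for _ in {1..16}; do ones="$ones$ones"; done
echo "${#ones}"   # should print 65536, matching bs=65536
```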

0

You can simulate a /dev/one without a special device, with a FIFO + yes:

mkfifo ddfifo
dd if=ddfifo of=<file> iflag=fullblock bs=32K status=progress & yes "" | tr '\n' '\1' > ddfifo

tee may be used to double the throughput:

mkfifo ddfifo
dd if=ddfifo of=<file> iflag=fullblock bs=4M status=progress & yes "" | tr '\n' '\1' | tee ddfifo > ddfifo

If you'd like bytes with all bits set to one, swap '\1' for '\377'.
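A terminating, runnable variant of the FIFO trick; the temp directory and count=2 are assumptions added so the demo stops on its own instead of filling a device:

```shell
cd "$(mktemp -d)"            # scratch dir (assumption)
mkfifo ddfifo
# count=2 bounds the demo at 2 x 32 KiB; drop count= (and point of= at
# your device) to fill a real target until it is full.
dd if=ddfifo of=out.bin iflag=fullblock bs=32K count=2 status=none &
yes "" | tr '\n' '\1' > ddfifo   # endless \1 bytes; dies when dd exits
wait
```

When dd has read its two blocks and exits, the FIFO's read end closes, tr's next write fails, and the whole pipeline winds down by itself.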

-2

How to wipe with 1's:

while true; do echo 1 ; done | dd of=/dev/sdX status=progress

How to test the wipe:

dd if=/dev/sdX count=1 bs=512
PWP
  • That's actually filling with pairs of characters: ASCII/Unicode 1 and the subsequent newline. As an aside, you can replace the while loop with yes 1 - but it doesn't address the requirement to fill the drive with the binary byte 00000001. – Chris Davies Jan 22 '21 at 15:02
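The point in the comment above is easy to see by dumping the stream; this read-only one-liner touches no device:

```shell
# `yes 1` emits ASCII "1" plus newline: bytes 0x31 0x0a, not binary ones.
yes 1 | head -c 8 | od -An -tx1
```

Each 31 0a pair is the character '1' followed by a newline.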