25

I have a file that I want to pad until it reaches 16 MiB (16777216 bytes). Currently it is 16515072 bytes. The difference is 262144 bytes.

How do I pad it?

This doesn't seem to be working:

cp smallfile.img largerfile.img
dd if=/dev/zero of=largerfile.img bs=1 count=262144
tarabyte
  • 4,296
  • 3
    @tarabyte: do you want physical padding or logical padding? In other words, should the file only show a size of 16777216 (and possibly contain holes), or should it also occupy that amount of storage on the disk? BTW, choosing bs=1 in dd is in my experience very expensive at runtime. – Janis Apr 16 '15 at 22:50
  • 10
    truncate -s 16M thefile – frostschutz Apr 17 '15 at 09:28
  • 5
    @frostschutz that'd be a good answer, were you to post it as an answer. – derobert Apr 17 '15 at 09:40
  • @derobert, What's with StackExchange site users posting legit, simple answers as comments? – user1717828 Apr 20 '15 at 05:29
  • @user1717828 not sure, probably a good question for meta. – derobert Apr 20 '15 at 07:27

6 Answers

28

Besides the answers that give you physical padding, you can also leave most of the padding space in the file empty ("holes") by seeking to the new end position of the file and writing a single character:

dd if=/dev/zero of=largerfile.txt bs=1 count=1 seek=16777215

(which has the advantage of being much faster, even with bs=1, since only a single byte is transferred, and of not occupying large amounts of additional disk space).

That method seems to work even without adding any character, by using if=/dev/null and the final desired file size:

dd if=/dev/null of=largerfile.txt bs=1 count=1 seek=16777216
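You can confirm that the padding is sparse by comparing the apparent size with the allocated size; a quick check, assuming GNU coreutils ls and du (exact block counts depend on the filesystem):

ls -l largerfile.txt    # apparent size: 16777216 bytes
du -k largerfile.txt    # allocated size in KiB; the padded tail adds (almost) nothing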

A performant variant of a physical padding solution that uses larger block-sizes is:

padding=262144 bs=32768 nblocks=$((padding/bs)) rest=$((padding%bs))
{
  dd if=/dev/zero bs=$bs count=$nblocks
  dd if=/dev/zero bs=$rest count=1
} 2>/dev/null >>largerfile.txt
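
With the numbers above, rest works out to 0, so the second dd has nothing to contribute; a variant of the same idea that only runs it when the padding is not an exact multiple of the block size (a sketch, assuming a POSIX shell):

padding=262144 bs=32768
nblocks=$((padding / bs)) rest=$((padding % bs))
{
  dd if=/dev/zero bs="$bs" count="$nblocks"
  [ "$rest" -gt 0 ] && dd if=/dev/zero bs="$rest" count=1
} 2>/dev/null >>largerfile.txt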
Janis
  • 14,222
15

Drop the of=largerfile.txt and append stdout to the file:

dd if=/dev/zero bs=1 count=262144 >> largerfile.txt
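
Afterwards the file should be exactly 16515072 + 262144 = 16777216 bytes; a quick check (GNU stat shown, ls -l works just as well):

stat -c %s largerfile.txt    # should print 16777216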
Outurnate
  • 1,219
  • 10
  • 19
10

Simple answer, courtesy of @frostschutz's comment above, posted here as an answer:

truncate -s 16M thefile
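
Applied to the question's file names this would be something like the following (GNU coreutils truncate assumed; the new bytes read back as zeros and are stored as a hole on filesystems that support sparse files):

cp smallfile.img largerfile.img
truncate -s 16777216 largerfile.img    # or: truncate -s 16M largerfile.img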

hspil
  • 201
4

The best answer here is Janis's (above), because it lets you forget about the current file size and pad directly to the desired size with no calculation.

It also takes advantage of sparse files, which appending /dev/zero doesn't.

The answer could be tidier though, because 'count' is allowed to be 0 and you still get the padding:

dd if=/dev/null of=largerfile.txt bs=1 count=0 seek=16777216

(Edit: this is correct for GNU dd, but the behaviour of count=0 is platform-specific, see comments)
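
A quick way to confirm what your dd does (the comments below show this is implementation-specific) is to try it on a scratch file; testfile is just a placeholder name here, and GNU coreutils stat is assumed:

dd if=/dev/null of=testfile bs=1 count=0 seek=16777216
stat -c %s testfile    # 16777216 with GNU dd; other implementations may behave differently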

PeteC
  • 174
  • You are mistaken: count=0 is unspecified, but typically it is the same as when no count parameter is specified. More problems: dd truncates the file to 16777216 bytes, but if you hope that this creates a hole at the end, you are mistaken, since you would first need to write data after the hole and later truncate the file to a size that contains no data. – schily Jul 04 '18 at 14:58
  • count=0 is nothing like specifying no count parameter. Are you saying that dd if=/dev/zero of=somefile is the same as dd if=/dev/zero of=somefile count=0? Try it. – PeteC Jul 06 '18 at 10:32
  • Of course! count=0 is the same as if you did not specify a count parameter at all. This is true at least for all implementations that have been derived from the original sources. Try it; it seems that you have never worked with the original dd command. – schily Jul 06 '18 at 10:39
  • I can't find any documentation that specifies 0 as a unique value for count meaning 'ignore this parameter'. Can you find any? Without such documentation, count=0 means 'write zero blocks' and any deviation from this is a bug... (original sources or no). – PeteC Jul 06 '18 at 11:14
  • Why don't you just read the POSIX standard: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/dd.html If you had read my first comment, you would know that the POSIX standard declares count=0 to cause unspecified behavior. If it were not for that single deviating clone implementation, POSIX would say that count=0 means infinity. If you don't know that, you should not write answers on this platform, since this is for UNIX. – schily Jul 06 '18 at 11:27
  • You're entirely right; my apologies. I only found this unix manpage which I thought was authoritative. This platform is for Unix and Linux (it's in the title) and I will continue to write answers despite having been corrected on this point, thank you. – PeteC Jul 06 '18 at 11:44
  • 1
    This was the POSIX documentation from before May 2015 when the underspecified text was corrected. – schily Jul 06 '18 at 12:43
1

Do you have to use dd? If you want a file to have a particular (logical) length, just write a zero byte at the position where you want the file to end. The bytes between the previous end and the written byte will read back as null bytes. Here's an example using perl.

$ echo Hello > file
$ ls -l file
-rw-r--r-- 1 user group 6 Apr 16 22:59 file
$ perl -le 'open(my $f,"+<","file"); seek($f, 16777216 - 2, 0); print $f "\0"'
$ ls -l file
-rw-r--r-- 1 user group 16777216 Apr 16 22:59 file

Why the "- 2" in the line? The script will write a byte, so we subtract 1 to seek to the position before that byte. We take off the other because the seek position is zero-indexed.

BowlOfRed
  • 3,772
0

Here is a solution that specifically does not rely on math.

echo 'The secret meetup location is at the seventh stairway at seven.' > message.txt
dd if=/dev/zero bs=1 count=300 > square.rgb
dd if=message.txt of=square.rgb conv=notrunc

I am using this as steganography to encode a message into an image for a Su Square.
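
Because conv=notrunc keeps dd from truncating square.rgb when it opens it, the second command overwrites only the leading bytes and the file stays 300 bytes long; a quick check, assuming GNU coreutils stat and od:

stat -c %s square.rgb    # still 300
od -c square.rgb | head  # the message text, followed by NUL bytes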

  • Funnily, at first I had exactly this problem and this solution worked. But not even 5 minutes later I wanted to do exactly the opposite, and dd didn't truncate the file: even though there were only null bytes after 2k of data, it still copied the entire 2G to the output file. What exactly influences whether dd truncates the file? I also checked for a "conv=trunc", but that doesn't exist. I'm a bit confused now about what exactly causes dd to truncate a file. – K. Frank Jan 12 '24 at 15:11