
I am aware of three methods to delete all entries from a file.

They are

  • >filename
  • touch filename¹
  • filename < /dev/null

Of these three, I abuse >filename the most, as it requires the fewest keystrokes.

However, I would like to know which of the three is the most efficient (and whether there are any more efficient methods), with respect to both large log files and small files.

Also, how do the three commands operate and delete the contents?


¹ Edit: as discussed in this answer, this actually does not clear the file!

Sandun
debal

3 Answers


Actually, the second form touch filename doesn't delete anything from the file - it only creates an empty file if one did not exist, or updates the last-modified date of an existing file.

And the third, filename < /dev/null, tries to run filename with /dev/null as input.

cp /dev/null filename works.

As for efficiency, the most efficient would be truncate -s 0 filename (see here).

Otherwise, cp /dev/null filename or > filename are both fine. Both open and then close the file with the truncate-on-open flag (O_TRUNC). cp also has to open /dev/null, which makes it marginally slower.

On the other hand, truncate would likely be slower than > filename when run from a script since running the truncate command requires the system to open the executable, load it, and then run it.
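The three working approaches above can be sketched side by side; this is a minimal demo assuming a GNU userland for truncate, and demo.log is just a throwaway example filename:

```shell
#!/bin/sh
printf 'some log data\n' > demo.log   # start with a non-empty file

> demo.log                            # 1. redirection: open with O_TRUNC, close
[ "$(wc -c < demo.log)" -eq 0 ] && echo "redirection: empty"

printf 'more data\n' > demo.log
cp /dev/null demo.log                 # 2. copy the empty device file over it
[ "$(wc -c < demo.log)" -eq 0 ] && echo "cp: empty"

printf 'more data\n' > demo.log
truncate -s 0 demo.log                # 3. ftruncate()-based (GNU coreutils)
[ "$(wc -c < demo.log)" -eq 0 ] && echo "truncate: empty"

rm demo.log
```

All three leave a zero-byte file in place; only the system calls used to get there differ.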

ash
  • So why do you say that truncate is the most efficient? – Stéphane Chazelas Aug 30 '13 at 06:24
  • The truncate operation uses the ftruncate() or truncate() system call, which does not bother to open the file. It also avoids the close() system call that the cp and > filename methods need to call. – ash Aug 30 '13 at 06:26
  • Actually, it (at least the GNU one) does an open+ftruncate+close (in addition to the many system calls it does to load and initialise itself), as anyway, it would have to create the file if it didn't exist and truncate(2) doesn't do that. – Stéphane Chazelas Aug 30 '13 at 08:01
  • If we use touch filename, will the inode remain the same (provided there was a file before)? – pMan Aug 30 '13 at 08:30
  • @pMan yes, you can try it and check with ls -i – terdon Aug 30 '13 at 13:02
  • I get the error truncate: not found. What is the next best option, since I cannot add the truncate command to this environment? – javaPlease42 Apr 24 '14 at 03:49
  • Use the redirect method (>file) if running from a shell script. – ash Aug 28 '14 at 19:00
  • // , Doesn't cp /dev/null filename lose the file ownership and permissions? – Nathan Basanese Dec 09 '15 at 10:08
  • /dev/null is owned by root and has very open file permissions, so that's probably a good thing. It will use the default settings (based on umask) when creating the new file. – ash Dec 09 '15 at 18:02
  • The nice thing about truncate is that you can do it on multiple files in a single command without complicating things with xargs etc – Sridhar Sarnobat Mar 09 '16 at 18:25
  • I feel >filename is the most efficient, because it's faster to type. truncate may be technically faster, but it will never really be faster than > because you'll have to type out truncate -s 0 filename. – aalaap Jun 27 '16 at 10:09
  • @aalaap - if we're talking about speed of typing out commands, then I agree with you, that's probably about as fast as you can get. When talking about efficiency of the operation itself, such as when trying to keep down the overhead of truncating very rapidly (such as > 100 times per second), then the answer is very different. – ash Jun 29 '16 at 00:20
  • @ash Yes, I understand that. I was talking about a practical, real-world comparison between the two. – aalaap Jun 29 '16 at 12:18
  • @aalaap your comment is not necessarily practical. If I have to truncate often, I would just function it to a short alias, and the keystroke count would be similar, so not an issue. However, if I do have to truncate very often, as on the order of 10k files, the issue of keystrokes is negligible compared to runtime. – Dani_l Sep 21 '16 at 12:44
  • Interestingly, it does not seem to work with zsh: `> log/test.log` gives `zsh: file exists: log/test.log` – Adrien Apr 21 '17 at 15:23
  • There are shell settings, and more advanced syntax, that will prevent overwriting existing files. Look for the NO_CLOBBER option. set -o clobber will enable the file to be clobbered. Also, >| appears to override the noclobber setting – ash May 28 '17 at 02:14
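The noclobber behaviour mentioned in the last comment is easy to see in any POSIX shell; a small sketch (f is just an example filename):

```shell
#!/bin/sh
set -C                          # enable noclobber (same as set -o noclobber)

echo first > f                  # creating a brand-new file is still allowed

# Truncating an existing file with a plain > is now refused...
if ( echo second > f ) 2>/dev/null; then
    echo "plain > clobbered the file"
else
    echo "plain > refused"
fi

# ...but >| explicitly overrides noclobber.
echo second >| f
cat f                           # prints: second

rm f
```

So in a script that relies on > filename for truncation, a noclobber setting inherited from the environment can break it, while >| filename keeps working.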

Another option could be:

echo -n > filename

From the man page of echo:

-n Do not print the trailing newline character.
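A quick way to see the difference -n makes, assuming a bash-like echo (echo -n is not fully portable across shells; f1 and f2 are example filenames):

```shell
#!/bin/bash
echo -n > f1        # -n: nothing at all is written, so f1 ends up empty
echo    > f2        # without -n, the trailing newline still lands in the file

wc -c < f1          # byte count is 0
wc -c < f2          # byte count is 1 (just the newline)

rm f1 f2
```

Without -n, the file would not really be emptied: it would contain a single newline character.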


There is a builtin command ":", available in sh, csh, bash and maybe others, which can easily be used with the output redirection operator > to truncate a file:

#!/usr/bin/env bash
:> filename

What I like about this is that it does not need any external commands like "echo" etc.

One big advantage of truncating files instead of deleting and recreating them is that running applications which work with the file (e.g. someone running tail -f filename, or monitoring software) don't have to reopen it. They can just keep using the same file descriptor and get all the new data.
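This can be checked with ls -i: truncating in place keeps the inode, and therefore any open file descriptors, intact. A sketch (app.log is an example filename):

```shell
#!/bin/sh
printf 'old contents\n' > app.log            # start with a non-empty file

before=$(ls -i app.log | awk '{print $1}')   # inode number before truncation
: > app.log                                  # truncate in place
after=$(ls -i app.log | awk '{print $1}')    # inode number afterwards

[ "$before" = "$after" ] && echo "same inode: readers keep their descriptor"
[ "$(wc -c < app.log)" -eq 0 ] && echo "file is empty"

rm app.log
```

Deleting and recreating the file would instead give a new inode, and a process still holding the old descriptor would keep reading the (now unlinked) old file.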

  • man bash describes the : shell builtin as having no effect. – Haxiel Dec 03 '18 at 10:06
  • Yes, and you redirect this with > into the file, which creates the file if it does not exist, and truncates it to zero if it does. Better said: you use : to do nothing, and > to redirect nothing to a file, truncating it. – Mirko Steiner Dec 03 '18 at 10:10
  • Why would you do that? > file is enough to truncate a file. You don't need any command, just the redirection operator. – terdon Dec 03 '18 at 11:12
  • Sometimes > filename won't work, for example in zsh, but : > filename still works. – CS Pei Dec 05 '18 at 16:19
  • Bash and sh seem to like > myfile, but e.g. csh errors with: Invalid null command. – Mirko Steiner Dec 12 '18 at 15:33