23

I have heard many times that issuing rm -rf is dangerous since users can accidentally remove the entire system. But sometimes I want to remove a directory recursively without being asked every time. I am thinking about using yes | rm -r instead.

What I am wondering is: is yes | rm -r safer than rm -rf? Or are they essentially the same?

  • @NiKiZe Actually I read it many times. It says: "ignore nonexistent files and arguments, never prompt". I am not sure how you translate this official statement to your understanding "the same as typing y on every question". –  Aug 29 '21 at 10:29
  • 1
    the issue is that the official manual does not make this clear. If I follow your logic then sure, they are exactly the same. However, your argument, while reasonable, is not something that is documented. –  Aug 29 '21 at 11:43
  • 4
    rm -rf is "considered dangerous", but it is not documented as such. Using rm at all could delete the wrong file, and is equally "dangerous". – NiKiZe Aug 29 '21 at 12:03
  • 1
    As I said, this is your understanding, which I actually agree with. But that is not the purpose of this question. The purpose is that I want to be sure about the difference in behavior between rm -rf and yes | rm -r as documented in the manuals or source code. –  Aug 29 '21 at 13:18
  • In both cases, if you are in the wrong directory and run the command, the outcome is the same: you're hooped. So, no. – Ian W Aug 30 '21 at 00:22
  • 17
    After several decades of using Unix, I've taught myself to always always remove my hands from the keyboard at the end of an rm command, look carefully at it, and only then put my hands back down and hit return. Another alternative is first to type echo rm -rf ... if you're using any glob patterns. This will show you the expanded arguments rm will get. If all looks good, type control-p to get the last input line back, delete echo, and hit return. – Dale Hagglund Aug 30 '21 at 07:27
  • 3
    A good "helper" for rm (only possible with some implementations) is to put the flags at the end: rm /some/stuff -rf. Not foolproof, but does at least prevent the rm -rf /etc<enter> type mistakes. – SeamusJ Aug 30 '21 at 22:51
  • Do you mean yes|rm -ir? The -i may come from a bash alias, which some distributions add to the default .bashrc. – allo Aug 31 '21 at 13:47

9 Answers

35

Short answer

No. However, I really like your creativity in piping commands together. Still, yes | rm -r is a nice example of UUOC (useless use of cat) - an acronym (or rather, jargon) for command-line constructs that only provide a function of convenience to the user.


Long Answer

I really like your question, as it can be answered in more than one dimension.

Technical dimension

The rm command is used to delete files. The option -r, or in its longer form --recursive, is used to delete files (as well as directories) recursively. The -f (long: --force) option is used to ignore nonexistent files and arguments and to never prompt for confirmation. See the man page for more details; it is linked below.

So if you want to delete a directory recursively without being prompted, then rm -rf is the correct way to do it.
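For illustration, here is a minimal shell session sketching the difference (the directory and file names are made up, and the exact prompt text depends on your rm implementation; this is roughly what GNU rm does on an interactive terminal):

$ mkdir -p project/build
$ touch project/build/output.o
$ chmod a-w project/build/output.o       # a write-protected file inside the tree
$ rm -r project
rm: remove write-protected regular empty file 'project/build/output.o'?    # -r alone may still prompt
$ rm -rf project                         # -f: no prompts at all
$ rm -rf project                         # no complaint either, even though project no longer exists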

Linguistic dimension

You have heard many times that issuing rm -rf is dangerous, and that is correct. The missing point is that no one has told you why this is the case.

You already provided the answer as part of your initial question: it is not the command itself that is dangerous. It is the user who mistypes something and is then confronted with deleted files or directories that they never wanted to delete in the first place.

So what is meant by the warnings you heard is: think twice about whether you really want to delete what you have entered, before you hit the enter key.

Your question implies that you did not fully understand the warning the way it was meant, but instead understood: "The command itself is dangerous, so I need to find another way to delete my files and directories."

Of course that is possible with Unix commands, as there is always more than one way to reach a goal. The warnings you have heard, however, are meant to make you truly reflect, and the (creative) pipeline construct yes | rm -r does not help you think.

That's the reason why it is no less dangerous than rm -rf.

Historical dimension

All Unix commands were originally built following the mantra: "Do one job, and do this job perfectly".

However, this is confusing for people who are used to being patronized by their operating system. In Windows, for instance, it is still quite normal for the user to be asked after initiating an action: "Do you really want to do this or that?"

This is not the case if you are using Unix commands, unless you explicitly request to be asked to confirm. The rm command is no exception and knows such a parameter: -i or --interactive.
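For example, with GNU rm the interactive mode looks roughly like this (the file name is made up, and the exact wording may differ between implementations):

$ rm -i notes.txt
rm: remove regular file 'notes.txt'? n      # answering n leaves the file alone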

The trouble started around 2004, when a lot of new Linux users arrived very rapidly, driven by the rise of Ubuntu as an easy-to-use operating system. All of them had a relatively limited understanding of how Linux works under the hood. And many of them made a mistake, which resulted in many systems being deleted due to a mistyped input such as rm -rf /. Sad times...

The mistake made back then was the attempt to "protect" those users, much like the way Windows does. So aliases were introduced by distributors like Ubuntu, which automatically turned a plain rm command into rm -i. It would have been better, in my opinion, to teach new users the Unix way, instead of patronizing them.

Nevertheless, the bottom line is that some distributions still ship such alias definitions, so many people think that interactive prompting is the default behavior of the rm command. However, this is not the case.
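You can check whether your distribution does this, and bypass it if needed; a quick sketch in bash (the alias shown and the directory name are just examples):

$ type rm
rm is aliased to `rm -i'
$ \rm -r scratch      # the backslash bypasses the alias for this one invocation
$ unalias rm          # or drop the alias for the current shell session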

References

  • rm(1) man page

  • Hi @Lutz, my understanding of why rm -rf is dangerous is that, say, I want to remove ./*, so I type rm -rf ./*. Accidentally, however, I type rm -rf /* and get my entire filesystem removed. I think the real issue is that, if there were an option -y, then it would be easier to see that yes | rm -r and rm -ry are the same. But rm names the option -f, so seemingly it can "force" the removal more than simply answering yes, and thus the question whether yes is less forceful than --force... –  Aug 29 '21 at 11:40
  • 11
    @Mamsds The effect is the same, because either will just delete all your files. – Michael Hampton Aug 29 '21 at 11:49
  • 2
    The mistake made back then was the attempt to "protect" those users Easily fixed with unalias -a... – Andrew Henle Aug 29 '21 at 20:34
  • 23
    Errr, I hardly think UUOC applies here, since cat isn't being used at all. – Dale Hagglund Aug 30 '21 at 07:28
  • I think the issue is that many GUI users paste commands into a terminal from forums without thinking, and some b@stard users give them harmful commands to paste. – akostadinov Aug 30 '21 at 10:01
  • 2
    @DaleHagglund UUOP? (Useless Use Of Pipe?) – Aster Aug 30 '21 at 13:56
  • @akostadinov Yes, but there needs to be due diligence on the part of the user, as well. If a user is told to crack open regedit and randomly change a bunch of values, and they do that without some cursory research to find out what the command does, at what point is it Microsoft's fault, as opposed to the user's? – Aster Aug 30 '21 at 14:09
  • 7
    GNU rm also has a -I (--interactive=once) option, which prompts once per directory on the command line, or for the whole command when removing more than 3 files. I find it's a good balance, although sometimes I still use -f to override it for directories with lots of read-only files or other cases where it would ask multiple questions. That's fine, though. If I ever want the behaviour of interactively choosing which of the few files matching a glob to delete, I explicitly use rm -i *foo* or something - agree that on by default in an alias is bad for rm, unlike mv and cp. – Peter Cordes Aug 30 '21 at 15:44
  • @PeterCordes, deserves to be an answer – akostadinov Aug 30 '21 at 19:35
  • @akostadinov: ok, since you asked, I turned it into an answer – Peter Cordes Aug 30 '21 at 20:19
  • 2
    One tiny "trick" that already saved me a couple of times is to just place the options at the end, i.e. instead of rm -rf <files>, use rm <files> -rf. I rely on auto-completion a lot, but it happened quite often that accidentally I hit the return key too early before realizing I had the wrong completion. Abiding to an "options last" behavior can save you some trouble :-) – andreee Aug 30 '21 at 20:32
  • 1
    I'd claim yes | rm isn't useless at all, as long as you understand what it does. Which isn't quite what rm -f does because the latter ignores errors while the former does not. – Dale Hagglund Aug 31 '21 at 02:34
  • "However, this is confusing for people who are used to be patronized by their operating system." Yes and thank God that it is different in Unix. I am a pure Linux guy but have to work on Windows for a job. Nothing ever goes smooth in Windows, there is always something not authorized, no permissions, opened by something else, whatnot. Instead of solving problems I am trying to find loopholes for almost anything. In 25 years it never had happened to me that my rm -rf did something else than intended. – Johannes Linkels Aug 31 '21 at 23:46
26

First, as others have already said, yes | rm -r is very similar but not identical to rm -rf. The difference is that the -f option tells rm to continue past various errors. This means that yes | rm -r will exit on the first error, unlike rm -rf, which continues on and keeps deleting everything it can. This means that yes | rm -r is slightly less dangerous than rm -rf, but not substantially so.

So, what do you do to mitigate the risks of rm?

Here are a few habits I've developed that have made it much less likely to run into trouble with rm. This answer assumes you're not aliasing rm to rm -i, which is a bad practice in my opinion.

Do not use an interactive root shell. This immediately makes it much more difficult to do the worst-case rm -rf /. Instead, always use sudo, which should be a visual cue to look very carefully at the command you're typing. If it's absolutely necessary to start a root shell, do what you need there and exit. If something is forcing you to be root most of the time, fix whatever it is.

Be wary of absolute paths. If you find yourself typing a path starting with /, stop. It's safer to avoid absolute paths: instead, cd to the directory you intend to delete from, use ls and pwd to look around and make sure you're in the right place, and then go ahead.

Pause before hitting return on the rm command. I've trained myself to always always always lift my fingers from the keyboard after typing any rm command (and a few other potentially dangerous commands), inspect what I've typed very carefully, and only then put my fingers back to the keyboard and hit return.

Use echo rm ... to see what you're asking rm to do. I often do crucial rm commands as a two-step process. First, I type

$ echo rm -rf ...

This expands all shell globs (i.e., * patterns, etc.) and shows me the rm command that would have been executed. If this looks good, again after careful inspection, I type ^P (control-P) to get the previous input line back, delete echo, inspect the command line again, and then hit return without changing anything else.
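A variant of the same idea, suggested in the comments below, is a tiny shell function that prints each expanded argument shell-quoted on its own line, so file names with spaces don't get glued together the way echo output does. A minimal sketch (the helper name and file names are made up):

preview() { printf '%q\n' "$@"; }     # one shell-quoted argument per line

$ preview rm -rf ./build/*
rm
-rf
./build/app.o
./build/main.o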

Maintain backups. If you're doing most of the above, the odds of having to restore the entire system are very low, but you can still accidentally delete your own files, and it's handy to be able to get them back from somewhere.

  • 8
    Interesting - I have the opposite take on one item - be wary of relative paths. Absolute paths refer unambiguously to the same file (absent chroots/containers), so it's clear what you're changing. I've been bitten before by redoing commands from history with an intervening change of directory, or by cutting and pasting from one virtual terminal to another. I agree with all the rest, especially previewing with echo (or with printf '%q\n' or the like if I'm concerned about argument splitting), though do note that we can't blindly add that to the front of a multi-command pipeline! – Toby Speight Aug 30 '21 at 13:57
  • 1
    GNU rm also has a -I (--interactive=once) option, which prompts once per directory on the command line, or for the whole command when removing more than 3 files. I find it's a good balance, although sometimes I still use -f to override it for directories with lots of read-only files or other cases where it would ask multiple questions. That's fine, though. If I ever want the behaviour of interactively choosing which of the few files matching a glob to delete, I explicitly use rm -i *foo* or something - agree that on by default in an alias is bad for rm, unlike mv and cp. – Peter Cordes Aug 30 '21 at 15:44
  • 3
    Note that echo is not a great way to test commands. You can't tell the difference between echo rm 'name with a space' and echo rm 'name' 'with a space', since they both cause the same output. If you use that facility frequently, maybe build a script or shell function that runs printf '%q ' "$@"; echo? That way quoting changes don't get hidden by echo's just-concatenate-it-all-together behavior. – Charles Duffy Aug 30 '21 at 17:13
  • 3
    One tiny "trick" that already saved me a couple of times is to just place the options at the end, i.e. instead of rm -rf <files>, use rm <files> -rf. I rely on auto-completion a lot, but it happened quite often that accidentally I hit the return key too early before realizing I had the wrong completion. Abiding to an "options last" behavior can save you some trouble :-) – andreee Aug 30 '21 at 20:35
  • 1
    @andreee, on GNU, yep. Not as much on other systems where options are only taken if they're first. Another alternative is to hit a # at the start of the line to make it a comment. Well, unless you have fancy completion rules that look at the command to determine what to complete... Another alternative might be something like rm -? ..., i.e. with an invalid option that will cause an error unless removed. – ilkkachu Aug 30 '21 at 20:37
  • @RiaD Thanks for pointing out that difference. – Dale Hagglund Aug 31 '21 at 02:27
  • @CharlesDuffy I don't offer any of these ideas as foolproof. That said, your point is correct, although I rarely use files with spaces in names. In cases where I can't avoid it, I still find echo useful because the act of inspecting the command output makes me think about the issue you raise and compare it to the rm command. If I cared a great deal, I might write a small script that echoed each argument with <<</>>> markers or something. – Dale Hagglund Aug 31 '21 at 02:29
  • @andreee I suppose it works, but as @ ilkkachu mentions, only on gnu style commands, and if works for you that's great. However, I intensely dislike argument parsing that allows options anywhere on the command line, so I'm not sure I could bring myself to do it that way. (insert old guy in rocking chair gif here.) – Dale Hagglund Aug 31 '21 at 02:32
  • @TobySpeight Thanks for the comment: you make a good point. And, in fact, I am often also careful of relative paths, especially if they involve more than a single consecutive ..s – Dale Hagglund Aug 31 '21 at 02:42
  • @CharlesDuffy I just reread your comment, and I missed your recommendation of using printf "%q". I just don't think of printf that often, but it looks like a really good idea. [I just checked the printf(1) man page. Do multiple arguments repeat the format string? Ie, if I type printf "%q " x y "some words" q, do I get output for each argument, or just the first.] – Dale Hagglund Aug 31 '21 at 02:43
  • 1
    @Dale, yes, the format string is re-used - just running your example produces x y some\ words q. The manual for GNU printf says "The FORMAT argument is reused as necessary to convert all the given ARGUMENTs. For example, the command printf %s a b outputs ab", and the POSIX specification says "The format operand shall be reused as often as necessary to satisfy the argument operands". – Toby Speight Aug 31 '21 at 06:55
  • @TobySpeight Apparently I can't read, because I checked the man page before asking about the reuse of format arguments and still didn't see the sentence you refer to. – Dale Hagglund Sep 01 '21 at 04:41
10

If you mistype the name of the directory, even rm -r dir will remove the wrong one, and without asking questions unless there's a file in there that's missing write permission. (But even then, everything before that one does get removed.)

The difference between rm -rf dir and yes | rm -r dir is that -f overrides the prompts to start with, while piping from yes answers y to all prompts. Which probably will be taken as confirming the deletion, but it should be influenced by the locale (LC_MESSAGES), so it might be that in some locale, y would not confirm. (I tested in the Finnish locale on Debian, where both the English y and the Finnish k did confirm.)

In the end, it doesn't matter which one you use. If you're using either, it means you don't get individual confirmations, and you get to restore from backups if you delete the wrong files.

I don't think that's specific to just rm, or to the command line in general. It's completely possible to click "delete" on the wrong file or directory on a GUI-based file manager too.

Name your files smartly, keep backups, think before deleting any files.

If you still need to do it, you could wrap rm into a function that counts and lists the affected files before going on with the deletion. But doing that is another question.
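Purely to illustrate that idea, a rough bash sketch of such a wrapper (the function name and exact behaviour are made up, not part of this answer's recommendation):

# List what would be removed, ask once, then delete.
safer_rm() {
    printf '%s\n' "$@"                                       # show every expanded argument
    printf 'remove these %d item(s) recursively? [y/N] ' "$#"
    read -r answer
    [ "$answer" = y ] && rm -rf -- "$@"
}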

ilkkachu
  • 138,973
6

This is pointless. If you just want to have rm not ask about every file, don't use -i in the first place. e.g. disable alias expansion for that command by running \rm -r foobar. (Quoting one or more of the characters of the command with \ or single/double quotes disables alias matching in bash, apparently.) Then rm will still prompt for any read-only files (and a few other special cases), since that's the default. Only if you want to override that should you use -f.

Or better, don't alias rm='rm -i' in the first place, because you get too used to always hitting y when removing even a single file, defeating much of the purpose of the safety check and wasting your time.

GNU rm(1) also has a -I (--interactive=once) option (man page), which prompts once for the whole command when removing more than 3 files, or if you used -r. This saves you from a glob matching much more than it should, and lets you verify that the count looks like the right number of files for intentional big globs.

I find it's a good balance as a standard rm alias, although sometimes I still use -f to override it for directories with lots of read-only files where it would ask multiple questions. That's fine, though, since that's a rare case for me.

e.g.

### example on Arch GNU/Linux, GNU Coreutils 8.32

$ alias rm
alias rm='rm -I'

$ touch foo{1..3}
$ rm -r foo*
rm: remove 3 arguments recursively? ^C
$ rm foo*            # silently works
$ touch foo{1..4}
$ rm foo*
rm: remove 4 arguments? y

$ touch foo{1..4}
$ chmod 444 foo1
$ rm foo*
rm: remove 4 arguments? y
rm: remove write-protected regular empty file 'foo1'? y    # same prompt as \rm foo*

$ mkdir dfoo dbar
$ touch foo{1..4}
$ rm *foo*
rm: remove 5 arguments? y
rm: cannot remove 'dfoo': Is a directory    # but the regular files got deleted

$ touch foo{1..4}
$ rm -r foo[1-3]
rm: remove 4 arguments recursively? n    # prompts because of -r, not actually checking for directories

$ rm -r *foo* dbar
rm: remove 6 arguments recursively? y    # files and directories all gone, one prompt.

If I ever want the behaviour of interactively choosing which of the few files matching a glob to delete, I explicitly use rm -i *foo* or something. -i on by default makes sense for mv and cp where many use-cases don't involve destroying anything, but not for rm.

Other safety techniques:

  • When typing a command involving rm -r, start with ll or ls instead of rm, then go back and edit it to "arm" the command, i.e. take the safety off. (In bash: control-a to go to the start of the line, then alt-d to kill the word forward.) See the sketch after this list.
  • In general, avoid having a dangerous string on the command line in case you fat-finger the enter key at any point. e.g. don't type rm -r ~/some/dir in that order, because you don't want to rm -r ~/some the whole tree. Start with ls -d, especially if using globs, especially if the glob isn't at the end of line where tab-completion can easily show you the expansions. Or just leave out the -rf. (With GNU rm, you can put options anywhere on the command line, including at the end).
  • Don't rely on an rm -i alias - some day you'll be using a shell without aliases (in a recovery shell, or SSHed somewhere, or on a live USB boot). It would suck to hit return on a command, expecting it to ask you which of the files to actually delete, and have that not happen. If you're planning to say n to some files, use -i explicitly. And make a habit of looking / thinking before you press return on an rm in any directory you can't re-generate trivially.
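As a sketch of the first technique (the path here is made up):

$ ls -d ~/tmp/build-cache       # look first; the output confirms it is the directory I meant
$ rm -r ~/tmp/build-cache       # then recall the line and replace ls -d with rm -r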
Peter Cordes
  • 6,466
5

No, yes | rm -r is not safer than rm -rf.

In all of its dimensions.

The main command provided by Unix to erase a file is rm. There are other commands that can also erase a file, like unlink. However, unlink works only on one file at a time and cannot unlink a directory (on Linux).

So, rm is the main tool. And, in the spirit of Unix (do one thing, do it well), rm can remove many files. The rub is in which files. So rm (and I mean both of your command-line examples) is a perfectly safe command to use from the system's point of view.

But it is also a dangerous command to use, as it carries a lot of power.
With great power comes great responsibility.


It is not a good idea to try to constrain such a command on your computer so that it is less eager to erase files, or to use aliases (rm -i) to reduce the danger it carries. Doing so gets you used to having another level of protection, a kind of helper. That might sound like a reasonable thing to do. But think about this: you ssh to another computer, or sit down at a friend's computer to help out, where no such additional barrier exists, and you issue the command sure that nothing important will happen, but it does.

My word of wisdom (my personal point of view) is:

  • get used to such danger, be careful each and every time you use the rm command and you will be better served in the long term.
1

On the practical side: if you create a temporary workdir in a script and want to remove it at the end, use a dummy file to guard against your own coding errors (most of us have been bitten by those):

tmpdir=...
mkdir -p "$tmpdir"
touch "$tmpdir/.my-removable"     # guard file: only a directory we created ourselves contains it
...
...
! [ -e "$tmpdir/.my-removable" ] || rm -rf "$tmpdir"
usretc
  • 629
1

One of the risks is in a script where you might write rm -r $dir/d*.

The obvious risk is that if you typo dir, then $dir will be empty, leading to rm -r /d*.

I make a habit of including the trailing / in the variable, and write rm -r "$dir"d* so that at worst I get rm -r d*, which while not ideal, is likely to do much less damage.

The other safeguard is to avoid separating the rm from the mkdir, so that it's visually obvious if you've typoed it. I would have this near the top of the script:

tempdir=/tmp/foo$$/
mkdir    -p "$tempdir"
trap 'rm -r "$tempdir"' EXIT

rest of script here

(If the script is large, I define an at_exit function that allows me to enqueue as many commands as necessary to run upon script exit.)
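As a rough illustration of that pattern (the function and variable names here are made up, assuming bash):

at_exit_cmds=()

at_exit() {
    at_exit_cmds+=("$*")          # remember a command string for later
}

run_at_exit() {
    local cmd
    for cmd in "${at_exit_cmds[@]}"; do
        eval "$cmd"
    done
}
trap run_at_exit EXIT

at_exit 'rm -r "$tempdir"'        # enqueue the cleanup instead of trapping it directly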

0

As others have mentioned, just because it's dangerous doesn't mean you should avoid it if it's the right tool for the job; rather, it means you should be careful.

Of late I've been using find /path -name whatever -ls then !! -delete, using the shell's history substitution mechanism. This gives the same "preview" as prefacing rm with echo but is easier (and therefore more reliable) to type.
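For instance, a session might look like this (the path and pattern are made up):

$ find ./old-logs -name '*.log' -ls      # preview exactly what would match
$ !! -delete                             # re-runs it as: find ./old-logs -name '*.log' -ls -delete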

I've been bitten a few times by hitting the wrong ctrl-key combination and thereby destroying something unintended, so when it's something as critical as deleting files, I prefer to avoid even that hazard.

I used alias rm='rm -i' for my first year of using Unix (a long time ago). Then one day I managed to remove the wrong file because my fingers had become so accustomed to typing rm ... <enter>y<enter> that I didn't have time to think and refrain from doing so.

I immediately removed the alias and committed to being more careful in future. And while I can't say that I've never deleted the wrong file since, it's been a much rarer occurrence than the once-a-year rate I started with, so on balance I've been better off for removing that alias. The first line of my ~/.bashrc has been unalias rm cp ln mv ever since some sysadmin decided that "for safety" the alias should be added to /etc/bashrc.

Knowing that it's dangerous is what keeps me on my guard, and therefore keeps me safe.

And to answer your question: yes | rm -ri is more dangerous, because it gives the illusion of being safer while actually exposing you to the same risks. Admittedly there are some tiny differences in the risks, because it runs slightly more slowly, giving you more time to interrupt it, and in limited circumstances it will abort and stop; but really, that is completely swamped by the "illusion of safety" effect.

0

No.

I always omit the -rf until last, and have mentally programmed myself to treat typing it as a flag to "now think very hard". Also, consider what might happen were one to accidentally press CR prematurely.

type

rm ./foo -rf

think "remove foo, am I absolutely sure there's nothing in there that I need, am I absolutely certain where I am cd'ed to, yes, -rf, CR."

I often chicken out and do find ./foo first to see what's going to be nuked, and of course one can then use find again with -delete.

nigel222
  • 317