423

I have a directory containing a large number of files. I want to delete all files except for file.txt. How do I do this?

There are too many files to remove the unwanted ones individually and their names are too diverse to use * to remove them all except this one file.

Someone suggested using

rm !(file.txt)

But it doesn't work. It returns:

Badly placed ()'s 

My OS is Scientific Linux 6.

Any ideas?

Braiam
Kantura

9 Answers

471

POSIXly:

find . ! -name 'file.txt' -type f -exec rm -f {} +

will remove all regular files (recursively, including hidden ones) except any files called file.txt. To remove directories, change -type f to -type d and add the -r option to rm.
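
For illustration, a sketch of that directory variant (dir_to_keep is a hypothetical name; note, as Barmar points out in the comments, that -r also deletes file.txt if it sits inside a removed directory):

# keep only the directory named 'dir_to_keep'; '.' is excluded so rm is not asked to remove the current directory
find . ! -name 'dir_to_keep' ! -name '.' -type d -exec rm -rf -- {} +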

In bash, to use rm -- !(file.txt), you must enable extglob:

$ shopt -s extglob 
$ rm -- !(file.txt)

(or by starting bash with bash -O extglob)

Note that extglob-style patterns only work in bash and the Korn shell family. Also, using rm -- !(file.txt) can cause an Argument list too long error if there are many matching files.

In zsh, you can use ^ to negate a pattern once extendedglob is enabled:

$ setopt extendedglob
$ rm -- ^file.txt

or, with the ksh_glob and no_bare_glob_qual options enabled, zsh also accepts the same !(file.txt) syntax as ksh and bash.
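
A minimal sketch of that zsh setup, using the two options named above:

$ setopt ksh_glob no_bare_glob_qual
$ rm -- !(file.txt)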

Timmmm
cuonglm
  • 15
Specifying the directory is good practice (full path in this case? Or maybe add a warning here that this command deletes every file starting from the current working directory?). I also usually write any example with rm as echo rm instead, and ask people to only take out the echo when they are really sure it will do what they want. Other than that, +1 for the thorough answer – Olivier Dulac Sep 05 '14 at 09:00
  • 15
    I'd suggest -delete instead of -exec, shorter and easier to remember – Izkata Sep 05 '14 at 15:41
  • 11
    @Izkata: -delete is not defined by POSIX. – cuonglm Sep 05 '14 at 15:42
  • 6
How to exclude a list of files? – B Faley Sep 06 '14 at 04:48
  • 5
    @Meysam - see my answer for a solution that will handle a list of files. Else with Gnouc's find solution you can do ! \( -name one_file -o -name two_file \) and so on. – mikeserv Sep 06 '14 at 15:35
  • 3
    If you use rm -r, you'll remove the directory that contains file.txt, which will effectively remove the file that you wanted to keep. – Barmar Sep 10 '14 at 19:16
  • 2
rm !(file.txt) works fine if there are no directories at the same level as the files; rm -rf !(file.txt) will solve this problem – Belal mazlom Mar 16 '16 at 12:13
  • 2
    To remove directories with find, change rm to rmdir, not rm -r. rmdir won't delete a directory containing file(s), while rm -r will (thus deleting file.txt). And if you are using GNU rmdir, you can also use its --ignore-fail-on-non-empty option. – cas May 28 '16 at 14:11
  • 1
@cas: It's about removing directories, not a directory containing file.txt. For example, to remove all directories except the directory dir, the command is find . ! -name 'dir' -type d -exec rm -rf {} +. And also, the search is limited to 1 level only. – cuonglm May 28 '16 at 14:20
  • 3
@cuonglm, it would be better if someone explained why + is used at the end, because I've often seen ; – RaGa__M Nov 09 '16 at 12:29
  • 3
    I have to use find . ! -name 'basejit-cli.sh' ! -name '.' ! -name '..' -exec rm -rf {} + to avoid trying to remove . and .. on macOS. – haxpor Jan 29 '17 at 05:42
  • 1
@haxpor: rm won't remove . and .., so you can ignore those errors. If you want to exclude them explicitly, do as you did, but I'm not sure why .. is included there; at least on my El Capitan, find . won't list .. – cuonglm Jan 29 '17 at 06:29
  • 1
    To not include sub-directories, add -maxdepth 1. – LoMaPh May 01 '19 at 18:20
  • 1
    @LoMaPh -maxdepth is not POSIX. – cuonglm May 08 '19 at 08:25
  • 1
    How do I remove files recursively, while ignoring a specified directory, such as ./documents? – leetbacoon Dec 14 '19 at 01:50
  • 1
@leetbacoon See my answer above, using -type d and rm -r. – cuonglm Dec 14 '19 at 15:51
  • 1
@Belalmazlom How can I exclude multiple files in the case where I have files and folders in the same current directory? Is something like rm -rf !(file1.txt|file2.txt|..) possible? – Vicky Dev Oct 27 '20 at 15:55
  • 1
    This doesn't work with symlinks. It also deletes the files pointed to by symlinks. – enthusiasticgeek Dec 13 '20 at 14:52
  • 1
    What does -- do after rm? – BadHorsie Jul 13 '21 at 19:17
  • 1
Could this solution be applied to folders as well? – alper Dec 28 '21 at 22:38
  • 1
@alper Yes, quote from my answer: "To remove directories, change -type f to -type d and add the -r option to rm." – cuonglm Jan 03 '22 at 16:39
  • 1
Be careful with the copy and paste! – Rémy Hosseinkhan Boucher Jul 07 '22 at 13:26
  • I used rm -r !(file.txt). No need for rm -- -r !(file.txt) because the -- is used to indicate the end of command options so that rm can remove a file whose name starts with a -. For example -foo in this command rm -- -foo. Besides rm -- -r !(file.txt) wouldn't work as intended because -r will be treated as a file name, since it follows --. – bit May 03 '23 at 08:48
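
The comments above ask how to exclude more than one file; a sketch of the grouped-negation form mikeserv mentions (the second file name is hypothetical):

find . -type f ! \( -name 'file.txt' -o -name 'other.txt' \) -exec rm -f {} +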
193

Another take in a different direction (only if there are no spaces in file names)

ls | grep -xv "file.txt" | xargs rm

or (works even if there are spaces in file names)

ls | grep -xv "file.txt" | parallel rm

from man grep:

 -v, --invert-match
          Invert the sense of matching, to select non-matching lines.  (-v is specified by POSIX)

 -x, --line-regexp
          Select only those matches that exactly match the whole line.  For a regular
          expression pattern, this is like parenthesizing the pattern and then
          surrounding it with ^ and $.

Without the -x we'd keep my-file.txt as well.
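
A more robust sketch of the same idea, added here as an aside (it assumes GNU find, grep and xargs): NUL-delimited names survive spaces, newlines and other odd characters that break the plain ls pipeline.

# delete every regular file in the current directory except ./file.txt
find . -maxdepth 1 -type f -print0 | grep -zvx '\./file\.txt' | xargs -0 rm -f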

Sebastian
  • 6
    It will not work if file names have a space ... – Matteo Feb 08 '16 at 16:07
  • 2
    Ciao @Matteo, it works for files with spaces too, but you need to surround the grep-pattern by quotes, e.g. ls | grep -v "a file with spaces.bin" | xargs rm. This is normal grep syntax. – Sebastian Feb 08 '16 at 19:14
  • 3
    @Sebastian The problem is not the grep but rm. rm will get a list of space separated arguments. Try touch 'a b'; touch 'c d'; ls | grep -v 'a b' | xargs rm: you will get rm: c: No such file or directory and rm: d: No such file or directory – Matteo Feb 09 '16 at 07:57
  • 1
Yes, this is a common problem. See the options -print0 in find and -0 in xargs. Here is a workaround, but it is slightly inconvenient: find . -maxdepth 1 -type f | grep -v 'a b' | tr '\n' '\0' | xargs -0 rm. By the way, gnu parallel handles this well (I almost always use it as a substitute for xargs): ls | grep -v 'a b' | parallel rm – Sebastian Feb 11 '16 at 07:10
  • 3
    It's not only spaces, it's all blanks and newlines, but also quoting characters (single, double quotes and backslash) and filenames starting with -. – Stéphane Chazelas Aug 30 '16 at 09:52
  • 1
It's a good solution where shopt or find does not work, as here: http://unix.stackexchange.com/a/153863/35004. Prefer shopt / find with xargs to keep things resource-efficient. – user2067125 Feb 17 '17 at 02:19
  • 1
    Worked for me in OSX with bash and zsh. TY! – theUtherSide Jun 01 '18 at 06:05
  • 21
This worked for me: ls -Q | grep -v file.txt | xargs rm -fr (the -Q switch means "enclose entry names in double quotes"). – kuzyn Aug 24 '19 at 13:11
  • 2
    @kuzyn: Wow, you learn something new every day. Nice – Jo Mo Nov 01 '19 at 10:07
  • 1
Awesome answer and usage of the -v flag. See the link for sparing multiple files instead of just one, using grep with the -v and -E flags: https://stackoverflow.com/a/5464614/708807 – ipatch Nov 27 '19 at 19:12
  • 2
The dot in the grep search pattern matches any character unless it is escaped, as in 'file\.txt', which matches a literal dot only. For example, the unescaped pattern will also match a file named filestxt. – thanasisp Jun 07 '22 at 21:22
  • 1
    xargs -d '\n' also works for the spaces in filenames – srs Aug 24 '22 at 08:15
  • With coreutils 8.30 on Kubuntu 20.04, ls automatically quotes filenames that have spaces in them. – MichaelK Jan 23 '23 at 21:44
51

Maintain a copy, delete everything, restore copy:

{   rm -rf *
    tar -x
} <<TAR
$(tar -c $one_file)
TAR

In one line:

{ rm -rf *; tar -x; } <<< $(tar -c $one_file)

But that requires a shell that supports here-strings.
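
A variation on the same idea, added as a hedged sketch (not part of the original answer): stash the archive in a temporary file outside the directory first, so the copy survives even if the shell is interrupted between the rm and the tar -x. The file name is an assumption.

one_file=file.txt             # hypothetical; the file you want to keep
tmp=$(mktemp)                 # temp file lives outside the current directory
tar -cf "$tmp" "$one_file" && rm -rf -- * && tar -xf "$tmp"
rm -f "$tmp"                  # clean up the stashed copy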

mikeserv
  • 27
    This is somewhat mind-blowing. – vschum Sep 05 '14 at 02:36
  • 3
    But if this gets interrupted halfway through, or if anything else goes wrong, the file is gone. – kasperd Sep 05 '14 at 08:04
  • 2
    @kasperd - no. Only if the parent shell dies between the time it runs rm and tar -x. rm can fail as much as it wants. – mikeserv Sep 05 '14 at 08:05
  • 3
Isn't it more efficient to move it to another directory and move it back? We don't need to deal with the content of the file, only with its path. – leonbloy Sep 06 '14 at 15:51
  • 2
@leonbloy - probably, but if you're crossing filesystems, it makes no difference – mikeserv Sep 06 '14 at 17:19
  • 2
Just saw your one-line version @mikeserv; wow, it's a bit above my head, but I tried it and it certainly works. – Kantura Sep 09 '14 at 04:40
  • 4
@Derek - it's really not that crazy. POSIX requires that a shell redirect its input to the command you specify when it encounters a here-document. The command substitution has to complete before anything else happens. Most shells use temp files for here-docs - some use pipes. Either way tar -c completes and the shell stashes its output before rm runs. Because rm ignores stdin, the input is left waiting for tar -x when rm finishes - and the shell can divest itself of the copy it saved of your file(s). Here-docs can be used like aimed pipes a lot of the time. – mikeserv Sep 09 '14 at 05:27
  • 2
    That would only work with zsh as the output of tar will typically contain NUL bytes. Even then, you'd have trouble with files whose content end in empty lines and align with tar block size. – Stéphane Chazelas Aug 30 '16 at 09:59
  • 6
    What if the file is 64GB in size? Or is there no actual copying involved? – Zimano Jul 13 '18 at 19:04
  • 2
@Zimano at best guess, it would read the file(s) into RAM and write them back as new file(s). I would expect this is harder on the hardware than not touching the file at all or just "moving" the file on a modern filesystem – ThorSummoner Jun 19 '19 at 00:02
  • 1
@mikeserv I was hoping to do something more complex near the rm, for example { git ls-files -z | xargs -0 rm -rf ; tar -x; } <<< $(tar -c $one_file) but I think that causes the tar to not redirect properly, thoughts? – ThorSummoner Jun 19 '19 at 00:03
  • 2
@thorsummoner yeah. I've heard of thoughts. I think they're something like this, but I've been wrong before! – mikeserv Jul 26 '19 at 06:03
  • This answer needs more English. – Amit Naidu Mar 12 '24 at 03:11
42

You're all overthinking this.

cd ..
mv fulldir/file.txt /tmp/
rm -rf fulldir
mkdir fulldir
mv /tmp/file.txt fulldir/

Done.

EDIT: Actually, easier:

cd ..
ln fulldir/file.txt ./
rm -rf fulldir
mkdir -p fulldir
mv file.txt fulldir/
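
For readers who want this as something reusable, a hedged sketch that wraps the same steps in a shell function (keep_only is a hypothetical name, and it inherits the caveats raised in the comments below about directory permissions, mount points and space on /tmp):

# keep_only DIR FILE -- delete everything in DIR except FILE (sketch only)
keep_only() {
    mv "$1/$2" /tmp/ &&
    rm -rf "$1" &&
    mkdir "$1" &&
    mv "/tmp/$2" "$1/"
}
# usage: keep_only fulldir file.txt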
  • 7
That's the same thing my answer does, except it doesn't lose any permissions on the dir. – mikeserv Sep 05 '14 at 12:50
  • 9
    If it's a large file on a separate filesystem from /tmp and you're trying to remove everything from the root of the filesystem down, then moving it somewhere safe may not be an option. – Johnny Sep 06 '14 at 05:24
  • 4
    This is definitely the simplest answer. – Ben Liyanage Oct 18 '16 at 23:10
  • 1
    Both fail if fulldir is a mountpoint. Both result in incorrect settings if fulldir isn't "your" directory with standard permissions – Chris Davies Jul 04 '20 at 10:50
  • This seems like the best answer if you have minimal shell experience. – Devin Rhode Jun 20 '21 at 22:03
  • Would it be possible to wrap this as a simple function that accepts 1 parameter for the files/directories you want to preserve? – Devin Rhode Jun 20 '21 at 22:04
  • 1
Also, I wonder if someone out there knows how to preserve directory permissions... feels like there must be a way. Either way, mv-ing files outside the directory and then obliterating the contents in some way seems like the best, safest, most obvious approach. Maybe everyone on this StackExchange is really good with shell scripts. – Devin Rhode Jun 20 '21 at 22:08
26

On my Scientific Linux 6 OS this works:

shopt -s extglob
rm !(file.txt)

I also have 32-bit Debian installed on a virtual machine. The above does not work there, but the following does:

find . -type f ! -name 'file.txt' -delete
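
As a small illustration added here (not part of the original answer), you can preview what that command would delete by running the same find without -delete:

find . -type f ! -name 'file.txt'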
Kantura
  • 1
    You mean the find solution I gave didn't working? – cuonglm Sep 05 '14 at 02:30
  • "didn't work" or " isn't working". Yes your find solution also works. Thanks. – Kantura Sep 05 '14 at 02:43
  • 1
Can you give more details? How did it "not work"? Did it not remove the other files, did it remove file.txt, or was it something else? – cuonglm Sep 05 '14 at 02:45
  • I meant that on the Debian OS "$rm !(file.txt)" returns "Badly placed ()'s". So I tried "$shopt -s extglob", but it returned: "shopt: Command not found". Then I tried the "find" solution. It does work. – Kantura Sep 05 '14 at 02:55
$ is the shell prompt in my answer; you must remove it from your actual command. And what shell do you use in Debian? – cuonglm Sep 05 '14 at 03:07
  • I did not include the $ prompt. I use a tcsh shell in Debian. – Kantura Sep 05 '14 at 03:11
  • 1
    Oh, so of course it doesn't work. shopt is not a tcsh builtin. – cuonglm Sep 05 '14 at 03:15
13

I find that this approach is very simple and it works; it doesn't require enabling any shell options, though ls --hide and xargs -d are GNU extensions.

ls --hide=file.txt | xargs -d '\n' rm
Marco
2-bits
11

Use rm !("file.txt") instead of rm !(file.txt)

terdon
  • 13
    That makes absolutely no difference whatsoever. The issue here was that the OP was 1) not using a shell that supports this format and 2) even in bash, you need to enable it with shopt -s extglob. In any case, quoting a simple filename like that would have made no difference. – terdon Sep 06 '14 at 11:15
  • 1
To keep just the utils folder, do rm -rf !("utils") – James Jithin Feb 01 '18 at 18:27
8

Just to give a different answer, you can use the default behavior of rm, which won't delete folders:

mkdir tmp && mv file.txt tmp  # create tmp dir and move file.txt there
rm *                          # delete all other files (rm skips the tmp directory)
mv tmp/* . && rm -rf tmp      # move the file back and delete tmp dir
Jeff Schaller
Nithin
  • 1
What would this do if ./tmp already exists as a file or, worse, as a directory with things already in it? Using dirr=$(mktemp -d) && mv file.txt "$dirr"; rm *; mv "$dirr"/* . && rm -rf "$dirr" would avoid this issue. – Joe Jan 26 '20 at 13:52
0

In my case I needed to remove all files and folders except for zip files. Inspired by the accepted answer (I gave it a +1, of course), to clean everything except the zip files from a folder I use find . ! -name '*.zip' ! -name '.' ! -name '..' -exec rm -rf {} +. For example (all files/folders are empty since they were created just for the example):

my-computer:/tmp/toclean>ll
total 0
-rw-r--r-- 1 user users 0 Oct  4 10:47 aaa.zip
-rw-r--r-- 1 user users 0 Oct  4 10:47 tata
drwxr-xr-x 1 user users 0 Oct  4 10:47 titi
drwxr-xr-x 1 user users 0 Oct  4 10:47 tutu
-rw-r--r-- 1 user users 0 Oct  4 10:47 yoyo
-rw-r--r-- 1 user users 0 Oct  4 10:47 zzz.zip
my-computer:/tmp/toclean>find . ! -name '*.zip' ! -name '.' ! -name '..'  -exec rm -rf {} +
my-computer:/tmp/toclean>ll
total 0
-rw-r--r-- 1 user users 0 Oct  4 10:47 aaa.zip
-rw-r--r-- 1 user users 0 Oct  4 10:47 zzz.zip

Note that I exclude '.' and '..' just to avoid the rm: refusing to remove '.' or '..' directory: skipping '.' message.

gluttony