About the -delete option above: I'm using it to remove a large number (estimated 1M+) of files in a temp folder that I created and then forgot to clean up nightly. I accidentally filled the disk/partition, and nothing else could remove them but the find command. It is slow; at first I was using:
find . -ls -exec rm {} \;
But that was taking an EXTREME amount of time. It only started removing files after about 15 minutes, and my guess is that it was deleting fewer than 10 per second after that. So I tried:
find . -delete
instead, and I'm letting it run right now. It appears to be running faster, though it's EXTREMELY taxing on the CPU, which the other command was not. It's been running for about an hour now; I'm getting space back on the drive and the partition is gradually "slimming down", but it's still taking a very long time. I seriously doubt it's running 1,000 times faster than the other. As in all things, I just wanted to point out the tradeoff: CPU vs. time. If you have the CPU bandwidth to spare (we do), then run the latter. It's got my CPU running (uptime reports):
10:59:17 up 539 days, 21:21, 3 users, load average: 22.98, 24.10, 22.87
And I've seen the load average go over 30.00, which is not good for a busy system; but for ours, which is normally lightly loaded, it's OK for a couple of hours. I've checked most other things on the system and they're still responsive, so we are OK for now. (Two sketches below show a batched variant of the first command and a way to run the delete at idle priority.)
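For reference, a large part of the slowness of the first command was probably the -ls flag (printing every entry) plus spawning one rm process per file. A minimal sketch of a faster variant, assuming GNU find and that everything under the current directory really should go, is to batch arguments with + instead of \; (this is the batching behavior the 200_success comment below also describes):

# Same effect as -exec rm {} \; but passes many paths to each rm,
# so only a handful of rm processes are spawned instead of millions.
# -type f limits it to regular files; empty directories are left behind.
find . -type f -exec rm -f {} +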
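If the load average is a concern on a busier box, one hedged option, assuming Linux with util-linux's ionice and an I/O scheduler that honors the idle class, is to run the delete at low priority so other work wins the CPU and the disk:

# nice lowers CPU priority; ionice -c 3 puts the I/O in the idle class,
# so the deletion only proceeds when the system is otherwise quiet.
nice -n 19 ionice -c 3 find . -delete

The delete will take longer this way, but the rest of the system stays responsive.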
Comments:

rm -rf * in the folder probably fails because of too many arguments; but what about rm -rf folder/ if you want to remove the entire directory anyway? – sr_ Apr 26 '12 at 08:01

rm -rf? – jw013 Apr 26 '12 at 11:37

… fsck on it to reclaim the unused disk blocks, but that approach seems risky and may not be any faster. In addition, the file system check might involve recursively traversing the file system tree anyway. – jw013 Apr 26 '12 at 13:27

… ccache file tree so huge, and rm was taking so long (and making the entire system sluggish), it was considerably faster to copy all other files off the filesystem, format, and copy them back. Ever since then I give such massive small-file trees their own dedicated filesystem, so you can mkfs directly instead of rm. – frostschutz Jun 15 '13 at 11:43

… echo "$(getconf ARG_MAX)/4-1" | bc (mine comes to 524287 arguments, which I've tested and found to be correct). – evilsoup Jun 27 '13 at 23:01 (see the sketch below)

… find would fail due to running out of memory, since it executes rm immediately for each matching file, rather than building up a list. (Even if your command ended with + rather than \;, it would run rm in reasonably sized batches.) You would have to have a ridiculously deep directory structure to exhaust memory; the breadth shouldn't matter much. – 200_success Aug 31 '13 at 06:07

rsync -a --delete and find ... -type f -delete run at the same speed for me on an old RHEL 5.10 system for that reason. – RonJohn Mar 03 '18 at 19:13 (see the sketch below)

mv is always faster than anything else. Just mv folder_to_be_deleted /tmp/trash then reboot. Files in the /tmp directory will be deleted upon your next reboot. – JB Juliano Nov 29 '22 at 14:35
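Expanding on evilsoup's ARG_MAX comment above: you can check the kernel limit yourself, and sidestep it entirely by streaming names to xargs, which builds maximum-size argument lists automatically. A minimal sketch (folder/ is a placeholder path):

# Show the kernel's limit on the total byte size of an exec()'s argument list
getconf ARG_MAX

# Stream NUL-terminated names to xargs, which batches them for rm,
# so "Argument list too long" can never happen
find folder/ -type f -print0 | xargs -0 rm -f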
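And on RonJohn's rsync comparison: the usual form of that trick, which his comment doesn't spell out, is to sync an empty directory over the full one, deleting everything inside it. A hedged sketch (empty_dir and folder are placeholders):

# rsync deletes everything in folder/ that isn't in empty_dir/ (i.e. everything);
# folder/ itself is left behind, now empty
mkdir empty_dir
rsync -a --delete empty_dir/ folder/
rmdir empty_dir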