
I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one. Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

user123456
ripper234
    Thanks this was handy - I was coming from Linux to Windows, was looking for the equivalent of lsof - LockHunter. – Sonia Hamilton Sep 04 '13 at 02:28
  • What the hell? Unix does not prevent you from deleting open files like Windows does. This is why you can delete your whole system by running rm -rf /... it will happily delete every single file, including /bin/rm. – psusi Oct 10 '14 at 15:35
  • @psusi, that is incorrect. You either have a bad source of information or are just making stuff up. Linux, like Windows, has file and device locking. It's kind of broken, though. http://0pointer.de/blog/projects/locking.html – foobarbecue Jan 10 '15 at 01:05
  • @foobarbecue, normally those are only advisory locks and the man page at least seems to indicate they are only for read/write, not unlink. – psusi Jan 10 '15 at 23:34
  • Solutions on this page don't work for me; I'm still not able to delete the file. In my case, though, I'm bothered by the file's size, so I do this little trick: vim unwanted_file, then simply delete the content inside the file in edit mode. This frees the disk space, but the file is still there. – jack Feb 27 '21 at 13:13
  • Why is nobody suggesting deleting the folder from the host machine? I was getting this error in a Docker container, and the fastest fix was to delete it with sudo from the host machine, but none of the first Google results mentioned that, and there is no answer like this here. I would post an answer, but I can't because I need more reputation. And this is ASimpleMethodThatWorks, at least in cases like mine. No need to install things like lsof - I tried that, and it even gave me an error that it failed to download or something. – Darius.V Jul 13 '23 at 17:00

9 Answers

402

The tool you want is lsof, which stands for list open files.

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path, so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.
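As a minimal sketch of that workflow (the background tail here just simulates a process holding a file open; substitute your actual busy directory for the temporary one):

```shell
# Simulate a process keeping a file open under a directory
dir=$(mktemp -d)
touch "$dir/busy.log"
tail -f "$dir/busy.log" > /dev/null 2>&1 &

sleep 1
lsof +D "$dir"       # full listing: command, PID, user, open file
lsof -t +D "$dir"    # -t prints bare PIDs only, handy for scripting

# Exit or kill the holder(s), then the directory can be removed
lsof -t +D "$dir" | xargs -r kill
sleep 1
rm -r "$dir"
```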

camh
  • What if there were no results? – marines Feb 04 '14 at 15:04
  • @marines: Check if another filesystem is mounted beneath /path. That is one cause of hidden "open files". – camh Feb 05 '14 at 09:16
  • Running lsof directly against the path does not work. So basically you need to go into the path location and then run lsof busy_file, then kill all the processes. – Nivir Jul 04 '16 at 11:56
  • lsof seems to do nothing for me: lsof storage/logs/laravel.log returned nothing, and so did lsof +D storage/logs/. umount responded with not mounted. – Ryan May 25 '18 at 01:01
  • Just to elaborate on @camh's answer: use mount | grep <path>. That shows any /dev/<abc> that might be mounted on the <path>. Use sudo umount -lf /dev/<abc> and then try to remove <path>. Works for me. Thanks @camh – Coder Jun 27 '18 at 23:33
  • @marines use sudo? – frx08 Mar 02 '20 at 19:54
  • umount worked for me – JL_SO Nov 02 '20 at 17:52
  • What worked for me on our remote university server was lsof +D /FULL/PATH followed by kill -9 PID. You can then check by simply rerunning the first command (lsof +D /FULL/PATH) and you should see [1]+ Killed – amc May 25 '22 at 17:10
  • Just encountered another case: loop device even not mounted. Seen in GUI of Gnome disk utility and via losetup. Detaching device solved the issue. (BTW iso file on which loop was "based" was not found by lsof +D /folder_where_iso_was). – Martian2020 Oct 17 '22 at 14:59
  • @camh this solved my issue, I was running inside docker and the host mounted the same directory. – user323774 Oct 03 '23 at 17:12
  • This answer would be much improved if the author explained it. For example, say I get a "device or resource busy" message for /mnt/something... what is it that goes in the "path" of lsof +D /path ?? – Seamus Feb 04 '24 at 23:04
208

Sometimes it's the result of a mounting issue, so I'd unmount the filesystem or directory you're trying to remove:

umount /path
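A quick way to check for this before unmounting (the path is a placeholder; findmnt -R lists anything mounted at or below it):

```shell
findmnt -R /path      # is anything mounted at or below the directory?
mount | grep /path    # equivalent check with plain mount output

umount /path          # then unmount it
umount -l /path       # lazy unmount, if the normal one reports "busy"
```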

kip2
32

I had this same issue, built a one-liner starting with @camh recommendation:

lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs -r kill -9
  • awk grabs the PIDs.
  • tail gets rid of the pesky first entry: "PID".
  • xargs executes kill -9 on the PIDs. The -r / --no-run-if-empty flag prevents a kill failure in case lsof did not return any PIDs.
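For what it's worth, lsof's -t flag prints bare PIDs, which makes the awk and tail steps unnecessary; this sketch is equivalent (and the usual kill -9 caveats apply):

```shell
# -t: terse output, one PID per line; -r: skip kill if there are no PIDs
lsof -t +D ./ | xargs -r kill -9
```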
Noam Manos
  • @ChoyltonB.Higginbottom as you asked for a safer way to prevent kill <no PID> failure (if lsof returns nothing) - Use xargs with -r / --no-run-if-empty. For non-GNU xargs, see this alternative: https://stackoverflow.com/a/19038748 – Noam Manos Jun 10 '20 at 07:29
  • You can pipe the tail -n +2 output through sort -u before killing the job IDs – Guillermo Luque y Guzman Saenz Sep 16 '21 at 10:58
  • kill -9 is a favorite for use but does have serious implications. This signal is "non-catchable, non-ignorable" to the process. Thus, the process may terminate without saving critical state data. Perhaps a simple kill first, and if that doesn't work, then the -9? Finally, bear in mind that if the process is blocked on I/O, kill -9 isn't going to work. That's not an oversight in this suggestion, just something to keep in mind. – Andrew Falanga Jun 22 '22 at 21:26
20

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz.

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.
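In commands, the workaround looks like this (the directory name is a placeholder; the .nfsXXXX file disappears once whichever client process holds it exits):

```shell
mv libfoo-build libfoo-build.stuck   # rename the parent out of the way
# ...come back in a day or two, after the NFS client drops the .nfs* file...
rm -r libfoo-build.stuck
```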

  • I also have the same problem with .nfsxxx files dropped in seemingly random places. However, I am not sure how this suggestion can make sense - obviously renaming the parent directory does not work, because its contents are locked; I wouldn't get the error in the first place otherwise. I tried it, and simply nothing happens - the renaming refuses to happen. Do you want to elaborate/have any other suggestion? – alelom Aug 31 '22 at 07:27
  • Renaming the parent directory always worked for me. No clue why. This is assuming your files are down a couple of directory levels and not at the volume root, of course. Sorry I don't have a better answer than "it just works for me". – user5359531 Aug 31 '22 at 15:59
19

Here is the solution:

  1. Go into the directory and type ls -a
  2. You will find a .xyz file
  3. vi .xyz and look at what the file contains
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_id - where job_id is the value in the 2nd column of the row whose 8th column matches the file's content
  7. Now try to delete the folder or file.
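Condensed into commands (the directory, file name, and PID are placeholders; the PID in step 6 comes from the second column of the ps -ef output):

```shell
cd /path/to/busy_folder
ls -a                    # step 1-2: reveals the hidden file, e.g. .xyz
cat .xyz                 # step 3: inspect its contents
ps -ef | grep "$USER"    # steps 4-5: 2nd column is the PID, 8th the command
kill -9 12345            # step 6: placeholder PID from the 2nd column
cd .. && rm -r busy_folder
```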
phemmer
user73011
18

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
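For example (the path is a placeholder; -v gives verbose output, -m treats the argument as a mounted filesystem, and -k sends SIGKILL to everything found, so use it with care):

```shell
fuser -vm /path      # list processes using any file on the mount
fuser -km /path      # same selection, but kill them (SIGKILL by default)
fuser /path/file     # or check a single file; prints the holding PIDs
```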

BillThor
13

Riffing off Prabhat's question above: I had this issue on macOS High Sierra when I stranded an encfs process. Rebooting would have solved it, but this

ps -ef | grep name-of-busy-dir

Showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it.

111---
bil
9

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser, were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy"!

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Because it was created by automated testing, it got mounted many times, which is why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it enough times, it finally became a regular folder again and I could delete it.
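The repeated unmounting can be automated with a loop that stops once umount starts failing (a sketch; the path is a placeholder, and unmounting usually needs root):

```shell
# Peel off stacked mounts one umount at a time
while sudo umount /path 2>/dev/null; do
    echo "unmounted one layer"
done
rm -r /path    # now an ordinary directory again
```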

tshepang
8

If you have access to the server, try deleting that dir directly on the server.

Or, unmount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.

I too had this problem, where:

lsof +D path : gives no output

ps -ef : gives no relevant information