84

By mistake I ran rm * in the directory where I had created many C program files. I had been working on them since morning, and I can't get back the time I spent writing them. Please tell me how to recover them. They aren't in the recycle bin either!

Braiam
  • 35,991
Ravi
  • 3,823
  • 24
    Linux/Unix doesn't forgive :) – jirib Nov 15 '13 at 09:07
  • 8
    Checkout them from the version control system you use. You use one, right? – choroba Nov 15 '13 at 09:08
  • @jiri Too sad to hear that (though I knew this) – Ravi Nov 15 '13 at 09:08
  • There are some tools; I hope someone with more knowledge of them can help you. To prevent this happening again, may I urge you to find out about revision control, e.g. Subversion or Mercurial. Also consider writing some clean rules (in make or shell aliases) to remove *.o etc. – ctrl-alt-delor Nov 15 '13 at 09:11
  • @choroba No, I am not using any version control. (As I am completely new to version control, does Ubuntu have it by default?) I am currently learning version control, and I had written these C programs for that very purpose. – Ravi Nov 15 '13 at 09:12
  • 3
    There are SOME ways to recover files/data, but most of them are very hard to carry out. Be sure you don't write anything more to the disk or you are doomed completely. – jirib Nov 15 '13 at 09:12
  • 5
    When I did this, when I was young, it was not as bad as I thought. This is how I discovered that most of the time taken to write is in thinking. The second time around there will be less thinking, and you may even improve it. – ctrl-alt-delor Nov 15 '13 at 09:13
  • 2
    Unmount the file system ASAP to avoid the blocks previously allocated for the deleted files from being overwritten. Assuming the underlying file system is either ext3 or ext4, you might have some luck recovering files using extundelete. – Thomas Nyman Nov 15 '13 at 09:14
  • @richard I am learning VERSION CONTROL WITH SCCS & RCS. Oh, I had to make this mistake at just this time! Also, I typed rm * but I meant to type rm *.o. Yes, you are right that much of the time is spent thinking the first time around, but not the second. – Ravi Nov 15 '13 at 09:16
  • Yes, Ubuntu has revision control (note: version control is something different, though most people mix up the names version control, revision control and configuration management; they are all different). Just install it from the package manager. – ctrl-alt-delor Nov 15 '13 at 09:18
  • @ThomasNyman Please explain more. As I understand it, I would have to unmount my home file system, which I don't think is possible unless I switch to single-user mode; in other words, I would have to log out of my user ID. So if I log out or shut down my system, can the data still be recovered the way you describe? – Ravi Nov 15 '13 at 09:22
  • 1
    SCCS and RCS are old, real old. Ubuntu has RCS, and cssc, a clone of SCCS. It also has CVS, an improved RCS; Subversion (SVN), better than CVS; and Mercurial (hg), usually better than Subversion. RCS and cssc (sccs) are local only; CVS and Subversion (svn) can have remote repositories. Mercurial (hg) can have local and remote repositories. There are also many others to choose from. I would go with Mercurial. – ctrl-alt-delor Nov 15 '13 at 09:25
  • 1
    @richard This is the wrong place to discuss such things.

    Ravi: Yes, log out, switch to single user mode or, even better, shut down the system. Use a live CD and run extundelete, but do not write anything more to the same filesystem or it gets even worse and your chances of recovering the files get lower and lower.

    – scai Nov 15 '13 at 10:01
  • @richard Thank you very much for the information in your last comment about the various version control systems. scai, I thank you too for your assistance, but one thing: what richard posted here is very useful, and he commented on exactly what I wanted to know. Had he not posted here, I wouldn't have this great information now. I will install those applications (Mercurial) to work with. I would always wish that there weren't too many restrictions/limitations on one's posts when they are relevant, as in this case. – Ravi Nov 17 '13 at 11:19
  • 1
    @choroba Even when using version control, losing a day's work is still quite a big risk unless you commit your changes every hour. – gerrit Jun 14 '16 at 10:25
  • Also, check out ntfsundelete (from ntfs-3g) for NTFS file systems – Artfaith Mar 16 '19 at 23:50
  • This is the first time I can say "thank god I have a backup." – Sridhar Sarnobat Feb 03 '23 at 00:08

2 Answers

81

If a running program still has the deleted file open, you can recover the file through the open file descriptor in /proc/[pid]/fd/[num]. To determine if this is the case, you can attempt the following:

$ lsof | grep "/path/to/file"

If the above gives output of the form:

progname 5383 user 22r REG 8,1 16791251 265368 /path/to/file               

take note of the PID in the second column, and the file descriptor number in the fourth column. Using this information you can recover the file by issuing the command:

$ cp /proc/5383/fd/22 /path/to/restored/file
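The trick above can be demonstrated end to end on any Linux machine. The file name, contents, and the sleep process below are illustrations only, not part of the original scenario:

```shell
#!/bin/sh
# Demonstration: a process that still holds a deleted file open
# keeps its data reachable via /proc/<pid>/fd/<fd>.
printf 'int main(void){return 0;}\n' > prog.c

# Keep the file open as stdin (fd 0) of a long-running process.
sleep 60 < prog.c &
pid=$!

rm prog.c                              # the directory entry is gone...
cp "/proc/$pid/fd/0" prog_restored.c   # ...but the data is not

cat prog_restored.c
kill "$pid"
```

Here the file descriptor number is 0 because the file was attached to the process's standard input; in a real case you take whichever number lsof reports in its FD column.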

If you're not able to find the file with lsof, you should immediately remount the file system which housed the file read-only:

$ mount -o remount,ro /dev/[partition]

or unmount the file system altogether:

$ umount /dev/[partition]

The reason for this is that as soon as the file has been unlinked, and there are no remaining hard links to the file in question, the underlying file system may free the blocks previously allocated for the deleted file, at which point the blocks may be allocated to another file and their contents overwritten. Ceasing any further writes to the file system is therefore time critical if any recovery is to be possible. If the file system is the root file system, or cannot be made read-only or unmounted for some other reason, it might be necessary to shut down the system (if possible) and continue the recovery from a live environment where you can leave the target file system read-only.

After writes to the file system have been prevented, there is no immediate hurry to attempt the actual recovery. To play it safe, you might want to make a backup of the file system to perform the actual recovery on:

$ dd bs=4M if=/dev/[partition] of=/path/to/backup

The next steps now depend on the file system type. Assuming a typical Ubuntu installation, you most likely have an ext3 or ext4 file system. In this case, you may attempt recovery using extundelete. Recovery may be attempted safely on either the backup or the raw device, as long as it is not mounted (or is mounted read-only). DO NOT ATTEMPT RECOVERY FROM A LIVE FILE SYSTEM. This will most likely bring the file system to an inconsistent state.

extundelete will attempt to restore any files it finds to a subdirectory of the current directory named RECOVERED_FILES. Typical usage to restore all deleted files from a backup would be:

With older versions:

$ extundelete /path/to/backup --restore-all 

With newer versions (e.g. 0.2.4), don't mount the device you're trying to recover from (thanks to Ryan Lue):

$ extundelete /dev/<device-file> --restore-all

Instead of --restore-all, you can try options such as --restore-file <path> or --restore-directory <path>.

AdminBee
  • 22,803
Thomas Nyman
  • 30,502
  • Note for anyone using a recent version of extundelete: the CLI syntax has changed. Don't mount the device you're trying to recover from, and instead use extundelete --restore-file <path> /dev/<device-file>. – Ryan Lue Nov 09 '18 at 03:21
  • Use the "rm-trash" utility, which puts removed files in the trash for later retrieval and supports all options of the "rm" command. – Natesh bhat Nov 20 '18 at 14:20
  • Don't use "rm" if you wish to restore the files in the future. You can use the "rm-trash" utility from apt-get: https://github.com/nateshmbhat/rm-trash – Natesh bhat Nov 20 '18 at 14:30
  • Could you explain where the number 22 in cp /proc/5383/fd/22 /path/to/restored/file comes from? I am stuck on that part; it gives me 5 different numbers – Adi Prasetyo Aug 12 '19 at 14:50
  • @AdiPrasetyo: As the answer states the fourth column in the lsof output is the number of the file descriptor the process has opened to the file. A process may have multiple open file descriptors to the same file. – Thomas Nyman Aug 13 '19 at 14:46
  • Many thanks. Worked for me when I foolishly removed a file on a remote server via ssh. I still had an error in the /var/cache/apt/archives/lock zone that I removed via this procedure: https://itsfoss.com/fix-ubuntu-install-error/#comments – Trunk Sep 19 '22 at 17:38
15

Yes, I was able to recover my files. I haven't checked yet whether all of them were recovered, but the few I have checked were. Since a great many files are recovered by that tool/command, I need to grep for some text pattern in those files to see which ones are mine. The files are recovered under different names (perhaps generated by the system). I got the solution from a different forum, and the command is photorec:

sudo photorec

This will open a text-based window. I followed the instructions and yes, it's superb.
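photorec writes its recoveries into directories named recup_dir.1, recup_dir.2, and so on, with generated file names. One way to pick out your own sources afterwards is to grep for a string you know they contain; the directory and the "int main" pattern below are a simulated example of that search, not the real recovered data:

```shell
# Simulated photorec output (recup_dir.N is photorec's real layout):
mkdir -p recup_dir.1
printf 'int main(void){return 0;}\n' > recup_dir.1/f0000001.c
printf 'not my file\n'               > recup_dir.1/f0000002.txt

# List recovered .c files containing a snippet we know we wrote:
grep -rl --include='*.c' 'int main' recup_dir.*/
```

Use a pattern distinctive to your own code (a variable name, a comment) to avoid matching other people's recovered C files.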

Ravi
  • 3,823
  • photorec is a bit more of a brute-force approach. It's worth starting with unmounting and turning to lsof and/or extundelete before turning to photorec. Running photorec on a large capacity drive can take many hours. – thomp45793 May 24 '19 at 21:39