
I'm not sure how I did this. I had to wipe my dedicated server and start again, so I backed up everything to a remote VPS first. During this process, I compressed a folder full of other folders as foldername.gz instead of using tar, so when I extract it now it extracts as a single file and not a folder full of folders. I tried renaming it to foldername.tar.gz, but extracting that gives me:

tar: This does not look like a tar archive
tar: Skipping to next header
tar: Exiting with failure status due to previous errors

When I use file foldername.gz it gives me:

foldername.gz: gzip compressed data, was "2a863233-fac4-4611-8bbd-76416e58e5d4.dat", last modified: Thu Dec 9 10:36:04 2021, from Unix, original size modulo 2^32 2629632

Edit: this is what file -z foldername.gz gives:

foldername.gz: gzip compressed data, from FAT filesystem (MS-DOS, OS/2, NT) (gzip compressed data, was "2a863233-fac4-4611-8bbd-76416e58e5d4.dat", last modified: Thu Dec 9 10:36:04 2021, from Unix)

GodsDead

1 Answer


I would surmise that unfortunately you've not got an archive, compressed or otherwise, of your files.

So what have you got? It looks like you have a concatenation of all the files in the folder, with each one individually compressed:

gzip -c * > all_files.gz    # DO NOT DO THIS

If you were to uncompress this you would have the equivalent of having done this:

cat * > all_files           # OR THIS

Not very helpful, as there are no archive markers between files. For text files, you can painstakingly go through the resulting all_files file and chop out the relevant parts as individual files. But as there are no file names or other metadata recorded for each one, it's going to be fiddly, especially as you're talking about a multi-gigabyte file.
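As a sketch of that chopping, if the original text files happened to share a recognisable first line, csplit could do the splitting for you. The /^BEGIN marker/ pattern below is entirely made up; you'd need to substitute whatever actually separates your files:

csplit --prefix=recovered_ --elide-empty-files all_files '/^BEGIN marker/' '{*}'    # hypothetical pattern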

In hindsight, what you should have done is something like this, which would have archived the collection of files, compressed it, and written the result to a tarball:

tar czf all_files.tgz *
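Restoring from that tarball later is then a one-liner (the destination directory here is just for illustration):

tar xzf all_files.tgz -C /path/to/restore    # recreates the original folder structure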

There is no easy solution to the recovery of files in your current situation, and possibly no solution at all.

I'd suggest trying some of the undelete tools (binwalk, testdisk, photorec, etc.) to see if they have options that might help. Otherwise it might be a case of trying file at each byte offset to see if it identifies items for you. You can then use the successful offsets to split out the data into the constituent files. But again, you'll not have filenames or other metadata because that was never saved with the original data.
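To illustrate the byte-offset idea, here's a rough sketch. The 4 KiB step and 512-byte sample size are arbitrary assumptions; a finer step will find more boundaries at the cost of a much longer run:

gunzip -c foldername.gz > all_files                    # recover the raw concatenated data
size=$(stat -c %s all_files)
for offset in $(seq 0 4096 "$size"); do                # probe every 4 KiB
    desc=$(dd if=all_files bs=1 skip="$offset" count=512 2>/dev/null | file -b -)
    [ "$desc" != "data" ] && printf '%s: %s\n' "$offset" "$desc"
done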

Good luck

Chris Davies
  • yeah, heuristics is what is left to help here. I'll suggest "binwalk" again, which is not that bad about extracting at least those common file types that have an unmistakeable header and their own length information. – Marcus Müller Jan 06 '22 at 13:01
  • @MarcusMüller thank you. Added – Chris Davies Jan 06 '22 at 13:26
  • Ah nuts. I still don't even know how I was able to compress a folder as though it was a single file. Luckily I have parts of it in an off site backup, just not as new and not containing all files, there's thousands and thousands of files in this. Thank you for putting this to rest for me, so now I can figure out what to do from here! – GodsDead Jan 06 '22 at 14:41