You could write a wrapper script that extracts the files to a temporary location and only moves them to their final destination once they are complete. Something like:
tempdir="incomplete"
mkdir -p "$tempdir"
zipinfo -1 compressed.zip | while IFS= read -r f ; do
    test -f "$f" && continue               # skip anything extracted by a previous attempt
    printf 'extracting %s...' "$f"
    unzip -p compressed.zip "$f" > "$tempdir/$f"   # extract into the staging directory
    printf 'done!\n'
    mv "$tempdir/$f" "$f"                  # move the finished file into place
done
rm -r "$tempdir"
If this is interrupted, you'll still be left with a partial file, but when you run the script again it will skip the files that were already extracted completely (which sit in their correct location) and simply overwrite the partial one (which only exists in the temporary directory). When it finally reaches the end of the archive, it removes the temporary directory entirely.
There are some limits to my example script. It assumes that the zip doesn't contain a directory structure of its own, and it uses the temporary directory incomplete/ inside the working folder. If this is unacceptable, you'd have to (see the sketch below):
- use another value for tempdir, one that is somewhere on the same filesystem (to permit an atomic mv) and is guaranteed not to be used by any other process, and
- add an additional mkdir step, inside the loop, to reconstruct the extracted directory structure.
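For instance, a variant addressing both points might look like this. It is only a sketch, assuming mktemp is available: the ./incomplete.XXXXXX template keeps the staging directory on the same filesystem as the working folder (so mv stays an atomic rename) while giving it a name no other process will use, and the dirname/mkdir calls recreate any directories stored in the archive.

tempdir=$(mktemp -d ./incomplete.XXXXXX)   # unique staging dir on the same filesystem
zipinfo -1 compressed.zip | while IFS= read -r f ; do
    case "$f" in */) continue ;; esac      # skip directory entries in the listing
    test -f "$f" && continue               # skip anything extracted by a previous attempt
    mkdir -p "$tempdir/$(dirname "$f")"    # recreate the archive's directory structure...
    mkdir -p "$(dirname "$f")"             # ...both in the staging area and at the destination
    printf 'extracting %s...' "$f"
    unzip -p compressed.zip "$f" > "$tempdir/$f"
    printf 'done!\n'
    mv "$tempdir/$f" "$f"                  # atomic, since both paths share a filesystem
done
rm -r "$tempdir"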
See also Is mv atomic on my fs?