As sysadmins, we have all encountered the situation where a disk has filled up because of one large file. We can't simply delete the file because it contains important data, so we need to copy it elsewhere first - to another host or another volume - before we can delete it.
For huge files, that copy takes a long time, during which the host is virtually unusable.
So I pose the question: is there a way for the parts already copied to be removed from the original as the copy proceeds, so that disk usage drops and the host can be brought back to a usable state more quickly? I'm imagining some kind of tool or command-line flag that would do this.
fallocate --punch-hole (fallocate -p) is a way, though not automatic: it's your job to know how big a hole you can punch at any given moment. – Kamil Maciorowski Aug 14 '22 at 16:51
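The comment's suggestion can be turned into a loop: copy a chunk, then punch a hole over that range in the source so its blocks are released while the copy is still running. Below is a minimal sketch. The file names and the 1 MiB chunk size are illustrative (it builds its own scratch file to demonstrate); it assumes util-linux fallocate and a filesystem that supports hole punching (ext4, XFS, Btrfs, tmpfs).

```shell
#!/bin/sh
# Chunked copy that frees source blocks as it goes: after each chunk is
# safely written to the destination, punch a hole over that range in the
# source, so the source's disk usage shrinks during the copy.
set -e

SRC=$(mktemp)     # stand-in for the huge file
DEST=$(mktemp)    # stand-in for the target on another volume
dd if=/dev/urandom of="$SRC" bs=1M count=8 status=none
SUM=$(sha256sum "$SRC" | awk '{print $1}')   # to verify the copy later

CHUNK=$((1024 * 1024))   # 1 MiB here; use much larger chunks in practice
size=$(stat -c %s "$SRC")
offset=0
while [ "$offset" -lt "$size" ]; do
    # Copy one chunk at the current offset into the same offset of DEST.
    dd if="$SRC" of="$DEST" bs="$CHUNK" \
       skip=$((offset / CHUNK)) seek=$((offset / CHUNK)) \
       count=1 conv=notrunc status=none
    sync "$DEST"
    # The chunk is on stable storage; deallocate it in the source.
    # The apparent file size is unchanged -- the range becomes a hole
    # and its blocks are returned to the filesystem immediately.
    fallocate --punch-hole --offset "$offset" --length "$CHUNK" "$SRC"
    offset=$((offset + CHUNK))
done

du -h "$SRC" "$DEST"   # SRC now occupies (almost) no disk blocks
rm "$SRC"
```

Note the trade-off: once a hole is punched, that range of the source reads back as zeros, so an interrupted run leaves you with neither a complete source nor a complete copy. Only use this when you are committed to the move and the destination is trustworthy.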