EDIT — To clarify/summarize, the scenario is the following:
- Context: a large file (1 TB+) on server A; there is virtually no disk space left on A, its disk utilization keeps growing rapidly and can't be stopped, and there's no practical way to add more storage without interrupting production processes
- Goal: move the "huge file" from A to another machine B, deleting already-transferred parts of the file from A's disks while the transfer is still running (the transfer could take a while given the file size, but disk utilization keeps growing relentlessly, so we can't just wait for it to finish)
PS: Please note that I'm primarily looking for a mature, standard solution, not a bash script / hack. I think it shouldn't be very difficult to come up with something using tools like truncate. However, if there's no standard solution and someone has an elegant bash script (or similar), I'd still be curious to see it.
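To illustrate the truncate idea mentioned in the PS, here is a minimal sketch (not a mature tool): send the file in chunks starting from its tail, write each chunk at the matching offset on B, and only then truncate the local file, which frees those blocks on A. All host names and paths below are placeholders, and it assumes GNU coreutils (stat, dd, truncate) on both machines plus ssh access from A to B.

#!/usr/bin/env bash
# Minimal sketch of the tail-chunk / truncate approach -- not a mature tool.
# Assumptions (hypothetical names/paths): GNU coreutils on both hosts,
# ssh access from A to B, enough free space on B for the whole file.
set -euo pipefail

SRC=/data/hugefile                 # file on server A
REMOTE_HOST=user@serverB           # server B
REMOTE_FILE=/data/hugefile         # destination path on B
CHUNK=$((1024 * 1024 * 1024))      # 1 GiB per round

SIZE=$(stat -c %s "$SRC")

# Pre-create a file of the final size on B so each chunk can be written
# at its real offset.
ssh "$REMOTE_HOST" "truncate -s $SIZE '$REMOTE_FILE'"

while [ "$SIZE" -gt 0 ]; do
    # Offset and length of the last (possibly short) chunk still left on A.
    OFFSET=$(( (SIZE - 1) / CHUNK * CHUNK ))
    LEN=$(( SIZE - OFFSET ))

    # Stream the tail chunk to B and write it at the same offset there.
    dd if="$SRC" bs=1M iflag=skip_bytes,count_bytes skip="$OFFSET" count="$LEN" status=none \
        | ssh "$REMOTE_HOST" "dd of='$REMOTE_FILE' bs=1M oflag=seek_bytes seek=$OFFSET conv=notrunc status=none"

    # Only after the chunk went through, cut it off the local file,
    # which frees its blocks on A.
    truncate -s "$OFFSET" "$SRC"
    SIZE=$OFFSET
done

rm -f "$SRC"    # the local file is now 0 bytes

Because the local truncate only runs after a chunk has been streamed, an interrupted run can in principle be resumed from the current size, but there is no checksumming here; a real migration should verify the copy on B (sizes, hashes) before trusting it.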
Original request:
Is there a standard solution to delete big files (think 1 TB+) as they are being transferred via rsync/scp?
The solutions that I've found require extra disk space to first split the file into pieces. However, what if there is virtually no disk space left for these operations?
In the scp/rsync man pages, I only found switches that delete files after they've been fully transferred.
Comment: fallocate -p. – Kamil Maciorowski, Jul 31 '21 at 20:35
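For completeness, here is a rough sketch of the front-to-back variant the comment points at: fallocate -p (--punch-hole) deallocates blocks inside a file without changing its apparent size, so already-transferred leading ranges can be released while the copy proceeds. It assumes util-linux fallocate, a filesystem that supports hole punching (e.g. ext4, XFS, btrfs), GNU dd, and ssh access to B; names and paths are again placeholders.

#!/usr/bin/env bash
# Rough sketch of the hole-punching variant hinted at in the comment.
# Assumptions: util-linux fallocate, a filesystem supporting punch-hole,
# GNU dd; host and path names are placeholders.
set -euo pipefail

SRC=/data/hugefile
REMOTE_HOST=user@serverB
REMOTE_FILE=/data/hugefile
CHUNK=$((1024 * 1024 * 1024))      # 1 GiB per round

SIZE=$(stat -c %s "$SRC")
OFFSET=0

# Start from an empty destination file on B.
ssh "$REMOTE_HOST" ": > '$REMOTE_FILE'"

while [ "$OFFSET" -lt "$SIZE" ]; do
    LEN=$(( SIZE - OFFSET ))
    if [ "$LEN" -gt "$CHUNK" ]; then LEN=$CHUNK; fi

    # Append the next chunk to the copy on B.
    dd if="$SRC" bs=1M iflag=skip_bytes,count_bytes skip="$OFFSET" count="$LEN" status=none \
        | ssh "$REMOTE_HOST" "cat >> '$REMOTE_FILE'"

    # Punch a hole over the chunk that was just sent: the file keeps its
    # apparent size, but the range no longer occupies blocks on A.
    fallocate --punch-hole --offset "$OFFSET" --length "$LEN" "$SRC"

    OFFSET=$(( OFFSET + LEN ))
done

rm -f "$SRC"

A punched range reads back as zeros, so once a chunk has been released the data only exists on B; as with the previous sketch, verify the copy before deleting anything you cannot recreate.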