I want to write a shell script that accepts a URL and an output markdown file, and appends that URL plus some metadata to the end of that file. It is possible that this script is invoked concurrently, resulting in concurrent `echo $some_multiline_thing >> file`s.
Per this question, this can result in corrupt data being written to `file`. How do I synchronize the writes so that all of the appends happen, each one atomically? (The order of the appends doesn't matter to me.)
**Update:** I found a half-baked solution:
```zsh
function sync-append() {
    local file="$1"
    local text="$2"
    local lock_fd
    {
        # Open the target file on a new fd and take an exclusive lock on it.
        exec {lock_fd}>>"$file"
        flock -x "$lock_fd"
        echo "$text" >> "$file"
    } always {
        # Release the lock by closing the fd, even if the block above fails.
        exec {lock_fd}>&-
    }
}
```
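For context, a hypothetical invocation (the file name and metadata format are made up for illustration) would look like:

```zsh
sync-append bookmarks.md "- https://example.com (saved $(date +%F))"
```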
This solution relies on zsh's `always` block, which may not be invoked (with, e.g., `kill -9`).
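For comparison, here is a minimal portable sketch of the same idea, assuming util-linux `flock(1)` is available; the function name and fd number are my choices, not from the original. Because the kernel releases the lock when the fd closes, no cleanup handler is needed even on `kill -9`:

```sh
# Append a line to a file under an exclusive lock (works in bash and zsh).
sync_append() {
    local file="$1" text="$2"
    (
        flock -x 9 || exit 1          # block until we hold the lock on fd 9
        printf '%s\n' "$text" >&9     # append while holding the lock
    ) 9>>"$file"                      # open for append on fd 9, scoped to the subshell
}
```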
`flock` is the answer, but that question does not tell me how to actually use it to accomplish my goal of synchronized appending writes. – HappyFace May 26 '20 at 12:22

`kill -9` shouldn't matter. The lock is associated with the open file description, and if the process dies, the fd closes and the lock is gone. – ilkkachu May 26 '20 at 13:26

So `kill -9` kills the lock as well. Why's that though? Where is the lock living? Shouldn't the lock be in the OS, and not in the file descriptor? Does the OS query all open file descriptors to determine if a file is locked? – HappyFace May 26 '20 at 13:32
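To illustrate ilkkachu's point, a rough demo sketch (assuming util-linux `flock(1)`; the path and timings are made up) shows that the lock lives in the kernel, attached to the open file description, and is dropped when the holder's fds close:

```sh
# flock(1) opens /tmp/demo.lock, takes the lock, then execs sleep,
# which inherits the locked fd and so keeps holding the lock.
flock /tmp/demo.lock sleep 100 &
holder=$!
sleep 1
flock -n /tmp/demo.lock true || echo "lock is held"   # non-blocking attempt fails
kill -9 "$holder"    # SIGKILL the holder; the kernel closes its fds
flock -w 5 /tmp/demo.lock true && echo "lock released after kill -9"
```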