I have a script that copies a local file to a remote directory and updates a remote file with a new line that mentions the just-copied file.
Sometimes some of the files in the remote directory need to be deleted, and the file mentioning them should be updated to remove the corresponding entries as well.
Problem: the script can run as more than one instance, and for more than one directory on the remote server. I.e., there is more than one directory on the remote server storing the files we copy, chosen by the type passed to the instance. How can I make the copy of the file and the update of the "log" somewhat atomic?
I don't have that many concurrent instances running for this to be a serious problem, but I was wondering whether there is a way to make such changes while being sure the log file is updated correctly.
E.g. would the following work?

scp file.bin remoteserver:/foo/$type/
(
    flock -x 9
    grep -v "oldfile.bin" entries.log > entries.log.backup && mv entries.log.backup entries.log
    echo "$record" >> entries.log
) 9>entries.log.lock

– Jim Oct 18 '17 at 08:05
The scp needs to go inside the subshell, after flock. In that case you could perhaps use the last pattern given in the manpage instead, to lock the whole shell script. – Stephen Kitt Oct 18 '17 at 08:13
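A sketch of that suggestion, assuming (as the snippet above already does) that entries.log and the lock file are reachable from where the script runs:

(
    flock -x 9    # block until we hold an exclusive lock on fd 9;
                  # 9 is arbitrary, any descriptor not already in use will do
    scp file.bin remoteserver:/foo/$type/
    grep -v "oldfile.bin" entries.log > entries.log.backup && mv entries.log.backup entries.log
    echo "$record" >> entries.log
) 9>entries.log.lock    # opens (creating if needed) the lock file on fd 9

Since each $type directory has its own log, a per-type lock file (say, a hypothetical entries.$type.lock) would keep instances working on different types from blocking each other.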
Where does the 9 for the file descriptor come from? Could I end up not being able to use that number? Is it arbitrary? – Jim Oct 18 '17 at 08:20

"This is useful boilerplate code for shell scripts. Put it at the top of the shell script you want to lock..." – Jim Oct 18 '17 at 08:21
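For reference, the boilerplate being quoted is presumably the last example in the flock(1) manpage; in util-linux it reads as follows (comments are mine). And to the earlier question: the descriptor number is indeed arbitrary, any fd not already in use (0, 1 and 2 are stdin, stdout and stderr) works.

#!/bin/bash
# flock(1) manpage boilerplate: on the first run FLOCKER is not set to this
# script, so the script re-executes itself under flock, taking an exclusive
# non-blocking lock (-en) on the script file itself; FLOCKER is set so the
# re-executed copy skips this line instead of recursing.
[ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :

# from here on, the whole script runs with the lock held
scp file.bin remoteserver:/foo/$type/
# ... rest of the script ...

With -n a second instance exits immediately if the lock is already held; dropping -n would make it wait instead. Note this also serializes instances running for different $type values, which a per-type lock file would not.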
Is flock available on all platforms? I get "No manual entry for flock" when I do man flock. – Jim Oct 18 '17 at 08:31

… entries.log at the same time. (though, it may not be required) – RomanPerekhrest Oct 18 '17 at 11:21