Unix systems by and large avoid mandatory locks. There are a few cases where the kernel will lock a file against modification by user programs, but not merely because another program is writing to it: no Unix system will lock a file just because a program has it open for writing.
If you want concurrent instances of your script not to tread on each other's toes, you need to use an explicit locking mechanism such as `flock` or `lockfile`.
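For example, `flock` from util-linux can serialize the writers around a lock file. A minimal sketch, assuming the script appends to a shared log (`shared.log` and the lock file path are placeholder names):

```sh
#!/bin/sh
# Hold an exclusive lock on fd 9 for the duration of the subshell, so
# concurrent instances take turns writing their whole block of output.
(
    flock -x 9                       # blocks until the lock is acquired
    printf '%s\n' "first chunk"  >> shared.log
    printf '%s\n' "second chunk" >> shared.log
) 9> /tmp/shared.log.lock            # fd 9 refers to the lock file
```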
When you open a file for appending, which `>>` does, each program is guaranteed to always write at the current end of the file. So the multiple instances' output will never overwrite each other, and if they take turns writing, their output will appear in the same order as the writes.
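For example, running two instances of a hypothetical script against the same log relies only on this append guarantee (the script and file names are made up):

```sh
# Both instances open results.log in append mode; every write lands at
# the current end of the file, so neither clobbers the other's output.
./myscript.sh >> results.log &
./myscript.sh >> results.log &
wait
```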
The bad thing that could happen is if one of the instances writes several chunks of output and expects them to come out together. Between consecutive writes by one instance, other instances may perform their own writes. For example, if instance 1 writes `foo`, then instance 2 writes `hello`, and only then instance 1 writes `bar`, the file will contain `foohellobar`.
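A timing-dependent sketch that usually reproduces this interleaving (the output file name is arbitrary):

```sh
: > out.txt                                                    # start empty
( printf foo >> out.txt; sleep 2; printf bar >> out.txt ) &    # instance 1
( sleep 1; printf hello >> out.txt ) &                         # instance 2
wait
cat out.txt    # most likely prints: foohellobar
```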
A process effectively writes to the file when it calls the `write` system call. A call to `write` is atomic: each call writes a sequence of bytes that won't be interrupted by other programs. There is often a limit to how much data a single call to `write` will effectively write: for larger sizes, only the beginning of the data is written, and the application must call `write` again.

Furthermore, many programs perform buffering: they accumulate data in a memory area, then write this data out in one chunk. Some programs flush the output buffer after a complete line or other meaningful separation. With such programs, you can expect whole lines to come out uninterrupted, as long as they aren't too long (up to a few kilobytes; this depends on the OS). If a program does not flush at meaningful spots, but only based on the buffer size, you might see something like 4kB from one instance, then 4kB from another instance, then again 4kB from the first instance, and so on.
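If you control the script, one way to keep a record intact is to assemble the whole thing in memory and emit it with a single `printf`; for other programs, GNU `stdbuf` can request line buffering. A sketch under those assumptions (file and program names are placeholders):

```sh
# Build the complete record first, then write it out in one go, so it
# reaches the shared log as one short write rather than several pieces.
record="step1=ok step2=ok step3=ok"
printf '%s\n' "$record" >> shared.log

# For a program that flushes only on buffer size, ask for line buffering
# (works for programs using C stdio that don't override it themselves):
stdbuf -oL ./long_running_program >> shared.log
```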