When I was reading some materials about Nginx, I noticed that one of the two traditional ways to accept incoming connections on a single socket is:
The second of the traditional approaches used by multithreaded servers operating on a single port is to have all of the threads (or processes) perform an accept() call on a single listening socket in a simple event loop of the form:

```c
while (1) {
    new_fd = accept(...);
    process_connection(new_fd);
}
```
Quoted from The SO_REUSEPORT socket option.
Then I noticed that Nginx seems to use that approach too.
As depicted in the figure, when the SO_REUSEPORT option is not enabled, a single listening socket notifies workers about incoming connections, and each worker tries to take a connection.
Quoted from Socket Sharding in NGINX Release 1.9.1.
Again, in The Architecture of Open Source Applications (Volume 2): nginx, searching the page for the keyword accept, I found this passage:
As previously mentioned, nginx doesn't spawn a process or thread for every connection. Instead, worker processes accept new requests from a shared "listen" socket and execute a highly efficient run-loop inside each worker to process thousands of connections per worker. There's no specialized arbitration or distribution of connections to the workers in nginx; this work is done by the OS kernel mechanisms.
So I was really shocked, because nobody had told me that having multiple processes or threads accept() on a single listening socket is OK and won't cause a race condition.
Because when it comes to shared resources, the first thing that comes to my mind is: "Is that function call thread-safe?" So I Googled it and found a related question on StackOverflow. The accepted answer confirmed the behaviour again, but it didn't give any reference, and people in the comments were still arguing about where the official documentation defines that.
By that time, I was thinking that the thread-safe property alone is not enough, because what's happening here is multiple threads or processes calling accept() on a single listening socket; I need something stronger than that.
So I came to check the book The Linux Programming Interface, which writes in §5.1 Atomicity and Race Conditions:
Atomicity is a concept that we’ll encounter repeatedly when discussing the operation of system calls. Various system call operations are executed atomically. By this, we mean that the kernel guarantees that all of the steps in the operation are completed without being interrupted by another process or thread.
Atomicity is essential to the successful completion of some operations. In particular, it allows us to avoid race conditions (sometimes known as race hazards). A race condition is a situation where the result produced by two processes (or threads) operating on shared resources depends in an unexpected way on the relative order in which the processes gain access to the CPU(s).
So the word/property I need is atomic or atomicity.
So my question is:
Is there any authoritative source that says multiple processes or threads accepting connections on a single listening socket is an atomic operation?
I just couldn't find any references on the Net after hours of searching.