"Everything is a file" is just an overstatement. It was novel in 1970s and it was a primary distinguishing characteristic of UNIX. But it's just a marketing concept, not a real foundation of UNIX, because it's obviously not true. It's not beneficial or sensible to treat EVERYTHING as a file.
Is a CPU a file? Does your program read() the CPU to fetch its next instruction? Is RAM a file? Does your program read() the next byte?
Back then, there were operating systems that gave you one API for a floppy disk, a different API for a hard disk, a different API for magnetic tape, and a bunch of different APIs for different terminals, and so on. IBM mainframe systems had different types of files on hard disks and gave you a different API for each one of them, believe it or not! So the UNIX "it is a file" approach, together with the stdin/stdout/stderr convention, brought a very elegant abstraction to both users and programmers.
With the network, this particular abstraction just didn't work out. And there's no real harm, just slightly less overall elegance and coherence in the OS. But it works. Do you see a file called /dev/myinternetz/www/google/com/tcp/80 anywhere on your system today? Can you open() it, write() a query, and read() the answer as nice HTML? No? That's because the "it is a file" abstraction was not very handy for interacting over the network. It didn't work too well in practice. The law of leaky abstractions in action.