My application consists of two Java processes that exchange data over an HTTP connection. It runs out of file descriptors and produces this error message:
Aug 14 11:27:40 server sender[8301]: java.io.IOException: Too many open files
Aug 14 11:27:40 server sender[8301]: at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
Aug 14 11:27:40 server sender[8301]: at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
Aug 14 11:27:40 server sender[8301]: at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
Aug 14 11:27:40 server sender[8301]: at org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:455)
Aug 14 11:27:40 server sender[8301]: at java.lang.Thread.run(Thread.java:748)
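Since the exception comes from Tomcat's acceptor, I assume most of the descriptors are sockets. To see what the process is actually holding open, I break the open descriptors down by type and look at the TCP socket states. This is only a rough sketch: 8301 is the PID from the log above and has to be adjusted, and the commands need to run as the service user or as root:
pid=8301   # PID of the sender process, taken from the log above
# Break the open descriptors down by type (sockets, pipes, regular files, ...)
ls -l /proc/"$pid"/fd | awk 'NR>1 {print $NF}' | cut -d: -f1 | sort | uniq -c | sort -rn
# TCP sockets held by that PID, with their state; many CLOSE_WAIT entries would
# point at HTTP connections that are never closed
ss -tnp | grep "pid=$pid,"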
Both processes are under the control of systemd. I checked the processes using cat /proc/5882/limits; the limits are defined like this:
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             63434                63434                processes
Max open files            4096                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       63434                63434                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
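As far as I understand, because both processes run as systemd services, the 4096 above comes from the unit's LimitNOFILE setting (systemd services do not read /etc/security/limits.conf), so if more descriptors were genuinely needed I could raise it with a drop-in roughly like this (the unit name sender.service and the value 65536 are only placeholders):
# Raise the file-descriptor limit for the unit via a drop-in
sudo mkdir -p /etc/systemd/system/sender.service.d
sudo tee /etc/systemd/system/sender.service.d/override.conf >/dev/null <<'EOF'
[Service]
LimitNOFILE=65536
EOF
sudo systemctl daemon-reload
sudo systemctl restart sender.service
systemctl show -p LimitNOFILE sender.service   # confirm the new limit
That said, the count I actually measure below stays well under 4096, so simply raising the limit doesn't look like the real fix.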
When I run lsof | grep pid | wc -l (with the actual PID in place of pid), I get fewer than 2000 entries. I count this way because of the information in "Discrepancy with lsof command when trying to get the count of open files per process".
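To cross-check the lsof number, I also count the entries in /proc/<pid>/fd directly, since that directory has exactly one entry per open descriptor and that is what the "Max open files" limit is enforced against (again only a sketch; 5882 is the PID from the limits check above, and reading another process's fd directory needs the service user or root):
pid=5882   # same PID as in the limits check above
sudo ls /proc/"$pid"/fd | wc -l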
I don't have the slightest idea what I could check or increase further.
Comment (Wildcard, Aug 15 '18 at 05:17): lsof -aKp "$pid"