This problem has now appeared twice on my production Ubuntu machine, which runs both a Node server (tiny) and a Spring Boot Java server (the workhorse). The first time it happened, grinding my server to a halt, I found that /proc/sys/fs/file-max had a value of 808286, which seemed totally reasonable to me, BUT I increased it anyway to 2097152. I don't recall exactly what I did after that (this was a good year or so ago), but I probably either restarted the server or at least my service and dusted my hands of the problem. Well, today it came back to haunt me. Restarting my Java service has temporarily fixed the problem, but I want to understand what is happening so I can avoid it in the future.
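This time around my plan is the usual sysctl route to make the bump persistent. A rough sketch of what I intend to run (the drop-in file name is just a placeholder, not necessarily what is on the box now):
$ cat /proc/sys/fs/file-max
$ sudo sysctl -w fs.file-max=2097152
$ echo 'fs.file-max = 2097152' | sudo tee /etc/sysctl.d/90-file-max.conf
$ sudo sysctl --system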
Some details about my setup:
- The /etc/security/limits.conf file is the default one installed by Ubuntu and is therefore just a file of comments, so no need to reproduce it here.
- The Java service is run by a user named tomcat.
- Apache is run by root and directs traffic to either the Node or Spring Boot service (a quick ownership check follows below).
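In case it matters, this is a quick way to double-check which user owns which process (the command names are roughly what they are on my box):
$ ps -o user,pid,cmd -C java
$ ps -o user,pid,cmd -C node
$ ps -o user,pid,cmd -C apache2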
The results of some relevant commands on my system...
$ ulimit -Sn
1024
$ ulimit -Hn
1048576
$ sudo su tomcat
$ ulimit -Sn
1024
$ ulimit -Hn
1048576
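For completeness, the same check as a one-liner, in case the interactive su step changes anything (I have not ruled out PAM differences between su and sudo):
$ sudo -u tomcat sh -c 'ulimit -Sn; ulimit -Hn'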
Side question: why is my hard limit half the value set in /proc/sys/fs/file-max?
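Could the 1048576 be coming from fs.nr_open rather than from file-max? That is the other knob I know of to compare, listed here for whoever answers (I have not dug into what it actually governs):
$ cat /proc/sys/fs/nr_open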
But now, if I look at the limits per process, I get the following...
$ cat /proc/<tomcat_java>/limits
Limit               Soft Limit    Hard Limit    Units
Max open files      4096          4096          files
$ cat /proc/<root_apache>/limits
Limit               Soft Limit    Hard Limit    Units
Max open files      8192          8192          files
$ cat /proc/<my_random_process>/limits
Limit               Soft Limit    Hard Limit    Units
Max open files      1024          1048576       files
So, what is going on here? The only processes I can find that pay attention to the file-max value I have set are my own (the "random" process above was my bash shell). Where are these limits for Apache and Java coming from? I can certainly see blowing through the 4096 limit above (which is probably my problem), but I have no idea how to get those services to use the system-wide limits I set.
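From reading around, it sounds like systemd may apply its own per-service LimitNOFILE regardless of limits.conf, so this is what I plan to check next (the unit names are guesses for my setup, and the 65536 value is just an example):
$ systemctl show tomcat.service -p LimitNOFILE
$ systemctl show apache2.service -p LimitNOFILE
$ sudo systemctl edit tomcat.service
then, in the drop-in it opens, something like:
[Service]
LimitNOFILE=65536
followed by:
$ sudo systemctl daemon-reload
$ sudo systemctl restart tomcat.service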
Thanks for any help on this.