
Is there a way to keep my SSH server, and everything under it (including bash), always available under heavy load?

Maybe it could be some kind of critical path, kept entirely in memory, with a dedicated CPU or something like that.

How could I have an always-available server, at minimal cost, so that I can investigate what's going on on my server?

blue112
  • Are you asking about a way of ensuring that you can log in via SSH to trouble-shoot / repair an otherwise unresponsive system? If so, that's an answer I'd like to see. I had previously wondered if there was some way to reserve a percentage of CPU / memory resources for root in the same way that 5% of disk space is reserved for root when an ext2 filesystem fills up. – Anthony Geoghegan Apr 09 '15 at 13:12
  • @AnthonyGeoghegan Yes, that's what I'm asking. And that's what I'm thinking about, but I'm not sure if there's a way to do that (or how). – blue112 Apr 09 '15 at 13:15
  • While looking around, I noticed a similar question was asked a few years ago but didn't get any answers. Here's hoping this question gets a better response. – Anthony Geoghegan Apr 09 '15 at 14:01
  • The likely answer is no. However, there are things you can do that will aid in remotely diagnosing the problems causing the heavy load, such as configuring remote system monitoring (Scout, NewRelic's server monitoring) and remote syslog logging (PaperTrail, LogStash, rsyslog, etc) – Creek Apr 09 '15 at 16:32
  • The most common "unresponsive" problems I've seen are running out of file descriptors, pids, and sockets. Even if you had some way for sshd to keep a pool of reserved processes, bash would be unable to fork so you'd end up with a useless shell. Your best chance would be a shell that has debugging tools built in, but if you're out of file handles, you would still have trouble diagnosing issues. – Chris Mendez Apr 09 '15 at 17:52
  • You can pin a program in memory, you can reserve one processor (but that's wasteful), you can set the nice and realtime values, and you can run the entire system in a cgroup with limited CPU shares (which are only yielded when needed), and then break out sshd so it takes priority. As for protecting a limited set of file descriptors, PIDs and sockets: cgroups (probably) has the answer. Google shows plenty of results. – Ken Sharp Aug 02 '18 at 03:21
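
To make the cgroup idea in the last comment concrete: on a systemd-managed machine, a similar effect can be sketched with a drop-in for the SSH unit. This is only a sketch, and it assumes cgroup v2 and a unit named ssh.service (on many distributions it is sshd.service); the numbers are illustrative.

    # /etc/systemd/system/ssh.service.d/priority.conf  (hypothetical drop-in)
    [Service]
    # cgroup v2 CPU shares: the default weight is 100, so 1000 means sshd is
    # strongly preferred under contention, but nothing is wasted when idle
    CPUWeight=1000
    # optionally dedicate a CPU to sshd; the rest of the system can be kept
    # away from it by setting a narrower default CPUAffinity= in
    # /etc/systemd/system.conf
    CPUAffinity=0

    # apply it
    systemctl daemon-reload
    systemctl restart ssh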

1 Answer


In order to fully utilize the system, the kernel presents the same resources to all services and tries to keep all of them running at the same priority. You could raise the priority of the sshd process to the highest level (as the nice value goes down, priority goes up).

See here: https://serverfault.com/questions/355342/prioritise-ssh-logins-nice
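
For example, a one-off renice of the running daemon could look like the sketch below (run as root; the priority value is illustrative, and the setting is lost when sshd is restarted):

    # raise the priority of all running sshd processes; -19 is near the maximum
    for pid in $(pgrep -x sshd); do
        renice -n -19 -p "$pid"
    done

New sessions forked by these daemons inherit the priority. For something that survives restarts, a Nice= setting in a systemd drop-in for the SSH service unit does the same job persistently.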

That won't solve your issue with memory. You would need to use cgroups to assign the sshd process its own reserved memory to handle that.

Limit memory usage for a single Linux process
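
A minimal sketch of that, assuming systemd on cgroup v2 and a unit named ssh.service (sshd.service on some distributions); the sizes are illustrative:

    # /etc/systemd/system/ssh.service.d/memory.conf  (hypothetical drop-in)
    [Service]
    # ask the kernel to protect roughly 64 MiB of this service's memory
    # from reclaim (cgroup v2 memory.min)
    MemoryMin=64M
    # make the OOM killer strongly prefer other processes
    OOMScoreAdjust=-900

    # apply it
    systemctl daemon-reload
    systemctl restart ssh

Note that memory.min protection is distributed down the slice hierarchy, so the parent slice may also need a DefaultMemoryMin= setting for the reservation to be fully effective; treat this as a starting point rather than a guarantee.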

DM.