I have SSH remote access to a machine I'd like to use for long-running jobs. What I currently do is simply
ssh user@remote command-to-run
This has several drawbacks:
- I can't simply suspend my local machine: when I do, SIGHUP is sent to the remote process, effectively killing it. I could use nohup to prevent that.
- The output may be long, so I'd rather have it redirected to files. Of course, I can do that manually, but it gets clumsy with a series of commands.
- The process may run for a really long time. Ideally, the submitting program would only confirm that the command (script) has been successfully submitted and then terminate.
- I'd like to get a mail notification when the process terminates, with its exit code. Of course, I could use a shell script and a command-line mailer to send it manually, but that's one more hack.
- I want to be able to schedule multiple scripts at once safely. In particular, I want to be able to submit multiple scripts with the same name without renaming them manually, and I don't want to worry about clobbering files that already exist on the remote file system. (A rough sketch of how these hacks could be stitched together by hand follows this list.)
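To make the "pile of hacks" concrete, here is roughly what I'd have to script by hand. It's only a sketch: user@remote, the mail address and the ~/jobs path are placeholders, and it assumes mail(1) is configured on the remote host.

#!/bin/sh
# submit.sh - rough sketch only: copy a script to the remote host under a
# unique name, start it detached with nohup, redirect its output to files,
# and mail the exit code when it finishes.
script="$1"                  # local script to run remotely
remote="user@remote"         # placeholder remote host
mailto="me@example.com"      # placeholder notification address

# create a unique job directory on the remote side, so identically named
# scripts never collide or overwrite existing files
job=$(ssh "$remote" 'mkdir -p "$HOME/jobs" && mktemp -d "$HOME/jobs/job.XXXXXX"')
scp "$script" "$remote:$job/script.sh"

# start the job detached: ssh returns as soon as it is submitted, nohup keeps
# it alive when my local machine suspends, output goes to files, and mail(1)
# reports the exit code
ssh "$remote" "nohup sh -c 'sh $job/script.sh >$job/stdout 2>$job/stderr; \
  echo exit code: \$? | mail -s \"job done: $job\" $mailto' \
  </dev/null >/dev/null 2>&1 &"

This works for a single script, but it's exactly the kind of fragile glue I'd prefer a real tool to handle.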
This is very similar to what SLURM does, but I don't have any administrative rights on the remote side. Besides, when I have access to all the cores of the remote machine, there is no point in declaring how many cores I need.
Is there anything I could use for this? What I described seems like a common use case.
nohup. This type of question has been asked over and over and over again. – Julie Pelletier Jan 20 '17 at 17:40
I already knew about nohup (just forgot to mention that), and I've updated the question to mention the extra requirements I realized I want. – marmistrz Jan 20 '17 at 21:36