I'm on a laptop with intermittent internet connectivity (i.e. I sometimes have no network for a week at a time). I want the output of a process on my laptop to end up on my server. It all needs to get there eventually, through SSH, and without me having to think about it.
How can I do this?
Test-case
# print current date to FIFO every second
while true; do sleep 1; date; done > magic-fifo
Leave that running, disconnect from the internet for a week (or sufficiently long to be convincing), then reconnect. All data should be sent immediately while connected, and buffered for later delivery while not.
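To judge whether everything arrived, a local copy can be kept with tee and the two ends compared after reconnecting. A rough sketch, assuming the server-side file is called log and the host is reachable as user, as in the attempt below:

# keep a local copy of everything written into the FIFO
while true; do sleep 1; date; done | tee local-copy > magic-fifo

# after reconnecting, the line counts should match
wc -l local-copy
ssh user 'wc -l log'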
An attempt
mkfifo magic-fifo
cat magic-fifo \
| pv --buffer-size 1g --buffer-percent \
| AUTOSSH_POLL=10 AUTOSSH_PORT=50000 autossh user 'cat >> log'
pv is just here to buffer up to 1 GiB of data, in case a week's data fills the kernel pipe buffer.
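Back-of-envelope, in case the sizing matters: a date line is about 29 bytes, so a week of one line per second is roughly 17.5 MB, far below 1 GiB, while the default 64 KiB Linux pipe buffer would fill in well under an hour:

# ~29 bytes per `date` line, one line per second
echo $(( 29 * 60 * 60 * 24 * 7 ))   # ≈ 17.5 MB per week
echo $(( 65536 / 29 / 60 ))         # default 64 KiB pipe buffer lasts ~37 minutes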
autossh wraps ssh and keeps it running by killing and resurrecting it if the network is down.
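For reference, my understanding of the environment variables used above: AUTOSSH_PORT=50000 is the same as passing the -M flag (the monitoring port pair), and AUTOSSH_POLL=10 shortens the connection-check interval to 10 seconds. So an equivalent spelling would be:

# same invocation, using the -M flag for the monitoring port
AUTOSSH_POLL=10 autossh -M 50000 user 'cat >> log'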
This drops some data at disconnect, but works otherwise. I presume the reason for data loss is that ssh reads it, realises it cannot send it, then gets killed by autossh.
I don't necessarily expect the data to persist across reboots, though that would be a nice bonus.
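One direction I've sketched but not tested (file names are mine): append to an ordinary file instead of a FIFO, and on each attempt ship only the suffix the server is missing. Since the buffer is a plain file on disk, it would survive reboots too:

# writer: append to a plain file instead of a FIFO
while true; do sleep 1; date; done >> spool-log &

# shipper: every 10 s, resend only the bytes the server doesn't have yet
while true; do
    if sent=$(ssh user 'cat log 2>/dev/null | wc -c'); then
        tail -c +"$(( sent + 1 ))" spool-log | ssh user 'cat >> log'
    fi
    sleep 10
done

If the connection dies mid-transfer, the remote append just ends short, and the next round's byte count resumes from exactly where it stopped.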
…ssh shell sessions, but to my understanding, they don't help when piping data through ssh as here. – Anko Sep 10 '15 at 18:36