
I'm on a laptop with intermittent internet connectivity (i.e. I sometimes have no network for a week at a time). I want the output of a process on my laptop to end up on my server. All of it needs to get there eventually, over SSH, and without me having to think about it.

How can I do this?


Test-case

# print current date to FIFO every second
while true; do sleep 1; date; done > magic-fifo

Leave that running, disconnect from the internet for a week (or long enough to be convincing), then reconnect. While connected, data should be sent immediately; while disconnected, it should be buffered until the connection returns.
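
One way to make that test concrete (the reference file and host names here are my own illustration, not part of the question): tee the generator's output into a local file, then after reconnecting and letting the buffer drain, compare it against the server's copy.

mkfifo magic-fifo

# keep a local reference copy of everything written to the FIFO
while true; do sleep 1; date; done | tee reference.log > magic-fifo

# after reconnecting: stop the generator, wait for the pipeline to
# drain, then check the server's copy against the local reference
ssh user@server cat log | diff - reference.log && echo 'no data lost'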

An attempt

mkfifo magic-fifo

cat magic-fifo \
| pv --buffer-size 1g --buffer-percent \
| AUTOSSH_POLL=10 AUTOSSH_PORT=50000 autossh user@server 'cat >> log'

pv is there just to buffer up to 1 GiB of data, in case a week's worth of output overflows the kernel pipe buffer (64 KiB by default on Linux).
autossh wraps ssh and keeps it running, killing and restarting it whenever it detects the connection is down.

This drops some data at each disconnect, but otherwise works. I presume the data is lost because ssh reads it from the pipe, finds it cannot send it, and is then killed by autossh, taking the unsent bytes with it.


I don't necessarily expect the data to persist across reboots, though that would be a nice bonus.

Anko
  • Have a look at screen and at Mosh. – sebix Sep 10 '15 at 18:20
  • @sebix I've used both. They're useful for interactive ssh shell sessions, but to my understanding, they don't help when piping data through ssh as here. – Anko Sep 10 '15 at 18:36

1 Answer


Save the output to a local file, then run rsync --partial --append on that file to keep pushing it up to the server?
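
For what it's worth, a minimal sketch of that idea, assuming the generator appends to a local file out.log and the server-side copy lives at user@server:log (those names are my illustration, not part of the answer):

# write to a plain file instead of a FIFO
while true; do sleep 1; date; done >> out.log &

# --append sends only the bytes beyond the server copy's current
# length; --partial tells rsync not to throw away partially
# transferred data; a failed run is simply retried on the next pass
while true; do
    rsync --partial --append out.log user@server:log
    sleep 60
done

Because the buffer is now an ordinary file on disk, it also survives reboots.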

thrig