So, do I get it right that you want to be able to view the last N bytes at any time, without having to store significantly more than that?
If you want that in actual real-time, as in byte-by-byte, you'll probably need some dedicated software for it. Often, one would store the latest data in a ring buffer, but you probably couldn't find software that could read data arranged like that from a file. Another option would be to just write all the data to a file the normal way, and periodically tell the OS to discard the earlier data that's no longer needed. (I think I saw a comment suggesting that earlier, but maybe it was deleted.)
You could probably do that with a Perl script that calls fallocate() (with FALLOC_FL_PUNCH_HOLE) to do the discarding.
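As a minimal sketch of the discarding approach, using the fallocate(1) utility from util-linux instead of a Perl script (the file name and sizes here are made up for illustration, and it assumes a filesystem that supports hole punching, such as ext4, XFS, or tmpfs):

```shell
#!/bin/sh
# Keep only roughly the last $keep bytes of a growing log allocated,
# by punching a hole over the older data. The file's logical size and
# the offsets of the remaining data are unchanged; only the disk
# space for the punched range is released.

log=grow.log
keep=$((1024 * 1024))     # keep the last 1 MiB

# Simulate a writer that has produced 4 MiB so far.
dd if=/dev/zero of="$log" bs=1M count=4 status=none

size=$(stat -c %s "$log")
if [ "$size" -gt "$keep" ]; then
    # Deallocate everything except the last $keep bytes.
    fallocate --punch-hole --offset 0 --length $((size - keep)) "$log"
fi

# The last $keep bytes are still readable at their original offsets.
tail -c "$keep" "$log" > last-chunk
```

Run periodically (e.g. from cron or a loop), this keeps disk usage bounded while readers can still open the file normally.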
If you don't need the very latest data, a quick and dirty shell solution would store fixed-size chunks in a file, renaming the file to another name between each chunk. The latest complete chunk would then always be available in the second file.
E.g.

    i=0
    while true; do echo $i; sleep .1; i=$((i+1)); done |
    while true; do
        head -n 10 > out1
        mv out1 out2
    done
The first loop produces some test output, while the second reads chunks of 10 lines, so that the last complete chunk is in out2 and the last partial chunk in out1. Change the argument to head as needed.
Note that with a small chunk like that, output buffering in head will make out1 always appear empty. Use stdbuf -o0 head ... instead if that's an issue.
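As a bounded illustration of the rotation loop above (a finite producer instead of the infinite loops, with the stdbuf workaround applied; the file names match the example above):

```shell
#!/bin/sh
# Bounded variant of the rotation loop: a finite producer feeds a
# consumer that moves each complete 10-line chunk into out2, leaving
# any final partial chunk in out1. stdbuf -o0 keeps out1 from
# looking empty while head is still running.

seq 1 25 |
while :; do
    stdbuf -o0 head -n 10 > out1
    # A chunk shorter than 10 lines means the producer is done:
    # leave the partial chunk in out1 and stop.
    # (Caveat: on a pipe, GNU head may read ahead past its 10 lines,
    # so with a fast producer some input can be consumed unwritten;
    # with a slowly trickling producer, as above, reads line up.)
    [ "$(wc -l < out1)" -lt 10 ] && break
    mv out1 out2
done
```

After the loop, out2 holds the last complete 10-line chunk and out1 whatever partial chunk remained when the input ended.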
tail and redirect the output just like you're piping to pv here. – muru Oct 28 '22 at 04:15

tail -f wouldn't work because -f appends all new data to the output – bcattle Oct 28 '22 at 13:24