
production.log

[2019-02-11 10:18:18] GET  /
[2019-02-11 11:18:19] POST  blavadasdasdgd
...
... <--- A lot of in between logs and other data
...
[2019-02-12 11:18:20] INFO  ::HTTPServer#start: pid=21378 port=4567
[12/Mar/2019:11:18:25 +0200] "GET / HTTP/1.1" 404 458 0.0086
[12/Mar/2019:11:18:26 EET] "GET / HTTP/1.1" 404 458 - -> /
[12/Mar/2019:11:18:27 +0200] "GET /" 200 18893 0.0020
[2019-03-12 11:18:28] GET /
[2019-03-12 12:18:29] POST  blablabla
...
... <--- A lot of in between logs and other data 
...
[13/Mar/2019:11:18:30 +0200] "GET / HTTP/1.1" 404 458 0.0086
[13/Mar/2019:11:18:31 EET] "GET / HTTP/1.1" 404 458 - -> /
[13/Mar/2019:11:18:32 +0200] "GET /" 200 18893 0.0020
...
... <--- A lot of in between logs and other data
...
[2019-03-14 11:19:18] GET /

The content of this file is fake, but the timestamps are in the correct order (oldest to newest).

I have a webserver that runs under nohup and redirects all of its output to a file called production.log. It has written around 10 GB of data and info to that file, and I want to truncate it in a way that keeps a good amount of recent logs and data while getting rid of the old data. So I'm taking an approximate guess of tailing the last 30,000 lines into a new file called production.log.1 and then moving that back to replace production.log.

Example:

tail -30000 production.log > production.log.1 && mv production.log.1 production.log

Now when I try to tail -f production.log, it never shows anything new from the webserver; it just keeps showing the last timestamped entry from before I replaced the file. The webserver simply stops writing into it.

Is there a better or a good way to do this without writing into a different file? I need to get rid of the old data from this file while keeping the webserver's 2>&1 output going to it.

Viktova
What you've done there is to leave the webserver writing to a file that no longer exists in the filesystem. (This means you can't read or copy it, and as it no longer exists you can't even truncate it.) – Chris Davies Mar 12 '19 at 10:08
  • If you tell us what the webserver is, we might be able to tell you how to get it to start writing to a new logfile. – Chris Davies Mar 12 '19 at 10:08

2 Answers


Make sure to append the output to the file, i.e. use >> and not >.

Next you can use "copy and truncate" to preserve the log:

cp production.log production.log.1 && cp /dev/null production.log

There is a small time window between the completion of the copy command and the truncation of the logfile, so you may lose a bit of the log, but that cannot be avoided with this approach.
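If GNU coreutils is available, the same copy-and-truncate step can be sketched with truncate(1) (the filenames here mirror the example above):

```shell
# Sketch: copy the log aside, then truncate the original in place, so the
# webserver's open file descriptor stays valid. Assumes GNU coreutils.
cp production.log production.log.1 && truncate -s 0 production.log
```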

Note that the utility logrotate has a special directive to do just this: copytruncate.
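For reference, a minimal logrotate stanza using that directive might look like this (the path, size cap, and retention count are assumptions; adjust them to your setup):

```
/path/to/production.log {
    size 100M        # rotate once the file grows past 100 MB
    rotate 5         # keep five rotated copies
    compress
    copytruncate     # copy the log aside, then truncate it in place,
                     # so the webserver keeps its open file descriptor
}
```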

The point of using append rather than a plain redirect is that otherwise the program writing to the logfile remembers its write offset: if the logfile was, say, 2 MB in size when you truncated it, the program's next write lands at the old offset, and you end up with a sparse logfile containing 2 MB of null bytes followed by the new log entries.
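This is easy to demonstrate (a sketch using made-up temporary files): write an entry, truncate the file from outside, then write again, once through > and once through >>:

```shell
# Sketch: compare '>' (fixed write offset) with '>>' (O_APPEND) after an
# external truncation. The filenames are hypothetical temporary files.
plain=$(mktemp)
append=$(mktemp)

# Writer opened with '>': the second write lands at the old offset,
# leaving a 12-byte hole of null bytes at the start of the file.
{ echo "first entry"; : > "$plain"; echo "second entry"; } > "$plain"

# Writer opened with '>>': every write goes to the current end of file,
# so after truncation the new entry starts at offset 0.
{ echo "first entry"; : > "$append"; echo "second entry"; } >> "$append"

wc -c < "$plain"    # 25 bytes: 12 null bytes plus "second entry\n"
wc -c < "$append"   # 13 bytes: "second entry\n" only
```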

wurtel

Ensure that it logs to standard error, or at least standard output, and make its standard output/error the write end of a pipe. At the read end of that pipe, run a tool that maintains a strictly size-capped, automatically rotated, rotatable-on-demand set of log files in a directory that you specify:

./thing-to-be-logged 2>&1 | cyclog logs/

You won't have the lost log entries that logrotate can give you, and you won't have to touch the server at all in order to get the log writer to switch to a new output file.
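If cyclog isn't available, the pipe pattern itself can be mimicked with plain POSIX tools. This is only a sketch (the logs/ directory, the 1 MiB cap, and the timestamped names are assumptions, and a real rotating logger handles edge cases this loop does not):

```shell
# Sketch: a size-capped, self-rotating log consumer at the read end of a
# pipe. './thing-to-be-logged' stands in for the real server process.
mkdir -p logs
./thing-to-be-logged 2>&1 | while IFS= read -r line; do
    printf '%s\n' "$line" >> logs/current
    # Rotate once the current file reaches the size cap.
    if [ "$(wc -c < logs/current)" -ge 1048576 ]; then
        mv logs/current "logs/$(date +%Y%m%d%H%M%S).log"
    fi
done
```

Because the consumer, not the server, owns the log files, rotation never invalidates a descriptor the server holds, which is exactly the deleted-file problem from the question.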

Such tools include multilog from daemontools, svlogd from runit, s6-log from s6, and cyclog from the nosh toolset.


JdeBP