
I'm looking for a way to simply print the last X lines of a systemctl service's log on Debian. I would like to use this in a script that processes the printed, most recent log entries. I've found this post but wasn't able to adapt it for my purposes.

Currently I'm using this code, which is just giving me a small snippet of the log files:

journalctl --unit=my.service --since "1 hour ago" -p err

To give an example of what the result should look like: run the command above for any service, scroll to the end of the log, then copy the last 300 lines counting from the bottom.

My idea was to use egrep, e.g. egrep -m 700 ., but I've had no luck so far.

sourcejedi
user3191334

5 Answers

journalctl --unit=my.service -n 100 --no-pager
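Since the goal is to consume these lines from a script, a minimal POSIX-sh sketch built on this command might look like the following. The unit name, variable names, and the "error" pattern here are placeholders for illustration, not part of the answer:

```shell
#!/bin/sh
# Sketch: capture the last N journal lines of a unit for further processing.
# "my.service" is a placeholder; adjust the unit and line count as needed.
unit="my.service"
lines=100

# --no-pager stops journalctl from invoking a pager when run non-interactively.
if last_log=$(journalctl --unit="$unit" -n "$lines" --no-pager 2>/dev/null); then
    # Example use: count occurrences of "error" among those lines.
    printf '%s\n' "$last_log" | grep -ci 'error' || true
else
    echo "could not read journal for $unit" >&2
fi
```

Capturing the output once into a variable avoids invoking journalctl repeatedly if the script inspects the same lines more than once.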
sourcejedi

If you want to see the last n lines and also see new messages as they are written to the log, try this:

journalctl -u <service name> -n <number of lines> -f

Where -n indicates the number of lines you'd like to see from the tail of the log, and -f specifies that you'd like to follow the log as it changes.


Just:

journalctl -u SERVICE_NAME -e

The -e parameter is documented as:

-e, --pager-end — Immediately jump to the end of the journal inside the implied pager tool. This implies -n 1000 to guarantee that the pager will not buffer logs of unbounded size. This may be overridden with an explicit -n with some other numeric value, while -n all will disable this cap.

Daniel

You could pipe the output to tail:

journalctl --unit=my.service | tail -n 300

The tail command prints the last lines (10 by default) of its stdin to stdout.
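As a quick sanity check of tail's line selection, synthetic input behaves the same way (no journal needed):

```shell
# Generate 1000 numbered lines and keep only the last 3.
seq 1000 | tail -n 3
# prints:
# 998
# 999
# 1000
```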

Edit: as noted in the comments, this is inefficient for very large logs.

dr_
    Totally forgot about tail - great idea, thank you very much! – user3191334 Feb 06 '18 at 08:37
    Tail can be painfully slow for large logs. The built-in -n of journalctrl is what you want. e.g. journalctl -n 300 – Drakes Jan 24 '19 at 00:41
    A singularly bad idea. Why would you pipe 30 terabytes of data into tail to show the last 10 records? – Rick O'Shea Apr 28 '20 at 22:54
    This is an especially bad answer. Don't do this! – gd1 Apr 10 '21 at 17:36
    @Drakes It's journalctl that's slow, not tail. time journalctl > logs.txt; ls -sh logs.txt; time cat logs.txt | tail -1 on my system shows that systemctl takes 81 seconds to produce 647MB of logs, and tail takes about 0.4s to get the last line. Tail directly on the text file (like we used to do before binary logs) uses further optimizations and takes just 1 millisecond. journalctl is just glacial. – marcelm Jan 22 '24 at 15:55
  • @marcelm this is inefficient regardless of the speed of journalctl; just use journalctl -n 300 to get the last 300 lines rather than getting ALL the logs to then throw everything away except the last 300 lines. – bfontaine Mar 08 '24 at 16:42

Since a tail-based solution was already provided, I tried using the sed command, and it worked fine.

The command below will display the last 300 lines:

journalctl --unit=my.service | sed -e :a -e '$q;N;301,$D;ba' 
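For reference, this is the classic sed "tail" one-liner: `:a` defines a loop label, `N` appends the next input line to the pattern space, `301,$D` deletes the oldest buffered line once more than 300 lines are held, and `$q` quits (printing the buffer) at the last line. Its behavior can be checked on synthetic input:

```shell
# The one-liner keeps exactly the last 300 lines of its input,
# so 1000 generated lines are trimmed to lines 701..1000.
seq 1000 | sed -e :a -e '$q;N;301,$D;ba' | wc -l     # 300
seq 1000 | sed -e :a -e '$q;N;301,$D;ba' | head -n 1 # 701
```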
    That’s an interesting answer because: it shows what you can do with "sed", but it’s unhelpful for this question because: it’s extraordinarily less efficient than asking "journalctl" for the last ‘n’ lines directly. And it’s more verbose than using "tail". And the "sed" command is left unexplained—and without explanation, it is incomprehensible! But you’ve reminded me that I really should read up on "sed" some time! – andrewf Sep 25 '23 at 08:48