I am wondering whether there are any drawbacks (performance-wise) when output from some software is not logged to a file, but to the console instead.
The specific case would be running Docker containers, as some of the processes are intentionally configured to log to stdout.
With files, the only problems that come to mind are the disk space used by the logs and some I/O, depending on how aggressive the logging is.
To give an example, let's say I have a webserver app in a Docker container that logs access logs to stdout (console), and let's imagine that the container stays running constantly for a year. Would that kind of large buffer stay in memory (a kernel one?) the whole time, or would the kernel eventually wipe it after some limit?
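For context, here is a minimal, hypothetical stand-in (plain Python, not my actual webserver) for the kind of long-running process I mean: it runs as the container's main process, writes one access-log-style line per "request" to stdout, and never writes to a file.

```python
# Hypothetical stand-in for the scenario described above: a container's main
# process that logs only to stdout, indefinitely, and never opens a log file.
import sys
import time
from datetime import datetime, timezone

def main() -> None:
    request_id = 0
    while True:
        request_id += 1
        # One access-log-style line per simulated request, written to stdout only.
        line = f"{datetime.now(timezone.utc).isoformat()} GET /health 200 req={request_id}"
        print(line, file=sys.stdout, flush=True)
        time.sleep(1)

if __name__ == "__main__":
    main()
```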
Should I potentially be afraid of memory exhaustion and kernel panics in this case, or am I misunderstanding the concept?
(I understand that when the app|container|node goes down, the console is flushed, the same as when dmesg is cleared.)