
I'm deploying a third-party application in line with the 12-factor methodology, and one of its points says that application logs should be printed to stdout/stderr: then clustering software can collect them.

However, the application can only write to files or syslog. How do I get these logs onto stdout/stderr instead?

– kolypto

6 Answers


An amazing recipe is given in the nginx Dockerfile:

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

The app can simply keep writing to these as files, but as a result the lines will go to stdout & stderr!
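The same recipe carries over to any application that logs to a fixed file path. A minimal sketch, where the base image, app binary, and log paths are placeholders for illustration:

```dockerfile
FROM debian:stable-slim
# Hypothetical app that writes its logs to fixed paths under /var/log/myapp
RUN mkdir -p /var/log/myapp \
    && ln -sf /dev/stdout /var/log/myapp/app.log \
    && ln -sf /dev/stderr /var/log/myapp/error.log
CMD ["/usr/local/bin/myapp"]
```

Note this only works if the app opens the path for writing/appending; if it rotates the log (renames it and creates a fresh file), the rotation replaces the symlink with a regular file.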

– kolypto
  • This. Is. Just. Brilliant. And versatile. – spacediver Nov 15 '14 at 23:24
  • The one thing that's a problem with this is when the thing you're running insists on forking itself to the background as a non-root process. I'm having this with `squid3`, and it then has trouble with permissions on `/dev/stdout`. – mikepurvis Dec 17 '14 at 03:51
  • This doesn't always work - for example, if the application attempts to seek within the file, this method will generally not work. – mixja May 24 '16 at 11:45
  • Link is down... – Babken Vardanyan Jun 20 '17 at 10:23
  • Everything is a file, for the win. – RubberDuck Jul 15 '17 at 20:12
  • The symlink isn't required with the latest Nginx. You can simply redirect to stdout and stderr. – user169015 Nov 16 '18 at 10:16
  • Please note this works in most cases; however, if the process doing the logging is not running as PID 1, the logs will not reach the stdout that Docker reads from. – leeman24 Oct 31 '19 at 15:34
  • I have multiple access.log and error.log files (one access log per server block) - how would this even work? – AATHITH RAJENDRAN Dec 22 '19 at 05:27
  • Actually I'm struggling with MySQL to get the logs to the stdout of the container, and even after having linked the log files to `/dev/stdout` in the `Dockerfile` or in the entrypoint script, and even with mysqld running as PID 1, it doesn't work for me. – ZedTuX Dec 10 '20 at 07:12

For a background process in a Docker container (e.g. when connecting with `docker exec` to `/bin/bash`), I was able to use:

echo "test log1" >> /proc/1/fd/1

This sends the output to the stdout of PID 1, which is the one Docker picks up and logs.
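This file descriptor works from anywhere inside the container, not just an interactive shell. For instance, cron jobs (which never run as PID 1) can redirect each job's output there. A sketch, with a hypothetical script path:

```
# crontab entry: send the job's stdout/stderr to the container's log stream
* * * * * /usr/local/bin/report.sh > /proc/1/fd/1 2> /proc/1/fd/2
```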

– Pieter
  • This is the correct answer. The logs should be sent to the stdout of PID 1; if your application is running under another PID, they won't be seen by `docker logs` or `kubectl logs`. I have a container that schedules tasks through crontab, which isn't running as PID 1. – leeman24 Oct 31 '19 at 15:33
  • @leeman24 this is not correct: any process forked from PID1 will inherit all open FDs including `stdout` and `stderr`, i.e. the `stdout` of forked processes will be collected by `docker logs` as well. Only if a forked process closes or changes its `stdout` to something else it won't be available in `docker logs` anymore. _Then_ you can use `/proc/1/fd/1` to "reconnect" the output to `docker logs`. – acran Apr 14 '22 at 12:39
  • Simplest, easiest, reliable. – Rax Aug 15 '22 at 07:43

In another question, Kill child process when the parent exits, I got a response that helped to sort this out.

This way, we configure the application to log to a file, and continuously `tail -f` it. Luckily, `tail` accepts `--pid PID`: it will exit when the specified process exits. We pass `$$` there: the PID of the current shell.

As a final step, the launched application is exec'ed, which means that the current shell is completely replaced with that application.

Runner script, run.sh, will look like this:

#! /usr/bin/env bash
set -eu

rm -rf /var/log/my-application.log
tail --pid $$ -F /var/log/my-application.log &

exec /path/to/my-application --logfile /var/log/my-application.log

NOTE: by using `tail -F` we list filenames, and tail will keep retrying them, so it reads them even if they appear later!

Finally, the minimalistic Dockerfile:

FROM ubuntu
ADD run.sh /root/run.sh
CMD ["/root/run.sh"]

Note: to work around some extremely strange `tail -F` behavior (which says "has been replaced with a remote file. giving up on this name"), I tried another approach: all known log files are created & truncated on startup. This way I ensure they exist, and only then tail them:

#! /usr/bin/env bash
set -eu

LOGS=/var/log/myapp/

( umask 0 && truncate -s0 $LOGS/http.{access,error}.log )
tail --pid $$ -n0 -F $LOGS/* &

exec /usr/sbin/apache2 -DFOREGROUND
– kolypto
  • Especially for the Apache server, I'm using piped logs (http://httpd.apache.org/docs/2.4/logs.html#piped), and it looks like it works. – php-coder Oct 09 '15 at 09:49
  • This solves similar issues with Alpine, and seems to work fine with BusyBox `tail` even without the `--pid` option. – Ryan Dec 10 '18 at 02:12
  • The workaround worked for me. Extremely useful, thank you. – Tamas Kalman Mar 18 '19 at 23:49
  • While this works pretty great with Docker, it doesn't with Kubernetes! I don't know why but the `tail` process is missing in the pod's container... – ZedTuX Dec 10 '20 at 08:56

For nginx, you can have nginx.conf point to /dev/stderr and /dev/stdout like this:

user  nginx;
worker_processes  4;
error_log  /dev/stderr;
http {
    access_log  /dev/stdout  main;
...

and your Dockerfile entry should be

/usr/sbin/nginx -g 'daemon off;'
– Muayyad Alsadi

In my case, making a symbolic link to stdout didn't work, so instead I ran the following command:

ln -sf /proc/self/fd/1 /var/log/main.log 
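A likely reason this works where `/dev/stdout` doesn't: on most Linux systems `/dev/stdout` is itself a symlink chain ending in `/proc/self/fd/1`, and `/proc/self` is resolved by whichever process opens the link, so the writer lands on its own stdout rather than on a device node with awkward permissions. A small self-contained sketch of the mechanism (temp paths only, no Docker required):

```shell
#!/usr/bin/env bash
set -eu
# Demo: a symlink to /proc/self/fd/1 behaves as the *writing* process's
# own stdout. Any process that opens the link writes to its stdout.
tmp=$(mktemp -d)
ln -sf /proc/self/fd/1 "$tmp/main.log"
echo "hello from the log file" > "$tmp/main.log"   # prints: hello from the log file
rm -rf "$tmp"
```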

I've just had to solve this problem with apache2, and wrestled with using CustomLog to try redirecting to /proc/1/fd/1, but couldn't get that working. In my setup, Apache was not running as PID 1, so kolypto's answer didn't work as-is. Pieter's approach seemed compelling, so I merged the two, and the result works wonderfully:

# Redirect apache log output to docker log collector
RUN ln -sf /proc/1/fd/1 /var/log/apache2/access.log \
    && ln -sf /proc/1/fd/2 /var/log/apache2/error.log

Technically this keeps Apache's access.log and error.log going to stdout and stderr as far as the Docker log collector is concerned, but it'd be great if there were a way to separate the two outside the container, like a switch for `docker logs` that would show only one or the other...
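As it happens, `docker logs` does preserve the separation, provided the container wasn't started with `-t` (a TTY merges both streams into one): the container's stdout comes out on `docker logs`' own stdout and its stderr on stderr, so ordinary shell redirection can split them. The container name below is a placeholder:

```shell
# Only the access log (stdout); stderr is discarded
docker logs my-apache 2>/dev/null

# Only the error log (stderr)
docker logs my-apache 2>&1 >/dev/null

# Capture each stream to its own file
docker logs my-apache > access.log 2> error.log
```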