30

I'm just getting started with Docker and right now I'm trying to figure out how to set up my first dockerized Apache 2 / PHP environment. Up to now I have been using full Linux VMs, where the log files were written to /var/log/apache2 and logrotate switched to a new file each day.

Log files were mainly used for immediate error detection (i.e., logging on to the server and opening the current access.log and error.log with less) and for fail2ban.

If I'm correct, that is not practicable in a Docker environment, mainly because you usually cannot log in to containers to have a look at the logs. The logs will also be lost if the container is removed.

So: What is the most common method to work with/emulate/replace access.log/error.log in that situation? What are common solutions for both production and development environments?

My ideas so far include an NFS share (slow, and it may cause filename collisions if you're not careful) and Logstash (not sure whether it is worth the effort and practicable for smaller sites or even dev environments?), but I'm sure smart people have come up with better solutions.

Not sure if it makes a difference, but currently I'm basing my Docker image on php:5.6-apache.

BlaM

6 Answers

15

You can still use the `docker exec -it <container name> /bin/bash` command to get into your container and do your regular job. You can also replace /bin/bash with any other command, or with a .sh script, to execute it directly.

To get a file out of the container, use `docker cp <container name>:/path/to/file /path/on/your/local/machine`.

For your daily jobs, you can schedule those commands with cron. I also highly recommend creating aliases for your frequent docker commands, so that you can use Docker happily with a few keystrokes.

The `docker logs <container name/id>` command is for viewing the logs of a container: it shows whatever the container's main process wrote to stdout/stderr.
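
Putting that together, a rough sketch (the container name my-apache, the log path and the backup path are assumptions; the copy steps only make sense if Apache actually writes file-based logs inside the container):

# handy aliases for the frequent commands
alias dsh='docker exec -it my-apache /bin/bash'
alias dlog='docker logs -f my-apache'

# copy today's access log out of the container
docker cp my-apache:/var/log/apache2/access.log ./access.log

# crontab entry: copy the log out every night at 00:05 (% must be escaped in crontab)
5 0 * * * docker cp my-apache:/var/log/apache2/access.log /var/backups/access-$(date +\%F).log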

Fony Lew
  • In addition, `docker attach` is a good way to see stdout from your container. But be aware that if you press Ctrl+D or Ctrl+C, it will terminate your ongoing task. So you have to detach properly using the escape sequence `Ctrl+P Ctrl+Q`. If you just want to shell into your container, I prefer using the `exec` command above. – Fony Lew Jan 09 '18 at 04:43
  • I *was* running Docker inside VirtualBox, but I wasn't able to detach properly using the above escape sequence `Ctrl+P Ctrl+Q`. `attach` only showed me what was already going into docker logs, so I'd be very careful. Ctrl+C did kill the container, which `docker start mycont` resolved. – John Jul 26 '22 at 14:47
  • The comment on `docker attach` and safe detaching only applies if the container was started with both `-i -t`. https://serverfault.com/a/1025870/977734 As I found out, it doesn't work when the container was started with neither, but `docker start` soon got it up again. – John Jul 26 '22 at 15:23
  • So it's probably safer to just use `docker logs` for stdout or `docker exec -it` instead of attach then. Normally, I use `exec` together with `tmux` to keep the session running in the background. – Fony Lew Jul 26 '22 at 16:01
9

How about writing the access and error logs to stdout and stderr?

https://mail-archives.apache.org/mod_mbox/httpd-users/201508.mbox/%3CCABx2=D-wdd8FYLkHMqiNOKmOaNYb-tAOB-AsSEf2p=ctd6sMdg@mail.gmail.com%3E

https://gist.github.com/afolarin/a2ac14231d9079920864

RUN ln -sf /dev/stdout /var/log/nginx/access.log

RUN ln -sf /dev/stderr /var/log/nginx/error.log
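
The same trick should work for Apache; a sketch on top of the php:5.6-apache image from the question (the paths are the Debian defaults and may differ in your image):

FROM php:5.6-apache
# link Apache's log files to the container's stdout/stderr
RUN ln -sf /dev/stdout /var/log/apache2/access.log \
 && ln -sf /dev/stderr /var/log/apache2/error.log \
 && ln -sf /dev/stdout /var/log/apache2/other_vhosts_access.log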

Centralized logging with ELK would allow for more proactive monitoring though. But you already thought of that one yourself.

JayMcTee
7

In the Apache configuration file you can add:
CustomLog /dev/stdout
ErrorLog /dev/stderr
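
Note that CustomLog also needs a log format argument. For example, a minimal virtual host sketch (combined is one of the formats predefined in the stock Apache configuration; adjust paths and ports as needed):

<VirtualHost *:80>
    DocumentRoot /var/www/html
    # error log goes to the container's stderr
    ErrorLog /dev/stderr
    # access log goes to the container's stdout; CustomLog requires a format
    CustomLog /dev/stdout combined
</VirtualHost>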

and to see the logs use the command below:
docker logs container_id

S.Bao
  • This is a nice solution, but [CustomLog](https://httpd.apache.org/docs/current/mod/mod_log_config.html#customlog) needs a format parameter too. For example, it could be `CustomLog /dev/stdout combined`. – mlissner Oct 25 '20 at 02:45
3
root@my_docker:~ # ls -l /var/log/apache2/
total 0
lrwxrwxrwx 1 root root 11 Jul 17 04:55 access.log -> /dev/stdout
lrwxrwxrwx 1 root root 11 Jul 17 04:55 error.log -> /dev/stderr
lrwxrwxrwx 1 root root 11 Jul 17 04:55 other_vhosts_access.log -> /dev/stdout
root@my_docker:~ #

The Docker image I chose had already linked all *.log files to /dev/stdout and /dev/stderr, so I couldn't read them inside the container.

After removing those symlinks and restarting Apache, I can read the logs from /var/log/apache2/ inside the container.
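
Roughly, the steps looked like this (a sketch; it assumes the compose service is called apache, matching the command below):

docker-compose exec apache bash -c "rm /var/log/apache2/*.log"
docker-compose restart apache

The logs can then be followed with: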

docker-compose exec apache bash -c "tail -f /var/log/apache2/*.log"
Ohad Cohen
3

Maybe this feature did not exist when the question was asked, but with docker run's -v argument you can mount a directory on the host onto a directory in the container.

docker run -v [host_dir]:[container_dir]

This way the log (or other) files survive when the container is deleted, and you can access the files as if Apache were installed on the host rather than in a container.
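
For example, a sketch for the image mentioned in the question (host paths and the container name are arbitrary assumptions; note that the official php:*-apache images may send Apache's logs to stdout/stderr by default, so you might have to point ErrorLog/CustomLog back at files under /var/log/apache2 for this to capture anything):

docker run -d --name my-apache \
  -p 8080:80 \
  -v /srv/my-site/html:/var/www/html \
  -v /srv/my-site/logs:/var/log/apache2 \
  php:5.6-apache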

Alternatively, you could push log files to a central location. The Elastic (ELK) stack uses Filebeat to achieve this, but it should be possible to run Filebeat independently if you do not care for the rest of the stack.

1

So far I have found "docker logs" mentioned several times.

I'm an absolute Docker newbie, so that might hold the solution to my problem, but so far I haven't fully understood the concept behind that command.

Docker seems to keep all stdout output in JSON files in /var/lib/docker/containers/ and gives me a chance to access them through the logs command.

So far I'm not sure how to actually use the output.
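
For what it's worth, a sketch of how that output could be consumed (the container name is an assumption; splitting the streams back into access.log/error.log only works if Apache writes the access log to stdout and the error log to stderr, as in the answers above):

# follow the last 100 lines, similar to tail -f
docker logs -f --tail 100 my-apache

# split the streams back into the familiar files
docker logs my-apache > access.log 2> error.log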

BlaM