
I would like to know the easiest way to forward my Docker container logs to an ELK server; the solutions I have found so far by searching the internet didn't work at all.

Basically I have a Docker image that I run using docker-compose. This container does not log anything locally (it is composed of different services, but none of them is Logstash or similar), but I do see logging through docker logs -tf containerName or docker-compose logs. Since I am starting the containers with Compose, I cannot make use (or at least I don't know how) of the --log-driver option of docker run.

Thus I was wondering if someone could enlighten me a bit on how to forward those logs to an ELK container, for example.

Thanks in advance,

Regards

SOLUTION:

Thanks to madeddie I was able to solve my issue in the following way. Note that I used the basic ELK-stack-in-containers setup that madeddie suggested in his post.

First, I updated the docker-compose.yml file of my container to add the logging entries as madeddie told me. I included one entry per service; a snippet of my docker-compose.yml looks like this:

    version: '2'
    services:
      mosquitto:
        image: ansi/mosquitto
        ports:
          - "1883:1883"   # Public access to MQTT
          - "12202:12202" # mapping logs
        logging:
          driver: gelf
          options:
            gelf-address: udp://localhost:12202
      redis:
        image: redis
        command: redis-server --appendonly yes
        ports:
          - "6379:6379"   # No public access to Redis
          - "12203:12203" # mapping logs
        volumes:
          - /home/dockeruser/redis-data:/data
        logging:
          driver: gelf
          options:
            gelf-address: udp://localhost:12203

Secondly, I had to use a different port number per service in order to be able to forward the logs.

Finally, I updated the docker-compose.yml file of my ELK container to map each of the UDP ports where I was sending my logs to the one that Logstash listens on:

logstash:
  build: logstash/
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
    - "12202:12201/udp" #mapping mosquitto logs
    - "12203:12201/udp" #mapping redis logs

This configuration, plus adding a gelf {} entry in logstash.conf, made it work. It is also important to set up the IP address of the Docker service properly.
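For reference, the gelf {} entry mentioned here is a small addition to the input section of logstash.conf; the gelf input plugin listens on port 12201/udp by default, which is why the mappings above target container port 12201:

    input {
        tcp {
            port => 5000
        }
        gelf {
        }
    }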

Regards!

ndarkness
  • This is an interesting solution: you now have a port forwarded on all other containers to the logstash container. Normally you would not have all those 1220* ports on the other containers, only the 12201/udp on the logstash one, and the logging would point to `gelf-address: udp://ip_of_dockerhost:12201` (if all the containers are running on the same host, then 172.17.0.1 would work as ip_of_dockerhost; otherwise use the main IP address of the host the logstash container is running on). – madeddie Aug 12 '16 at 13:31
  • I tried to do so, but while running `docker-compose up` I saw a complaint that the port was already in use. – ndarkness Aug 22 '16 at 10:05
  • yes, because you open a port for logstash on _all_ your containers, while you should only open it for the logstash container. There is no need for all the other containers to listen for logstash connections; therefore, they don't need to open a port for it. – madeddie Aug 22 '16 at 10:21
  • I don't understand it completely. Do you mean that my `logging` option should be at the same level as the `services` tag instead of per container? – ndarkness Aug 25 '16 at 12:14
  • the logging keyword doesn't actually open a port to listen on, it configures where docker connects _to_. I'm talking about the port mappings you make on each container: you can only _open_ a port once, but you shouldn't open a port for logging on _each_ container, just the logstash one. The problem lies in your usage of "localhost" as the host of the gelf endpoint; instead of localhost you should use the IP of the machine where the logstash container is running, or 172.17.0.1 if all the containers are on the same host, or use "logstash" if all containers are started with the same compose file. – madeddie Aug 25 '16 at 12:35
  • I know what you mean now, I think I tried that before getting the error that the port was already in use, but I can give a second try. Thanks again – ndarkness Aug 25 '16 at 13:05
  • @madeddie I am facing a small issue with that elk stack: I can see that kibana or elasticsearch restarts every 2 hours or so, have you seen this behaviour as well? If I set my images to the latest instead of the built ones, would this stack still work? – ndarkness Sep 22 '16 at 11:13
  • @madeddie in order to have less load on my server, I wanted to change the port forwarding, however, I cannot manage to make it work. I added a line like `gelf-address: udp://172.17.0.1:12201` in each service, getting an error when I start up the services because the port is in use: `ERROR: for redis driver failed programming external connectivity on endpoint ttnbackend_redis_1 : Bind for 0.0.0.0:12201 failed: port is already allocated ERROR: for broker driver failed programming external connectivity on endpoint ttnbackend_broker_1 : Bind for 0.0.0.0:12201 failed: port is already allocated` – ndarkness Sep 22 '16 at 13:05
  • Do you also have ports configured per service? The gelf-address doesn't actually bind a port – madeddie Sep 22 '16 at 20:17
  • Yes I have ports per service – ndarkness Sep 29 '16 at 08:50
  • do you configure the 12201 port per service? because you shouldn't – madeddie Oct 04 '16 at 16:06
  • I think I did, should I only add a line like `gelf-address: udp://172.17.0.1`? Then it will complain that it needs a port – ndarkness Oct 06 '16 at 07:13
  • the gelf line needs a port, I meant a "ports" configuration – madeddie Oct 08 '16 at 20:24
  • Nowadays I keep using "the port forwarding" solution... – ndarkness Oct 10 '16 at 07:46
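The alternative layout madeddie recommends in the comments above can be sketched like this (an illustrative snippet, assuming all containers run on the same host so that 172.17.0.1 is the Docker bridge IP; only the logstash service in the ELK compose file keeps a `- "12201:12201/udp"` mapping, and the app services publish no log ports at all):

    services:
      mosquitto:
        image: ansi/mosquitto
        ports:
          - "1883:1883" # public access to MQTT; no per-service log port needed
        logging:
          driver: gelf
          options:
            # the Docker daemon sends logs here; nothing to publish on this service
            gelf-address: udp://172.17.0.1:12201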

1 Answer


Docker Compose has the logging keyword (see the Compose file reference):

logging: 
  driver: syslog
  options: 
    syslog-address: "tcp://192.168.0.42:123"

So if you know where to go from there, go for it.

If not, I can advise you to look into the gelf logging driver for Docker and the Logstash gelf input plugin.

If you would, for instance, use this basic ELK-stack-in-containers setup, you would update the docker-compose file and add the port mapping - "12201:12201/udp" to logstash.

Edit the logstash.conf input section to:

input {
    tcp {
        port => 5000
    }
    gelf {
    }
}

Then configure your containers to use logging driver gelf (not syslog) and the option gelf-address=udp://ip_of_logstash:12201 (instead of syslog-address).
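In Compose terms, that corresponds to a logging block like the following sketch (ip_of_logstash is a placeholder for the address of the host running the Logstash container):

    logging:
      driver: gelf
      options:
        gelf-address: udp://ip_of_logstash:12201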

The only magic you will have to take care of is how Docker will find the IP address or hostname of the Logstash container. You could solve that through docker-compose naming, Docker links or just manually.

Docker and ELK are powerful, flexible, but therefore also big and complex beasts. Prepare to put in some serious time, in reading and experimentation.

Don't be afraid to open new (and preferably very specific) questions for the issues you come across while exploring all this.

madeddie
  • Thanks for the answer, as far as I've tested I have to add that entry per service running in my `docker-compose.yml` file, right? I have done it for two services and now those ones don't log to the terminal, so I guess their logs are being forwarded. I have a bit of experience with it, but mostly from literature (I've read that Filebeat is better); could you put those configs here too? – ndarkness Aug 11 '16 at 20:31
  • I can't help you with filebeat (don't use it myself), but I'll add a sample config snippet for logstash. Although I expect you to read up on how to use the ELK stack, because that's a bit beyond the scope of this question (and also enough to fill a week of introductory training :)) – madeddie Aug 11 '16 at 20:39
  • Thanks again, it would be great if you can share the config snippet for logstash. I have downloaded a `sebp/elk` image where ELK is set up and I am planning to forward the syslog to that container. – ndarkness Aug 11 '16 at 20:42
  • I have noticed that I can only have one of my services attached to one port, due to the fact that it is TCP; can I use UDP too for the syslog? – ndarkness Aug 11 '16 at 20:47
  • UDP can also only be opened by one listening process. The `syslog-address: "tcp://192.168.0.42:123"` option means "send logs to this syslog server", so it doesn't itself set up and listen to syslog messages. – madeddie Aug 11 '16 at 20:53
  • thanks again, I will try it tomorrow. As a quick thought, if I set my `docker-compose.yml` like this, would it work now that I forward the logs? `yourapp: image: your/image ports: - "80:80" links: - elk elk: image: sebp/elk ports: - "5601:5601" - "9200:9200" - "5044:5044" - "5000:5000"` – ndarkness Aug 11 '16 at 21:14
  • no, my explanation explained how to use syslog or gelf, the container you're using is not configured to receive either, so you're basically forwarding your logs nowhere at the moment. once you configure gelf and open that port on the elk container, and configuring your containers to send their logs there, it should work – madeddie Aug 11 '16 at 21:17
  • thanks again, I changed the compose file to use port 5000 in the gelf address, since "my" elk container should listen to it with the config I posted before? I will try with your elk container tomorrow morning once I am fresh again! Thanks – ndarkness Aug 11 '16 at 21:24
  • yeah, the elk container listens to port 5000, but it's currently not configured to expect gelf traffic on port 5000. it actually expects 'lumberjack' type logs there, so logstash forwarder type. `https://github.com/spujadas/elk-docker/blob/master/01-lumberjack-input.conf`. if you're serious about using this, i really suggest reading the documentation (minimally the getting started) for elasticsearch, logstash, kibana and docker logs – madeddie Aug 11 '16 at 21:27
  • I am trying with your suggested elk image and I don't see traffic in kibana either. Something I don't understand is why logstash.conf says to use tcp as input, while my gelf uses udp to send the data – ndarkness Aug 12 '16 at 08:05
  • I have some traffic from just one of my services; I had to enable port 12201 in my service as well to be able to log, however, I cannot reuse that port for the other services... Do you know if I can reuse that port for the other services, or do I need a different one per service? – ndarkness Aug 12 '16 at 09:11
  • I finally managed it, I will mark your solution as the one that solved it and update my post with the current implementation – ndarkness Aug 12 '16 at 11:46