
I log all events on a system to a JSON file via syslog-ng:

destination d_json {
    file("/var/log/all_syslog_in_json.log"
        perm(0666)
        template("{\"@timestamp\": \"$ISODATE\", \"facility\": \"$FACILITY\", \"priority\": \"$PRIORITY\", \"level\": \"$LEVEL\", \"tag\": \"$TAG\", \"host\": \"$HOST\", \"program\": \"$PROGRAM\", \"message\": \"$MSG\"}\n"));
};

log { source(s_src); destination(d_json); };
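
A note on escaping: the template above assembles the JSON by hand, so a `$MSG` containing a double quote or backslash would produce an invalid line. Newer syslog-ng versions ship a `$(format-json)` template function that escapes values automatically; a rough, untested equivalent of the destination above (the name `d_json_safe` is just for illustration):

destination d_json_safe {
    file("/var/log/all_syslog_in_json.log"
        perm(0666)
        # format-json escapes each value, so quotes in $MSG stay valid JSON
        template("$(format-json --pair @timestamp=$ISODATE --pair facility=$FACILITY --pair priority=$PRIORITY --pair level=$LEVEL --pair tag=$TAG --pair host=$HOST --pair program=$PROGRAM --pair message=$MSG)\n"));
};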

This file is monitored by logstash (2.0 beta), which forwards the content to elasticsearch (2.0 RC1):

input {
  file {
    path => "/var/log/all_syslog_in_json.log"
    start_position => "beginning"
    codec => json
    sincedb_path => "/etc/logstash/db_for_watched_files.db"
    type => "syslog"
  }
}

output {
    elasticsearch {
        hosts => ["elk.example.com"]
        index => "logs"
    }
}
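
While debugging, a temporary stdout output with the rubydebug codec shows exactly what each event looks like before it is sent to elasticsearch (a standard logstash debugging aid, not part of my normal setup; remove it once things work):

output {
    # temporary: print every event as a Ruby-style hash to the console so
    # you can see whether "message" arrived parsed or as one flat string
    stdout { codec => rubydebug }
}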

I then visualize the results in kibana.

This setup works fine, except that kibana does not expand the message part:

[screenshot: the Kibana document view, where "message" shows up as a single unparsed string]

Is it possible to tweak any of the elements of the processing chain to enable the expansion of messages, so that their components end up at the same level as path or type?

EDIT: as requested, here are a few lines from /var/log/all_syslog_in_json.log:

{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "auth", "priority": "info", "level": "info", "tag": "26", "host": "eu2", "program": "sshd", "message": "Disconnected from 10.8.100.112"}
{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "authpriv", "priority": "info", "level": "info", "tag": "56", "host": "eu2", "program": "sshd", "message": "pam_unix(sshd:session): session closed for user nagios"}
{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "authpriv", "priority": "info", "level": "info", "tag": "56", "host": "eu2", "program": "systemd", "message": "pam_unix(systemd-user:session): session closed for user nagios"}
  • Can you post a few lines from `/var/log/all_syslog_in_json.log`? Have you tried running the `message` field through the `json` filter to expand it? – GregL Oct 21 '15 at 18:10
  • @GregL: I edited my question with your request. I am new to logstash, so how exactly can I "*run the `message` field through the `json` filter to expand it*"? – WoJ Oct 21 '15 at 18:20
  • 1
    You'd need to read the [docs](https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html) on the `json` filter, create a `filter` stanza in the config and add an appropriate `json` stanza inside it. – GregL Oct 21 '15 at 18:23
  • @GregL: I see, it seems to be exactly what I was looking for, thank you. It seems that it is not possible to dynamically map the fields, but I will be required to specifically add one `add_field` entry per field in my JSON, right? – WoJ Oct 21 '15 at 18:29
  • No, it'll do it all for you. A config like `filter { json { source => "message" } }` will result in fields called `facility`, `priority`, etc. – GregL Oct 21 '15 at 18:35
  • 1
    Have you tried using the `json_lines` codec instead of `json`? I've got log sources that look just like the example lines you gave, and `json_lines` works Just Fine to parse them correctly. – womble Oct 21 '15 at 19:27

1 Answer


I believe you are using the wrong codec on your input; you need to use `json_lines`. From the docs:

If you are streaming JSON messages delimited by \n then see the json_lines codec.

Use this codec instead. Alternatively, you could leave the codec off the input and send the lines through a `json` filter, which is how I always do it:

filter {
    json {
        source => "message"
    }
}
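
With that filter in place, the keys of each JSON line (facility, priority, program, and so on) become top-level event fields, which is what Kibana needs. For the codec route, only the codec line in your input block changes (untested sketch, based on the input from the question):

input {
  file {
    path => "/var/log/all_syslog_in_json.log"
    start_position => "beginning"
    # json_lines treats each \n-delimited line as a separate JSON event
    codec => json_lines
    sincedb_path => "/etc/logstash/db_for_watched_files.db"
    type => "syslog"
  }
}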