I'm looking through some syslog log files in my ELK stack and noticed that all the syslog_severity fields are 'notice', even though I can verify in the log files themselves that they are not. It seems like Logstash is defaulting syslog_severity to 'notice'. I have this in my Logstash filter configuration:

filter {
 if [type] == "syslog" {
    grok {
      match => { "message" => "<%{NONNEGINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }

    syslog_pri { }

    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

I've looked at the solution mentioned here, but I don't think it applies in my case given the filter config above. I've also tried the solution mentioned here and restarted my Logstash service with

sudo service logstash restart          

I've also tried restarting the rest of the services in my ELK stack, but I'm still getting 'notice' for all of my syslog_severity fields. Any idea what needs to be changed in the filter?

My log messages are of this format:

<134>1 2015-01-01T11:12:23.180242-02:00 message
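
If I'm decoding the PRI value correctly, <134> should work out to facility local0 and severity informational, not notice:

134 = 16 * 8 + 6    (facility 16 = local0, severity 6 = informational)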
– Celi Manu
  • The first thing I notice is that your grok pattern and the log line format don't match up. I get a `_grokparsefailure` when trying it in the [grokdebugger](https://grokdebug.herokuapp.com/). Second, all your events are tagged with `notice` level because that's the default for the `syslog_pri` filter when no `syslog_pri` field exists in the event, as outlined in the [docs](https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html). – GregL Feb 07 '17 at 17:01
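
Building on that comment, a grok pattern that matches the RFC 5424-style sample line above might look like the following sketch. It is untested; syslog_version is just an illustrative field name, and since the sample only shows a version, a timestamp, and a message, real log lines may need additional fields:

grok {
  match => { "message" => "<%{NONNEGINT:syslog_pri}>%{NONNEGINT:syslog_version} %{TIMESTAMP_ISO8601:syslog_timestamp} %{GREEDYDATA:syslog_message}" }
  add_field => [ "received_at", "%{@timestamp}" ]
  add_field => [ "received_from", "%{host}" ]
}

# With grok succeeding, the event actually has a syslog_pri field for this filter to decode
syslog_pri { }

date {
  match => [ "syslog_timestamp", "ISO8601" ]
}

Once grok succeeds and syslog_pri is present in the event, the syslog_pri filter should report informational for <134> instead of falling back to its notice default.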

0 Answers