
Good evening. I have an ELK stack as follows:

Clients with Beats log shippers (Windows 2003, 2008, and 2012, plus Linux Ubuntu 16.04); Logstash (FreeBSD 11.0); and Elasticsearch 5.2, Kibana, and nginx 10 (Ubuntu 16.04).

When configuring it I followed a tutorial and created a single index named logstash, which mixes together Windows event logs, Linux syslogs, and Squid access logs (the most important to the managers).

The problem is that I need to build Kibana visualizations from the Squid logs, showing things like the most-browsed domains, time spent on the Internet per user, and so on. The tutorials I've read on the Internet all say that I must filter with grok in Logstash before sending the events to Elasticsearch.

But I need the information that is already there (I can see it when I search in Kibana's Discover tab), just filtered out from the general logstash-* index.

Any light on this would be deeply appreciated.

Thanks so much in advance.

My ELK configs are as follows:

Logstash:

input {

        file {
                type => "syslog"
                # path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
                path => "/var/log/messages"
                start_position => "beginning"
        }

        beats {
                port => 5044
        }
}

filter {
# A filter may change the regular expression used to match a record or a field,
# alter the value of parsed fields, add or remove fields, etc.
#
#       if [type] == "syslog" {
#               grok {
#                       match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} (%{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}|%{GREEDYDATA:syslog_message})" }
#                       add_field => [ "received_at", "%{@timestamp}" ]
#                       add_field => [ "received_from", "%{@source_host}" ]
#               }
#
#               if !("_grokparsefailure" in [tags]) {
#                       mutate {
#                               replace => [ "@source_host", "%{syslog_hostname}" ]
#                               replace => [ "@message", "%{syslog_message}" ]
#                       }
#               }
#               mutate {
#                       remove_field => [ "syslog_hostname", "syslog_message" ]
#               }
#               date {
#                       match => [ "syslog_timestamp","MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]
#               }
#               syslog_pri { }
#       }
}

output {
        # Emit events to stdout for easy debugging of what is going through
        # logstash.
        # stdout { codec => rubydebug }

        # This will use elasticsearch to store your logs.
        elasticsearch {
                hosts => [ "172.19.160.24:9200" ]
                # manage_template => false
                # index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
                # document_type => "%{[@metadata][type]}"
        }
}

=======================================================================

Kibana:

=======================================================================

server.port: 5601
server.host: "127.0.0.1"
server.name: "kibana-xxxxxx"
elasticsearch.url: "http://172.19.160.24:9200"
elasticsearch.preserveHost: true
kibana.index: ".kibana"
kibana.defaultAppId: "discover"

=======================================================================

Elasticsearch:

=======================================================================

cluster.name: dnc-srv-logcollector
node.name: node-1-sm
node.attr.rack: r1
network.host: 172.19.160.24
http.port: 9200
index.codec: best_compression

=========================================================================

Eddy

1 Answer


If it's the Squid logs you're looking to present, you're in luck, as those are already coming in through Logstash. Give them their own type at the input stage:

file {
  path => [ '/var/log/squid/access.log' ]
  type => "squid"
}
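
Equivalently, if the Squid host ships its log with Filebeat instead of a local file input, you can set the same type on that side. A minimal sketch, assuming Filebeat 5.x and the default Squid log path:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/squid/access.log
  document_type: squid

Either way the events arrive in Elasticsearch with type set to "squid".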

This allows you to build dashboards with

type:"squid"

as one of your search terms, which will filter everything down to just the Squid logs.
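
For reference, that Kibana search is equivalent to this Elasticsearch query (a sketch against the host from your config; a match query works whether or not the type field is analyzed), which should count only the Squid documents in the shared logstash-* indices:

curl -s 'http://172.19.160.24:9200/logstash-*/_count' -H 'Content-Type: application/json' -d '
{
  "query": {
    "match": { "type": "squid" }
  }
}'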

But this is only the start. You can make the entries even easier to search by indexing the Squid fields directly. One of Squid's logging output styles mimics Apache's access-log style, which means you can parse it with a filter {} block:

if [type] == "squid" {
  grok {
    match => {
      message => [
        "%{IPORHOST:clientip} %{HTTPDUSER:ident} %{HTTPDUSER:auth} \[%{NUMBER:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{WORD:squid_result}"
      ]
    }
  }
}
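
For that pattern to match, Squid has to write Apache-style access lines rather than its native format. A minimal squid.conf sketch, assuming Squid 3.x or newer (which ships a built-in "combined" logformat); you may still need to adjust the timestamp and trailing squid_result parts of the grok pattern to match whatever your proxy actually emits:

# squid.conf - log in the Apache "combined" style instead of the native format
access_log /var/log/squid/access.log combined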

Doing it this way will allow you to build a dashboard using a Terms aggregation on the request field, which will give you your most-accessed-sites list more reliably.
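
Under the hood, that Terms visualization runs an aggregation like the one below. A sketch assuming the default Logstash index template, which adds a non-analyzed request.keyword subfield to aggregate on:

curl -s 'http://172.19.160.24:9200/logstash-*/_search' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "query": { "match": { "type": "squid" } },
  "aggs": {
    "top_requests": {
      "terms": { "field": "request.keyword", "size": 10 }
    }
  }
}'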

sysadmin1138
  • Thanks a lot, I didn't realize that the type in the proxy's Filebeat was already set to "squid", so I can make the visualizations work. Again, thanks a lot. – Eddy Apr 07 '17 at 19:19