@sysadmin1138's answer was half the problem. The other half of the problem is that, for reasons that must have seemed good at the time but seem incredibly shortsighted now, the logstash patterns for haproxy don't provide explicit data types for fields. E.g., `HAPROXYTCP` is defined as:

```
HAPROXYTCP (?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}
```
Since, for example, `bytes_read` is defined as `%{NOTSPACE:bytes_read}`, it's indexed as a string data type and thus not available for numeric visualizations. Fixing this means creating custom mappings in an index template *before* you populate the index with any data, so (a) toss all your existing data, and (b) figure out a list of all the fields you want to use that are mis-typed.
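As a starting point, an index template along these lines declares explicit numeric mappings for a few of the mis-typed haproxy fields (a sketch only — the template name, index pattern, and exact body structure vary by Elasticsearch version, and you'd extend the field list to everything you actually chart):

```
PUT _template/logstash-haproxy
{
  "template": "logstash-*",
  "mappings": {
    "properties": {
      "bytes_read":    { "type": "long" },
      "time_duration": { "type": "long" },
      "retries":       { "type": "integer" }
    }
  }
}
```

Older Elasticsearch releases additionally require a document-type level inside `mappings`, so check the template docs for your version before copying this verbatim.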
(NB: This also appears to be true for the `httpd` patterns, like `%{HTTPD_COMMONLOG}`. And probably for everything else as well.)
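A complementary option, if you'd rather coerce types on the Logstash side instead of (or in addition to) the index template, is a `mutate` filter with `convert` — then dynamic mapping sees actual numbers. A sketch, with field names taken from the pattern above:

```
filter {
  mutate {
    convert => {
      "bytes_read"    => "integer"
      "time_duration" => "integer"
    }
  }
}
```

This still won't retroactively fix fields already indexed as strings, so the "toss your existing data" step applies either way.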