
I can't understand why the firewall throughput of my server has been increasing significantly over the last few days while everything else, including traffic, remains at its normal level.

What scenario can result in stable traffic but increasing firewall throughput?

I show you two monthly munin graphs:

[Munin graphs: monthly firewall throughput, monthly traffic]

The peaks at approx. 4 a.m. every day come from a nightly job. They are normal. What is not normal is the increase in firewall throughput from formerly less than 15 packets/second to now 30+ packets/second while the traffic stays at its normal level.

My server is a virtual root server in a remote server farm. The OS is Ubuntu 14.04 LTS. The webserver is Apache, the mailserver Postfix. All packages are at their newest versions.

I did not change any settings within the last weeks. The last reboot (after updating all packages) was on 4 Feb. The strange behavior began in the morning of 9 Feb. I updated all packages again yesterday (without a reboot), but this did not influence the high firewall throughput.


Edit (answering a comment):
Here is the output of `iptables -nvL`:

root@myServer:~# iptables -nvL
Chain INPUT (policy ACCEPT 10M packets, 2883M bytes)
 pkts bytes target     prot opt in     out     source               destination         
3860K  499M fail2ban-apache-nohome  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 80,443
3860K  499M fail2ban-apache  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 80,443
77714   24M fail2ban-postfix  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 25,465,587
3860K  499M fail2ban-apache-noscript  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 80,443
 3344  714K fail2ban-ssh  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            multiport dports 21101

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 7652K packets, 5305M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain fail2ban-apache (1 references)
 pkts bytes target     prot opt in     out     source               destination         
3860K  499M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain fail2ban-apache-nohome (1 references)
 pkts bytes target     prot opt in     out     source               destination         
3860K  499M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain fail2ban-apache-noscript (1 references)
 pkts bytes target     prot opt in     out     source               destination         
3860K  499M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain fail2ban-postfix (1 references)
 pkts bytes target     prot opt in     out     source               destination         
77714   24M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain fail2ban-ssh (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 3344  714K RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Fail2ban did not ban any connection within the last 2 weeks. It banned 4 connections in January, 3 in Dec '15 and 3 in Nov '15. It never banned anything other than connections to Postfix, with one exception: in August 2015 it banned SSH, which was my own connection, after I had three times tried to log in with a wrong password.

2 Answers


If traffic levels are the same but packet counts have increased (and these counters are accurate), then the simple answer is that your average packet size has decreased over time, but this is not normal behavior per se. I would run tcpdump or tcpstat and get a capture of the packets on the wire. Even if you don't have historical data (i.e. from before the packet count increased) to compare against, you'll get a real look at the packets and what's in them.
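As a rough sanity check of this hypothesis, the average packet size can be read straight off the `iptables` counters in the question. The snippet below just redoes that division with the rounded values iptables printed ("10M packets, 2883M bytes" on the INPUT chain), so the result is approximate:

```python
# Approximate average packet size on the INPUT chain, using the rounded
# counters from the question's `iptables -nvL` output.
packets = 10_000_000          # "10M packets" (rounded by iptables)
total_bytes = 2_883_000_000   # "2883M bytes" (rounded by iptables)

avg_size = total_bytes / packets
print(f"average packet size: {avg_size:.0f} bytes")  # prints: average packet size: 288 bytes
```

Note that these counters are cumulative since the rules were loaded, so a recent shift in average packet size would only show up by sampling the counters twice and comparing the per-interval rates.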

– vigilem
  • Shrinking packet size was also my first idea, but then I have to ask: why would packets have been shrinking for 3 days? There is no reason for packets to do this. – Hubert Schölnast Feb 12 '16 at 13:46
  • I agree - there's definitely no reason. That's why I suggest getting a real trace from the wire. If I were in your position, that's where I'd start, so I can see a) whether this is the case, and b) if it is, what's sourcing these packets? – vigilem Feb 12 '16 at 14:15

The reason was found (and the problem was solved):

I mentioned in my question that my server is a virtual root server in a remote server farm. Another virtual server, rented by a different customer of this server farm and located in the same segment as my own server, had been hacked. What I saw on my firewall were TCP packets from this hacked server. But none of these packets entered my own realm, so none of them could increase my own traffic.

The team from the server farm took the hacked machine offline, and at the same moment my own firewall throughput jumped back to its normal level.