On my FreeBSD box:

# uname -rimp
9.1-STABLE amd64 amd64 GENERIC

flow-tools:

> pkg_info -x flow
Information for flow-tools-0.68_7:

Comment:
Suite of tools and library to work with netflow data


Description:
Tools to capture, replicate, print, filter, send and other works
on Cisco's NetFlow Export.

WWW: http://www.splintered.net/sw/flow-tools/

The collector is ng_netflow, started with:

    /usr/sbin/ngctl mkpeer ipfw: netflow 30 iface0
    /usr/sbin/ngctl name ipfw:30 netflow

    /usr/sbin/ngctl msg netflow: setdlt {iface=0 dlt=12}
    /usr/sbin/ngctl msg netflow: setifindex {iface=0 index=5}
    /usr/sbin/ngctl msg netflow: settimeouts {inactive=15 active=150}
    /usr/sbin/ngctl mkpeer netflow: ksocket export inet/dgram/udp
    /usr/sbin/ngctl msg netflow:export connect inet/127.0.0.1:9995
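
For reference, the wiring can be double-checked with plain ngctl subcommands (generic commands, nothing specific to ng_netflow):

    # list all netgraph nodes; a node named "netflow" of type netflow should appear
    /usr/sbin/ngctl list
    # show the netflow node with its hooks: iface0 towards ipfw:, export towards the ksocket
    /usr/sbin/ngctl show netflow: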

And the ipfw rule (listed with its packet and byte counters):

02750  59239017674  33111253913522 ngtee 30 ip from any to any via em0
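
The rule itself was added with something along these lines (rule number 02750 and cookie 30 matching the mkpeer command above):

    # tee a copy of every packet passing via em0 into the ipfw: netgraph node, cookie 30
    /sbin/ipfw add 02750 ngtee 30 ip from any to any via em0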

The export is relayed by flow-fanout to flow-capture.

# ps axww | grep flow
15106 ??  Ss        2:50,08 /usr/local/bin/flow-fanout -p /var/run/flow-capture/flow-fanout.pid 127.0.0.1/0.0.0.0/9995 127.0.0.1/127.0.0.1/9556
16367 ??  Ss       11:28,63 /usr/local/bin/flow-capture -n 95 -N 3 -z 5 -S 5 -E270G -w /var/netflow -p /var/run/flow-capture/flow-capture.pid 127.0.0.1/0.0.0.0/9556
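
A quick way to confirm the UDP bindings (ports taken from the command lines above):

    # flow-fanout receives on 9995, flow-capture on 9556
    sockstat -4l | egrep '9995|9556'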

For a reason unknown to me, flow-capture reports this in its logs:

Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=103.247.29.1 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=116.115.58.13 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=186.85.188.1 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=186.84.72.1 d_ver=5 pkts=2 flows=60 lost=480 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=186.85.212.1 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=190.149.28.1 d_ver=5 pkts=2 flows=60 lost=0 reset=0 filter_drops=0
Mar 27 10:20:00 rubin flow-capture[16367]: STAT: now=1364358000 startup=1364227269 src_ip=127.0.0.1 dst_ip=190.149.4.1 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0
 ....
Mar 27 10:25:00 rubin flow-capture[16367]: STAT: now=1364358300 startup=1364227269 src_ip=127.0.0.1 dst_ip=0.0.0.0 d_ver=5 pkts=8253374 flows=246879659 lost=71012 reset=0 filter_drops=0
  ....
Mar 27 10:25:28 rubin flow-fanout[15106]: ftpdu_seq_check(): src_ip=127.0.0.1 dst_ip=0.0.0.0 d_version=5 expecting=895162410 received=895162440 lost=30

I cannot understand why all these IPs are here.

I have no configuration that refers to IPs like 190.149.4.1, 186.85.212.1, or any of the others.

Also, flow-fanout and flow-capture both eat more and more memory. The last time, before I restarted those daemons, they had grown to about 3 GB of memory.
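
For the record, the resident set sizes can be watched via the pid files passed with -p, roughly like this:

    # rss is reported in kilobytes
    ps -o pid,rss,command -p `cat /var/run/flow-capture/flow-fanout.pid`
    ps -o pid,rss,command -p `cat /var/run/flow-capture/flow-capture.pid`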

Please help me with these strange IPs in the logs. Did I misconfigure something, or is it a known bug?

UPD: My question about the flow-capture logs was poorly worded. One more try:

On another server of mine, with a similar flow-capture configuration, I see only one record in the logs every 15 minutes:

Mar 28 08:55:00 flow-capture[45410]: STAT: now=1364439300 startup=1356501324 src_ip=127.0.0.1 dst_ip=127.0.0.1 d_ver=5 pkts=41948436 flows=1256544938 lost=0 reset=0 filter_drops=0 

Look at dst_ip=127.0.0.1.

When I return to my first server with the same config, I see in the log:

Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=65.121.97.1 d_ver=5 pkts=1 flows=30 lost=0 reset=0 filter_drops=0                                                                        
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=255.127.0.0 d_ver=5 pkts=1458 flows=43711 lost=21989 reset=1395 filter_drops=0                                                           
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=109.112.100.32 d_ver=5 pkts=446 flows=13380 lost=15933 reset=401 filter_drops=0                                                          
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=12.79.228.1 d_ver=5 pkts=4 flows=120 lost=0 reset=3 filter_drops=0                                                                       
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=105.110.100.44 d_ver=5 pkts=465 flows=13950 lost=16443 reset=411 filter_drops=0                                                          
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=8.0.0.0 d_ver=5 pkts=88 flows=2611 lost=210 reset=85 filter_drops=0                                                                      
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=82.111.119.115 d_ver=5 pkts=449 flows=13412 lost=11044 reset=409 filter_drops=0                                                          
Mar 28 09:05:00 rubin flow-capture[16367]: STAT: now=1364439900 startup=1364227269 src_ip=127.0.0.1 dst_ip=0.0.0.0 d_ver=5 pkts=14965070 flows=447566910 lost=130355 reset=0 filter_drops=0

Look at all these dst_ip values, 8.0.0.0 for example. It looks like a bug or a misconfiguration, but I don't know how to fix it.

Korjavin Ivan

1 Answer

Some years back there were complaints of memory leaks under BSD in 0.68, but I don't know if they've been fixed since then.

I do notice that you're using the -E flag with a very large number. If you try something much smaller (say, -E1M) and the memory footprint stays under control, I'd be inclined to put the blame for the memory use there.
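
Something along these lines, i.e. just the -E value swapped out in the command line from your question, everything else left unchanged:

    /usr/local/bin/flow-capture -n 95 -N 3 -z 5 -S 5 -E1M -w /var/netflow \
        -p /var/run/flow-capture/flow-capture.pid 127.0.0.1/0.0.0.0/9556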

I'm not sure what you're asking in your other question. Might it just be matching against all sessions for which one endpoint is 127.0.0.1?

Edit: I think I see what you're saying now. If it were NetFlow v9 I'd say there was a likely template error (which can lead to reading the wrong bytes out of the structure), but I haven't come across that with NetFlow v5. I figure there are four possibilities:

  1. flow-capture is misreporting what it receives
  2. flow-fanout is somehow mangling the flows it relays
  3. your NetFlow exporter is misreporting the traffic in the first place
  4. there's some kind of (malicious?) application on your network opening sessions to random IP addresses.

I would use a packet capture utility like Wireshark to inspect the NetFlow records and make sure they're being relayed correctly by flow-fanout and reported correctly on the command line. I've never had a problem like that with flow-fanout, though, so personally I would be looking at #3. There are a number of free flow exporters you can download and run alongside your current one to compare their output with what it produces.
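
As a rough starting point from the command line (tcpdump rather than Wireshark, since both legs run over 127.0.0.1 and lo0 is where to capture; flow-receive and flow-print ship with flow-tools, though check the argument order against your version's man page):

    # raw v5 export packets going into flow-fanout (9995) and on to flow-capture (9556)
    tcpdump -n -i lo0 udp port 9995 or udp port 9556

    # decode a copy of the stream: add 127.0.0.1/127.0.0.1/9996 (or any free port) as an
    # extra destination on the flow-fanout command line, then listen there and print the flows
    flow-receive 0/0/9996 | flow-print | head -50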

John Murphy