There has been an enormous amount of research into using machine learning techniques for anomaly detection, i.e., to scan network traffic and detect intrusions. However, this research has had very little practical impact: these techniques have seen almost no real-world deployment and are rarely used in practice.
Why not? There are a number of reasons.
First, these systems tend to have a high false alarm rate. They often raise multiple spurious alarms per day, sometimes dozens, which eats up system administrators' time. This is a fundamental challenge for anomaly detection systems, because they face a "needle in a haystack" problem: billions of packets traverse your network every day, and almost all of them are benign. Even if the algorithm has a false alarm rate as low as 0.1%, that's still on the order of a million packets spuriously flagged every day. To be practical, the anomaly detection algorithm needs to have an exceptionally low false alarm rate, which is very hard to achieve -- for the same reason that it is very difficult to detect terrorists in airport screening without flagging lots of ordinary travelers for extra searches.
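To make that arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The traffic volume, number of truly malicious packets, and detection rate are made-up illustrative numbers, not measurements from any real network:

```python
# Back-of-the-envelope base-rate calculation (illustrative numbers only).
packets_per_day = 1_000_000_000   # assume ~1 billion packets cross the network daily
malicious_packets = 50            # assume only a handful are actually malicious
false_positive_rate = 0.001       # a seemingly excellent 0.1% false alarm rate
detection_rate = 0.8              # assume the detector catches 80% of real attacks

benign_packets = packets_per_day - malicious_packets
false_alarms = benign_packets * false_positive_rate
true_alarms = malicious_packets * detection_rate

# Precision: of everything flagged, what fraction is actually an attack?
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms per day: {false_alarms:,.0f}")  # ~1,000,000
print(f"True alarms per day:  {true_alarms:,.0f}")   # ~40
print(f"Precision: {precision:.4%}")                 # ~0.004%: nearly every alert is noise
```

Under these assumptions, roughly one alert in 25,000 corresponds to a real attack, which is why even a "tiny" false alarm rate drowns the operators in noise.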
Second, anomaly detection systems tend to be not very robust. They focus on detecting unusual or novel patterns in your network traffic: anything out of the ordinary. The consequence is that any time something changes about your network, no matter how benign the change, they tend to raise alarms. Did your website just get slashdotted? Blam, spurious alarms go crazy. Did some user install a new application that plays novel NAT traversal games? Blam, here come the spurious alarms. Did someone just enable IPv6 for the first time? Blam. Did someone connect a new mobile phone with a wonky TCP/IP stack that sends out malformed packets? Blam. You get the idea.
If you want to read more about the challenges of innovation in this area, I would recommend the following research papers: