
How long should IoCs be monitored, even though they might be "outdated"? Are there best practices or other reasoning to consider?

I would think monitoring IOCs indefinitely is not ideal. Perhaps 90 days would suffice?

Example: an IP IoC will not be effective after some time, as the IP address might have changed. Likewise for domain IoCs.

Lester T.
  • You can't make a blanket statement. You need to check if an IoC is still valid. Some servers keep their IPs forever. – schroeder Sep 10 '18 at 16:24

4 Answers


Malware analysts are often asked to correlate IoCs retroactively. In 2016, we correlated IoCs that traced back to Chinese code from 2002. Just a few months ago, we found meaningful IoCs that tied a threat community to its predecessors from 2006. Without the ability to track all of these IoCs across the years, those correlations would not have been possible.

There are also ways to grade IoCs (a sketch of encoding this grading follows the list):

Level 1: SHA2 hashes, BGP ASNs, hostnames
Level 2: MD5+SHA1 hashes, IPv4/IPv6 prefixes
Level 3: Mutex names, imphashes
Level 4: Yara rules
Level 5: File, class, or block similarities (e.g., Icewater, GCluster.py, TLSH or ssdeep fuzzy hashes)
Level 6: ApiScout and master-level malware analyst techniques
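
As a rough sketch, that grading can be encoded as data so retention decisions follow from the level. Everything below (the review intervals in particular) is an illustrative assumption, not a standard:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical grading table: level -> (description, suggested review interval).
    # The levels mirror the list above; the intervals are illustrative only.
    IOC_LEVELS = {
        1: ("SHA2 hashes, BGP ASNs, hostnames", timedelta(days=5 * 365)),
        2: ("MD5+SHA1 hashes, IPv4/IPv6 prefixes", timedelta(days=2 * 365)),
        3: ("Mutex names, imphashes", timedelta(days=365)),
        4: ("Yara rules", timedelta(days=180)),
        5: ("File/class/block similarities (fuzzy hashes)", timedelta(days=90)),
        6: ("ApiScout and expert techniques", timedelta(days=90)),
    }

    @dataclass
    class Indicator:
        value: str
        level: int
        first_seen: datetime

        def due_for_review(self, now: datetime) -> bool:
            """Flag the indicator for analyst review -- not deletion --
            once its level's review interval has elapsed."""
            _, interval = IOC_LEVELS[self.level]
            return now - self.first_seen > interval

    ioc = Indicator("198.51.100.7", level=2,
                    first_seen=datetime(2016, 1, 1, tzinfo=timezone.utc))
    print(ioc.due_for_review(datetime.now(timezone.utc)))  # True: review it, don't discard it

The design choice worth copying is that expiry triggers a review, never a silent delete, which preserves the retroactive correlation described above.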

Malware tends to bypass all of the above in different ways; some malware can even lie to or bypass all of the above simultaneously. You definitely want to scale your IoCs when working against that threat space. However, you'll also want to take into account factors that computers cannot scale: the human element, surprise, and indications.

Perhaps you mean: how long should you monitor IoCs on the network? Or do you mean in sweeps across a fleet?

If you are looking for a way to manage IoCs on the network, check out the Bro Intelligence Framework. For fleet management of IoCs there are a few tools: Viper for storing and browsing classic IoCs in a database, YaraGuardian for organizing and searching Yara rules, and Timesketch for exploring the data, as well as ways of gathering artifacts and mining/monitoring log data.
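
For the network case, the Bro Intelligence Framework consumes tab-separated intel files. Here is a minimal Python sketch of generating one; the feed contents, source name, and file name are invented, and apparent staleness is only annotated as metadata rather than used to drop anything:

    import csv
    from datetime import datetime, timezone

    # Hypothetical feed: (indicator, Bro intel type, last time it was observed live).
    FEED = [
        ("198.51.100.7", "Intel::ADDR", datetime(2018, 6, 1, tzinfo=timezone.utc)),
        ("evil.example.com", "Intel::DOMAIN", datetime(2016, 3, 1, tzinfo=timezone.utc)),
    ]

    def write_intel_file(path: str) -> None:
        """Emit a file in the Bro Intelligence Framework's tab-separated
        input format. Every indicator is kept; staleness goes into
        meta.desc so analysts can weigh hits accordingly."""
        now = datetime.now(timezone.utc)
        with open(path, "w", newline="") as f:
            w = csv.writer(f, delimiter="\t", lineterminator="\n")
            w.writerow(["#fields", "indicator", "indicator_type", "meta.source", "meta.desc"])
            for value, itype, last_seen in FEED:
                age_days = (now - last_seen).days
                w.writerow([value, itype, "inhouse-feed", f"last observed {age_days} days ago"])

    write_intel_file("inhouse.intel")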

There are many ways to approach artifact and log collection, in addition to monitoring, analysis, and synthesis. I like how these professionals outline three approaches (https://posts.specterops.io/thoughts-on-host-based-detection-techniques-21d9c97082ce), especially through their toolsets: Automated Collection and Enrichment (ACE Server), Hunting ELK (HELK), and UpRoot. You'll see the techniques from these toolsets in commercial platforms such as Infocyte, Splunk Enterprise Security, and Carbon Black Cb Response, respectively. If you really want to scale fleet sweeps of IoCs, then the ACE Server, Infocyte, and maybe even the PowerForensics or PowerShell Kansa tools are my first-round suggestions.

atdre
  • That's a nice reminder about differentiating IoCs. From the rest of your answer, are you implying that a network defender should monitor for IoCs indefinitely? Should the different levels be treated differently? – schroeder Sep 08 '18 at 14:27
  • Bro wasn't built in a day. There are some commercial Bro platforms, such as Reservoir Labs and Corelight, that scale extremely well. Most shops tend to roll their own, if they have it at all. Without Bro, your org is dependent on some vendor, say, RSA SA/NW. They can't scale because these commercial platforms don't grow with your org as your org grows, so the IoCs you want to monitor on the network will be limited by them. Some vendors, such as NIKSUN, store network forensics metadata for years to decades, through various methods of shrinking IPv4 data bits into tighter, smaller bits. – atdre Sep 08 '18 at 14:35
  • By all means, though, yes: indefinitely. In fact, I would say that there must be an onus to collect all malware and phishes an org detects in central stores, again indefinitely. There must be a Priority Information Requirement (PIR) to collect. This is a central tenet of the NIST CSF, but it's also common sense. However, boys will be boys, and you'll see these types of PIRs in probably 1 or 2 orgs in the Fortune 1000 / Global 2000. – atdre Sep 08 '18 at 14:38
  • Can you roll some of that into your answer? – schroeder Sep 08 '18 at 14:41
  • Priority Information Requirement (PIR) is not mentioned in NIST CSF. That appears to be a Mil Intel term? – schroeder Sep 08 '18 at 14:45
  • Let's get some clarity from the questioner whether the intention was for monitoring IoCs on the network, across a fleet, or both, and I'll fit some more into the answer. It's difficult to discuss scaling both without addressing both. Perhaps he means something completely different, like ThreatConnect or his personal VirusTotal account. – atdre Sep 08 '18 at 14:45
  • NIST CSF: "The organization adapts its cybersecurity practices based on previous and current cybersecurity activities, including lessons learned and predictive indicators. Through a process of continuous improvement incorporating advanced cybersecurity technologies and practices, the organization actively adapts to a changing threat and technology landscape and responds in a timely and effective manner to evolving, sophisticated threats." – atdre Sep 08 '18 at 14:47
  • Right, I know that part. But PIR is not a term mentioned. Just saying, as people might get confused and the term is defined in other contexts. And it appears to be "Priority Intelligence Requirement"? – schroeder Sep 08 '18 at 14:49
  • All of these terms are seen in the infosec literature on occasion: 1) Syngress' Introduction to Cyber Warfare, 2) Packt Pub's Practical Cyber Intelligence, 3) Syngress' Building an Intelligence-led Security Program. – atdre Sep 08 '18 at 14:53

WannaCry came out 90 days after the EternalBlue vulnerability was announced. If you stopped looking for the EternalBlue IoCs after 90 days, you would have missed seeing WannaCry.

Also, some malware or techniques persist for years because networks can be slow to adapt, so those techniques still work. If you stop looking for them, you'll miss valuable threat information.

If you have the correct tooling, you should be able to manage long-term IoC maintenance. Some IoCs are inherently time-limited (like Internet-dependent IoCs), so those might have a short lifespan.

Otherwise, it seems to me that if you have automated tooling to detect and respond to IoCs, or you can block them before they manifest (anti-malware, firewalls, IDS, etc.), then you can (maybe) stop actively looking for them on the layers of detection that occur after those technical controls.
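
A minimal sketch of that kind of triage, assuming a hypothetical split between network-bound and host-bound indicator types:

    from datetime import datetime, timedelta, timezone

    # Indicators whose meaning decays as Internet infrastructure churns.
    NETWORK_BOUND = {"ip", "domain", "url"}
    # Indicators that stay meaningful indefinitely (a hash is a hash).
    HOST_BOUND = {"sha256", "md5", "mutex", "imphash"}

    def needs_active_hunting(ioc_type: str, last_validated: datetime,
                             blocked_upstream: bool,
                             max_age: timedelta = timedelta(days=90)) -> bool:
        """Illustrative policy only. Host-bound IoCs are always hunted;
        network-bound IoCs drop out of *active* hunting once an upstream
        control (firewall, IDS, anti-malware) blocks them or they go stale,
        but they should stay in long-term storage for retroactive work."""
        if ioc_type in HOST_BOUND:
            return True
        stale = datetime.now(timezone.utc) - last_validated > max_age
        return not blocked_upstream and not stale

    print(needs_active_hunting("ip", datetime(2018, 1, 1, tzinfo=timezone.utc),
                               blocked_upstream=True))  # False: the firewall has it

The 90-day figure is a parameter, not a recommendation; the point is that indicator type and upstream coverage drive the decision.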

schroeder

It depends what you're aiming to detect, really. That might mean a one-off search of an environment for existing indicators (if you're fairly sure something has been compromised and want to know whether the attacker got any further), or it might mean watching for indicators on an ongoing basis (if you're trying to proactively detect intrusions).

For example, if a particular string sent to an HTTP server allows an attacker to crash it, it may be worth configuring a firewall to block that string as a temporary fix. However, once you've fixed the HTTP server so the string no longer causes a problem, you probably don't care beyond purely academic interest, and you can stop monitoring for it. The indicator is still valid (it says "someone knows about this string, and is trying to cause a crash"), but it no longer applies to your environment.
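
A toy sketch of that kind of temporary filter (the trigger string is invented):

    # Toy inline check: flag HTTP payloads containing a known crash trigger.
    # A real deployment would use a firewall, WAF, or IDS rule rather than
    # application code; the string below is purely hypothetical.
    CRASH_TRIGGER = b"%n%n%n%n"

    def should_block(request_body: bytes) -> bool:
        """Return True while the server is unpatched. Once the bug is fixed,
        this can be demoted from blocking to logging, or retired entirely."""
        return CRASH_TRIGGER in request_body

    print(should_block(b"GET /search?q=%n%n%n%n"))  # True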

Similarly, and slightly less contrived: if you're monitoring traffic from a specific country as likely malicious, you may need to adjust that rule if your company expands into the country in question, since legitimate traffic will then come from the same source.

There are some cases where you do need to keep monitoring essentially forever: any system can become the target of a DDoS attack with very little notice, so picking up on the early stages and responding appropriately requires ongoing monitoring. You can't state "we're never going to be a target for DDoS attacks", while you can confidently say "we're not running SSH, so don't need to monitor for potentially dangerous traffic on SSH ports", as long as you're willing to update your monitoring if changes occur.

Matthew

If the quality of your intelligence is high, there's no reason (aside from storage/performance-related issues) that you shouldn't keep indicators indefinitely.

Ensure there is some kind of validation of indicators before you action them, and you'll maintain decent threat-intel hygiene.
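
As a sketch of one cheap validation step, re-checking that a domain indicator still resolves before escalating on it (the indicator and the confidence policy are invented):

    import socket

    def still_resolves(domain: str) -> bool:
        """Liveness check: a domain indicator that no longer resolves may be
        sinkholed, lapsed, or re-registered by a benign party, so hits on it
        deserve lower confidence -- not automatic deletion."""
        try:
            socket.getaddrinfo(domain, None)
            return True
        except socket.gaierror:
            return False

    indicator = "evil.example.com"  # hypothetical feed entry
    confidence = "high" if still_resolves(indicator) else "needs-review"
    print(indicator, confidence)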

Attacker infrastructure does get re-used, so if you go down the route of removing indicators after 90 days, you will eventually miss notable events.

Doomgoose