
The company I work at is evaluating different risk analytics solutions to purchase, but one of the security guys introduced the idea of actually building our own internal platform/engine.

I have done lots of research to understand it better, but I am still confused. How would one go about developing one's own platform that receives data feeds, correlates/analyzes them, and then produces actionable alerts/risk scores/reports? What I am really struggling with is: how does it know if one risk is greater than another?

Do you have to create your own algorithm? Is there a framework that organizations use to build it? Or does a database do the heavy lifting?

I am sorry for all the questions; it is just that I am in the compliance department and have never had a chance to work with risk analytics solutions.

Thank you so much!

Fox2020
    As a consultant to a large number of organizations I'll say I am seeing an increasing number of organizations doing this themselves. I'll also mention that the companies I see with the best security are all doing this themselves. – Trey Blalock Mar 26 '17 at 21:38
  • I think this also depends on your definition of risk analysis. If you only want to manage static product/project risks, spreadsheets are not uncommon. If you are talking more about SIEM, then the solution can be quite complex to build yourself. – eckes Jul 25 '17 at 01:43

3 Answers


You should look into different SIEM solutions (if the company has the necessary budget to integrate one) and then see which risks are worth solving and prioritizing. If not, follow the more traditional practice:

  • Determine the attack surface.
  • Then determine the threats that are possible against the organization.
  • Then have a penetration test and a vulnerability assessment performed.
  • Finally, define the potential risks, decide whether each one actually threatens your specific organization, and rank them (a minimal scoring sketch follows this list).
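
As a minimal sketch of that last ranking step (and of the "how does it know one risk is greater than another" question from the original post), a risk register can simply score each risk as likelihood × impact and sort. All of the risk descriptions and the 1-5 scales below are invented placeholders, not real data.

```python
# Minimal risk-register sketch: rank risks by likelihood x impact.
# Every risk description and score below is an invented placeholder.

risks = [
    # (description,                          likelihood 1-5, impact 1-5)
    ("Phishing leads to credential theft",   4,              4),
    ("Unpatched internet-facing web server", 3,              5),
    ("Lost unencrypted laptop",              2,              3),
]

# Score each risk and sort so the highest-priority items come first.
scored = sorted(
    ((description, likelihood * impact) for description, likelihood, impact in risks),
    key=lambda item: item[1],
    reverse=True,
)

for description, score in scored:
    print(f"{score:2d}  {description}")
```

A commercial platform weights many more inputs (asset value, exploitability, detection coverage, and so on), but the "which risk is bigger" question ultimately reduces to some explicit scoring function like this.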

I suggest you go through established frameworks that handle risk in depth; NIST, WASC, and OWASP (mentioned below) are examples.

"but one of the security guys introduced the idea to actually build our own internal platform/engine."

This might be possible if your company has prioritized which risks are the most serious for it. In generic terms, one could use NIST; that is just an example. If the company is more focused on application security, I would suggest WASC or OWASP.

For OWASP, the Risk Rating Methodology specifies (generically) which risks are the most severe, meaning those threats should be addressed first.
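
To make that concrete, here is a rough sketch of the OWASP Risk Rating Methodology's factor-based scoring: likelihood and impact factors are each rated 0-9 and averaged, and the two resulting levels are combined into an overall severity. The severity matrix follows the published methodology; the individual factor ratings below are invented for illustration.

```python
# Rough sketch of the OWASP Risk Rating Methodology: rate likelihood and
# impact factors 0-9, average each group, map the averages to levels, and
# combine the two levels into an overall severity. The factor ratings below
# are invented for illustration; the matrix follows the published methodology.

def level(average: float) -> str:
    """Map an average 0-9 factor score to LOW / MEDIUM / HIGH."""
    if average < 3:
        return "LOW"
    if average < 6:
        return "MEDIUM"
    return "HIGH"

# (likelihood level, impact level) -> overall severity, per OWASP.
SEVERITY = {
    ("LOW", "LOW"): "Note",      ("LOW", "MEDIUM"): "Low",       ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",    ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium",   ("HIGH", "MEDIUM"): "High",     ("HIGH", "HIGH"): "Critical",
}

# Example 0-9 ratings: threat-agent + vulnerability factors (likelihood)
likelihood_factors = [5, 6, 7, 4, 6, 5, 4, 3]
# Example 0-9 ratings: technical + business impact factors
impact_factors = [6, 7, 5, 6, 4, 5, 3, 4]

likelihood = level(sum(likelihood_factors) / len(likelihood_factors))
impact = level(sum(impact_factors) / len(impact_factors))

print(f"Likelihood: {likelihood}, Impact: {impact}, "
      f"Overall severity: {SEVERITY[likelihood, impact]}")
```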

Shritam Bhowmick

There are two sets of tactics for measuring and analyzing risk and uncertainty in cybersecurity.

The first is a set of foresight tools based on analytical techniques such as Multiple-Scenario Generation, which relies on Quadrant Crunching. A better-known technique is the Cone of Plausibility. There may also be ways to incorporate time-series data, such as from SIEMs or log management/archival systems, along with forecasting or backcasting approaches.

If you use a foresight tool, you will need a panel of cybersecurity experts; this is not something you can buy off-the-shelf or outsource. These will need to be domain experts for the business, and most will need to organize around an ontology such as the UCO or VERIS, as well as understand the MITRE taxonomies (e.g., CAPEC, ATT&CK, CWE, CVE, MAEC, STIX, CybOX, CPE, CCE), in order to use them as a common language. Two models will also be of use as references and guides during this process: the Diamond Model of Intrusion Analysis (N.B., the Diamond Model is taught even in the CCNA Cyber Ops SECOPS 210-255 material from Cisco Systems) and the F3EAD (Find, Fix, Finish, Exploit, Analyze, Disseminate) process to map security operations / incident response teamwork to threat-intelligence cross-functional needs.
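
As a small, hypothetical illustration of using those taxonomies as a common language, an internal incident record can carry shared identifiers that every expert on the panel reads the same way. The record structure, field names, and incident details below are invented; T1566 (Phishing) is a real MITRE ATT&CK technique ID and "social" is a real VERIS action category.

```python
# Hypothetical incident record tagged with shared-taxonomy identifiers so the
# expert panel describes the same event in the same vocabulary. The record
# structure, field names, and incident details are invented; T1566 (Phishing)
# is a real MITRE ATT&CK technique ID and "social" is a real VERIS action.

incident = {
    "id": "IR-0042",                     # hypothetical internal ticket number
    "summary": "Credential-phishing email reported by finance staff",
    "attack_technique": "T1566",         # MITRE ATT&CK: Phishing
    "veris_action": "social",            # VERIS action category
}

def incidents_using(incidents, technique_id):
    """Filter incidents by ATT&CK technique so experts compare like with like."""
    return [i for i in incidents if i["attack_technique"] == technique_id]

print(incidents_using([incident], "T1566"))
```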

By using a time-series database (TSDB) such as RRD (Round Robin Database, an older yet still-relevant standard) or Graphite Whisper (more modern), you can perform data smoothing and forecasting operations. While I haven't seen these in cybersecurity products, the concepts can be put into practice with rrdtool front-ends (I am familiar with Ganglia), Graphite visualization layers (also familiar with Grafana), and monitoring platforms (familiar with both Zabbix and Icinga 2, but obviously Nagios and others work too). As seen here -- https://blog.pkhamre.com/visualizing-logdata-with-logstash-statsd-and-graphite/ -- Graphite can take data from common log shippers and metrics aggregators such as LogStash and StatsD. However, when this approach is used, it most likely has to target an ongoing campaign from a single adversary under a non-dynamic attack paradigm (e.g., DDoS that occurs every day/week/month) and should deal with outliers and other anomalies (I am familiar with Etsy Skyline and Twitter BreakoutDetection).
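
As a toy sketch of the smoothing idea (independent of any particular TSDB), simple exponential smoothing over daily event counts can flag a day whose count sits far above the baseline built from the days before it. The counts and the 3x threshold below are invented; a real pipeline would lean on Graphite/rrdtool functions or a dedicated anomaly-detection library instead.

```python
# Toy sketch: exponentially smooth daily security-event counts and flag a day
# that far exceeds the baseline built from the days before it. The counts and
# the 3x threshold are invented placeholders.

def exponential_smoothing(values, alpha=0.3):
    """Return s where s[t] = alpha * x[t] + (1 - alpha) * s[t-1]."""
    smoothed = [values[0]]
    for x in values[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical daily counts of failed-login events exported from a SIEM.
daily_counts = [120, 135, 128, 140, 132, 460, 150, 138]
baseline = exponential_smoothing(daily_counts)

for day in range(1, len(daily_counts)):
    count, prior = daily_counts[day], baseline[day - 1]
    if count > 3 * prior:                # crude "3x prior baseline" rule
        print(f"Day {day}: {count} events vs. baseline {prior:.0f} -> investigate")
```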

The second set of tools relies heavily on statistical techniques. At least three outcomes can be surveyed:

  • A LEF (Loss-Event Frequency) and LM (Loss Magnitude) can best be calculated by canvassing a panel of domain-specific and internal-org cybersecurity experts, gathering calibrated confidence intervals in order to produce the probability of damage in dollar (or equivalent) amounts as an Exceedance-Probability (EP) curve against a series of scenarios.
  • Another curve, based on the risk tolerance of the organization, can be compared against the EP curve to determine the effectiveness of controls, i.e., how effective a mitigation would be and how it fits with the lines of business.
  • Bayes' Rule can be used to formulate how a positive penetration test producing remotely exploitable vulnerabilities affects the probability of a major data breach.

These outcomes are analyzed in the book How To Measure Anything in Cybersecurity Risk (a sketch of the first two follows). For the third outcome, it is suggested that multiple methods of penetration testing be utilized in order to provide optimal coverage. There are many theories on how this should be done, but the best I've found is the work from -- http://www.sixdub.net -- and -- http://winterspite.com/security/phrasing/ -- although perhaps there is room for crowdsourced bug-hunting programs in addition to adversarial emulation and blue, red, and purple team activities. Again, by leveraging the Diamond Model (as sixdub describes) and the F3EAD process, more-ideal conclusions can be reached.
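
Here is a minimal Monte Carlo sketch, loosely in the spirit of the book's approach, that turns an annual probability of occurrence and a calibrated 90% confidence interval on loss magnitude into exceedance probabilities. Every number below (scenario names, probabilities, dollar bounds, trial count) is an invented placeholder, not a real calibrated estimate.

```python
import math
import random

# Monte Carlo sketch of a loss-exceedance curve, loosely in the spirit of
# "How To Measure Anything in Cybersecurity Risk". Every number here is an
# invented placeholder, not a calibrated estimate from real experts.

random.seed(2017)

# Hypothetical scenarios: (name, annual probability of occurrence,
#                          90% confidence interval on the loss in dollars).
scenarios = [
    ("Ransomware outage",         0.10, (200_000, 5_000_000)),
    ("Customer data breach",      0.05, (500_000, 20_000_000)),
    ("Business email compromise", 0.25, (50_000, 1_000_000)),
]

def lognormal_params(low, high, z90=1.645):
    """Turn a 90% confidence interval into lognormal mu/sigma parameters."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z90)
    return mu, sigma

TRIALS = 10_000
annual_losses = []
for _ in range(TRIALS):
    total = 0.0
    for _name, p_event, (low, high) in scenarios:
        if random.random() < p_event:                  # event occurs this year?
            mu, sigma = lognormal_params(low, high)
            total += random.lognormvariate(mu, sigma)  # sampled loss magnitude
    annual_losses.append(total)

# Exceedance probability: chance that total annual loss exceeds each threshold.
for threshold in (100_000, 1_000_000, 10_000_000):
    prob = sum(loss > threshold for loss in annual_losses) / TRIALS
    print(f"P(annual loss > ${threshold:,}) ~= {prob:.1%}")
```

The organization's risk-tolerance curve (the second outcome) can then be overlaid on these exceedance probabilities to judge whether a proposed control is worth its cost.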

atdre
  • There has been on-going development of ontologies in the cyber domain. Here is a starting point for more-research -- https://scholar.google.com/scholar?cites=14094928535659423944&as_sdt=805&sciodt=0,3&hl=en – atdre Aug 24 '17 at 17:47
  • Humio is a time-series text db -- http://gotocon.com/dl/goto-berlin-2016/slides/KrestenKrabThorup_HumioAFastAndEfficientWayToUnderstandLogData.pdf – atdre Oct 06 '17 at 21:16
  • https://www.insaonline.org/wp-content/uploads/2018/10/INSA-Framework-For-Cyber-Indications-and-Warning.pdf – atdre Oct 24 '18 at 20:28
  • https://www.rms.com/blog/2019/12/17/toward-a-science-of-cyber-risk/ – atdre Dec 20 '19 at 16:56
  • https://www.cyentia.com/iris/ – atdre Mar 30 '20 at 21:42
  • https://caseontology.org – atdre Apr 13 '20 at 13:24

My company has its own tool and it is very good, so I am not sure about the possibilities of commercial products for this. In any case, self-made solutions always fit the company better, but they usually require more time to develop and deploy. Everything depends on the available resources, targets, and time window.

OscarAkaElvis