My background is in compilers and code optimization, and I'm wondering whether there might be interesting applications of extremely aggressive runtime code specialization to security. So: suppose we have a JIT compiler that can perform aggressive code optimizations based on runtime constants. (These runtime values are not known at compile time, so the corresponding optimizations can't be done at compile time.) Are there interesting security-related problems that could benefit from something like this?
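To make "specialization on runtime constants" concrete, here is a minimal sketch in Python, with `compile()`/`exec()` standing in for a real JIT's code generation (all names here are hypothetical):

```python
import sys

def make_specialized_scale(factor: int):
    """Generate a scaling function with `factor` baked in as a constant,
    enabling constant folding an ahead-of-time compiler could not do."""
    src = f"def scale(x):\n    return x * {factor}\n"
    ns = {}
    exec(compile(src, "<specialized>", "exec"), ns)
    return ns["scale"]

# The factor arrives only at runtime (e.g. on the command line), so the
# specialization could not have been performed at compile time.
factor = int(sys.argv[1]) if len(sys.argv) > 1 else 3
scale = make_specialized_scale(factor)
print(scale(14))  # prints 42 when factor == 3
```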

debray

1 Answer

This is probably far-fetched (and admittedly speculative), but I'll post it in the hope of stimulating ideas in others:

In traditional software design, code-path dependencies were clear and not dynamically determined. As such, any [static] security analysis that relied on source-sink trace analysis was "easy" to do: e.g., finding a code path via static analysis that uses user input in a SQL query without it having passed through a cleansing function; or user input being reflected back in output, opening the door to XSS; or a password making it into log files in cleartext.
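For concreteness, the first kind of path such an analysis hunts for might look like the following sketch (deliberately vulnerable, with hypothetical names):

```python
import sqlite3

def lookup_user(db: sqlite3.Connection, user_input: str):
    # BAD: the source (user_input) flows into the SQL sink without
    # passing through any cleansing function -- exactly the path a
    # static source-sink analysis would flag.
    return db.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

def lookup_user_safe(db: sqlite3.Connection, user_input: str):
    # GOOD: a parameterized query; no tainted source-to-sink path.
    return db.execute("SELECT * FROM users WHERE name = ?", (user_input,))
```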

However, with modern design patterns like IoC, as well as the dynamic nature of some languages like JavaScript, source-sink analysis becomes inherently "hard", since sources and sinks aren't [yet] connected at compile time.
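A tiny sketch of why this is hard (again, hypothetical names): when the sink is looked up by name at runtime, there is no call edge in the source code for an ahead-of-time analysis to follow:

```python
handlers = {}

def register(name):
    """Decorator that wires handlers into a registry, IoC-style."""
    def deco(fn):
        handlers[name] = fn
        return fn
    return deco

@register("log")
def write_log(msg: str):
    print("LOG:", msg)  # stands in for a log-file sink

def handle_request(action: str, user_input: str):
    # Which sink receives user_input is known only once `action`
    # arrives at runtime, so static analysis can't connect the path.
    handlers[action](user_input)
```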

So one thing that a JIT compiler could do, theoretically at least, is identify new source-sink paths as they are created, and either perform at least some rudimentary type analysis, or perhaps generate an event that a security monitoring app could trap in order to perform in-depth analysis at runtime.
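A minimal sketch of what that event mechanism could look like (everything here is hypothetical; a real JIT would emit this check in generated code rather than a Python wrapper):

```python
seen_paths = set()

def security_monitor(source: str, sink: str) -> None:
    # Stand-in for the monitoring app's in-depth runtime analysis.
    print(f"new source-sink path observed: {source} -> {sink}")

def traced_sink(sink_name, fn):
    """Wrap a sink so the first occurrence of each (source, sink)
    pair raises an event; later occurrences skip the check."""
    def wrapper(value, source_name):
        path = (source_name, sink_name)
        if path not in seen_paths:          # first-use penalty only
            seen_paths.add(path)
            security_monitor(source_name, sink_name)
        return fn(value)
    return wrapper

write_log = traced_sink("log_file", lambda msg: print("LOG:", msg))
write_log("hello", "http_param")   # fires the monitor event
write_log("again", "http_param")   # cached path: no event, near-zero cost
```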

The hurdle here is not just identifying source-sink paths in the JIT inexpensively, but also that the security type analysis on top of it is [currently] prohibitively slow (AFAIK). But perhaps some rudimentary type analysis could be done at runtime at a reasonable performance cost.
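One rudimentary form such a runtime type check could take is a taint mark carried in the value's type and checked cheaply at each sink; a sketch under that assumption (all names made up):

```python
class Tainted(str):
    """A string that remembers it came from an untrusted source."""

def from_request(raw: str) -> Tainted:
    return Tainted(raw)                 # source: mark values on entry

def sql_sink(query: str) -> None:
    if isinstance(query, Tainted):      # one cheap type check per call
        raise ValueError("tainted value reached SQL sink")
    print("executing:", query)

try:
    sql_sink(from_request("Robert'); DROP TABLE students;--"))
except ValueError as e:
    print("blocked:", e)
```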

LB2
  • Thanks. Your answer made me think about runtime monitoring techniques like Control-Flow Integrity (CFI) [[Abadi et al. 2005](http://research.microsoft.com/apps/pubs/?id=69217), [Zhang et al. 2013](http://bitblaze.cs.berkeley.edu/papers/CCFIR-oakland-CR.pdf)]. Given that CFI was proposed close to a decade ago, is the reason for its lack of adoption the runtime overhead involved, or are there other deeper issues? – debray May 08 '14 at 23:12
  • @debray I'm not familiar with CFI (I just took a cursory look at the paper - interesting, but a bit over my head :( ), and thus don't know if there are deeper issues preventing its adoption. If I understand it correctly, though, CFI imposes a "permanent penalty" on performance - that is, throughout the lifetime of the code's execution. A security check based on source-sink analysis (as opposed to a tainting approach) can, I think, be done more like the JIT compiler approach - there is a penalty on first use, but zero cost on successive passes - which may be more palatable than the CFI strategy. Maybe. A guess... – LB2 May 09 '14 at 14:26
  • The problem with CFI is that it requires the code to be _correct_. Even small issues like type-casting problems make CFI impossible. The vast majority of programs require _significant_ overhaul to be compatible with true, full CFI that protects forward and backward edges. – forest Nov 02 '18 at 03:17