
What kind of vulnerability in the implementation of a vision-based machine learning system (object recognition, for example) would enable an attacker to achieve remote code execution?

The only instance I can think of is a vulnerability (e.g., buffer overflow) exploited in an ML-based text recognition system via malicious text input.

As far as I can tell, the only work that has been done to exploit AI/ML algorithms involves "adversarial" inputs that fool classifiers into an incorrect classification. These seem primarily useful for bypassing ML-based authentication and confusing systems like self-driving cars, but not for attacking the computers doing the classification.

Is there research being done on this topic, or is it unlikely until more standardized ML-based systems are deployed in the wild?

jstrieb

1 Answer


Buffer overflows don't just occur with text, so it may be possible to pass an 'image' that exploits something in the image-processing code. Bad code always exists, though; OWASP has a good example page on buffer overflows, and we can construct a simple function that could overflow:

#include <stdio.h>
#include <string.h>

// Hypothetical helper: returns the detected items as a NUL-terminated,
// comma-separated string (e.g. "car,box,elephant").
char *imageprocessor(void);

int main(int argc, char **argv)
{
    char *items = imageprocessor();      // process the image seen
    char buf[8];                         // buffer for eight characters
    strcpy(buf, items);                  // no length check: overflows if items is longer than 7 chars + NUL
    printf("Items seen are: %s\n", buf); // print out the data stored in buf
    return 0;                            // 0 as return value
}

Now, let's suppose the computer vision algorithm identifies items and returns a char array with those items separated by commas. If our algorithm identifies a car, a box, and an elephant, the resulting string "car,box,elephant" is far longer than the eight-byte buffer and we cause an overflow. If our algorithm identifies text characters, we could potentially control the input to that overflow.
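For contrast, here is a minimal sketch of the bounds-checked version, assuming the same hypothetical imageprocessor interface as above: a length-limited copy truncates the detection list rather than writing past the end of the buffer, which is why this class of bug is an implementation flaw rather than an ML flaw.

#include <stdio.h>

// Hypothetical helper, as above: returns a comma-separated list of detected items.
char *imageprocessor(void);

int main(void)
{
    char buf[8];
    // snprintf writes at most sizeof buf bytes (including the NUL terminator),
    // so a long detection list like "car,box,elephant" is truncated instead of
    // overflowing buf.
    snprintf(buf, sizeof buf, "%s", imageprocessor());
    printf("Items seen are: %s\n", buf);
    return 0;
}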

You already identified adversarial inputs, and biometric authentication such as Apple's Face ID has been bypassed in the past, but those bypasses exploit the verification method the platform uses (for example, the Windows Biometric Framework on Windows) rather than any ML method.

LTPCGO