
As part of our software procurement process, we're evaluating the security of applications, e.g. by checking whether there are unaddressed vulnerability reports and whether the vendor has embedded security requirements into their software development lifecycle (e.g. scanning for vulnerable libraries, static code analysis, secure coding practices, etc.). The goal is to reduce the risk to data confidentiality, integrity, and availability posed by using unmaintained and vulnerable software. Security of connected cloud services is assessed separately, so my questions refer more to standalone (utility/productivity) applications.

Apple already performs some sort of review for all applications published on their Mac App Store, as described in https://developer.apple.com/app-store/review/guidelines/, but the guidelines don't go into detail on what is being checked.

How should a company review desired apps from the Mac App Store so that they meet the company's security requirements?

schroeder
T. B.
  • Asking for a survey of what others do is not a good question format on StackExchange. I've changed the question to be about how to devise a process. – schroeder Sep 24 '19 at 11:46
  • Thanks a lot @schroeder, makes a lot of sense, and I wasn't aware. – T. B. Sep 27 '19 at 16:12

1 Answer


The most basic review is stuff already provided by Apple, such as "what permissions does the app request" and "who published it". You can also instruct the OS to not give the app some of the permissions it may want. All apps published through the store are digitally signed, so you can be sure that the version you install is the one that was uploaded to Apple. However, Apple's review is not guaranteed to catch all the ways somebody might do something malicious with the app.
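For macOS specifically, much of this information can be pulled straight from the installed app bundle with Apple's built-in command-line tools. A minimal sketch in Python (shelling out to codesign and spctl; the Slack path is only an example of an app under review):

```python
#!/usr/bin/env python3
"""Sketch: query the signature and entitlements of a locally installed
Mac App Store app using Apple's built-in tools. The app path is an example."""

import subprocess

APP = "/Applications/Slack.app"  # example path; substitute the app under review

def run(cmd):
    """Run a command and return its combined output (these tools write to stderr)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return (result.stdout + result.stderr).strip()

# Verify the code signature covers the whole bundle and is intact.
print(run(["codesign", "--verify", "--deep", "--strict", "--verbose=2", APP]))

# Dump the entitlements (the "permissions" the app requests from the sandbox).
print(run(["codesign", "-d", "--entitlements", ":-", APP]))

# Ask Gatekeeper how it assesses the app; App Store apps report their origin.
print(run(["spctl", "--assess", "--verbose=4", "--type", "execute", APP]))
```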

For completely black-box testing of an app, the best option is network review: set it up behind an intercepting proxy such as Burp Suite, set the proxy on the device, and start capturing traffic. Verify that the app is using HTTPS or other encrypted protocols. For HTTPS, verify that it validates TLS certificates both by signer and by host. If the app uses certificate or key pinning, you won't be able to decrypt the data (though you can still tell what hosts it tries to connect to); if not, you can decrypt the data after you install the proxy's root CA certificate on the device. This will let you see whether the app is sending any data it shouldn't, or to anywhere it shouldn't.
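On macOS the proxy and the CA trust can be configured from the command line. A minimal sketch, assuming the network service is named "Wi-Fi", the proxy listens on 127.0.0.1:8080, and the proxy's root CA has been exported to burp-ca.pem (all assumptions; adjust for your setup):

```python
#!/usr/bin/env python3
"""Sketch: point macOS at an intercepting proxy and trust its root CA so the
app's HTTPS traffic can be decrypted (unless the app pins its certificates)."""

import subprocess

SERVICE = "Wi-Fi"            # as listed by `networksetup -listallnetworkservices`
PROXY_HOST, PROXY_PORT = "127.0.0.1", "8080"   # where the intercepting proxy listens
PROXY_CA = "burp-ca.pem"     # root CA certificate exported from the proxy

def run(cmd):
    subprocess.run(cmd, check=True)

# Route HTTP and HTTPS through the intercepting proxy.
run(["networksetup", "-setwebproxy", SERVICE, PROXY_HOST, PROXY_PORT])
run(["networksetup", "-setsecurewebproxy", SERVICE, PROXY_HOST, PROXY_PORT])

# Trust the proxy's root CA system-wide (requires admin rights).
run(["sudo", "security", "add-trusted-cert", "-d", "-r", "trustRoot",
     "-k", "/Library/Keychains/System.keychain", PROXY_CA])

# When the review is done, turn the proxy settings off again:
#   networksetup -setwebproxystate "Wi-Fi" off
#   networksetup -setsecurewebproxystate "Wi-Fi" off
```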

If you want to actually inspect the app's on-device data and files, you'll need a jailbroken device. Using a privileged file browser, you can retrieve the installed files and check them for libraries from untrusted authors or known-vulnerable versions. If you really want to, you can throw the files into a disassembler / decompiler and figure out exactly what they do, though this is a skilled and somewhat laborious process even if the files aren't obfuscated at all. You can also retrieve any data files (including database files) the app stores in its sandboxed part of the file system. You can check whether any sensitive data is encrypted (although it's of limited use to do so unless the app itself demands a password at startup; there's nowhere to store the key that an attacker with root access couldn't reach with enough effort), or whether the app is storing data you don't expect it to. You can also check the app's keychain and see what's stored there.
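On macOS (as opposed to iOS) the app bundle and its sandbox container are reachable without a jailbroken device, so part of this inspection can be scripted. A minimal sketch, again using a hypothetical example app path:

```python
#!/usr/bin/env python3
"""Sketch: list an app's linked libraries and its sandbox container files on
macOS, where no jailbreak is needed to reach them. The app path is an example."""

import pathlib
import subprocess

APP = pathlib.Path("/Applications/Slack.app")   # example; the app under review

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

# The bundle identifier names the app's sandbox container under ~/Library/Containers.
bundle_id = run(["mdls", "-name", "kMDItemCFBundleIdentifier", "-raw", str(APP)])

# List the dynamic libraries each executable links against; compare the
# reported versions against known-vulnerable releases.
for binary in (APP / "Contents" / "MacOS").iterdir():
    print(run(["otool", "-L", str(binary)]))

# Walk the app's container to see what data it stores (databases, caches, logs).
container = pathlib.Path.home() / "Library" / "Containers" / bundle_id / "Data"
for path in sorted(container.rglob("*"))[:200]:   # cap the listing for readability
    print(path)
```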

CBHacking
  • Thanks @CBHacking for your advice. Does Apple provide information also for macOS apps "what permissions does the app request"? E.g. on https://apps.apple.com/us/app/slack/id803453959?mt=12 I could not find this information. – T. B. Sep 27 '19 at 16:18
  • @T.B. Assuming they require a sandbox on the apps at all, which I forget whether or not they do, I would hope they list the permissions that the sandboxed app has. It might be visible from the app store itself (as opposed to the website). – CBHacking Sep 27 '19 at 17:29