
Various apps and services on modern smartphones constantly listen through the microphone (e.g. Siri or Google Assistant waiting for their wake words, or the "Now Playing" feature on Pixel phones). To address user privacy concerns, most of these services promise to process only the relevant bits of recorded audio (e.g. the voice command following the wake word, or the audio signatures of songs detected by "Now Playing").

How can users be sure that other captured sounds, such as private conversations, are not processed and transcribed locally on the device and then sent to their servers in the form of encrypted text or audio signatures? Through compression, timing, obfuscation, and encryption, a provider could make such behaviour hard or even impossible to detect via traffic analysis.

My question is: do users ultimately have to rely on trust alone, or are there effective ways to verify the privacy-related promises these services make?

I’m grateful for any ideas and insights you can share on this!

Estarossa

2 Answers


Users can only rely on the word of the app's distributor, and that has proven problematic recently.

There is no easy technical way to prove that no audio is being recorded, analyzed, or transmitted.

  • Okay, so users have to rely on trust. But iOS and Android also try to check whether the apps in their respective app stores behave as advertised. Do you think they are effective at detecting and prohibiting such behavior in third-party apps? – Estarossa Feb 23 '19 at 12:56
  • @Orochi1992 some versions allow you to block microphone access per app, etc., and those restrictions should be effective (see the sketch after this thread) – Natanael Feb 23 '19 at 13:43
  • But let's say the malicious third-party app has microphone access (as many apps do). My question is whether the malware scanning tools of iOS and Android would detect such behavior (transcribing private conversations to text and leaking the content to their servers) within the app's source code, for example. – Estarossa Feb 23 '19 at 18:33
  • If the app has microphone access, it can record whenever it likes. I would also assume that apps provided by Google/Apple do not strictly follow the restrictions that you can control for other apps. – Euphrasius von der Hummelwiese Feb 24 '19 at 10:16
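
To make the point about permission restrictions concrete, here is a minimal Kotlin sketch of how such a restriction looks from the app side, assuming a standard Android project (the helper name is illustrative; the permission APIs are Android's own). Once the user revokes RECORD_AUDIO, this check fails and the platform will not give the app a usable audio stream, regardless of what the app promises to do with recordings.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

// Illustrative helper: true only while the user leaves the microphone
// permission granted for this app. If RECORD_AUDIO has been revoked
// (e.g. via system settings), this returns false and attempts to open
// an AudioRecord will fail at the platform level.
fun canRecordAudio(context: Context): Boolean =
    ContextCompat.checkSelfPermission(context, Manifest.permission.RECORD_AUDIO) ==
        PackageManager.PERMISSION_GRANTED
```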

This is a really good question, and ultimately there is no definitive way to know. At the end of the day, as a user, you have to decide whether to place trust in the developer, the infrastructure provider (Apple, Google, etc.), the hardware you're using, and so on. Furthermore, there is no reliable way to detect that a device is listening or recording.

You can weigh the signals available to you (has it asked for microphone access? is it from a reputable developer? and so on), but that is the extent of your assurance. It is only ever a risk analysis, and most users end up accepting the risk.
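
The "has it asked for microphone access?" check can at least be done systematically. Below is a minimal Kotlin sketch, assuming an Android context; the function name is illustrative, and on Android 11+ you would additionally need the appropriate package-visibility declarations (e.g. QUERY_ALL_PACKAGES) to see the full list.

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageInfo
import android.content.pm.PackageManager

// Illustrative helper: list installed packages that have requested and
// currently hold the RECORD_AUDIO permission.
fun appsWithMicrophoneAccess(context: Context): List<String> {
    val pm = context.packageManager
    return pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { info ->
            val requested = info.requestedPermissions ?: return@filter false
            val flags = info.requestedPermissionsFlags ?: return@filter false
            requested.indices.any { i ->
                requested[i] == Manifest.permission.RECORD_AUDIO &&
                    (flags[i] and PackageInfo.REQUESTED_PERMISSION_GRANTED) != 0
            }
        }
        .map { it.packageName }
}
```

Of course, this only tells you which apps could record; it says nothing about what they do with the audio afterwards, which is exactly the trust problem described above.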

securityOrange
  • Now say a third-party app wanted to secretly transcribe conversations and send that information to its servers - while it would be impossible for the user to tell, do you think Android or iOS would detect this kind of malicious behavior, for example by scanning the app's code? – Estarossa Feb 23 '19 at 18:27
  • It's hard to say with certainty. Good (or in some cases, mediocre) obfuscation techniques would make it pretty difficult for the ecosystem provider to detect, and things like this happen quite a bit. This is the risk associated with running any app store - regulating one's own software is difficult enough, let alone someone else's. So, the short answer would be: it depends on the ecosystem, but sophisticated user surveillance can be pretty darn hard to detect. Ecosystem providers have sophisticated detection too, though, so I think that, as always, it's a cat-and-mouse game of details. – securityOrange Feb 23 '19 at 18:30
  • Great answer, thank you! I'm sure sophisticated obfuscation can be very effective ... But do you really think something as bulky as a speech recognition or hotword detection algorithm can be hidden from ecosystem providers? – Estarossa Feb 23 '19 at 18:41
  • Absolutely. Who says it has to be bulky? All you need to do is exfiltrate a recording file, and there are plenty of ways you could do that with a lot of sleight of hand. Granted, I should add the caveat that this is just my speculation, but things of this nature happen quite a bit across a variety of ecosystems. (Consider, for example, the recent discovery that you can compromise Android devices using crafted PNGs. Take a malicious PNG, have it send a file to another server, and boom, you're golden. Would Android catch this? I'm not sure, but if not, then I think something subtler would probably pass.) – securityOrange Feb 23 '19 at 18:53