
There have been ads on the radio recently for a wifi-enabled toy called Talkies, which is advertised as being able to communicate with app-enabled phones through a "trusted circle" that other phones can be added to.

(Obligatory photo of a cute wifi-enabled critter)

Especially considering the KRACK vulnerability, and the known process of "grooming" that a sexual or other predator goes through to gain a child's trust and exploit them (here is a story about how Snapchat was used), is this a toy that I should steer away from for my child (currently 3 years old)?

JohnP
  • [Probably about as secure as IP cameras depending on the user of the device.](http://www.securityinfowatch.com/article/12227605/hackers-use-25k-cameras-to-carry-out-botnet-attack) – MonkeyZeus Oct 31 '17 at 16:37
  • somewhat related: https://iot.stackexchange.com/questions/1074/why-was-the-internet-connected-my-friend-cayla-doll-banned-as-a-hidden-espionag/1083#1083 – cbeleites unhappy with SX Oct 31 '17 at 19:48
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/68063/discussion-on-question-by-johnp-how-safe-are-wifi-enabled-talking-toys). – Rory Alsop Nov 02 '17 at 07:38
  • A couple of news stories to consider: [The Bright-Eyed Talking Doll That Just Might Be a Spy](https://mobile.nytimes.com/2017/02/17/technology/cayla-talking-doll-hackers.html), ['Smart' children's toys vulnerable to hack by strangers](http://www.kristv.com/story/36707236/6-investigates-smart-childrens-toys-vulnerable-to-hack-by-strangers), [FBI Is Warning Parents About the Risks of Internet-Connected Toys Spying on Kids](http://www.slate.com/blogs/future_tense/2017/07/19/fbi_is_warning_parents_about_hacking_internet_connected_toys.html), [Smart toy flaws make hacking kids' info child's pl – ashleedawg Nov 04 '17 at 04:42

6 Answers


Be very, very careful. It's not KRACK that is the problem; it is the lax attitude to security and privacy in general. So-called "smart" consumer products can often be hijacked, accessed from the internet, or monitored. As a customer, it is hard to know whether any specific product is safe or not.

The Norwegian Consumer Council has been on the case for a while and has produced a few horror stories. From a report aptly titled #ToyFail, on three "smart" dolls:

When scrutinizing the terms of use and privacy policies of the connected toys, the NCC found a general disconcerting lack of regard to basic consumer and privacy rights. [...]

Furthermore, the terms are generally vague about data retention, and reserve the right to terminate the service at any time without sufficient reason. Additionally, two of the toys transfer personal information to a commercial third party, who reserves the right to use this information for practically any purpose, unrelated to the functionality of toys themselves.

[I]t was discovered that two of the toys have practically no embedded security. This means that anyone may gain access to the microphone and speakers within the toys, without requiring physical access to the products. This is a serious security flaw, which should never have been present in the toys in the first place.

And from another of their reports, again aptly named #WatchOut, on "smart" watches for kids:

[T]wo of the devices have flaws which could allow a potential attacker to take control of the apps, thus gaining access to children’s real-time and historical location and personal details, as well as even enabling them to contact the children directly, all without the parents’ knowledge.

Additionally, several of the devices transmit personal data to servers located in North America and East Asia, in some cases without any encryption in place. One of the watches also functions as a listening device, allowing the parent or a stranger with some technical knowledge to audio monitor the surroundings of the child without any clear indication on the physical watch that this is taking place.

And the FBI agrees:

Smart toys and entertainment devices for children are increasingly incorporating technologies that learn and tailor their behaviours based on user interactions. These features could put the privacy and safety of children at risk due to the large amount of personal information that may be unwittingly disclosed.

So unless you have a real need (other than "this is cool") for these kinds of products, I would say that your best approach is to simply stay away from them.

Anders
  • Recently, one such doll was found to be so insecure that it was classified as a "hidden espionage device" in Germany (and banned); see e.g. the discussion over at IoT: https://iot.stackexchange.com/questions/1074/why-was-the-internet-connected-my-friend-cayla-doll-banned-as-a-hidden-espionag/1083#1083 (one of the two referred to in the 3rd paragraph of the #ToyFail quote). – cbeleites unhappy with SX Oct 31 '17 at 19:52
  • ...and if you absolutely **must** have them, do the sane thing and at the very least *segregate them on a network segment all of their own*, ideally with no access (beyond that of any untrusted device on the big bad Internet) to anything else. At least that way, if something happens, you are limiting the potential damage somewhat. – user Oct 31 '17 at 19:52
  • @Michael Kjörling, no, just NO. Anders just explained that this "doll" is a spy device that needs access to the internet to work. Your privacy seems to be the last thing on the doll maker's list of concerns. The sane thing here is not to have one! – Flummox - don't be evil SE Nov 02 '17 at 08:18
  • @Flummox Why do you think I started my comment the way I did, including the formatting? I agree with this answer, and there's no way I'd buy one myself (for this and other reasons as well), but **if and only if** someone has a genuine need for something like this, there are still steps that can be taken to mitigate the risks a little. In such a situation, *not at least doing that* would be the height of irresponsibility, toward yourself, others in your household, and others on the Internet. – user Nov 02 '17 at 08:26

It really depends on your threat model. I wouldn't be particularly worried about a sexual predator in your local area having the technical skills necessary to use KRACK to inject voice into the toy. Unless the toy uses the vulnerable Linux wireless client (which can be tricked into installing an all-zero key), that attack won't work, and the partial nature of the compromise from a general key reinstallation would make voice injection nearly impossible.

Similarly, as a client device, it doesn't pose a whole lot of security risk other than possibly as a listening device, depending on whether it is always on or activated by pushing a button. KRACK wouldn't make it usable as an entry point into your network directly, so I don't see it as being any riskier than any other IoT device.

As always in security, though, it comes down to your risk aversion. Personally, if I thought it would be valuable to my child (who is also 3), I don't think I would consider the local security implications a reason not to get it for my home environment. I'd be more concerned about the controls and security on the web side.

My main concern for IoT devices isn't the local compromise so much as the web-connected remote compromise. The chance of a sufficiently skilled and motivated malicious individual being in your direct proximity is pretty low. The chance of a motivated and malicious user on the Internet trying to remotely access the IoT device is significantly higher, and it's important to understand what holes the devices punch in your network protections.
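If you do end up with one of these on your network, a quick first check is to see what the device actually exposes locally, by probing it from another machine on the same Wi-Fi. Here is a minimal Python sketch; the toy's IP address and the port list are assumptions on my part, not anything documented for Talkies specifically:

```python
# Minimal TCP probe of a single device on the local network.
# TOY_IP is an assumption -- look the device up in your router's DHCP client table.
import socket

TOY_IP = "192.168.1.50"
COMMON_TCP_PORTS = [21, 22, 23, 80, 443, 554, 1883, 8080, 8883]  # ftp, ssh, telnet, http(s), rtsp, mqtt

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

open_ports = [p for p in COMMON_TCP_PORTS if is_open(TOY_IP, p)]
print(f"{TOY_IP} answers on: {open_ports or 'none of the common ports'}")
```

An open telnet port or an unauthenticated web admin page would be an immediate red flag; a device that answers on nothing locally and only talks outbound to its cloud service is at least doing the bare minimum.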

Also, as Michael was kind enough to point out, such an attacker is much less likely to be interested in your privacy and much more likely to be interested either in attacking your other computers or in using the computational capabilities of the device as an attack bot.

AJ Henderson
  • Also, keep in mind the distinction about what a remote attacker is trying to access. Assuming you aren't a high-profile target (and if you are, you probably know better than to trust random strangers on the Internet for security advice), it's fairly unlikely that a remote attacker who gains access to an IoT device (even one that can remotely be turned into an active microphone and speaker, or camera, in your home) is particularly interested in *you specifically*. It's all the more likely that they are interested in the device itself, e.g. for its computational power or network connectivity. – user Oct 31 '17 at 20:16
  • In response to your edit, in fairness, I wasn't necessarily thinking of using the device as an attack bot, though that's an obvious thing that could happen (think Mirai and the recent DNS overload attack against Twitter and friends, for one). Think also of, e.g., bitcoin mining or sending spam e-mails. Sure, a toy like the one exemplified in the question probably doesn't have enough computational power to make a major difference, but if the cost to the attacker is next to zero, it doesn't need to do much in order to turn a profit, especially if you've got thousands of them working for you... – user Oct 31 '17 at 20:21
  • There was a recent documentary (BBC, if memory serves me correctly) telling the story of an elderly woman who answered the phone to a scammer and was convinced to transfer them a large sum of money. Key to the scam was their use of the camera and microphone on her laptop during the call. If a device like this "toy" is easier to break into than a (presumably) Windows machine, then this becomes just one more example of how an exploit of this type increases the attack surface. – Darren H Nov 02 '17 at 04:47

Welcome to the Internet of Things (IoT). This is a... thing. Therefore, it can be assimilated:

Mirai is a type of malware that automatically finds Internet of Things devices to infect and conscripts them into a botnet—a group of computing devices that can be centrally controlled.

And

One reason Mirai is so difficult to contain is that it lurks on devices, and generally doesn't noticeably affect their performance. There's no reason the average user would ever think that their webcam—or more likely, a small business's—is potentially part of an active botnet. And even if it were, there's not much they could do about it, having no direct way to interface with the infected product.

The problem is that security is seldom a consideration when making toys like this. The technology to make all this work is fairly simple, but the companies aren't paid to think about this. It's a child's toy. It's meant to be cheap and easy. And you get what you pay for.

Earlier this year, it was found that a similar child's toy had no security at all:

A maker of Internet-connected stuffed animal toys has exposed more than 2 million voice recordings of children and parents, as well as e-mail addresses and password data for more than 800,000 accounts.

The account data was left in a publicly available database that wasn't protected by a password or placed behind a firewall, according to a blog post published Monday by Troy Hunt, maintainer of the Have I Been Pwned? breach-notification website. He said searches using the Shodan computer search engine and other evidence indicated that, between December 25 and January 8, the customer data was accessed multiple times by multiple parties, including criminals who ultimately held the data for ransom. The recordings were available on an Amazon-hosted service that required no authorization to access.
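To make "required no authorization to access" concrete: it means anyone who knows or guesses the URL can pull a recording down with a plain HTTP request, with no account, token, or key involved. Here is a sketch with a purely hypothetical URL standing in for that kind of world-readable storage object:

```python
# Illustration only: fetching a world-readable cloud object.
# The URL is hypothetical -- it stands in for the kind of publicly readable
# storage object described in the breach write-up above.
import urllib.request

RECORDING_URL = "https://example-bucket.s3.amazonaws.com/recordings/message-0001.wav"  # hypothetical

with urllib.request.urlopen(RECORDING_URL) as resp:  # no login, no API key, no token
    audio = resp.read()

print(f"Fetched {len(audio)} bytes without presenting any credentials")
```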

I'm going to be honest: these things are scarily powerful in what they can do. Even if one doesn't expose your messaging, it could still be used for something malicious like a DDoS attack. If I were you, I'd pass on anything like this unless the vendor is explicit about security.

Machavity
  • Never mind the threats to national security. If *every* conversation in a large enough sample of homes is recorded and digitised, it does not really bear thinking about that data being used in future prosecutions for crimes against the state. If it is being sent upstream for AI language learning (sold to Google or AWS, say), there may be some value in preserving languages that die out in the future? – mckenzm Nov 01 '17 at 03:21

This is pretty much the same kind of toy as CloudPets. Those were toys that let parents talk with their children through the toy using a mobile app. The security was terrible. It turned out that both the user details and the pet recordings were exposed in databases with no password. And the company didn't even respond to the emails alerting them to the vulnerabilities.

You can read about this terrifying story on Troy Hunt's blog: https://www.troyhunt.com/data-from-connected-cloudpets-teddy-bears-leaked-and-ransomed-exposing-kids-voice-messages/
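To be clear about what "databases with no password" means in practice: if the database port is reachable from the internet, connecting takes a couple of lines and no credentials at all. Here is a sketch using pymongo against a placeholder address (not a real CloudPets host):

```python
# Connecting to an unauthenticated MongoDB instance: no username, no password.
# 203.0.113.10 is a documentation/placeholder address, not a real host.
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://203.0.113.10:27017/", serverSelectionTimeoutMS=3000)
print(client.list_database_names())  # anyone who can reach the port can enumerate everything
```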

Now, Talkies may actually have made the right choices (it's hard to get as many things wrong as CloudPets did!), but this illustrates the general level of security in this sector.

So no, I wouldn't trust this toy with my kid's data. Not to mention that the toy itself could be compromised (e.g. like Hello Barbie).

In fact, Germany went so far as to ban the internet-connected Cayla dolls over fears they could be exploited to target children: https://www.telegraph.co.uk/news/2017/02/17/germany-bans-internet-connected-dolls-fears-hackers-could-target/

Ángel

Internet-enabled anything poses a risk. As a rule, security is an expense, and consumers as a whole don't really consider product security when making purchasing decisions. For example, there was a thread on Reddit recently about a couple who got divorced; she didn't change the password on the Nest thermostat, so while she was out he would crank up the air conditioning or heat and run up massive utility bills. We also know of baby monitors that have been used to listen in on neighbors without their consent. I've attended IT security demos of internet-connected light switches showing how easy it was to attack them.

KRACK is important, definitely, but compared to a non-existent security posture it is irrelevant. Quite simply, if someone is concerned about security, I'd suggest not purchasing anything networkable unless they can identify a need for it to have a network connection and they have the skills to properly secure both it and their network.
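For what "properly secure their network" can look like in practice: one common approach is to put toys like this on a separate guest/IoT Wi-Fi network and then verify that nothing on the main LAN is reachable from it. A rough Python sketch of that check, run from a laptop temporarily joined to the same guest network as the toy; all of the addresses below are assumptions about a typical home setup:

```python
# Rough check that a guest/IoT network segment is actually isolated from the
# main LAN. Run from a laptop on the guest network; every address is assumed.
import socket

TRUSTED_LAN_HOSTS = [
    ("192.168.1.1", 80),    # assumed: router admin page on the main LAN
    ("192.168.1.10", 445),  # assumed: a NAS / file share
    ("192.168.1.20", 22),   # assumed: a desktop with SSH enabled
]

for host, port in TRUSTED_LAN_HOSTS:
    try:
        socket.create_connection((host, port), timeout=2).close()
        print(f"REACHABLE  {host}:{port}  <- isolation is NOT working")
    except OSError:
        print(f"blocked    {host}:{port}")
```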

WRT your trusted circle of phones: how often would you plan on managing that list? How does a phone get added to that trusted circle? Do you know when your friends sell their phones so you can decommission them from your circle? (If your answer to the last one is anything other than "no", you're probably not being realistic with yourself.)

Encourage creativity. Build skills. Get the kid a bunch of building blocks or a train. Get a Spirograph. Play cards/games with them. Find something that they will play with for hours that doesn't require your constant attention.

baldPrussian

This puts the "IOT" in "IDIOT"!

Most of the companies that make these have no clue how to prevent hackers from taking them over, sometimes programming comically stupid/obvious vulnerabilities into them.

The KRACK exploit might be irrelevant half the time, since most of these manufacturers wouldn't figure out how to implement any form of encryption in the first place.

Any type of internet-enabled voice recording is a potentially creepy and downright dangerous invasion of privacy. These devices likely use the cloud for sound processing and storage, considering they are almost certainly based on low-grade ARM chips and, at most, minimal cheap flash storage.

Even if the device itself is properly made, there's no similar guarantee about the cloud service it uses. You'd be surprised how often researchers stumble on valuable leftover data in the cloud that the previous user of a virtual machine instance failed to clean up.

user1258361