When working with Internet of Things devices, is it recommended to obfuscate or encrypt firmware images pushed to clients? This is to make reverse engineering harder.
(They should be signed of course)
No. You should not rely on the obscurity of your firmware to hide security vulnerabilities; those vulnerabilities exist whether or not you encrypt or obfuscate the image.
I have a radical suggestion: do the exact opposite. Make your firmware binaries publicly available and downloadable, freely accessible to anyone who wants them. Add a page on your site with details on how to contact you about security issues. Engage with the security community to improve the security of your product.
It is doubtful that it would be beneficial. Going open source is by far a better option than staying closed. It might seem silly and even controversial at first, but opening up a project to the public has plenty of benefits.
While there are people with malicious intent, there are also people who want to help and make the internet a better place. Open source puts more eyes on the project, not only to weigh in on potential features, bugs, and issues, but also to improve the security and stability of the "thing".
And to agree with Polynomial's answer, engaging with the community and building a base of people who help you out with security will also grow your client base by a significant margin.
Well-designed firmware should rely on the strength of its access key rather than on the attacker's ignorance of the system design. This follows the foundational security engineering principle known as Kerckhoffs's axiom:
An information system should be secure even if everything about the system, except the system's key, is public knowledge.
The American mathematician Claude Shannon recommended starting from the assumption that "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them".
You may be interested to know that, prior to the late nineteenth century, security engineers often advocated obscurity and secrecy as valid means of securing information. However, these knowledge-antagonistic approaches are antithetical to several software engineering design principles, modularity in particular.
Some people argue that open-source code can be audited by many and therefore contains few bugs. On the other hand, attackers have the same easy access and look for those same vulnerabilities. There is definitely a tradeoff here, and it is not correctly described in the previous answers.
Others mention that code should be inherently secure and therefore requires no obfuscation/encryption/hiding. It is true that a system should be designed to be secure even if an attacker knows how it works. That doesn't mean this is always the case and that the implementation is flawless. In practice, code is never 100% secure. (Take a look at web app security: why do we need security headers to protect us against XSS and CSRF attacks if there are no vulnerabilities in the web application?) Additional security measures can be taken by trying to hide the code through encryption and obfuscation. In the mobile world, reverse engineering is even seen as a serious risk: see the OWASP Mobile Top 10 risks.
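As an aside, since security headers came up: here is a minimal sketch of that kind of defense-in-depth hardening, assuming a Flask application (the framework choice and handler name are illustrative, not something from the original discussion):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Mitigate XSS by restricting where scripts and other content may load from.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    # Stop browsers from MIME-sniffing responses away from the declared type.
    response.headers["X-Content-Type-Options"] = "nosniff"
    # Ask browsers to reach the site only over HTTPS from now on.
    response.headers["Strict-Transport-Security"] = "max-age=31536000"
    return response
```

None of these headers fixes an underlying vulnerability; like firmware encryption, they only add a layer an attacker has to get through.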
As no system is 100% secure, we can only try to increase the effort required to break it.
So now, the tradeoff: open source/easily available code vs. encrypted and obfuscated code. Allowing public review of your source code can help reduce the number of bugs. But if you are a small company whose code the public has little incentive to audit for free, there is little benefit in publishing it, as nobody will look at it with good intentions, while it becomes much easier for attackers to discover vulnerabilities. (We are not talking about the newest iOS version, which every security researcher is trying to crack.)
In this case we aren't even talking about open-sourcing the code for public review; we are talking about encrypting the firmware in transit. Security researchers are not likely to buy your device just to obtain the code, discover vulnerabilities, and publish them. Therefore the odds of the good guys finding the vulnerabilities, versus the bad guys finding them, decrease.
Are you sure you are not confusing two cryptographic methods?
You should certainly sign your firmware updates for security reasons. This allows the device to verify that they come from you.
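For illustration, a minimal sketch of what signing on the build server and verification on the device could look like, assuming an Ed25519 keypair and Python's cryptography library (the names and the placeholder image bytes are made up for the example):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Build server: generate a keypair once and keep the private key offline.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # baked into the device at manufacture

firmware = b"...firmware image bytes..."  # hypothetical image
signature = private_key.sign(firmware)    # shipped alongside the image

# Device side: apply the update only if the signature checks out.
try:
    public_key.verify(signature, firmware)
    print("signature valid, applying update")
except InvalidSignature:
    print("rejecting tampered or unsigned firmware")
```

The private key never leaves you; the device only needs the public key, so extracting it from the hardware gains an attacker nothing.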
Encrypting them adds a little bit of obscurity, and that's it. Since the decrypting device is not under your control, sooner or later someone will hack it, extract the decryption key, and publish it on the Internet.
You need to ask yourself this question: is someone clever enough and interested enough to download your firmware and start looking for vulnerabilities really going to be deterred by an additional encryption layer whose key must be present on the device anyway?
It is just another hoop to jump through, no different from figuring out what disk format your firmware image is in, and not even a particularly difficult hoop. Keep in mind that far more sophisticated schemes amounting to DRM have all been broken.
Odds are, someone determined enough to hack your internet-connected coffee maker or dishwasher isn't going to be deterred by an additional encryption layer.
As to whether encrypting the firmware will prevent the detection of vulnerabilities in your code, other answers have addressed the core of it: although it may discourage some attackers, security through obscurity creates a counterproductive false sense of invulnerability.
However, I'd like to add an extra bit based on my experience. I have seen firmware packages that are encrypted, but the motivation is usually to protect the company's intellectual property rather than to act as a control against attackers.
Of course, hackers often find ways around this "control", but that's a different story.
Several people have said you shouldn't rely on obfuscating the code by encrypting it, but should just make it secure. Yet quite recently the encryption of some rather critical software in Apple's iPhones was cracked (meaning that hackers can now see the actual code, nothing more). It kept anyone from examining that code for three years, so the time from release to first crack was increased by three years. That looks like some very successful obfuscation.
And encryption goes very well together with code signing, so when your device is given new firmware, it can reject any fake. That part isn't just recommended; it is absolutely essential.
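To make the combination concrete, here is a minimal encrypt-then-sign sketch, again assuming Python's cryptography library; the AES-GCM key handling and all names are illustrative assumptions, not a prescription from this answer:

```python
import os

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Encrypt the image so casually inspecting the update file is harder.
aes_key = AESGCM.generate_key(bit_length=256)  # must also live on the device
nonce = os.urandom(12)                         # standard 96-bit GCM nonce
firmware = b"...firmware image bytes..."       # hypothetical image
ciphertext = AESGCM(aes_key).encrypt(nonce, firmware, None)

# Sign the ciphertext so the device can reject fakes before even decrypting.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(nonce + ciphertext)

# The update package ships nonce, ciphertext, and signature together.
# Only the signature is a real security boundary: the AES key sits on
# every device, so the encryption is obscurity, not protection.
```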