January 15, 2020•824 words
(this post was imported from my old blog)
We are not short of products that claim to "encrypt" your personal data to "military standards" and thus keep it safe from leaks and deliberate attacks. Such claims work well in convincing a lot of non-tech-savvy people, and even some with rudimentary computer knowledge, of how secure those products are -- until a leak happens out of nowhere and everybody gets screwed.
The problem is that using encryption does not necessarily imply the security of the particular data you are concerned about. Encryption is a broad term that can be applied to anything involving an algorithm that prevents part of the population from accessing some data. Everything from simple dictionary-based ciphers to modern cryptography falls into this category, but I am not even talking about the vulnerabilities of particular ciphers here. What I am talking about is the question of exactly which part of the population you want to block from access -- in other words, the threat model.
The word "secure" is itself vague unless the context specifies a well-defined threat model. What are you afraid of? Who should be able to see your data, and who should not? How do we ensure that you are you, not someone else faking your identity? Of course, encryption is a powerful tool for achieving many kinds of security, but no implementation can be called secure under all threat models. You are using military-grade encryption right now to read this post, because my blog serves it over HTTPS, which encrypts all traffic in transit; but at my server, to me, the content must still be decrypted, and nothing prevents me from publishing all your IP addresses in some log. Whether or not you consider IP addresses privacy-sensitive, it is pretty obvious that if I claimed my website was secure against such leakage, the claim would be bogus. HTTPS defends against people spying on your Internet connection, but it does absolutely nothing about either end of the communication. It is secure under the threat model in which nothing along the communication channel except the two endpoints can be trusted -- and nothing more. One cannot conclude that such a use of encryption is secure under any other circumstances.
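To make the point concrete, here is a minimal sketch of a web server's request handler (plain HTTP for brevity; with HTTPS the handler code would be identical, because TLS is terminated before this code ever runs). The transport may be encrypted, but the handler still sees the decrypted request and the client's address, and nothing stops the operator from logging either:

```python
import http.server
import threading
import urllib.request

seen = []  # whatever the server operator decides to log

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # By the time this runs, any TLS layer has already been stripped:
        # the server sees the client address and the full request in the clear.
        seen.append(self.client_address[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # silence the default access log

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
print(seen)  # → ['127.0.0.1']
```

Encrypting the transport changes nothing about this handler; "secure" here only ever meant "secure against an eavesdropper between us".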
What matters is how encryption is used in a product, not whether it is used. My absurd example above is laughable, but when such claims come from more complex or even "commercial" software products, somehow many of us forget what being secure actually means. I know it is a stretch to assume everybody can learn these basics, but frankly, in the age of the Internet, one must have such knowledge to "survive" -- that is, to keep private data safe. There are plenty of resources on introductory cryptography out there, and I am certain every single one of them discusses the definition of "secure" and threat models at the very beginning. To be fair, many of them are not written for people without a technical background, so we definitely need more such resources in plainer language. But we also need people who actually try to learn how to ensure their own security.
Unsurprisingly, the responsibility also lies with developers and, in the case of commercial software, companies, to stop throwing around buzzwords that are not even well-defined in the first place (trust me, even some open-source projects do this). Do not claim your product is secure just because you, somehow, used some encryption, somewhere, without stating what you are defending against and how all the buzzwords contribute to that. Never imply your product is more secure merely because of encryption -- explain what it is for, and which adversary it prevents. And, of course, you should always know, for yourself, what you are defending against, because some developers really do not. What matters here is not whether the product is actually secure -- after all, the word is not well-defined by itself -- but the false sense of security you might instill in your users. The feeling of "I'm secure" without knowing what the hell that even means is much, much more dangerous than any security vulnerability.
(This article was partly motivated by a Magisk module here in China that saves your payment password and auto-fills it into certain payment apps upon fingerprint authentication. It claimed to be somehow more secure because it uses encryption, but it merely encrypts the keys with ANDROID_ID -- which, though no longer identical across all applications, is still not intended for security purposes and can be predicted by an adversary who can read the data files. It defends against no additional adversaries compared to not encrypting at all, yet people believe its claims, and perhaps the developer genuinely believes them too.)
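A toy sketch of why that scheme adds nothing. All names and values here are hypothetical, and XOR stands in for whatever real cipher the module uses -- the choice of cipher is irrelevant, because the flaw is that the key is derived from an identifier the adversary can read alongside the ciphertext:

```python
import hashlib

def derive_key(android_id: str) -> bytes:
    # Hypothetical scheme mirroring the flaw: the key is derived solely
    # from ANDROID_ID, which anyone who can read the device's data files
    # can also read or predict.
    return hashlib.sha256(android_id.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher for illustration; a real cipher keyed this way
    # would be equally pointless against this adversary.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

android_id = "9774d56d682e549c"  # example value, stored next to the data
ciphertext = xor_cipher(b"my-payment-password", derive_key(android_id))

# The adversary reads ANDROID_ID too, derives the same key, and decrypts:
recovered = xor_cipher(ciphertext, derive_key(android_id))
print(recovered)  # → b'my-payment-password'
```

Against an adversary who can read the files, this is equivalent to storing the password in plain text; the encryption only changes the threat model on paper, not in practice.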