Earlier in the week I blogged about mobile banking security, and I said that in design terms it is best to assume that the internet is in the hands of your enemies. In case you think I was exaggerating…
The thieves also provided “free” wireless connections in public places to secretly mine users’ personal information.
Personally, I always use an SSL VPN when connected by wifi (even at home!) but I doubt that most people would ever go to this trouble or take the time to configure a VPN and suchlike. Anyway, the point is that the internet isn’t secure. And actually SMS isn’t much better, which is why it shouldn’t really be used for securing anything as important as home banking.
The report also described how gangs stole mobile security codes – which banks automatically send to card holders’ registered mobile phones to verify online transactions – by using either a Trojan virus in the smartphone or a device that intercepted mobile signals up to a kilometre away.
Of course, no-one who takes security seriously ever wanted to do things this way in the first place (which is why, for example, we used a SIM Toolkit application for M-PESA). This is hardly a new opinion or me going on about things with the wisdom of hindsight.
I saw Charles Brookson, the head of the GSMA security group, make a very interesting point recently. Charles was talking about the use of SMS for mobile banking and payment services and he made the point that SMS has, to all intents and purposes, no security whatsoever.
In case you’re interested, that blog post comes from 2008, and if I remember correctly I’d made a presentation around that time, drawing on a story from 2007, to illustrate that the mass-market use of SMS for secure transactions might prove to be unwise despite the convenience.
Identity theft and a fraudulent SIM swap cost a children’s charity R90 000.
These are all symptoms of the fact that nobody listens to me about mobile banking security. Well, sort of. I’m sure other people have made the same point about keeping private keys in tamper-resistant hardware so that all bank-customer communications are securely encrypted and digitally signed at all times, but since I’ve been making the same point for two decades (back to the days of the proposed “Genie Passport” at BT Cellnet) and despite the existence proof of M-PESA, nothing much seems to be happening. Or at least it wasn’t. But perhaps this era is, finally, coming to an end. Here is what the US Department of Commerce’s National Institute of Standards and Technology (NIST) says about out-of-band (OOB) text messaging in its latest Digital Authentication Guideline (July 2016):
OOB using SMS is deprecated, and will no longer be allowed in future releases of this guidance.
I looked up “deprecated” just to make sure I understood, since I assumed it meant something other than a general disapproval. According to my dictionary: “(chiefly of a software feature) be usable but regarded as obsolete and best avoided, typically because it has been superseded: this feature is deprecated and will be removed in later versions”. So: as of now, no-one should be planning to use SMS for authentication.
The NIST guideline goes on to talk about using push notifications to applications on smart phones, which is how we think it should be done. But how should this work in the mass market? The banks and the telcos and the handset manufacturers and the platforms just do not agree on how it should all work. But surely we all know what the answer is, which is that all handsets should have a Trusted Execution Environment (as the iPhones and Samsungs do) and third parties should be allowed access to it on open, transparent and non-discriminatory terms. The mobile operators should use the SIM to offer a basic digital identity service (as indeed some are beginning to do with the GSMA’s Mobile Connect). The banks should use standard identity services from the SIM and store virtual identities in the TEE. There you go, sorted.
[Note: there’s no need to read this paragraph if you don’t care what happens under the hood] Now, when the Barclays app loads up on my phone it would bind the digital identity in my SIM to my Barclays identity and use the TEE for secure access to resources (e.g. the screen). Standard authentication services via FIDO should be in place so that Barclays can request appropriate authentication as and when required. Then when Barclays want to send me a message they generate a session key and encrypt the message. Then they encrypt the session key using the public key in my Barclays identity. Then they send the message to the app. The only place in the world where the corresponding private key exists is in my SIM, so the app sends the encrypted session key to the SIM and gets back the key it can then use to decrypt the message itself. In order to effect the use of the private key, the SIM requires authentication, so the TEE takes over the screen and the fingerprint reader and I swipe my finger or enter a PIN or whatever. (You could, of course, in true Apple style simply ignore the SIM and put the private key in the TEE, but I don’t want to get sidetracked.)
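For the under-the-hood crowd, the session-key dance described above can be sketched in a few lines of Python. This is purely illustrative: it uses textbook-sized RSA numbers and a toy XOR stream in place of a real cipher, and the `sim_unwrap` function is a hypothetical stand-in for the private-key operation that would actually run inside the SIM (or TEE) after user authentication — nothing here resembles production parameters.

```python
import os

# Toy RSA key pair (illustrative only -- real deployments use 2048-bit+ keys).
p, q = 61, 53
n = p * q            # modulus
e = 17               # public exponent
d = 2753             # private exponent: d*e = 1 mod lcm(p-1, q-1)

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher standing in for AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- Bank side: encrypt the message under a fresh session key ---
message = b"Your payment is ready to authorise"
session_key = os.urandom(16)
ciphertext = xor_stream(message, session_key)
# Wrap each session-key byte under the customer's public key (e, n).
wrapped_key = [pow(b, e, n) for b in session_key]

# --- Phone side: only the "SIM" holds d, so only it can unwrap ---
def sim_unwrap(wrapped, d, n):
    # In the real design this runs inside the SIM, gated by a PIN
    # or fingerprint presented via the TEE.
    return bytes(pow(c, d, n) for c in wrapped)

recovered_key = sim_unwrap(wrapped_key, d, n)
plaintext = xor_stream(ciphertext, recovered_key)
assert plaintext == message
```

The point of the structure is the one made in the paragraph above: the app never sees the private key, it only hands the wrapped session key to the SIM and gets the session key back, so compromising the handset software alone is not enough.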
Why is this all so hard? Why don’t I have a secure “Apple Passport” or “Telefonica Passport” or “British e-Passport” on my iPhone right now with secure visas for all the places I want to visit like my bank and Manchester City Football Club and Waitrose?
It seems to me that there is little incentive for the participants to work together so long as each of them thinks that it can win and control the whole process. Apple and Google and Samsung and Verizon and Vodafone all want to charge the bank a dollar per login (or whatever) and the banks are worried that if they pay up (in what might actually be a reasonable deal at the beginning) then they will be over a barrel in the mass market. Is it possible to find some workable settlement between these stakeholders so that we can all move on? Or a winner?