Will 2022 start to drive the future of Interoperability and Inclusion?

Our overriding theme for this year’s Live 5 is interoperability, which will lead to inclusion. Whether in payments or transit, identity or as a generalised trend, what we’re seeing is a collapsing of the barriers between silos. In some areas this is happening more quickly than in others.

Will the UK identity framework support decentralised identity?

In our Live 5 for 2021, we said that governance would be a major topic for digital identity this year. Nowhere has this been more true than in the UK, where the government has been diligently working with a wide set of stakeholders to develop its digital identity and attribute trust framework – the rules of the road for digital identity in the UK. The work continues, but with the publication of the second iteration of the framework I thought it would be helpful to focus on one particular aspect – how the framework might apply to decentralised identity, given that this is the direction of travel in the industry.

Digital Identity Wallets are coming

I was delighted to be asked to present a keynote at the FIDO Authenticate Summit and chose to focus on digital identity governance, which is something of a hot topic at the moment. Little did I know that the day before my session was recorded the European Commission would propose a monumental change to eIDAS, the European Union’s digital identity framework – one of the main examples I was planning to refer to. I hastily skimmed the proposed new regulation before the recording but have since had the time to take a more detailed look.

Safer Internet Day 2021 – are my children actually safer?

Today marks the 10th anniversary of Safer Internet Day in the UK. Each year Industry, Educators, Regulators, Health & Social Care workers and Parents rally to raise awareness and put into action plans to tackle findings from significant research on the topic of trust and safety on the internet. This year one of the research pieces talks of the challenge ‘An Internet Young People Can Trust’. As a mum of two school-age children, I am sat here wondering if the internet will ever be safe … for them or for me.

If I think about life BC (before COVID), my eldest used social media for broadcast communications to her friends. She was guided on the appropriateness of certain apps, and our acid test for the content she was posting was always ‘would you go up to a stranger in the street and give him your name, age, location and a photo of you in a bikini?’ … her reaction was always ‘err, no’. My youngest had never been online apart from BBC Bitesize for homework assignments. We’re not online gamers, so have never had constant nagging to go online. Additionally, you have to remember that the internet (and mobile internet) has been significant in my work world since 1990, so I have a heightened understanding of the pitfalls and have seen many fall foul of their online reputation, tarnishing their in-person reputation.

Malware Wolves in Developer Sheep’s Clothing

When consumers install software on their devices, they often perform some sort of risk evaluation, even if they don’t consciously realise it. They might consider who provides the software, whether it is from an app store, what social media says, and whether they have seen any reviews. But what if, once a piece of software had been installed, the goalposts moved, and something that was a genuine software tool at the time of installation turned into a piece of malware overnight?

This is what happened to approximately 300,000 active users of the Chrome ad-blocking extension Nano Adblocker. You see, at the beginning of October, the developer of Nano Adblocker sold it to another developer, who promptly deployed malware into it that issued likes to hundreds of Instagram posts without user interaction. There is some suspicion that it may also have been uploading session cookies.

Counterintuitive Cryptography

There was a post on Twitter, in the midst of the coronavirus COVID-19 pandemic news this week, that caught my eye. It quoted an emergency room doctor in Los Angeles asking for help from the technology community, saying “we need a platform for frontline doctors to share information quickly and anonymously”. It went on to state the obvious requirement that “I need a platform where doctors can join, have their credentials validated and then ask questions of other frontline doctors”.

This is an interesting requirement that tells us something about the kind of digital identity that we should be building for the modern world, instead of trying to find ways to copy passport data around the web. The requirement, to know what someone is without knowing who they are, is fundamental to the operation of a digital identity infrastructure in the kind of open democracy that we (ie, the West) espouse. The information sharing platform needs to know that the person answering a question has relevant qualifications and experience. Who that person is, is not important.

Now, in the physical world this is an extremely difficult problem to solve. Suppose there was a meeting of frontline doctors to discuss different approaches and treatments, but the doctors wanted to remain anonymous for whatever reason (for example, they may not want to compromise the identity of their patients). I suppose the doctors could all dress up as ghosts, cover themselves in bedsheets and enter the room by presenting their hospital identity cards (through a slit in the sheet) with their names covered up by black pen. But then how would you know that the identity card belongs to the “doctor” presenting it? After all, the picture on every identity card will be the same (someone dressed as a ghost) and you have no way of knowing whether the cards are really theirs or whether the holders are agents of foreign powers, infiltrators hellbent on spreading false information to ensure the maximum number of deaths. The real-world problem of demonstrating that you have some particular credential, or that you are the “owner” of a reputation, without disclosing personal information is a very difficult problem indeed.

(It also illustrates the difficulty of trying to create large-scale identity infrastructure by using identification methods rather than authenticating to a digital identity infrastructure. Consider the example of James Bond, one of my favourite case studies. James Bond is masquerading as a COVID-19 treatment physician in order to obtain the very latest knowledge on the topic. He walks up to the door of the hospital where the meeting is being held and puts his finger on the fingerprint scanner at the door… at which point the door loudly says “hello Mr Bond, welcome back to the infectious diseases unit”. Oooops.)

In the virtual world this is quite a straightforward problem to solve. Let’s imagine I go to the doctors’ information sharing platform and attempt to log in. The system will demand to see some form of credential proving that I am a doctor. So I take my digital hospital identity card out of my digital wallet (this is a thought experiment, remember; none of these things actually exists yet) and send the relevant credential to the platform.

The credential is an attribute (in this case, IS_A_DOCTOR), together with an identifier for the holder (in this case, a public key) and the digital signature of someone who can attest to the credential (in this case, the hospital that employs the doctor). Now, the information sharing platform can easily check the digital signature of the credential, because it has the public keys of all of the hospitals, and it can then extract the relevant attribute.
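
To make that concrete, here is a minimal sketch in Python of the credential and the platform’s signature check, using an Ed25519 key pair from the “cryptography” package. The JSON layout and the IS_A_DOCTOR attribute name are illustrative assumptions, not any particular standard.

```python
# A sketch, not a standard: a credential binding the IS_A_DOCTOR attribute to the
# holder's public key, signed by the hospital. Requires the "cryptography" package.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

hospital_key = Ed25519PrivateKey.generate()  # the issuer (attests to the credential)
doctor_key = Ed25519PrivateKey.generate()    # the holder (identified only by a public key)

def raw(public_key) -> bytes:
    # Serialise a public key to raw bytes so it can be embedded in the credential.
    return public_key.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

claims = {"attribute": "IS_A_DOCTOR", "holder_public_key": raw(doctor_key.public_key()).hex()}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"claims": claims, "issuer_signature": hospital_key.sign(payload).hex()}

def verify_credential(credential, issuer_public_key):
    # The platform knows the hospitals' public keys, so it can check the signature
    # and extract the attribute without learning anything about who the holder is.
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(credential["issuer_signature"]), payload)
    except InvalidSignature:
        return None
    return credential["claims"]["attribute"]

print(verify_credential(credential, hospital_key.public_key()))  # -> IS_A_DOCTOR
```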

But how do they know that this IS_A_DOCTOR attribute applies to me and that I haven’t copied it from somebody else’s mobile phone? That’s also easy to determine in the virtual world, using the public key of the associated digital identity. The platform can simply encrypt some data (anything will do) using this public key and send it to me. Since the only person in the entire world who can decrypt this message is the person with the corresponding private key, which is in my mobile phone’s secure, tamper-resistant memory (eg, the SIM or the Secure Enclave or Secure Element), I must be the person associated with the attribute. The phone will not allow the private key to be used to decrypt this message without strong authentication (in this case, let’s say a fingerprint or a facial biometric), so the whole process works smoothly and almost invisibly: the doctor runs the information sharing platform app, the app invisibly talks to the digital wallet app in order to get the credential, the digital wallet app asks for the fingerprint, the doctor puts his or her finger on the phone and away we go.
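
Here is a sketch of that challenge-response step, assuming an RSA key pair held in the wallet (the post doesn’t name an algorithm, and a signed nonce would prove possession equally well). In a real deployment the private key would sit in the SIM or Secure Enclave and only be unlocked after the biometric check.

```python
# A sketch of the proof-of-possession step: the platform encrypts a random challenge
# to the public key named in the credential, and only the real holder's wallet can
# decrypt it. RSA-OAEP is an illustrative choice, not something the post specifies.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

doctor_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Platform: encrypt a fresh random challenge to the holder's public key.
challenge = os.urandom(32)
ciphertext = doctor_rsa.public_key().encrypt(challenge, oaep)

# Wallet: decrypt it (in practice only after the phone's biometric check unlocks the key).
response = doctor_rsa.decrypt(ciphertext, oaep)

# Platform: a matching response shows the holder controls the private key, so the
# IS_A_DOCTOR attribute really does apply to them, still without knowing who they are.
assert response == challenge
```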

Now the platform knows that I am a doctor, but it does not have any personally identifiable information about me and has no idea who I am. It does, however, have the public key, and since the hospital has signed a digital certificate that contains this public key, if I should subsequently turn out to be engaged in dangerous behaviour, giving out information that I know to be incorrect, or whatever else doctors can do to get themselves disbarred from being doctors, then a court order against the hospital will result in it disclosing who I am. I can’t do bad stuff with impunity.

This is a good example of how cryptography can deliver some amazing but counterintuitive solutions to serious real-world problems. I know from my personal experience, and the experiences of colleagues at Consult Hyperion, that it can sometimes be difficult to communicate just what can be done in the world of digital identity by using what you might call counterintuitive cryptography, but it’s what we will need to make a digital identity infrastructure that works for everybody in the future. And, crucially, all of the technology exists and is tried and tested, so if you really want to solve problems like this one, we can help right away.

NSTICy questions

I’ve been reading through the final version of the US government’s National Strategy for Trusted Identities in Cyberspace (NSTIC). This is roughly what journalists think about it:

What’s envisioned by the White House is an end to passwords, a system in which a consumer will have a piece of software on a smartphone or some kind of card or token, which they can swipe on their computers to log on to a website.

[From White House Proposes A Universal Credential For Web : The Two-Way : NPR]

And this is roughly what the public think about it:

Why don’t they just put a chip in all of us and get it over with? What part of being a free people do these socialists not understand?

[From White House Proposes A Universal Credential For Web : The Two-Way : NPR]

And this is roughly what I think about it: I think that NSTIC isn’t bad at all. As I’ve noted before, I’m pretty warm to it. The “identity ecosystem” it envisages is infinitely better than the current ecosystem and it embodies many of the principles that I regard as crucial to the online future. It explicitly says that “the identity ecosystem will use privacy-enhancing technology and policies to inhibit the ability of service providers” (presumably including government bodies) “to link an individual’s transactions”, and says that by default only the minimum necessary information will be shared in transactions. They have a set of what they term the Fair Information Practice Principles (FIPPs) that share, shall we say, a common heritage with Forum friend Kim Cameron’s laws (for the record, the FIPPs cover transparency, individual participation, purpose specification, data minimisation, use limitation, data quality and integrity, security, and accountability and audit).

It also, somewhat strangely I think, says that this proposed ecosystem “will preserve online anonymity”, including “anonymous browsing”. I think this is strange because there is no online anonymity. If the government, or the police, or an organisation really wants to track someone, they can. There are numerous examples which show this to be the case. There may be some practical limitations as to what they can do with this information, but that’s a slightly different matter: if I hunt through the interweb tubes to determine that the person posting “Dave Birch fancies goats” on our blog comes from a particular house in Minsk, there’s not much I can do about it. But that doesn’t make them anonymous, it makes them economically anonymous, and that’s not the same thing, especially to people who don’t care about economics (eg, the security services). It’s not clear to me whether we as a society actually want an internet that allows anonymity or not, but we certainly don’t have one now.

The strategy says that the identity ecosystem must develop in parallel with ongoing “national efforts” to improve platform, network and software security, and I guess that no-one would argue against those, but if we were ever to begin to design an EUSTIC (ie, an EU Strategy for Trusted Identities in Cyberspace) I think I would like it to render platform, network and software security less important. That is, I want my identity to work properly in an untrusted cyberspace, one where ne’er-do-wells have put viruses on my phone and every PC is part of a sinister botnet (in other words, the real world).

I rather liked the “envision” boxes that are used to illustrate some of the principles with specific examples to help politicians and journalists to understand what this all means. I have to say that it didn’t help in all cases…

The “power utility” example serves as a good focus for discussion. It expects secure authentication between the utility and the domestic meter, and trusted hardware modules to ensure that the software configuration on the meter is correct and that commands and software upgrades do indeed come from the utility. All well and good (and I should declare an interest and disclose that Consult Hyperion has provided paid professional services in this area in the last year). There’s an incredible amount of work to be done, though, to translate these relatively modest requirements into a national-scale, multi-supplier roll-out.

Naturally I will claim the credit for the chat room “envision it”! I’ve used this example for many years to illustrate a number of the key concepts in one simple scenario. But again, we have to acknowledge there’s a big step from the strategy to any realistic tactics. Right now, I can’t pay my kids’ school online (last Thursday saw yet another chaotic morning trying to find a cheque book to pay for a school outing), so the chance of the school providing a zero-knowledge proof digital credential that the kids can use to access (say) BBC chatrooms is absolutely nil to any horizon I can envisage. In the UK, we’re going to have to start somewhere else, and I really think that that place should be with the mobile operators.

What is the government’s role in this, then? The strategy expects policy and technology interoperability, and there’s an obvious role for government — given its purchasing power — to drive interoperability. The government must, however, at some point make some firm choices about its own systems, and this will mean choosing a specific set of standards and fixing a standards profile. They are creating a US National Program Office (NPO) within the Department of Commerce to co-ordinate the public and private sectors along the Implementation Roadmap that is being developed, so let’s wish them all the best and look forward to some early results from these efforts.

As an aside, I gave one of the keynote talks at the Smart Card Alliance conference in Chicago a few weeks ago, and I suggested, as a bit of an afterthought, after having sat through some interesting talks about the nascent NSTIC, that a properly implemented infrastructure could provide a viable alternative to the existing mass market payment schemes. But it occurs to me that it might also provide an avenue for EMV in the USA, because the DDA EMV cards that would be issued (were the USA to decide to go ahead and migrate to EMV) could easily be first-class implementations of identity credentials (since DDA cards have the onboard cryptography needed for encryption and digital signatures). What’s more, when the EMV cards migrate their way into phones, the PKI applications could follow them on the Secure Element (SE) and deliver an implementation of NSTIC that could succeed in the mass market with the mobile phone as a kind of “personal identity commander”.

These opinions are my own (I think) and presented solely in my capacity as an interested member of the general public [posted with ecto]

Paleo-crypto

In some of the workshops that I’ve been running, I’ve mentioned that I think transparency will be one of the key elements of new propositions in the world of electronic transactions, and that clients looking to develop new businesses in that space might want to consider the opportunities for sustained advantage. Why not let me look inside my bank and see where my money is, so to speak? If I log in to my credit card issuer I can see that I spent £43 on books at Amazon; if I log in to Amazon I can see that I spent £43, but I can also see what books I bought, recommendations, reviews and so on. They have the data, so they let me look at it. If I want to buy a carpet from a carpet company, how do I know whether they will go bankrupt before they deliver? Can I have a look at their order book?

Transparency increases confidence and trust. I often use a story from the August 1931 edition of Popular Mechanics to illustrate this point. The article concerns the relationship between transparency and behaviour in the specific case of depression-era extra-judicial unlicensed wealth redistribution…

BANK hold-ups may soon become things of the past if the common-sense but revolutionary ideas of Francis Keally, New York architect, are put into effect. He suggests that banks be constructed with glass walls and that office partitions within the building likewise be transparent, so that a clear view of everything that is happening inside the bank will be afforded from all angles at all times.

[From Glass Banks Will Foil Hold-Ups]

I urge you to click on the link, by the way, to see the lovely drawing that goes with the article. The point is well made though: you can’t rob a glass bank. No walls, no Bernie Madoff. But you can see the problem: some of the information in the bank is confidential: my personal details, for example. Thus, it would be great if I could look through the list of bank deposits to check that the bank really has the money it says it has, but I shouldn’t be able to see who those depositors are (although I will want third-party verification that they exist!).
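
As a toy illustration of that “glass but confidential” ledger, the bank could publish each deposit with the depositor’s identity hidden behind a salted hash: anyone can total the balances, each depositor can find their own row, and nobody else learns who the depositors are. This is only a sketch under those assumptions, and, as noted, you would still want third-party verification that the entries are real.

```python
# Toy "glass bank" ledger: balances are public, identities are hidden behind salted
# hashes. Anyone can total the deposits; only a depositor (who holds their own salt)
# can find their row. Illustrative only; real entries would still need attestation.
import hashlib
import os

def commitment(name: str, salt: bytes) -> str:
    return hashlib.sha256(salt + name.encode()).hexdigest()

accounts = [("Alice", 120000), ("Bob", 45000), ("Carol", 990000)]  # balances in pence
salts = {name: os.urandom(16) for name, _ in accounts}             # given privately to each depositor
ledger = [(commitment(name, salts[name]), balance) for name, balance in accounts]

# Anyone: check the bank's claimed total against the published ledger.
assert sum(balance for _, balance in ledger) == 1155000

# A depositor: confirm their own balance appears, without revealing themselves to others.
assert (commitment("Bob", salts["Bob"]), 45000) in ledger
```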

Why am I talking about this? Well, I read recently that Bank of America has called in management consultants to help them manage the fallout from an as-yet-nonexistent leak of corporate secrets, although why these secrets would prove embarrassing is not clear. In fact, no-one knows whether the leak will happen, or whether it will affect BofA at all, although Wikileaks’ Julian Assange had previously mentioned having a BofA hard disk in his possession, so the market drew its own conclusions.

Bank of America shares fell 3 percent in trading the day after Mr. Assange made his threat against a nameless bank

[From Facing WikiLeaks Threat, Bank of America Plays Defense – NYTimes.com]

Serious money. Anyway, I’m interested in what this means for the future rather than what it means now, irrespective of what Bank of America’s secrets actually are, because

when WikiLeaks, a whistle-blowing website, promised to publish five gigabytes of files from an unnamed financial institution early next year, bankers everywhere started quaking in their hand-made shoes. And businesses were struck by an alarming thought: even if this threat proves empty, commercial secrets are no longer safe.

[From Business and WikiLeaks: Be afraid | The Economist]

Does technology provide any comfort here at all? I think it does. Many years ago, I had the pleasant experience of having dinner with Nicholas Negroponte, John Barlow and Eric Hughes, author of the cypherpunk manifesto, at a seminar in Palm Springs. This was in, I think, 1995. I can remember Eric talking about “encrypted open books”, a topic that now seems fantastically prescient. His idea was to develop cryptographic techniques so that you could perform certain kinds of operations on encrypted data: in other words, you could build glass organisations where anyone could run some software to check your books without actually being able to read your books. Nick Szabo later referred back to the same concepts when talking about the specific issue of auditing.

Knowing that mutually confidential auditing can be accomplished in principle may lead us to practical solutions. Eric Hughes’ “encrypted open books” was one attempt.

[From Szabo]

Things like this seem impossible when you think of books in terms of paper and index cards: how can you show me your books without giving away commercial data? But when we think in terms of bits, and cryptography, and “blinding”, it is all perfectly sensible. This technology seems to me to open up a new model, where corporate data is encrypted but open to all, so that no-one cares whether it is copied or distributed in any way. Instead of individuals being given the keys to the database, they will be given keys to decrypt only the data that they are allowed to see, and since these keys can easily be stored in tamper-resistant hardware (whereas databases can’t) the implementation becomes cost-effective. While I was thinking about this, Bob Hettinga reminded me about Peter Wayner’s “translucent databases“, which build on Eric’s concepts.

Wayner really does end up where a lot of us think databases will be someday, particularly in finance: repositories of data accessible only by digital bearer tokens using various blind signature protocols… and, oddly enough, not because someone or other wants to strike a blow against the empire, but simply because it’s safer — and cheaper — to do that way.

[From Book Review: Peter Wayner’s “Translucent Databases”]

There are other kinds of corporate data that might at first seem to need to be secret, but on reflection could be translucent (I’ll switch to Peter’s word here because it’s a much better description of practical implementations). An example might be salaries. Have the payroll encrypted but open, so anyone can access a company’s salary data and see what salaries are earned. Publish the key to decrypt the salaries, but not any other data. Now anyone who needs access to salary data (eg, the taxman, pressure groups, potential employees, customers etc) can see it, and the relevant company data is transparent to them. One particular category of people who might need access to this data is staff! So, let’s say I’m working on a particular project and need access to our salary data because I need to work out the costs of a proposed new business unit. All I need to know is the distribution of salaries: I don’t need to know who they belong to. If our payroll data is open, I can get on and use it without having to have CDs of personal data sent through the post, or whatever.
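
Here is a sketch of that translucent payroll, with each column encrypted under its own key so that the whole table can be published and only the salary key handed out. The Fernet scheme and the column layout are illustrative assumptions rather than Wayner’s actual design.

```python
# A sketch of a translucent payroll: each column encrypted under its own key, the
# whole table published, and only the salary key handed out. Fernet and the column
# layout are illustrative assumptions. Requires the "cryptography" package.
from cryptography.fernet import Fernet

names_key = Fernet(Fernet.generate_key())    # kept by HR
salary_key = Fernet(Fernet.generate_key())   # given to anyone who needs the numbers

payroll = [("Alice", 48000), ("Bob", 52000), ("Carol", 61000)]
published = [(names_key.encrypt(name.encode()),
              salary_key.encrypt(str(salary).encode()))
             for name, salary in payroll]

# Someone costing a new business unit has salary_key only: they can recover the
# distribution of salaries but not who earns them.
salaries = sorted(int(salary_key.decrypt(token)) for _, token in published)
print(salaries)  # [48000, 52000, 61000]
```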

I can see that for many organisations this kind of controlled transparency (ie, translucency) will be a competitive advantage: as an investor, as a customer, as a citizen, I would trust these organisations far more than “closed” ones. Why wait for quarterly filings to see how a public company is doing when you could go on the web at any time and see their sales ledger? Why rely on management assurances of cost control when you can see how their purchase ledger is looking (without necessarily seeing what they’re buying or who they are buying it from) on their web page? Why not check staffing levels and qualifications by accessing the personnel database? Is this any crazier than Blippy?

These opinions are my own (I think) and are presented solely in my capacity as an interested member of the general public [posted with ecto]

My multiples

[Dave Birch] I watched a strange TV show on a plane back from the US. It was about a woman with “Multiple Personality Disorder” (remember that book Sybil — not the one by Benjamin Disraeli — from years ago?). I make no comment about whether the disorder is real or not (the TV show wasn’t that interesting), but there’s no doubt in my mind that when it comes to the virtual world, multiple personalities are not only real, but desirable.

Here’s a good reason for not having your Facebook account in your real name (as I don’t):

Five interviewees who traveled to Iran in recent months said they were forced by police at Tehran’s airport to log in to their Facebook accounts. Several reported having their passports confiscated because of harsh criticism they had posted online about the way the Iranian government had handled its controversial elections earlier this year.

[From Emergent Chaos: Fingerprinted and Facebooked at the Border]

I’ve already created a new Facebook identity and posted a paean to Iran’s spiritual leaders, just in case I am ever detained by revolutionary guards and forced to log in. But will this be enough? Remember what happened to film-maker David Bond when he made his documentary about trying to disappear? The private detectives that he had hired to try and find him simply went through Facebook:

Pretending to be Bond, they set up a new Facebook page, using the alias Phileas Fogg, and sent messages to his friends, suggesting that this was a way to keep in touch now that he was on the run. Two thirds of them got in contact.

[From Can you disappear in surveillance Britain? – Times Online]

So even if you are careful with your Facebook personalities, your friends will blab. As far as I can tell, there’s no technological way around this: so long as someone knows which pseudonym is connected to which real identity, the link may be uncovered. Probably the best we can do is to make sure that the link is held by someone who will demand a warrant before opening the box.

