The early days of the British government’s new cyber-filter have been predictably amusing, but they highlight a serious issue. What are the principles? What do politicians want the technologists to do?
At the end of last year the nice people at Information Risk Management invited me along to their “Risky Business” event in London to enjoy a morning of serious thinking about some key issues in information security. They had some pretty impressive speakers: Mike Lynch, the founder of Autonomy; the head of cyber policy for GCHQ; the head of IT security from the London Olympics and so on. The reason it came to mind was that I had been thinking about the issue of Internet “filtering”, as is now the fashion in the UK.
If their parents have chosen this option, children using O2 phones will be unable to access almost all of the internet: police websites, the NHS, ChildLine, the NSPCC, the Samaritans, many schools and even the main government website, GOV.UK.
These problems are inevitable. But what do we want? Do we want children to be able to see things like MTV online? Who gets to decide? What is the principle at work? Alec Ross, who was Senior Advisor for Innovation and Technology to the Secretary of State Hillary Clinton, gave the keynote address on “The promise and peril of our networked world”. I was looking forward to this, as I think that it’s important to understand what the State Department’s policies around security, privacy, the web and filtering are. Alec was a good speaker, as you’d expect from someone with a background in diplomacy, and he gave some entertaining and illustrative examples of using security to help defeat Mexican drug cartels and Syrian assassins. He also spent part of the talk warning against an over-reaction to “Snowden” leading to a web Balkanisation that helps no-one.
I was thinking about policy though. Governments, and people, don’t really know what they want us (i.e., technologists) to do. This is what I have casually referred to as the “Clinton Paradox” before, and it is nicely summarised here:
We must have ways to protect anonymity of good people, but not allow anonymity of bad people.
[From Digital Identity: May 2011]
I challenged Alec about this in the Q&A — slightly mischievously, to be honest, because I suspected he may have had a hand in the speech that I referred to in that blog post — and he said that people should be free to access the internet but not free to break the law, which is a politician’s non-answer (if “the law” could be written out in predicate calculus, he might have had a point, but until then…). If we take that at face value, though, what does it mean? Alec wasn’t clear about whether he meant just US law or anyone’s law. We didn’t get to discuss that.
When I pushed on the issue of openness, he was clearer. He said that he thought that citizens should be able to communicate in private even if that means that they can send each other unauthorised copies of “Game of Thrones” as well as battle plans for Syrian insurgents. I think I probably agree, but the key here is the use of the phrase “in private”. I wonder if he meant “anonymously”? I’m a technologist, so “anonymous” and “private” mean entirely different things and each can be implemented in a variety of ways.
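To make that distinction concrete, here is a toy sketch (not real cryptography, and all the names are illustrative) of why the two properties are independent: encrypting a message protects the *content* but leaves the sender and recipient visible on the envelope, while posting under a throwaway pseudonym hides the *identity* but leaves the content readable by everyone.

```python
# Toy illustration of "private" vs "anonymous" communication.
# The XOR "encryption" is a stand-in for real cryptography.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; hides content from anyone without the key."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at dawn"
key = secrets.token_bytes(len(plaintext))

# PRIVATE but not anonymous: the body is hidden, yet the envelope
# still says exactly who is talking to whom.
private_message = {
    "from": "alice",                     # identity visible to the network
    "to": "bob",
    "body": xor_bytes(plaintext, key),   # content hidden
}

# ANONYMOUS but not private: anyone can read the body, but the
# sender is only an unlinkable throwaway pseudonym.
anonymous_post = {
    "from": secrets.token_hex(8),        # random pseudonym
    "body": plaintext,                   # content readable by all
}

# The key-holder can recover the private message's content...
assert xor_bytes(private_message["body"], key) == plaintext
# ...while the anonymous post's content was never hidden at all.
assert anonymous_post["body"] == plaintext
```

The point of the sketch is simply that the two properties are engineered separately: you can have either, both, or neither, which is why a policy statement that conflates them doesn’t tell a technologist what to build.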
The politicians are going to have to tell us what they want. If they want people to be able to communicate anonymously, then they are going to have to accept that criminals will do so. If they want us to be able to communicate in private, then they are going to have to introduce an identity infrastructure and tell us under what circumstances the state will be able to “undo” that privacy.
It was an enjoyable and thought-provoking morning, so thanks for that IRM, but it left me slightly pessimistic that the gap between people like me and the people who are running things is widening. Is this an age thing?