Once more, with feeling: expanding mass surveillance even further will not make us safer

So, who is to blame for the rise of terrorist groups like ISIS, or for deplorable acts by radicalised individuals like the horrific murder of drummer Lee Rigby in London on 22 May 2013? It depends on whom you ask, but there seems to be some consensus among the usual suspects that social media – more specifically Facebook – are to blame. Immigration is to blame. And, of course, Edward Snowden.

Facebook should have checked “all website postings for possible terrorist content,” says a report from the Intelligence and Security Committee (ISC) in the British Parliament, which claims that had Facebook alerted MI5 to extremist content on the user profile of Lee Rigby’s killer, the murder could have been prevented.

Interestingly, the report also says that the intelligence agencies – despite committing several errors – could not have prevented the murder. At least not without the information from Facebook, which the report considers a “safe haven for terrorists”.

Now, I am not saying that Lee Rigby’s murder was anything other than a terrible atrocity. But to use it as an excuse to demand even greater surveillance powers – when even “the Rigby report blithely conceded” that “the government’s counter-terrorism programmes are not working” – is not the way to respond to acts like this. Neither is exonerating the intelligence agencies and laying the blame squarely on Facebook. Glenn Greenwald’s comments on the matter make it clear how frankly paradoxical it is to claim that

the British intelligence agencies such as GCHQ and MI5 – despite being among the most aggressive and unrestrained electronic surveillance forces on the planet – had no possible way to have accessed [the] exchange [between Rigby’s killers]. But, the Committee said, the social media company not only had the ability – but also the duty – to monitor the communications of all its users and report anything suspicious to the UK Government.

Leaving aside Facebook’s social responsibility, which has been invoked several times since the release of the report, surely agencies as formidable as GCHQ, MI5 and MI6 should not need Facebook to add to their already substantial haystack of intelligence:

[The ISC]’s report itself makes clear that the intelligence agencies of Her Majesty’s Government already collect such massive quantities of private communications that they have no ability even to understand what they’ve collected: in other words, they can’t detect terror plotting because they’re overloaded with the communications of millions of innocent people.

Similarly, the NSA has conceded that “at most one terrorist attack might have been foiled by NSA’s bulk collection of all American phone data”, highlighting yet again the questionable effectiveness of bulk collection programmes – something Barack Obama’s review panel has also previously stated. In the UK, Home Secretary Theresa May claims that surveillance has “foiled 40 plots since 2005”, but – as Seumas Milne correctly asks:

Who would know? Even ministers are in no position to judge the claims securocrats make about themselves.

Observer columnist David Mitchell, too, has some interesting things to say on the subject and, like him, I don’t trust May’s claims very much. But these claims make for great surveillance advertising. And rather than address the problem that mass surveillance doesn’t seem to be working very well, and perhaps review both their intelligence-gathering practices and the insufficient legislation that regulates them, governments and their representatives resort to the latest fashionable means of deflection: inflammatory rhetoric. Seumas Milne isn’t wrong when he argues that

[i]t takes some mastery of spin to turn the litany of intelligence failures over last year’s butchery of the off-duty soldier Lee Rigby into a campaign against Facebook.

Yet, without fail, the same rhetoric about how terrorism and extremism can only be combated by use of extensive surveillance is used to justify granting additional powers to the agencies. Malcolm Rifkind (the ISC chair) and his committee aren’t the only ones doing so. The aforementioned Ms May is very proficient at it and so is David Cameron.

This is no different in the US, where Edward Snowden – always the Antichrist – is being blamed for making information available to ISIS that allows them to protect their communications in the most sophisticated of ways. Tellingly, the US Congress has just defeated the USA Freedom Act, a bill that attempted to curb surveillance powers. Mind you, by the time the bill was defeated it had been watered down so much that it wouldn’t have done a lot of good anyway:

The provisions limiting who might be watched and why remained extremely vague, to the disappointment of all defenders of civil liberties.

The apparent concession that the NSA would no longer hold years and years’ worth of communications data is worth very little in practice, since the telecommunications companies would still hold on to it.

[T]he definition of a legitimate target expanded in earlier stages of the bill, turning it into an amorphous, greedy phrase that might mean anything.

And yet, the bill

represented at least an attempt to draw up a principled and comprehensible framework within which the intelligence and security services could do their vital work in the digital world. Now that it has failed we are back in the business of messy, ad hoc and incoherent compromise.

However, more to the point of this post – and perhaps more concerning, and more telling, than the actual defeat of the Freedom Act – is the shouting about the threat of ISIS that preceded it: “God forbid we wake up tomorrow and [Islamic State] is in the United States.”

UK officials are only too eager to lap this kind of rhetoric up and regurgitate it as it suits them.

Lord West, for example, is reported to have said:

Since the revelations of the traitor Snowden, terrorist groups – in particular Isil (Islamic State) – have changed their methods of communications and shifted to other ways of talking to each other. Consequently there are people dying who actually would now be alive.

Essentially what he is saying is that Snowden is a traitor with the blood of ISIS’s victims on his hands and, by inference, Lee Rigby’s blood too, as the Mail so craftily suggests.

Tellingly, the agencies’ hands are clean – the reason their operations fail is that Edward Snowden blew the whistle and tech companies like Facebook (who also have blood on their hands) won’t cooperate. And more people will die unless governments introduce legislation that strengthens or expands surveillance powers.

This is a false and dangerous conclusion. One of the single best arguments against both the mass collection and retention of data and the “nothing to hide, nothing to fear” defence of surveillance – and of disinterest in protecting oneself – was made in an open thread on the Guardian website debating whether or not Edward Snowden’s revelations had had any effect:

In the Netherlands it was practiced that every local parish used to keep a list of all the people who lived in the region, and a small amount of information about them: their place of birth, fathers name and religion. Nothing really important, until 1940. Jews in the Netherlands suffered the most heavily at the hands of the Nazis than in any other country. It only takes a change in government (UKIP anyone?) to make the personal information held by other parties very, very relevant.

(Double check the claim about the Netherlands here if you like.)

UKIP obviously comes to mind, but given the recent targeting of Muslim schools by the UK schools inspectorate Ofsted, the comment rings even more true. Terrorism isn’t the only extremism on the rise in Britain and elsewhere. In light of recent anti-Islam, anti-EU and anti-immigrant sentiment, as well as the documented failure of mass surveillance as a tool to prevent extremist attacks, we urgently need to ask whether more surveillance is really what we want. After all, as Charles Arthur points out:

If the UK can demand access to the contents of internet accounts – even where the data is stored overseas – of people in the UK, why shouldn’t Russia demand exactly the same of Britons who happen to be in Russia? Why shouldn’t border guards in China demand access to your hard drive as you get off the plane in Shanghai? What’s to stop Iran insisting on the decryption keys to any internet service that wants to connect its citizens?

In short: just because we fancy ourselves safe under our democratic governments doesn’t mean that we are, or always will be. And, as Arthur goes on:

The current system is a mess – but making it easier for MI5 to get hold of our emails won’t actually make us any safer. Better work by the intelligence agencies will.

He is right. And for better work by the intelligence agencies, encryption isn’t a problem by default. Facebook or its security settings aren’t the problem. Edward Snowden certainly hasn’t caused the problem. The problem that the sheer mass of information collected is overwhelming the agencies has been around for a while. And knee-jerk responses that aim to preserve powers of questionable effectiveness are a problem because they stand in the way of better solutions. Even members of the ISC itself have warned against using its report to justify greater surveillance powers.

When addressing these issues, governments would do well to stop exploiting “the fears and emotions surrounding [the] attack [on Lee Rigby – and other similarly emotionally charged incidents] to demand still more spying powers.” What they might want to do instead, is to start thinking about how the work of the intelligence agencies can really be improved long-term and to everyone’s benefit. To that question, expanding mass surveillance even further isn’t the answer. And it won’t make any of us any safer, either.

“For your own safety and security” – Thoughts on CCTV surveillance in the UK

Earlier this week, I got into an argument over CCTV in the UK. As you may or may not know, it is nigh impossible, especially in London, to walk the streets without spotting at least one CCTV camera. Statistics aren’t clear on how many cameras actually exist, with estimates ranging from 1.8 million to over 4 million. The higher figures may be inaccurate, as the methods by which they were obtained are somewhat unscientific. FullFact has tried an estimate here. The Guardian has further information on CCTV here. Yet even if we take the lowest figure available, CCTV surveillance, especially in London, is pretty much ubiquitous.

Whether or not any number of cameras is justified when set against their use (or lack thereof) in preventing and solving crime – that was basically the question we were discussing. If ubiquitous CCTV coverage stops just one major crime, one side of the argument went, then by all means put as many cameras out there as you like. This may sound somewhat legitimate. After all, you cannot be against stopping major crime, can you? Of course you can’t, and I am not. However, the argument still makes me uncomfortable because it raises the question of how much we should sacrifice our right to privacy – our right to be left alone and be free of state interference – in favour of security.

Throughout the discussion, I realised that I had little to substantiate the opposite side of the argument: that the number of CCTV cameras in the UK is disproportionate and that ubiquitous surveillance isn’t justified given its limited effectiveness and worrying implications for civil rights and liberties. I do worry that a surveillance apparatus as ubiquitous as the one at work in the UK could quickly turn into a very bad thing indeed if it fell into the wrong hands. However, the idea that we might actually at some point have some nutter in charge who will use this apparatus to oppress us seems like a far-away threat to many, who trust our government and our police not to abuse the system.

Now, call me paranoid but if the NSA and GCHQ revelations have taught us anything, it is that we would do well not to simply believe anything we are being told about the benefits of mass surveillance – or its proportionate and lawful use.

During the argument I was having this week, several claims were made in support of the scope of CCTV surveillance in the UK and I was dismayed by my initial inability to meaningfully substantiate what I felt was my legitimate counter-argument.

I would like to try to do that now.

 

Claim #1: Real-time monitoring

One of the arguments in favour of CCTV was that cameras were monitored by real people in real time, that these people could directly intervene if they observed a crime and that this had been used effectively to prevent crime.

Now, take as an example the British Transport Police who operate a CCTV Hub in which fifty staff monitor cameras in real time. According to the BTP, this allows them to intervene directly when a crime is being committed.

The problem with real-time CCTV monitoring, however, is that only a small percentage of it is “proactive” (operators spotting incidents directly). Given that one operator monitors several cameras simultaneously (Nastaran Dadashi mentions up to 16 cameras per operator; Craig Donald mentions between 3 and 35 screens per operator), “reactive” monitoring (after the operator has been alerted to an incident) is much more heavily used, because “[p]roactive surveillance seem [sic] impossible since ‘there are too many cameras and too few pairs of eyes to keep track of them’” (Hogan, New Scientist, 2003, page 4).

Reactive monitoring is what the BTP seems to mean when it maintains that its operators intervene in crime being committed at stations: responding to an alert about an incident, i.e. “reacting” to it, rather than preventing incidents from occurring at all by spotting suspicious behaviour proactively. Thus any impression that CCTV cameras are monitored proactively in real time, and that operators can spot crime immediately and stop it from happening, would not be quite accurate. Hina Keval and Martina Angela Sasse quote a 2005 study on the effectiveness of CCTV in the UK which found that

across several control rooms…there was a very high camera-to-operator and camera-to-monitor ratio, which reduced the “…probability of spotting an incident or providing usable recordings”.

Arguably, a case can still be made that CCTV can lower police response times and help to solve crimes and provide evidence. However, the direct prevention or deterrence of crime through constant real-time surveillance seems doubtful, as even a single screen per operator “does not guarantee detection”.
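To see why proactive monitoring scales so badly, here is a minimal back-of-envelope sketch using the camera-to-operator ratios cited above; the ratios are the only inputs taken from the text, and the calculation itself is just an illustrative upper bound, not a model of real control-room behaviour.

```python
# Illustrative only: an upper bound on how much attention any single camera
# feed can receive when one operator watches several feeds at once.
# The ratios (3, 16, 35) are the figures cited above; everything else is
# a simplifying assumption (perfect, evenly divided attention).

def max_attention_share(cameras_per_operator: int) -> float:
    """Best-case fraction of the time a given feed is actually being watched."""
    return 1.0 / cameras_per_operator

for ratio in (3, 16, 35):
    share = max_attention_share(ratio)
    print(f"{ratio:>2} cameras per operator -> each feed watched "
          f"at most {share:.0%} of the time")
```

Even at the lower end of the cited range, any given feed goes unwatched most of the time, which is why control rooms fall back on reactive monitoring.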

 

Claim #2: CCTV cameras with loudspeakers

Another argument in favour of CCTV that cropped up during the discussion was that a number of CCTV cameras are fitted with loudspeakers, allowing the people watching in real time to let would-be criminals know that they are being watched.

I was astonished to find out that these really exist. Apparently, Middlesbrough first installed cameras with loudspeakers in 2007, and the system was then extended across 20 boroughs. There also seem to be automated talking CCTV cameras, one of which was deactivated in Camden because residents didn’t appreciate being told off by a robot, particularly not in an American accent. Which makes sense, because the idea seems more than a little ridiculous. For any person, let alone your average adult citizen, to be told off by either a camera or an operator behind a camera for littering is disproportionate. What is more, it seems doubtful that a serious criminal would, in the heat of the moment, be deterred by a camera on a pole shouting at him. Rather, cameras with loudspeakers that allow operators to, at worst, harass people represent an intervention by the state into people’s behaviour when they can reasonably expect to be left alone.

Combating things like “litter… drunk and disorderly behaviour, gangs congregating” may sound all well and good on paper, but who exactly decides when having a few beers or meeting up with your mates constitutes antisocial or criminal behaviour? The scope for abuse is wide. Sarah Boyes reported in 2011 that she heard

many accounts of the cameras ‘telling people off’ for petty incidents. One woman described being shouted at when the end of her sausage roll broke off onto the floor, and was quite incredulous. A teenager told me how he was scolded for throwing a snowball. Another young person remembered friends being reprimanded one evening for boisterously paddling in the new fountain. One local employee remembered hearing a disembodied voice late one evening saying ‘stop urinating on McDonalds’, but when they looked they saw no apparent culprit…

Surely, at best this kind of thing is a waste of resources and money. At worst, it is a sign of the surveillance state going bonkers. Mind you, speaking of 2011: ubiquitous surveillance did little to stop the London riots, as Cory Doctorow has argued.

Also, it’s not as if there were no historical precedent (if perhaps not in the UK) of how quickly surveillance can turn against the people it is intended to protect. The Telegraph wasn’t wrong when it argued in 2009 that surveillance on the scale conducted in the UK “would still be intolerable in countries with recent memories of totalitarian regimes” – and for good reason. It surely isn’t a coincidence that in many dystopias, ubiquitous surveillance is one of the controlling mechanisms of choice for totalitarian regimes.

 

Claim #3: Effectiveness in preventing and solving crime

When most people think of CCTV being used in law enforcement they think of a string of high-profile crimes that have been solved or publicised with the help of footage…But CCTV has grown far beyond this.

Indeed, people arguing in favour of CCTV seem to do so under the impression that CCTV is hugely beneficial when solving or publicising major crime. And even if it isn’t, some seem to think that preventing even one major crime justifies the large number of cameras out there. Now, I would still argue that there is a problem with proportionality here.

Figures released in 2007 revealed that “police are no more likely to catch offenders in areas with hundreds of cameras than in those with hardly any,” prompting the legitimate question “if some of [the] money [invested in CCTV] would not have been better spent on police officers” or “street lighting, which has been shown to cut crime by up to 20 per cent.”

The Guardian reported in 2008 that “[o]nly 3% of street robberies in London were solved using CCTV images.”

Interestingly, “the head of the Visual Images, Identifications and Detections Office (Viido) at New Scotland Yard” himself warned back then that CCTV was ineffective. Because of this, Scotland Yard was at the time engaged in efforts “to try to boost conviction rates using CCTV evidence.” Again, what they were trying to do was to use CCTV after a crime had been committed to hold the perpetrators to account. Which is fair enough, except that CCTV was “originally seen as a preventative measure” (emphasis added).

In 2009, information revealed in response to a request made under the Freedom of Information Act alleged that “[f]or every 1,000 cameras in London, less than one crime is solved per year.”

You might argue that these figures date back a while and thus may not paint an accurate picture of CCTV and its uses today. Yet in the year ending May 2013, it was reported that the London Metropolitan Police apparently failed to track a large percentage of CCTV footage, resulting, amongst other things, in a conviction rate of 14 per cent for rape. Now, you might argue that ubiquitous CCTV is warranted – and the £500 million spent on it, or the estimated £20,000 (the starting salary of a police officer) per camera, well invested – if it stops only one rapist. However, when you consider that 86% of rapists aren’t convicted – or, in fact, deterred – with the aid of CCTV, then perhaps it’s pertinent to ask the following questions (a rough back-of-envelope sketch of the numbers follows below):

  • Firstly, if CCTV isn’t of much help at all in over 80 per cent of cases, then do the remaining 20 per cent really warrant jeopardising innocent people’s anonymity in public spaces to the extent that it is happening at the moment?
  • Secondly, if CCTV is apparently of such little use in preventing and resolving violent crime, wouldn’t the money be better invested in things that do help?
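To put rough numbers on that proportionality question, here is a purely illustrative calculation using the figures quoted in this post; the inputs come from different sources and years, so treat the output as a sense of scale, not an estimate.

```python
# Illustrative arithmetic only, using figures as quoted in this post.
# The inputs come from different sources and years, so this is a rough
# sense of scale rather than a rigorous estimate.

cost_per_camera_gbp = 20_000        # cited estimate of the cost per camera
cameras_per_solved_crime = 1_000    # "less than one crime solved per 1,000
                                    # cameras per year" (2009 FOI figure)

spend_per_solved_crime_gbp = cost_per_camera_gbp * cameras_per_solved_crime
print(f"At £{cost_per_camera_gbp:,} per camera and at most one crime solved "
      f"per {cameras_per_solved_crime:,} cameras per year, each solved crime "
      f"corresponds to roughly £{spend_per_solved_crime_gbp:,} of camera spend.")
```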

Even a leaflet issued by the College of Policing admits that the use of CCTV has only a “small impact” (though a significant one, they say) on the detection and prevention of crime, and that this impact is mostly observable in car crime (violation and theft). By contrast, CCTV has no significant impact on violent crime – i.e. the kind that does actual physical harm to people. So much for the rape argument. And while there is some cited evidence that CCTV aids crime and terrorism prevention, it is difficult to see the proportionality of millions of cameras – and the money spent on them – when looking at their documented overall (in)effectiveness.

Even more disturbing are figures claiming that “technology employed in state-owned public places” accounts for “less than 5 per cent of the cameras in the country.” As to the remaining 95 per cent of cameras:

there are very few regulations over how CCTV is set up and run. Everyone has to comply with the Data Protection Act, but apart from that it is often a matter of guidelines rather than law.

At the very least, people should be informed about “the intrusion” posed by HD CCTV cameras (a fact that tends to be conveniently omitted from pro-CCTV testimonies) and about the lack of regulation, so that they can have their say on whether they are comfortable with their privacy and anonymity being intruded upon in this way.

 

The inverse argument: abuse of surveillance powers

Which brings me right to the question of abuse or misuse of the technology. I have already given a few examples above.

But how about a scheme that was, thankfully, scrapped in Birmingham a few years ago because the local population opposed it, arguing that a plan “by the West Midlands police to place several hundred CCTV cameras in part of Birmingham…was targeting the mainly Muslim local population,” funded, incidentally, “from a pot to combat terrorism”?

Different but equally pertinent questions about the use of CCTV by police were raised in the wake of the 2009 protests against the Israeli offensive in Gaza, when people were allegedly convicted based on edited CCTV footage while the police failed to investigate misdemeanours by their own officers.

Then there was the case of Mark Summerton and Kevin Judge, from Sefton Council, Merseyside, who were convicted for abuse of their powers as CCTV operators after “spying on a naked woman in her own home.” This thread on the Big Brother Watch website also makes for an interesting read on the subject.

There is also debate about CCTV being used to prosecute minor offences, as in parking enforcement, when “[f]or many people, the original intent of CCTV proliferation was to improve security and reduce crime in public places.”

And even if you agree that CCTV could be used for offences that have nothing to do with major crime, there are a number of issues that arise. For example, Charles Farrier of No CCTV says: “We get people saying they parked a car with a disabled badge in the window and they incurred a fine because the camera couldn’t see the badge.”

 

And finally: human rights and civil liberties

Given all that, and given concerns about the abuse of other police powers under RIPA, my lack of confidence in the powers that be is perhaps not entirely unjustified. Granted, the Surveillance Camera Code of Practice issued in 2013 demands that:

A public authority will be bound by the Human Rights Act 1998 and will therefore be required to demonstrate a pressing need when undertaking surveillance as this may interfere with the qualified right to respect for private and family life provided under Article 8 of the European Convention on Human Rights. This is the case whether or not that public authority is a relevant authority. A system operator who is not a public authority should nevertheless satisfy themselves that any surveillance is necessary and proportionate.

Necessary and proportionate – recent and not-so-recent revelations suggest that surveillance is anything but: it violates citizens’ rights to privacy and anonymity within the public sphere, jeopardising both their liberty and dignity.

People taking issue with Robert Hannigan’s comments aren’t aliens from a parallel universe – David Blunkett might understand that if he stopped living in the past

In this week’s Guardian, Eben Moglen, founder, director-counsel and chairman of the Software Freedom Law Centre, wrote about something he called the “anti-privacy bandwagon”, i.e. “the bandwagon created by the GCHQ boss, Robert Hannigan,” which demands “that the internet companies abandon their stance on privacy.”

Reminder: last week, Robert Hannigan wrote an op-ed for the Financial Times in which he argued, among other things, that there needs to be stronger co-operation between technology companies and “democratic governments” (read: intelligence agencies). I took issue with this in my last post.

Naturally, the UK government threw its weight behind Hannigan’s comments. Home Secretary Theresa May is firmly on board the bandwagon (as improving mobile phone coverage “could aid terrorists”). So is former Home Secretary David Blunkett who, in his own op-ed for the Telegraph, argues that we should not “regard an intervention from the head of GCHQ as some kind of threat”.

Oh to be allowed to live in such times! Where “the name [sic] of those leading our intelligence and security services” are no longer unknown to us! Times in which the head of GCHQ stoops to feign interest in having a debate with us!

Seriously. It isn’t just Moglen who takes issue with Blunkett’s piece – or Hannigan’s for that matter – and rightly so.

Blunkett seems to be getting a few things very wrong. Or perhaps he has misunderstood both Robert Hannigan’s intentions, and the intentions of those who didn’t respond favourably to what Hannigan wrote in the FT. Blunkett definitely seems to have some ideas about the world and, importantly, the internet, encryption, and the law that are deeply concerning.

As is the very structure of his article, which opens by invoking Remembrance Sunday, “the Nazis”, and “the dedicated men and women of Bletchley Park”. Now, that’s a veritable triple-whammy, isn’t it? Take something most people in Britain feel very strongly about, add to that the epitome of everything that was once wrong and evil in the world, and then throw in those who fought that evil by “deciphering and disrupting…signals”. What is more, do it in a week in which the UK sees the release of a much-anticipated film about Enigma code-breaker Alan Turing, played by one of the UK’s hottest and most obsessed-over actors. You’d be hard put to find an opener with stronger or more emotionally charged connotations for a larger number of people.

Following this example of poetic virtuosity, Blunkett goes on to express his surprise at the reactions to Hannigan’s “intervention”, taking a stab at Martha Lane Fox who called Hannigan’s comments “reactionary and slightly inflammatory”.

Granted, there is an error in Baroness Lane Fox’s criticism of Hannigan: there is nothing slight about the inflammatory nature of Hannigan’s comments. Someone who accuses technology companies trying to secure their customers’ privacy of becoming the “command-and-control network of choice” for terrorists is not being slight about anything.

Mr Blunkett is pretty quick to add to Hannigan’s narrative:

Baroness Lane Fox and others in her industry should wake up to reality: now is not the time for lofty disengagement or disinterest. Tech companies who provide encrypted – and therefore secret – communications online are, albeit unwittingly, helping terrorists to co-ordinate genocide and foster fear and instability around the world.

Wow, look at the buzzwords: terrorists, genocide, fear – how inflammatory can it get? But Blunkett is right about one thing: now isn’t the time for disengagement and disinterest. Assuming that by this, Blunkett means the kind of “lofty” disengagement and disinterest we have so far seen from the UK government and the intelligence agencies. Considering that this is an article in defence of Hannigan’s intervention as “progress”, I am guessing that that’s what Blunkett means. In which case, yes, a debate with GCHQ or anyone else in government about surveillance and privacy would be welcome. However, as I argued last week, an open and informed debate is probably not what Hannigan is really after. It is questionable whether Blunkett is either, because by suggesting that “If [Hannigan] is worried, we should be too,” he essentially tells us not to question Hannigan’s opinions but to take his words about the threat of encryption as a tool for terrorists at face value (and then give up our privacy to mitigate that threat).

Sorry, Mr Blunkett, but that’s not advocating debate. That’s patting us on the head and telling us that the GCHQ know what they are doing and that we would do well to trust them, lest we want the terrorists to come and get us.

Intentionally or not, Blunkett repeatedly falls into the same trap as Hannigan. Firstly, he is suggesting that Baroness Lane Fox and her industry have not already woken up to reality. That’s just nonsense. No one is “in denial” about the threat of terrorism or the necessity for a debate about issues relating to the terrorism-security-privacy love triangle. It isn’t privacy advocates or tech companies that have been avoiding the relevant debates for the past year and a half, and it is the rhetoric of Hannigan’s intervention that Lane Fox takes issue with, not the idea of engagement and interest as such.

Secondly, to suggest, as Blunkett does, that tech companies who provide encrypted (i.e. secure) communications online are helping terrorists is deceitful, as it fails to mention – yet again! – that the same encryption also secures the communications of the vast majority of users who aren’t terrorists and who have never been accused of any wrongdoing.

To use the same inflammatory rhetoric as Hannigan to defend him is, frankly, a bit daft. More than that, it raises the question of whether Mr Blunkett has actually “reflected” (as he calls it) on Hannigan’s intervention at all, or whether he was perhaps a little too blinded by the brilliance emanating from the memory of the Bletchley Park code breakers (no disrespect to them).

Blunkett admits that “we must be wary of knee-jerk responses to new terror threats” but he then immediately follows this up with the knee-jerk comment that “we should not shy away from the fact that terrorists are using mainstream websites, social media and mobile apps to disseminate highly dangerous information and propaganda.”

This may be so. However, again, millions of other users (whom Blunkett doesn’t mention) are using websites, social media and apps. And again: the majority of these users aren’t terrorists (or, in fact, paedophiles) and have never – all together now! – been accused of any wrongdoing. This majority of perfectly ordinary users is allowed to “remain anonymous” “thanks to freely available technology” just as much as the minority who use social media and the like for evil gain. To fail to mention that fact is to avoid a key element of any sensible debate on privacy versus security.

Which is also why it is deeply disingenuous, and frankly a bit ridiculous, of Blunkett to suggest that tech companies “cannot be allowed to get away with the absurd idea that they hold no responsibility for what is transmitted on the platforms they provide”, while also appealing to their “moral responsibility”.

Firstly, tech companies are, in the wake of the Snowden revelations, beginning to take more responsibility for securing the communications of their average user and sticking to the reassurances they give those users in their T&C. I am not saying that they are doing this out of the goodness of their hearts – obviously, it is in their business interest to do so – but still.

Secondly, the argument that companies should be bound by their “moral responsibility” because they are “transnational and are therefore not subject to the laws or requirements of any individual country” cuts both ways: Blunkett would do well not to omit the fact that the same moral, and legal, responsibility for the privacy of citizens lies with governments and intelligence agencies as well. Blunkett seems to take it for granted that governments and agencies act within the boundaries of that responsibility. As far as he knows, there are “sufficient checks and balances to provide reassurance” that the agencies do not shun their moral responsibility or, in fact, their legal obligations. This may or may not have been the case “at the time” when Blunkett was Home Secretary but it does not seem to be the case now.

Blunkett misunderstands people’s motivations when he suggests that critics of Hannigan’s intervention view it as some sort of threat. What is problematic and potentially dangerous about it is not only the disingenuous and inflammatory nature of his argument, but also that Hannigan professes to be willing to engage in a debate he is not in fact all that willing to have – at least not unless it is held on his terms, those terms being the acknowledgement that there needs to be a “new deal” between the agencies and technology firms.

Blunkett, too, seeks to “strengthen the links between technology companies and intelligence and law-enforcement agencies,” suggesting that what’s needed to deal with “the radicalisation of young people” is to “understand, occupy and, where necessary, take action wherever they are receiving and communicating information, ideas and, yes, hate” (emphasis added).

Advocates of privacy aren’t saying that we should not do that. They are not saying that targeted surveillance isn’t justified and necessary in certain cases. What they are saying, however, is that the attempt to criminalise companies’ efforts to protect their users (which Hannigan falls just short of doing because, actually, these companies “are unquestionably doing everything the law requires of them” and he must know that) is in itself dangerous.

In fact, that small word “occupy” that I have highlighted in the quote above, easy to miss as it is, once more hints at one of the true motivations behind Hannigan’s inflammatory rhetoric, Blunkett’s nostalgia, and their shared call for “extra-legal assistance from law-abiding businesses to invade customers’ privacy”: this is about control.

Occupation of certain territories – online or off – on the basis that we as enlightened societies need to save those already using these spaces from the errors of their devious ways is an imperialist notion perfectly in line with the agencies’ desire to control, to map, to “own” the internet.

Make no mistake: by attempting to exert greater control over the internet, governments and intelligence agencies are not just taking a stab at controlling the threat of terrorism – and even that with questionable effectiveness – but also at controlling every other citizen, dissident or activist using the internet for perfectly legitimate purposes. To remind ourselves of the dangers of this kind of control, we only need to look at what is happening in states where censorship of the internet is still regarded as a viable tool of government control.

By suggesting that tech companies pretend “that they are citizens of a parallel universe”, unaware that “[t]hey exist in and depend on the world around them just as much as everyone else”, Blunkett, like Hannigan before him, calls into question the legitimacy of their (genuine or not) concern about their users’ privacy and, by extension, our own (very genuine and legitimate) concern for our privacy as users. Asking for encryption or stronger privacy protection apparently removes us from the Universe According to Blunkett and Hannigan, because in their view we believe that we can just ignore the world around us, i.e. that “dark and ungoverned space” full of terrorists that is the internet, in which we would do well not to ask for the same “illegal” means to protect ourselves that terrorists and paedophiles use to escape detection. Perhaps the next step would be to suggest that, as immigrants – sorry, aliens – from another world, we have no civil rights and liberties?

Ironically, in addition to calling into question the legitimacy of our very real, present-day concerns, Blunkett, by invoking – again – “the code-breakers of Bletchley Park”, reveals that it is in fact his own argument that is not rooted firmly enough in the world of modern data communications. Whether or not “[a]t the time… anyone involved in old-fashioned communications [would] have been able to say to [the Bletchley Park code-breakers] ‘stay away from our business’” misses the point:

these aren’t the days of Bletchley Park or, more importantly, the days of “old-fashioned communications”. And it is precisely this circumstance that the legislation in which Blunkett places so much confidence – legislation supposedly providing “sufficient checks and balances to provide reassurance” – has not quite caught up with.

Of course, “the notion that in this modern era, in circumstances of grave danger, we should somehow balk at an open debate about terrorists’ use of the internet” is absurd. But that isn’t what anyone has been asking we do, ever, and certainly not in the wake of Hannigan’s comments.

Rather, people have been insisting that we badly need to have a debate about the use of the internet not just by terrorists but also by those claiming to need full access to our data to protect us from those terrorists – a debate that has, in fact, long since been going on without Hannigan, Blunkett and a number of other high-ranking figures in government. By suggesting that that debate isn’t “mature”, or that it takes place somewhere outside of “reality” in a “parallel universe”, Blunkett and Hannigan are simply doing what they have been doing all along: sticking their fingers in their ears and going “la la la”.

Encryption is bad, debate is overrated and GCHQ wants more powers: what Robert Hannigan really said

Robert Hannigan spoke out for the first time this week in his new position as GCHQ chief. And what he said – or wrote, in a piece for the Financial Times – is not exactly surprising but certainly alarming. Actually, “alarming” doesn’t begin to cover it. “Outrageous” would be the more appropriate term.

Privacy has never been an absolute right, Hannigan writes. US technology companies are becoming “the command and control networks of choice” for terrorists, “facilitat[ing] murder and child abuse”. They are “in denial” about how new technology is helping terrorist networks like ISIS.

To argue, as Julian Huppert does, that this approach is counter-productive is putting it lightly. Bashing technology companies will do little to encourage those same companies – who “responded with irritation to Hannigan’s suggestions” – to give GCHQ et al the support Mr Hannigan thinks they need.

Interestingly, Hannigan also writes that the intelligence agencies “need to show how [they] are accountable for the data [they] use to protect people.” That much is true. The agencies do need to be accountable for the data they collect – and the public and parliament need to be able to hold them accountable. What is more, the agencies need to prove how exactly the data they collect and store helps them keep people safe, rather than expect us to take their assurances at face value. So far, there is little evidence that mass data collection works to protect people or foil terrorist attacks – a point that has been made several times before and which ACLU lawyer Ben Wizner recently reiterated. As to the threat of ISIS in particular, John Naughton – who seems as “puzzled” by Hannigan’s shenanigans as I am – asks a very relevant question:

How come that GCHQ and the other intelligence agencies failed to notice the rise of the Isis menace until it was upon us? Were they so busy hoovering metadata and tapping submarine cables and “mastering the internet”… that they didn’t have time to see what every impressionable Muslim 14-year-old in the world with an internet connection could see?

Then there is the question of proportionality and control, which needs to be asked more insistently – especially in a week when it has emerged that the UK intelligence agencies have been snooping on privileged conversations between lawyers and their clients. One possible consequence of this breach of attorney-client privilege could be that “[h]istoric convictions [are] quashed in serious cases, including terrorism cases.” Talk about the EU making it harder for British courts to deport terrorists.

In light of this and everything else that has been revealed, I am not sure that any “ordinary user” – as Hannigan calls them – in their right mind would be “comfortable” with “a new deal” between “democratic governments” (read: intelligence agencies) and technology companies – even if it were “rooted in the democratic values that we share.”

And anyway, I’m confused. First of all, what do the agencies need a deal for? As Jillian York from the EFF points out:

Law enforcement can conduct open source intelligence on publicly-posted content on social networks, and can already place legal requests with respect to users.

Or, as Martha Lane Fox put it:

If you have reasonable suspicion, there is a court process by which you can demand all sorts of things. The security services’ reach is enormous.

So then, a deal as Hannigan envisions it – again, without explicitly saying so – “must include more, not less, access to information for GCHQ spies.”

Again, it seems questionable that they would need such a deal. As Cole Peters points out in this very remarkable article (what he writes about Hannigan’s characterisation of tech firms and the “depressing end of behaviour on the internet” is particularly noteworthy):

Hannigan doesn’t need technology companies to “co-operate” at all — this suggestion is entirely about blame-shifting and justifying bulk surveillance programmes. The NSA and GCHQ have proven time and time (and time) again that they are more than willing to create their own back doors into technology and communications companies’ data, even when the front door has already been held open for them.

Second of all, by “democratic values”, does Hannigan mean those same values that didn’t stop the agencies from building a “sophisticated apparatus of mass surveillance” that “monitor[s] communications at a scope and scale unimaginable not long ago, from the wholesale siphoning of internet traffic from submarine cables to the collection of millions of webcam images from users unsuspected of any connection to terrorism”? All of this with minimum oversight, while keeping “parliamentarians in the dark”? I suppose that whatever democratic values underlie these practices also justify the government’s and agencies’ avoidance of a debate on surveillance? Not sure the rest of us really do share those values. Not sure they are democratic values either.

Then again, it may be our own fault that our governments and spooks aren’t happy to talk to us.

“GCHQ is happy to be part of a mature debate on privacy in the digital age,” Hannigan writes. “But… the debate about this should not become a reason for postponing urgent and difficult decisions.”

What Hannigan seems to be suggesting is that the existing debate on privacy in the digital age – a debate “which his agency has spent the last year and a half determinedly avoiding” – isn’t mature, and that we all have more urgent and difficult things to worry about than a childish row over something that isn’t an absolute right: “A global debate on privacy is all very well, but the serious (and real) threats facing the country must take priority.”

Three questions:

One, privacy may not be an absolute right, but it is certainly a “heavily qualified” right under the European Convention (and enshrined in the Human Rights Act), so surely there has to be a discussion about whether it should be so easily sacrificed for the sake of security. Why, if GCHQ is willing to talk, has it not long since joined us in that discussion?

Two, what are those urgent and difficult decisions that should take precedence over a debate about the violation of one of our basic rights? What are those serious and real threats that must take priority?

Short answer: terrorists and technology. For while “[t]errorists have always found ways of hiding their operations…mobile technology and smartphones have increased the options available exponentially.”

One of these options being bad, worse, Snowden-approved encryption:

Techniques for encrypting messages or making them anonymous which were once the preserve of the most sophisticated criminals or nation states now come as standard. These are supplemented by freely available programs and apps adding extra layers of security, many of them proudly advertising that they are ‘Snowden approved’. There is no doubt that young foreign fighters have learnt and benefited from the leaks of the past two years.

Mr Hannigan here echoes FBI director James Comey’s comments in October that encryption helps criminals. What Hannigan isn’t saying, of course, is that “technology…can be used for productive as well as destructive purposes.” The same encryption Hannigan and Comey say helps terrorists also protects us from the prying eyes of their own agencies, whose

dirty games – forcing companies to handover [sic] their customers’ data under secret orders, then secretly tapping the private fibre optic cables between the same companies’ data centres anyway – have lost GCHQ the trust of the public.

What Hannigan is also conveniently omitting is that the intelligence agencies have, by building backdoors into encryption, “managed to weaken our security as well as invade our privacy – the exact opposite of what we want to see.”
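To illustrate just how “standard” strong encryption has become for ordinary users – the point both Hannigan and Comey gloss over – here is a minimal sketch using the freely available Python cryptography package. It is an illustrative example of off-the-shelf tooling, not anything specific to the apps Hannigan mentions.

```python
# A minimal sketch of off-the-shelf authenticated encryption, using the
# freely available Python "cryptography" package (pip install cryptography).
# Illustrative only: the same few lines protect a journalist's notes,
# a dissident's messages, or entirely mundane everyday communications.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # a random secret key
box = Fernet(key)

token = box.encrypt(b"nothing to hide, plenty to protect")
print(token)                         # ciphertext: unreadable without the key
print(box.decrypt(token))            # only the key holder can recover the text
```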

What these significant omissions highlight is that, as this Guardian editorial points out:

Mr Hannigan’s late-in-the-day call for GCHQ to get engaged appears unattached to any real willingness to reflect on whether the blanket snooping which caused such outrage around the world last year needs to be modified, or even subjected to a more testing standard of accountability.

The intelligence agencies are right, Hannigan says; the rest of us are wrong. Encryption only protects criminals. Therefore, it is bad and we must not use it. Technology companies should work with the agencies rather than provide better security for their users (the majority of whom aren’t terrorists). This is far from a real willingness for reflection and debate, and it seems that Mike Harris, campaign director of Don’t Spy On Us, is right to suspect that Hannigan’s suggestions really are

the intelligence agencies getting on the front foot seeking new powers and wanting to put some of the programmes Snowden uncovered on a legal footing.

And still Hannigan’s “message may prove a winner to many people who hear it, who share the view that further powers are needed to tackle the extremist threats faced by the UK at home and abroad.”

Powers which, if expanded, will certainly come at the expense of our sadly undervalued right to privacy, and, if Theresa May has her wish, at the expense of our mobile coverage as well. As to the latter, over to the wonderful David Mitchell who says all there is to say on May and her own outrageous views.

“I can neither confirm nor deny the name of a hypothetical programme” – How to avoid talking about surveillance.

*** Characters and events in this blog post are one hundred percent real, and any resemblance to fictional persons is purely coincidental. ***

 

Let’s try something we have never tried before. Let’s not talk about surveillance. Let’s avoid talking about it at all costs. Both the arguably legitimate, targeted kind that is subject to the rule of law and meaningful oversight (because that’s secret and anyway, it probably doesn’t exist) and the other kind – that is, the indiscriminate, untargeted mass kind, subject to minimal or no oversight at all. The kind that Edward Snowden blew the whistle on and the lid off.

Edward Snowden. Damn him. He is a major problem. He wants to talk about surveillance all the time. And he wants everyone else to talk about it too. Because he thinks people should know about “what is done in their name and that which is done against them,” so they can have a public debate. I know, right? A public debate about something that no one’s supposed to know about. Well done, Snowden! Who is that clown anyway?

Some hacker who never finished school and now lives in Moscow. Yeah, exactly! Probably works for the Russians! Probably has worked for the Russians all along. Specifically cultivated by them to commit all sorts of shenanigans on Vlad Putin’s behalf!

 

Rule #1 when avoiding surveillance talk: always discredit the leaker

Problem with that Ed Snowden is: some people are actually listening to him! They want to join in the debate! So to avoid talking about surveillance one thing we must do is make sure that people start questioning Snowden. His motives, his personality, the lot. Well, we cannot bloody well kill him, can we? Even though some people have been saying that killing him is the one thing they would really like to do. US Congressman Mike Rogers has even suggested it might be the best thing to do in the name of justice, while former CIA director James Woolsey insists that Snowden should be “hanged” if convicted of treason.

But no, let’s not do that. It does seem a tad medieval. We might turn him into some Robin Hood-type figure. No, let’s discredit him instead. People love talking about personality. So let’s draw as much attention to Edward Snowden’s personality as we can. Let’s question his honesty, his sanity if we have to. Many government officials have already shown us how that’s done. For example, Mike Rogers keeps saying that Snowden is a traitor. So let’s insist that if Snowden were sincere – as opposed to a “traitor” and a “coward” and a “spy” – he would “come home” to the US and “face the music”. We could even offer to personally pay for his plane ticket. Snowden’s friends, lawyers, and disciples (aka Snowdenites) will argue of course that there is no public-interest defence for people charged under the Espionage Act and that therefore Snowden would not be allowed to make his case in front of a jury of his peers. But that’s just nitpicking.

Calling Snowden arrogant could also work. Or we could suggest he will end up a drunk – lonely, unhappy and bored. Whatever you say, just make sure people are too preoccupied with Snowden himself to pay much attention to what he has revealed, and Bob’s your uncle. And don’t tell anyone his girlfriend is now living with him! Domestic bliss doesn’t fit the picture we’re trying to paint here! Discrediting Snowden is mandatory when avoiding talking about what’s wrong with government and intelligence-agency surveillance.

 

Rule #2 when avoiding surveillance talk: always use semantics

However, it won’t work on everyone. People like, I don’t know, Privacy International, Liberty, the EFF, Amnesty International or those dumbasses at WikiLeaks will probably still want to talk about surveillance. So confound them. Use semantic acrobatics. That usually works to appease people. Here is some reassuring lingo you can use:

  • Surveillance: tell people that they aren’t subject to surveillance, because we aren’t listening to their calls, only accessing their metadata. They won’t really get that this means we’re “access[ing] all the data about who [they] talk to, where [they] are and what [they] do” (see the sketch after this list for what “only metadata” can contain).
  • Relevant: tell people that we only collect “relevant” data. Don’t tell them that this means “[e]verything. It might become relevant in the future, thus it’s relevant today.”
  • Inadvertent: tell people that we may have accidentally done something when really “we did [it] on purpose on a massive scale”.
  • Without a warrant: tell people we never collect their data without a warrant. This means the opposite.
  • External communications: tell people we never monitor the communications of people inside our own countries. Do not tell them that this excludes people in our own countries when they are communicating with someone outside the country or via a server that is located outside the country (think Facebook chat).
  • No: when asked if we collect information on innocent people in bulk, always say no, even though this is a lie. In truth, no “means ‘fuck you.’”
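For illustration only: the record below is invented, not drawn from any disclosed programme, but a single “harmless” metadata entry for one phone call might look something like this.

```python
# Hypothetical "just metadata" record for a single phone call. No content of
# the conversation is stored, yet the record still shows who spoke to whom,
# when, for how long, and roughly where. Field names and values are invented
# for illustration (the numbers come from the UK's fictional drama ranges).
call_metadata = {
    "caller": "+44 7700 900123",
    "callee": "+44 20 7946 0999",
    "start_time": "2014-11-22T02:13:00Z",   # a 2am call tells its own story
    "duration_seconds": 1260,
    "cell_tower": "LDN-CAMDEN-0042",        # roughly where the caller was
}

# A handful of such records per day, collected over months, maps a person's
# relationships, routines and movements without "listening" to anything.
print(f"{len(call_metadata)} fields stored, zero words of conversation")
```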

For further inspiration, I recommend a dictionary of what officials are really saying.

 

Rule #3 when avoiding surveillance talk: always insist that everything’s legal

Granted, semantics might not fool the lawyers or linguists among the human rights and civil liberties crowd, but they will have a hard time proving that anyone who wants to complain about being surveilled has legal standing anyway. We can always pull the classified-information card.

We might also want to consider changing legislation, like our friends in Oz. They have basically outlawed whistleblowing and a free press. Or the Americans. They prosecute whistleblowers so aggressively that government employees are increasingly afraid of speaking out.

The Germans are currently trying to curb parliamentary oversight and surveillance talk. Bless them, they have a lot to learn. But they’re still haunted by their Stasi past, so cut them some slack. Especially because their foreign intelligence agency has a marvellous programme with access to a massive fibre-optic node in Frankfurt AND satellite networks. Satellite companies’ CEOs are all under surveillance. You should have seen the look on their faces when they found out! And anyway, there are operatives inside all major telcos. And the telcos are quite happy to pass on information, even in excess of what’s legally required. Oops, perhaps I shouldn’t have said that. That’s another lawsuit in the making right there. I am guessing Privacy International, Liberty and Amnesty International. They’re currently banging on about data sharing between the agencies and how it’s helping us sidestep legal restrictions. Sidestep being the operative term, guys, not violate. See what I did there? Semantics.

So people like Eric King from Privacy International may argue something along the lines of:

We now know that data from any call, internet search, or website you visited over the past two years could be stored in GCHQ’s database and analysed at will, all without a warrant to collect it in the first place.  It is outrageous that the Government thinks mass surveillance, justified by secret “arrangements” that allow for vast and unrestrained receipt and analysis of foreign intelligence material is lawful.

Here’s one for you, Mr King: it’s lawful because we say it is. We’re not going to argue the point with you:

It is a longstanding policy that we do not comment on intelligence matters. Furthermore, all of [our] work is carried out in accordance with a strict legal and policy framework which ensures that our activities are authorised, necessary and proportionate, and that there is rigorous oversight.

Good luck proving that one wrong. It’s all secret, so fat chance of proving anything at all without touching on classified information, and anyway: the general public mostly don’t give a shit. They go all cross-eyed when you mention metadata and they fully expect social media to be monitored anyway. They think that if they have nothing to hide, they have nothing to fear. We have taught them well. Manipulating online polls clearly works. If they do ask questions, just pull the national-security card and talk about the “bad guys”. Raise the terror threat level and issue a travel warning.

 

Rule #4 when avoiding surveillance talk: if all else fails, be absurd

And if all else fails, just hit them with full-on absurdity. Like during that Investigatory Powers Tribunal hearing in the UK when there was “some jovial discussion … over the pronunciation of [GCHQ’s wiretapping programme] Tempora”. Should the emphasis be on the “Tem” or the “por”? Should it be pronounced \tem-ˈpu̇r-ə\ (Tempura) or \tem-ˈpɔːr-ə\? Dickheads. Probably hadn’t had lunch and had sushi on their minds. Anyway, the response of choice to something like that is that we cannot confirm or deny the name of a hypothetical programme, but that if it did indeed exist, it would be carried out legally. Insist that the Snowden documents are hypothetical. If we neither confirm nor deny, “there will be nothing to be declared unlawful”.

Boom! That’s how it’s done!

Us: one – Snowden and the general public: nil.

Now let’s all go have Tempura.

 

Another Disclaimer:

No Tempura was harmed during the production of this blog post.