“Yahoo webcam images from millions of users intercepted by GCHQ” gasped the Guardian headline.
The article went on: “The system, eerily reminiscent of the telescreens evoked in George Orwell’s 1984, was used for experiments in automated facial recognition, to monitor GCHQ’s existing targets, and to discover new targets of interest. Such searches could be used to try to find terror suspects or criminals making use of multiple, anonymous user IDs.”
This is not reporting; it is manipulative commentary. “[E]erily reminiscent of the telescreens evoked in George Orwell’s 1984” is a prairie dog-whistle phrase, designed to make you pop up out of your hole in fear. Which in and of itself is a kind of Orwellian manipulation, if you think about it.
The privacy risks of mass collection from video sources have long been known to the NSA and GCHQ, as a research document from the mid-2000s noted: “One of the greatest hindrances to exploiting video data is the fact that the vast majority of videos received have no intelligence value whatsoever, such as pornography, commercials, movie clips and family home movies.”
That’s a pretty neat trick: on the surface the document is saying that the videos have no intelligence value, but what the article is really trying to communicate is: they can see your naughty bits. Only between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains “undesirable nudity”, yet most of the rest of the article is devoted to discussing the policies and restrictions around viewing it. The article is preying on people’s fears of having their homemade porn discovered in order to gin up protest that this kind of surveillance is utterly intolerable and must be stopped.
Don’t take this personally, people, but your webchat porn – no matter how talented you think you are – when taken in bulk is not titillating; it’s a hindrance. If seen over and over again – which it is not: there is not time enough in the universe for that – it is for the most part the same (the porn corollary of “there are only a few basic plots in all of literature”) and thus, in bulk, rather boring. Viewing a few chats of someone you personally might or might not have a connection with is titillating; millions of images blend together into a rather banal statistic.
The problem is that the body politic is conflicted. There is a tension between people wanting their government to be able to detect and hunt down terrorists, and yet not wanting to be part of the sample pool in which those terrorists can actually be distinguished from everyone else.
The real fear that people – maybe including the people behind this article – have is that the data collected might be used for a purpose other than, say, the honorable one of catching terrorists, or child pornographers, or sex traffickers. It could potentially be used to destroy the careers of exhibitionist politicians, or spy on journalists, or uncover undue influence of business on government leaders, or what-have-you. Fair enough. But that calls for restrictions on the use of the data, not necessarily on the technique. For any terrorist- or criminal-detection technique to reach a usable level of accuracy, it has to be fine-tuned to distinguish communications of interest from the background stream, and that means comparing it against a representative sample of communications data. To paraphrase the comic strip character Pogo, we have met the data streams, and they are us.
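To make that fine-tuning point concrete, here is a minimal sketch – in Python, with invented score distributions standing in for real measurements – of why calibrating a detector requires ordinary traffic: the alert threshold that keeps false alarms tolerable can only be chosen by measuring how innocuous communications actually score.

```python
import random

random.seed(42)

# Hypothetical match scores in [0, 1]. Both distributions are invented
# for illustration; a real system would have to measure them empirically.
background = [random.betavariate(2, 8) for _ in range(100_000)]  # ordinary users
targets = [random.betavariate(8, 2) for _ in range(1_000)]       # known targets

# Choose the alert threshold so that at most 0.1% of ordinary traffic
# triggers an alert -- a choice that is only possible if we have a
# representative sample of ordinary traffic to measure against.
background.sort()
threshold = background[int(len(background) * 0.999)]

detection_rate = sum(s >= threshold for s in targets) / len(targets)
print(f"threshold={threshold:.3f}  detection rate={detection_rate:.0%}")
```

The point of the sketch is the dependency, not the numbers: without the background sample – which is to say, without us – there is no principled way to set the threshold at all.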
This horrified and, yes, to some degree valid concern over privacy effectively diverts our eyes from the prior – and, to my mind, real – question: is facial recognition even a valid technique for identifying terrorists, or criminals, or “someone doing something”? The article alludes to some of the problems: “The best images are ones where the person is facing the camera with their face upright”, or attempts to trick iris recognition systems by putting contact lenses on mannequins.
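Validity, moreover, is not just about image quality; the arithmetic of rare targets works against mass screening. Here is a back-of-the-envelope sketch – every figure below is an assumption chosen for illustration, not a number from the article – of how even a fairly accurate recognizer, run over millions of innocent faces, buries its true hits under false ones.

```python
# Base-rate arithmetic for bulk face matching. All figures are
# assumptions chosen for illustration, not reported numbers.
population = 1_800_000    # accounts swept up in one collection period
true_targets = 100        # actual persons of interest among them
hit_rate = 0.90           # chance the system flags a true target
false_match_rate = 0.001  # chance it flags an innocent user

true_hits = true_targets * hit_rate
false_hits = (population - true_targets) * false_match_rate
precision = true_hits / (true_hits + false_hits)

print(f"alerts: {true_hits + false_hits:,.0f}, "
      f"of which genuine: {precision:.1%}")
# -> alerts: 1,890, of which genuine: 4.8%
```

At those assumed rates, roughly nineteen out of every twenty alerts point at an innocent user – which is exactly why the question of validity deserves to come before the technological one.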
Maybe those playing at home could avoid being identified with their web porn by collectively agreeing to wear Guy Fawkes masks. No, wait – that might make you a target. How about female masking? But perhaps that takes the exhibitionist fun out of it. Seriously, though, if someone didn’t want to be recognized, could they use stage makeup? Or fish-eye or fun-house lens apps to distort their features? How about just using an avatar? On the other hand, maybe eventually the detection of altered facial features would itself end up being used as a flag of potentially suspicious…something. In the race to hide and reveal, it’s all Spy vs Spy.
Solving the technological challenges, though, is beside the point; the underlying assumption of this surveillance seems to be that terrorists or criminals chat online with consistent, recognizable faces over clear, unencrypted channels. Really? I’d like to see evidence of the degree to which that assumption is valid before the continued public expense of development and implementation can be justified.
I don’t know what the right answer is. But I do know that people need to be on guard against emotional manipulation masquerading as “factual reporting”, diverting our attention from asking the right questions.