Archive for December, 2017

The Right to be Forgotten

Friday, December 15th, 2017

This post is mostly a response to this article by Jeffrey Toobin.

The “right to be forgotten” has always struck me as utterly bizarre. The European Court of Justice ruled that the right does not apply to newspaper articles, but does apply to search engines. The former are protected by the freedom of the press, which ranks higher than the right to be forgotten, yet Google must remove certain search results pointing to completely legal news articles. Google is also responsible for deciding which requests to accept, and these requests remain secret. The problem is that there is no way to check what Google has removed and appeal the decision if the censorship harms you. Google does inform webmasters when pages on their sites are hidden, but it is not required to do so, and some folks argue that Google shouldn’t even be allowed to do that. This right to be forgotten is so complete that even the law has no record of what’s forgotten.

The more I think about it, the right to be forgotten itself doesn’t seem completely unreasonable, even if I really don’t like the EU implementation. If someone was wrongly accused of a crime and then exonerated, news stories about the accusation appearing in search results would be unjustly damaging. Part of the issue is that people actually believe what they see on Google, but I don’t think there’s any way to change that. We can’t expect newspapers to update old articles when the eventual outcome of a situation turns out very different from what an article suggests; would news articles about the OJ Simpson trial have to carry little disclaimers saying that he was not found guilty? Does OJ Simpson qualify for the right to be forgotten? Sure, he’s famous, and Google has higher standards for forgetting well-known people, but OJ never wanted to be well known for a crime he “didn’t” commit. Perhaps he forfeited this right when he wrote If I Did It, or did he regain it when he lost the rights to the book?

Of course there is already the well-documented Streisand effect, in which some prominent figure or group attempts to suppress certain information, including through legal means, and the cover-up has the opposite effect. This is perhaps the reason why takedowns are not listed, although the stated purpose of a takedown is not to make the information disappear, but simply to make it less prominent in search results. If Google had a searchable directory of takedowns (not covered by the standard Google search), the process would be transparent while still preventing casual discovery of the forgotten secret. Google is already not expected to know exactly when a user comes from a jurisdiction in which something has been “forgotten,” and as far as I know it has not been forced to take down results outside the jurisdiction of a given law (at least not regarding claims of the right to be forgotten). How would European courts respond if google.com published a list of takedown requests visible only to users who appear to come from outside the EU? Perhaps these requests have legally binding confidentiality clauses that Google is forced to comply with, but that seems quite odd, as the requests are entirely between Google and the person wishing to be forgotten. They are not sealed court records. They are not court records at all. It seems to me that Google is doing its best to comply without accepting every single request, playing its part as an enlightened monopoly.

This is the last point I want to make in this post: none of this would work if Google weren’t an effective monopoly. I’ve noticed in the past that the best way to avoid any legally required Google censorship is to use any other search engine. It works really well; nobody thinks about erasing search results from forgotten search engines. If 90% of searches were shared relatively equally among a handful of companies instead of just Google*, then anyone who wanted to be forgotten would have to file takedown requests with five different companies using five different platforms. Presumably these companies wouldn’t all honor exactly the same requests, and so a result might be gone from Google but still up on DuckDuckGo and Yahoo!. For the semi-forgotten that’s probably still better than nothing, but it’s not really the same effect as in a Google-dominated world. It also places a lot more onus on the requester, and I wonder whether in such a world the ruling would have expected the requester and the companies to handle this themselves. Perhaps the courts or some government agency would be responsible for centralizing the requests and sending them off to all the search engines; it would be very unreasonable to force every search engine to employ the lawyers needed to deal with requests to be forgotten. Google probably likes the current arrangement because it makes it far more difficult for a new competitor to enter the scene.

In other news, the BBC published a list of articles Google has removed from certain search results.


*A Google search indicated that this is Google’s current market share.

Responsible Cyberwar

Wednesday, December 13th, 2017

What counts as war and what doesn’t? This seems to be the main question in political science regarding cyberconflict. From the perspective of governments the question is more like “how much can we get away with?” From what I know, espionage has typically not been treated as an act of war, although the unlucky spies didn’t fare too well regardless. Espionage often involves infiltrating the enemy (and friends!), and if and when those spies get caught, the target country gets to decide what to do with them. In this way there’s accountability. In areas like traditional signals intelligence (e.g. terrestrial radio), it would be quite odd to view listening in as an aggressive act. The hacking of computer systems to gain intelligence seems different from all of these examples; there’s infiltration but no human culprit. As for destructive hacking, I see no significant difference between it and dropping bombs or destroying paper-based information. Just because you use a computer to do it doesn’t change the fact that you made something happen in the physical world. Cyberattacks can be a lot more specific in what they destroy: in the case of Stuxnet and the Iranian centrifuges, the malware broke only the machines. The traditional variation of the attack would probably have involved several thousand pounds of bunker-busting bombs and left a giant smoking hole in the ground. It would also have killed people, which would probably have been seen as a far bigger affront. However, cyberattacks are not inherently narrow in their scope of destruction; they can miss their intended targets, or the cyberarms can fall into the wrong hands.
The WannaCry attacks are an excellent example of this: NSA tools were leaked and used by criminals to write malware that shut down hospitals all over the world.* Thinking about it now, this seems very analogous to the Obama-era “Fast and Furious” operation that resulted in the arming of Mexican drug cartels (of course that wasn’t the point; everything just went terribly wrong). The big problem with cyberarms is that if they are leaked they can be copied and adjusted to attack entirely different targets. Once in the wild, it’s nearly impossible to stop their spread.

What I’m trying to get at here is that cyberattacks aren’t about war; they’re about creating weapons that will inevitably get into the wild, where they can be distributed without limitation. Anyone who thinks they can control cyberweapons and simultaneously use them is delusional. Lucas Kello notes that the costs of cyberattacks include losing precious zero-day vulnerabilities, because those are patched once an attack reveals their use. But the cost is higher than that: until patched, these zero-days can be used by other countries, criminal gangs, or bored teenagers. That’s a really big problem. A responsible government would surely not want such a thing to happen. Nobody would drop reusable bombs, because that would be really stupid.

A responsible government has an alternative option: instead of setting malware-weapons loose, it could notify developers of the same vulnerabilities. Any country with an active “cyberattack” squad already has most of this in place. They already know how to find bugs; the only remaining step is to tell someone who can do something about it.

But what about irresponsible governments? They will certainly use cyberweapons if they have them, so won’t those bugs get out into the wild anyway?

Not if they’re already patched.


*In case I’m wrong and there really was no direct link between leaked NSA tools and WannaCry, such a scenario could happen just as easily, much as a gun-running sting operation could end up arming the cartels without catching their leaders.