Archive for November, 2017

It Can’t Be Over Already

Tuesday, November 28th, 2017

Today was the last session of this fall’s Freshman Seminar. I can’t believe we have reached the end of the semester. Each year’s class is wonderful, but there was something about this group of students. Engaged. Funny. Thoughtful. Spirited. Excited about the future and the potential it holds for all of us. I hope each of you keeps in touch and calls on me when you have something to share or want to just talk through different opportunities. Thank you for making 50N something to which I looked forward each and every week.

On the other hand, as you probably sensed in class today, I am amazed that the exuberance around blockchain and crypto-currencies continues to grow. When will we see this exuberance pop? I don’t know. Jim would know better than me, but there are a number of hard distributed computing problems that seem to have been swept under the rug, which might be ok in small networks, but worry me in large ones. I was also struck while reading by the lack of careful consideration of the threats against blockchain and the systems built upon it. Maybe such careful threat modeling is done elsewhere, but it continues to feel like we’re repeating the same problematic approach to the world that we saw throughout the development of the Internet and its applications: rush to trumpet the functionality and worry later about the threats.

The first Bitcoin paper says that double spends are not a problem because you can ignore them. But can an adversary flood the network with double spends? Would such an approach become a type of denial of service? Is a double spend request just a normal timing problem in the network, or is it something nefarious? This is just one example I wish I had seen considered.
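To make that worry concrete, here is a toy sketch (hypothetical, and nothing like real Bitcoin node code) of a “first-seen” rule: the node remembers which coins pending transactions have already claimed and ignores any later transaction that conflicts. Note that even the “ignore” path costs the node a check per flooded transaction, which is exactly where the denial-of-service question arises.

```python
class ToyMempool:
    """First-seen rule: remember which coins ('outpoints') are already
    spent by a pending transaction, and drop any later transaction that
    tries to spend one of them again."""

    def __init__(self):
        self.spent = set()      # outpoints claimed by accepted transactions
        self.rejected = 0       # how many conflicting transactions we saw

    def accept(self, tx_inputs):
        # A double spend reuses an outpoint we have already seen.
        if any(outpoint in self.spent for outpoint in tx_inputs):
            self.rejected += 1  # conflict: ignore the double spend
            return False
        self.spent.update(tx_inputs)
        return True

node = ToyMempool()
node.accept({"coin1"})   # True  - first spend of coin1
node.accept({"coin1"})   # False - double spend, ignored
# An adversary can still force the node to run this check for every
# flooded conflict - 'just ignore it' is not free.
```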

In a related way, I found the Ether Thief paper fascinating. It appeared that no one deeply involved with the development of Ethereum ran a tabletop exercise to determine and practice what every person involved should do when the inevitable attack happens. You hope your bank is never robbed, but you can bet your bottom dollar that banks run tabletop exercises regularly to make sure that everyone knows what to do when a robbery happens. I suggest that, before you put your money into one of these blockchain-based systems, you ask what they’ve done to protect themselves against theft and how often they rehearse their response to an actual one. No theft will be a textbook theft in all ways, but preparation matters and can help with prevention. The Harvard Kennedy School has experts in the field of Crisis Leadership, and while it might be no fun to be grilled by the likes of Professor Dutch Leonard in one of these mock sessions (I speak from experience), you learn a lot about what you’re ready to handle and what you need to prepare yourself to handle.

Lastly, I’d like to touch upon the Iansiti and Lakhani paper. I thought it was quite good for what it was: an argument against blockchain as a disruptive technology. However, I don’t think blockchain is like TCP/IP. TCP/IP had competitors, but forks of the technology weren’t a long-term issue. If you found a mistake in the current implementation of TCP/IP, you fixed it, and everything easily moved over to the new implementation. If you had a dispute about which implementation worked better, you put both out there and one eventually won out (everyone moved over to it). Of course, some things are harder to change than others (e.g., consider how long it took to move from IPv4 to IPv6). Even so, we haven’t experienced in TCP/IP the threat of many persistent forks in the way we’re seeing in crypto-currencies. The network effect made the Internet take off quickly, and that effect was fueled by what was essentially a single dominant foundational technology. I am not so convinced the same will hold in the blockchain world.

But then, I’m old. I have often been wrong.

Make Up Your Mind

Wednesday, November 22nd, 2017

I’m going to cheat a bit this week and write a short post. The reason is that I want you to read the article titled “Should Facebook and Twitter Be Regulated Under the First Amendment?” by Lincoln Caplan. It does a better job than I ever could at explaining the seeming contradictions (e.g., President Trump can announce U.S. government policy on the Twitter account @realDonaldTrump and these announcements must be preserved under the Presidential Records Act, but the president can block other accounts from responding to his tweets because some lawyers agree that the account @realDonaldTrump is the person and not the president, and thus his actions on Twitter are protected under the First Amendment) and significant differences between the United States’ and Europe’s free and hate speech laws (and the resulting effect on social media companies and their different approaches).

What a quagmire.

As we discussed this week, what happens online matters. It matters to our wellbeing as individuals, to our ability to productively interact with each other in the real world as communities, and, as the article states, to our “ability to [have] an informed citizenry” as a democratic nation. The more we discuss these topics, the more convinced I become that social media companies aren’t wires, “passive conduits,” or simple gathering spaces. They are collections of algorithms that have real effects on us and the world in which we live.

What do you think? What will you do?

Toward Better Corporate Security

Thursday, November 16th, 2017

One of the topics touched upon briefly this week was how we might approach the securing of a nation’s corporate sector against cyber attacks, whether from foreign powers or organized crime. For the purposes of a simpler discussion, let’s assume that the corporations in question reside entirely within our nation (call it the U.S.). In the physical world, U.S. corporations rely on the U.S. military to protect the nation’s borders against foreign incursions, on the National Guard and local police forces to protect their property and maintain peace and order, and on their own contracted security personnel for day-to-day security precautions (e.g., to ensure that only authorized people enter the corporate facilities). What’s the equivalent in cyberspace?

Well, one thing to notice is that the training and the capabilities of the different forces in the physical world decrease in sophistication as you go from the highly trained U.S. military forces down to your run-of-the-mill security guard. You could ask the U.S. military to cover your company’s day-to-day security precautions, but it wouldn’t be a good use of the skilled people in the military. Plus, as was mentioned on Monday, corporations probably don’t want the U.S. military tromping around their site. It seems wrong to Americans that the U.S. government would display that much of a show of force within the country’s borders. In cyberspace, the situation is that much worse because you can’t just post guards at the door and around the property. Anti-virus software, for example, works because it scans everything on your system and is trusted more than anything else in your system (except the operating system kernel). I bet most companies would not want the U.S. military looking through every one of their drawers and files.

If we can’t rely on the sophisticated expertise of the U.S. military’s cyber division, what should one do? Well, I founded a software security company back in 2001 with this mission, and I thought I’d show you some of what I wrote 15 years ago on this question. This was a trip down memory lane for me. I hope you enjoy reading it (unedited), even if it is far from perfect.

**** Our [company’s] beliefs and philosophies

  1. Our business focus is enterprise security. This security focus encompasses the protection of all sensitive digital documents within the enterprise as well as the operation of the enterprise’s distributed computing infrastructure. It does __not__ include protection of digitally-based products (e.g. music files) sold, rented, or otherwise involved in a financial transaction by the enterprise to consumers or other corporations. Though our technology can be used for such purposes, the balance between security and ease of use differs in these two market opportunities.
  2. We are interested in averting strategic disruption (i.e., loss of strategic information) as well as operational disruption (i.e., loss of some or all of the capabilities of the enterprise’s computing infrastructure); disruption that occurs through the unintentional misuse or even malicious use of corporate information or resources. We are __not__ directly addressing the wide range of illegal activities associated with digital commerce.
  3. We believe that security is dynamic. The security concerns that enterprises have will change over time, and thus our security solution must be flexible and extensible to adapt to these changes. The perceived importance of a security threat and the willingness of the enterprise and its employees to change their behavior to protect the enterprise and themselves against particular threats varies, and thus our security solution must support this variability directly.
  4. It is very difficult for an enterprise to quantify how much it should spend on security, and thus enterprises typically purchase a security product only if it is known to be a “best practice.” To become a best practice, a security product must be widely deployed. How does a security product become widely deployed if it is difficult to quantify the benefit of security? The answer, as demonstrated by other successful security products like anti-virus solutions and VPNs, is to provide a meaningful level of security while simultaneously being easy to deploy and essentially transparent to the enterprise user. These three axes work together to define what we call the Security Success Triangle (SST). In our business space, the SST says that we must avoid operational disruption due to the deployment or use of our security solution, since operational disruption is one of the two reasons why the enterprise is purchasing our solution in the first place.
  5. Once we have become a best practice, the SST says that we can increase the amount of meaningful security our solution provides. Besides the powerful and profitable business models that this enables, this observation again reinforces the need for our approach to be flexible and extensible.
  6. A focus on ease of deployment and usability also implies that our security solution must not be tightly coupled to the rest of the enterprise’s computing infrastructure, except when the security solution is enforcing the security policies of the enterprise. In other words, our security solution should be tightly coupled to the enterprise’s applications when those applications are running, but it should be only loosely coupled to the application infrastructure for purposes of maintenance and upgrades, etc. Note that this coupling eases the maintaining and upgrading of both the security infrastructure as well as the application infrastructure.
  7. The best way to obtain a security solution that is easy to deploy and transparent to the end user is to implement the security solution in such a manner that it is possible and straightforward to understand the user’s intentions and to be able to differentiate between normal and abnormal behavior. Security solutions that are implemented far from the end user and deep in the lower layers of the computing infrastructure cannot achieve the level of understanding and differentiation that we desire. Thus, we are driven to an approach to enterprise security that can track and affect the operations performed within applications. (Say something stronger about the need to avoid false positives and user dialog boxes?)
  8. Our business is focused on the distributed computing infrastructure in today’s enterprises, e.g., devices like personal computers, laptops, and PDAs. This infrastructure is not well covered by today’s security solutions, especially when it is not clear who owns or has the right to configure (or even understands how to configure) the device. Personal computing devices are just that–something that employees would like to use for both personal as well as corporate computing. Our security solution must support the often-conflicting needs and requirements created by these two worlds. It is not a viable solution to force the user to work with two separate sets of applications.
  9. Since our business focus is on the enterprise’s distributed computing infrastructure, our approach must support a wide variety of platforms. We cannot rely on special hooks that are unique to one application or computing platform. Also, we must minimize the work necessary to port our infrastructure to new platforms.

There it is. It is missing one thing that the company learned as the business started to grow: a capability to assess what went wrong when something eventually did go wrong. For example, companies definitely wanted to know who accessed what files and when, so that when a file’s security was breached, they could review the operations associated with that file and determine how their (supposedly correct) security policy had failed.
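As a rough, hypothetical illustration of that missing capability, an append-only audit log needs little more than a record of who touched which file, how, and when, plus a way to replay a single file’s history after a breach. All names in the sketch below are made up.

```python
import time
from collections import namedtuple

# One audit record: who did what to which file, and when.
Access = namedtuple("Access", "user file op ts")

class AuditLog:
    """Append-only record of file operations, so that after a breach
    you can reconstruct who touched a file and in what order."""

    def __init__(self):
        self._entries = []

    def record(self, user, file, op):
        # Entries are only ever appended, never edited or deleted.
        self._entries.append(Access(user, file, op, time.time()))

    def history(self, file):
        # Replay every operation on one file, in chronological order.
        return [e for e in self._entries if e.file == file]

log = AuditLog()
log.record("alice", "plans.doc", "read")
log.record("bob", "plans.doc", "write")
log.record("alice", "notes.txt", "read")
[(e.user, e.op) for e in log.history("plans.doc")]
# → [('alice', 'read'), ('bob', 'write')]
```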

Community Standards

Wednesday, November 8th, 2017

I want to publicly thank Professor Jonathan Zittrain (JZ) for his wonderfully informative and absolutely riveting discussion on the topic of Internet governance. It’s not a topic around which it is easy to get your arms. It’s a mix of individual actors, corporate entities, government agencies, and open communities. There is nothing straightforward about this conglomeration of actors, and I’ve always struggled to know where to start. Luckily, Jim and I know JZ, and it turns out that that’s always a great place to start. So, thank you, JZ!

Much of his presentation — on how jurisdiction and regulation happened as the Internet evolved — was told through the stories of a few key people. It was a great way to give all of us a narrative foundation on which we could anchor further discussion. And that’s what I’d like to try to do here!

While most of Monday’s discussion looked at the past, this issue remains important as the Internet continues to evolve, and some of the most interesting pieces of the current evolution take place in our social media platforms. This got me thinking, “How does Facebook handle this ongoing evolution? Or more specifically, how has Facebook’s own regulation of its platform evolved?”

While I could call up former students who currently work at Facebook, I took a different approach: I decided to look at how the Community Standards page on facebook.com has changed over time. A great way to do this is to take advantage of the Internet Archive Wayback Machine.

The first question I investigated was simply, “How much did the Community Standards page change over the nearly seven years captured by the Wayback Machine?” Instead of looking at every minor change in the page, I focused on the point where the look of the page changed dramatically to the format it has today (see the page as it looked on March 14, 2015 and then its new, basically current look on March 18, 2015). Then I asked, “How different is the content of the page today as compared to the first captured day of its current look?”
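For anyone who wants to reproduce this exercise, the Wayback Machine exposes a CDX API that lists every capture of a URL. The sketch below builds such a query and counts distinct captured versions from a canned response; the endpoint and field names reflect my understanding of that API, so treat them as assumptions to verify before relying on them.

```python
import json
from urllib.parse import urlencode

# NOTE: endpoint and parameter names below are my understanding of the
# Wayback Machine CDX API; verify them before relying on this sketch.
CDX_ENDPOINT = "http://web.archive.org/cdx/search/cdx"

def snapshot_query(page_url, start="2011", end="2017"):
    """Build a CDX query listing captures of page_url between two years."""
    params = {"url": page_url, "from": start, "to": end,
              "output": "json", "fl": "timestamp,digest"}
    return CDX_ENDPOINT + "?" + urlencode(params)

def count_distinct_versions(cdx_json):
    """CDX JSON output is a list of rows; row 0 is the header.
    The digest changes whenever the captured content changes."""
    rows = json.loads(cdx_json)[1:]
    return len({digest for _, digest in rows})

# Canned response standing in for urllib.request.urlopen(...).read():
sample = json.dumps([["timestamp", "digest"],
                     ["20150314000000", "AAA"],   # old page layout
                     ["20150318000000", "BBB"],   # redesigned page
                     ["20171101000000", "BBB"]])  # content unchanged
count_distinct_versions(sample)  # → 2
```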

I was surprised by the answer: Very little. In my mind, a lot has happened in the 32 months between March 2015 and November 2017. This doesn’t mean that a lot didn’t happen behind the scenes (i.e., in the code that automates some of the process, and in the policies that guide the “dedicated teams working around the world to review things you [the user] report to help make sure Facebook remains safe”). In a moment, I’ll dig into this behind-the-scenes question, but first I’ll summarize the differences between the content of the Facebook Community Standards page in March 2015 and November 2017.

Briefly, there’s now a video link to help the users “learn more about how it works” or in particular how Facebook decides to remove (or not remove) content, described at a high level with an emphasis on why rather than how. Facebook’s claimed mission has changed slightly, from “Our mission is to give people the power to share and make the world more open and connected” to “Our mission is to give people the power to build community and bring the world closer together.”
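Since both mission statements are short, the change is easy to see with a word-level diff. Here is a minimal sketch using Python’s difflib on the two sentences quoted above.

```python
import difflib

# The two mission statements (March 2015 vs. November 2017), as quoted above.
old = ("Our mission is to give people the power to share "
       "and make the world more open and connected")
new = ("Our mission is to give people the power to build community "
       "and bring the world closer together")

# Word-level diff: tokens prefixed '- ' were removed, '+ ' were added.
diff = list(difflib.ndiff(old.split(), new.split()))
changed = [tok for tok in diff if tok.startswith(("- ", "+ "))]

# Overall similarity of the two statements (0.0 = nothing shared, 1.0 = identical).
ratio = difflib.SequenceMatcher(None, old.split(), new.split()).ratio()
```

The shared opening (“Our mission is to give people the power to …”) dominates the ratio; everything after it was rewritten.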

There was also an important addition in the second paragraph of the page stating, “Sometimes we will allow content if newsworthy, significant or important to the public interest – even if it might otherwise violate our standards.” Very little else changed – only the title of the category “Nudity” under “Encouraging respectful behavior,” which became “Adult Nudity & Sexual Activity.” So, the biggest change according to these differences is the power of Facebook to overrule its own Community Standards. Probably a lot more could be said about this change.

But what actually happens behind the scenes? In the early days of the Internet, these sorts of questions were debated in open forums like the IETF community meetings. The best I could find around Facebook’s Community Standards work (I will admit that I didn’t spend more than an afternoon looking) were the following two articles:

Bickert talks about how hard it is to draw the line and how daunting the task turns out to be on a social network as large as Facebook’s. The key sentence in the post for me was, “We don’t always share the details of our policies, because we don’t want to encourage people to find workarounds – but we do publish our Community Standards, which set out what is and isn’t allowed on Facebook, and why.” I encourage you to think about whether this is an acceptable answer to you.

I’m not 100% decided, but I lean toward more transparency. I’d like to know how Facebook filters what I see, especially if I am using Facebook to “see the world through the eyes of others” as they state in the first paragraph on their Community Standards page. I may not want to see what they filter, but I want to know exactly what they filter in a manner more detailed than their standards. As Bickert says, what is art to one person might be pornography to another.

Finally, in the Morse article, I’m glad to read that Facebook hasn’t replaced the team of humans who do this messy work with AI. As we’ve discussed in this seminar, AI can be even less transparent than people about the decisions it makes. But that’s my take. Yours might be different.

A Short Story from the Future

Sunday, November 5th, 2017

“Pops, what do you think of the new president?”

Samantha paused and looked up from her flexEpaper article. It wasn’t clear that her father had heard her. But then his spoon full of cereal slowed its progress toward his mouth, and he looked up. His hair, what was left of it, had gone completely gray years ago, but his eyes remained as bright as ever.

“President Trump thinks he’s king. How can a man run for U.S. president and not understand … no, and not respect the fundamental idea of separation of powers in our system of government?”

She had heard her dad slip into this rant many times, and she knew you either cut it off quickly, or you’d better make yourself comfortable, for it would be a while before it played itself out. “Dad, Trump hasn’t been president for 12 years.”

Her dad’s confusion was understandable. Trump was the last U.S. president elected the old-fashioned way. The days when you had to register to vote, and had individuals declaring themselves candidates for president, were long gone. No more stump speeches. No more rallies. No more, thank god, endless political advertisements and no more cheap theater marketed as candidate debates. The Tuesday next after the first Monday in the month of November – what a crazy definition – remained the U.S. Election Day, but no U.S. citizen did anything special on it anymore. You simply woke up and learned who we had “elected” based on an analysis of the data collected by the companies constantly mining us for our preferences. That’s what was in today’s headlines. Today was Tuesday, November 2, 2032.

It was a confluence of factors that brought about the change. The Russians had demonstrated how easy it was to manipulate us through our social media platforms and that led us to question what our vote for the 2016 presidential candidates actually meant. To that point, the fears that kept us from electronic voting had focused on the threat of a disgruntled hacker or foreign power changing the actual vote count. But why go to that trouble when you can simply make a mess of the entire campaign process? Subtly manipulating us through our most popular social media platforms each and every day during the ever-increasing length of our presidential campaigns turned out to be much more appealing than trying to surreptitiously change a country-wide vote count in one evening.

Frustration over money in politics certainly was a factor too. CBS News estimated that $6.8 billion was spent during the 2016 elections. Seven billion dollars! Well, if you can’t decide how to regulate money in politics, the next best thing, it appears, is to just eliminate the need to spend it at all. There was angst when the suggestion was made, but now everyone wonders why we didn’t make this change earlier, given the boost that spending those same dollars on things other than political advertising gave to the U.S. economy, especially in the states that had been struggling economically.

The biggest factor, possibly, was Mark Zuckerberg. While his ambitions for political office might have started earlier, 2016 was the year that the media started taking notice. The Silicon Valley tech sector liked to think of itself as disrupting industries to create a better future, and Zuckerberg simply put two and two together. Why should he be forced to get elected the old-fashioned way? It was messy, expensive, and terribly inefficient.

In 2016, Facebook, Google, and Amazon alone already knew more about each of our likes and dislikes than we knew about ourselves. And they saw that they could acquire this information in what’s today called the shrinking inch, which is a play on the last mile from the telecom days, when telecommunication companies were the most important commercial entities involved in the Internet. No more. Commerce, social, and search companies are now the ones in power, and they have relentlessly moved to eliminate the space between you and their platforms. First they sat on fixed-location desktops, fighting for your attention among the many windows on the screen. They then moved to a more prominent position on our smartphones, which we quickly learned were more important than our wallets. Then these companies scattered devices throughout the spaces in which we live, collecting everything about our every moment. And in their labs right now, these companies are working on ways to integrate their platforms directly into our bodies.

Finally, the incentives align, much to the regret of the Russians. Our data are gold to the Facebooks, Googles, and Amazons of the world. It’s in their interest to ensure that these data are timely and authentic. They still may not protect our data from being eventually stolen, but they benefit from knowing exactly how we feel today.

In Zuckerberg’s amazing mind, this trend was the opportunity. The challenge then became simply a question of the right matching algorithm: integrate, over the entire voting-aged population, what these companies knew about the wants and desires of each U.S. citizen, and then match the result against the characteristics of every U.S. citizen old enough to be the U.S. president, weighted slightly by the requirements of the role of U.S. president.

It is somewhat ironic that a man pushed for this system in the hope that it would improve his chances for election, as we have had nothing but women elected to the Office of the President since.

Samantha tried to think about a range of things she could say next, but only one thought remained. “I wonder if Pops knows that I was elected president.”