iPhones and the Limits of Curated Security

Abstract: The Apple App Store uses a curated model in which all applications must be approved by Apple before they are published. It has been argued that this provides better security than Android, where applications are published immediately and removed only if they are found to be problematic. We study the case of Smurfs’ Village and similar applications and argue that the protection offered by this form of security is limited. We conclude that the image of Apple as a benevolent dictator acting as a neutral arbitrator to promote the best user experience is dangerously naive. Apple’s economic interests will often differ from their users’ security interests.

In the Apple App Store for iOS devices such as the iPhone and iPad, all applications must be approved by Apple before they are published. That is in contrast to Android, in which submitted applications are published immediately and removed only if they are determined to be problematic. The Apple approval process has been criticized as arbitrary and confusing, and many accuse Apple of exercising excessive control. The App Store Review Guidelines run eight pages, and the introduction contains statements like the following:

  • We view Apps different than books or songs, which we do not curate. If you want to criticize a religion, write a book. If you want to describe sex, write a book or a song, or create a medical app.
  • We have over 350,000 apps in the App Store. We don’t need any more Fart apps.
  • [If] you’re trying to get your first practice App into the store to impress your friends, please brace yourself for rejection. We have lots of serious developers who don’t want their quality Apps to be surrounded by amateur hour.
  • If your app is rejected, we have a Review Board that you can appeal to. If you run to the press and trash us, it never helps.
  • If it sounds like we’re control freaks, well, maybe it’s because we’re so committed to our users and making sure they have a quality experience with our products.

The rest of the Guidelines contains specific rules about what is and is not allowed. It is generally acknowledged that not all submissions are reviewed equally. For example, Apple bans offensive or mean-spirited commentary but specifically exempts professional political satirists and humorists. Additionally, well-known companies and organizations are thought to have an easier time getting their apps approved than individual developers.

Though many have criticized those restrictions as excessive, it has been argued that they provide protection against malware. Indeed, Jonathan Zittrain, among others, has worried that the iPhone will be adopted because it provides increased security, even though this security comes at the cost of the freedom and flexibility that has driven significant innovation in the computer industry. But just how effectively does the Apple review process protect users? Software auditing is time consuming and expensive, and there are a variety of technical measures that would allow malicious developers to hide unapproved functionality from Apple’s reviewers. I may discuss these techniques in another, geekier, post.

In this post, I use the existence of Smurfs’ Village and similar apps to argue that a curated app store is not a security panacea. These types of apps, many of which are targeted at children, are free to download but offer very expensive in-app purchases. For example, Smurfs’ Village, which is based on the Smurfs TV show, sold a wheelbarrow of Smurf Berries for $99. Many children amassed huge bills playing these applications on their parents’ phones. One 8-year-old girl ran up a $1,400 phone bill playing Smurfs’ Village. (See http://www.digitaltrends.com/mobile/publisher-greed-little-girl-amasses-1400-iphone-bill-playing-smurfs-village/ ) There have been allegations of similar spending on other games, such as a 5-year-old girl spending $99.99 in Fishies, a 9-year-old boy spending $670 on virtual weapons for X-Mas Resort and F.A.S.T., and a 9-year-old girl spending $200 on Zombie Toxins and other items in Zombie Cafe. (http://newsandinsight.thomsonreuters.com/New_York/News/2011/05_-_May/Class_action_firms_go_after__free__Apple_apps/ ) Whether these types of applications should count as malware or badware is a question of semantics that we will not attempt to address. However, it is worth noting that the financial harm suffered by the parents of these children is likely greater than that suffered by those whose PCs or Android phones are infected with traditional malware.

Why does Apple allow these applications in their App Store? An app like Smurfs’ Village might be justifiable on laissez-faire grounds in an open market, but that argument makes no sense in the context of Apple’s curated market. It is impossible to know Apple’s exact motivation, but economic incentives may provide the answer. Apple requires that all game items be purchased using their In-App Purchase API payment system and takes a 30% cut of every transaction. Thus Apple has an incentive for users to spend as much money as possible. Indeed, Apple’s guidelines contained no restriction on excessive pricing of in-game items even though they contained restrictions on just about everything else. (Apple’s 30% commission is much greater than the cut taken by credit card companies, and consequently Apple has a much greater incentive to allow expensive purchases.)

Apple has since taken some steps to mitigate the problem. They now require a password to be entered before every purchase (previously the password was cached for 15 minutes) and allow in-app purchases to be disabled in device settings. Whether or not these mechanisms are sufficient to protect parents from this specific type of application, in-app purchases are a rich target for exploitation. In the future, there will likely be new and different attempts to exploit them that are both creative and disturbing. In the media, there is a popular stereotype that undesirable software originates from super hackers who are either anti-social Western males or Russian gangs. The danger of this myth is that it causes us to be overly trusting of corporate software. The extent to which Apple uses developer reputation in the approval process may make them more likely to approve unsavory software if it has the veneer of corporate legitimacy.

It is clear that Apple’s review process does not prevent users from being harmed by unsavory applications. Indeed, the image of Apple as a benevolent dictator acting as a neutral arbitrator to promote the best user experience is dangerously naive. Apple’s economic interests will often differ from their users’ security interests.

Using SSL to Prove Document Authenticity

This blog post is an idea that I’ve been kicking around for a while but haven’t had the time to research or implement. I’ve finally decided just to post it speculatively. I’m really hoping to get feedback from those in the community more knowledgeable about SSL than I am. Note: this is a relatively geeky topic; if you don’t understand what https:// and SSL are, this post won’t make much sense…

Introduction

Does anyone know anything about the internals of https? I was wondering whether there is any way to prove that a document downloaded over https really came from the site you claim it came from. In other words, if you download a document over https, is there any way for you to prove to a third party that it actually came from the web site in question? For example, let’s say that Alice downloads doc.pdf from https://foobar.com/doc.pdf. https provides Alice assurance that doc.pdf really came from foobar.com (assuming that the certificate is legitimate). But assuming doc.pdf does not carry a digital signature, if Alice simply sends the downloaded file to Bob, he has no proof that the file actually came from foobar.com. (Obviously, the ideal solution would be for the maintainer of foobar.com to digitally sign the PDF file. But few websites digitally sign the files they distribute, and individual users often have no means of convincing a web site to do so.) My question is whether there is any way for Alice to prove to Bob that she really obtained the file from foobar.com. I thought that it might be possible for Alice to prove the file’s origin by sending some of the raw network traffic establishing the SSL connection along with the file. (I’m using a PDF file to simplify the example, but presumably the same issues would apply to a web page.)
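To make the idea concrete, here is a minimal sketch (all names and values are hypothetical, not a real protocol) of the kind of evidence bundle Alice might assemble: the URL, a hash of the downloaded bytes, and the server certificate she observed during the handshake. Note that nothing in this bundle cryptographically binds the certificate to the content, and that binding is exactly what the rest of this post is asking about.

```python
import hashlib
import json
import time

def make_evidence_bundle(url, content, cert_pem):
    """Package a downloaded file together with the server certificate
    seen during the TLS handshake. (Hypothetical sketch: this is
    circumstantial evidence, not proof -- nothing here binds the
    certificate to the content bytes.)"""
    return {
        "url": url,
        "sha256": hashlib.sha256(content).hexdigest(),
        "server_cert_pem": cert_pem,
        "retrieved_at": int(time.time()),
    }

# Hypothetical inputs standing in for a real download:
bundle = make_evidence_bundle(
    "https://foobar.com/doc.pdf",
    b"%PDF-1.4 example bytes",
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
)
print(json.dumps(bundle, indent=2))
```

Alice could hand this bundle to Bob, but as it stands Bob only learns that foobar.com has some certificate and that Alice claims some bytes went with it.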

Use Cases

PACER is an online service used by the United States federal courts to provide online access to court records and documents. The documents on PACER are generally thought to be in the public domain but remain behind a paywall. Efforts such as the PACER Recycling Project and RECAP allow users to upload PDF documents obtained from PACER to a central server where the documents can then be freely downloaded by others. However, while PACER uses SSL, it does not provide digitally signed PDF files. Thus users currently have no way to prove that the documents really came from PACER.

Another use case is as a replacement for web screen shots. Because web pages can easily be altered or taken down, screen shots are often offered as “proof” that a web page used to exist, even if it has since been altered or removed. For example, this CNET news story describes how pranksters from 4chan retaliated against AT&T for blocking their site by posting a fake report saying that AT&T’s CEO had died. The story includes this screen shot of the pranked web page prior to its removal. Of course, screen shots can easily be faked or altered using tools such as Photoshop, or just by saving and editing the HTML. Presumably web screen shots posted by CNET are relatively trustworthy, but what about screen shots posted by unknown users?

Ideal Solution

I envision a Firefox extension that would allow a user to easily create an archive bundle for an https: web page containing the page and SSL information proving its legitimacy. (Obviously this would need to work for single files as well as web pages.) The bundle would allow other users to view the web page or file as it existed and provide easily verifiable proof that it really came from the site in question.

My Questions for the SSL Knowledgeable

Is this doable at all?

Screen shots are trivial to fake; if this approach can’t provide perfect proof of the origin of a document, how much more assurance would it give than just a screen shot?

Would releasing the raw https traffic also mean that Alice would be releasing her user name and password?

A minor concern is that a web site hosting or displaying a particular page is slightly different from the web site signing a file. Furthermore, there may be issues with XSS vulnerabilities that allow attackers to make an https web site display arbitrary content. However, XSS attacks are a problem now with screen shots being passed around, and XSS-altered pages could probably be detected by viewing the HTML source.

But Not All Web Sites Use SSL

It has been repeatedly shown that web 2.0 applications such as Gmail and Facebook cannot be used securely over an unencrypted connection. For example, hijacking the account of a Facebook user on the same network is trivial. Perhaps I’m being overly optimistic, but I believe that once these vulnerabilities become more widely known and attack scripts/exploits become widely available, web applications will move to SSL as the default or at least offer https as an option. (Gmail already has an option to enable https, though it is buried deep within the settings.)

Please Comment

There you have it: my first real blog post.  Please let me know what you think.

Update December 13, 2009

Unfortunately, it appears that this won’t work. The basic problem is that SSL protects records with symmetric session keys shared by the client and the server, so the client could easily forge messages. (Initially, technically unsophisticated users might not be able to forge messages and authenticate them with the key, but someone would probably develop an automated tool to do it.) I still hope that at some point a standardized way to show what a web page previously displayed will emerge that’s harder to forge than screen shots. Many thanks to Paco Hope and his colleagues at Cigital for providing feedback on this.
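To see why a shared key defeats third-party proof, here is a toy sketch using Python’s hmac module as a simplified stand-in for the per-record MAC in SSL/TLS (the key value is made up; real TLS derives its keys from the handshake’s master secret, but the point is the same): because both sides hold the same MAC key, a MAC that verifies tells a third party nothing about which side produced the message.

```python
import hashlib
import hmac

# Illustrative only: after the handshake, client and server both hold
# the SAME symmetric MAC key for the session.
shared_mac_key = b"session-key-known-to-both-client-and-server"

def record_mac(key: bytes, payload: bytes) -> str:
    """Simplified stand-in for the per-record MAC in SSL/TLS."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# A MAC over content the server actually sent:
genuine = record_mac(shared_mac_key, b"real content from foobar.com")

# Alice holds the key too, so she can produce an equally valid MAC
# over any payload she invents:
forged = record_mac(shared_mac_key, b"whatever Alice wants to claim")

# Both MACs verify against the shared key, so a saved transcript of
# the connection proves nothing to a third party like Bob.
```

This is exactly the gap a digital signature would close: a signature is made with a private key only the server holds, so Alice could verify it but not forge it.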