
Archive for the 'freedom of expression' Category

HLS Worldwide Alumni Congress in D.C.


I had the great pleasure of speaking today at Harvard Law School’s Worldwide Alumni Congress here in Washington, D.C. Together with my friend John Palfrey, Executive Director of the Berkman Center and Clinical Professor of Law, and my colleague James L. Cavallaro, Executive Director of the Human Rights Program at Harvard and also Clinical Professor, I presented research in progress on Internet filtering and human rights issues. In this thematic context, we also had a very interesting discussion about internationalization and the question of whether an academic institution like Harvard should focus solely on research, or to what extent it should also engage in advocacy. Not surprisingly, the alumni in the audience had different views on the topic. However, some sort of consensus emerged that advocacy is less of an issue in cases where academia – for instance as part of a clinical program – represents a client, because that’s ultimately what lawyers are supposed to do.

The line-up of speakers is fantastic, as you can see from the website, and so is the social and cultural program. Once again I’m impressed by how much effort this wonderful school puts into relationship management – in the best sense of the word. At least some of us at the Univ. of St. Gallen could learn a great deal from our colleagues and friends on this side of the Atlantic – not only in terms of professionalism, but also with regard to the personal commitment of faculty members and University staff. For me personally, yet another rewarding experience, both professionally and personally (with thanks to the Alumni office and JP for making it possible).

Towards A Best Practice Approach to Internet Filtering? Initial Thoughts After Release of Global ONI Survey


I had the great pleasure today of celebrating the launch of the most comprehensive and rigorous study to date on state-mandated Internet filtering with my colleagues and friends from the Berkman Center and the OpenNet Initiative, respectively. It was an inspired and inspiring conference here at Oxford University, and after a long day of debate it seems plain to me that the filtering reports from 41 countries presented today will keep us busy for weeks and months to come.

Extensive coverage both in traditional media (see, e.g., BBC) and the blogosphere.

In the closing session, Professor John Palfrey, one of the principal investigators (check out his blog), was kind enough to put me on the spot and ask for my take-away points. Given the complexity of the information ecosystem, including its diverse filtering regimes, it seems hard to come up with any kind of conclusion at this early stage. However, among the trickiest problems we might want to think about is the question of whether we – as researchers – want to and should contribute to the development of some sort of best practice model of speech control on the Internet – a model aimed at “minimizing” the harm done to free speech values in a world where filtering and blocking are likely to continue to exist – or whether such an endeavor would be counterproductive under any circumstances, either because it would be immediately hijacked by governments to legitimize filtering or used by repressive regimes to make filtering more effective.

Having only a tentative answer to that question, we at the St. Gallen Research Center have started to brainstorm about ways in which various governance approaches to content filtering – focusing on filtering regimes in European countries and the U.S. – could be systematically mapped, analyzed, and compared. So far, we have come up with a set of six guiding questions:

  1. Who is obliged or committed to block or filter content?
  2. How do the obliged actors become aware of the content that has to be blocked?
  3. Who determines what content has to be blocked, and how?
  4. What technical means (e.g., IP blocking, URL filtering) are used?
  5. What are the procedural requirements and safeguards in the filtering process?
  6. Who sets the rules, under which conditions?
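The technical means named in question 4 differ sharply in their collateral damage to free speech, which matters for any best-practice evaluation. A toy Python sketch – hostnames, addresses, and block lists are entirely hypothetical, not any country’s actual regime – of why IP blocking tends to overblock compared to URL filtering:

```python
from urllib.parse import urlsplit

# Hypothetical block lists (illustrative only).
BLOCKED_IPS = {"203.0.113.7"}                      # IP blocking: the whole server
BLOCKED_URLS = {"http://example.org/banned-page"}  # URL filtering: a single page

# Hypothetical DNS records: two unrelated sites share one server
# (virtual hosting), a common situation in practice.
DNS = {
    "example.org": "203.0.113.7",
    "innocent-blog.org": "203.0.113.7",
}

def ip_blocked(url: str) -> bool:
    """Coarse filtering: block every request resolving to a listed IP."""
    return DNS.get(urlsplit(url).hostname) in BLOCKED_IPS

def url_blocked(url: str) -> bool:
    """Targeted filtering: block only the specific listed URLs."""
    return url in BLOCKED_URLS
```

Under the IP-based rule, `ip_blocked("http://innocent-blog.org/home")` returns `True` even though that site appears on no list, simply because it shares a server with a blocked one; the URL-based rule leaves it untouched. The granularity of the technical means is thus itself a free-speech-relevant design choice.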

The second issue we’re currently debating is how different filtering regimes can be evaluated, i.e., what the benchmarks for online speech control might look like. In this context, we’re considering the application of generic characteristics of good regulation – including criteria such as efficiency, due process, transparency, accountability, and expertise, among others – to online filtering regimes.

What are your thoughts on this idea as well as on the basic question whether we should get involved in a best practice discussion – even (or especially) if we believe in the power of a marketplace of ideas? Comments, as always, most welcome.

EU Parliament Calls For Code of Conduct For Internet Intermediaries Doing Biz In Repressive Countries


With the usual time-lag, the debate about Internet censorship in repressive countries such as China and the role of Internet intermediaries such as Google, Microsoft and Yahoo! has now arrived in Europe. The EU Parliament confirms what many of us have argued for months, i.e., that the problem of online censorship is not exclusively a problem of U.S.-based companies and is not only about China.

The recent resolution on freedom of expression on the Internet by the European Parliament starts with references to previous resolutions on human rights and freedom of the press, including the WSIS principles, as well as international law (Universal Declaration of Human Rights) and opens with the European-style statement that restrictions on online speech “should only exist in cases of using the Internet for illegal activities, such as incitement to hatred, violence and racism, totalitarian propaganda and children’s access to pornography or their sexual exploitation.”

Later, the resolution lists some of the speech-repressive regimes, including China, Belarus, Burma, Cuba, Iran, Libya, Maldives, Nepal, North Korea, Uzbekistan, Saudi Arabia, Syria, Tunisia, Turkmenistan and Vietnam. The resolution then makes explicit references to U.S.-based companies by recognizing that the “…Chinese government has successfully persuaded companies such as Yahoo, Google and Microsoft to facilitate the censorship of their services in the Chinese internet market” and “notes that other governments have required means for censorship from other companies.” European companies come into play with regard to the sale of equipment to repressive governments, stating that

“… equipment and technologies supplied by Western companies such as CISCO Systems, Telecom Italia, Wanadoo, a subsidiary of France Telecom have been used by governments for the purpose of censoring the Internet preventing freedom of expression.” (emphasis added.)

The resolution, declaratory in nature, in what is probably one of its most significant parts calls on the European Commission and the Council “to draw up a voluntary code of conduct that would put limits on the activities of companies in repressive countries.” The policy document also stresses the broader responsibility of companies providing Internet services such as search, chat, or publishing to ensure that users’ rights are respected. Hopefully, the Commission and the Council will recognize that several initiatives aimed at drafting such codes of conduct are underway on both sides of the Atlantic (I have myself been involved in some of these processes, including this one), and will engage in conversations with the various groups involved in these processes. In any event, it will be interesting to see how the Commission and the Council approach this tricky issue, and to what extent, for instance, they will include privacy statements in such a set of principles – a crucial aspect that, interestingly enough, has not been explicitly addressed in the Parliament’s resolution.

The resolution also calls on the Council and Commission “when considering its assistance programmes to third countries to take into account the need for unrestricted access by their citizens.” Further coverage here.

Update: On the “European Union’s schizophrenic approach to freedom of expression”, read here (thanks, Ian.)

YJoLT-Paper on Search Engine Regulation


The Yale Journal of Law and Technology just published my article on search engine regulation. Here’s the extended abstract:

The use of search engines has become almost as important as e-mail as a primary online activity. Arguably, search engines are among the most important gatekeepers in today’s digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content creators, consumers/users, activists, and governments. This paper outlines the history of the technological evolution of search engines and explores the responses of the U.S. legal system to the search engine phenomenon in terms of both litigation and legislative action. The analysis reveals an emerging “law of search engines.” As the various conflicts over online search intensify, heterogeneous policy debates have arisen concerning what forms this emerging law should ultimately take. This paper offers a typology of the respective policy debates, sets out a number of challenges facing policy-makers in formulating search engine regulation, and concludes by offering a series of normative principles which should guide policy-makers in this endeavor.

As always, comments are welcome.

In the same volume, see also Eric Goldman’s Search Engine Bias and the Demise of Search Engine Utopianism.

Some Highlights of Yale’s A2K Conference


Our colleagues and friends from the Information Society Project at Yale Law School have organized a landmark conference on Access to Knowledge, taking place this weekend at Yale Law School, that brings together leading thinkers and activists on A2K policy from North and South and is aimed at generating concrete research agendas and policy solutions for the next decade. The impressive program with close to 20 plenary sessions and workshops, respectively, is available here. Also check the resources page and the conference wiki (with session notes.)

Here are some of Friday’s and yesterday’s conference highlights in newsflash-format:

  • Jack Balkin’s framework outlining core themes of the A2K discourse. The three main elements of a theory of A2K: (1) A2K is a demand of justice; (2) A2K is an issue of economic development as well as an issue of individual participation and human liberty; (3) A2K is about IP, but it is also about far more than that. Balkin’s speech is posted here.
  • Joel Mokyr’s lecture on three core questions of A2K: (a) Access to what kind of knowledge (propositional vs. prescriptive)? (b) Access by how many users? Direct or indirect access? (question of access intermediaries and the control of their quality) (c) Access at what costs? (Does a piece of knowledge that I need exist? If yes, where; who has it? How to get it? Verification of its trustworthiness.)
  • Yochai Benkler’s fast-paced presentation on the idea of A2K as a response to 4 long-term trends (decolonization->increased integration; rapid industrialization->information knowledge economy; mass media monopolies->networked society; communism and other –isms->human dignity), the reasons why we should care about it (justice and freedom), the sources of the A2K movement as a response to the 4 long-term trends (incl. access to medicine, the Internet freedom movement, information commons, FOSS, the human genome project, spectrum commons, open access publications, digital libraries, … ), and the current moment of opportunity in areas such as regulation of information production and telecommunication policy.
  • Eric von Hippel’s discussion of norm-based IP systems and a recent study on cultural norms shared among Michelin-starred French chefs that regulate – as a substitute to copyright law – how they protect ownership of their recipes.
  • Keith Maskus’ lecture on the interplay between trade liberalization and increased IP protection of technologies and an overview of econometric studies regarding key IPR claims in this zone (transparent and enforceable IP regimes do seem to encourage increase in IT investments and associated export growth, both at the aggregate and micro-level; however, claim is conditioned, i.e., holds in middle-income countries, but no evidence for low income developing countries).
  • Eli Noam’s talk on the evolution of firms from the pre-industrial age to today’s digitally networked environment, in which organizations are increasingly defined by information. More on the McLuhanization of the firm here.
  • Suzanne Scotchmer’s presentation on the design of incentive systems to manage possible conflicts among incentive goals such as the promotion of R&D, the promotion of its use, and trade policy goals. Scotchmer’s lecture was based on her book Innovation and Incentives.
  • Michael Geist’s overview of the current controversies surrounding the idea of a two-tiered Internet – hot topics, among others: VoIP, content control, traffic shaping, public vs. private internet, and website premium – and his discussion of the core policy questions (is legal protection from Internet tiering required? Is tiering needed for network building and management? Is it a North-South issue?)
  • Susan Crawford’s discussion of the different perspectives of the Bellheads versus the Netheads and the clash of these world views in the Net neutrality debate. Susan’s key arguments are further discussed in this paper.
  • Pam Samuelson’s lecture on the history of the WIPO Internet Treaties, the battles surrounding the DMCA and the EUCD, the fight against database protection in the U.S., and the lessons we can learn from these earlier tussles with regard to the A2K movement (first of all, don’t be polemic – engage in thorough research.) [Update: excellent notes of Pam’s lecture taken by Susan Crawford.]
  • Jamie Love’s action points for the A2K movement, including the following (see here): (1) Stop, resist or modify the setting of bad norms; (2) change, regulate, and resist bad business practices; (3) create new modes of production (commercial and non-commercial) of knowledge goods; (4) create global frameworks and norms that promote A2K.
  • Natali Helberger’s discussion of the proposed French provision on interoperability (Art. 7 of the IP Act) as an expression of cultural policy and national interests.

Global Online Freedom Act of 2006: Evil is in the Details


I’ve just read Rep. Chris Smith’s discussion draft of a “Global Online Freedom Act of 2006,” which has been made available online on Rebecca MacKinnon’s blog. Rebecca nicely summarizes the key points of the draft. From the legal scholar’s rather than the activist’s viewpoint, however, some of the draft bill’s nitty-gritty details are equally interesting. Among the important definitions is certainly the term “legitimate foreign law enforcement purposes,” which appears, for instance, in the definition of substantial restrictions on Internet freedom, and in sec. 206 on the integrity of user identifying information. According to the draft bill, the term “legitimate foreign law enforcement purposes” means

“for purposes of enforcement, investigation, or prosecution by a foreign official based on a publicly promulgated law of reasonable specificity that proximately relates to the protection or promotion of health, safety, or morals of the citizens of that jurisdiction.”

And the next paragraph clarifies that

“the control, suppression, or punishment of peaceful expression of political or religious opinion does not constitute a legitimate foreign law enforcement purpose.” [Emphasis added.]

While the first part of the definition makes a lot of sense, the second part is more problematic to the extent that it suggests, at least at a glance, a de facto export of U.S. free speech standards to the rest of the world. Although recent Internet rulings by U.S. courts have suggested an expansion of the standard under which U.S. courts will assert jurisdiction over free speech disputes that arise in foreign jurisdictions, it has been my and others’ impression that U.S. courts are (still?) reluctant to globally export free speech protections (see, e.g., the 9th Circuit Court of Appeals’ recent Yahoo! ruling.)

Indeed, it would be interesting to see how the above-mentioned definition would relate to French legislation prohibiting certain forms of hate speech, or German regulations banning certain forms of expression—black lists, by the way, which are also incorporated by European subsidiaries of U.S.-based search engines and content hosting services.

While the intention of the draft bill is certainly a legitimate one and while some of the draft provisions (e.g. on international fora, a code of conduct, etc.) deserve support, the evil—as usual—is in the details. Given its vague definitions, the draft bill (should it become law) may well produce spillover effects by restricting business practices of U.S. Internet intermediaries even in democratic countries that happen (for legitimate, often historic reasons) not to share the U.S.’ extensive free speech values.

Addendum: Some comments on the draft bill from the investor’s perspective here. Note, however, that the draft bill also covers foreign subsidiaries of U.S. businesses to the extent that the latter control the voting shares or other equities of the foreign subsidiary or authorize, direct, control, or participate in acts carried out by the subsidiary that are prohibited by the Act.

Information Ethics: U.S. Hearing, but Global Responsibility


Today, the US House of Representatives’ IR Subcommittee on Africa, Global Human Rights and International Operations, and the Subcommittee on Asia and the Pacific are holding an open hearing on the question whether the Internet in China is a Tool for Freedom or Suppression. My colleague Professor John Palfrey, among the foremost Internet law & policy experts, has prepared an excellent written testimony. In his testimony, John summarizes the basic ethical dilemmas for U.S. corporations such as Google, Microsoft, Yahoo and others who have decided to do business in countries like China with extensive filtering and surveillance regimes. John also raises the question as to what extent a code of conduct for Internet intermediaries could guide these businesses and give them a base of support for resisting abusive surveillance and filtering requests and the role academia could play in developing such a set of principles.

I’m delighted that our Research Center at the University of St. Gallen in Switzerland is part of the research initiative mentioned in John’s testimony that is aimed at contributing to the development of ethical standards for Internet intermediaries. Over the past few years, a team of our researchers has explored the emergence, functionality, and enforcement of standards that seek to regulate the behavior of information intermediaries. It is my hope that this research, in one way or another, can contribute to the initiative announced today. Although the ethical issues in cyberspace are in several regards structurally different from those emerging in offline settings, I argue that we can benefit from prior experiences with and research on ethics for international businesses in general and information ethics in particular.

So far, the heated debate about the ethics of globally operating Internet intermediaries has been a debate about the practices of large and influential U.S. companies. On this side of the Atlantic, however, we should not make the mistake of thinking that the hard questions Palfrey and other experts will be discussing today before the above-mentioned Committees are “U.S.-made” problems. Rather, the concern, challenge, and project – designing business activities that respect and foster human rights in a globalized economy with local laws and policies, including restrictive or even repressive regulatory regimes – are truly international in nature, especially in today’s information society. Viewed from that angle, it is almost surprising that we haven’t seen more constructive European contributions to this discourse. We should not forget that European Internet & IT companies, too, face tough ethical challenges in countries such as China. In that sense, the difficult, but open and transparent conversations in the U.S. are in my view an excellent model for Europe with its long-standing human rights tradition.

Update: Rebecca MacKinnon does a great, speedy job summarizing the written and oral testimonies. See especially her summary of and comments on the statements by Cisco, Yahoo!, Google, and Microsoft.

Swissinfo on Legality of Mohammed Cartoons under Swiss Law


Swissinfo has just put online an article entitled “A case to answer over Mohammed cartoons?,” which discusses legal aspects of the Mohammed cartoons under Swiss law. My colleague Daniel Haeusermann, Researcher at the University of St. Gallen’s Research Center of Information Law, has been interviewed and quoted extensively in this piece.

Google’s Alan Davidson on Areas of Special Concern


Alan Davidson, Washington Policy Counsel and head of Google’s new Washington DC government affairs office, made several interesting remarks in his panel statement. Among them: He identified the following two areas that are of special concern to search engine providers:

(1) Conceptual shift in speech regulation

  • Old approach (offline media): focused on publishers, readers
  • New & emerging generation of speech regulation: focus on deliverers – intermediaries are supposed to police the networks. Examples where this approach is currently up for discussion in D.C.: access to pharmaceutical products, blocking of gaming websites
  • Assessment: It’s not a good idea to target intermediaries. Due process/procedural problem: an intermediary, e.g., can’t tell whether a particular site featuring copyrighted content is fair use; by going after the intermediary you take the publisher out of the equation, who then can’t go to court to argue the case
  • Misguided, because search engines are only in the business of indexing existing content; they’re not editors (can’t be, given the scale.)

(2) Government access to information

  • Increasing pressures to provide personalized information (search history, etc.) to third parties
  • Best privacy policy doesn’t help if government wants information for national security reasons; standards really low; plus: search engines not allowed to inform users that info has been passed on to third parties.

Regulating Search? Call for a Second Look


Here is my second position paper (find the first one here) in preparation of the upcoming Regulating Search? conference at ISP Yale. It provides a rough arc of a paper I will write together with my friend and colleague Ivan Reidel. The Yale conference on search has led to great discussions on this side of the Atlantic. Thanks to the FIR team, esp. Herbert Burkert and James Thurman, Mike McGuire, and to Sacha Wunsch-Vincent for continuing debate.

Regulating Search? Call for a Second Look

1. The use of search engines has become almost as important as email as a primary online activity on any given day, according to a recent Pew survey. According to another survey, 87% of search engine users state that they have successful search experiences most of the time, while 68% of users say that search engines are a fair and unbiased source of information. This data, combined with the fact that the Internet, among very experienced users, ranks even higher than TV, radio and newspapers as an important source of information, illustrates the enormous importance of search engines from a demand-side perspective, both in terms of actual information practices as well as with regard to users’ psychological acceptance.

2. The data also suggests that the transition from an analog/offline to a digital/online information environment has been accompanied by the emergence of new intermediaries. While traditional intermediaries between senders and receivers of information—most of them related to the production and dissemination of information (e.g. editorial boards, TV production centers, etc.)—have diminished, new ones such as search engines have entered the arena. Arguably, search engines have become the primary gatekeepers in the digitally networked environment. In fact, they can effectively control access to information by deciding about the listing of any given website in search results. But search engines not only shape the flow of digital information by controlling access; rather, they at least indirectly engage in the construction of messages or meaning by shaping the categories and concepts users use to search the Internet. In other words, search engines have the power to influence agenda setting.

3. The power of search engines in the digitally networked environment, with corresponding misuse scenarios, is likely to increasingly attract policy- and lawmakers’ attention. However, it is important to note that search engines are not unregulated under the current regime. Markets for search engines regulate their behavior, although the regulatory effects of competition might be relatively weak because the search engine market is rather concentrated and centralized; a recent global user survey suggests that Google’s global usage share has reached 57.2%. In addition, not all search engines use their own technology. Instead, they rely on other search providers for listings. However, search engines are also regulated by existing law and regulations, including consumer protection laws, copyright law, unfair competition laws, and—at the intersection of market-based regulation and law-based regulation—antitrust law or (in the European terminology) competition law.

4. Against this backdrop, the initial question for policymakers must concern the extent to which existing laws and regulations may feasibly address potential regulatory problems that emerge from search engines in the online environment. Only where existing legislation and regulation fail due to inadequacy, enforcement issues, or the like should new, specific, and narrowly tailored regulation be considered. In order to analyze existing laws and regulation with regard to their ability to manage problems associated with search engines, one might be well-advised to take a case-by-case approach, looking at each concrete problem or emerging regulatory issue (“scenario”) on the one hand and the relevant incumbent legal/regulatory mechanisms aimed at addressing conflicts of that sort on the other.

5. Antitrust law might serve as an illustration of such an approach. While the case law on unilateral refusals to deal is still one of the most problematic and contested areas in current antitrust analysis, the emergence of litigation applying this analytical framework to search engines seems very likely. Although most firms’ unilateral refusals to deal with other firms are generally regarded as legal, a firm’s refusal to deal with competitors can give rise to antitrust liability if such firm possesses monopoly power and the refusal is part of a scheme designed to maintain or achieve further monopoly power. In the past, successful competitors like Aspen Skiing Co. and more recently Microsoft have been forced to collaborate with competitors and punished for actions that smaller companies could have probably gotten away with. In this sense, search engines might be the next arena where antitrust laws with regard to unilateral refusals to deal are tested. In addition to the scenario just described, the question arises as to whether search engines could be held liable for refusal to include particular businesses in their listings. Where a market giant such as Google has a “don’t be evil” policy and declines to feature certain sites in its PageRank results because it deems these sites to be “evil,” there is an issue of whether Google is essentially shutting that site provider out of the online market through the exercise of its own position in the market for information. Likewise, the refusal to include certain books in the Google Print project would present troubling censorship-like issues. It is also important to note that Google’s editorial discretion with regard to its PageRank results was deemed to be protected by the First Amendment in the SearchKing case.

6. In conclusion, this paper suggests a cautious approach to rapid legislation and regulation of search engines. It is one of the lessons learned that one should not overestimate the need for new law to deal with apparently new phenomena emerging from new technologies. Rather, policy- and lawmakers would be well-advised to carefully evaluate the extent to which general and existing laws may address regulatory problems related to search and which issues exactly call for additional, specific legislation.
