
Anonymity on the Web

Essay by the Cheshire Cat

“C’mon. Give us the juice. Posts are totally, 100% anonymous.” So reads the web page. Read on and you’ll find every manner of gossip about America’s college students, searchable by name. Joe Johnson’s mental and venereal illnesses. Mary Smith’s suicide attempt after her sex life was splashed across the site a few days earlier.

Of course, all you are really seeing is what nameless people claim, and what others namelessly claim about that.

Many posts are cruel. There is no way to tell if they are true. And no one takes responsibility for them.

JuicyCampus and its kin are the evil cousins of the Web’s happiest successes. Facebook, Flickr, and hundreds of other web sites have sprung up to connect people to each other and to bring them love, solace, and companionship.

Anonymity is by no means restricted to gossip sites. As newspapers evolve into online publications, anonymous commenting keeps readers coming back. The papers do their best to screen out the sort of mean-spirited stuff in which the gossip sites revel. But the energy of the discussion seems to require that commentators not reveal their identities.

By even the most minimal standard of decency, what JuicyCampus is doing is wrong. Should it be illegal? In fact, isn’t it illegal already? After all, if The New York Times printed anonymous letters resembling the JuicyCampus posts, it would be liable for serious monetary damages for defamation. Publishers have a responsibility to check what they publish, even when their reporters are not the authors. Isn’t JuicyCampus like a publisher?

The law says exactly the opposite: No web site “shall be treated as the publisher or speaker of any information provided by another information content provider.” This law was put in place to encourage services to filter their content—to keep pornography away from children, for example. With the assurance of immunity, sites would not risk publishers’ liability if they tried to edit but occasionally missed a few things. This section of the so-called “Communications Decency Act” made the Web, as one federal judge put it, “the most participatory form of mass speech yet developed.” The Web is a free-speech paradise.

Do we really want so much freedom that anonymous attackers can lawfully malign the innocent and helpless? Doesn’t freedom of speech come with an expectation that you will take responsibility for your words?

No, it doesn’t—not in U.S. law, anyway. Anonymous speech has a distinguished history in the U.S., going back to Publius, the pseudonym used by the authors of the Federalist Papers, for whom this collection is named. Benjamin Franklin wrote pseudonymously—as a young man, when no one would have taken him seriously if his true identity were known, and in later life, when he had established a reputation he feared losing. U.S. courts have repeatedly affirmed that the First Amendment applies to anonymous speech.

JuicyCampus is not Hamilton, Madison, Jay, or Franklin. Prosecutors are frustrated by web sites’ blanket immunity. Yet lawmakers should just leave anonymity alone. Virtually every legislative effort to enforce good speech behavior in Cyberspace has overshot its mark. Many have been overturned on First Amendment grounds.

Without some change in the law, few retaliatory tools are available. Spamming might disable offending sites, but such sabotage seems a descent to the level of the evil adversary—and in any case won’t work once the sites realize what is happening. Some colleges threatened to block JuicyCampus from their campus networks. Acting on this bright idea would precipitate a losing game of hide and seek—the same sites would keep turning up under new names, more enticing than ever because of their banned status. The only loss would be to the institutions’ claim to information freedom.

Of course, the problem would go away if everyone stopped patronizing these sites. In the long run, once the novelty has worn off, that is exactly what will happen. In the short run, unfortunately, trying to achieve a consensus not to peek is like sending out an alert telling everyone not to look at the elephant in the middle of the room—while the beast is attacking your loved ones.

You probably remember the faceless, mocking grin of the Cheshire Cat in Alice in Wonderland. But do you remember what finally happened to the Cat? The queen wanted the Cat beheaded. The king called in the executioner. The executioner pled that anything lacking a body could not be beheaded. The king claimed that anything with a head could be beheaded. In the midst of the argument, the Cat’s head disappeared completely, “so the King and the executioner ran wildly up and down looking for it, while the rest of the party went back to the game.” That’s just what will happen in the fight against anonymity, if we can restrain our urge to regulate Internet speech.

This essay is signed pseudonymously in reflexive deference to its subject. For those dying to unveil the author, however, here’s a hint: He’s one of the authors of Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion.

Cybercrime – and what we will have to do if we want to get it under control

Essay by Michael Barrett, with companion pieces by Beau Brendler and David Clark.


As I write this, in the spring of 2008, we have recently passed a milestone: on April 22, 1993, Mosaic 1.0 was released by the National Center for Supercomputing Applications (NCSA). This was the first web browser used by the general public, making the World Wide Web more than just a tool for academics.

How many Internet users are there today? Conservative estimates exceed one billion people. In a decade and a half we have gone from minimal Internet usage to approximately 20% of the world’s population now being online. Moreover, the bulk of that growth has occurred since the year 2000.

In this essay, I will explore two themes: first, how societies adopt new technologies and second, how governance and regulation may co-evolve with new technologies. I’ll use two historical examples – the road system and airplanes – to ask what lessons they may provide for the Internet.

In addition to being the 15th anniversary of Mosaic, 2008 is also the 100th anniversary of the introduction of the Ford Model T. There had certainly been other motor cars available prior to 1908, but the Model T revolutionized how Americans viewed cars and dramatically increased the number of cars on the road, necessitating a new approach to regulation. Pre-Model T regulation can be described as quirky: men walking in front of cars with red flags, 20 MPH speed limits, and so on. However, shortly after 1908, regulation began to change rapidly. For example, in 1918 New York introduced three-color traffic lights. A year later, the League of Nations established a committee to harmonize aspects of road system regulation, and its recommendations were accepted and implemented by a number of countries.

New York’s original traffic lights were based on the earlier signaling used on railroads, which was itself based on maritime signaling. In other words, there’s an established history of stealing good ideas for safety equipment and re-applying them to a new niche. There’s also a long history of mandating safety equipment via regulation.

Aviation also teaches us useful lessons. The Wright Flyer of 1903 had the same impact on aviation that the Model T had on automobiles. The US Government established the National Advisory Committee for Aeronautics in 1915; the Airmail Act was passed in 1925, and the Air Commerce Act in 1926. Less than 25 years after the first flight, there was an extensive regulatory infrastructure in place. Still, contemporary debate centered on a general distrust of regulation, and a sense that the government wouldn’t be able to deal effectively with new technology. But the pressure for regulation was sufficiently strong: accidents were commonplace, and the public regarded aviation as novel, fascinating, and unsafe.

The other lesson to be learned from aviation is that while each country manages its own process, there is considerable standardization. This is at least in part due to ICAO (the International Civil Aviation Organization), formed in 1947 under the auspices of the United Nations. The rationale for such harmonization is obvious: if an airplane is going to fly from one continent to another, the equipment in question needs to be deemed safe at both the origin and the destination, the licenses and certifications of the pilots need to be accepted universally, and so on. Commercial aviation has implemented more standardization than many other areas of global commerce.

In the cases of both automobiles and aviation, accidents were the primary force behind regulation. While private industry certainly played a very significant part, it’s no exaggeration that the road and air transportation networks that we take for granted would never have existed without government regulation, and could not exist without it. Can we expect the Internet to be different?

Internet regulation over the past fifteen years has been minimal. I’d argue that there’s a single reason for this: the forcing function that accidents represented for road and air transportation has not existed for the Internet. I’d further argue that e-crime will play this role.

I have been working in information technology for years, and I can vividly remember when the first viruses were written, often by security researchers. Security technology failures have gone through a rather predictable sequence: initial discovery by security professionals, followed by wide-scale abuse by teenage vandals, and finally appropriation by wholly criminal enterprises. Now that the teenage vandals have largely dropped away, we are left with attacks motivated solely by money.

This phenomenon has only been a feature of the information security landscape since about 2004. In less than five years, e-crime has changed from an anomaly into an industry. A recent Gartner report suggested that the global “take” from just one form of e-crime, phishing, was $3.2 billion in 2007 (and this may be an underestimate). This is impressive for an industry created less than five years ago. Worse, there is no reason to believe that e-crime is under any effective control. This is not due to inertia or lack of interest. Companies such as my own employer, PayPal, invest substantially in the security of our own applications and infrastructure; we have state-of-the-art fraud management systems; we work with law enforcement to catch, prosecute, and convict criminals whenever possible.

The problem, however, is that there is a huge asymmetry at work. In many jurisdictions, there is no chance of e-criminals being detected, arrested, indicted, convicted, or punished.

Nonetheless, we are cautiously optimistic that phishing can be controlled. If other companies adopt the same strategies we have at PayPal, we’re confident that phishing will become substantially more difficult and less financially rewarding. Unfortunately, there’s also strong evidence that criminals will simply switch from phishing to malware.

I have spent the last three years looking for a clear answer to a very simple question – “How many PCs globally are infected by malware?” Perhaps surprisingly, it’s very difficult to get an answer to this from commercial sources. However, the topic has become interesting to academics, and their conclusions are downright frightening – 12%.

Worse, 12% is an average across PCs owned by both consumers and businesses. Because businesses employ people (like me) to ensure the security of their computers, business machines should pull that average down, which makes the figure particularly disturbing. By contrast, consumers are on their own when it comes to PC security: most of them purchased a machine that appears to be capable of magic, and they have no clue as to what constitutes safe versus unsafe behavior. We exhort them to “buy a firewall”, “turn on auto-updates”, “buy an anti-virus package”, and so on, but there are no apparent consequences if they do not. Further, there’s direct evidence that consumers think they know how to protect themselves but don’t, as evidenced by a common belief that phishing e-mails can be spotted by their poor-quality graphics and abysmal grammar and spelling. This is why data from ISPs suggests that anywhere from 25% to 30% of consumer PCs have been compromised.

By now, I may have convinced the reader that I am of the Chicken Little mentality. But my fear may be warranted: it’s pretty clear that the criminals are only just starting to flex their muscles—the monetization of e-crime is so new that they’ve only been plying their trade for a very short time. If we collectively take no action, then we have perhaps five to ten years before criminal greed takes the Internet away from us. If e-crime continues its rise, consumer confidence will be eroded, possibly leading to popular abandonment of the Internet and e-commerce.

However, if things start getting bad enough, society will demand change and, as the histories of other industries teach us, legislators and regulators will step in and mandate change. The obvious question is what that change should look like.

I believe that a very good case can be made for using the road system as an analogy for the Internet. The question we need to ask ourselves is: “Who’s responsible for making the roads safe?”

Drivers are responsible for:
– Being appropriately trained and licensed to operate a vehicle;
– Ensuring that the vehicle is properly licensed, safe to operate, and insured;
– Following all appropriate regulations about safe driving.

Private industry is responsible for:
– Offering safe vehicles for sale;
– Providing safe road equipment to government agencies;
– Building roads to specifications provided by government agencies;
– Offering affordable vehicle insurance to drivers.

Governments are responsible for ensuring that:
– Roads are designed to be safe, and are maintained to ensure safety;
– Equipment used in the road system is safe (have you ever noticed that traffic lights don’t fail with all directions showing green?);
– Drivers are trained and tested to meet standards of safe driving;
– Unsafe drivers are targeted by law enforcement officials;
– There is a minimum level of safety equipment built into personal vehicles;
– There is a robust market for affordable & effective vehicle insurance.

The analogous question is: “Who’s responsible for making the Internet safe?” I’d argue that there should be a shared responsibility among government, private industry and consumers. However, almost none of these regulatory elements are in place today. We need to develop a model framework for Internet governance, and we need to do it soon.

If you are driving a car on the public roads, an entirely different set of standards applies than if you are driving “off road.” Similarly, if you connect your PC to the Internet, it should be appropriately protected by either a hardware or software firewall and an anti-virus product. If you connect an unprotected device to the Internet, you should be liable for any financial losses that you might incur from e-crime, as well as for possible damages that your PC might cause to others. This is regulation at the individual level.

At the level of private industry, ISPs could be responsible for determining whether the PCs of their customers have been compromised, and if they have, refusing to connect them to the Internet. Such determination could be made directly by the ISP concerned, as there are now tools that enable this, or by reports from reliable organizations. Additionally, website hosts and operators should be liable for damages their sites may inflict (even unintentionally) on visiting PCs.

Finally, it’s clear that governments need to act:

We need a globally harmonized framework of legislation against e-crime. Governments need to agree on the definitions of e-crime and of phishing so that attackers from all jurisdictions can be aggressively pursued in the criminal justice system. In order to achieve this, it’s quite possible that a new global governance organization is needed.

Governments need to substantially increase their investment in e-crime law enforcement. The Internet is a global entity. Either we need to find a way to enable global law enforcement teams to cooperate effectively, or we should give up on attempting to police the Internet locally, and establish the “InterNetPol.”

Action is needed and we must act soon. I don’t want to minimize the sheer difficulty of what we’re facing. But, I do know this: we must change the way we work before e-criminals take away this shining thing we call the Internet.

Michael Barrett is the chief information security officer at PayPal, where he oversees the information systems and services that protect the integrity and confidentiality of customer and employee information. Previously, he served as vice president of security and utility strategy at American Express, and president of the Liberty Alliance, where he co-chaired the Identity Theft Prevention Working Group. He has twice been named one of the 50 most powerful people in networking by Network World magazine and was recently listed among the 59 top influencers in the security industry. He is also an advisor to the Berkman Center’s StopBadware project.

The Good Governance Mix

Essay by Charlie Leadbeater, a response to Tacit Governance by David Weinberger.

One of the outstanding features of David Weinberger’s writing about the web is his unwillingness to fall into the trap of making all-or-nothing, simple dichotomies. More than anyone else writing about the web, he understands and enjoys its miscellaneous messiness.

So I was slightly surprised when I read his apparently cut and dried argument in favor of tacit norms over explicit rules. (Thanks to James Cherkoff for alerting me to the debate.)

David’s argument, if I have it straight, is:

Norms organise us without being imposed top down.
Rules are usually imposed because norms fail.
Tacit governance is usually healthy, whereas rules are a social scar.
The net is self-governing, like a good public space, because no one is in control and so people take responsibility for it themselves rather than relying on an external authority to police it for them. (Some Dutch cities have got rid of traffic lights at junctions for just this reason: it encourages people to self-moderate their driving.)

David’s argument (rules are failed norms) seems rather one-sided. It’s reminiscent of the debate about Michael Polanyi’s distinction between tacit and explicit knowledge, a distinction widely used in the knowledge management industry.

Polanyi did not say there were two different kinds of knowledge but that all knowledge has a tacit and an explicit component. In The Knowledge-Creating Company, Hirotaka Takeuchi and Ikujiro Nonaka explained how Japanese companies innovated by turning tacit knowledge (how a great pastry chef makes croissants) into explicit knowledge (the design for a bread-making machine), which in turn required would-be chefs to develop their own tacit knowledge to use the machine in their kitchens. What counts is the way that tacit and explicit knowledge are combined.

Much the same interaction is at play in most efforts at governance in cities, groups and especially in governing an open, liberal, individualistic society (like the Internet) where people cannot be instructed by a higher authority.

Are rules always failed norms? Norms can survive even if they breed rules. In the UK the law is that you drive on the left. But it’s also a norm that people follow even when there is little prospect of enforcement.

Rules can provide the framework in which norms develop. This is the familiar story in many projects, often involving multiple partners. Initially there is a lot of haggling about contracts. Once that is all done and dusted, the contracts are put away and the project runs according to the norms the participants establish. The success of the project (a film, play, research venture) depends on the norms; but the contracts provide a baseline which allows the project to get going in the first place. In the days when I was employable, I never started a job without a contract. But I never once looked at the contract after it had been signed. In the organizations I worked for (newspapers mainly) work was governed by norms rather than rules.

The norm-based net is not a closed world; it might need protecting by rules. A recent poll found a large majority of UK internet users wanted rules to control how television and newspapers could use information on social networking sites.

Rules can also help groups (online and offline) to collaborate.

The development of the 19th century postal system in the US and UK depended on new rules linking people to addresses. Streets had to be named and houses numbered. All of this involved a massive formalisation of previously tacitly organised private life: where you lived was your own business. Yet these rules then allowed a flowering of a peer-to-peer communications culture in which hundreds of thousands of Americans and Britons taught one another how to write and reply to letters. No central authority set down rules for writing letters: that was a norm-governed activity. But it depended on a postal system that was rule-governed.

David’s argument suggests that living by norms is better than living by rules. Living by norms means you are freer, less prey to external authority and more likely to be part of a collaborative society. Norm based governance is an end in itself.

A different point of view is that both rules and norms are just means. They should be judged by how they help us to reach goals we value. Let me suggest two goals that we might agree upon: equality of opportunity and advances in science that benefit mankind.

On equality, norms are often just as scarring as rules, not least because they are less explicit and so more difficult to challenge. The norm that women should give up their jobs when they have their first child is not made good by being a norm. Decades of legislation were needed to challenge norms that entrenched gender inequality. Rules are sometimes needed because norms are too powerful and entrenched, not because they have failed.

What of science and knowledge? The Human Genome Project, probably the most impressive example of global scientific collaboration for the public good, depended on strong norms of sharing information. But after a while those norms were sustained only by a simple set of rules: the Bermuda Principles – codified with the help of the Wellcome Trust. Those rules for sharing data then underpinned the rest of the project.

I think the question is: what sort of rules are needed to sustain norm based governance that promotes equality, openness and democracy? Explicit rules may particularly matter to make sure norms give people equal chances and serve a larger purpose than sustaining the power of the insiders who established them. Explicit governance through simple rules is often essential to create a framework of tacit self governance.

Charles Leadbeater is a London based writer, author of We Think and a visiting fellow at the UK National Endowment of Science Technology and the Arts.

Opening Access in a Networked Science

Essay by Melanie Dulong de Rosnay, a response to The Opening of Science and Scholarship by Peter Suber

Some researchers can’t use their own scholarship anymore because, in order to be published, they assigned all their rights without being aware of the implications of the exclusive terms of their initial agreements with their publishers. They can’t publish their own articles on their webpages; they aren’t sure whether they can send a copy of the post-print to their colleagues or reuse it for a book or in class. Furthermore, their library might not be able to afford the subscription to the journal that published their article.

Science is being built incrementally. Scholars quote previous works and aim at disseminating new knowledge broadly into society. How can society take advantage of the opportunities offered by digital publishing and distributing to share scientific results more quickly and thus facilitate the discovery of new knowledge? What steps can further open science and scholarship? Should we simply ensure access to knowledge without paying a fee, or should we do even more to improve that access, such as enhancing legal and technical capabilities for finding, extracting, annotating and compiling information in order to make better use of it?

On April 29, 2008, Peter Suber and Stevan Harnad issued a joint statement defining two forms of Open Access (OA). They introduce a logical distinction between what they call “Weak OA”, or “price-barrier-free” scholarship available free of charge, and “Strong OA”, or “permission-barrier-free” scholarship whose authors grant the public more permissions than would otherwise be granted by default copyright law. They propose other value-neutral terms as alternatives to “weak/strong”; suggestions include “use/re-use”, “read/read-write” and “basic/full”. Jean-Claude Guédon suggests “read/re-use” and invites a discussion about the computational potential of documents digitized by Google and their searchability. The Budapest Open Access Initiative definition means by OA “free availability (…) without financial, legal, or technical barriers.” So three categories – rather than two – are foundational for OA material and constitute a typology of the different forms of OA: economic OA, legal OA, and technical OA.

Economic OA
Research available only for a fee can’t be read by researchers from less favored institutions and countries where libraries can’t afford the subscription to a particular journal or online database. The public won’t read these articles either. Economic OA grants basic access rights by making articles and data available for private reading.

Economic barriers to access can be waived through different options. Publishers can issue OA journals which do not charge their readers, and develop alternative publication models: this is the golden road to OA. Authors can also self-archive their articles in pre-print or post-print versions in an institutional repository; many non-OA journals allow authors to do so: this is Green OA. Several policies are available for those authors who want to self-archive but can’t. Authors may add a contractual opt-out clause to their publishing agreement to retain some of their rights. Finally, universities and research funders may mandate the archiving of articles in OA repositories.

Legal OA
Legal OA is an additional condition, allowing redistribution, and goes beyond the removal of financial barriers to accessing and reading. Removing permission barriers grants the public rights to use material beyond simple access. Like economic OA, legal OA, or “permission-barrier-free” scholarship, relies on contractual agreements. Authors must indicate that they are publishing their output free of legal restrictions. Otherwise, third parties will not be aware that they may have additional permissions beyond the right of reading. Without an explicit declaration that additional rights are granted to the public, the right to copy, distribute, and make derivatives may be impeded by transaction costs associated with permission requests. Libraries, professors, and other curators and aggregators may wonder if they can reproduce, translate, and redistribute material on websites or in coursepacks without an expensive rights clearance process. Adding a clear license to a journal, repository, or conference website will allow creative and confident usage. The Creative Commons Attribution license complies with the Budapest Open Access Initiative definition and makes legal OA a reality.

However, other Creative Commons licensing options, which reserve commercial rights or derivative rights, do not comply with this definition and can’t lead to legal OA. Under such licenses, for instance, one may redistribute articles only for non-commercial purposes, or may not translate them or distribute derivative works without additional authorization.

Also, Thinh Nguyen at Science Commons demonstrated that even the contractual requirement of attribution is a legal barrier to downstream use of non-copyrightable works, such as scientific data. He suggests the distribution of data under simple and understandable terms as close as possible to the public domain, free of copyright, contractual, database and other controls.

Technical OA
Like price and rights clearance, technology can create barriers to the access, redistribution, and reuse of articles and data. But technical choices can also help remove them. Technical OA should ensure that materials can actually and effectively be reused, mined, processed, aggregated, integrated, and searched by both humans and machines. Technical barriers can include the following: protection measures that prevent copying, compulsory registration before download, and design features that add hidden costs to search and processing. For example, it can be difficult to download a dataset, or to parse a website with software, often because of the publication format (html pages can be more convenient to browse than .pdf files; html and wiki formats allow comments; two-column articles are difficult to read quickly on most screens but are the norm for scientific articles). Poor indexing or a lack of metadata also prevents some modes of use.

The opening of this triple architecture of market, law and technology to allow broader and better access, including redistribution and reuse, is made possible by social changes and a shift in power and control as further discussed by Jean-Claude Guédon. Authors are the original rights holders and don’t need to transfer all of their rights to publishers, who are exploring alternative business models to ensure sustainability. More and more journals and book editors, as well as data curators are becoming aware of OA’s social benefit and potential impact on innovation and aim at sharing their results. If they wish to do so, they should make sure that not only economic, but also legal and technical restrictions have been effectively waived, so that researchers and the public can not only access, but also redistribute and reuse materials in any way, including ways that initial creators had not considered.

Melanie Dulong de Rosnay is a fellow at the Berkman Center for Internet & Society at Harvard Law School, where she leads research in copyright law and information science. She is designing a distance learning course on copyright for librarians in partnership with eIFL. She is also working on open access science and open data policy with Science Commons, and coordinating publications for Communia, the European thematic network on the digital public domain.

The Looming Destruction of the Global Communications Environment

Essay by Ron Deibert

Ask most citizens worldwide to identify the most pressing issue facing humanity as a whole and they will likely respond with global warming. However, there is another environmental catastrophe looming: the degradation of the global communications environment. The parallels between the two issues are striking: in both cases an invaluable commons is threatened with collapse unless citizens take urgent action to achieve environmental rescue. The two issues are also intimately connected: solutions to global warming necessitate an unfettered worldwide communications network through which citizens can exchange information and ideas. To protect the planet, we need to protect the Net.

Just as evidence of threats to the global natural environment can be found in seemingly unrelated local events – deforestation here, a loss of wetlands there – so too can threats to the global communications environment. In Belarus, for example, access to opposition websites was disrupted during the 2006 presidential election, and then restored immediately afterwards with no explanation. In response to images and videos of demonstrations being uploaded to blogs and news sites, the Burmese government shut off the Internet entirely, except during the period of curfew, when Internet users could be more effectively tracked. In Cambodia, the government quietly disabled the use of text messaging over cellular networks leading up to national elections. In Pakistan, inept attempts to block access to streaming videos containing imagery satirizing the Prophet Muhammad resulted in the collateral filtering, for several hours, of the entire YouTube service – not just for Pakistanis, but for much of the Internet population worldwide.

Further degradation comes from the troublesome encroachments of military and intelligence agencies into the global communications commons. Around the world, states’ armed forces are developing sophisticated doctrines for cyberwar that include everything from computer network attacks to psychological operations. The U.S. Pentagon’s recently launched strategic command for cyberspace, operating under the Air Force, is perhaps the most formidable, ominously talking about “fighting and winning wars” on the Internet. Although details are classified, what this may mean in practice can be fathomed from the recent distributed electronic assault on Estonia, which crippled the country’s emergency, banking, and telephone systems for a period of time after that government decided to move a Soviet-era statue. Evidence gathered about the assault suggests that although it was likely a spontaneous uprising of hackers sympathetic to Russian concerns, the event appears to have been at least partially “seeded” by the Russian state, whose actions spiraled out of control like a cyclone in cyberspace.

Meanwhile, states’ intelligence agencies are increasingly extracting precious information flows through the installation of permanent eavesdropping equipment at key Internet chokepoints, such as Internet exchanges, Internet service providers, or major international peering facilities. When combined with the deep-packet-inspection and traffic-shaping activities undertaken by ISPs to limit use of peer-to-peer networks for alleged copyright violations, these incursions eat away at the constitutive principles of the Internet’s “neutral” architecture. As a consequence, once-seamless global flows of information are now being dammed up, distorted, and diverted into heavily filtered cesspools where surveillance saps creativity and induces a stifling climate of self-censorship.

These and hundreds of other examples from the OpenNet Initiative’s latest research are but a few pieces of evidence of what has become an alarming trend: motivated by short-term security and cultural concerns, dozens of governments and corporations are carving up, colonizing, and militarizing the once seamless Internet environment.

Like any other commons, the global communications environment is a finite public good whose maintenance as a valuable resource depends on sustained contributions of individuals worldwide. And yet citizens are having their legitimate contributions stifled by fickle governments and greedy corporations who are threatened by freedom of speech and access to information.

Fortunately, there are many ways to begin to rescue the global communications environment:

• We need to encourage the research and development of tools (like the censorship-evading software psiphon, or the anonymity network Tor) that support the Internet’s distributed and open architecture.

• We need to promote the Internet’s original culture of sharing, as represented by Creative Commons and the free and open source software movement, as an epistemic bulwark against the possessive and exclusionary instincts of the profiteering motive.

• We need to revive and encourage the original notion of “hacking” as a positive experimental ethic, encouraging citizens – especially youth – not to accept technologies shrink-wrapped and locked down but to open them up and explore them as media of both freedom and control.

• We need to put pressure on governments that censor and the companies who assist them, promoting laws, norms, and principles from the domestic to the international spheres that restrain their shortsighted motives and hold them accountable for their actions.

• And we need to raise global awareness that if we, citizens of the Earth, are ever to solve our many shared problems successfully, we need an unfettered worldwide communications environment with which to do so.

Ron Deibert is associate professor of political science and director of the Citizen Lab at the Munk Centre for International Studies, University of Toronto. He is a co-founder and principal investigator of the OpenNet Initiative and psiphon projects. This essay is a modified and extended version of an earlier essay that appeared on OpenDemocracy.Net.

On Technology, Security, Personhood and Privacy: An Appeal

Essay by John Clippinger with a response by Dembitz.

Continue the conversation on security with David Clark, Michael Barrett, and Beau Brendler.

American democracy has weathered many storms in its 239 years. Its survival and prosperity are consequences of both good fortune as well as the remarkable foresight and common sense of its Founders.

However, a new kind of challenge looms on the horizon, unanticipated by even the most prophetic of the Founding Fathers. It is Technology, more specifically, digital technology, which both offers the promise of unfettered communication, learning, and global commerce, and the prospect of a Panopticon-like State. Two extremes, two doors; each with radically different outcomes. Yet it is the latter choice that is currently championed by the American administration, as well as by its English, European, and Asian counterparts. Buttressed by video clips of angry, bearded terrorists, and a steady refrain of color-coded warnings, a State of omniscient surveillance has emerged as not just a patriotic necessity, but as an inevitability.

But at what cost to our liberties? Must society choose between individual freedoms and public security? Today the threat is not of a marauding army spilling over our ramparts in the dead of night, but a failure to recognize the “bad guy”— not just at our “walls”, but within our walls.

Yet invasions of individual privacy are inherently an affront to the sacred social contract of any Democracy, and therefore not to be undertaken lightly. The presumptive and defining impetus of any authoritarian regime is to know everything about its subjects, to conduct a continuous campaign of “total awareness” in which it is assumed that the State can be trusted and its subjects cannot. In a Democracy, arming a sovereign – even an elected one – with undue powers is to risk turning the State against its people, thereby undermining a fundamental premise of Democracy: We the People.

But, how do we proceed when national security does require the identification of potentially threatening individuals? Shouldn’t those with nothing to hide be willing to trust the government? Shouldn’t every citizen be willing to cooperate with measures that protect the security and liberty of all people? Is not a diminished sense of privacy and autonomy a small price to pay for national security?

To set up a choice between greater freedom and greater security is to create a Hobson’s Choice. If one has to relinquish one’s personal freedoms in order to be secure, then the very rationale for those freedoms is abrogated. Moreover, without trust between the individual and the state, there can be no basis for a social contract. Rather, if there is to be a true and durable social contract, then each party has a living obligation to the other to ensure the inalienability of liberty and security.

How might this be achieved? The remedies lie in principles, practices, and indeed, technologies never imagined by the Founding Fathers. The same technology that gives us the ability to communicate and congregate also gives us the powers to monitor those very communications and to distinguish friend from foe. Imagine for a moment that the eyes and ears of every man, woman and child within a village could be commandeered to watch and report on your every move. Imagine also that they would do so with good intent, but as with all people, their judgments would be fallible. Yet in this instance, their judgments would be deemed definitive. Imagine then what you would be willing to risk in thought, word or deed. No longer would you be a free citizen. You would be a hostage of the State.

Fortunately, people do not have such powers. But technologies do, and they can be used to watch and listen to and remember virtually everything all the time. Such is the dark side, but it is precisely this seemingly sinister dimension of the technology that, paradoxically, renders it a powerful guardian of democratic principles. While people are inherently incapable of controlling everything that they see, hear and remember, machines are very good at this, and can embody and enforce policies that govern precisely what people, companies, and governments can and cannot know about one another.

What this means is that technologies can and should be used to reveal to governments and other authorities only what those governments or authorities explicitly need to know, when they need to know it – without compromising a person’s privacy or freedoms. It also means, almost paradoxically, that governments can collect information – non-identifying information – about people to track suspicious behaviors and identify malevolent actors. Governments do not have to sacrifice privacy or individual freedoms to protect their borders and citizenry. There need not be a Hobson’s Choice.

In order to bring Democratic principles into the 21st century, they need to be reconciled with – and augmented by – Technology; in this case, digital technology. In other cases, it could be biological technologies. In this particular instance, projecting democratic freedoms into a digital future would require that every citizen have control over their personal information. As an expression of this inherent right, citizens would need to disclose only the minimal amount of information that a governing or authorizing body requires to perform its duties.

Complementing this inherent right is the obligation of the citizen, whether at birth or through naturalization, to be identified and verified by a government authority. However, this identifying information would not be public in any sense, and would only be obtainable through a search warrant.

Citizens would have multiple, pseudo-anonymous personhoods for various commercial, governmental, and non-commercial purposes, and these could be monitored through consent and statute, thereby enabling and encouraging the free and protected flow of information.

In some quarters, such an interjection of digital technology into matters of commerce, civil liberties and governance has been met with suspicion, as if it were some kind of unnatural additive, masking potential carcinogens. While that prospect is ever present, digital technology will be an integral component of future democracies, and if we, as members of a global community, show the same perspicacity, commitment, and wisdom as the Founding Fathers, it may be possible to accelerate the birth of authentic democratic institutions, not just within the United States, but throughout the world.

John Clippinger is a senior fellow at the Berkman Center where he works on the development of Higgins – software that gives people control over their personal information. He co-founded the Social Physics project to research the impact of trust, reciprocity, reputation and social signaling on the formation of digital institutions. He is the author of A Crowd of One: The Future of Individual Identity (2007).

What Would a More Secure Future Look Like?

Essay by David Clark with responses by Beau Brendler and Michael Barrett.

Most users of the Internet today would probably say that they are concerned about the state of Internet security. And they would probably be more concerned if they understood the true state of affairs. While many technical improvements have been added to the network over the last decade, many new attacks have been invented as well. More importantly, the motivation for the attacks has changed. The early history of attacks was almost playful, with the computer hacker as a symbol of rebellious technical mastery. Today, attacks are the business of organized crime and cyber-warfare. Attacks originate in parts of the globe with weak laws, little appetite for enforcement and little chance of extradition. Or they originate at the hands of “patriotic hackers”, who launch attacks that may or may not have official state backing.

The recognition that Internet security (or lack thereof) is the backdrop for large illicit profits and cyber-skirmishes should suggest that security is not purely a technical problem. But it is still easy to hope that with enough technical intervention, these problems can be deflected, if not cured. This is a misguided hope. Some of the current problems with the Internet are indeed technical flaws that can be mitigated with a technical solution. But many problems result from the nature of the Internet, and some result from the fact that users of the Internet are only human, and make human mistakes.

First, it is important to recognize how the nature of the Internet has both made it a success and made it vulnerable to malicious behavior. The Internet was designed to be open—open to innovation, to new applications, and to open communication among all parties. This “open by default” design means that it is very easy to try a new application, or to connect to another party anywhere in the world. On the Internet, an inventor does not have to negotiate with Internet Service Providers to trial a new application; they “just do it”. But this open nature also leaves the network open to attack. We could imagine a very different sort of Internet, with more controls and more regulation. It might be safer. It might also feel more like the global equivalent of a police state, with governments and other third parties everywhere watching what users do.

Here are two specific examples that illustrate the benefits and costs of the open Internet. Today, the data sent across the Internet (the packets) carry a source and a destination address; from these addresses it is possible to surmise where in the Internet the source and the destination are located. But there is no easy or consistently reliable way to map these addresses to the identity of the persons at the ends. So it is very hard to hold people accountable for what their computers have done. We could demand that all packets carry some non-repudiable mapping back to a person who can be held accountable, but is this the online world in which we want to live? For another example, consider email, which was designed to allow anyone to send a message to anyone. The design did not require that the sender get a permit or a registered identity in order to send, or that the sender first get the permission of the receiver. So we get an open medium of interaction, and we get spam. We could have designed a “Victorian” email system in which you cannot talk to someone unless you have first been introduced. This approach would have excluded the spammers (unless they were clever social climbers), but again, is this restricted world the one we want?
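
Clark’s point about packet addresses can be made concrete in a few lines of code. The sketch below (Python, standard library only; the sample addresses are illustrative, drawn from documentation ranges) unpacks a fixed 20-byte IPv4 header: everything the network itself records about the endpoints is a pair of numeric addresses, with no field that names a person.

```python
import socket
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header and return its endpoint fields.

    The header carries only numeric source/destination addresses --
    nothing that identifies the person behind either machine.
    """
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "src": socket.inet_ntoa(src),   # e.g. "192.0.2.1" -- just a number
        "dst": socket.inet_ntoa(dst),
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP
    }

# A hand-built example header: 192.0.2.1 -> 198.51.100.7 over TCP.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20, 0, 0, 64, 6, 0,
    socket.inet_aton("192.0.2.1"),
    socket.inet_aton("198.51.100.7"),
)
info = parse_ipv4_header(header)
print(info["src"], "->", info["dst"])  # prints: 192.0.2.1 -> 198.51.100.7
```

Nothing in the header answers “who”: tying an address to a person requires records held outside the protocol, which is precisely the design choice Clark asks us to weigh.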

So the starting point for improving the state of Internet security must be a social dialog, not just a technical dialog, about what sort of Internet we want. The challenge to the technical community is not to build a very secure Internet—that might be more of a price than we actually want to pay. The challenge is to find clever ways to give us more security without taking away our freedom of action. And finding these better solutions will require a design process that involves both technologists and social observers, because it will take both technical imagination and social imagination to conceive of a different Internet from what we have today, more secure but still suited to our desires for open, diverse access.

Here, to stimulate our critical thinking, is just one example of a different Internet that has been seriously put forward as a contribution to better security. Imagine that there is not one Internet, but several of them, each of which is accessible from all of the machines connected at the edge. (In technical terms, these would be called virtual networks.) Different activities would be carried out on the different Internets. On some of them, you would, as today, need no permits or authentication in order to connect, but on one of them, intended for e-commerce, you would not be allowed to connect unless you identified yourself with a credit card as a form of identification. This approach to identification would exclude that vast segment of the population who have no credit cards. But, perhaps, since folks without credit cards cannot purchase anything, there is no reason to worry about excluding them. Or perhaps there is. And if this slice of the Internet, because it was “safer”, attracted more and more activity, those who have no access to a credit card would be excluded from more and more of the Internet’s activity. So perhaps this would be a bad road to start down. Or perhaps the bad consequences could be mitigated. This sort of analysis, trying to look into the future and see the consequences of our design choices, is both necessary and difficult, since there are so many stakeholders and so many paths to the future.

It is not clear where the locus of leadership should center as we work through these options. The problems are trans-national, so no one government can easily take the lead. The deliberation cannot be populated by technologists alone, as I note, but it must have strong and creative participation from them, because creative technologists can help us to imagine the space of the possible. We must not take the present form of the Internet as a given.

In the U.S., the National Science Foundation has challenged the research community to envision what the Internet of 15 years from now should be, and has reached out beyond the networking community to other parts of CS, and beyond that into the social sciences and the humanities, to try to start a multi-disciplinary dialog about the future. Other countries have contemplated similar undertakings, and NSF has reached out to engage them. Perhaps this endeavor, which has an emphasis on better security, can be successful. But it is a significant challenge to build a lasting, multi-disciplinary conversation around difficult issues such as this, no matter how important.

David Clark is currently a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory. Since the mid-1970s, Dr. Clark has been leading the development of the Internet; from 1981 to 1989 he acted as Chief Protocol Architect in this development, and chaired the Internet Activities Board. He has also served as chairman of the Computer Sciences and Telecommunications Board of the National Research Council.

The Right to Communicate

Essay by Daithí Mac Síthigh, a response to Freedom of Listening, by Lewis Hyde

Lewis Hyde’s thoughtful essay on network neutrality and the trials of 18th-century preachers-without-pulpits is a timely reminder that the issue of net neutrality is not one that should be the sole business of a small group of Internet activists and lobbyists. It’s about time to acknowledge that, while increasingly vehement disagreements between economists on how to stimulate the development of broadband in the US are undoubtedly fun to watch, a broader conversation on the cultural and political impact of new technologies is slowly emerging from the confusion that is net neutrality.

There is something poignant about Benjamin Franklin’s idea that the privately-funded lecture hall would “accommodate … the Inhabitants in general”. It is a simple and elegant notion of public service that can exist in any organisational or regulatory context; an ethos that accommodates the contradictory and puzzling whims of the community is, after all, the ultimate in corporate social responsibility. Yet there is also a strong similarity between the decisions of the debate-favouring friends of Franklin and the millions of hours spent by developers, programmers, moderators, designers, bloggers and more in building a vibrant, chaotic and global Internet. It is unsurprising, therefore, that there has been significant popular participation on the “pro-neutrality” side, and arguably less so on the opposite side.

Those opposed to legislation on net neutrality often argue that the law should not get involved in a matter like this. They forget, though, that the pride and joy of much of the US Internet industry, the broad immunity from suit granted by the Communications Decency Act, is of course a privilege granted by law, not something that came down from the sky above. Indeed, the expansive interpretations of the key section (47 U.S.C. § 230(c)) by successive courts serve as a reminder that without a law tailored to the needs of service providers, we would have a very difficult environment for online speech. It may be sensible, then, for lawmakers elected by the people to say that if carriers wish to benefit from the legal protection of being treated like passive carriers (immunity), they may also have to act like passive carriers (neutrality).

On the other hand, it’s important to avoid falling into the trap of “ISPs bad / providers and platforms good”. Of course, for those who support a statutory or regulatory basis for net neutrality, the support of big players is welcome, particularly from a tactical point of view. However, if their dreams came true, and the ISPs were brought under control, the question of the control of content, expression and innovation by culturally significant (and in certain cases, essentially hegemonic) business in the areas of social networking, search and user-generated content would still be on the table. This is not to say that we should be demanding that Google be run by government – but to emphasise that net neutrality is part of a spectrum of issues relating to media, speech and freedom. Indeed, “divided sovereignty” itself, as cited with approval by Hyde, is surely threatened when users must play by the rules of the platform if they wish to interact with their “friends” who have all joined it. Jerome Barron shook the world of First Amendment scholarship in the 1960s by arguing for the right of “access to the media” and the reading of those famous words as a positive right. If the net neutrality debaters return this issue to the centre of the political debate, they will have done us all some service, whatever our creed or views may be.

As Jon Garfunkel points out in the comments to Hyde’s essay, though, our friend Franklin didn’t turn to the legal system. Instead, he took decisive action, bringing others on board and circumventing the status quo. The point I would make, however, is that the question should be not just a matter of where and how steps should be taken – Franklin was not averse to the role of Government acting pro bono publico where appropriate – but more a matter of why. From a non-US perspective, then, we need our own Franklins, to make sense of it all. I would suggest that one aspect, particularly valued in the European Union (and indeed elsewhere), is the idea of public service broadcasting (PSB) – an interesting but marginalised aspect of the US media environment, but a source of national pride in the case of some broadcasters like the BBC in the UK. The role of PSB in providing a cultural space for diverse political, artistic and social expression is a hugely important part of European media history. It is not just about the legal provisions that founded and developed the BBC, though, but the voices and ideas that came over its airwaves, and the impact they had on the community. Similarly, the quiet and determined work of the states that developed the UNESCO Convention on Cultural Diversity shows that the idea that new media has a positive role to play in sustaining and amplifying the range of perspectives and languages, however commercially unfeasible or politically controversial, is getting the recognition it deserves at the level of international law and policy. The challenge facing defenders of PSB and cultural diversity is to build common ground with those who have joined the debate on net neutrality, and to ensure that – through legal or non-legal means, as appropriate – the ideas and vision common to Greek democracy and tumultuous Philadelphia remain a part of the future of the Internet.

Daithí Mac Síthigh blogs at and is completing a PhD on comparative new media law at the School of Law, Trinity College Dublin, Ireland. A participant in the 2007 Summer Doctoral Programme at the Berkman Center, he worked in community and alternative media while studying law in Dublin and Toronto. From August 2008, he is a lecturer in law at the University of East Anglia in the UK.

Malware: The Great Equalizer

Essay by Beau Brendler, a response to David Clark
Continue the conversation with Michael Barrett.

Eight years ago I spent two-grand-plus on a Sony Vaio laptop when they were still sort of cool. It was kind of a muscle car then, full of multimedia editing software I wanted to make movies with in hopes I’d get invited to Cannes rather than conferences with 2.0 in their titles. But then a wretched worm attacked, days of futile damage control followed, and finally I gave up trying to download Service Pack 2 from the Microsoft site and just asked for a CD, which they sent for about $6. (Genius business model! Charge people for patches to fix security holes in your operating system that can’t be downloaded for free because your Web site sucks). After that I might as well have deep-fried my laptop in bacon grease. It lived out its miserable life as hard-drive storage for photos until the screen display dissolved to static.

Just about everybody has a story like this. I don’t want to bore you with mine but to make a point I will return to: I’m supposed to be sort of smart about this stuff, somebody who goes on TV and radio and gets quoted in newspapers talking about security and fraud and other Internet things, yet I was brought low by malicious code in minutes. I feel like the paranoid guy in the first Highlander movie — the only good Highlander movie — who drives around New York City armed with Uzis and MAC-10s only to get push-pinned on Clancy Brown’s giant Kurgan sword. No one’s safe, he complains to the grizzled old detective, bleeding from his ears in a crummy hospital bed. I’ve got all this stuff, and still I’m not safe.

Now, I don’t mean to engage in the kind of hyperbole the computer security industry uses to hype its myriads of marginally effective products. No one’s yet actually been killed by badware (though I have stood in the sweaty Manila headquarters of TrendMicro, watching real-time outbreaks of badware attacks on a topo map of South America alight and blaze red like so many fires in the rainforest, which was a little scary). Dumpster-diving and mailbox raiding were still the number one identity theft vectors last time I checked.

But when I go to bed at night, I know my TV set isn’t going to be stealth co-opted through my satellite cable and coerced to blast my personal data to somebody in Sighişoara. I don’t know this about my PC. A friend of mine who used to manage an Internet service provider told me last week the machine his wife uses to run her home business got skranked so badly by a piece of botnet malware it took days and many dollars to fix. Home invasions just aren’t a happy thing, even if the perpetrators are digital and incapable of carrying baseball bats. I’d be pretty mad if someone somehow outside my house buggered my hard drive so badly that I lost even a single picture of my kids. And again: We’re supposed to know something about computers, my friend and I.

The feds think: We Have a Situation Here. The National Cyber Security Alliance put out a survey a couple of months ago that appears to have gone largely unnoticed, though I don’t dispute the results:

* Only 49 percent of consumers changed their password within the past year, 19 percent within the past month. Wanna bet how many are using “password2” or the cat’s name instead of the dog’s?
* 71 percent haven’t heard the word “botnet.” Actually, I’m surprised it’s not higher, and wonder if the question was phrased, “have you ever heard of a botnet?”
* About half the population don’t know “how to protect themselves from cyber criminals,” probably more when you factor in the magic of social research.

Badware’s even coming at us from digital picture frames these days, and some manufacturers aren’t sure how it got there. Buy a memory stick for your camera off eBay, and if it’s not a fake and you can get it to work, God knows what it’s going to leave you with the morning after. A year ago the FBI said a million computers were infected with malware that could have ginned up an “army of bots” that could threaten national security. “Botnets continue to be an increasing threat to consumers and homeland security. Unsecured computers play a major role in helping cyber criminals conduct cyber crimes,” said Ron Teixeira, NCSA’s executive director.

It’s true—a Consumer Reports survey two years ago found only 21 percent of Americans actually enabled security software on home PCs. But I’m not ready to blame slacker consumers for potential national security threats. People have other things in their lives to worry about, and simple advice for the home user actually goes a long way if it’s followed: You don’t need to worry about Van Eck phreaking, but you should at least turn on WEP-level security on your home network. For anti-virus protection, download Alwil’s Avast!, which doesn’t bug you every 12 months to pay for a re-up, though you do have to keep registering it. Suck it up and sign up for automatic OS updates.

No matter how often we seem to say this stuff, however, lots of people just aren’t going to do it. So we need help from policymakers, computer manufacturers, law enforcement and regulators. For instance: Every PC that leaves a store should come with free, active anti-virus software that doesn’t ask for $24.95 after 12 months, leaving you unprotected until you pay. Consider that it takes only about seven seconds from the time an unprotected computer is plugged into the Internet until its first malware infection.

Since laptops don’t come with instruction manuals anymore, every PC should come with a reasonable, understandable, step-by-step tutorial that walks the user through firewall enabling, browser settings, anti-virus setup, and Internet personal security 101 — the basic principles of phishing, ID theft and the top five most popular Internet cons. Hire children’s book authors, not “technical writers” in China, to create these interactive tutorials. Set operating systems to enable Internet connections only after the tutorial is done. Regulators should keep closer watch on computer security companies and keep consolidation and mergers in check. Whether there should even be a software security industry is a question unto itself; at the least, we need spirited competition.

Finally, try raging against the machine. Don’t buy computers from companies that load bloatware and force insecure operating systems on the public. Buy a couple of how-to books, spend some time on a site like and consider joining the open source movement. And if you’re still worried: Just turn the damn thing off.

Beau Brendler is director of Consumer Reports WebWatch, which he founded and launched in 2002. He is a frequent contributor to the Consumer Reports WebWatch blog. However, this essay represents his opinions as a computer user.

A Take on Peter Suber’s “The Opening of Science and Scholarship”

Essay by Jean-Claude Guédon, a response to The Opening of Science and Scholarship by Peter Suber

There is much to like in Peter Suber’s piece, but one of the most important facets of his argument lies in its very first words: “Who controls access…?” Indeed, the issue of control is closely tied to access. Placing it center stage, as Suber does, reminds us that power is at stake in the quest for Open Access. Discussing power is not always appropriate in polite company, but in the case of Open Access it cannot be avoided.

Open Access is not a completely novel process coming out of nowhere; on the contrary, it stands at the end of a long string of transformations in human communication stretching back to the beginnings of writing. Writing is a form of coding: it can be used to hide or to expose, depending on one’s mastery of the needed arts. Scribes, therefore, wield power.

With writing came control over access to the capacity to write, and over meaning (through reading and commenting). Eric Havelock has argued that if Plato advocates the overthrow of the poet (or bard) by the philosopher, it is because Plato, unlike Socrates, stands on the side of the scribes and wants to locate political power in writing. Preserving collective memory and local traditions became the province of writers, and this shift affected the power structure of society. In other words, a political revolution was afoot.

Anthony Grafton and Megan Williams have documented a similar phenomenon in connection with Origen’s Hexapla. Origen made massive use of codices to compare and critique texts, and to select a canon that could withstand further rebuttals. In imitation of Peter, Origen wanted to build the Church on his Hexapla, and what emerged was the Christian canon. But he also repositioned the reader with respect to texts, and thus achieved a somewhat paradoxical result: to generate orthodoxy, he developed critical tools that he probably treated as mere scaffolding for his grand edifice. The scaffolding, however, somehow survived, in the form of critical thinking.

Print, too, opened opportunities for revolutionary shifts. The Thirty Years’ War and the development of the “public sphere”, to use Habermas’ terminology, lend support to this claim. In establishing a new way of approaching reality and truth, the scientific revolution also sowed revolutionary seeds that were quickly disseminated by newly invented print objects, such as scientific journals.

We can now jump to the end of Peter Suber’s first sentence: “…in the age of the internet.” It is indeed the presence of the internet that opened up new revolutionary possibilities for scientific publishing. But let us remember that a revolution corresponds to a shift in power.

Scientists and scholars quickly sought to take advantage of the internet around 1990, generally in order to communicate better and faster. As a result, they also began to converge on Open Access solutions.

In listing the advantages of Open Access, Peter Suber brings out characteristics that correspond to the non-contentious meaning of “revolutionary” (a word he himself does not use). However, the publishers’ resistance to Open Access is not easily understood from this non-confrontational perspective. Only the quest for power can account for their fierce reactions and their intense lobbying efforts, both in Washington and in Brussels.

In his Code 2.0, Lawrence Lessig brings out the concept of an “architecture of control”. In the print world, the architecture of control rests on the difficulty of copying, reinforced by laws that prohibit it, with costs covered by turning documents into commodities. The copying machine began to weaken this structure, but the rise of the internet removed almost all obstacles to copying, including time and cost. To preserve their role (and revenues), publishers felt that the architecture of control inherent in the print world had to be adapted to the digital world. Key elements of the new architecture of control include centralized servers protected by passwords, and licensing schemes rather than outright sales. Moreover, publishers want their copy of the scientific or scholarly article to be the reference copy, the only copy that can be cited.

Why do authors submit manuscripts to publishers even though publishers restrict their dissemination so much? Simply because the architecture of control also includes the branding capacity of journals. A quantified index called the “impact factor”, based on the average number of citations per article over a two-year period, has been developed. It purports to measure the visibility of journals as seen through citation behavior; visibility, in turn, is related to quality. Finally, the alleged quality of the journal is evenly distributed over all of its authors. Through this questionable and indirect chain of reasoning, journals claim the capacity to brand authors. University administrators and their proxy bodies, for example tenure and promotion committees, have bought into this reasoning, largely because it facilitates evaluation and decreases the possibilities of divisive arguments. So have juries working for research granting agencies. The net result is that authors have no choice but to submit to this curious game.

Open Access does not attack the situation just described head-on. Some of its supporters are even careful to chart a path around it. However, Open Access is not an end in itself; it is merely a symptom of deeper processes linked to the growing role of digitization in our civilization. It is digitization that brings about opportunities for profound shifts in power. Open Access simply defines a battle front in the challenges being thrown at the architectures of control that publishers support. Like a litmus test, the quest for Open Access reveals an architecture of control on the wane.

To conclude, the deeper phenomenon behind Open Access has to do with the internet itself. The networked, distributed structure of the TCP/IP protocols harbors an architecture of control of its own, one that challenges other modes of control. These challenges emerge in various fields, for example in free software and in the distributed production of knowledge exemplified by Wikipedia. They also reveal themselves in the ways in which scientists and scholars want to work, and in their desire to recover full control over the mores of their tribe.

In short, Open Access is a wonderful observation platform to study how an old architecture of control unravels and a new one emerges. For this reason, it is important not only in itself, but also as a way to question the unfolding of the digital age and to meditate on its future.

Jean-Claude Guédon received his PhD in the history of science. He is presently Professor of Comparative Literature at the Université de Montréal. He served as Programme Co-Chair for Inet ’96, ’98 and 2000, and was a member of the Programme Committee for Inet ’97. Advisor to the Minister of Culture and Communication of Québec for the francophone meeting of the ministers in charge of infohighways (Montreal, May 1997), he was also Program Committee Chair for the AUPELF-UREF meeting on “Education and Internet” that took place in Hanoi in October 1997. Since then he has served on the Sub-Board of the Information Program of the Open Society Institute (2002-6) and on the board of Electronic Information for Libraries (eIFL) from 2003 until 2007. Presently, he is vice-president of the Canadian Federation for the Humanities and Social Sciences.