Archive for the 'information overload' Category

The Future of Books in the Digital Age: Conference Report

Today, I attended a small but really interesting conference chaired by my colleagues Professor Werner Wunderlich and Prof. Beat Schmid from the Institute for Media and Communication Management, our sister institute here at the Univ. of St. Gallen. The conference was on “The Future of the Gutenberg Galaxy” and looked at trends and perspectives of the medium “book”. I learned a great deal today about the current state of the book market and future scenarios from a terrific line-up of speakers. It was a particular pleasure, for instance, to meet Prof. Wulf D. von Lucus, who teaches at the Univ. of Hohenheim and is also the Chairman of the Board of Carl Hanser Verlag, which will be publishing the German version of our forthcoming book Born Digital.

We covered a lot of terrain, ranging from definitional questions (what is a book? Here is a legal definition under Swiss VAT law, for starters) to open access issues. The focus of the conversation, though, was on the question of how digitization shapes the book market and, ultimately, whether the Internet will change the concept “book” as such. A broad consensus emerged among the participants (a) that digitization has a profound impact on the book industry, but that it’s still too early to tell what it means in detail, and (b) that the traditional book is very unlikely to be replaced by electronic formats (partly referring to the superiority-of-design argument that Umberto Eco made some time ago).

I was the last speaker at the forum and faced the challenge of talking about the future of books from a legal perspective. Based on the insights we gained in the context of our Digital Media Project and the discussion at the forum, I came up with the following four observations, or theses:

Technological innovations – digitization in tandem with network computing – have changed the information ecosystem. From what we’ve learned so far, it’s safe to say that at least some of the changes are tectonic in nature. These structural shifts in the way in which we create, disseminate, access, and (re-)use information, knowledge, and entertainment have both direct and indirect effects on the medium “book” and the corresponding subsystem.

Some examples and precursors in this context: collaborative and evolutionary production of books (see Lessig’s Code 2.0); e-Books and online book stores (see ciando or Amazon.com); online access to books (see, e.g., libreka, Google Book Search, digital libraries); creative re-uses such as fan fiction, podcasts, and the like (see, e.g., LibriVox, Project Gutenberg, www.harrypotterfanfiction.com).

Law is responding to the disruptive changes in the information environment. It not only reacts to innovations related to digitization and networks, but also has the power to actively shape the outcome of these transformative processes. However, law is not the only regulatory force, and gaining a deeper understanding of the interplay among these forces is crucial when considering the future of books.

While fleshing out this second thesis, I argued that the reactions to innovations in the book sector may follow the pattern of ICT innovation described by Debora Spar in her book Ruling the Waves (Innovation – Commercialization – Creative Anarchy – Rules and Regulations). I used the ongoing digitization of books and libraries by Google Book Search as a mini-case study to illustrate the phases. With regard to the different regulatory forces, I referred to Lessig’s framework and used book-relevant examples such as DRM-protected eBooks (“code”), the use of collaborative creativity (“norms”), and book-price fixing (“markets”) to illustrate it. I also tried to emphasize that the law has the power to shape each of the forces mentioned above in one way or another (I used examples such as anti-circumvention legislation, the legal ban on book-price fixing, and mandatory copyright provisions that preempt certain contractual provisions).

The legal “hot-spots” when it comes to the future of the book in the digital age are the questions of distribution, access, and – potentially – creative re-use. The areas of law that are particularly relevant in this context are contracts, copyright/trademark law, and competition law.

Based on the discussion at the forum, I tried to map some of the past, current, and emerging conflicts among the different stakeholders of the ecosystem “book”. In the area of contract law, I focused on the relationship between authors and increasingly powerful book publishers that are tempted to use their superior bargaining power to impose standard contracts on authors and to acquire as many rights as possible (e.g. “buy-out” contracts).

With regard to copyright law, I touched upon a small but representative selection of conflicts, e.g. the relationship between right holders and increasingly active users (referring to the recent hp-lexicon print-version controversy); the tensions between right holders and (new) Internet intermediaries (e.g. the liability of platforms for infringements by their users in cases of early leakage of bestsellers, or the interpretation of copyright limitations and exemptions in the case of full-text book searches without the permission of right holders); the tension between publishers and libraries (e.g. positive externalities of “remote access” to digital libraries vs. the lack of exemptions in national and international copyright legislation – a topic my colleague Silke Ernst is working on); and the tension between right holders and educational institutions (with reference to this report).

As far as competition law is concerned, I sketched a scenario in which Google Book Search would reach a dominant market position with strong user lock-in due to network effects and would decline to digitize and index certain books or book programs, for instance for operational reasons. Based on this scenario, I speculated about a possible response by competition law authorities (with European authorities in mind) and raised the question of whether Google Book Search could be regarded, at some point, as an essential facility. (In the subsequent panel discussion, Google’s Jens Redmer and I had a friendly back-and-forth on this issue.)

Not all of the recent legal conflicts involving the medium “book” are related to the transition from an analog/offline to a digital/online environment. Law continues to address book-relevant issues that are not new, but rather variations on traditional doctrinal themes.

I used the Michael Baigent et al. v. Random House Group decision by London’s High Court of Justice as one example (did the author of The Da Vinci Code infringe copyright by “borrowing” a theme from the earlier book Holy Blood, Holy Grail?), and the recent Esra decision by the German BVerfG as a second one (the author’s freedom of expression vs. the privacy rights of a person, in a case where it was all too obvious that a character in the novel was a real and identifiable person and where intimate details of that real person were disclosed in the book).

Unfortunately, we didn’t have much time to discuss several other interesting issues that were brought up relating to the generation born digital and its use of books – and the consequences of kids’ changed media usage in a changed media environment, e.g. with regard to information overload and the quality of information. These are topics, to be sure, that John Palfrey and I are addressing in our forthcoming book.

In sum, an intense, but very inspiring conference day.

Update: Dr. David Weinberger, among the smartest people I’ve ever met, has just released a great article on ebooks and libraries.

“Born Digital” and “Digital Natives” Project Presented at OECD-Canada Foresight Forum

Here in Ottawa, I had the pleasure of speaking at the OECD Technology Foresight Forum of the Information, Computer and Communications Policy Committee (ICCP) on the participative web – a forum aimed at contributing to the OECD Ministerial Meeting “The Future of the Internet Economy” that will take place in Seoul, Korea, in June 2008.

My remarks (what follows is a summary; a full transcript is available, too) were based on our joint and ongoing Harvard–St. Gallen research project on Digital Natives and included some of the points my colleague and friend John Palfrey and I make in our forthcoming book “Born Digital” (Basic Books, 2008).

I started with the observation that increased participation is one of the features at the very core of the lives of many Digital Natives. Since most of the speakers at the Forum were putting emphasis on creative expression (like making mash-ups, contributing to Wikipedia, or writing a blog), I tried to make the point that participation needs to be framed in a broad way and includes not only “semiotic democracy”, but also increased social participation (cyberspace is a social space, as Charlie Nesson has argued for years), increased opportunities for economic participation (young digital entrepreneurs), and new forms of political expression and activism.

Second, I argued that the challenges associated with the participative web go far beyond intellectual property rights and competition law issues – two of the dominant themes of the past years as well as at the Forum itself. I gave a brief overview of the three clusters we’re currently working on in the context of the Digital Natives project:

  • How does the participatory web change the very notion of identity, privacy, and security of Digital Natives?
  • What are its implications for creative expression by Digital Natives and the business of digital creativity?
  • How do Digital Natives navigate the participative web, and what are the challenges they face from an information standpoint (e.g. how to find relevant information, how to assess the quality of online information)?

The third argument, in essence, was that there is no (longer a) simple answer to the question “Who rules the Net?”. We argue in our book (and elsewhere) that the challenges we face can only be addressed if all stakeholders – Digital Natives themselves, peers, parents, teachers, coaches, companies, software providers, regulators, etc. – work together and make respective contributions. Given the purpose of the Forum, my remarks focused on the role of one particular stakeholder: governments.

While this is still research in progress, it seems plain to us that governments may play a very important role in one of the clusters mentioned above, but only a limited one in another. What’s much needed, then, is a case-by-case analysis. I briefly illustrated the different roles of governments in areas such as

  • online identity (currently no obvious need for government intervention, but “interoperability” among ID platforms on the “watch-list”);
  • information privacy (important role of government, probably less in the form of more laws than of better implementation and enforcement as well as international coordination and standard-setting);
  • creativity and the business of creativity (rely on the power of market forces and bottom-up approaches in the first place, but with a role for governments at the margins, e.g. using the leeway available when legislating on DRM, or reforming the limitations and exceptions to copyright law);
  • information quality and overload (only a limited role for governments, e.g. by providing quality minima and/or a digital “service public”; emphasis on education, learning, and media and information literacy programs for kids).

Based on these remarks, we identified some trends (e.g. multiple stakeholders shape our kids’ future online experiences, which creates the need for collaboration and coordination) and closed with some observations about the OECD’s role in such an environment, proposing four functions: awareness raising and agenda setting; knowledge creation (“think tank”); international coordination among various stakeholders; alternative forms of regulation, incl. best practice guides and recommendations.

Berkman Fellow Shenja van der Graaf was also speaking at the Forum (transcripts here), and Miriam Simun presented our research project at a stand.

Today and tomorrow, the OECD delegates are discussing the take-aways of the Forum behind closed doors. Given the broad range of issues covered at the Forum, it will be interesting to see which items finally make it onto the agenda of the Ministerial Conference (IPR, intermediary liability, and privacy are likely candidates).

Rational Choice Theory vs. Heuristics Approach: What are the Consequences for Disclosure Laws?

I just finished reading Heuristics and the Law (2006) edited by Gerd Gigerenzer and Christoph Engel. It’s an interesting collection of essays by legal scholars, psychologists, and economists, exploring the conceptual and practical power of the heuristics approach in law. Given my own research interest, I was particularly intrigued by Chris Guthrie’s contribution with the title “Law, Information, and Choice: Capitalizing on Heuristic Habits of Thought.”

In the article, Chris starts with the observation that American law has a long-standing tradition of fostering individual autonomy and choice by mandating the disclosure of information. Disclosure rules can be found in many areas of law, ranging from corporate law and product liability to gaming laws. The underlying rational choice approach, according to Guthrie’s analysis, assumes that individuals will use all available information to “identify and evaluate all available options, assess and weight all of the salient attributes of each option, and then select the option they evaluate most favorably” (id., p. 427). Guthrie contrasts these assumptions with (empirical) insights from heuristic-based approaches (the “fast and frugal heuristics program” and the “heuristics-and-biases program”), which suggest that individuals often make sound decisions by using limited information, and concludes that lawmakers should heed the lessons of these theories in order to foster autonomy and choice. More specifically, the author argues that lawmakers should not aim for full disclosure of information, as rational choice theory would recommend, but should require limited disclosure: by identifying the specific pieces of information to be disclosed, by requiring that the information be presented in a manner designed to attract users’ attention and inform their understanding, and by imposing limitations on the amount of disclosed information.
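To make the contrast concrete, here is a toy sketch in Python (my own illustration, not drawn from Guthrie’s chapter; the option names, cues, weights, and cue ordering are all invented for the example). It compares a weighted-additive rule in the spirit of rational choice theory, which weighs every disclosed attribute, with a “take-the-best” heuristic from the fast and frugal program, which consults cues one at a time in order of assumed validity and decides as soon as one cue discriminates.

    # Toy comparison (not from Guthrie's chapter): full-information weighted-additive
    # choice vs. the "take-the-best" heuristic of the fast and frugal program.
    # Option names, cue values, weights, and cue order are invented for illustration.

    options = {
        "loan_A": {"low_fee": 1, "fixed_rate": 0, "early_repayment": 1},
        "loan_B": {"low_fee": 0, "fixed_rate": 1, "early_repayment": 1},
    }

    # Rational-choice style: weigh all disclosed attributes and sum them up.
    weights = {"low_fee": 0.5, "fixed_rate": 0.3, "early_repayment": 0.2}

    def weighted_additive(options, weights):
        scores = {name: sum(weights[cue] * value for cue, value in cues.items())
                  for name, cues in options.items()}
        return max(scores, key=scores.get)

    # Fast-and-frugal style: consult cues in order of assumed validity and stop
    # at the first cue that discriminates between the two options.
    cue_order = ["low_fee", "fixed_rate", "early_repayment"]

    def take_the_best(options, cue_order):
        (name_a, cues_a), (name_b, cues_b) = options.items()
        for cue in cue_order:
            if cues_a[cue] != cues_b[cue]:
                return name_a if cues_a[cue] > cues_b[cue] else name_b
        return name_a  # no cue discriminates: guess

    print(weighted_additive(options, weights))  # uses all available information
    print(take_the_best(options, cue_order))    # stops after the first cue

In this toy case both rules pick the same option, which is the kind of result the heuristics literature points to: limited but well-ordered information can support sound decisions.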

Clearly, Chris Guthrie’s empirical arguments support some of the observations I have made – inspired by my Doktorvater Prof. Dr. Jean Nicolas Druey – in the context of my information quality research on the one hand and earlier discussions of the information overload problem on the other. However, I’m not sure whether I agree with all of Guthrie’s conclusions, particularly once we move from an analog/offline to a digitally networked environment. The author himself acknowledges in the final paragraph – but leaves unanswered – the problem that information phenomena are highly context-specific and that information processing has an inherently subjective component. These characteristics have been identified as among the key challenges for a legal system that aims to regulate information (i.e. what we call information law on this side of the Atlantic), which by its own nature seeks to make general and abstract statements (“norms”) about informational phenomena. Against this backdrop, it is questionable to what extent the “content-presentation-amount” program suggested by Guthrie can ease this fundamental tension between the characteristics of informational phenomena and the nature of information law.

Besides the context-dependency and individuality of information processes, there are two further reasons why I’m not convinced that the author’s basic observation – that individuals make decisions based on limited information (although full information would, in theory, be available) – justifies the normative conclusion that lawmakers should limit the amount of information to be disclosed when drafting laws.

  • First, there are people who follow – at least in certain situations, and sometimes because it’s required by professional ethics or even by duty-of-care standards – the “textbook”-style decision-making procedure envisioned by rational choice theory. These individuals, I would argue, might be worse off under a regime based on the approach that “less information is more information”. In short, I would argue that less information is not always more information.
  • Second, the aggregation of large amounts of information becomes much more efficient and effective once we operate in a digitally networked environment. Indeed, some of the “mathematical” work involved in preparing decisions under rational choice theory is increasingly done by services – ranging from peer-based recommendation systems (“wisdom of the crowds”) to more hierarchical expert systems – that are in the business of collecting and comparing information for their users. Here, I would argue that the quality of the information on which decisions can be based is likely to decrease if lawmakers impose qualitative and quantitative limits on the disclosure of information by the respective senders.

In any event, Chris Guthrie’s arguments deserve close attention and further consideration, although one might disagree with some of the conclusions.

Disclosure Statements: An Afterthought from Devil’s Advocate

David Weinberger and John Palfrey, among others, have posted impressive general (as opposed to specific) disclosure statements on their weblogs. Currently, I think that’s a good way to address some of the credibility issues related to weblogs. Probably I should follow suit, although this blog (and blogger) is certainly of much less interest than the two mentioned above.

In any event, let me play devil’s advocate for a moment: What’s down the road if we take general (as opposed to specific, case-by-case) disclosure seriously as an approach and compare it to areas of practice where we’ve been working with somewhat similar approaches? Do we face a future where disclosure statements (just imagine such statements from some of our highly networked colleagues!) get as long and complicated as the package inserts of drugs, end user license agreements, or terms of service? Will we one day click on “I agree” boxes to accept disclosure statements before we read a blog? Or will we build aggregators that collect and analyze the disclosure profiles of bloggers, where one can check boxes to exclude, for instance, the RSS feed of a philosopher who does consulting work on the side? If the importance of disclosure statements increases under such a scenario, are we likely to see, in the long run (as in traditional media law), legislation and regulation establishing disclosure rules and/or standards?

Information overload: A legal perspective (Part II)

As promised, here’s another translation/summary of Jean Nicolas Druey’s work on “information overload” (published as: “Daten-Schmutz” – Rechtliche Ansatzpunkte zum Problem der Über-Information, in: Festschrift zum 65. Geburtstag von Mario M. Pedrazzini, Bern 1990, pp. 379-396).

Druey introduces the article on pp. 379-380 with some general thoughts about the emergence of the information society and the increasing awareness of information as a building block of our society, our lives, etc. He argues that the “information age” must also have an impact on the legal system, not only because we face the emergence of new problems, but – more fundamentally – because law is itself information. However, Druey claims that legal scholars haven’t thoroughly reflected on the legal ramifications of the information phenomenon. One exception, according to Druey, is data protection law (in German: Datenschutz. Please note that the title of the article, “Daten-Schmutz,” plays on words: in German, “Daten-Schmutz” sounds almost like “Datenschutz,” but means something entirely different – Schmutz is smut or dirt, i.e. “data smut.”)

On p. 380, Druey outlines the fundamental legal problems related to information. He starts with the notion that law is aimed at reconciling opposing interests. Traditionally, opposing interests in information were related to situations where one party was eager to get information about something, while the other party had an interest in keeping this information (knowledge) confidential – or at least in not communicating it. The focus of attention in the past, according to Druey, was thus on confidentiality, secrets, and the like. In more recent times, however, the emphasis has shifted: the emergence of the information society has been accompanied by the creation of a great number of “information rights”, i.e. rights to obtain information from other individuals, but also from governments (UG: think, for instance, of freedom of information acts).

Much less attention, however, has been paid to the fact that the structure of interests might be reversed: consider a person who has information and wants to be heard. Is there a right to be heard? And on the other side of the same coin: a person who is *not* informed might have an interest that the information channel remains closed, i.e. that he does not have to receive information.

Thus, the basic conflicts in information law might be mapped in the following matrix (p. 381):

                          I. Interested Party
                          a) Informed party              b) Non-informed party
II. Interest
c) in information         Access rights                  Right to information (disclosure rules)
d) in non-information     Confidentiality laws           Protection against information

This matrix suggests that information has an ambiguous nature. We are used to thinking of information as a positive value, and, in fact, the traditional conflicts – i.e. confidentiality and the right to information – are based on this assumption. But the viewpoint that someone might have an interest in being protected against information makes it clear that information can also have a negative value. Druey acknowledges that this notion of a “negative value of information” is counterintuitive and runs against our common-sense understanding. He argues that the positive notion of information we generally apply describes an ideal world, because we tend to neglect that information must be processed at some cost. This ideal situation is not an accurate description of reality, since information always needs to be processed and may even be counterproductive if a given receiver draws wrong conclusions from it (p. 381).

On p. 382, Druey turns to the problem of information overflow. The situation of opposing information interests under this scenario is obviously connected to the fourth quadrant (“protection against information”) of the above matrix. On pp. 382/83, Druey outlines a “Postulate of Information Ecology”. He starts with a brief description of the problem of information overload from a rather subjective perspective and with some narratives. Then he turns to the question of how the problem of information overload can be addressed. On p. 382, he distinguishes between three approaches: first, the receiver has to learn to live with information overload and has to improve her information selection and processing capabilities, competences, etc. Second, it is crucial that intermediaries (media, but also teachers, consultants, etc.) step in and pre-select, pre-process, translate, and customize information. Third, in many situations the only solution might be to reduce the activity level of the sender (source). This third, sender-side approach is the one Druey focuses on for the remaining part of the article.

Druey argues that the necessity to develop strategies against information overload that apply to the sender/source derives from the distinct nature of information as compared to goods. If too many goods are distributed, the resulting problem is one of “wasting resources” and “waste disposal”. Redundant information, in contrast, limits the attentiveness of the receiver at the cost of information that probably has a higher relevance. This goes back to the characteristic of information that its relevance can only be assessed once it has been consumed. Druey concludes (p. 383): the greater the information supply, the greater the risk of choosing irrelevant information and ignoring the relevant. In sum, Druey argues that too much information is not only a waste of resources, but does harm, and that the selection cannot be delegated to a market, but creates a responsibility of the sender. (Later, he returns to the notion that the market cannot solve the selection problem.) Druey acknowledges that the idea of a sender responsibility with regard to “too much information” has not been an issue in law. However, he argues that there are at least some examples or precursors where law aims to limit the dissemination of information in order to serve different (!) interests than the classic secrecy/confidentiality interests. The first example (pp. 384-87) concerns the therapeutic privilege, i.e. a situation “in which the physician may be excused from disclosing information to a patient when there is sufficient evidence that the patient is not psychiatrically or emotionally stable to handle the information, that the disclosure of information itself would pose serious and immediate harm to the patient, such as inducing some physiologic response such as a heart attack or prompting suicidal behavior.” (Source: http://sprojects.mmi.mcgill.ca/ethics/definitions.htm.) Druey takes from this example that duties to inform are not always suited to bridging a lack of trust. Rather, these duties – and the exercise of them – are themselves part of the trust relationship.

Second, Druey looks at information bans in antitrust law (pp. 387-390). Here, it might be enough to say that Druey uses a U.S. antitrust case to develop the argument: United States v. Container Corp. of America, 393 U.S. 333 (1969). In essence, Druey argues that markets need information; market transparency is thus a prerequisite for competitive markets and their regulatory effects. However, there are circumstances in which disclosing information inhibits competition, because market participants adjust to the behavior of others once it has become public. Here, Druey uses Container Corp. to make the point. He argues that the ambiguous nature of information (“it is good, but not always”) makes choices difficult, because the question of whether information is good or bad heavily depends on factors such as the structure of the market (Druey refers to Justice Douglas’ opinion in Container Corp.). The risk that too much information may produce broadly coordinated behavior is also evident in cases where reactions to rather limited stock exchange crashes lead to fatal chain reactions and global crises. As a consequence, Druey suggests that limitations on information and delays in information processes might be instruments for stabilizing order.

Third, Druey explores consumer protection laws, which often stipulate disclosure rules (pp. 390-392). Druey argues that the enormous amount of information available to consumers might result in an overload, with the unintended consequence that consumers turn back to “simple” messages, e.g. those presented in TV spots. He thinks, however, that intermediaries might help, e.g. organizations that test products and publish rankings for certain categories of products. In any event, Druey argues (p. 392) that simply putting intermediaries in place does not solve all the problems. Rather, the selection problem is merely delegated from the receiver to the intermediary, and the receiver thereby loses autonomy. Moreover, if decisions about information shift on a large scale from individuals to intermediaries – often market players themselves – this might affect the market itself.

Fourth (pp. 392-395), Druey looks at limitations of information in the interest of culture and education. In this section, he argues that the concept of a “free flow of information” does not work as a policy principle, because the capacities to absorb and process information are limited, and because the selection of the best information cannot simply be delegated to the market mechanism. It is therefore not appropriate to adopt “free flow of information” as a policy principle and to leave the individual alone with the overwhelming amount of information. Rather, the state has some responsibility to put mechanisms in place that address the problem of over-information of its citizens. Druey makes clear that these arguments do not advocate censorship or the like. He simply argues that the principle “let information flow” is not the solution to a complex problem. One of the most important sentences, in my opinion, is on p. 394, where Druey concludes: “It is one of the tasks of the law to design a system of intermediaries which guarantees a *relative* maximum of freedom to send, but also to receive, information.”

In his conclusion (pp. 395/6), Druey argues that it is crucial to understand – also from a legal perspective – that we must care about an optimum, not a maximum, level of information. Moreover, he summarizes the problem of information overflow as follows: the problem, generally speaking, goes back to the phenomenon that a given receiver overestimates the importance of one piece of information in relation to another. This results from the fact that the receiver is swamped by the task of selecting information. As a consequence, certain legal limitations of information flows are not directed against freedom of information/free speech (understood as the supply of information that satisfies the information needs of individuals as well as possible). Rather, such limitations might even be required to achieve this freedom (p. 396). Finally, Druey emphasizes that – at best – we are in the process of identifying the problem of information overload, but that we are far away from having any adequate solutions to it. Certainly, however, we are reaching the limits of what law can and should do.

Information overload – a legal perspective (Part I)

According to Lyman and Varian’s How Much Information? 2003 study, print, film, magnetic, and optical storage media produced roughly 5 exabytes of new information in 2002 (five exabytes of information is equivalent to the information contained in 37,000 new libraries the size of the Library of Congress book collections). According to the study, almost 800 MB of recorded information is produced per person each year, equivalent to 30 feet of books if this information were stored on paper.
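As a rough sanity check of these figures, the following back-of-the-envelope sketch (my own arithmetic; the conversion assumptions are mine, not the study’s) backs out what the quoted equivalences imply:

    # Back-of-the-envelope check of the figures quoted from "How Much Information? 2003".
    # The conversion assumptions (bytes per book, shelf space per book) are mine.

    new_info_bytes = 5 * 10**18        # ~5 exabytes of new stored information in 2002
    libraries = 37_000                 # quoted number of Library-of-Congress-sized libraries
    per_person_bytes = 800 * 10**6     # ~800 MB of recorded information per person per year

    implied_library_bytes = new_info_bytes / libraries
    print(f"Implied size per library: ~{implied_library_bytes / 10**12:.0f} TB")  # ~135 TB

    # Assuming ~1 MB per plain-text book and ~half an inch of shelf space per book:
    books = per_person_bytes / 10**6
    shelf_feet = books * 0.5 / 12
    print(f"~{books:.0f} books per person, roughly {shelf_feet:.0f} feet of shelf space")

With these admittedly crude assumptions, 800 MB per person comes out at roughly 800 books and just over 30 feet of shelf space, which is in the same ballpark as the figure the study cites.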

Moreover, the information society is developing rapidly. Rapid change, in turn, is accompanied by an increase in the information needed to keep up with those developments. Against this backdrop, it comes as no surprise that “information overload” has been identified as one of the problems of our society. Psychologists use terms such as “information fatigue syndrome” to describe the symptoms resulting from information overload, while representatives of other disciplines focus on ways to deal with it.

Information overload has been the subject of various studies and research programs. Interestingly, however, legal scholarship – itself exposed to the information problem – has not been engaged in this debate. A prominent exception is Jean Nicolas Druey, Professor em. at the University of St. Gallen, Switzerland. In a seminal book on “Information as a Subject of Law” (in German: “Information als Gegenstand des Rechts,” Schulthess: Zurich & Nomos: Baden-Baden, 1995) and in an article, he has addressed the phenomenon of information overload from a legal perspective.

Since it’s one of the purposes of this weblog to build a bridge between U.S. and European scholarship in information law, I decided to translate and summarize Druey’s study on information overload. The idea goes back to my colleague Derek Bambauer’s interest in Druey’s approach. Derek is working on a paper on spam, in which he applies an information-policy approach, and I’d like to thank him for our ongoing discussion of this and other issues. In this post, I translate and summarize the discussion as presented in Druey’s book; in a subsequent post, I will turn to the article, in which Druey explores the issue in depth.

Druey addresses the phenomenon of “information overload” (in German: “Ueberinformation”) in the context of a broader discussion aimed at demonstrating that information as such – contrary to the mainstream opinion that “more information is better” – does not have an intrinsically positive value. Rather, information is neutral in nature, since it can have not only a positive but also a negative value, e.g. in cases where information lacks quality, serves an immoral purpose, or is redundant.

With regard to the third aspect, i.e. information overload (“Ueberinformation”), Druey claims that the problem of “information overload” has not been sufficiently analyzed in the different areas of research. He argues that this lack of analysis goes back to the common information-theory (Shannon, Weaver) approach to information, which conceptualizes “information overload” as a problem of the “channel” rather than of human beings.

Why is too much information a bad thing? In essence, Druey argues that the “overproduction” and “oversupply” of information is a waste of resources. First, in the case of a priest who is preaching in church without an audience, for instance, we have the problem that information is presented at some cost without reaching any receivers. Second, too much information clogs our capacity to receive and process information. In both cases, the problem boils down to a suboptimal use of potentially useful information on the one hand and unnecessary costs/expenses on the other. Third, and even more importantly, the increasing amount of information and data leads to an increasing risk that the wrong information is selected (“wrong selection of information”, p. 69). Druey argues that, consequently, the competition over the scarce resource “attention” (or better, “attentiveness”) has a negative feedback effect on the quality level of information itself (Druey uses the example that an overview of the literature in a particular field, aimed at addressing the problem of “too much information – lost overview”, itself contributes to the problem it seeks to solve by adding yet another piece of information).
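Druey’s selection-risk point can be restated in very simple quantitative terms. The sketch below is my own illustration, not Druey’s: assume a receiver can attend to a fixed number of items c, that the number of relevant items k stays constant, and that selection is unguided (uniformly at random). The expected number of relevant items actually attended to is then c·k/n, which shrinks as the total supply n grows.

    # Minimal illustration (mine, not Druey's) of the selection-risk argument:
    # k relevant items sit in a pool of n; the receiver attends to c items chosen
    # uniformly at random without replacement (i.e. without any guidance or filter).

    def expected_relevant_attended(n: int, k: int = 10, c: int = 20) -> float:
        """Expected number of relevant items among the c items attended to."""
        return c * k / n  # expectation of a hypergeometric draw

    for n in (100, 1_000, 10_000, 100_000):
        print(f"supply n = {n:>7}: expected relevant items seen = "
              f"{expected_relevant_attended(n):.3f}")

    # As n grows while k and c stay fixed, the expectation falls toward zero:
    # the larger the unfiltered supply, the smaller the chance that the relevant
    # pieces are among those actually processed.

In Druey’s terms, this is why the sheer growth of unfiltered supply makes selection, filtering intermediaries, and sender-side restraint more important, not less.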

This problem of “wrong selection of information” has first and foremost a negative impact on decision-making processes. A citizen whose head is full of sports news and results is not necessarily in good shape to make political decisions (to vote, for instance). This example, according to Druey, illustrates that the phenomenon of “information overload” may not only affect the receiver in a negative manner, but may also have negative effects on other interests and stakeholders. In fact, “information overload” may also infringe the interests of senders in cases where an important or relevant piece of information gets stuck in the blockage. Moreover, information overload may also affect “institutions” that heavily depend on the flow of relevant information (Druey quotes a scholar in the field of organization theory: “the major communication problem is information overload” [Everett M. Rogers/Rekha Agarwala, Communications in Organizations, New York 1976, p. 90]).

But too much information does not only harm information processes. In cases where information is used in order to “regulate” (in a broad sense of the term) certain social mechanisms and processes, “too much” might do harm as well. Druey refers to the “market” as an example: total information would kill market dynamics. What matters is an optimum of information, not total information or complete transparency. At the same time, “non-information” may have a limiting effect on the general activity level, which, in turn, might be a source of order. This thought is connected to the concept of equality. Equality has something to do with making issues more abstract, with abstracting from detailed information (e.g. not considering information about race, gender, …). The willingness to be governed under a particular regime heavily depends on “not knowing” (Druey refers to Rawls’ “veil of ignorance”).

Druey concludes (p. 70) that these examples and arguments suggest that information as such – regardless of the quality of a given piece of information – might be counterproductive from the viewpoint of a receiver. Further, Druey concludes that the discussion has demonstrated that the buzzword “information overload” has two aspects: a quantitative one (“too much”), but also a qualitative one. The qualitative aspect becomes visible in the context of the above-mentioned processes/procedures, which may require the retention of certain information. In another chapter of the book, Druey argues that this need for “dosing” information in order to safeguard certain processes/procedures is the reference point for certain forms of legal secrets (e.g. the protection of trade secrets).

On p. 135, Druey comes back to the issue of information overload when he discusses a “right against information” (in the sense of an emerging “right not to receive information” as an aspect of informational freedom). He argues again that the harm of “too much information” lies in the costs associated with the fact that a receiver cannot receive potentially relevant information because he is cognitively “clogged” with information that might not be at the core of his informational interests. Addressing the question of responsibility in the legal sense, Druey argues that the law has not yet been responsive. The threshold that triggers liability is extremely high in the case of information overload compared, for instance, to other cases of negative information (e.g. the case of bad advice). A basis for a claim might be found in contractual obligations, but beyond that, law has not developed remedies to address the problem of “information overload”. Thus – and that is Druey’s conclusion – law has to rely on other regulatory modes to address the problem. Since there might only exceptionally be an “individual right against information”, law must trust in the regulatory power of filters. Filtering functions are performed by media, teachers, interest groups, and the like. Druey emphasizes in footnote 16 on p. 137 that media – in the broad sense of the term, i.e. as “informational transformers” – are key to addressing the issue at stake. Media’s function, according to Druey, is to mix “fire” with “water”, i.e. to harmonize a subjective with an objective approach to (active and passive) information needs.

To be continued.

State of Play: Information Quality Research

Together with Martin J. Eppler (University of Lugano) and Markus Helfert (Dublin City University), I serve as a guest editor of a 2004 special issue of the international journal Studies in Communication Science on information quality. This special issue brings together researchers in the domain of business and organizational studies, as well as information technology and legal scholars, to share findings regarding information quality and information quality management.

I’ve been asked to write a summary of the state of information quality research as far as information law is concerned. Here’s the draft:

“Information quality as a cross-sectional matter has only recently become a subject of legal scholarship. In essence, we might distinguish between three stages in the evolution of information quality as a research topic in law: initially, legal scholars addressed particular aspects of the information quality problem – mostly against the backdrop of a new piece of legislation, a court case, or the like – within the well-established, but rather fragmented, sub-disciplines of law such as constitutional law, copyright, contract, or corporate law. In this early stage, information quality was neither perceived as a distinct research field nor explored from a conceptual angle. In a second stage, triggered by Jean Nicolas Druey’s groundbreaking monograph on information as a subject of law, a debate emerged about the definition of information quality, about quality criteria, and about the question of legal assessments of information quality. The insights gained from this scholarly work have been applied to specific problem areas such as the regulation of mass media, where the quality of information as a “product” came up for discussion, or to privacy-related issues. Most recently, however, a more fundamental debate about information quality regulation has been launched. A book edited by one of the authors of this introduction addresses the promise and concerns associated with information quality from the perspective of information law, analyzes key problems of informational quality regulation, and provides theoretical overviews of legal approaches to information quality regulation as well as practice-oriented and sector-specific exemplifications and analyses. Contemporary legal research seeks to analyze what players and/or what forces are regulating the quality of information, by what means, for what purposes, and with what effects. As far as regulation by law is concerned, the following issues are at the top of today’s research agenda:

  • Need for legal intervention: Initially, the question arises whether there is need for regulation at all, since legal interventions into social processes in general and content-related regulation of information and communication processes in particular require compelling factual justification and legitimation (e.g. in cases of market failures due to external effects or asymmetric information).
  • Modalities of regulation: Different legal strategies and techniques can be used to regulate information quality, for instance direct or indirect modes of regulation, substantive provisions or procedural approaches, ex ante or ex post regulation, minimal versus comprehensive regulation, rules or standards, etc.
  • Sources of normativity: One important strand of research explores in what manner the law values information quality and what the possible sources of normative criteria for law-based information quality assessments are. Increasingly, the law derives normative criteria for quality assessments from the economic system (e.g. efficiency, functionality), especially where information is regarded as a “product”.
  • Regulatory context: Some contributions have considered to what extent there is a need for a uniform information quality framework in law or whether, by contrast, sector-specific approaches to quality regulation are more adequate.
  • Limitations: One of the most important issues concerns the question of where the limitations of law in regulating information quality ought to be. Such limitations are necessary due to factual constraints (the context-sensitivity of information versus the generalizing nature of law) and fundamental values such as “free speech” and “freedom of thought.”
  • Effects of regulation: First experiences with legal attempts to regulate information quality suggest that the actual effects on information quality are not easily predictable. It turns out that information quality regulation by law also causes unwanted or at least unexpected effects.

Information law and regulatory approaches to information quality are still in their early stages. Most of the themes and questions outlined in this paragraph remain to be studied in greater detail and discussed from various perspectives, by integrating knowledge from different disciplines such as communication studies, information science, economics, and sociology.”
