You are viewing a read-only archive of the Blogs.Harvard network.

Archive for the 'Uncategorized' Category

The Last Post

Tuesday, September 1st, 2015

(or, I’m Moving My Blogging to Other Platforms.)

After a great run of six full years, I’ve decided to retire this blog. It worked well, but increasingly I find that most of the readership from my writing comes from my blogging at The Social Media Collective and occasionally at other venues like The Huffington Post and Wired.

Thanks so much for reading this. I’ll still be blogging and I hope that you’ll keep reading after I move things over there.

In the unlikely event that I launch any new standalone blogs I’ll be sure to alert you via my homepage.

Eco’s “How to Write a Thesis” in 15 Maxims

Tuesday, March 24th, 2015

(or, Thesis Advice, Click-Bait Style)

Italian semiotician and novelist Umberto Eco released How to Write a Thesis in 1977, well before his rise to international intellectual stardom. It has just been released in English for the first time by MIT Press. I’ve just read it.


I was thinking of assigning it in doctoral seminars, but I regret that a great deal of the book involves scholarly practices that are no longer relevant to anyone. For instance: Is it OK to insert an unnecessary footnote in the middle of your text so that your footnote numbering matches up correctly with what you’ve already typed? (Meaning: So you don’t have to re-type the entire manuscript. On a typewriter.)

It turns out that it is not OK to insert unnecessary footnotes.

And there’s a whole bunch of things about index card management, diacritical marks, and library union indices. And some stuff about the laurea.

However, even if I do not find the book relevant to assign as a whole, Eco’s great wit and strong opinions did lead me to compile the best quotes from the book. I present them to you here:

Eco’s 15 Maxims for PhD Students:

From How to Write a Thesis [1977/2015], selected by me. These are slightly paraphrased to make them work in a list. I hope you like them as much as I did.

  1. Academic humility is the knowledge that anyone can teach us something. Practice it.
  2. A thesis is like a chess game that requires a player to plan in advance all the moves he will make to checkmate his opponent.
  3. How long does it take to write a thesis? No longer than three years and no less than six months.
  4. Imagine that you have a week to take a 600-mile car trip. Even if you are on vacation, you will not leave your house and begin driving indiscriminately in a random direction. A provisional table of contents will function as your work plan.
  5. You must write a thesis that you are able to write.
  6. Your thesis exists to prove the hypothesis that you devised at the outset, not to show the breadth of your knowledge.
  7. What you should never do is quote from an indirect source pretending that you have read the original.
  8. Quote the object of your interpretive analysis with reasonable abundance.
  9. Use notes to pay your debts.
  10. You should not become so paranoid that you believe you have been plagiarized every time a professor or another student addresses a topic related to your thesis.
  11. If you read the great scientists or the great critics you will see that, with a few exceptions, they are quite clear and are not ashamed of explaining things well.
  12. You are not Proust. Do not write long sentences.
  13. The language of a thesis is a metalanguage, that is, a language that speaks of other languages. A psychiatrist who describes the mentally ill does not express himself in the manner of his patients.
  14. If you do not feel qualified, do not defend your thesis.
  15. Do not whine and be complex-ridden, because it is annoying.


The Google Algorithm as a Robotic Nose

Friday, January 16th, 2015

Algorithms, in the view of author Christopher Steiner, are poised to take over everything. Algorithms embedded in software are now everywhere: Netflix recommendations, credit scores, driving directions, stock trading, Google search, Facebook’s news feed, the TSA’s process for deciding who gets searched, the Home Depot prices you are quoted online, and so on. Just a few weeks ago, Ashkan Soltani, the new Chief Technologist of the FTC, said that algorithmic transparency is his central priority for the US government agency tasked with ensuring fairness in trade. Commentators worry that the rise of hidden algorithmic automation is leading to a problematic new “black box society.”

But supposing we do want “transparent” algorithms, how would we achieve that? Manfred Broy, writing in the context of software engineering, has said that one of the frustrations of working with software is that it is “almost intangible.” Even if we suddenly obtained the source code for anything we wanted (which is unlikely), it is usually not clear what the code is doing. How can we begin to have a meaningful conversation about the consequences of “an algorithm” without some broad, shared understanding of what it is and what it is doing?


(An advertising campaign.)

The answer, even among experts, is that we use metaphor, cartoons, diagrams, and abstraction. As a small beginning to tackling this problem of representing the algorithm, this week I have a new journal article out in the open access journal Media-N, titled “Seeing the Sort.” In it, I try for a critical consideration of how we represent algorithms visually. From flowcharts to cartoons, I go through examples of “algorithm public relations,” meaning both how algorithms are revealed to the public and also what spin the visualizers are trying for.

The most fun part of writing the piece was choosing the examples, which include The Algo-Rythmics (an effort to represent algorithms in dance), an algorithm represented as a 19th century grist mill, and this Google cartoon that represents its algorithm as a robotic nose that smells Web pages:

(The Google algorithm as a robotic nose that smells Web pages.)

Read the article:

Sandvig, Christian. (2015). Seeing the Sort: The Aesthetic and Industrial Defense of “The Algorithm.” Media-N. vol. 10, no. 1. (This was also cross-posted to the Social Media Collective.)

Think About New Media Algorithmically

Thursday, March 20th, 2014

(or: How to Explain Yourself to a General Audience of Sophomores)

I recently gave a guest lecture to the University of Michigan sophomore special topics course “22 Ways to Think About New Media.”  This is a course intended for students who have not yet declared a major, where each week a faculty member from a different discipline describes a “way” that they think about “New Media.”  One goal of this is “a richer appreciation of the liberal arts and sciences,” and so I was asked to consider my remarks in the context of questions like: “What is the place of your work in society? What kinds of questions do you ask? How, in short, do you think?”

Wow, that’s a tall order. Explain and defend your field — communication and information studies — to people who have never encountered it before. Tell (for example) an undergraduate interested in chemistry why they should care about your work. And say something interesting about new media. Well, I’ll give it a shot. Here’s a summary of my attempt.

I decided that the way I want people to think about New Media is “algorithmically.” I meant that as a one-word shorthand for “I am interested in algorithms,” or “I think about new media algorithms and try to understand their implications,” and not “I am an algorithm.” (*)

A central question in the study of communication is this one: How do communication and information systems and institutions organize and shape what we know and think? That is, there is a great amount of material that could be watched, read, and heard but of course we each only have time to experience a small fraction of the whole. While we have some freedom to choose what we experience, there are also processes in media systems that shape what music, movies, news, and even conversations we pay attention to. This shaping ultimately helps to determine our shared culture, and new media are now transforming these processes — and therefore our shared culture.

(For instance, Twitter’s algorithms currently think I should pay attention to #NCAAMarchMadness2014 [which is trending]. They tell me this is a recommendation “just for me” [see below]. In fact I hate sports, so perhaps Twitter hates me.)

I used the example of trashy pop bands – a student suggested One Direction – to illustrate this. There may be a large number of musicians with enough skill to form a trashy pop band, but only a few trashy pop bands are successful at any given time. Musical talent is far more widely distributed than attention to specific bands. Even a casual music listener will agree that talent does not necessarily determine popularity. So what does?

The same is true of more serious topics—consider news. There is enough serious news to fill many newspapers but somehow it comes to be that we hear about certain topics over and over again, while other topics are ignored. How is it that the same events might get more coverage at one moment but less at another moment? It does not seem to be about the “quality” of the news story or the importance of the events, taken in isolation. At this point I employed Ethan Zuckerman’s comparison of attention to Kim Kardashian vs. famine.

Google Trends: Interest in Kardashian vs. famine


Ultimately this shaping and organization of communication and information determines who we are as a collective, as a public, as a society. A central problem in the study of communication and information has been: how do communication and information systems and institutions shape our knowledge and attention?

This is a particularly interesting moment to consider this topic because, while this is a perennial research problem in the study of communication (cf. Gatekeeping Theory, Agenda-Setting Theory, Framing, Priming, Cultivation Theory, Theories of the Public Sphere, etc.), the new prevalence of attention sorting algorithms on the Internet is transforming the way that attention and knowledge are shaped. A useful phrase naming the overall phenomenon is “Algorithmic Culture,” coined by Alex Galloway.

Decades ago, decisions made by a few behind-the-scenes industry professionals like legendary music producer John Hammond would be instrumental in selecting and promoting specific media content (like the musical acts of Count Basie, Bob Dylan, and Aretha Franklin), and newspaper owners like Joseph Pulitzer decided what should be spread as news (such as color comic strips or crusading investigative reporting exposing government corruption).

They may or may not have done a good job, but it is interesting that today they do not wield power in the same way. Today on the Internet many decisions about media content and advertising are made by algorithms. An algorithm, or step-by-step procedure for accomplishing something, is typically a piece of computer software that uses some data about you to determine what you will watch, hear, or read. A simple algorithm might be “show the most recent thing any friend of mine has posted” — however most algorithms in use are much more complex.
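The “simple algorithm” described above can be written out in a few lines of Python. This is a hypothetical illustration of the feed rule, not any platform’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float  # seconds since some epoch; larger = more recent

def simple_feed(posts, my_friends):
    """The rule "show the most recent thing any friend of mine has
    posted": keep only friends' posts, then sort newest-first."""
    friend_posts = [p for p in posts if p.author in my_friends]
    return sorted(friend_posts, key=lambda p: p.timestamp, reverse=True)
```

Even this trivial sorting rule makes editorial choices (friends only, strict recency); the algorithms actually in use layer many more signals on top of it.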

Algorithms sort both content and advertising. Older media industries often promoted content quite broadly, but now the resulting decisions may be individualized to you, meaning that no two people might see the same Web page. Although algorithms are written by people, they often have effects that are hard for any single person to anticipate.

To introduce this topic, I suggested two online readings that are intended to be accessible to a general audience. They both consider how new media are now re-shaping the selection of content online by focusing on the idea of the algorithm. I decided to forward these two from The Atlantic:

(1.) “The Algorithm Economy: Inside the Formulas of Facebook and Amazon,” by Derek Thompson, 12 March 2014, The Atlantic

This very short blog post introduces the idea that algorithms (meaning, a repeatable step-by-step procedure for accomplishing something) now drive much of our experience with new media. It contrasts two major algorithms that most people are familiar with: (1) product recommendations (technically called item-to-item collaborative filtering) and (2) the Facebook news feed (called EdgeRank). A key point is that all algorithms are not equal — these two implementations of algorithmic sorting of content are quite different in their implications and effects.
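To give a rough sense of what item-to-item collaborative filtering means in practice, here is a toy sketch with invented purchase data. This is an illustration of the general idea (“customers who bought X also bought Y,” scored by cosine overlap of buyer sets), not Amazon’s actual implementation:

```python
import math
from collections import defaultdict

# Toy purchase history (hypothetical data): user -> items bought
purchases = {
    "alice": {"book", "lamp"},
    "bob":   {"book", "lamp", "mug"},
    "carol": {"book", "mug"},
}

def item_similarity(purchases):
    """Cosine similarity between items, based on who bought them."""
    buyers = defaultdict(set)  # invert to item -> set of buyers
    for user, items in purchases.items():
        for item in items:
            buyers[item].add(user)
    sim = {}
    for i in buyers:
        for j in buyers:
            if i != j:
                overlap = len(buyers[i] & buyers[j])
                sim[(i, j)] = overlap / math.sqrt(len(buyers[i]) * len(buyers[j]))
    return sim

def also_bought(item, sim, k=2):
    """Top-k 'customers who bought this also bought...' recommendations."""
    ranked = sorted(((s, j) for (i, j), s in sim.items() if i == item), reverse=True)
    return [j for s, j in ranked[:k]]
```

Note the key point from the article: this recommender reasons about relationships between items, while a news feed ranker like EdgeRank reasons about relationships between people, which is why the two have such different effects.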

(2.) “A Guide to the Digital Advertising Industry That’s Watching Your Every Click,” by Joseph Turow, 7 Feb 2012, The Atlantic

Most content on the Internet is available for free and supported by online advertising. This longer article is a book excerpt from the introduction of Turow’s book The Daily You. It introduces the new ways that the online advertising industry operates and describes the way that firms match customer data to online content and advertising. This article focuses on the data about audiences that must be gathered and analyzed in order to provide personalized advertising. It then raises the question of whether or not people know about this large-scale data collection about them and considers how they feel about it.

Optional extra: For a more in-depth treatment of the topic, see Tarleton Gillespie’s “The Relevance of Algorithms,” recently released in Media Technologies.

Okay, I’ll stop here for now. But in my next post, I’ll consider how to demonstrate the effects of algorithmic sorting in a simple and easy-to-understand way. Then I’ll tell you how the students reacted to all this.

(*) – Although some days I do feel like an algorithm.

Reddit, Mathematically the Anti-Facebook (and other thoughts on algorithmic culture)

Wednesday, January 29th, 2014

(or, Are We Social Insects?)

I worried that my last blog post was too short and intellectually ineffectual. But given the positive feedback I’ve received, my true calling may be to write top ten lists of other people’s ideas, based on conferences I attend. So here is another list like that.

These are my notes from my attendance at “Algorithmic Culture,” an event in the University of Michigan’s Digital Currents program. It featured a lecture by the amazing Ted Striphas. These notes also reflect discussion after the talk that included Megan Sapnar Ankerson, Mark Ackerman, John Cheney-Lippold and other people I didn’t write down.

Ted has made his work on historicizing the emergence of an “algorithmic culture” (Alex Galloway‘s term) available widely already, so my role here is really just to point at it and say: “Look!” (Then applaud.)

If you’re not familiar with this general topic area (“algorithmic culture”) see Tarleton Gillespie’s recent introduction The Relevance of Algorithms and then maybe my own writing posse’s Re-Centering the Algorithm. OK here we go:

Eight Questions About Algorithms and Culture

  1. Are algorithms centralizing? Algorithms, born from ideas of decentralized control and cybernetics, were once seen as basically anti-hierarchical. Fifty years ago we searched for algorithms in nature and found them decentralized — today engineers write them and we find them centralizing.
  2. OR, are algorithms fundamentally democratic? Even if Google and Facebook have centralized the logic, they claim “democracy!” because we provide the data. YouTube has no need of kings. The LOLcats and fail videos are there by our collective will.
  3. Many of today’s ideas about algorithms and culture can be traced to earlier ideas about social insects. Entomologists once argued that termites “failed to evolve” because their algorithms, based on biology, were too inflexible. How do our algorithms work? Too inflexible? (And does this mean we are social insects?)
  4. The specific word “algorithm” is a recent phenomenon, but the idea behind it is not new. (Consider: plan, recipe, procedure, script, program, function, …) But do we think about these ideas differently now? If so, maybe it is who looks at them and where they look. In early algorithmic thinking people were the logic and housed the procedure. Now computers house the procedure and people are the operands.
  5. Can “algorithmic culture” be countercultural? Fred Turner and John Markoff have traced the links between the counterculture and computing. Striphas argued that counterculture-like influences on what would become modern computing came much earlier than the 60s: consider the influence of WWII and The Holocaust. For example, Talcott Parsons saw culture through the lens of anti-authoritarianism. He also saw culture as the opposite of state power. Is culture fundamentally anti-state? This also leads me to ask: Is everything always actually about Hitler in the end?
  6. Today, the computer science definition of “algorithm” is similar to anthropologist Clifford Geertz’s definition of culture in 1970s — that is, a recipe, plan, etc. Why is this? Is this significant?
  7. Is Reddit the conceptual anti-Facebook? Reddit publicly discloses the algorithm that it uses to sort itself. There have been calls for Facebook algorithm transparency on normative grounds. What are the consequences of Reddit’s disclosure, if any? As Reddit’s algorithm is not driven by Facebook’s business model, does that mean these two social media platform sorting algorithms are mathematically (or more properly, procedurally) opposed?
  8. Are algorithms fundamentally about homeostasis? (That’s the idea, prevalent in cybernetics and 1950s social science, that the systems being described are stable.) In other words, when algorithms are used today is there an implicit drive toward stability, equilibrium, or some other similar implied goal or similar standard of beauty for a system?
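On question 7: Reddit’s “hot” ranking really is simple enough to restate in a few lines. The sketch below is a close paraphrase of the function Reddit has open-sourced (the constants are from that public code; the exact details may vary across versions):

```python
from datetime import datetime, timezone
from math import log10

# The epoch used in Reddit's open-sourced "hot" ranking code
REDDIT_EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups, downs, date):
    """Reddit's published 'hot' score: votes count logarithmically
    (the 11th upvote matters less than the 2nd), recency linearly."""
    s = ups - downs
    order = log10(max(abs(s), 1))
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (date - REDDIT_EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Because score enters through a logarithm while time enters linearly, a tenfold increase in net votes buys the same boost as being 12.5 hours newer; content churns no matter how the votes pile up, which is part of why the disclosure has been uncontroversial for Reddit’s business.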

Whew, I’m done. What a great event!

I’m skeptical about that last point (algorithms = homeostasis) but the question reminds me of “The Use and Abuse of Vegetational Concepts,” part 2 of the 2011 BBC documentary/insane-music-video by Adam Curtis titled All Watched Over by Machines of Loving Grace. It is a favorite of mine. Although I think many of the implied claims are not true, it’s worth watching for the soundtrack and jump cuts alone.

It’s all about cybernetics and homeostasis. I’ll conclude with it… “THIS IS A STORY ABOUT THE RISE OF THE MACHINES”:

All Watched Over By Machines of Loving Grace 2 from SACPOP on Vimeo.


Some of us also had an interesting side conversation about what job would be the “least algorithmic.” Presumably something that was not repeatable — it differs each time it is performed. Some form of performance art? This conversation led us to think that everything is actually algorithmic.
