
Archive for March, 2010

Does the FCC Want Your Internet Slow?

Monday, March 22nd, 2010

(or: Reaction to the National Broadband Plan)

Dan Schiller and I have another co-authored piece that just went up in the Technology section of the Huffington Post, titled “Does the FCC Want Our Internet Slow?”  This is part of our ongoing collaboration to write short pieces about current issues in media and technology.

A behind-the-scenes scoop just for you: I actually advocated for a title that included the word “sucks” but Dan reined me in on that.  I think he was right to do it.  I worried for a while that there is something grammatically incorrect about the current title.  But I think it just sounds grammatically incorrect.

Why the Internet is on the verge of blowing up all of our methods courses

Saturday, March 20th, 2010

(or: Methodologists, atone!)

By far my favorite book on research methods, Unobtrusive Measures (first published in 1966), is a skeptical romp through social science in which the authors take the position that most of what we call social science is wrong.  The theme of the book is that research is likely wrong because research design is very difficult and researchers too easily substitute received wisdom and procedure for hard thinking about designing studies, experiments, measures, tests, and so on.  Scientific conduct has a rote character that extensive training and preparation (e.g., making you get a Ph.D.) can reinforce.  Peer review and the tenure system can be engines of conservatism.

So you perform a survey in which you ask a particular question of a particular group not because it means something as evidence or because it is a particularly good idea.  You do it because your advisor did it that way, or because someone else (cite, year) did it that way and it is therefore respectable. And if someone did it before, it’s comparable.  This is perfectly reasonable.  It’s likely you are interested in a particular problem, but not really in the methods or statistics relevant to tests related to that problem, so you offload all of the thinking about statistics by performing the methods and statistics that everyone else does.  It’s efficient.

Yet when you stop and actually think about the intricacies of any particular research design, it gets ugly.  Einstein said, “Theory is something nobody believes, except the person who made it. An experiment is something everybody believes, except the person who made it.”  For decades (since even before Webb in 1966), various cranky types have been alarmed at the misuse of quantitative research.

My own struggles with the topic led me to design a graduate course called Unorthodox Research Methods.  The premise is this:  Most research courses teach procedure, but we need to train our students to think about research design and evidence first and we are not doing a good job of that.  (I’m revising the syllabus for this course and so I’m thinking about these issues again, hence this post.)  The Internet is making us rethink many of our research methods, and Webb’s 1966 critique has never been more apt.

A blog post is only big enough for one example, so here’s a big one:  A huge pitfall in our procedure-based methods education is the use of statistical significance.  Even non-quants are familiar with those nagging asterisks that appear after all sorts of columns in all sorts of journal articles across the social sciences.  Statistical significance is the end of conversation about method in many research projects.  Once p < .05, you pack up your kit and go home. Why do you test significance this way?  Because it’s a step in your list of steps. I think it is fair to say that most researchers have internalized this approach despite the fact that it is totally wrong and the statistics literature has railed against it for decades.

Just so we are clear: statistical significance is often useless — it’s not even a hint toward the right answer for your research project in many situations.  Luckily for the truth, the rise of the Internet is about to cause this test to blow up in our face.  We have taught statistics so badly in the social sciences that most researchers do not appear to realize that the test of significance is about sampling.  (Bam!)  It is a test that helps you figure out if you are being excessively skeptical because of the small size of the sample that you’ve got.  And our samples are now changing.
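To make the sampling point concrete, here is a minimal sketch in Python (the function, the effect size of 0.01, and the sample sizes are all my own invented illustration, not from any particular study): the standard error of an estimate shrinks as 1/√n, so a fixed, substantively trivial effect sails past p &lt; .05 once the sample gets big enough.

```python
import math

def two_sided_p(effect, sd, n):
    """p-value for a true mean difference `effect`, noise `sd`, sample size n,
    using the normal approximation: z = effect / (sd / sqrt(n))."""
    se = sd / math.sqrt(n)          # the standard error shrinks as 1/sqrt(n)
    z = abs(effect) / se
    # two-sided p from the standard normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same trivially small effect (1/100 of a standard deviation) at three n's:
for n in (60, 10_000, 1_000_000):
    print(f"n={n:>9,}  p={two_sided_p(0.01, 1.0, n):.4f}")
```

The effect never changes; only n does. At n = 60 the p-value is huge, and at n = 1,000,000 it is essentially zero, even though 0.01 standard deviations is a difference nobody should care about.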

Thanks to the Internet and our ongoing revolution in computing we are entering the era that the UK calls e-Social Science, and here in the US we call Computational Social Science.  Fast processors. Big iron. Big datasets. Many variables.

Data from the cloud now potentially lets us test all kinds of social science questions (particularly if you are interested in human communication) that before would by necessity have sat in a small-sample questionnaire. As social scientists turn toward “big data,” they are going to trip over their bad habit of significance testing.  The fact is, most methods courses and research procedures in wide use are obsessed with errors caused by sampling, especially small sample sizes. (Bam!) But as a sea of digital data opens up to the horizon, our problems are increasingly about specification error and not sample size, just as measures are increasingly unobtrusive and not self-reports.
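Specification error, unlike sampling error, does not wash out as the data grows. A quick illustrative simulation (every number here is invented for the demonstration): omit a confounder from a regression and the slope estimate stays biased no matter how large n gets. A bigger sample just pins down the wrong answer more precisely.

```python
import random
import statistics

def ols_slope(xs, ys):
    """Slope from regressing ys on xs alone: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
for n in (100, 100_000):
    z = [random.gauss(0, 1) for _ in range(n)]       # the omitted variable
    x = [zi + random.gauss(0, 1) for zi in z]        # x is correlated with z
    y = [1.0 * xi + 1.0 * zi + random.gauss(0, 1)    # true effect of x is 1.0
         for xi, zi in zip(x, z)]
    print(f"n={n:>7,}  estimated slope={ols_slope(x, y):.2f}")
```

Because the omitted z is correlated with x, the estimate converges to about 1.5 rather than the true 1.0. No amount of additional data fixes a misspecified model; that is the problem big datasets push to the foreground.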

Remember, statistical significance is about sampling.  “Except in the limiting case of literally zero correlation, if the sample were large enough all of the coefficients would be significantly different from everything.” (McCloskey, p. 202).  Take your study of communication patterns from 60 paper-and-pencil questionnaires and replicate it with a random sample of a million Facebook accounts (if you can get access… see this editorial).  You’ll find that statistical significance — particularly at the arbitrary point of p < .05 — tells you zip.
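Here is a back-of-the-envelope simulation of that replication (the built-in correlation of 0.01 and both sample sizes are numbers I made up for illustration): the same near-zero relationship that looks like noise in 60 questionnaires comes out “significant” in a million records, while the correlation itself stays trivially small.

```python
import math
import random

def pearson_r_p(xs, ys):
    """Pearson r and a two-sided p-value from the Fisher z transform
    (a normal approximation, adequate at these sample sizes)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    r = sxy / math.sqrt(sxx * syy)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return r, 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fake_sample(n, true_r=0.01):
    """Simulated pairs with a tiny built-in correlation of about true_r."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [true_r * x + random.gauss(0, 1) for x in xs]
    return xs, ys

random.seed(42)
for n in (60, 1_000_000):
    r, p = pearson_r_p(*fake_sample(n))
    print(f"n={n:>9,}  r={r:+.3f}  p={p:.4f}")
```

With 60 cases the p-value will typically be large; with a million it is effectively zero, yet r is still about 0.01, a relationship of no substantive interest whatsoever.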


I think most of the solution is to de-emphasize procedure, as social science procedure is becoming much more volatile as information technology improves.  We need to get people to understand that research design is a creative act, not the boring part of the research process.  Students need to write new procedures, not memorize old ones.  To that end, we need classes about evidence and research design.  Figuring out how to do that is a challenge but we’ve got to step up to it.  (If you’ve got ideas for revising the syllabus for my last attempt, send me an email or a comment.)

Chant it with me:  Statistical significance does not equal substantive significance.  Please chant it with me.  This is something we ought to know already but it may take the big datasets of the Internet to teach us.  What other lessons are in store?

Read more:

Deirdre McCloskey: “Rhetoric Within the Citadel: Statistics” and “Why Economic Historians Should Stop Relying on Statistical Tests of Significance”

John P. A. Ioannidis: “Why Most Published Research Findings Are False”

Jonathan A. C. Sterne and George Davey Smith: “Sifting the Evidence: What’s Wrong with Significance Tests?”

Is YouTube the successor to Television — or to LIFE Magazine?

Saturday, March 13th, 2010

(or: Now blogging HuffPost too.)

I’m now blogging at the Huffington Post.

Dan Schiller and I plan to co-author a series of short pieces about media and technology.  What an honor and a privilege it is to be writing with Dan!  I’m thrilled that our first piece is up, titled: Is YouTube the successor to Television — or to LIFE Magazine?

Really the best thing about co-authoring is that they scaled and cropped our pictures so that our heads appear to be very different sizes.  Now I have no doubt that Dan’s brain is much more effective than mine, but I haven’t noticed that my own head is so small.

[Photos: Christian Sandvig, Dan Schiller]

(Microcephaly. [left])

We look pretty weird next to each other.   But the Huffington Post isn’t about aesthetics, in case you haven’t noticed.

Zen Koans of Modern Warfare 2

Wednesday, March 3rd, 2010

(or: Not the Wind, Not the Flag)

[Thanks to the emails generated by my previous Modern Warfare 2 rant, I’ll revisit the game.  Here are five Modern Warfare 2 multiplayer koans vaguely in the style of The Gateless Gate.  Mumon was a Chinese Zen master (1183–1260).  I did not write verses because I am lazy.]

1. The infinite chain of FFA

While playing free-for-all (FFA) on a small map, stop stalking someone and turn around suddenly.  You will see that someone has been stalking you, and unknown to him, behind his back you see someone stalking him in turn.  But maybe it doesn’t stop there?   FFA on a small map is a chain of soldiers arranged in a circle.  Everyone stalks the person in front of them who is facing the other way.  They spawn, stalk, shoot and are killed from behind … then it repeats until the round ends at the score limit.  This is what you make possible as you jab at “X” to respawn as fast as you can.  You are shooting yourself in the back and the other players are your instrument.

Mumon’s comment: It is a repetitive ritual: spawn, stalk, shoot, die.  But it is no more compulsive than Farmville.

2. The exclamation

When something unusual happens while you are playing with strangers (mercenary or FFA), an exceptional kill will cause your enemy to exclaim out loud; they have forgotten that their headset is on.  Just as you press the trigger you hear an “aawwww!” or, if it is something weird, more of a surprised “oh!”   It’s a sound that you forced out of them.  Maybe if the Xbox headsets were more sensitive you would hear this more often.  Maybe you could hear the sharp intakes of breath that you cause.

Mumon’s comment:  That is the sound of a stranger breathing for you.  Mostly it is swearing.

3. The dance of equals

In multiplayer you will discover your perfect match.  You will come upon each other in an open courtyard.  Each of you will empty your clip, firing at short range, while you strafe and dodge this way and that.  As the last rounds are fired and you both start to reload, no one has been hit!  You both switch to knives and leap backward and forward.  As the sweep of the knives finishes this strange little dance, no one has been hit!  This might even produce a momentary pause. Or even peace. Or was it lag?

Mumon’s comment: Then you will both be killed by a Predator missile from the sky.  The sky is always the victor.

4. The opposite of lag

You will see the crosshairs perfectly centered over your opponent.  You will fire.  You see the report and feel the recoil, but you are the one who has died, even though you know in your heart that you fired first.  When it happens you will blame the lag and cry out at this injustice.

Yet there will be other times.  Other times when you achieve an uncanny fluency.  You can do no wrong as you score point after point.  Every bullet of yours finds its target.  This, you think to yourself, is skill.  I am unstoppable.

Mumon’s comment: Skill is the opposite of lag.  Is skill the opposite of lag?

5. The Care Package of Enlightenment

Capt. John “Soap” McTavish asked: “When I call down a care package, am I enlightened?”

Capt. John “Soap” McTavish asked: “When I call down an emergency airdrop, am I enlightened further?”

Mumon’s comment: When you defend and retrieve every one of the crates from an emergency airdrop–even on a busy map filled with enemies–you are still 108,000 miles from a good killstreak reward.  At least that’s usually my luck.

Capt. John “Soap” McTavish said: “Tango down.”

Appendix: Mumon’s Zen Warnings:

The Model 1887 is the false Zen.  It is overpowered and there is no honor in it.  To rely on the Claymore is to tie yourself without a rope. If you camp in the back of the plane on Terminal you go astray from the essence.  If you camp in the front, you oppose the principle.  If you neither camp in the back nor the front, you are a dead man breathing.  Tell me now, what will you do?
