Freakonomics: What Went Right? Responding to Wrong-Headed Attacks

Warning: what follows is a horribly long, inside-baseball post that most people will likely have little interest in reading, and which I had little interest in writing. But it did need to be written. Apologies for the length and the indulgence; we will soon return to our regular programming.

*     *     * 

I. Going on the attack is generally more fun, profitable, and attention-getting than playing defense. Politicians know this; athletes know it; even academics know it. Or perhaps I should say that especially academics know it?

Given the nature of the Freakonomics work that Steve Levitt and I do, we get our fair share of critiques. Some are ideological or political; others are emotional.

We generally look over such critiques to see if they contain worthwhile feedback, or point to an error in need of correction. But for the most part, we tend to not reply to critiques. It seems only fair to let critics have their say (as writers, we’ve already had ours). Furthermore, spending one’s time responding to wayward attacks is the kind of chore you’d rather skip in order to get on with your work.

But occasionally an attack is so spectacularly ridiculous, so riddled with errors and mangled logic, that it’s worth addressing.

The following essay responds to two such attacks. The first one was relatively minor, a recent blog post written by a Yale professor. The second was more substantial, an essay by a pair of statisticians in American Scientist. Feel free to skip ahead to that one (at section III below), or buckle up for the whole bumpy ride.

*     *     *

II. On Jan. 27, an assistant professor of political science and economics at Yale named Chris Blattman published on his blog a post called “Do the big newspaper blogs plagiarize?” It began:

I regularly read at least two big blogs run by newspapers — Freakonomics at the NY Times and Ideas Market at WSJ. They find a wonderful sampling of things across the web.

What’s interesting: they seldom say where they find their material. The bloggy custom of hat tipping is nearly absent. Once in a while Freakonomics gives a blog hat tip, but (oddly) they never actually hyperlink. …

Impolite? Yes. Nefarious? Possibly. Plagiarizing? I’d ding my students if they did this so regularly and egregiously.

At first, I thought the post was a joke. In fact, I only knew Blattman’s name via numerous posts and links on our blog. So I looked up Blattman’s e-mail address and sent him a note with the subject line “was that a joke?”:

the thing you wrote about the freakonomics blog, I mean. fwiw, we haven’t been on NYT for a year (and even when we were, we were hardly “run by” the paper)

but I’d argue we are borderline zealous about attribution, linking, and hat-tipping. I’d invite you to actually read our last 100 posts, or last 1,000, and tell me if you really believe in what you just wrote. especially compared to the general esthetic of many blogs which routinely reprint entire articles and hijack photos with neither attribution nor payment.

Blattman promptly replied:

Hi Stephen

Well, I could be mistaken, but to use myself as an example (and this is not a self-interested search for blog traffic – especially since my post might end up ending traffic from the blogs I discussed), I can say that a quick search done right now shows a h/t to me once in each of December and November of 2011, but not a hyperlink. My point was simply that this practice is uncommon and, to some, impolite.

Actually, what drew my attention was a switch, perhaps a year or two back. One of the  first times I was linked from Freakonomics, there was a hyperlink, which I noticed only because of the surge in traffic. But I seem to recall that hyperlink was actually removed later that day, and then later mentions didn’t have the hyperlink, just the name. This is what made me think it was more of a deliberate policy.

Now it was my turn to reply:


I’m sorry but I think you are wrong on just about every front here. 

Here’s what you wrote in your post:

+ Your headline is “Do the big newspaper blogs plagiarize?” First of all, as I wrote in my first e-mail, we are not “a big newspaper blog.” Second of all: do you know what plagiarizing is? Is that what you’re charging us with? Even I don’t think so but for some reason, that’s the word you chose in your headline, so … what say you?

+ You then wrote: “What’s interesting: they seldom say where they find their material. The bloggy custom of hat tipping is nearly absent. Once in a while Freakonomics gives a blog hat tip, but (oddly) they never actually hyperlink.” Hard to believe, but it appears that just about everything in this paragraph is wrong. We “seldom say where” we “find our material”? Please, go look at any selection of Freakonomics posts and tell me that this is even remotely true. Or just search for “blattman” and look at the most recent posts. There are links; there are explanations; there are excerpts; and there are hat tips. In fact, I’d argue that our blog adheres more to the rules of journalistic attribution than a) the vast majority of blogs; and b) a lot of mainstream news sources. Furthermore, you write that “the bloggy custom of hat tipping is nearly absent.” “Nearly absent”? Are you serious? You do know that “HT” stands for “Hat tip,” right? Again, please look at our posts and tell me we don’t hat tip — including to you! Maybe you are unhappy that our hat tips sometimes aren’t linked to the blog of the tipper? I agree it’s probably better to do so than not do so, and I’m sure we are very inconsistent, but the tone and content of your post imply a far more serious set of charges that I would argue are at least 99% wrong.

Furthermore, to charge someone in a headline with plagiarizing, and then to defend that post in a followup e-mail with “Well, I could be mistaken, but to use myself as an example …” and “I seem to recall that hyperlink was actually removed” strikes me as an astronomically weak argument. We don’t routinely remove links, or even edit posts unless an error is found, because we believe in the established rules of journalism and publishing, and those rules generally forbid messing with things once they’ve been published unless explaining to the reader why it’s been done. So as for a “deliberate policy” — well, our deliberate policy is to conduct ourselves with a considered appreciation for where ideas come from, to describe them well and accurately, and to expect others to do the same. Why you decided to single us out for behavior that we don’t practice is beyond me but suffice it to say I’ll be happy to not hat tip you in the future, or link to you, or ever mention your name. 



Blattman’s response:

Hi Stephen

The continued association to the Times was my mistake, which I’ll correct. The plagiarization charge was a cheap shot in your instance, and I will fix this as well. I think it applies more to the ideas market blog than Freakonomics, because you are right in that you hat tip. Nonetheless, as I mentioned in the post and my last email, what seemed unusual to me about the Freakonomics blog in particular is that it seldom hyperlinks its HTs. This doesn’t mean you deserve to get lumped with those that are more egregious, but the practice might be something to reconsider. In any case, I’m sorry this escalated. It’s my fault, for taking the charge too far, and a product of hasty and sometimes thoughtless blogging.


Blattman had graciously apologized and promised to amend his post but I was still worked up, I must admit, at how wrong and facile and flip-floppy this argument was, especially coming from a professor at Yale. Perhaps I am naïve about modern standards in the academic community? Is it okay to toss off a false and inflammatory charge and, if you happen to get caught, mumble an apology and chalk it up to haste?

So I wrote back:

explanation and apology accepted and we will try to do better at creating *linked hat tips*. am still a bit astonished, however, that the issue of *linked hat tips* could provoke such a broadly erroneous charge, especially from an academic. oh well. thnx, sjd

Soon after, Blattman published another post on his blog, headlined “More on yesterday’s cheap shot @freakonomics and @WSJIdeasMarket.” He writes:

First, lest anyone mistake this blog for a quality news and analysis outlet, let me remind everyone I blog hurriedly in my nearly non-existent spare time, and do not think much before I write. For if I did, there would not be a blog post every day.

My first thought was this: it is of zero interest to me whether a man named Chris Blattman is able to produce a blog post every day – unless or until his daily quota results in a false accusation against me.

I was also surprised to see a Yale professor admit that he doesn’t “think much before I write,” especially when he’s writing about the writing habits of people who do.

Here’s more:

Nonetheless, there is thoughtless and then there is reckless. Sometimes I am the latter. …

Why spend more blog space on such frivolous things? No good reason. On this occasion, I started it and I should fess up when I overstate myself, or falsely accuse.

Also, I have an overdeveloped sense of justice, which often pushes me in the right direction, but sometimes leads me along silly and fruitless paths, such as accosting strangers on New York City sidewalks for littering, or (more successfully) trying to bring order to Dubai airport lines when hundreds of people are jumping queues during a 4am rush.

I will admit: I still get a great sense of satisfaction from the memory of hundreds of people from as many nations meekly looking ashamed and falling back into line.

The arc of Blattman’s two posts strikes me as remarkable. He begins by wrongly accusing us of plagiarism and lesser sins, and writes that “I’d ding my students if they did this.” Then, when confronted with some facts, he freely admits his errors. But then, in his follow-up apology post, he explains that while he indeed may be guilty of having filed false charges, the fault can be traced to his acute moral sense. This is a man who travels the world cleaning up other people’s messes — at 4 a.m., no less!

My advice to people like Chris Blattman is simple: if you want to leave the world neater, try starting fewer messes yourself! Rather than shouting “plagiarism” in a crowded blogosphere, you could send an e-mail saying, “Hey Dubner, it sure would be nice if you linked to my blog every time you hat-tip me.”

But that would have deprived Blattman of content for a blog post. As I wrote above, the incentives to attack in public are strong, no matter how wrong-headed the attack may be. This is similar to the strong incentives that lead people to predict the future. Wrong predictions are usually forgotten and barely ever punished — but on the off chance that you do successfully predict a rare event, the bragging rights last forever.

*     *     *

III. The Jan.-Feb. 2012 issue of American Scientist includes an article headlined “Freakonomics: What Went Wrong?” It was written by a pair of statisticians named Andrew Gelman and Kaiser Fung. They damn us with a bit of faint praise, including this:

The word “freakonomics” has come to stand for a light-hearted and contrarian, yet rigorous and quantitative, way of looking at the world.

But make no mistake: Gelman-Fung come to bury, not to praise. Their central charge: 

In our analysis of the Freakonomics approach, we encountered a range of avoidable mistakes, from back-of-the-envelope analyses gone wrong to unexamined assumptions to an uncritical reliance on the work of Levitt’s friends and colleagues. 

I’ll give Gelman-Fung credit: they certainly spent more time on their attack than did Blattman. But it doesn’t seem to have helped much. Let’s look at the evidence. 

1. Their first example of a “mistake” concerns a May, 2005, Slate column we wrote about the economist Emily Oster’s research on the “missing women” phenomenon in Asia. Her paper, “Hepatitis B and the Case of the Missing Women,” was about to be published in the Aug. 2005 issue of the Journal of Political Economy. At the time, Levitt was the editor of JPE, and Oster’s paper had been favorably peer-reviewed.

Oster argued that women with Hepatitis B tend to give birth to many more boys than girls; therefore, a significant number of the approximately 100 million missing females might have been lost due to this virus rather than the previously argued explanations that included female infanticide and sex-selective mistreatment.

Other scholars, however, countered that Oster’s conclusion was faulty. Indeed, it turned out they were right, and she was wrong. Oster did what an academic (or anyone) should do when presented with a possible error: she investigated, considered the new evidence, and corrected her earlier argument. Her follow-up paper was called “Hepatitis B Does Not Explain Male-Biased Sex Ratios in China.”

Levitt subsequently wrote a Freakonomics blog post about the Oster affair, headlined “An Academic Does the Right Thing.” He detailed the error, the new data, etc.; he wrote:

I have great admiration for her doing this. I know a lot of people who wouldn’t have done the same thing. They wouldn’t have undertaken a study that could show their biggest result was wrong, and if they found a negative result, they would try to bury it.

Also, hats off to Justin Lahart at the Wall Street Journal who wrote this article on the topic. Here are the key papers.

What do Gelman-Fung make of this exchange? 

Monica Das Gupta is a World Bank researcher who, along with others in her field, has attributed the abnormally high ratio of boy-to-girl births in Asian countries to a preference for sons, which manifests in selective abortion and, possibly, infanticide. … In a follow-up blog post, Levitt applauded Oster for bravery in admitting her mistake, but he never credited Das Gupta for her superior work. Our point is not that Das Gupta had to be right and Oster wrong, but that Levitt and Dubner, in their celebration of economics and economists, suspended their critical thinking.

In other words, Gelman-Fung are distressed that, in a blog post that Steve Levitt wrote about Emily Oster’s admission of error, he did not specifically name one of Oster’s critics — even though, in that post, Levitt linked both to the “key papers” on the topic and to the Wall Street Journal article about the affair, both of which spelled out Das Gupta’s involvement.

Seriously? This amounts to a “suspension of critical thinking”? I’m sorry, but I can’t give Gelman-Fung any points for this one.


2. Gelman-Fung take issue with a column we wrote in the New York Times in 2006 called “A Star Is Made.” It concerned the research of K. Anders Ericsson, a psychologist at Florida State University whom we’ve written about several times. The column argued that “the trait we commonly call talent is highly overrated.” Here are Gelman-Fung:

It begins with the startling observation that elite soccer players in Europe are much more likely to be born in the first three months of the year. The theory: Since youth soccer leagues are organized into age groups with a cutoff birth date of December 31, coaches naturally favor the older kids within each age group, who have had more playing time. So far, so good. But this leads to an eye-catching piece of wisdom: The fact that so many World Cup players have early birthdays, [Dubner and Levitt] write,

may be bad news if you are a rabid soccer mom or dad whose child was born in the wrong month. But keep practicing: a child conceived on this Sunday in early May would probably be born by next February, giving you a considerably better chance of watching the 2030 World Cup from the family section.

Perhaps readers are not meant to take these statements seriously. But when we do, we find that they violate some basic statistical concepts. Despite its implied statistical significance, the size of the birthday effect is very small.

The authors acknowledge as much three years later when they revisit the subject in SuperFreakonomics. They consider the chances that a boy in the United States will make baseball’s major leagues, noting that July 31 is the cutoff birth date for most U.S. youth leagues and that a boy born in the United States in August has better chances than one born in July. But, they go on to mention, being born male is “infinitely more important than timing an August delivery date.” What’s more, having a major-league player as a father makes a boy “eight hundred times more likely to play in the majors than a random boy,” they write.

So here’s what we seem to have done: made an eye-catching claim about the birthday effect in a 2006 magazine column, then put the small size of that effect in fuller context in a 2009 book.

I fail to see an error here other than our inability to write in a 2006 magazine column what we were able to write in a 2009 book. Do you?


3. Gelman-Fung take issue with what we’ve written about the perils of drunk walking:

In SuperFreakonomics, Levitt and Dubner use a back-of-the-envelope calculation to make the contrarian claim that driving drunk is safer than walking drunk, an oversimplified argument that was picked apart by bloggers. The problem with this argument, and others like it, lies in the assumption that the driver and the walker are the same type of person, making the same kinds of choices, except for their choice of transportation. Such all-else-equal thinking is a common statistical fallacy. In fact, driver and walker are likely to differ in many ways other than their mode of travel. What seem like natural calculations are stymied by the impracticality, in real life, of changing one variable while leaving all other variables constant.

There is some validity to this criticism. We tried to make clear in the book, and in a subsequent Freakonomics Radio segment, that we had to make certain assumptions in this analysis. While there is a lot of good data on drunk driving (and driving in general), there is much less on walking, and especially drunk walking.

So, as Gelman-Fung rightly note, there is no way to know if, for instance, “the driver and walker are the same type of person.”

There’s also the fact that a drunk walker is likely to travel a much shorter distance than a drunk driver. (That’s why we offered a per-mile analysis rather than a time-based analysis.) Most important, we made clear that a drunk driver poses a danger to other people while that is much less true of a drunk walker (although wandering into a roadway while drunk can certainly pose a danger to others). 

Gelman-Fung write that our argument was “picked apart by bloggers.” Their American Scientist article includes only a cursory bibliography and no footnotes or endnotes, nor do Gelman-Fung cite any specific sources in this case, so it’s unclear who those bloggers were and what they picked apart.

That said, I agree we should have done a better job spelling out these assumptions and caveats. But to me the big picture is clear. Even though we don’t know much about the overlap between drunk drivers and drunk walkers, and even though it’s obvious that drunk walkers travel shorter distances than drunk drivers, the raw numbers are compelling.

So, while our methodology is hardly foolproof, I’d hope that most people would appreciate the baseline argument here: drunk walking is a dangerous activity that has been largely overlooked and, therefore, was worth writing about.

For what it’s worth, those are two of the key criteria that go into determining what Levitt and I write: overlooked and worth writing about.

Furthermore, it should be said: although we have identified drunk walking as a real danger, we have repeatedly made clear that we do not in any way encourage drunk driving. Still, some people feel that talking about drunk walking misses a larger problem. For instance, our recent radio piece on drunk walking provoked some interesting pushback from pro-bicycle and anti-car quarters, who feel that pedestrians, drunk or otherwise, are the innocent bystanders of a car-mad society.


4. Gelman-Fung write about a section in SuperFreakonomics describing an effort to identify potential terrorists via U.K. banking data. Levitt did this analysis in collaboration with a U.K. bank-fraud expert whom we call Ian Horsley (his real identity had to be protected).

This portion of Gelman-Fung’s essay is so error-ridden and deprived of logic that it’s hard to decipher. So let me back up and explain what they’re actually writing about.

The SuperFreakonomics section in question begins with a discussion of how forensic analysis of this sort is particularly challenging when you’re dealing with a relatively small amount of wrongdoing within a large population. As we write:

When data have been used in the past to identify wrongdoing — like the cheating schoolteachers and collusive sumo wrestlers we wrote about in Freakonomics — there was a relatively high prevalence of fraud among a targeted population. But in this case, the population was gigantic (Horsley’s bank alone had many millions of customers) while the number of potential terrorists was very small.

We then discuss the sad fact that even if you could create an algorithm that identified potential terrorists at a 99 percent accuracy rate, this still wouldn’t be acceptable: 

We’ll assume the United Kingdom has 500 terrorists. The algorithm would correctly identify 495 of them, or 99 percent. But there are roughly 50 million adults in the United Kingdom who have nothing to do with terrorism, and the algorithm would also wrongly identify 1 percent of them, or 500,000 people. At the end of the day, this wonderful, 99-percent-accurate algorithm spits out too many false positives — half a million people who would be rightly indignant when they were hauled in by the authorities on suspicion of terrorism.

Nor, of course, could the authorities handle the workload. 
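The false-positive arithmetic in that passage is easy to reproduce. Here is a minimal sketch (in Python, which is my illustration, not anything from the book), using only the passage’s own assumptions: 500 terrorists, roughly 50 million uninvolved adults, and an algorithm that is 99 percent accurate in both directions:

```python
# Base-rate arithmetic for the hypothetical 99%-accurate algorithm.
# Integer math keeps the counts exact.
terrorists = 500             # the passage's assumed number of U.K. terrorists
innocents = 50_000_000       # adults with no connection to terrorism

true_positives = terrorists - terrorists // 100   # 99% of 500 -> 495 caught
false_positives = innocents // 100                # 1% of 50M -> 500,000 wrongly flagged

flagged = true_positives + false_positives        # 500,495 people hauled in
precision = true_positives / flagged              # share of flagged who are guilty

print(true_positives, false_positives, flagged)   # 495 500000 500495
print(round(precision, 5))                        # 0.00099
```

The last number is the striking one: even at 99 percent accuracy, fewer than one in a thousand flagged people would actually be a terrorist, because the innocent population is 100,000 times larger than the guilty one.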

This is a common problem in health care. A review of a recent cancer-screening trial showed that 50 percent of the 68,000 participants got at least 1 false-positive result after undergoing 14 tests. …
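The cancer-screening figure follows the same compounding logic. As a rough illustration (assuming, unrealistically but simply, that the 14 tests err independently), a per-test false-positive rate of under 5 percent is already enough to produce the quoted 50 percent:

```python
# Compounding of false positives over repeated screening tests.
# Illustration only: assumes the 14 tests err independently, which real
# screening programs generally do not.

def prob_at_least_one_fp(per_test_rate, n_tests):
    """Chance of at least one false positive across n independent tests."""
    return 1 - (1 - per_test_rate) ** n_tests

# Per-test false-positive rate that would yield the quoted 50% over 14 tests:
p = 1 - 0.5 ** (1 / 14)
print(round(p, 3))                            # ~0.048: under 5% per test
print(round(prob_at_least_one_fp(p, 14), 2))  # 0.5
```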

We then describe how Horsley and Levitt created a smaller, tighter algorithm, built on a variety of metrics concerning the banking habits of customers of Horsley’s own bank, which wound up having significant predictive power:

Starting with a database of millions of bank customers, Horsley was able to generate a list of about 30 highly suspicious individuals. According to his rather conservative estimate, at least 5 of those 30 are almost certainly involved in terrorist activities. Five out of 30 isn’t perfect — the algorithm misses many terrorists and still falsely identifies some innocents — but it sure beats 495 out of 500,495.

Maybe you think that identifying only five terrorists out of a potential 500 isn’t worthwhile. But keep in mind these data were drawn solely from Horsley’s own bank. The idea was to create an algorithmic model that could be shared with other banks and institutions to ultimately cast a wider net.
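One way to see the tradeoff the book is describing is to compare the two algorithms on precision (what share of flagged people are actually terrorists) and recall (what share of terrorists get flagged). A rough sketch, treating Horsley’s conservative “5 of 30” estimate as exact for the sake of arithmetic:

```python
# Precision/recall comparison of the straw-man algorithm vs. Horsley's.

def precision_recall(flagged, true_hits, total_terrorists):
    """Precision: guilty share of the flagged. Recall: flagged share of the guilty."""
    return true_hits / flagged, true_hits / total_terrorists

# Straw-man "99% accurate" algorithm: 500,495 flagged, 495 real terrorists.
straw_p, straw_r = precision_recall(500_495, 495, 500)

# Horsley's algorithm: ~30 flagged, at least 5 real terrorists (assumed exact).
horsley_p, horsley_r = precision_recall(30, 5, 500)

print(f"straw man: precision {straw_p:.4%}, recall {straw_r:.0%}")   # ~0.0989%, 99%
print(f"Horsley:   precision {horsley_p:.1%}, recall {horsley_r:.0%}")  # ~16.7%, 1%
```

The point of the passage is exactly this asymmetry: high recall with vanishing precision is operationally useless when every flag means hauling someone in, while a short, mostly-accurate list is something the authorities can actually act on.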

Here, then, is how Gelman-Fung critique this section of the book:

The straw man [Levitt and Dubner] employ—a hypothetical algorithm boasting 99-percent accuracy—would indeed, if it exists, wrongfully accuse half a million people out of the 50 million adults in the United Kingdom. …

But in the course of this absorbing narrative, readers may well miss the spot where Horsley’s algorithm also strikes out. The casual computation keeps under wraps the rate at which it fails at catching terrorists: With 500 terrorists at large (the authors’ supposition), the “great” algorithm finds only five of them. Levitt and Dubner acknowledge that “five out of 30 isn’t perfect,” but had they noticed the magnitude of false negatives generated by Horsley’s secret recipe, and the grave consequences of such errors, they might have stopped short of hailing his story. The maligned straw-man algorithm, by contrast, would have correctly identified 495 of 500 terrorists.

I don’t understand how Gelman-Fung conclude that we “[kept] under wraps the rate at which [the algorithm] fails.” I literally don’t understand it. Are they reading the same thing that we wrote – the same thing that you just read above?

But more bizarrely, they seem to extol the “maligned straw-man algorithm” which “would have correctly identified 495 of 500 terrorists.”

Yes, it would have correctly identified 495 of 500 terrorists – at a cost of rounding up an additional 500,000 law-abiding citizens!

In accusing us of failing to understand the tradeoff of false positives versus false negatives, it seems as if Gelman-Fung simply don’t care about the tradeoff of false positives versus false negatives. Are they advocating the British authorities round up entire neighborhoods throughout the country in order to extract 500 potential bad guys? If so, then their comprehension of democratic society is perhaps even worse than their comprehension of what we have written.

In the end, Levitt and Horsley turned over their results to MI5. Given the nature of this project, we can’t say any more than that. But imagine the alternative that Gelman-Fung apparently prefer: instead of producing a list with 30 names on it, 5 of whom were quite possibly terrorists, Levitt and Horsley stroll into MI5 with a list of half a million names. Here, there are probably 500 bad guys on this list. Good luck. That meeting likely wouldn’t have lasted long.


5. Strangely enough, Gelman-Fung don’t write about a mistake we once made that I’d consider more substantial than those they include.

In Freakonomics, we wrote about the author and civil-rights activist Stetson Kennedy, who had infiltrated the Ku Klux Klan in the 1940s in an attempt to break it up. We based our account on interviews with Kennedy in his home, his own published and unpublished works, and several other Klan histories. After our book’s publication, we were presented with unpublished evidence arguing that Kennedy had significantly embellished his role in infiltrating the Klan, and that his portrayal of said role was inaccurate. We then sought out further historical evidence and presented it to Kennedy, again in person, so that he might rebut it. No satisfying rebuttal was forthcoming. Then, with a heavy heart, we wrote a 2006 New York Times column presenting evidence of Kennedy’s embellishments. I say “with a heavy heart” because it is of course no fun to admit you’ve been had (our column was headlined “Hoodwinked?”); but also because Kennedy was a national treasure (he died last year), a man on the right side of many good fights, and exposing him was therefore an unsavory task.

But: because our original writing had perpetuated an error that occurred in many books and other historical portrayals, we felt compelled to correct it, and we did, as publicly as we knew how.

Why did Gelman-Fung omit this story?

Perhaps they reasoned that readers of their essay might conclude that we do approach our work with the utmost appreciation for accuracy and legitimacy, and are willing to explain in the New York Times when we’ve been had. Which, of course, might lead that same reader to conclude that the “mistakes” Gelman-Fung point out are in fact not mistakes at all.

Nor do Gelman-Fung mention our Freakonomics Radio, produced in collaboration with American Public Media and WNYC. I don’t know whether this means they found it faultless, were unaware of its existence, or neither. In any case, the radio project represents the majority of our new content over the past two years, with more than 60 podcasts and 10 hour-long radio shows to date.  If you believe Gelman-Fung’s claims and doubt that our work is intellectually honest and rigorous, I’d invite you to take a look at our radio archives, which include complete transcripts and links to supporting material.


*     *     *


6. Gelman-Fung conclude their essay by offering some advice “for the would-be pop-statistics writer,” using what Levitt and I have done “wrong” as a cautionary tale.

In this section, Gelman-Fung offer plenty of practical advice about writing in general. My only objection is that whenever they write specifically about us and our work, it becomes clear that they don’t know what they’re talking about. Which leads them to use exceedingly weaselly language to promote their argument. For instance:

Although there’s no way we can be sure, perhaps, in some of the cases described above, there was a breakdown in the division of labor when it came to investigating technical points. 

“Although there’s no way we can be sure, perhaps, in some of the cases described above …”?!

It is hard to imagine writing a sentence that hedges more. Years ago, I taught a freshman comp class at Columbia, called “Logic and Rhetoric.” It encouraged young writers to concentrate on those two essential elements of worthwhile writing: the logic (and accompanying ideas, facts, examples, etc.) and the rhetoric (clear, crisp, transparently honest communication). I think I learned as much about writing that year as I taught. That said, I would occasionally come across a student’s sentence like the one above that Gelman-Fung wrote. I’d explain to its author why it was so bad: with rhetoric as contorted and sketchy as that, a reader has every right to distrust whatever “logic” you’re about to unload. Logic and rhetoric are inextricably bound up with each other; the failure of one contributes to a failure in the other.

When I read that Gelman-Fung sentence, it seems to me that what they are really saying is: We don’t actually know what we’re talking about when we talk about how Levitt and Dubner work, and we’re certainly not going to go to the trouble to do any original reporting or even fact-checking, but in the interest of attacking a ripe target like Freakonomics, let’s make some assumptions and worry later about the facts …

When they turn their attention specifically to SuperFreakonomics, Gelman-Fung write:

Success comes at a cost: The constraints of producing continuous content for a blog or website and meeting publisher’s deadlines may have adverse effects on accuracy.

That might seem a sensible argument — unless the exact opposite is true. No one holds a gun to our head to write anything. Most of what we write on our blog is a natural continuation of what we’ve already written or a casual version of what we’re working on next. Furthermore: we, not our publisher, set the deadline for our second book. Nor did we rush it. Indeed, we address the timing at the very beginning of SuperFreakonomics:

As profitable as it might have been to pump out a quick follow-up – think “Freakonomics for Dummies” or “Chicken Soup for the Freakonomics Soul” – we wanted to wait until we had done enough research that we couldn’t help but write it all down. So here we finally are, more than four years later …

Did Gelman-Fung simply fail to read the book they decided to trash? I wouldn’t have thought so, but they also write this:

The strongest parts of the original Freakonomics book revolved around Levitt’s own peer-reviewed research. In contrast … SuperFreakonomics relies heavily on anecdotes, gee-whiz technology reporting and work by Levitt’s friends and colleagues.

This is grotesquely wrong. Here’s how:

  1. “Relies heavily on anecdotes”? Simply not true. I’d love Gelman-Fung to provide a list so that I could refute it. Do we tell stories? Yes. Are the stories generally a) backed by data; and b) illustrative of a larger point we’re making? Also yes.
  2. “Gee-whiz technology reporting”? By this Gelman-Fung may be referring to our controversial chapter about global warming, which indeed discussed a variety of technological solutions. But how do we rely on “gee-whiz” reporting? Having read their essay, I am not sure that Gelman-Fung actually understand what reporting is, and they certainly don’t seem to have done much of it for their essay. Rather, they interpret (quite sloppily) what they have read in our books, look around to see what some bloggers have to say, and make a bunch of claims that you couldn’t get away with in an op-ed for a second-rate newspaper.

Our books, meanwhile, feature a good deal of original reporting in addition to the writing based on empirical analysis. SuperFreakonomics alone reflects hundreds of interviews and reporting trips to, among other places: London and elsewhere in the U.K. (for the terrorism project described in Chapter 2); Washington, D.C. (for the medical-informatics system known as Azyxxi in Chapter 2); Bellevue, Wash. (the anti-hurricane and anti-global-warming measures in Chapters 4 and 5); Grand Rapids, Mich. (the inefficiency of chemotherapy, Chapter 2); New Haven (the monkey experiments in the Epilogue); an undisclosed location in the northeast (for the car-seat crash tests we commissioned, as described in Chapter 4); and Queens, N.Y. (for the Kitty Genovese story, as described in Chapter 3). Granted, that last trip was only a subway ride for me — but reporting is reporting, and Gelman-Fung’s inability to recognize and acknowledge it strikes me as a great deficit.

Finally, Gelman-Fung argue that SuperFreakonomics, unlike Freakonomics, featured the research of Levitt’s “friends and colleagues” rather than Levitt himself. This is among their biggest assumptions, and perhaps their most mistaken. I cannot say for certain how they came to this conclusion, but I do have a guess.

Our books feature stories that include a combination of reporting, data analysis, and character-based narrative. The characters we’ve written about — the sociologist Sudhir Venkatesh, the economist John List, the U.K. fraud officer Ian Horsley, etc. – are often co-authors with Levitt on academic papers. While writing about the analysis and/or investigations that Levitt and/or I have done, we tend to focus on these co-authors rather than insert ourselves as protagonists in the narrative.

Why? It is a way to both share credit and to not be constantly thumping one’s own chest. (The irony is that Chris Blattman accuses us of spreading too little credit, while Gelman-Fung interpret our credit-spreading as having failed to do original work.) In both Freakonomics and SuperFreakonomics, we devote a lot of space (and effort) to writing an endnotes section that fully explains our sources and methodologies. In SuperFreakonomics, the endnotes section ran about 12,700 words, about the length of a book chapter. 

Here, drawn from those SuperFreakonomics endnotes, are some of the original Levitt papers around which the book was built:

Steven D. Levitt and Jack Porter, “How Dangerous Are Drinking Drivers?,” Journal of Political Economy 109, no. 6 (2001).

Steven D. Levitt and Sudhir Alladi Venkatesh, “An Empirical Analysis of Street-Level Prostitution,” working paper.

Ilyana Kuziemko and Steven D. Levitt, “An Empirical Analysis of Imprisoning Drug Offenders,” Journal of Public Economics 88 (2004). 

Steven D. Levitt and Chad Syverson, “Antitrust Implications of Outcomes When Home Sellers Use Flat-Fee Real Estate Agents,” Brookings-Wharton Papers on Urban Affairs, 2008.

Roland G. Fryer, Steven D. Levitt, and John A. List, “Exploring the Impact of Financial Incentives on Stereotype Threat: Evidence from a Pilot Study,” AEA Papers and Proceedings 98, no. 2 (2008).

Mark Duggan and Steven D. Levitt, “Assessing Differences in Skill Across Emergency Room Physicians,” working paper.

Steven D. Levitt and A. Danger Powers, “Identifying Terrorists Using Banking Data,” working paper.

Steven D. Levitt and Matthew Gentzkow, “Measuring the Impact of TV’s Introduction on Crime,” working paper.

Steven D. Levitt, “The Effect of Prison Population Size on Crime Rates: Evidence from Prison Overcrowding Litigation,” The Quarterly Journal of Economics 111, no. 2 (May 1996).

Steven D. Levitt and John A. List, “What Do Laboratory Experiments Measuring Social Preferences Tell Us About the Real World?,” Journal of Economic Perspectives 21, no. 2 (2007).

Levitt and List, “Viewpoint: On the Generalizability of Lab Behaviour to the Field,” Canadian Journal of Economics 40, no. 2 (May 2007).

Levitt and List, “Homo Economicus Evolves,” Science, February 15, 2008.

Levitt, List, and David Reiley, “What Happens in the Field Stays in the Field: Professionals Do Not Play Minimax in Laboratory Experiments,” Econometrica (forthcoming, 2009).

Levitt and List, “Field Experiments in Economics: The Past, the Present, and the Future,” European Economic Review (forthcoming, 2009).

Steven D. Levitt and Jack Porter, “Sample Selection in the Estimation of Air Bag and Seat Belt Effectiveness,” The Review of Economics and Statistics 83, no. 4 (November 2001).

Steven D. Levitt, “Evidence That Seat Belts Are as Effective as Child Safety Seats in Preventing Death for Children,” The Review of Economics and Statistics 90, no. 1 (February 2008).

Levitt and Joseph J. Doyle, “Evaluating the Effectiveness of Child Safety Seats and Seat Belts in Protecting Children from Injury,” Economic Inquiry, forthcoming.

Ian Ayres and Steven D. Levitt, “Measuring Positive Externalities from Unobservable Victim Precaution: An Empirical Analysis of LoJack,” Quarterly Journal of Economics 113, no. 1 (February 1998).

Given these citations, how can one justify what Gelman-Fung wrote about SuperFreakonomics? If I had to guess, I’d say that either a) they were wed to the anti-Freakonomics argument they’d embarked on and were unwilling to let facts stand in the way; or b) they simply failed to read the endnotes. They would hardly be the first people to fail to read a book’s endnotes – but given that they were launching a scholarly attack in a journal like American Scientist, one might have expected otherwise.

*     *     *

7. Finally: it is true, as Gelman-Fung write, that we sometimes feature the work of researchers we’ve come to know. (We also write about lots and lots of people we don’t know at all. Furthermore, Andrew Gelman himself has turned up on our blog several times – and as he has made clear, he is plainly not our friend.)

Gelman-Fung present this “friend and colleague” idea as an argument that we favor or feature the work of certain scholars because we happen to know them. There is indeed an arrow to be drawn between what we write and whom we know – but Gelman-Fung have the arrow traveling in the wrong direction.

It isn’t that we necessarily write about friends’ and colleagues’ work simply because we know them; it’s that we sometimes become friends and colleagues with people who do interesting work.

And I’d be surprised if Gelman and Fung didn’t do exactly the same thing. Isn’t that the point of living a life of the mind – to seek out the most fascinating, energetic, right-minded thinkers you can find and spend your time learning from them and with them? 

Indeed, if you take a look at Gelman’s blog, you’ll find he consistently references and praises the work of certain scholars whom he seems to admire. One of them, as it happens, is Chris Blattman. And Blattman, on his blog, seems to admire Gelman as well.

So Gelman and Blattman seem to like each other’s work, and I’m happy for that. If they are real friends in real life, so much the better. That’s how things work. But having one set of rules for yourself and another set for the people you choose to attack is neither good logic nor good rhetoric.

Another scholar who often appears on Gelman’s blog is Dan Kahan, a professor of law and psychology at Yale. Kahan is a leader of the Cultural Cognition Project, a scholarly group that explores how people’s underlying beliefs and biases color their rational assessment of important topics like climate change and nuclear power. 

I interviewed Kahan for a recent Freakonomics Radio podcast called “The Truth Is Out There … Isn’t It?” It’s about how even smart people – in fact, especially smart people – tend to seek out information that confirms their ideological or moral views rather than honestly assessing the evidence.

In the podcast, I describe some interesting research Kahan and others had done on the perceived risks of climate change.

DUBNER: [Ellen] Peters and Kahan found that high scientific literacy and numeracy were not correlated with a greater fear of climate change. Instead, the more you knew, the more likely you were to hold an extreme view in one direction or the other — that is, to be either very, very worried about the risks of climate change or to be almost not worried at all. In this case, more knowledge led to … more extremism! Why on earth would that be? Dan Kahan has a theory. He thinks that our individual beliefs on hot-button issues like this have less to do with what we know than with who we know.

We then hear from Kahan:

KAHAN: My activities as a consumer, my activities as a voter, they’re just not consequential enough to count. But my views on climate change will have an impact on me in my life. If I go out of the studio here over to campus at Yale, and I start telling people that climate change is a hoax – these are colleagues of mine, the people in my community—that’s going to have an impact on me; they’re going to form a certain kind of view of me because of the significance of climate change in our society, probably a negative one. Now, if I live, I don’t know, in Sarah Palin’s Alaska, or something, and I take the position that climate change is real, and I start saying that, I could have the same problem. My life won’t go as well. People who are science literate are even better at figuring that out, even better at finding information that’s going to help them form, maintain a view that’s consistent with the one that’s dominant within their cultural group.  

I found this observation fascinating. It’s a striking example of what Danny Kahneman calls being “blind to our blindness” — that is, how our biases lead us to form conclusions that we think are rational but in fact are merely extensions of our preexisting beliefs.

Were Gelman-Fung blind to their blindness? Did they come to believe, for whatever personal or professional reasons, that Levitt’s and my work was in need of attack, and did they then set out to gather evidence that seemed to support their bias?

Or, put more colloquially: once they’d picked up a hammer, did everything look like a nail?

I can certainly understand why Freakonomics is an appealing target for someone like Gelman-Fung. As I noted earlier, there are strong incentives to attack, particularly in the public sphere, where one can get a ton of attention in a blink by assailing the reputation of someone who’s been plugging away for years. Whether in the academy, the media, the political arena, or elsewhere, public discourse these days often seems little more than a tit-for-tat game in which you wait for someone or something to achieve a certain momentum and then shout as loudly as you can that it’s “wrong!” Or, in written form: Epic fail.

That is generally not what Levitt and I try to do in our Freakonomics work. There are a lot of different ways to explore and explain how the world works, and to resort so easily to attack mode strikes me as both counterproductive and exhausting.

To be fair, I’m guessing that even Andrew Gelman and Kaiser Fung and Chris Blattman would agree with me on this point. A shouting match can be fun to watch once in a while, but the world is more interesting than that, or at least it should be.