
Posts Tagged ‘research’

Diversity in Research

A new NBER paper by Richard B. Freeman and Wei Huang looks at the ethnic diversity of research collaborators. They find that papers with more authors in more locations tend to be cited more:

This study examines the ethnic identity of the authors of over 1.5 million scientific papers written solely by US-based authors from 1985 to 2008. In this period the proportion of US-based authors with English and European names fell while the proportion of US-based authors with names from China and other developing countries increased. The evidence shows that persons of similar ethnicity co-author together more frequently than can be explained by chance given their proportions in the population of authors. This homophily in research collaborations is associated with weaker scientific contributions. Researchers with weaker past publication records are more likely to write with members of their own ethnicity than other researchers. Papers with greater homophily tend to be published in lower impact journals and to receive fewer citations than others, even holding fixed the previous publishing performance of the authors. Going beyond ethnic homophily, we find that papers with more authors in more locations and with longer lists of references tend to be published in relatively high impact journals and to receive more citations than other papers. These findings and those on homophily suggest that diversity in inputs into papers leads to greater contributions to science, as measured by impact factors and citations.

Don’t Remind Criminals They Are Criminals

Psychologists have long argued about the power of priming, i.e., the power of subtle cues and reminders to influence behavior.  For instance, there are a number of academic papers that find that if you make a woman write down her name and circle her gender before taking a math test, she will do substantially worse than if she just writes her name.  The idea is that women perceive that they are not good at math, and circling their gender reminds them that they are women and therefore should be bad at math.  I’ve always been skeptical of these results (and indeed failed to replicate them in one study I did with Roland Fryer and John List) because gender is such a powerful part of our identities that it’s hard for me to believe that we need to remind women that they are women!

In an interesting new study, “Bad Boys: The Effect of Criminal Identity on Dishonesty,” Alain Cohn, Michel Andre Marechal, and Thomas Noll find some fascinating priming effects.  They went into a maximum security prison and had prisoners privately flip coins and then report how many times the coin came up “heads.”  The more “heads” they got, the more money they received.  While the authors can’t tell if any one prisoner is honest or not, they know that on average “heads” comes up half the time, so they can measure in aggregate how much lying there is.  Before the study, they had half the prisoners answer the question “What were you convicted for?” and the other half “How many hours per week do you watch television on average?”  The result: 66 percent “heads” in the treatment where they ask about convictions and “only” 60 percent “heads” in the TV treatment. 
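The aggregate amount of lying can be backed out with simple arithmetic. If prisoners report actual heads honestly, and a fraction q of those who see tails misreport them as heads, the reported heads share is 0.5 + 0.5q. Here is a minimal back-of-the-envelope sketch of that calculation (my illustration, not the authors’ estimator):

```python
def implied_lying_rate(reported_heads_share, true_heads_share=0.5):
    """Back out the share of tails misreported as heads, assuming actual
    heads are reported honestly: reported = true + (1 - true) * q."""
    return (reported_heads_share - true_heads_share) / (1 - true_heads_share)

# The two reported shares from the study's treatments:
print(implied_lying_rate(0.66))  # conviction prime: ~32% of tails misreported
print(implied_lying_rate(0.60))  # TV prime: ~20% of tails misreported
```

So even the "honest" TV-question baseline implies a fair amount of lying; the conviction prime raises it further.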

The Retraction Penalty

In a new working paper called “The Retraction Penalty: Catastrophe and Consequence in Scientific Teams” (gated), Ginger Zhe Jin, Benjamin Jones, Susan Feng Lu, and Brian Uzzi explore a fascinating research question:

What are the individual rewards to working in teams? This question extends across many production settings but is of long-standing interest in science and innovation, where the “Matthew Effect” [a.k.a. “the rich get richer and the poor get poorer”] suggests that eminent team members garner credit for great works at the expense of less eminent team members. In this paper, we study this question in reverse, examining highly negative events – article retractions. Using the Web of Science, we investigate how retractions affect citations to the authors’ prior publications. We find that the Matthew Effect works in reverse – namely, scientific misconduct imposes little citation penalty on eminent coauthors. By contrast, less eminent coauthors face substantial citation declines to their prior work, and especially when they are teamed with an eminent author. A simple Bayesian model is used to interpret the results. These findings suggest that a good reputation can have protective properties, but at the expense of those with less established reputations.

To me, this finding is a bit surprising at first glance, less so upon second glance, but fascinating either way.

If you are even a little bit interested in this topic and don’t know about the Retraction Watch website, you should. A few recent examples:

Beware the Weasel Word "Statistical" in Statistical Significance!

As Justin Wolfers pointed out in his post on income inequality last week, the Census Bureau was talking statistical nonsense. I blame the whole idea of statistical significance. For its weasel adjective “statistical” concedes that the significance might not be the kind about which you care. Here, I’ll explain what statistical significance is, and how its use is harmful to society.

To evaluate the statistical significance of an effect, you calculate the so-called p value; if the p value is small enough, the effect is declared statistically significant. For an example to illustrate the calculations, imagine that your two children Alice and Bob play 30 rounds of the card game “War,” and that the results are 20-10 in favor of Bob. Was he cheating?

To calculate the p value, you need an assumption, called the null (or no-effect) hypothesis: here, that the game results are due to chance (i.e. no cheating). The p value is the probability of getting results at least as extreme as the actual results of 20-10. Here, the probability of Bob’s winning at least 20 games is 0.049. (Try it out at Daniel Sloper’s “Cumulative Binomial Probability Calculator.”)
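That tail probability is just a binomial sum: with a fair coin, the chance Bob wins at least 20 of 30 rounds is the sum of C(30, k)(1/2)^30 for k from 20 to 30. A quick sketch of the computation, if you’d rather not use the online calculator (the helper function name is mine, not from the post):

```python
from math import comb

def p_value_at_least(n, k_min, p=0.5):
    """P(X >= k_min) for X ~ Binomial(n, p): the one-sided tail probability
    of a result at least as extreme as k_min wins out of n rounds."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Bob wins 20 of 30 rounds under the no-cheating (p = 0.5) null hypothesis:
print(round(p_value_at_least(30, 20), 3))  # 0.049
```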

Research from My Favorite Economic Gabfest

I’ve just gotten back home after a terrific few days at the Brookings Panel on Economic Activity.  It’s my favorite gabfest of the year, featuring economic analysis that is both serious research and connected to ongoing policy debates.  (OK, I’m biased: I’m an editor, and I organize the conference along with Berkeley’s David Romer.)  And while I think some of you may enjoy slogging your way through the latest papers, others may prefer their summaries simpler and lighter. So I went ahead and recorded a few short videos summarizing the papers. I hope you enjoy!

Faster Than Light: A Guest Post

I recently had occasion to e-chat with Rocky Kolb, a well-regarded astronomer and astrophysicist at the University of Chicago. Talk turned, of course, to the recent likely discovery of the Higgs boson — but, as Kolb talked about that, he raised an even broader and more interesting point about scientific discovery.

He was good enough to write up his thoughts in a guest blog post that I am pleased to present below:


Faster Than Light
By Rocky Kolb

After the news coverage of the past week, everyone now understands what a Higgs particle is, and why physicists were so excited about the July 4th announcement of its probable discovery at CERN, a huge European physics accelerator laboratory.  (The disclaimer “probable” is because it could turn out that the new particle seen at CERN is not the Higgs after all, but an imposter particle with properties like the Higgs.)

For a few days it was common to see, hear, or read my colleagues struggling to explain why the discovery of a Higgs particle is a triumph for science.  But after a week of physics in the news, the media has moved on to cover the Tom Cruise-Katie Holmes divorce and shark sightings near beaches.  Perhaps all the public will be left with is a memory that there was a triumph for science.  Science works: theories are tested and confirmed by experiment.

I think that the CERN Higgs discovery was, indeed, a triumph for science.  However, the Higgs was not the only dramatic announcement at CERN in the past year.  But the other dramatic result is something many physicists would rather forget.

Yes, I Just Paid $1,600 for a Set of Encyclopaedia Britannica

Encyclopaedia Britannica has declared that its latest print edition will be its last; from here on out, everything will be digital. Jim Romenesko rounds up coverage from the Times, the Chicago Tribune, and elsewhere. I am not much of an impulse buyer, but when I read that there were only 800 sets remaining — that’s what they say, at least — I jumped right in and paid nearly $1,600 to have a set shipped to my home in New York.

Freakonomics: What Went Right? Responding to Wrong-Headed Attacks

Warning: what follows is a horribly long, inside-baseball post that most people will likely have little interest in reading, and which I had little interest in writing. But it did need to be written. Apologies for the length and the indulgence; we will soon return to our regular programming.

*     *     * 

I. Going on the attack is generally more fun, profitable, and attention-getting than playing defense. Politicians know this; athletes know it; even academics know it. Or perhaps I should say that especially academics know it?

Given the nature of the Freakonomics work that Steve Levitt and I do, we get our fair share of critiques. Some are ideological or political; others are emotional.

We generally look over such critiques to see if they contain worthwhile feedback, or point to an error in need of correction. But for the most part, we tend to not reply to critiques. It seems only fair to let critics have their say (as writers, we’ve already had ours). Furthermore, spending one’s time responding to wayward attacks is the kind of chore you’d rather skip in order to get on with your work.

But occasionally an attack is so spectacularly ridiculous, so riddled with errors and mangled logic, that it’s worth addressing.

The following essay responds to two such attacks. The first one was relatively minor, a recent blog post written by a Yale professor. The second was more substantial, an essay by a pair of statisticians in American Scientist. Feel free to skip ahead to that one (at section III below), or buckle up for the whole bumpy ride.

Are Fake Resumes Ethical for Academic Research?

“Audit studies” have been popular in labor economics research for 10 years.  The researcher sends resumés of artificial job applicants in response to job openings.  Typically the resumés are sent in pairs that differ in one crucial characteristic, one signaling the racial, gender, ethnic, or other group to which one member of the pair belongs and the other does not.  The differential response of employers to that characteristic is taken as a measure of discrimination in hiring.

Is this ethical? 

Research Ideas From a Mexican Reader

A reader called HDT writes to say:

I live in Mexico and have often wondered why more American economists and students of economics don’t venture down here, because the country offers what seems, to me at least, a treasure trove of economic oddities that should fascinate anyone interested in how markets work.

* As Mexico is heading toward what’s likely to be the second most important election in its history, the subject of vote-buying is of particular interest if for no other reason than that it’s practiced fairly openly, especially in rural areas. I know that during the last elections, here in Yucatan, votes were being bought, in cash, for around $80. (Pigs and cows were also exchanged for votes, but I wasn’t ever able to find out what the “going rate” was for those particular transactions.) There are, of course, people employed by the major political parties who specialize in determining what votes are worth throughout the country. I imagine they’re easier to find, and talk to, than you might expect.

* There’s also the rather intriguing issue of how Mexican real estate agents determine a reasonable price for any given property they’re hoping to sell. The problem is that it’s customary to decrease the tax burden on the sale of a home by getting the buyer to lie about how much he or she paid. In other words, the sales prices stated in government records are almost never accurate. Everyone knows this. And yet, properties regularly change hands and real estate agents do manage to make a living. But how?

Any takers?

Beware: This Blog Apparently Causes Academic Fraud

Way to scapegoat, Chronicle of Higher Education!

An article about a Dutch psychologist accused of faking his research data wonders if academic fraudsters are responding to the wrong incentives:

Is a desire to get picked up by the Freakonomics blog, or the dozens of similar outlets for funky findings, really driving work in psychology labs? Alternatively—though not really mutually exclusively—are there broader statistical problems with the field that let snazzy but questionable findings slip through?

Research Retractions Rising

There’s a new trend emerging in academic research: more retractions. According to a recent article in Nature by Richard Van Noorden, “[i]n the early 2000s, only about 30 retraction notices appeared annually. This year, the Web of Science is on track to index more than 400 (see ‘Rise of the retractions’) — even though the total number of papers published has risen by only 44% over the past decade.”

The article suggests that the increase is a result of “an increased awareness of research misconduct” and “the emergence of software for easily detecting plagiarism and image manipulation, combined with the greater number of readers that the Internet brings to research papers.”

While scientists and editors support the change, they point to various problems with the system: policy inconsistencies across journals, “opaque” explanations for retractions, ongoing citation of retracted papers and the stigma surrounding retraction. “[B]ecause almost all of the retractions that hit the headlines are dramatic examples of misconduct, many researchers assume that any retraction indicates that something shady has occurred,” writes Van Noorden.

The Latest from the Brookings Panel

I’m back from my favorite conference of the year—the Brookings Papers on Economic Activity. It was a terrific line-up of papers. And to call the discussion lively would be an understatement. (Full disclosure: David Romer and I are the co-editors.)

While a close reading of technical research papers is my idea of a good time, I’m told not everyone is wired this way. So I went into the studio to record a very simple summary of my thoughts on the papers. You won’t quite get the whole two days of economic policy wonk-ery, but this video is a start:

A Twitter Experiment

I’m a long-time Twitter skeptic. It’s difficult for an economist to see a 140 char lmt as a ftr. My journalist friends tell me I’m dead wrong. And a recent long and boozy evening with co-founders Evan Williams and Jason Goldman convinced me to give it a try. Is Twitter worth the hype? Let’s find out.

Today I’m beginning my Twitter Experiment. I’m now tweeting @justinwolfers. I’m going to keep this up for a couple of weeks as a “burn in” period—basically so that I can learn the ecosystem before my experiment begins. Then on the morning of August 1, I’m going to wake up, and flip a coin. Heads, I’ll open Twitter; tails I won’t. And I’ll do the same on August 2, and then every day for three months. If the coin comes up heads, it doesn’t necessarily mean that I’ll tweet, just that it will be a Twitter-aware day; I’ll consume the stream, and tweet away if I feel the need. Tails, and I’ll simply tweet “Tails, goodbye,” close the stream (unless I need it for research) and then resist the urge to tweet for the rest of the day.

How to Streamline Drug Research?

We all know that information is valuable, and that more information is generally better than less.

But in the realm of pharmaceutical research (as in others, to be sure), there’s a troubling paradox: while successes are widely publicized, and while the results of clinical trials are usually published, the research from projects that fail before that stage is usually kept hidden.

Your Tax Dollars at Work (Seriously)

A long-standing pet peeve of mine is that so much academic research is funded by public tax dollars and yet the public is rarely given access to the findings of that research.

In a short Times piece today, I found a hero: Michael Tuts, a particle physicist at Columbia who, among other things, is doing work at CERN, the European Organization for Nuclear Research:

Our Daily Bleg: How to Justify Long-Term Scientific Research?

In response to our bleg request, Rafe Petty of the University of Chicago chemistry department wrote in with the following question(s). Let him know what you think in the comments section, and send future blegs to: I was recently at a lecture by George Whitesides, one of the most well-known living chemists. He gave a very interesting lecture at . . .

What Do Declining Abortion Rates Mean for Crime in the Future?

The abortion rate in the United States is at a thirty-year low — though even with the decline, we are still talking about a large number of abortions in absolute terms, or 1.2 million per year. To put this number into perspective, there are about 4 million births per year in the U.S. John Donohue and I have argued . . .

‘The Isaac Newton of Biology’

Talk about a nickname that is hard to live up to! Franziska Michor, who is a friend, former Harvard Society Fellow, and honorary economist, is featured in this year’s Esquire “Genius” edition under the headline “The Isaac Newton of Biology.” And she is only 25, and can also drive an eighteen-wheeler. Here is a link to her research on cancer.

Disturbing Facts about Sexual Abuse

From research by economists J.J. Prescott and Jonah Rockoff, here are a few current statistics on sex offenses reported to the police: 1) 25 percent of victims are 10-14 years old; 23 percent are nine or younger. 2) 22.5 percent of the offenders are family members. Only 8 percent are strangers. 3) 25 percent of sex offenses reported to the . . .

Levitt on Abortion/Crime: A FREAK-TV Collage of Evidence

In the video player on the left, you’ll find Part 2 of Levitt’s discussion of the research behind the abortion/crime link. (You can find Part 1 in the video player as well; here’s the blog post that accompanied it.) In this installment, he discusses the collage of evidence that convinced him and John Donohue of the link between legalized . . .

Abortion/Crime: Where Do Ideas Come From?

It’s always interesting to see where smart people get their ideas. Often, especially in the creative arts, it’s impossible to trace an idea down to its roots. But it’s easier in the social sciences. I, for one, believe that Steve Levitt has had an awful lot of good research ideas, and it’s good to hear how a particular idea . . .