In a new working paper called “The Retraction Penalty: Catastrophe and Consequence in Scientific Teams” (gated), Ginger Zhe Jin, Benjamin Jones, Susan Feng Lu, and Brian Uzzi explore a fascinating research question:
What are the individual rewards to working in teams? This question extends across many production settings but is of long-standing interest in science and innovation, where the “Matthew Effect” [a.k.a. “the rich get richer and the poor get poorer”] suggests that eminent team members garner credit for great works at the expense of less eminent team members. In this paper, we study this question in reverse, examining highly negative events – article retractions. Using the Web of Science, we investigate how retractions affect citations to the authors’ prior publications. We find that the Matthew Effect works in reverse – namely, scientific misconduct imposes little citation penalty on eminent coauthors. By contrast, less eminent coauthors face substantial citation declines to their prior work, and especially when they are teamed with an eminent author. A simple Bayesian model is used to interpret the results. These findings suggest that a good reputation can have protective properties, but at the expense of those with less established reputations.
To me, this finding is a bit surprising at first glance and, on reflection, not so surprising after all, but fascinating either way.
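Why might a strong reputation absorb a bad signal? The abstract mentions a simple Bayesian model; the following is not the paper's actual model, just a generic normal-normal updating sketch with made-up parameter values, showing how a tight prior (a long track record) barely moves after one negative signal while a diffuse prior (a short track record) collapses toward it.

```python
# A hedged illustration, not the authors' model: Bayesian updating of
# perceived author quality after one noisy negative signal (a retraction).
# All parameter values below are illustrative assumptions.

def posterior_mean(prior_mean, prior_var, signal, signal_var):
    """Posterior mean of quality under conjugate normal-normal updating."""
    precision = 1 / prior_var + 1 / signal_var
    return (prior_mean / prior_var + signal / signal_var) / precision

bad_signal = -2.0   # the retraction, read as a noisy negative draw
signal_var = 1.0

# Eminent author: long track record, so a tight prior around high quality.
eminent = posterior_mean(prior_mean=1.0, prior_var=0.1,
                         signal=bad_signal, signal_var=signal_var)
# Less eminent author: little track record, so a diffuse prior.
junior = posterior_mean(prior_mean=1.0, prior_var=10.0,
                        signal=bad_signal, signal_var=signal_var)

print(f"eminent posterior mean: {eminent:.2f}")
print(f"junior  posterior mean: {junior:.2f}")
# The tight prior stays near its high mean; the diffuse prior
# swings most of the way toward the bad signal.
```

Under these assumed numbers the eminent author's estimated quality stays positive while the junior author's turns sharply negative, which matches the qualitative pattern the paper reports.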
Many folks ask me what the impact of randomized trials is on development. We at Innovations for Poverty Action and the M.I.T. Jameel Poverty Action Lab are dedicated to randomized trials to help push forward evidence-based policymaking. Yet what is the evidence that evidence shifts views? That is not so easy to measure. I’ve done some work on the donor side, which I’ve reported on here before. Here is a meta-study that uses two of my studies, which found fairly different results. One found that access to credit in South Africa led to increased income; the other found that access to credit in the Philippines had no discernible impact on income.
The researchers sent off about 1,500 mailers to microfinance institutions around the world, telling them about the positive study, the negative (or non-positive, technically) study, or a placebo (no mention of a study), and asked them if they wanted to participate in a randomized trial to measure the impact of their organization. They then saw which microfinance leaders responded, and whether they responded favorably or negatively.
In the Washington Post, Peter Whoriskey writes about the rising incidence of fraud in research labs:
It may be impossible for anyone from outside to know the extent of the problems in the Nature paper. But the incident comes amid a phenomenon that some call a “retraction epidemic.”
Last year, research published in the Proceedings of the National Academy of Sciences found that the percentage of scientific articles retracted because of fraud had increased tenfold since 1975.
The same analysis reviewed more than 2,000 retracted biomedical papers and found that 67 percent of the retractions were attributable to misconduct, mainly fraud or suspected fraud.
One of the less-obvious downsides of academic fraud:
The trouble is that a delayed response — or none at all — leaves other scientists to build upon shaky work. [Ferric] Fang said he has talked to researchers who have lost months by relying on results that proved impossible to reproduce.
Moreover, as [Adam] Marcus and [Ivan] Oransky have noted, much of the research is funded by taxpayers. Yet when retractions are done, they are done quietly and “live in obscurity,” meaning taxpayers are unlikely to find out that their money may have been wasted.
“Audit studies” have been popular in labor economics research for 10 years. The researcher sends resumés of artificial job applicants in response to job openings. Typically, the two resumés in a pair are identical except for one crucial characteristic that signals membership in a particular racial, gender, ethnic, or other group: one applicant appears to belong to the group while the other does not. The differential response of employers to that implied characteristic is taken as a measure of discrimination in hiring.
Is this ethical?
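The measure described above can be sketched in a few lines: the estimated discrimination is simply the gap in employer callback rates between the two groups of otherwise-identical paired resumés. The numbers below are made up for illustration, not drawn from any actual audit study.

```python
# Hypothetical sketch of the audit-study measure: the gap in callback
# rates between matched pairs of resumés. All figures are invented.

def callback_gap(callbacks_a, callbacks_b, n_pairs):
    """Difference in callback rates between group A and group B resumés."""
    return callbacks_a / n_pairs - callbacks_b / n_pairs

# Suppose 1,000 matched pairs were sent out: group A resumés drew 95
# callbacks and group B resumés drew 60.
gap = callback_gap(95, 60, 1000)
print(f"Estimated discrimination measure: {gap:.3f}")
```

In a real study one would also test whether the gap is statistically distinguishable from zero, since small differences can arise by chance even with no discrimination.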
Freakonomics readers may know that I’m not the most qualified person to talk about using surveys. My first attempt — asking street gang members “How does it feel to be black and poor? Very bad, bad, good, …” — was met with laughter, disbelief, and scorn. (I suppose it was all uphill from that point!)
A basic question social scientists confront is: Why would you want to participate in our survey? Interviews can be long and boring; who wants to sit on the phone or stand on a street corner answering questions? A few bucks may not be worth the time. In fact, you have likely already perfected methods of avoiding telemarketers and sidewalk interviewers. From a data standpoint, your skilled avoidance is our problem: the views of respondents can differ from those of non-participants. From political races to consumer habits to opinion polls … we love numbers, and we need participation to get an accurate reading.
Yes! That’s the argument in a new Historical Biology paper called “A Call to Search for Fossilized Gastric Pellets.” Here’s the abstract:
Numerous extant carnivorous, piscivorous and insectivorous species – including birds, pinnipeds, varanid lizards and crocodiles and mammals – routinely ingest food combined with a high proportion of indigestible material that can be neither absorbed through digestion nor eliminated as faecal matter. Their solution is to egest the indigestible portion through the mouth as a gastric pellet. The status of gastric pellets in extant species is reviewed. Arguments based on phylogeny, anatomy and biomechanics strongly suggest that many extinct species, including crocodilians and pterosaurs, may also have produced gastric pellets routinely.
Way to scapegoat, Chronicle of Higher Education!
An article about a Dutch psychologist accused of faking his research data wonders if academic fraudsters are responding to the wrong incentives:
Is a desire to get picked up by the Freakonomics blog, or the dozens of similar outlets for funky findings, really driving work in psychology labs? Alternatively—though not really mutually exclusively—are there broader statistical problems with the field that let snazzy but questionable findings slip through?