A Great Example of Bias Within Academia
It is amazing how good we are — even the smartest, most rational people among us — at not recognizing our own biases. (Danny Kahneman memorably calls this being “blind to our blindness.”)
We recently put out a podcast called “The Truth Is Out There … Isn’t It?” about how people decide what to believe about everything from global warming and nuclear risk to UFOs. It was inspired by the research of Dan Kahan and his colleagues at the Cultural Cognition Project; they have found that we systematically filter our beliefs through our personal and political ideologies. In other words, we allow our biases to influence what we think about ostensibly non-ideological issues, but we aren’t aware of that influence.
We’re also working on an upcoming podcast about media bias, which will feature Tim Groseclose (author of Left Turn) and a cast of thousands. Once again, we bump up against the issue of people making seemingly objective judgments that are based, in some large part, on their subjectivity.
If you are at all interested in these kinds of bias stories, and especially if you care about the realm of academic economics, you’ll definitely want to look at a new paper by Christis Tombazos and Matthew Dobra, who looked for bias within their own field. The paper (PDF here) is called “Using a Voting Mechanism to Evaluate the Quality of Research in Economics: Lessons from the Australian National Research Assessment” (emphasis added):
As part of the Australian National Research Assessment, the nation’s 133 most senior academic economists participated in a voting process that assigned quality ratings to almost a thousand economics journals. The ratings were then applied retroactively to the publications of the nation’s 975 academic economists by a number of institutions, for a variety of purposes. The government used them to rank universities and to distribute research funds; universities used them in hiring decisions and in determining salaries and publication bonuses. This study investigates the determinants of voting decisions. We find that voters are influenced by objective measures of journal quality. However, we also find strong evidence that, other things equal, voters assign the highest possible quality rating to journals in which they have published. They also overstate the quality of journals to which they have special access while understating the quality of journals that fall primarily in the fields of expertise of their 842 non-voting colleagues, or in which these non-voting colleagues have published.