Nickeled and Dimed, by Barbara Ehrenreich
One of the things I’ve learned from Levitt is that you need a thick skin if you are going to write about controversial topics. And since Betsey Stevenson and I wrote about “The Paradox of Declining Female Happiness,” we’ve been called everything from left-wing fools to right-wing tools. But it can be a real kick in the guts when you learn that someone you thought you admired turns out to be simply dishonest. And that’s how I felt when I read Barbara Ehrenreich’s “takedown” of our research in today’s LA Times.
Our research is simply about documenting a fact: since the 1970s, women’s self-reported happiness has fallen, relative to that of men. This seems paradoxical, given the tremendous strides made by the women’s movement. We report this fact, test that it is a robust finding, and suggest that future research may help sort out whether it reflects how the women’s movement affected women’s hedonic state; whether it reflects the differential impact on women of some broader social trend; or whether instead it is telling us something about the (un)reliability of happiness data.
But Ehrenreich thinks our research “doesn’t pass the giggle test”:
Only by performing an occult statistical manipulation called “ordered probit estimates” do the authors manage to tease out any trend at all.
O.K., so her first criticism is that we use an appropriate statistical technique for dealing with ordered responses (like “very happy,” “pretty happy,” or “not too happy”). Still, it’s fair enough to ask us to be more transparent. But we are! Right there in Figure 1 of the paper, we report the trend in the proportion of women reporting that they are very happy. And in fact, we say the following:
Women begin the sample 4 percentage points more likely than men to report that they are very happy, and end the sample 1 percentage point less likely.
But somehow Ehrenreich misses this. Instead, careful to cherry-pick her evidence, she focuses on the subsequent paragraph, which reports:
Women were 1 percentage point less likely than men to say they were not too happy at the beginning of the sample; by 2006 women were 1 percentage point more likely to report being in this category.
Both statements are correct, and both are necessary for a balanced reading. Taken together, they tell us that the decline in female happiness is largely about a decline in those “very happy,” rather than an increase in those “not too happy.”
Oh, and she forgot to mention something else: The same trend that is evident in these data is also evident in the Virginia Slims Poll, the Monitoring the Future Survey, and in Europe, in the Eurobarometer. Last week, Chris Herbst reminded us of another dataset, the DDB Needham Life Style Survey. And guess what? Those data also show a significant trend decline in women’s life satisfaction.
Now, there’s still a real debate to be had about whether this trend is important. Ehrenreich says,
Differences of that magnitude would be stunning if you were measuring, for example, the speed of light under different physical circumstances, but when the subject is as elusive as happiness — well, we are not talking about paradigm-shifting results.
This is a judgment call, but one best made with some knowledge of the determinants of happiness. It turns out that average happiness in a population is a rather stubborn thing, and by that standard this is a very large shift, relative to other factors known to move average happiness.
For instance, the relative decline in women’s happiness that we document is about equal to what you would see if the unemployment rate rose from 4-1/2 percent to 13 percent, or if women’s incomes had fallen by over 30 percent. (See more here.)
Ehrenreich is a fine rhetorician though, and she doesn’t miss a beat. She suggests that our study “purports to show that women have become steadily unhappier since 1972.” Purports? No, Barbara, we demonstrate that in half a dozen separate datasets, women’s reported well-being has fallen relative to men’s.
Here’s a challenge: find a single dataset that points in the opposite direction, and we’ll donate $1,000 to your favorite charity. And we’ve made it easy for you — start by downloading all of our raw data here.
Another tired old rhetorical trick is to impute intent to authors (without asking them):
As Stevenson and Wolfers report — somewhat sheepishly we must imagine — “contrary to the subjective well-being trends we document, female suicide rates have been falling, even as male suicide rates have remained roughly constant.”
Umm, there’s nothing sheepish about it. We found the fact, we published it, and we think it’s interesting. No referee or editor pushed us to include this. Barbara, this is how social science works.
And then there’s her accusation that we only care about white people:
Another distracting little data point that no one, including the authors, seems to have much to say about is that while “women” have been getting marginally sadder, black women have been getting happier and happier.
It ain’t little, and it ain’t distracting. It’s a finding that’s entirely ours, and we highlight it in Table 2 (and elsewhere). Oh, and we’ve written another paper investigating this fact in greater detail. But as Ehrenreich should know, the problem is that the paucity of data on black women doesn’t allow strong conclusions to be drawn, either way.
Then there are rookie errors. An interesting fact about the decline in women’s measured happiness is that it is so ubiquitous, affecting young and old; married and single; parents and non-parents; those working in the market and those working at home. From this, Ehrenreich concludes:
If you believe Stevenson and Wolfers, women’s happiness is supremely indifferent to the actual condition of their lives … .
But Ehrenreich is confusing our finding — which is about common trends in happiness — with her own concern, which seems to be about the determinants of the level of happiness. (And in fact, our other work demonstrates that happiness is profoundly shaped by the actual conditions of one’s life.)
Finally, there are two interesting ironies. First, Ehrenreich’s primary source of information for her attack on the Stevenson-Wolfers paper appears to be, umm, the Stevenson-Wolfers paper. Truly honest empirical work reports all the relevant facts — including those that could be used as ammunition by one’s fiercest critics. By that standard, we’ve done a pretty good job.
And second, she’s right to note that one reason that there’s now greater interest in our findings is that Marcus Buckingham is promoting a recent book he’s written that incorporates our results. And Ehrenreich’s vitriolic op-ed? It arrived just as she’s hawking her own new book about happiness.
Now in all of this mess, there’s a real point to be made — and it’s the point that Levitt made a couple of years ago. There has certainly been some media hyperbole about our work. But Levitt’s reading was prescient:
To the extent their results are being exaggerated, it is by people like me who write blog posts about their paper without being explicit about the size of the effect. The authors can’t really be blamed for that.
Steve’s right. There’s an interesting debate to be had about how to interpret the facts we uncover. We’ll participate in that debate. But the facts? Facts have this stubborn habit of being true. (This post was coauthored with the always-happy Betsey Stevenson.)