Can You Trust Census Data?

No. At least that’s the conclusion of an important new paper (ungated version here) by Trent Alexander, Michael Davern and Betsey Stevenson, who find enormous errors in some critically important economic datasets.

Let’s start with the 2000 Decennial Census. Your responses to the Census were used for two purposes. First, the Census Bureau tallied up every response to produce its official population counts. And second, it produced a 1-in-20 sub-sample of these responses, which it made available for analysis by researchers. Just about every economist I know has used this Census sub-sample, as do a fair number of demographers, sociologists, political scientists, and private-sector market researchers.

The errors are documented in a stunningly straightforward manner. The authors compare the official census count (based on the tallying up of all Census forms) with their own calculations, based on the sub-sample released for researchers (the “public use micro sample,” available through IPUMS). If all is well, then the authors’ estimates should be very close to 100% of the official population count. But they aren’t:

[Chart: microdata estimates as a percentage of official census counts, by age and sex]
Source: "Inaccurate Age and Sex Data in the Census PUMS Files: Evidence and Implications," by Trent Alexander, Michael Davern, and Betsey Stevenson

The two estimates are pretty similar for those younger than 65. But then things go haywire, with the alternative estimates disagreeing by as much as 15%. In fact, the microdata suggest that there are more very old men than very old women — I know some senior women who wish this were true! The Census Bureau has confirmed that the problem isn’t with the authors’ calculations. Rather, the problem is in the public-use microdata sample.
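As a rough sketch of the authors' consistency check, you can weight a 1-in-20 sub-sample up by a factor of 20 and express it as a percentage of the official full count. All numbers below are invented for illustration; this is not the authors' actual data or code:

```python
# Toy version of the consistency check: weight each 1-in-20 microdata
# record up by 20 and compare the total to the official full-count tally.
# All counts below are hypothetical.

SAMPLE_WEIGHT = 20  # each 1-in-20 record stands in for ~20 people

official_counts = {"60-64": 10_800_000, "85+": 4_200_000}  # full tally (invented)
microdata_counts = {"60-64": 541_000, "85+": 178_000}      # sub-sample records (invented)

for age_group, official in official_counts.items():
    estimate = microdata_counts[age_group] * SAMPLE_WEIGHT
    pct = 100 * estimate / official
    print(f"{age_group}: microdata estimate is {pct:.1f}% of the official count")
```

If the sub-sample is drawn and weighted properly, every age group should land near 100%; a group that comes in at 85% or 115% signals exactly the kind of discrepancy the authors found for the 65-and-over population.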

What’s the source of the problem? The Census Bureau purposely messes with the microdata a little, to protect the identity of each individual. For instance, if they recode a 37-year-old expat Aussie living in Philadelphia as a 36-year-old, then it’s harder for you to look me up in the microdata, which protects my privacy. In order to make sure the data still give accurate estimates, it is important that they also recode a 36-year-old with similar characteristics as being 37. This gives you the gist of some of their “disclosure avoidance procedures.” While it may all sound a bit odd, if these procedures are done properly, the data will yield accurate estimates, while also protecting my identity. So far, so good.
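A minimal sketch of the symmetric-swap idea described above (hypothetical; the Bureau's actual disclosure-avoidance procedures are not public):

```python
# Toy example of a symmetric age swap between two otherwise-similar records.
# Each individual's recorded age changes (protecting identity), but the
# file's overall age distribution does not.

def swap_ages(records, i, j):
    """Exchange the ages of records i and j."""
    records[i]["age"], records[j]["age"] = records[j]["age"], records[i]["age"]

people = [
    {"age": 37, "city": "Philadelphia"},  # the record to protect
    {"age": 36, "city": "Philadelphia"},  # a similar record nearby
]
before = sorted(p["age"] for p in people)
swap_ages(people, 0, 1)
after = sorted(p["age"] for p in people)
assert before == after  # marginal age counts are unchanged
```

The key property is symmetry: because one record gains exactly the year the other loses, any tabulation by age alone comes out the same before and after the swap.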

But the problem arose because of a programming error in how the Census Bureau ran these procedures. The right response is obvious: fix the programs, and publish corrected data. Unfortunately, the Census Bureau has refused to correct the data.

The problem also runs a bit deeper. If the mistake were just the one shown in the above graph, it would be easy to simply re-scale the estimates so that there are no longer too many, say, 85-year-old men — just weight them down a bit. But it turns out that the same coding error also messes up the correlation between age and employment, or age and marital status (and, the authors suspect, possibly other correlations as well). When you break several correlations like this, there’s no easy statistical fix.
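To see why reweighting can't rescue a broken correlation, consider a toy example (invented numbers) in which a coding error misfiles some employed younger people into the 85-and-over group:

```python
# Invented illustration: reweighting repairs the age marginal but not the
# age-employment cross-tabulation.

true_85 = [{"age": 85, "employed": False} for _ in range(90)]  # genuine 85+ records
misfiled = [{"age": 85, "employed": True} for _ in range(30)]  # erroneously recoded
broken = true_85 + misfiled

# Down-weight the inflated 85+ group so its total matches the truth (90).
weight = 90 / len(broken)
weighted_count = weight * len(broken)
print(weighted_count)  # the age marginal is repaired

# But the same weight applies to every record, so the employment share
# among 85+ records is unchanged and still wrong (the truth here is 0.0).
employed_share = sum(r["employed"] for r in broken) / len(broken)
print(employed_share)  # the cross-tab stays broken
```

A single scaling factor can only shrink or grow a group; it can't tell the misfiled records apart from the genuine ones, so any statistic computed *within* the group stays contaminated.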

Worse still, the researchers find that related problems afflict the microdata released for other major data sources. All told, they’ve found similar errors in:

  • The 2000 Decennial Census.
  • The American Community Survey, which is the annual “mini-census” (errors exist in 2003-2006, but not 2001-02, or 2007-08).
  • The Current Population Survey, which generates our main labor force statistics (errors exist for 2004-2009).

These microdata have been used in literally thousands of studies and countless policy discussions. While the findings of many of these studies aren’t much affected by these problems, in some cases important errors have been introduced. The biggest problems probably exist for research focusing on seniors. Yes, this means that many studies of important policy issues (retirement, Social Security, elder care, disability, and Medicare) will need to be revisited.

Until the Census Bureau does something about these widespread problems, we can’t even begin the process of cleaning up affected research findings. Right now, the authors warn: “The resulting errors in the public use data are severe, and… should not be used to study people aged 65 and over.” Given the long list of afflicted datasets, up-to-date credible research on seniors is virtually impossible.

The whole research community is waiting for the Census Bureau to do something about these problems.

UPDATE: Carl Bialik of the Wall Street Journal also digs a little deeper into these problems.


  1. Don Sakers says:

I work in a public library, and the Census Bureau has been using our meeting room for a year to interview recruits, train census workers, and meet with clients.

    These people are unable to keep their meeting dates, times, and locations straight. They are always showing up at the wrong branch at the wrong time, calling to cancel meetings that they never booked, or confirming meetings that are booked for a different date/time/location. Interviewees show up expecting census officials who never materialize, or census officials arrive and sit for hours waiting for clients who were given different date/time/place.

    Based on my experience with the Census Bureau this time around, I have absolutely zero confidence in any numbers that they produce.

  2. KB says:

Thanks for helping to get the word out about the 65+ estimates. I know there are aging researchers out there who are unaware of the issue. But what to do? Stop using these data sets? Stop doing research? Point myself in the direction of Census headquarters and focus an angry stare?

  3. WiL says:

    Don: Sounds like the Census people are recoding the microdata on their calendars!

  4. kip says:

    Won’t the 2010 census data be out soon, with the bug presumably fixed? Also, why did it take 10 years for this to be discovered? Hopefully more careful scrutiny will be paid to the 2010 data…

  6. Tom Peters says:

    Are they errors or are they indications of skulduggery?

    I’m not a conspiracy theorist at heart but during the periods mentioned, some or all of the people employed in compiling the information contained in the datasets mentioned were members of a larger group that is antagonistic toward the census.

    That the current census was designed and implemented by those same people is also worthy of note.

  7. jh says:

    To Don Sakers:

Those Census employees you are referring to are likely temporary employees. They probably aren’t the same people who are in charge of creating the public use micro sample or doing most of the ongoing data analysis that goes into the final numbers.

    To Justin Wolfers:

Your blog title, and some of the wording throughout, are a bit misleading. The evidence presented shows problems only with the public use micro sample file. Is there evidence that the aggregate data are way off, too? As written, this could lead someone to conclude the totals are wrong (which they might be), even though you haven’t offered any evidence about that.

  8. Grace Meng says:

    I work for a nonprofit that is working on creating new ways to collect and analyze sensitive data, through something we call a datatrust, and repeatedly, people have asked me, “Will the datatrust’s data be as accurate as data collected through careful statistical sampling?” Although we envision our work to complement, not replace, traditional data collection, this article makes clear that the assumption that current ways of collecting and releasing data are perfectly accurate is false. We’re currently exploring new technology that deals with the privacy issues by adding noise to data, but in a way that reveals more information about what kind of noise is added, in contrast to the Census where they will not or cannot reveal what they’ve done to the data to “scrub” it of identifying information. More information about PINQ can be found here:
