Medicine and Statistics Don’t Mix

Some friends of mine were recently trying to get pregnant with the help of fertility treatment. At great financial expense, not to mention pain and inconvenience, they had six eggs removed and fertilized. These six embryos were then subjected to Pre-Implantation Genetic Diagnosis (P.G.D.), a process that cost $5,000 all by itself.

The results that came back from the P.G.D. were disastrous.

Four of the embryos were determined to be completely non-viable. The other two embryos were missing critical genes/D.N.A. sequences which suggested that implantation would lead either to spontaneous abortion or to a baby with terrible birth defects.

The only silver lining in this terrible result was that the latter test had a false positive rate of 10 percent, meaning there was a one-in-ten chance that each of those two embryos might be viable after all.

So the lab ran the test again. Once again the results came back that the critical D.N.A. sequences were missing. The lab told my friends that failing the test twice left only a 1-in-100 chance that each of the two embryos was viable.

My friends, either because they are optimists or fools, or perhaps because they know a lot more about statistics than the people running the tests, decided to go ahead and spend a whole lot more money to have these almost certainly worthless embryos implanted nonetheless.

Nine months later, I am happy to report that they have a beautiful, perfectly healthy set of twins.

The odds against this happening, according to the lab, were 10,000 to 1: presumably the 1-in-100 chance for each embryo, multiplied together on the assumption that the two embryos’ results were independent.

So what happened? Was it a miracle? I suspect not. Without knowing anything about the test, my guess is that the test results are positively correlated, certainly when doing the test twice on the same embryo, but probably across embryos from the same batch as well.

But the doctors interpreted the test outcomes as if they were uncorrelated, which led them to be far too pessimistic. The right odds might be as high as 1 in 10, or maybe something like 1 in 30. (Or maybe the whole test is just nonsense and the odds were 90 percent!)
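
To see how much the correlation assumption matters, here is a back-of-the-envelope sketch in Python. The numbers are invented for illustration, since I know nothing about the actual mechanics of the test: suppose a viable embryo can fail the screen either because of a persistent, sample-specific artifact that would repeat on any retest, or because of independent run-to-run noise, calibrated so that a single test fails 10 percent of the time.

    # Hypothetical error model, not the lab's actual one: a viable embryo
    # fails because of a persistent artifact (probability H, which repeats
    # on every retest) or independent run-to-run noise (probability E).
    H = 0.08                  # persistent-artifact rate (assumed)
    E = (0.10 - H) / (1 - H)  # noise rate, so H + (1 - H) * E = 0.10

    p_fail_once = H + (1 - H) * E        # 0.1000 by construction
    p_fail_twice = H + (1 - H) * E ** 2  # correlated retests

    print(f"P(viable embryo fails once):  {p_fail_once:.4f}")   # 0.1000
    print(f"P(viable embryo fails twice): {p_fail_twice:.4f}")  # ~0.0804
    print(f"Independence would predict:   {0.10 ** 2:.4f}")     # 0.0100

Under this made-up model, a viable embryo that fails once fails the retest about 80 percent of the time, so the second test adds almost no information, and multiplying 1 in 10 down to 1 in 100 (and then to 1 in 10,000 across two embryos) dramatically overstates the case.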

Anyway, this is just the latest example of why I never trust statistics I get from people in the field of medicine, ever.

My favorite story concerns my son Nicholas:

Relatively early on in the pregnancy we had an ultrasound. The technician said that although it was very early, he thought he could predict whether it would be a boy or a girl, if we wanted to know. We said, “Yes, absolutely we want to know.” He told us he thought it would be a boy, although he couldn’t be certain.

“How sure are you?” I asked.

“I’m about 50-50,” he replied.

COMMENTS: 102

  1. Peter says:

    Moral of the story: statistics don’t lie, but sometimes life just gets its way.

    Mazel to all!

  2. Chris Nelson says:

    During childbirth classes for my second child, we learned that US OBs consider full term to be 40 weeks but European OBs consider it to be 41. So a lot of US women have labor induced because they are “late,” when in Europe they’d be right on schedule. I became convinced that doctors can’t do statistics.

  3. Justin says:

    This seems like an example of base rate neglect. If the test for the embryos is 90% accurate, that doesn’t mean a bad result leaves only a 10% chance of the embryo being viable.

    Let’s say, for instance, that 1 out of every 1,000 embryos is not viable; that 1/1000 is the base rate. Then, by Bayes’ theorem, the chance that an embryo flagged by the test really is non-viable is (0.9 × 0.001) / (0.9 × 0.001 + 0.1 × 0.999), or only about 0.9 percent, so a flagged embryo is still overwhelmingly likely to be viable.

    I’m aware that this result doesn’t “feel” right, but if you Google “base rate neglect” you’ll find that it’s a common logical fallacy, along with plenty of people who can explain it better than I can.
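
    Here is that arithmetic spelled out in Python (the 1-in-1,000 base rate is my invented illustration, not a real prevalence figure):

        # Bayes' theorem with an assumed base rate of non-viability.
        base_rate = 1 / 1000  # P(non-viable), invented for illustration
        sensitivity = 0.90    # P(test flags embryo | embryo non-viable)
        false_pos = 0.10      # P(test flags embryo | embryo viable)

        # Probability any embryo gets flagged, then the posterior.
        p_flagged = sensitivity * base_rate + false_pos * (1 - base_rate)
        posterior = sensitivity * base_rate / p_flagged

        print(f"P(non-viable | flagged) = {posterior:.4f}")  # ~0.0089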

  4. Jp says:

    That they defied the odds is not evidence that the odds were incorrect. Maybe they really did have a 1-in-10,000 chance of having a healthy baby, and they were that one chance. That something is unlikely is not the same as saying it is impossible.
    This is not to argue that the statistics weren’t wrong, only that beating the odds is not, by itself, evidence that they were. If you had actual evidence (like, say, information about the specific test or how it was performed, the impact of other factors, etc.), you could argue they did their math wrong. But getting lucky doesn’t make statistics useless. It actually helps us better understand the nature of probability, and how it differs from possibility.

  5. Willie says:

    I’m obviously in the wrong career. I want the job of assuring people that they have a 50/50 chance of having a boy (especially if they are Chinese). Who knows, I might even get a tip for the good news! It probably would have been cheaper to just implant the first egg they harvested rather than spend $5,000 on a test that proved unreliable. That’s like paying your mechanic $5,000 to run a test to verify that your Michelins will go 50,000 miles: he takes the tires off, bounces them once on the pavement, and if they don’t explode, collects his $5,000. Oh, and if I’m wrong, I’ll pay you your $5,000 back. I’d take that risk every time on a Lexus, but maybe not so much on a Jeep Wrangler.

  6. tim says:

    And yet think of all the children in foster homes and orphanages…

  7. Jerry Tsai says:

    While not discounting the possibility that they merely had good luck, and certainly agreeing with your point that the test results are correlated, perhaps the Pre-Implantation Genetic Diagnosis test itself is flawed.

    Such a test would be difficult, and think about how the estimate of its accuracy would have had to be obtained: you would have needed to implant each embryo, whether it tested positive or negative, and measure whether it came to term. I’d bet that sort of research was not done, which probably means they used some sort of proxy for viability, and that the test results should not be given as much credence as they purport to have.

    Since, in a very recent post, you touted the Johns Hopkins Department of Biostatistics’ videos, you should be aware that you seem to be contradicting the Department’s very existence with the title of your blog post. Actually, your invocation of correlation to explain why we should not trust the one-in-10,000 probability figure would be textbook statistics. The lesson here, more likely, is not that statistics and medicine do not mix, but that bench scientists and clinicians need to be better educated in statistical thinking.

  8. EmilyAnabel says:

    A colleague of mine opted for an early screening test during her pregnancy. Part of the test involved a visit with a “genetic counselor,” whose job it was to explain how to interpret what amounted to the Type I and Type II errors from the tests. Unfortunately, the counselor, though well-meaning, had no grasp of elementary probability. After a few moments my colleague (a researcher in biochem) realized this and spent the rest of the appointment trying gently and patiently to explain Bayes’ theorem, in hopes that this might help other patients in the future. She made no detectable progress.
