A Youth Intervention in Chicago That Works

A new NBER working paper (abstract; PDF) by University of Chicago researchers Sara Heller, Harold A. Pollack, Roseanna Ander, and Jens Ludwig analyzes the effects of a Chicago program targeted at “disadvantaged male youth grades 7-10 from high-crime Chicago neighborhoods.”  The results of the intervention look promising:

Improving the long-term life outcomes of disadvantaged youth remains a top policy priority in the United States, although identifying successful interventions for adolescents – particularly males – has proven challenging. This paper reports results from a large randomized controlled trial of an intervention for disadvantaged male youth grades 7-10 from high-crime Chicago neighborhoods. The intervention was delivered by two local non-profits and included regular interactions with a pro-social adult, after-school programming, and – perhaps the most novel ingredient – in-school programming designed to reduce common judgment and decision-making problems related to automatic behavior and biased beliefs, or what psychologists call cognitive behavioral therapy (CBT). We randomly assigned 2,740 youth to programming or to a control group; about half those offered programming participated, with the average participant attending 13 sessions. Program participation reduced violent-crime arrests during the program year by 8.1 per 100 youth (a 44 percent reduction). It also generated sustained gains in schooling outcomes equal to 0.14 standard deviations during the program year and 0.19 standard deviations during the follow-up year, which we estimate could lead to higher graduation rates of 3-10 percentage points (7-22 percent). Depending on how one monetizes the social costs of crime, the benefit-cost ratio may be as high as 30:1 from reductions in criminal activity alone.
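
Some quick back-of-the-envelope arithmetic implied by the abstract's figures (our own inference, sketched in Python below, not a calculation reported by the paper):

    # Back-of-the-envelope check on the abstract's figures; these are our
    # inferences, not numbers reported directly in the working paper.

    arrest_drop_per_100 = 8.1  # reported drop in violent-crime arrests
    arrest_drop_share = 0.44   # the same drop, reported as 44 percent

    # Implied arrest rate without the program: about 18.4 per 100 youth.
    print(arrest_drop_per_100 / arrest_drop_share)

    # The projected graduation gain of 3-10 percentage points is also quoted
    # as 7-22 percent, implying a baseline graduation rate of roughly 43-45
    # percent.
    for points, share in ((3, 0.07), (10, 0.22)):
        print(points / share)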


COMMENTS: 7


  1. Jim says:

    So a group that voluntarily decided to join a program intended to reduce crime showed reduced crime. Sounds like selection bias. How did the group that chose not to participate compare with the control group?
    The study's conclusions would be far more believable if the participant and control groups had been randomly assigned.

    • Joe Dokes says:

      If you actually read the summary, you'd notice that those who initially applied for the program were divided into two groups: one that got the intervention and a control group. The positive result is a comparison between the control group and those who received the intervention.

      The reality is that programs can be effective or ineffective. Using double blind studies can help social scientists figure out which ones work and which ones don't. Thus, studies like this are critical for spending scarce public resources wisely. For example, the DARE program, which has consumed millions of dollars over the years, was finally subjected to some scrutiny. The result: those who had been in a DARE program were no better at staying off drugs than those who did not participate. Thus those dollars should be used for something else.

      Regards,

      Joe Dokes

      • Jim says:

        Joe: This is what I get from reading the abstract:
        "We randomly assigned 2,740 youth to programming or to a control group;"
        Good so far: random assignment.
        Then it says "about half those offered programming participated, with the average participant attending 13 sessions."
        So half of the programming group participated in the programming, and half chose not to. This is self-selection, not randomization. It's poor study design. If the willing participants had been divided into control and treatment groups, it would have been a stronger result.
        How the abstract treats those selected for programming who declined to participate is not clear. Were they kept in the programming pool, moved to the control group, or dropped from the study statistics? Is the outcome of the decliners different from that of the treatment or control group?
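
        To make the difference concrete, here is a toy simulation with numbers I invented (they are not the study's). It assumes lower-risk youth are more likely to show up when offered a slot, which is exactly the self-selection worry:

            import random
            random.seed(0)

            # Invented numbers: each youth has a baseline arrest risk, and
            # lower-risk youth are more likely to attend when offered a slot.
            N = 20000
            youth = []
            for _ in range(N):
                base_risk = random.uniform(0.05, 0.30)   # heterogeneous risk
                offered = random.random() < 0.5          # random assignment
                attends = offered and random.random() < (0.8 - 2 * base_risk)
                risk = base_risk * (0.6 if attends else 1.0)  # program cuts risk 40%
                youth.append((offered, attends, random.random() < risk))

            def arrest_rate(group):
                return sum(y[2] for y in group) / len(group)

            control   = [y for y in youth if not y[0]]
            offered_g = [y for y in youth if y[0]]
            attended  = [y for y in offered_g if y[1]]

            # Intent-to-treat: everyone offered vs. control. Randomization is
            # intact, but the effect is diluted by the decliners.
            print("ITT difference:       ", arrest_rate(control) - arrest_rate(offered_g))
            # Attenders-only comparison: overstated, because attenders started
            # out lower-risk -- the self-selection problem described above.
            print("Attenders vs. control:", arrest_rate(control) - arrest_rate(attended))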

        Regards

      • MW says:

        Just a correction on terminology. This is not a “double blind study” as you state. In a double-blind study, neither the at-risk youth nor the treatment providers would know which participants were receiving the treatment.

        This is a “randomized controlled trial” (RCT). It is common for RCTs to be double blind, especially in pharmaceuticals, where a placebo* can be given in place of the drug. On the other hand, often (as in this case) the treatment can't be concealed, so a non-blinded trial is run instead.

        (* or a well established drug for the same condition, if giving a placebo would be unethical.)

  2. James says:

    I can't help but wonder, though, whether the study is using the proper metric. The program's success is being measured by a reduction in the number of arrests, but that assumes a fixed ratio between crimes and arrests. The reduction might indeed be due to fewer crimes being committed, but it might also be caused either by the police being less interested in pursuing program participants, or by the participants learning how to avoid being caught.
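
    In other words (with toy numbers of my own, not the study's): arrests are roughly crimes times the probability of being caught, so the same observed drop in arrests is consistent with either story:

        # Toy numbers, not the study's: arrests = crimes x P(caught), so an
        # identical ~44 percent drop in arrests can come from either channel.

        def arrests_per_100(crimes_per_100, p_caught):
            return crimes_per_100 * p_caught

        baseline = arrests_per_100(92, 0.20)         # 18.4 arrests per 100 youth
        fewer_crimes = arrests_per_100(51, 0.20)     # crime actually fell
        better_evasion = arrests_per_100(92, 0.112)  # catch rate fell instead

        print(baseline, fewer_crimes, better_evasion)  # 18.4, 10.2, ~10.3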

  3. Eric M. Jones says:

    “…We randomly assigned 2,740 youth to programming or to a control group; about half those offered programming participated, with the average participant attending 13 sessions.” How many sessions were offered? This is unclear.

    I'm with Jim, Joe, and James on this. The study seems badly flawed. People respond positively no matter how or why they are being studied, simply because they're now 'special' (the classic Hawthorne effect).

  4. Buzz Breedlove says:

    Based on my reading of the full report, it appears that, in measuring results for the treatment group, the authors ignored those in the treatment group who declined services. If so, that is potentially extreme selection bias. It is difficult for me to imagine serious social scientists thinking this approach is OK. Maybe I misunderstand their measurement approach?

    On Eric's point about the possible positive effects on the treatment group of simply being studied: would he suggest we run an experiment in which controls are totally ignored (how?) and in which the treatment group is not treated but simply studied, in order to measure the "study" effect? Alternatively, maybe add study costs to treatment costs when measuring the benefit-cost ratio? I guess I miss his point.
