Posts Tagged ‘testing’


More Evidence on Charter Schools

Writing at Slate, Ray Fisman reviews the latest research on the efficacy of charter schools.  The study focuses on students at six Boston schools that had previously demonstrated an ability to improve students’ test scores on the Massachusetts Comprehensive Assessment System.  This time, however, the researchers wanted to evaluate whether the schools really improved student outcomes or just mastered the art of “teaching to the test.” Here’s the breakdown:

The study examines the college readiness of Boston public school students who applied to attend the six charter schools between 2002 and 2008, with projected graduation dates of 2006–2013. In just about every dimension that affects post-secondary education, students who got high lottery numbers (and hence were much more likely to enroll in a charter school) outperformed those assigned lower lottery numbers. Getting into a charter school doubled the likelihood of enrolling in Advanced Placement classes (the effects are much bigger for math and science than for English) and also doubled the chances that a student will score high enough on standardized tests to be eligible for state-financed college scholarships. While charter school students aren’t more likely to take the SAT, the ones who do perform better, mainly due to higher math scores.



Font Improvement

I write all my papers, letters, and exams using the typeface Times New Roman.  As a lunch-table discussion here in England revealed, the University insists on certain typefaces that are dyslexia-friendly, particularly Arial, Trebuchet, and Verdana.  It costs me or any other faculty member nothing to use one of these on exams; non-dyslexic students are not harmed by them, and dyslexic students are better off.  Henceforth, no more Times New Roman on tests — mine will all be in Arial.  A clear Pareto improvement. (HT: MS)



Time Between Tests

A new working paper (abstract; PDF) by Ian Fillmore and Devin G. Pope examines whether “cognitive fatigue” has any impact on exam results. The researchers looked at the number of days students had between AP exams, and found that resting time matters:

In many education and work environments, economic agents must perform several mental tasks in a short period of time. As with physical fatigue, it is likely that cognitive fatigue can occur and affect performance if a series of mental tasks are scheduled close together. In this paper, we identify the impact of time between cognitive tasks on performance in a particular context: the taking of Advanced Placement (AP) exams by high-school students. We exploit the fact that AP exam dates change from year to year, so that students who take two subject exams in one year may have a different number of days between the exams than students who take the same two exams in a different year. We find strong evidence that a shorter amount of time between exams is associated with lower scores, particularly on the second exam. Our estimates suggest that students who take exams with 10 days of separation are 8% more likely to pass both exams than students who take the same two exams with only 1 day of separation.



Evaluating Teachers: What About Doing it the Old-Fashioned Way?

As part of our ongoing obsession with improving public education, we bring you a new study from Jonah E. Rockoff of Columbia Business School and Cecilia Speroni, a former doctoral student at Columbia’s Teachers College, that explores the power of objective and subjective teacher evaluations. While an emphasis on merit pay and test scores can lead to widespread cheating (as covered in this week’s Freakonomics Marketplace podcast), not to mention the occasional Matt Damon outburst, Rockoff and Speroni offer a potential glimmer of hope for the old-fashioned approach: the study finds that subjective teacher evaluations for New York City teachers had strong predictive power for future student performance. Here’s the abstract:



Am I Good Enough to Compete In a Prediction Tournament?

Last spring, we posted on Phil Tetlock’s massive prediction tournament: Good Judgment. You might remember Tetlock from our latest Freakonomics Radio podcast, “The Folly of Prediction.” (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)
Tetlock is a psychologist at the University of Pennsylvania, well-known for his book Expert Political Judgment, in which he tracked 80,000 predictions over the course of 20 years. Turns out that humans are not great at predicting the future, and experts do just a bit better than a random guessing strategy.




Lazy Academics

It’s final exam time, and my office is crowded with a few of the 520 students in my bigger class. Although I’m pleased by their interest, I ask why they’re spending so much time on my course. The answer is that it’s the only final exam they have.



How Fit Is Your Brain?

We know Freakonomics readers love brain teasers. We hope you’ll test your brain on these five puzzlers. The results, completely anonymous, will be compiled and analyzed by the Vision Lab and the Social Neuroscience and Psychopathology Lab at Harvard University. Have fun, for science!



The Gang Test

Social psychologist Malcolm Klein devised a test for Los Angeles that he says predicts how likely a child is to join a gang, reports the Wall Street Journal. The test, which can be found here in its entirety, asks kids questions like whether they have just broken up with a boyfriend or girlfriend and how many of their friends have used marijuana. The problem: the city won’t know for several years whether the predictions are accurate.



High-Stakes Testing

Each year, a million or so high school students pay $45 for the chance to prove themselves with the College Board’s SAT. A good percentage of those students pay for the College Board’s test prep courses as well. All that testing adds up.



The Art of SATergy

My son took the SSAT exam this past Saturday. And while I was sitting in the Choate athletic facility waiting for him to finish, I remembered that Avinash Dixit and Barry Nalebuff’s new book, The Art of Strategy, has a great example concerning standardized testing. Game theory is so powerful it can help you figure out the correct answer without . . .