Friday, November 12, 2010

Ripley's Believe It or Believe It

Amanda Ripley has an article in the Atlantic comparing individual U.S. states to international test scores:
http://www.theatlantic.com/magazine/archive/2010/12/your-child-left-behind/8310/


Given that the Atlantic has been publishing a few sophisticated pieces on education, I read it with higher expectations.  In cases like these, that makes me even angrier when reporters use all the trappings of scientific writing (the language, the graphs) but none of the logic that separates science from pseudoscience and ideology.  I know there is not much sense to this, but I am personally less disturbed by Jenny McCarthy, or Oprah, or Jerry Falwell, than by Ripley in this garbage masquerading as social science research.   The first group openly attacks the authority of science (and even logic) from the outside, in favor of mysticism or religion, whereas the second steals my language, borrows what little legitimacy social science research has, writes "better sampling techniques" and "correlates," and utterly fails to make a logical argument.


Basically, the premise is that even if we separate out our high-achieving kids (say, white kids from Massachusetts), they are still middling in the international rankings on (math) tests.  The (barely) unstated conclusion: our education system is failing everyone, not just the poor kids.  But yeah, there is diversity among the states, and some states do a lot better than others.  Massachusetts, for example, seems to be doing well.  Ripley answers why in one short paragraph, and then goes on to speculate about what to do next.


Here's my comment in response:


It is worth emphasizing here that Massachusetts' reforms are not what NCLB and RTT are instituting nationwide (did they need to kill their teachers' union, which the Manifesto-writers seem to regard as the necessary first step?). Their success depends exactly on resisting the "clumsiness" of NCLB and RTT, as well as the clumsy logic in this article.
The short paragraph devoted to what works in Massachusetts (a literacy test for teachers, a test students must pass to get out of high school, "moving money around") is itself a clumsy and simplistic view of reform. The paragraph describes two very specific outcome measures (a literacy test rather than a credential, and a single comprehensive exit test for students) and one vague input measure ("move money around where it is needed"), and then concludes that "meaningful outcome measures are necessary." What makes an outcome measure meaningful? What makes the "moving money around" successful? What makes Massachusetts' test a model of student accountability, but the NY Regents exam such an apparent failure?

Part of the problem with the debate over education policy is that even the most sophisticated journalists (and Ripley is among them) take these complicated findings and butcher them in search of a coherent narrative. There are lots of tests, and they might not be comparable? Wave your hands a little ("other countries are now more inclusive," "better sampling techniques") and voila: apples to apples. What, it is hard to compare across languages and cultures in content areas like reading and science? Well, math is convenient, and math is a better predictor of future earnings anyway... voila: let's just use math scores and say they are the best indicator. 
The problem with this approach (and it is Hanushek's too) is that when we identify factors and indicators that matter relatively _more_, it urges us to throw away the ones that matter less. Teachers matter most? Then let's forget about poverty. Teacher credentials don't matter? Then neither does experience. Reduced class size doesn't immediately solve our problems? Then stop throwing money at it. 

Reporters need to stop taking Hanushek's (quiet, gentle) word for it and actually question his logic. They will find a lot of sophisticated hand-waving masking an ideological agenda that is not bound by his data. A simple understanding of what an effect size is, what proportion of variance explained means, and basic economic (and psychological) research methods would help journalists be more skeptical of "that guy you go to for the other side of the story." This sort of false equivalency is what makes many scientists hate the majority of science reporting, and social science reporting (which is what this is) is no exception.  
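
Since I keep invoking effect sizes and variance explained, here is a toy illustration of what I mean, in Python, with completely made-up numbers (nothing below comes from Ripley's article or from Hanushek's data). The point is simply that a factor can "matter more" than another and still explain only a small slice of the variance in outcomes, so "matters more" is never a license to ignore the other factor.

# Toy sketch with invented numbers: how effect sizes and variance explained
# temper "X matters more than Y" claims. Nothing here is real data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical standardized predictors for each student.
teacher = rng.normal(0, 1, n)   # teacher quality
poverty = rng.normal(0, 1, n)   # family poverty

# Assume teacher quality gets the bigger coefficient ("matters more"),
# poverty still matters, and most variation is idiosyncratic noise.
score = 0.30 * teacher - 0.20 * poverty + rng.normal(0, 1, n)

for name, x in [("teacher quality", teacher), ("poverty", poverty)]:
    r = np.corrcoef(x, score)[0, 1]
    print(f"{name}: r = {r:+.2f}, variance explained = {r*r:.1%}")

# Cohen's d for a "large-sounding" 40-point gap between two group means
# when the student-level standard deviation is about 100 points.
d = (540 - 500) / 100
print(f"Cohen's d for a 40-point gap: {d:.2f}")

With these invented coefficients, teacher quality "wins" (r around 0.28, roughly 8% of the variance), but poverty still accounts for a few percent on its own, and the overwhelming majority of the variance sits with neither. That is the kind of back-of-the-envelope arithmetic that should make a reporter pause before writing "so forget about poverty."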


