Friday, November 12, 2010

Ripley's Believe It or Believe It

Amanda Ripley has an article in the Atlantic comparing individual states to international test scores.

Given that the Atlantic seems to be publishing a few sophisticated pieces on education, I read it with higher expectations.  In cases like these, that makes me even angrier when reporters use all the trappings of scientific writing (the language, the graphs) but none of the logic that separates science from pseudoscience and ideology.  I know there is not much sense to this, but I am personally less disturbed by Jenny McCarthy, or Oprah, or Jerry Falwell, than by Ripley in this garbage masquerading as social science research.  The first group openly attacks the authority of science (and even logic) from the outside, in favor of mysticism or religion, whereas the second steals my language, borrows what little legitimacy social science research has, writes "better sampling techniques" and "correlates," and utterly fails to make a logical argument.

Basically, the premise is that even if we separate out our high-achieving kids (like white kids from Massachusetts), they are still middling in the international rankings on (math) tests.  The (barely) unstated conclusion: our education system is failing everyone, not just the poor kids.  But yes, there is diversity among the states, and some states do a lot better than others.  Massachusetts, for example, seems to be doing better.  Ripley answers why in one short paragraph, and then goes on to speculate about what to do next.

Here's my comment in response:

It is worth emphasizing here that Massachusetts' reforms are not what NCLB and RTT are instituting nationwide (did they need to kill their teachers' union, which seems to the Manifesto-writers to be the necessary first step?). Their success depends exactly on resisting the "clumsiness" of NCLB and RTT, as well as the clumsy logic in this article.
The short paragraph devoted to what works in Massachusetts (a literacy test for teachers, a test for students to get out of high school, "moving money around") is in itself a clumsy and simplistic view of reform. The paragraph describes two very specific outcome measures (a literacy test rather than a credential; a single comprehensive test for students) and one vague input measure ("move money around where it is needed"). It then concludes that "meaningful outcome measures are necessary." What makes a meaningful outcome measure? What makes the "moving money around" successful? What makes Massachusetts' test a model of student accountability, but the NY Regents exam such an apparent failure?

Part of the problem with the debate over education policy is that even the most sophisticated journalists (and Ripley is among them) take these complicated findings and butcher them in search of a coherent narrative. There are lots of tests, and they might not be comparable? Wave your hands a little, "other countries are now more inclusive, better sampling techniques," and voila: apples to apples. What, it is hard to compare across languages and cultures in content areas like reading and science? Well, math is convenient, and math is a better predictor of future earnings anyway... voila: let's just use math scores and say it is the best indicator.
The problem with this approach (and it is Hanushek's too) is that when we single out factors and indicators that matter relatively _more_, it urges us to throw the others away entirely. Teachers matter most? So let's forget about poverty. Teacher credentials don't matter? Then neither does experience. Reduced class size doesn't immediately solve our problems? Stop throwing money at that problem then.

Reporters need to stop taking Hanushek's (quiet, gentle) word for it and actually question his logic. They will find it to be quite ideological, and not bound by his data: a lot of sophisticated hand waving masking an agenda. A simple understanding of what an effect size is, what proportion of variance explained means, and basic economic (and psychological) research methods would help journalists be more skeptical of "that guy you go to for the other side of the story." This sort of false equivalency is what makes many scientists hate the majority of science reporting, and social science reporting (which is what this is) is no exception.
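For readers who want a concrete sense of what "effect size" and "proportion of variance explained" actually mean, here is a minimal sketch in Python. The test scores are invented for illustration (they are not Hanushek's or anyone's actual data); the point is that a respectable-looking group difference can still explain a small slice of the variance.

```python
import statistics

# Hypothetical test scores for two groups of students (invented numbers).
group_a = [82, 75, 90, 68, 88, 79, 85, 73, 91, 77]
group_b = [78, 72, 85, 65, 84, 76, 80, 70, 86, 74]

def cohens_d(x, y):
    """Standardized mean difference: how far apart two group means are,
    in units of their pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

d = cohens_d(group_a, group_b)

# For two equal-sized groups, d converts to a correlation r,
# and r squared is the proportion of variance in scores
# "explained" by group membership.
r = d / (d ** 2 + 4) ** 0.5

print(f"Cohen's d = {d:.2f}")
print(f"variance explained = {r ** 2:.1%}")
```

With these made-up numbers, a group difference that looks meaningful on its face accounts for only a few percent of the variance in scores, which is exactly the kind of context a skeptical journalist should ask for.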

Monday, October 18, 2010

Liberal Arts 2.0! New! Improved! Unbiased and free of any knowledge of Liberal Arts 1.0!

Permit me a little grumpiness and snark.  Pieces like this recent one in Wired, 7 Essential Skills You Didn't Learn In College (look, now with list-power, and SEO-bait!), drive me a little crazy.  They are part of a recent trend in some corners of the smart set to suppose that college needs a complete reinvention.  Look, the New Liberal Arts.  These starry-eyed future watchers operate under the very old assumption that higher education is outdated, outmoded, and not preparing our students for their lives in the future.  I am a big fan of one of the seed beds of this idea, and I am generally sympathetic to the idea that higher education needs to take the modern world into account, but journalistic forays into telling higher education how to do its job don't sit well with me.
Rather than provocative prognosticating about jobs or skills of the future, this strikes me as a few journalists and social media mavens looking at the world of education (actually, introspecting on their memories of life as students) and supposing that they have a better idea of how to organize it.  Many academics give a lot of thought to what a liberal arts education means in the modern world, and most try to design their classes to be interesting and applicable to their students' lives.  There are arguments within the academic community (for example, around Mark Taylor's provocative proposal to "End the University as We Know It" and Andrew Hacker and Claudia Dreifus's book review) that are worth having, and many informed voices weigh in.  Simply ignoring those and saying "It's the 21st Century, knowing how to read a novel, craft an essay, or calculate the slope of a tangent isn't enough anymore" doesn't serve anybody.  I'll outline what in particular bothers me about this article below, but these concerns also apply to some of the other recent criticisms of higher education curricula.

First, the "skills" offered by these "out-of-the-box" thinkers achieve their apparent novelty by simply being overly broad or overly narrow conceptions of current skills and knowledge.  It would no doubt be totally awesome to be skilled at "Finding" (an actual chapter in this book), just as it would be awesome to be the hitchhiking fingersmith from Roald Dahl's The Wonderful Story of Henry Sugar and Six More.  Unfortunately, in the real world, magicians have to learn every trick they do independently, musicians have to learn each instrument, and I can beat the best squash player in the world in ping pong (or at least I could in college), because even the skill of "hitting a small ball with a racquet" is too general.

However, just because magicians have to learn each trick separately does not mean that there aren't some general rules and principles (like Penn and Teller's Principles of Sleight of Hand).  For example, in any given introductory composition class, or in most writing classes across the curriculum, one learns rules of expressing oneself clearly and directly... or, put plainly: writing classes teach you how to write.  You don't need a separate class on "Brevity" or "Writing for New Forms" (Wired Skill #6).  Take a non-fiction writing class, a creative writing class, or a poetry class; they will put you on your way to making your tweets and your blog posts clear, direct, and interesting.

Second, all of the wonder at the networked world can make us lose sight of the fact that most knowledge has a structure.  Knowledge is not a stream to be poured into a waiting mind, but rather a building to be constructed.  To teach my students how the eye works, I need to first teach them a little bit about the nature of light, how neural transmission works, and how the two lenses of the eye bend light.  To understand how the brain works, it helps to know what the amygdala, hippocampus, cingulate gyrus, basal ganglia, etc., are.  We have prerequisites in the college curriculum not just to limit class size, but because it is the nature of certain knowledge to be dependent on other knowledge.  We might desire to jump right into "Applied Cognition," or an interdisciplinary program about "Water" (one suggestion from Taylor's op-ed), but it makes little sense to talk about water systems engineering without some knowledge of basic principles of physics and engineering.  It makes little sense to talk about Applied Cognition without any knowledge of how and why psychology is a science, and some background facts of cognitive psychology.

Here are a few point-by-point take-downs:

Wired Skill #1: Statistical Literacy
(Quotes from the original Wired article are indented)
Why take this course?  We are misled by numbers and our misunderstanding of probability.
What will you learn?  How to parse polls, play the odds, and embrace uncertainty.  
Hey, guess what, we've got that.  You may have missed it, because it is called Statistics.  Statistical literacy is also offered in the Psychology Department, where it is called Research Methods and Statistics.  It is also offered in the Sociology, Political Science, and Economics departments, where it can be called Research Methods.  Make no mistake, each of these is offering statistical literacy, albeit for that discipline.  Many of these courses use Darrell Huff's How to Lie with Statistics, or one of the other books mentioned.  But actually, one of the best ways to gain statistical knowledge and skills is to have a teacher skilled in statistics design assignments and activities for students with your background knowledge and goals.  Sometimes an experienced teacher will combine their expertise in the subject matter with their experience of how students learn the topic, and write a textbook (how terribly 20th century of them!  Why don't they just do a wiki?).  Some of these textbooks are fantastic ways to learn about the subject (yes, some of them suck, but that is mostly because it is really hard to write a textbook, not because it is made of paper and ruled by evil publishing companies).
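Incidentally, the central lesson of Huff's book fits in a few lines of code. Here is a minimal sketch in Python, with invented salary numbers, of how the choice of "average" changes the story:

```python
import statistics

# Nine modest salaries and one executive's (numbers invented for illustration).
salaries = [32_000, 34_000, 35_000, 36_000, 38_000,
            39_000, 40_000, 42_000, 44_000, 900_000]

mean = statistics.mean(salaries)      # dragged upward by the single outlier
median = statistics.median(salaries)  # resistant to the outlier

print(f"mean:   ${mean:,.0f}")    # the press release: "average pay is six figures!"
print(f"median: ${median:,.0f}")  # the reality for the typical employee
```

Both numbers are "the average," and both are arithmetically correct; a statistics course (by whatever name) teaches you to ask which one you are being shown, and why.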
We use only 10 percent of our brain! That familiar statement is false—there’s no evidence to support it. Still, something about it just sounds right, so we internalize it and repeat it. Such is the power—and danger—of statistics.
Agreed.  This statistic is false.  However, the reason it is false is not based on statistics, but on knowledge of the brain, the relationship between the white matter and the gray matter, etc.  There is an excellent discussion of this in 50 Great Myths of Popular Psychology, a book my students read in General Psychology.
Our world is shaped by widespread statistical illiteracy.  We fear things that probably won’t kill us (terrorist attacks) and ignore things that probably will (texting while driving)
No.  The reason that we fear terrorist attacks and not texting drivers is a well-known cognitive bias called the availability heuristic.  It has little to do with statistical illiteracy, and more to do with our natural mental tendencies and how we make decisions with emotions.  The natural resistance of our brains to making decisions based on statistics rather than emotions could be taught in a statistics course along with the difference between a mean, a median, and a mode, but cognitive biases are not the same as statistical illiteracy.
Also in this department: Personal Data: The self may be unknowable, but it is not untrackable.  It is now easier than ever to tap into a wealth of data - heart rate, caloric input and output, foot speed, sleep patterns, even your own genetic code - to glean new insights and make better decisions about your health and behavior.
This is, on the surface, a wonderful idea, but it is silly.  Unfortunately, we no longer live in the age of the citizen scientist.  Ben Franklin and Thomas Jefferson could go into their backyards, observe the animals in the creek, and actually contribute to science.  But now, to "glean insight" into any single variable above, you need graduate education in that topic.  This is obviously true for the genetic code, but as someone who collected heart rate data for my dissertation, I can attest that it is not interpretable without significant training and guidance.

Wired Skill #4: Applied Cognition: How the Mind Works and How to Make It Work for You
In just about any college catalog I can find, there is a course (and I have taught it) called "Cognitive Psychology."  This course teaches how the mind works, and how to apply this knowledge to your own life, like using the science of memory to make your studying more effective.  This course often assigns books such as Barry Schwartz's Paradox of Choice (look, he teaches Introduction to Psychology) and Jonah Lehrer's How We Decide.  But often, to delve into the experiments themselves and look at the data (this is science, after all), you need an experienced guide, and yes, sometimes a textbook, with questions, assignments, terms to know, etc.

So, what would I suggest to the authors of this article (hailing from planet Snarkmarket)?  Rather than arguing that the liberal arts are outdated, why not take a look at current liberal arts classes and curricula, and realize that you are actually arguing for their continued vitality in the modern world?  Read some of the academics who are struggling with keeping the liberal arts rigorous and relevant without turning them into vocational training programs.  Finally, consider that if we keep hyping the inadequacy of liberal arts 1.0 (are we really only at 1.0, after at least 100 years?) we may not end up with 2.0, but rather, just a lot less of 1.0.

Thursday, October 14, 2010

The Scientific Case for a Liberal Arts Education

Those in academe have no doubt heard that in the face of a tight budget, SUNY-Albany has cut several departments and the tenured professors in them.  French, Italian, classics, Russian, and theater will no longer be programs at the flagship state college in New York.  Stanley Fish has an interesting column describing this development, noting that "The Crisis of the Humanities Officially Arrives."  I agree that there is a crisis, but I think it will soon be broader than just the humanities; this action reflects an attitude of thinly-veiled contempt for the liberal arts and for the life of the mind.  While they have come first for the humanities, the arguments used against these particular departments could apply to much of the traditional college curriculum.  For me, it is a good moment to argue for the vitality and utility of the liberal arts, using some arguments from the science of psychology and cognitive neuroscience, as well as some of the humility demanded in studying these fields.

In considering how to respond, Fish points out several old arguments that won't work.

Well, it won’t do to invoke the pieties [that] ... the humanities enhance our culture; the humanities make our society better — because those pieties have a 19th century air about them and are not even believed in by some who rehearse them. 
And it won’t do to argue that the humanities contribute to economic health of the state — by producing more well-rounded workers or attracting corporations or delivering some other attenuated benefit — because nobody really buys that argument, not even the university administrators who make it.

I think Fish is correct in saying that these arguments won't work, and he resigns himself to the possibility of politics, or of a limited few powerful people pushing some important buttons because they personally value French, or theater.  But I think the rest of us should not breathe a sigh of relief; we should attack the assumption that these programs are less necessary than ours.  This logic will quickly lead to our own doorstep, because most of us do not have a firmer foothold than theater, or French, or Russian when it comes to direct economic utility, or contribution to society or culture.  When you consider where these arguments take us, you quickly come to the conclusion that people should be in professional training programs as soon as humanly possible.  Why waste time studying y if you know that you are going to do x?

But, as the title to this post declares, I think there is a strong case for studying many things, including theater, French, and classics.  I believe this case is first made by several studies which I will outline below.  But further, these studies (and many others) should urge us to be humble in the face of our increasing drive towards narrow training at the cost of education, and towards applied pursuits in search of a specific goal at the cost of basic intellectual inquiry in pursuit of the pleasure of knowing.  This trend of more training and less education is pervasive in our current educational system, and can be seen everywhere from K-12 reforms like NCLB and RTT (which evade politically controversial curriculum changes, but end up coercing teachers to become reading "trainers" rather than seeking to instill a love of reading and knowledge) to other accountability measures, coming soon to a college near you (paywalled Chronicle of Higher Ed piece, but you get the idea).

So, what is the scientific case for French or Italian or Russian?  First, there are diverse cognitive benefits to bilingualism.  Ellen Bialystok's work has documented that bilinguals have early trouble with their competing languages, but that this leads to very long-term and general benefits for what is called executive functioning (brief Washington Post summary), which generally concerns distributing one's mental resources.  Bilinguals are therefore more able to ignore distracting information, even in some basic, non-language tasks.  Her recent research suggests that bilingualism can delay the onset of dementia by an average of 5 years (any economist want to calculate the cost to society of 5 years of dementia?).  Further, it seems that learning two (or more) languages can enhance our concept formation and cognitive flexibility (when we understand how words can have subtly different meanings in different languages, it illuminates the flexible nature of language itself) (short blurb here).

Second, there are social, cultural, and ethical advantages to studying a foreign language (especially studying abroad).  Yes, the students love it (but they also seem to drink and party more, so that is no surprise).  But study abroad programs also enhance creativity (this link to the original article in a psych journal probably won't work).  Study abroad also enhances cross-cultural tolerance and global awareness.  This tends to be a goal of a college education, but it can't be done by tolerance seminars, or even hundreds of generic exhortations.  You can't just learn to be generically tolerant; you have to learn a particular culture.

Finally, I think a take-home message we should all get from the science of why there is value in the humanities (and the liberal arts in general) is that we should be humble in our drive to tie education to specific and direct goals.  This approach is short-sighted, not just because bilingualism improves creativity and prevents cognitive aging, but because most of the effects of any sort of education are very, very hard to measure.  We psychologists can assail education research for not providing clear answers on anything, but at some point we have to conclude that the kind of clear answers we want just don't exist.  Assessing the independent value of a good kindergarten experience (for example) is incredibly difficult, if not impossible.  But in our striving for accountability (such a reasonable-sounding goal), we are increasingly narrowing our educational goals to those that are easier to measure.  This first drives out the humanities (theater!  how do you measure outcomes of that?), but eventually it will drive the mind out of the academy and make trainers of us all.  And ironically, I think we'll find that the job training and all those 21st century skills didn't turn out to be "trainable" skills at all, but depended on the broad body of knowledge that we have been working on for over 200 years.

Tuesday, September 21, 2010

Skills and Knowledge, and Evil Standardized Tests

In a recent op-ed in the New York Times, Susan Engel, the director of the teaching program at Williams College, decries our educational system's reliance on standardized tests.  I am very sympathetic to this view, but for different reasons than Engel.  She sees the rote memorization that current standardized tests assess as trivial, and suggests:
Instead, we should come up with assessments that truly measure the qualities of well-educated children: the ability to understand what they read; an interest in using books to gain knowledge; the capacity to know when a problem calls for mathematics and quantification; the agility to move from concrete examples to abstract principles and back again; the ability to think about a situation in several different ways; and a dynamic working knowledge of the society in which they live.
And a response by Jonah Lehrer on his blog Frontal Cortex over at Wired:
Lehrer says that the tests are bad because knowledge is fleeting, and what really matters are perseverance and diligence.  These are the traits of successful students (highly correlated with grades) and employees, so why don't we measure them directly?
I was moved to write a response, because I feel both of these pieces represent misconceptions about the differences between knowledge, skills, and traits, and about the problems with assessment.
First, I really think that Engel has mostly the right idea, and I am very sympathetic to her criticism.  She seems to accept that testing is inevitable, and that we need some metrics of success in schools.  Further, her critique is not of all testing, but rather of the particular form of our current tests, which values convenience and ease of interpretation over what we actually value in our children.  However, I feel that she falls into the trap of separating knowledge from skills, and saying that what we really want from education is skills (true enough) and that we can get them without resorting to teaching boring factual knowledge (untrue).
Lehrer cites his own experience in an organic chemistry class, in which the professor noted that students will forget all of the material, but that the class (and grades in the class) was a way to identify those students who had enough grit to stay up late and cram tons of facts into their heads for a limited amount of time.  The class was therefore not just a class on organic chemistry, but rather a class on "learning how to learn."  Unfortunately for the hopes of many in higher education, cognitive psychologists have found that "learning how to learn" any general thing is not really possible: skills of close reading, critical thinking, and abstract thought are quite specific to the particular background knowledge of the topic.
In Daniel Willingham's book "Why Don't Students Like School?" he devotes a chapter to the evidence behind his claim that "factual knowledge precedes skill." We think that knowledge is fleeting, or trivial, but we have that impression because once knowledge begins to be used, it is thought of as skill. But most of the things we think of as skill are based on a foundation of factual knowledge. Lehrer may think that he simply flushed away everything he learned in organic chemistry, but his skill as a science writer is informed by some of those facts, whether he knows it or not. Likewise with Engel's suggestion that we elevate noble skills over trivial memorized facts. The skills she imagines:
1) the ability to understand what they read;
Reading comprehension depends critically on the background knowledge of the reader. Efforts to independently assess reading skill inevitably find that those who have more background knowledge in the topic area understand more. See Recht and Leslie (1988) for a study which compared "good" readers and "poor" readers on topics in which they did or did not have background knowledge.  Here is Daniel Willingham on why reading is not a skill.
2) the capacity to know when a problem calls for mathematics and quantification;
Again, this can be surprisingly specific to one's area of expertise and background knowledge.
3) the agility to move from concrete examples to abstract principles and back again;
Again, many studies in cognitive psychology have shown that abstract thinking ability is strikingly specific. Professionals in one domain, whose expertise you might think would make them better "abstract thinkers" or "critical thinkers," turn out to be merely average when tested on a task outside their domain that requires abstract thought.
4) the ability to think about a situation in several different ways;
This again is specific to background knowledge: people are generally limited to relating situations to things they already know.
5) and a dynamic working knowledge of the society in which they live.
Here we have knowledge, but not rote memorization; rather, a dynamic working knowledge of society. But again, what makes this knowledge dynamic and working as opposed to static and trivial? For example, what if Engel wanted high school students to be able to reason about racial inequity in our current society? Wouldn't this depend on whether they knew the history of the civil rights movement? Or demographic facts about our country?
But I agree that the emphasis on standardized tests is ill-conceived.  And I agree with Engel's suggestion that we get students interested in using books as a way to gain knowledge.  For me, this is the real tragedy of our current crop of education reform: a totally backward and ham-handed approach to motivation and interest, both from the perspective of the teachers as well as the students.
The current regime of high stakes testing is not a problem because knowledge is trivial, but because constant narrow testing is actually a terrible way to motivate students to get this knowledge. What should we be doing? Getting kids excited about reading. Teaching them interesting content in science, social studies, literature, etc. And yes, improving their vocabulary, and their general background knowledge.  If we did that, I think we would find their test scores magically rising.
Further, we need to recognize that basing teachers' pay on students' test scores is also a terrible way to motivate teachers.  For many teachers (myself included), a primary challenge is to instill as much knowledge as possible while maintaining motivation and interest.  This is in the context of the fact that our brains were not meant to think (most of our brain power and sophistication is devoted to perception and movement).  One problem with high stakes testing is that a too-strong incentive leads to limited learning, and paradoxically, lower motivation.  Among the first to observe how too large an incentive limits learning was Edward Tolman, in his classic paper "Cognitive Maps in Rats and Men" (1948).
If rats are too strongly motivated in their original learning, they find it very difficult to relearn when the original path is no longer correct.
Tolman ends his paper with the following words:
We must, in short, subject our children and ourselves (as the kindly experimenter would his rats) to the optimal conditions of moderate motivation and of an absence of unnecessary frustrations, whenever we put them and ourselves before that great God-given maze which is our human world. I cannot predict whether or not we will be able, or be allowed, to do this; but I can say that, only insofar as we are able and are allowed, have we cause for hope.
In my next post, I will discuss which theory of learning and motivation we have chosen instead of Tolman's (and that of the psychologists who have followed him).

Wednesday, June 02, 2010

Shop Class as Soulcraft vs. the Checklist

I am now reading Matthew Crawford's Shop Class as Soulcraft (original essay with the same points, NYT review) and am finding it very interesting, if a little simplistic and categorical, and the evidence too anecdotal (I guess that's what you get from a philosopher: "Here, take this situation, let me reason about it, and make generalizations about all of American corporate culture from that one story").  But I wanted to write this down (to get reactions, and to force myself to put words to paper) because as I was disagreeing with some of his generalizations, I found myself thinking of Atul Gawande's The Checklist (original New Yorker essay, NYT review).
Crawford's big point is that the "knowledge worker" has basically had the life sucked out of him by a corporate culture in which there are no objective criteria of evaluation.   Nothing that he does has an easily observable and demonstrable effect, so it all comes down to rhetoric and feelings.  If you feel good and a part of the team, and everyone has a warm fuzzy feeling about their company and their brand, then you have done your job well.  Crawford compares this to his motorcycle shop (or most trades) where either the bike runs or it doesn't.  Basically, whereas breathless futurists (or educational reformers) have said that we'll all be knowledge workers in the future, so we better go to college and prepare to think for a living, Crawford is saying that this supposed "knowledge work" that awaits us is not Google, but Dilbert and the Office, and it is soul-sapping and bad in all sorts of ways.
"There is a pride of accomplishment in the performance of whole tasks that can be held in the mind all at once, and contemplated as whole once finished.  In most work that transpires in large organizations, one's work is meaningless taken by itself"  p.156
One way that he attacks modern work is its reliance on algorithmic or recipe knowledge.  Algorithms such as these, whether used to write abstracts for professional journal articles (a mind-numbing and stupid job he had for a while) or motorcycle repair manuals (a disaster when someone who does not know about motorcycles just copies and pastes from plans they don't understand), drive Crawford crazy, and illustrate how our modern society no longer values the tacit knowledge and expertise of the skilled tradesman.  This I can agree with ... to a point.  It is certainly true for the extreme examples he cites.  But he fails to acknowledge that there are still a fair number of jobs that are a mix of "knowledge work" and trade work.

This is where the Checklist seems superior and more sophisticated to me.  There is a fair amount of tacit knowledge among surgeons (and the pilots and construction workers profiled in Gawande's book), but a checklist is also an important supplement to their own expertise.  This doesn't have to be soul-deadening or frustrating, as Crawford depicts it, but can free our minds to do the amazing pattern recognition that our expertise allows.  Rather than dismissing the algorithm as comparing humans to computers and finding them not rule-following enough, Gawande shows that there are some situations which are so complicated that they need a checklist.  Crawford has a disdain for "teamwork" in the corporate setting; he much prefers the solitary puzzle solving of him vs. the motorcycle, but he doesn't acknowledge that the very fact that the motorcycle exists is due to specialization, teamwork, and yes, some recipe following.

I do agree with Crawford that some knowledge work, by separating us from the effects of our labors, corrupts morals, inhibits learning, and degrades the purpose and value that our work holds.  But I do think that some of this is necessary, and we should try to do the best we can with it (we are not going back to small businesses making cars, TVs, furniture, appliances, etc.).  Also, there are a lot of interesting professions which are somewhere in the middle of the shop class vs. mathematical physics divide (a convenient straw man throughout the book is his dad, who offers pure equations and formulas when the world of a 1983 VW carburetor has dirty nuts and bolts).  Doctors need trade knowledge, but they also need to utilize the science of a knowledge worker.  Teachers need to have trade knowledge of their students and what makes them seem happy, but also the science of memory and learning.  If we could acknowledge that many professions are both trades and knowledge work, instead of glorifying one at the expense of the other, I think we would be much better off.

Thursday, February 25, 2010

Learning Styles: What's Being Debunked

I have a response to an article on learning styles from Teacher Magazine.
Here is the original article, and here is the link for my response, which requires a site username and password, so the gracious editors have allowed me to post it here as well. By the way, you can sign up for a free account, which I would recommend if you are interested in this sort of stuff. So, here is my piece:

In her recent article “The Bunk of Debunking Learning Styles,” Heather Wolpert-Gawron makes a plea for common sense in the face of research findings that contradict her direct observations of learning styles in the classroom. She cites a recent article (“Learning Styles: Concepts and Evidence” by Harold Pashler, Mark McDaniel, Doug Rohrer and Robert Bjork, in the journal Psychological Science in the Public Interest) which claims that there is no scientific evidence that learning styles exist, and argues that the knowledge she’s gained during her 11-year career in the classroom proves that they do. As a psychological scientist and a son and husband to classroom teachers, I feel the need to respond.

First, it is necessary to clarify the definition of learning styles and the predictions of learning styles theory. Second, I want to pinpoint what the “debunkers” in question are claiming, which I think is more specific than Ms. Wolpert-Gawron describes. Finally, since she seems to believe that basic science is useless when it comes to the practice of teaching, I want to describe how basic cognitive science can apply to teaching. Although the scientific search for evidence of learning styles has yielded no evidence of their existence, basic psychological science can help teachers, even as it steers clear of dictating exactly what works in any individual classroom.

Learning Styles Defined

We must begin with how learning styles have been defined, both in the research literature and in educational practice. Learning styles theory does not propose generic differences between how students learn, but asserts a specific kind of difference. A learning style, by the prevailing account, is a preferred mode of learning, distinct from ability, and independent of content area. For example, a visual learner is not necessarily better at learning math or geography than other students, but a better learner when any material is presented visually, compared to other modes of presentation. This may not be Ms. Wolpert-Gawron’s definition of learning style, but it is the definition used by researchers for over 50 years, as well as by the educational policy makers who are currently implementing learning styles theory. For example, although multiple intelligences may seem similar to learning styles, Howard Gardner has made it quite clear that multiple intelligences is a theory of abilities, not of styles. The current learning styles theory defines “mode of learning” as a preferred sensory channel, either visual, auditory, or kinesthetic, but there have been many ways of defining “mode” in the past.
Why is it critical that a learning style be distinct from content and ability? Because one important claim of learning styles theory is that no one learning style is superior to another. If visual learners learned math faster, and kinesthetic learners learned basketball faster, we wouldn’t need to label them with learning styles at all; we could say that one group has mathematical aptitude and the other athletic aptitude. Unlike decisions about what works in any given classroom, which are for individual teachers to make, learning styles is a theory of how the mind works, and it is framed in a way that makes it suited to controlled scientific testing. The key scientific claim of learning styles theory is that we could teach two classrooms of randomly assigned students the same content, but one would be taught “visually” and one “auditorily.” The visual learners should do better than the auditory learners in the visual classroom, and vice versa in the auditory classroom. If everyone does better in the visual classroom, then we would conclude that the content is more suitable for visual presentation. If the “visual learners” do better in both classrooms, then we have identified an ability, not a style. This “matching styles to instruction” pattern of relative differences in learning is the evidence that the authors of the paper above searched for in the scientific literature. Several studies claim to support learning styles but did not perform this critical test. Those few that did satisfy this design failed to find evidence for learning styles.

What the Debunkers Do, and Why

These researchers have identified the central claim of learning styles theory, and failed to find any scientific evidence for this particular claim, despite many relevant studies. Ms. Wolpert-Gawron accuses them of invalidating the practice of differentiating learners at all. She suggests that they don’t mention the “alternative —that of teaching all students the same way.” This is not the alternative that the scientists have in mind. One representative quote from the article is, “it is undeniable that the instruction that is optimal for a given student will often need to be guided by the aptitude, prior knowledge, and cultural assumptions that student brings to a learning task.” In other words, obviously students differ, just not by learning style.
To refute her view of the debunkers, Ms. Wolpert-Gawron offers many examples from her own experience which show that learners are different. While this is not the claim of the scientists, why should they bother debunking this theory at all, if many people define it as generally as Ms. Wolpert-Gawron does? Those scientists who debunk learning styles do so in order to remove the obstacles to teachers’ focusing their attention on dimensions of learners that both science and practice have identified as critical. In their words, “assuming that people are enormously heterogeneous in their instructional needs may draw attention away from the body of basic and applied research on learning that provides a foundation of principles and practices that can upgrade everybody’s learning.” Learning styles theory distracts teachers from principles and practices that we all agree are successful.
Ms. Wolpert-Gawron clearly agrees, stating that the engagement of all students is crucial to learning, but she maintains that a learning styles approach fosters attention to student engagement. Perhaps this is true for the way that she has defined learning styles, but it is not true for learning styles theory as a scientific theory of mind, as it is applied to many teacher evaluations or state standards. Enforcing attention to learning styles directs teachers to a particular method of student engagement, and necessarily away from another. For example, to illustrate a certain concept, one could tell a very engaging story, related to the lives of the students themselves. But if one were a teacher with intense time pressure, meetings galore, and multiple classes to prepare (which is to say, any teacher), learning styles theory would encourage attention to the sensory modality of the story (visual, auditory, or kinesthetic), rather than to the meaning of its content, its intrinsic interest, and its appropriateness for the particular lesson of the day. This doesn’t seem to trouble many teachers, who, like Ms. Wolpert-Gawron, have been happily ignoring the central claims of learning styles theorists. However, this may not be the case for beginning teachers, or for teachers who are stringently evaluated by arbitrary criteria based on the myth of learning styles.

The Role of Basic Cognitive Science in the Classroom

The misunderstanding of the scientific claims does undermine the rest of her article, but as a cognitive scientist who often reports research findings to my family of teachers, I feel it is important to confront the gaps she mentions (and doesn’t mention) between research and practice, as well as her clear disdain for the scientists in question and their unwelcome incursion into her classroom. She is not unique in this attitude, nor is it limited to learning styles. This gap between basic research in cognitive psychology and the practice of teaching has negative consequences for each side. In their distrust of basic science, teachers miss an opportunity to improve their students’ learning by applying their expertise to the most relevant dimensions of learning. In allowing this distrust to persist, scientists undermine the public’s trust in the value of basic science in understanding human behavior. Just as the science of medicine need not undermine the expertise of a doctor, the science of psychology need not invalidate practice-based knowledge, but rather can supplement it with general theories of the mind and learning, without direct prescriptions for what to do in a given classroom situation.
In order to repair this distrust, scientists must first summarize our findings for audiences outside of our community, with an eye toward informing educational practice. In doing so, we need to describe our basic science findings as theories of how the mind works, not straightforward recipes for educational reform. Daniel Willingham’s recent book “Why Don’t Students Like School?” may not do a great job answering the question in its title, but it serves as an excellent summary of consensus views in cognitive science as they apply to education (the learning styles and multiple intelligences chapter is particularly cogent and insightful). But we can’t stop there. We must also dispel myths, and we in psychology have a larger set of myths to dispel than most. Where these myths exist, they are corrosive to science, because while seeming to represent science (“well, it says it’s a theory”) they do not provide the measurable, reliable results that science demands. These myths are perpetrating identity theft of science, calling themselves science and wreaking havoc on our credit scores, yet many scientists don’t connect the bankruptcy of public trust in science with the myths that we let roam freely. In the case of learning styles, minimal evidence has been exaggerated and marketed to educators and administrators, outside of the checks of the scientific process. As scientists we must take greater efforts to rein in this misapplication of science. The recent article on learning styles that Ms. Wolpert-Gawron refers to is an example of this, as is another excellent book, “50 Great Myths of Popular Psychology,” by Scott Lilienfeld, Steven Jay Lynn, John Ruscio and Barry Beyerstein (learning styles is number 18).
What’s more, the journal in which the learning styles article appeared, “Psychological Science in the Public Interest,” has the laudable goal of “giving psychology away,” with both topics and authors commissioned after a careful nomination process.
In addition to summarizing the scientific consensus and dispelling myths, the basic science of learning should clearly state the questions we cannot yet answer, and get out of the way of expert teachers. Experienced teachers certainly have knowledge that science does not. Because practice-based knowledge is gained through classroom experience, it can sometimes be specific to the population a teacher serves, rather than general knowledge of how people learn (just as baseball players are not necessarily experts in the general rules of projectile motion). It is not basic scientists but political reformers who are turning scientific theories into coarse criteria for evaluating teachers based on test scores or a simple checklist. Basic psychological scientists are in general cautious, as well as skeptical of attempts to apply general theories directly to particular classroom situations. What Ms. Wolpert-Gawron interprets as science telling her what she sees in her classroom is in fact scientists summarizing what they don’t see, despite having looked in the best ways they know how. The authors of the study, in my mind, are attempting to empower teachers to use the principles of learning that they know work, while encouraging them to steer clear of myths, which may have a scientific-seeming provenance (if it’s from Harvard…), but have not received rigorous scientific support for their central claims.
I argue that basic science can concern itself with general mechanisms while teachers practice applied science in their own classrooms, but what happens when there seems to be a direct confrontation? How do we decide between the scientist in his sterile lab and the expert teacher with 11 years of experience and 2,500 students? In other words, why should experienced teachers let scientists tell them what is and isn’t a myth when common sense dictates otherwise? Because although personal experience is compelling and convincing, human beings are notoriously bad at direct observation of complex relationships. Our stone-age brains notice patterns that aren’t there, seek out evidence that confirms our preconceived notions (the confirmation bias), and ignore evidence that might prove us wrong. This is just as true for surgeons and scientists as it is for teachers, and controlled observation, whether in a scientific lab, through a double-blind study, or using randomized assignment to experimental groups, is an absolutely critical element of the success of science in explaining and predicting complex human phenomena. This holds equally true whether it be the spread of disease or the process of learning. The history of common sense is a history of being remarkably wrong, even among experts who have seen thousands of cases. Scientists are people too, and so we don’t trust our own observations any more than anyone else’s, but rather use them to inform what should be tested in a controlled study. If controlled study after controlled study fails to observe, or offers evidence contrary to, our most cherished beliefs, we have no choice but to give them up.
The goal, then, is a collaboration to arrive at the most relevant dimensions of learning, and the most effective ways of teaching, respecting the expertise of the teacher but accepting that, in some cases, science can point out where myths exist. For example, the science of cognitive psychology can point to the necessity of practice (for example, through drilling) and background knowledge for deeper learning, as well as the ways in which motivation and engagement are critical to learning, but cannot offer an ideal way to balance these in an American History lesson for English Language Learners. The science can note that there is considerable evidence that the organization and meaning of a lesson have a large effect on learning, and little evidence that the color of the ink, or whether the words are on a page or on a blackboard, has any effect. This does not mean that ink color doesn’t matter in any classroom, just that teachers should be cautious about spending time on the color of the ink and err toward thinking about the structure and meaning of their lessons. However, only the teacher can apply these considerations to her subject and the particular students in front of her. As Ms. Wolpert-Gawron notes, teaching to learning styles is far more difficult than not, but science here is trying to offer a way to make things easier. Just as science in medicine can call a doctor’s attention to a manageable set of indexes of health, science in education should aim to suggest to our teachers a set of relevant dimensions of learning, with the understanding that teaching is immensely complex. While basic science can offer theories and insight into how learning works, no one knows the particular students in front of her better than the experienced teacher.