Sunday, June 26, 2011

The web is best when it's arguing

Since I just got back into Twitter I've been thinking more about the strengths and weaknesses of the web. I am going to focus on what I think the internet does right, instead of belaboring what the net is doing to kids' brains when they spend all day on Facebook (back in my day, we spent our useless teen time on the phone!). Yes, this is continuing to riff on my Oral Culture vs. Literate Culture post from last week.

I know that looking at Facebook comment wars doesn't back this up, but I think where the internet really shines is in arguments. The recent theory by Hugo Mercier and Dan Sperber, that human reasoning evolved for the purpose of argument, uses as evidence findings that people get more reasonable when they argue. In other words, we reason better when we are trying to persuade. I am sure that is not a general rule, but when I read a good point-counterpoint I am struck by how much I can learn -- not just a journalist's "he said, she said," but a real back and forth, with supporting evidence from each side, not limited to what will fit in a 1000-word newspaper piece, as interpreted by a generalist.

Yves Smith
Ok, exhibit A: Yves Smith shreds Ezra Klein's review of Inside Job. Here's a bio of Yves Smith if you didn't know who she was. Basically, what this amounts to is an expert with 25 years of experience taking down a hasty journalistic essay, piece by piece. Yes, it has some scholarly ass-kicking, reminiscent of the letters section of the NYRB, but it backs it up with links and evidence. Smith is relentlessly critical of Klein, but retains an appreciation for the storytelling ability of Michael Lewis, Klein's apparent "expert" in explaining the financial crisis. The piece is very long, but I found it very illuminating, and it synthesized a lot of my conclusions about the financial crisis: We should not blame a few evil greedy bankers at the top, nor should we shake our heads and say that our international banking system is so complicated no one could have predicted it. This was a systemic diffusion of responsibility, much like the bystander effect, a well-documented phenomenon in social psychology in which people are less likely to aid someone in need when there are many bystanders. Further, it was a complete capture of the levers of regulation by the industry meant to be regulated. By the way, some comments are typical bickering, but one included a long review of Inside Job by Dave Stratman, who seemed to be basing it on his earlier review.

Zeynep Tufekci
Why did I like this? Because it was not just someone explaining the financial crisis, but arguing with other people, explaining why they were wrong (or incomplete). Why does it show what's great about the web? Because I think the role of the generalist journalist is (and should be) declining. Previously, popular modes of mass communication were limited by space (newspaper columns) and time (deadlines; no journalist can be expert in everything they cover) in a way that they are not now. I don't need the New York Times to employ a sociologist of technology, but when an issue comes up, I would rather read Zeynep Tufekci than Bill Keller when considering the effects of Twitter. She is a professional in that field, knows a lot of the evidence, thinks about this stuff when she goes to bed at night, and has several days to follow many of the different subthreads of Keller's argument. Keller, a really really smart guy, can read the articles that Tufekci sends, criticize them, and then cite as evidence in response the following:
My sense of Facebook, not based on research but based on some experience and observation, is that for some people Facebook creates a kind of friendship that is more superficial than the kind that grows out of hours spent together in one another’s company. Of course, social media is a way to keep in touch with real friends and expand your network of more casual, less intimate relationships. But it also makes it possible to feel like you have a meaningful social life when, in reality, you are missing something. I did not offer this as a scientific fact but as an observation and a concern.
He ends the letter with a smarmy, "and now I have to go earn a paycheck." This is exactly it. If I am interested in this issue, I want somebody like Tufekci, who looks for evidence, interprets it within a context of other evidence, and who is dogged and persistent. If some evidence is lacking, then look for more. (Here is more, by the way.) I want to read someone who wants to stick around and argue; I don't want someone to make a few "observations," share a few "concerns," and then leave. I don't want a smarter generalist, who's got great "general critical thinking skills" and is an expert at making anything readable, a nice narrative, and interesting. I want someone who has found this issue interesting for 25 years, and is going to find it interesting for another 25. I'll skim a little, forgive typos, ignore a little jargon here and there, follow and appreciate the links. And I'll tune in more and more to my trusted sources on specific issues, not a single trusted source on general issues.

Thursday, June 23, 2011

Practical Wisdom and College Teaching


Twas love at first sight! (At the 6th grade science fair)
My last post was a review of Practical Wisdom, a book by Barry Schwartz and Kenneth Sharpe, about how wisdom relies on dedicated practice, and how that practice is being stifled by our educational, legal and medical institutions. I limited the post to a straightforward review of the book, but it neatly fits into a narrative of why I love my job so much. I have the freedom to practice what I love. The particular kind of intellectual and professional freedom I enjoy as a college professor is what drew me to the profession, and it continues to sustain me.





At the beginning of every semester ... and at the end.
For me, the first element of practice is the freedom to fail without immediate consequences (besides the sinking realization that one has failed). In my teaching, I have tried different pedagogical approaches. I gave many straight lectures in general psychology, interactive demonstrations in my sensation and perception laboratory class, project-based learning in research methods, and almost pure discussion in a cognitive science seminar. All of them have failed one way or another. I take solace in the recent study by MIT economists, led by Pierre Azoulay, which suggests that scientists given more latitude to fail produce more "hits" and more "duds." In other words, the freedom to produce duds is critical to producing hits. Failure is not the opposite of success, but the raw material of practical wisdom. And I am amassing some serious raw material here.

Don't mistake this for an overall judgment of my failure as a teacher. The students felt like they learned something, and they did actually learn something from my past classes. But I have always been painfully aware of the huge missed potential for learning and inspiration. The gap between those two quantities has always gnawed away at my soul as I read over the final exams, or question the students in future classes. Any teacher who thinks their students learn the content perfectly should have the (always humbling) experience of having those students in a class a year later. But I learn from these failures, and I love being in a field where I can learn from failures without fear (i.e. practice).

It is impossible not to be wowed by the Lilac Chaser
But what have I learned from these failures? I have gotten pretty good at describing phenomena in Sensation and Perception to beginning undergraduates. I have a good sense of which topics will blow their minds, which will bore them but are necessary for deeper conceptual understanding, and which exam questions provoke the deep studying. I have learned that class discussions depend critically on class chemistry, and that I can trust some students to do the reading, but most will first see whether they can get by without it. I have learned some cultural differences between the places I have taught. There is justified grumbling about a group project with out-of-class planning when some students are commuters. I have learned that senior science majors don't always like grappling with historical primary documents the way I do.

But most of what I have learned cannot be translated into words. This is embedded in the very nature of the practice of any craft, and in the nature of the practical wisdom that Aristotle described: it is context-specific and escapes general, rule-based summary. There is no one right pedagogy, no one right way to teach General Psych, and no one right way to teach kindergarten. There are a few principles that are right most of the time, in most situations, and that are good at guiding better practice. What I have learned is likely a lot more specific than even I realize. It may be specific to my teaching persona, or to that particular class, or to my subject, or to the age of my students. I am always surprised at how different 18-year-olds are from 22-year-olds, and this means that teaching each cannot simply conform to a few general principles.

Next post:
A history of Psychology in beards
The second element of developing practical wisdom is balancing tradeoffs. Schwartz and Sharpe emphasize this, and I couldn't agree more. While I would love to "solve" teaching general psychology, the reason I find it so engaging is that it is not merely a puzzle. What works one semester doesn't work as well the next. There are trade-offs between depth (I could spend a whole class period on a debate about the policy implications of Claude Steele's stereotype threat research! or have them memorize nonsense syllables like Ebbinghaus!) and breadth (Do I have to skip evolutionary psych, or health psych, or stress?). There are trade-offs between what is fun (let's talk about personality!) and the science behind the fun (why is the Big Five better than Myers-Briggs? Well, there's some math, and some research methods, and the nature of Likert scales... Hey, why are your eyes rolling that way? This stuff is great!). It is sometimes hard to tell the difference when, to a total psych science geek like myself, it is ALL fun (I know I can totally make them love the difference between predictive validity and construct validity). There are trade-offs between extrinsic motivation (do this for a grade) and intrinsic motivation (do this because you are curious, or you want to get better at it). Part of practicing is continuing to balance trade-offs, and learning which tradeoffs are necessary in a given situation. In general, I feel that freshmen often need more extrinsic motivation (for example, regular quizzes to get them to read), whereas you can trust seniors a little bit more to work on their own. This is, of course, relative.

It's not turtles, it's practice,
all the way down
Finally, I think a critical piece of this freedom to develop practical wisdom is an acceptance of the complexity and uncertainty of assessment and evaluation. On one hand, to get better at teaching, to learn from my failures, I need to know how I failed. On the other, assessing learning is a vastly complex and noisy undertaking. Most assessments of learning require wise interpretation. This interpretation improves with practice too. My rubrics continue to evolve, just as my interpretation of them does. If the environment that I am teaching in fails to acknowledge that, and tries to precisely measure my effectiveness, I will adjust my teaching to fit that precise measure. Tie my salary to the number of students and to student evaluations (as some powerful interests in Texas are encouraging), and I will do my best to make my courses draw more students (hello, sloganeering course titles), make students happy (goodbye, rigorous writing assignments), and not worry as much about how much they learn. I exaggerate to make a point here: not all students are drawn like flies to the Chemistry of Wine, or tricked into enrolling in the Psychology of Illusion, and some do appreciate long writing assignments. The free marketers who say I should be more customer-focused may be right, but my customer is my student ... ten to twenty years later, not the first-time-away-from-home man-child in front of me. Show me someone who is successful at selling 18-22 year olds a product whose benefits are enjoyed ten to twenty years later, and we can talk. Until then, please spare me your increased accountability.

More pressure to precisely measure learning narrows teaching, and narrows learning. The successful assessments that I have seen are formative, not evaluative. No matter the evaluation, the instant that pay, or hiring and firing, gets tied to it, it becomes a hammer instead of a scalpel. We may be able to boost that particular metric if we push really hard, but that often comes at a cost to other outcomes that may be harder to measure but are not necessarily less valued. Sometimes, that outcome is acceptable. In Atul Gawande's The Checklist, he describes how pilot checklists have ensured the remarkable safety record of the commercial airliner. But I suspect this emphasis on safety has come at a cost to innovation, energy efficiency, and price (whatever happened to Valujet?). We may be ok with an overwhelming priority on safety for airplanes, but in higher education, narrow accountability will crowd out many other worthy goals, not to mention people who value academic freedom and the ability to cultivate their practical wisdom.

Ultimately, part of the reason I love what I do is that I can feel myself getting better at it (the teaching and the scholarship, if not the pithy blog posts) and I can enjoy the fruits of my labor. These fruits are not merit-based pay, but the tiny lights of inspiration, of comprehension, of curiosity, going off in the deep recesses of my students' minds. The sparkly warm glow of reading about a new finding in embodied perception. A profession is defined by practical wisdom, and practical wisdom is not dispensed like manna from the talented, but generated by accident by people just trying to get better at something they find interesting.

Tuesday, June 21, 2011

Book Review: Practical Wisdom

Practical Wisdom
By Barry Schwartz and Kenneth Sharpe

The premise of this book, written by a psychologist (Barry Schwartz, also author of The Paradox of Choice) and a political scientist (Kenneth Sharpe), is that we have a society with too many rules and perverse incentives that discourage the cultivation and use of practical wisdom. This may sound at first glance like a modern rehashing of a libertarian perspective: that we should let individual freedom and market forces lead us all to greater happiness, and do away with meddling government rules and bureaucrats.
But this book deftly shows how de-humanizing incentives can corrupt and undermine practical wisdom in many large institutions, whether it be a public school or a private hospital. The book is split into four sections. First, what is practical wisdom and why do we need it? Second, the machinery of wisdom. Third, the war on wisdom. And finally, sources of hope.

What is practical wisdom? It begins, say the authors, with Aristotle, who described what he called phronesis. Aristotle's wisdom in the Nicomachean Ethics was not a theoretical system of moral rules, but a specific, practical ethics, impossible to describe in general terms. Doing the right thing was not a matter of just knowing the right rules, but knowing the right thing to do, in the right circumstances, with the right person, at the right time. And this takes practice. This moral dimension of practice is a compelling one for me, and resonates with how I approach becoming a better teacher.

In the machinery of wisdom, they cover several bits of familiar (for me) territory in modern research in psychological science. First, that decision-making is critically dependent on our emotions. Without emotions, there are no decisions, and without empathy, there is no wisdom. Second, science has a desire to find hidden patterns, the unseen rules that drive our clockwork universe. Science has been quite successful in this effort, but our human worlds, of education, of law, of medicine, are not at all clockwork; they are so complex and uncertain that rule-based approaches are doomed to fail. Examples of this abound in artificial intelligence research, where we have thought that giving robots cameras, microphones, and an amazing analytical processor would quickly give us computers that could do simple human tasks, like navigate our environment, understand language, and recognize objects and faces. But even these simple tasks have proven to be monstrously difficult, because in these cases we are doing what our brain does best, which is to cope with uncertainty and make good educated guesses. Most computers, while amazing when given a good set of rules in a rigid, predictable environment, are terrible in situations that are context-dependent or uncertain.
So what is the war on wisdom? The authors begin with judges and mandatory minimums. Mandatory sentencing guidelines erode judges' ability to make individual decisions based on the circumstances. In other words, they take away their power to judge. Doctors, through financial incentives, are nudged into doing more procedures and seeing more patients per day. Teachers are given strict guidelines on what to teach on what days. Or, if they are not, they are nudged toward teaching to a specific high-stakes test.

Why do we have this war on wisdom? Of course no one is anti-wisdom, but well-intentioned efforts designed to encourage other characteristics have had horrible side effects on practical wisdom. In medicine, a value of higher patient autonomy has led doctors to present options, but refuse to give their own (expert) opinions. In law, the system where lawyers are strictly advocates for their clients, rather than also representatives of the court, has led to a disregard for the truth and wise solutions. Further, the desire to fully account for their time, and the competitive nature of making partner, have shaped the legal profession for the worse. The "science" of accountability in the legal profession has eroded the wisdom of the profession. In teaching, seeking to consistently train new teachers and set minimum standards has ended up undermining teachers' ability to learn through practicing their craft.

What are the sources of hope? In law, the authors praise special veterans courts, where judges design sentences and programs to balance the goals of rehabilitation and safety.  In legal training they cite a clinical approach to teaching law, preparing law students using a mentoring apprenticeship. Like many in education,  they propose a portfolio system for evaluating students and teachers, with flexible criteria, allowing teachers to work within their curriculum, with their own judgment.  

While the prose and argument were sometimes a bit lengthy (he says in his typically long-winded blog post), I really recommend this book. It integrated disparate thoughts I have had on large political questions that don't seem to be engaged by any politicians, or any political party. I can see how simply leaving people entirely alone to practice (whether it be teaching, judging or healing) could be corrupting, but the dehumanizing system we currently have is corrupting in a different way, and we seem to be heading further down that road.
In the next post, I am going to pivot on this, and try to integrate some of the insights from this book with another of my intellectual touchstones, Atul Gawande’s The Checklist, and apply them to my own teaching practices.

Other resources:

One more supporting link (maybe more to come)

Thursday, June 16, 2011

Reasonable Doubts about "The Brain on Trial"

I agree with a lot of what David Eagleman has to say in "The Brain on Trial" in this month's Atlantic, and I find him an interesting and provocative thinker. But I had a few reservations.
First, let me say that I have been reading a lot about him lately, and I read his book about how the internet can save civilization (on the iPad), and it was interesting.
Some of these ideas have been stewing since he gave a talk on the topic of neuro-law here at Randolph-Macon College a few months ago. I had the unique pleasure of getting to talk to him for about 45 minutes in my office. I was a bit star struck at the time, but would have been even more so had I known he was about to be profiled in the New Yorker (by Burkhard Bilger, no less!).

Ok, a quick overview for those of you who don't want to follow any of the links above:
He argues that we should redesign our legal system to reflect how much we know about the brain.
What relevant findings does modern neuroscience offer? First, the amount of free will we have is on a spectrum. No argument there; we don't hold children as blameworthy as adults, and we make allowances for accidental deaths, or the insanity defense. But second (and this is important), we don't have as much free will as we think we do. A lot of modern psychology and neuroscience shows how our unified sense of our conscious thoughts controlling our actions is an illusion. Not only that, but in certain situations we can look at the brain and tell how much free will someone has (tumor in the frontal lobe, you lose control of yourself). And we can look at a loss of free will, and predict what we are going to see in the brain (you are losing control of yourself, maybe something is wrong with your frontal lobe).

How should this be applied to the legal system? Eagleman urges us to drop assessments of blameworthiness, or intent, altogether, and instead look to the future, at reducing recidivism. If we can tell whether someone is likely to commit another crime, that should influence how our legal system deals with them. He argues that we have a lot of good actuarial data on predicting recidivism, and that as neuroscience matures further, we will get even better predictions about who will commit another crime. If we care about the safety of our populace, and the reform of our criminals, why don't we position the legal system to prevent crimes, instead of simply reacting to them?

What's not to like? I agree that our legal system could be better, and more fair, if we took into account recidivism rates, and incorporated crime prevention programs like drug treatment instead of simply punishing and imprisoning. And I agree that we should recognize that dealing with lead paint is a great crime prevention program. But here's where I am skeptical of Eagleman's neuro-optimism:

We don't need neuroscience for any of this.

We can (and should) make our legal system forward-looking because we should recognize that a legal system should not just punish wrong-doing but reduce the circumstances that lead to crime. Exposure to lead paint, drug addiction, and PTSD are all things to be treated, not to be punished.

Social programs and policies should be designed with the knowledge that we have less free will than we think we do. Watson and Skinner said this 75 years ago. Yes, they overreached, and Chomsky and the computer revolution led us away from rats, pigeons, and the brain as a black box, and toward the mind as an information processor. Another way of putting this is: the environment is powerful. Eagleman believes his knowledge of brain areas and circuits puts him in a different league than Skinner and the behaviorists, but his argument is not that different. And he is not really any closer to scanning someone's brain and predicting a complex behavior.

As Michael Gazzaniga, an esteemed neuroscientist, mentioned off-hand at his talk at the Association for Psychological Science this year: There are frontal lobe patients who lose control and kill people, but many more who don't. And plenty of serial killers don't have frontal lobe abnormalities. Basically, while we can make a probabilistic judgment that damage to the frontal lobe is more likely to impair your judgment than damage anywhere else, we can't make an individual judgment that one person's actions are due to a particular element of their brain anatomy or chemistry.

Finally, there is one piece of the article that for me represents a lot of what bugs me about his approach (and it is not uncommon in modern neuroscience). He describes a prefrontal workout, "in which the frontal lobes practice squelching the short-term brain circuits." In what amounts to a supercharged, fMRI-sophisticated biofeedback system, you see the pattern of brain activity that corresponds with craving, and then you try to reduce that pattern of brain activity. This just seems odd and incredibly indirect to me. We don't want to reduce brain patterns. We want to reduce cravings. There are some decent ways of reducing cravings, at least for cigarettes. Eagleman ignores effective psychological treatments and therapies, while trumpeting the dubious triumphs of Prozac, thinking that at some point we will understand the brain enough to directly hack its circuitry. We certainly know a lot more about how the biology of the brain relates to behavior than we did 20 or 30 years ago.

But sometimes the best way to change behavior is to change behavior, and measure behavior.

In the end, I am a big fan of David Eagleman. His energetic mind, his boundless joy and love of science, and his optimism are a wonder to behold. For someone with such confidence in his ideas of applying science to solve the problems of society, he was disarmingly modest and patient with all the questions I saw him answer during his short stay in Ashland. I hope he succeeds in his aim of making our legal system more forward-thinking, environmental, and less black and white when it comes to free will. But I wish the wide-eyed optimism about what neuroscience can do were tempered with some acknowledgement that neuroscience isn't the last word on the human condition.

Wednesday, June 15, 2011

How the internet makes us smart, but makes us feel like idiots

My last post was about a science blog-world kerfuffle. Since I had spent hours looking over these comments, on a few different blogs, trying to pull together the story, and another hour or two writing my post, applying my expert knowledge on how important content knowledge is, I thought I might offer to write a guest blog at the Sci Am blogs about it, and what is to be learned from it all.

The gracious editor responded promptly and gently let me down. Of course, I was two weeks late, and had only grazed the surface of this debate, which played out over more blogs than I realized, as well as on Twitter. So I was struck by the irony. Here was a conversation that I felt made me smarter, as I could see the thought processes on display as scientists, journalists, and a collection of otherwise very smart people publicly turned over these ideas in their heads. But in retrospect I realize that I was only seeing half the conversation, and not even fully comprehending the context and background of the half that I did see. In other words, even as I felt like I was probing deeply, I was made painfully aware that I was only scratching the surface.

Which is an odd paradox of the internet. On the one hand, you can learn just about anything you want with a few felicitous keystrokes. On the other, as you do, you realize how ignorant you really are. Kind of similar to grad school that way. Just as you think you know a lot about a topic, you find someone who has spent years exploring a small slice of that topic.

The thing for me is that this is the _best_ way to use the internet. Otherwise, we run the risk of creating our own information bubble, nodding our heads at people who agree with us but don't challenge us, and failing to confront the boundlessness of our ignorance. The price we pay for valuing education is that we must also seek out ignorance and misunderstanding.

But confronting our own ignorance is uncomfortable and feels shameful. Well. At least for me it did. But I tell myself that the sting is necessary to prod us to learn more... and eventually, confront more ignorance and start the whole damn cycle again.

Tuesday, June 14, 2011

Getting smart about Wise Crowds, or Some stuff even really smart people don't know about science

I found a discussion between scientists, science writers, and journalists online really fascinating, so I thought I would share some thoughts here. It relates to a few common themes of mine: that scientific thinking is unnatural, that statistical thinking is unnatural, the role of content knowledge in critical thinking, and the role of expertise in interpreting scientific results.

The discussion began with a column by neuroscience writer Jonah Lehrer, about a phenomenon called "the wisdom of crowds." Basically, the idea is that when estimating something very uncertain, the average crowd response can sometimes be "wiser," or more accurate, than even most expert responses. The classic example is from Francis Galton, who observed that the average crowd response for estimating the weight of a steer was better than the guesses of most of the butchers. Lehrer's column was about a paper showing that this wisdom-of-crowds phenomenon can be reduced (i.e. crowds get less wise) by letting members of the crowd interact with one another.
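(Since the effect is easy to simulate, here is a quick, hypothetical sketch in Python -- made-up numbers, not Galton's data -- of why averaging many independent, noisy guesses tends to beat any one guesser:)

```python
import random
import statistics

# Hypothetical steer-guessing crowd: 800 independent guesses, each noisy but unbiased.
random.seed(1)
true_weight = 1198                                            # invented "true" weight, in pounds
crowd = [random.gauss(true_weight, 120) for _ in range(800)]  # each guess off by ~120 lbs on average

crowd_average = statistics.mean(crowd)
typical_individual_error = statistics.median([abs(g - true_weight) for g in crowd])

print(f"crowd average guess:     {crowd_average:.0f} lbs")
print(f"crowd's error:           {abs(crowd_average - true_weight):.1f} lbs")
print(f"typical guesser's error: {typical_individual_error:.1f} lbs")
# The averaged guess usually lands far closer to the truth than a typical individual,
# as long as the errors are independent -- the very condition that letting the crowd
# talk to each other (per the paper Lehrer described) tends to break down.
```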

Ok, so we're fine so far. The brouhaha begins when Peter Freed, M.D., an actual neuroscientist, writes a ranty blog post entitled "Jonah Lehrer is Not a Neuroscientist." He takes Lehrer to task for using "for example" when cherry-picking a certain description of data from the paper. In other words, Lehrer acted as if the number he was citing was representative, when it was not. Freed uses this as a way of highlighting the difference between scientists, who look at data all day, and science writers, who do not. But Freed himself makes an error (which he confesses, and makes part of his blog post), confusing the median and the mean as measures of central tendency in this case.


Ok, now, maybe I have lost some of you, and I am going to back up just a second, because this is where I think it gets interesting. It depends on a (relatively) nuanced understanding of the three terms I used above: median, mean, and central tendency.

The vast majority of science that I am aware of takes a set of observations and looks to describe those observations using some sort of quantitative measure. We don't want to compare apples and oranges, but once we are measuring all apples, we have a set of numbers, and we summarize those numbers to describe the group as a whole. Most of us are familiar with the word "average," and we don't give it any thought. It seems as if an average (adding up all of the numbers and dividing by the number of observations) is a natural, real description of the set of apples we have in front of us.

But there are different ways to describe a group of numbers. We can report the most frequent response (called the mode, as in, "9 out of 10 dentists ranked it first"), or the middle response (called the median; when you take the SAT three times and get a 2000, a 2100, and a 1000, you want to count the median response, 2000, not the mean, which would be 1700). The most common is the mean, which most of us consider synonymous with average, as in, "When Bill Gates is in the room, everyone in that room is, on average, a millionaire." This is not even getting into the difference between the geometric mean and the arithmetic mean.
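To make the Bill Gates line concrete, here is a tiny, hypothetical Python example (the incomes are invented for illustration); each "average" tells a different story about the same room:

```python
import statistics

# Nine ordinary incomes plus one Bill-Gates-sized outlier (all numbers invented).
incomes = [30_000, 35_000, 40_000, 40_000, 45_000,
           50_000, 55_000, 60_000, 65_000, 10_000_000_000]

print("mean:  ", statistics.mean(incomes))    # about a billion dollars -- pulled up by one person
print("median:", statistics.median(incomes))  # 47,500 -- the middle of the room
print("mode:  ", statistics.mode(incomes))    # 40,000 -- the most common income
```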

So first, a key point here is that none of these descriptions is more "true" or accurate than the others; each is simply one of many ways to describe a set of data. This is a mistake made by Nicholas Carr in his blog post discussing Lehrer's column, as he argues that
As soon as you start massaging the answers of a crowd in a way that gives more weight to some answers and less weight to other answers, you're no longer dealing with a true crowd, a real writhing mass of humanity. You're dealing with a statistical fiction. You're dealing, in other words, not with the wisdom of crowds, but with the wisdom of statisticians. There's absolutely nothing wrong with that - from a purely statistical perspective, it's the right thing to do - but you shouldn't then pretend that you're documenting a real-world phenomenon.


Carr draws the line between the real-world itself (which he identifies with the arithmetic mean) and a statistical fiction. None other than Kevin Kelly (among others) takes issue with Carr on this point in the comments, which I think are really worth a read. 


Given that we accept that all descriptions of a set of data are to some degree interpretation, not simply observation, where do we go from here? In a follow-up post, Freed describes the lambasting he received from some of his critics, and tells a funny (fictional) story about his third grade class as it relates to the choice between median and mean. The critical moment is when the principal writes to the statistical research firm: "You may know a lot about statistics, but you don't know anything about third graders. You get an F, for Fired."


Which leads me to my main point. Statistics is not a purely "scientific" position or a purely aesthetic decision, as Freed claims in a comment on another blog post, by the physicist Chad Orzel. The statistics that scientists use reflect a basic understanding of distributions of data (what it means when there are a lot of extreme responses, leading to skew, or other non-normal conditions). But the statistics used also depend critically on some knowledge of the phenomenon itself. When cognitive psychologists analyze reaction time data, they not only look at the distribution of responses (which will just about always be right-skewed) but also consider the task being reacted to. What does it mean when most people take 1 second to respond, but a few take 2 minutes? Is that "real" data, or did they fall asleep, or answer their cell phone?
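Here is a small, hypothetical illustration of that judgment call (invented reaction times, and an arbitrary cutoff I chose just for the example):

```python
import statistics

# Most responses take about a second; one participant apparently answered their cell phone.
rt_seconds = [0.8, 0.9, 0.9, 1.0, 1.0, 1.1, 1.1, 1.2, 1.3, 120.0]

print("mean:  ", round(statistics.mean(rt_seconds), 2))   # 12.93 -- dominated by the outlier
print("median:", statistics.median(rt_seconds))           # 1.05 -- closer to typical behavior

# Whether to throw out the 120-second response is not a purely statistical decision;
# it depends on knowing the task. The 10-second cutoff below is an arbitrary example.
plausible = [rt for rt in rt_seconds if rt < 10.0]
print("trimmed mean:", round(statistics.mean(plausible), 2))  # 1.03
```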


As Orzel points out, there are valid criticisms that one can make about the use of statistics in pop science, but at some point, you have to engage with the actual science of the article. In this case, there is a rich literature on decision-making under uncertainty, and on the phenomenon of the wisdom of crowds. To criticize the pop science writing, you need to know something about that science, which Freed does not seem to. Not only that, he celebrates not knowing it:
Here’s my deep point. I don’t care about straight psychology – straight psychology is, not to pull punches, over.  I care about neuroscience. And Lehrer was not trying to be a neuroscientist in this article.  This was a straight-up psychology article.  But modern neuroscience, his chosen wheelhouse – particularly the subfields of behavioral and affective and cognitive and social neuroscience – is radically more complex than straight psychology.


Where are the lessons in all of this?
Lesson #1: Even really smart people make mistakes about science. Even scientists make mistakes about science, especially when not directly in their area of expertise. We should be wary of treading into another field of science, proclaiming it "over" and declaring that our domain is "radically more complex." The scientists working in that field are smart people who have found it to be plenty complex. 
Lesson #2: When smart authors and their critics engage in a well-moderated comment section, it can be an amazing way to learn. The commenters on the posts above are generally articulate and educated about the topics. Lehrer responds to Freed in his comment section. Nicholas Carr's post in particular features an honest and thoughtful back-and-forth with the people arguing against him. To me, this supports the recent theory that people are more reasonable when they argue, because human reason exists not to discover truth, but to persuade other people.
Lesson #3: You can learn a lot by reading a few good posts and comment sections, but there is still great value in subject matter expertise. In this case, as Orzel entreats us, actually read the article by the scientists who did the study in the first place. But we should also recognize that many of the sentences in that article are the result of thousands upon thousands of hours of work by many experts in this field.


Oh, and also, Jonah Lehrer pretty much had it right in the first place. Which he does most of the time, in the limited amount of space he has for a newspaper column. I remain a fan.


A little postscript: Someone pointed out that I was too hard on Freed, who was gracious in engaging his commenters, and who offered a good model for how a scientist reads a paper. He also pointed out that Lehrer brought up the wisdom of crowds paper as a way to join the whole "the internet is making us stupid" crowd, a claim I agree is not true. I am not an Eagleman/Kelly/Shirky "the internet is going to save civilization" optimist either, but I find many of the traditional journalists decrying the internet (Twitter, blogs, etc.) mostly unconvincing. He also pointed out that this was old news (like two weeks ago!) and that there was a lot to it that I missed. So, now I am on Twitter. We'll see how that goes.

Monday, June 06, 2011

Oral Culture vs. Literate Culture

I've been reading The Information by James Gleick (NYT review, excerpt) which, like most Gleick books that I have read (Chaos, Genius), is an absolute nerdgasm of science, technology, and history. But it gave me a context for thinking about Rachel's latest foray into the big time, a little kerfuffle with the mostly-progressive Matt Yglesias.
In an early chapter, Gleick describes the transition from an oral culture to a literate culture, and all of the changes in human thought that came along with that transition. It was no accident that Aristotle and Plato were basically inventing logic; Gleick argues that logic wasn't supported by a culture based on oral traditions. He cites evidence from anthropological studies that some recently discovered non-writing cultures do not recognize syllogisms:
All bears from the north are white.
My bear Fozzie is from the north.
What color is Fozzie?

People from pre-literate cultures will answer: I don't know, I have never seen Fozzie. Gleick lays out the case that a number of the modes of abstract thought that we now take for granted were developed only because of the advent of written words, which were twice removed from the world (they signified spoken words, which signified the world). It is a really fascinating book, and incredibly well written. I know it is thick, but I am loving it so far.

Ok, so back to Yglesias. I see modern day blogging as bringing back some of the elements of oral culture. Gleick cites Marshall McLuhan as the first to bring this up, and there is no shortage of screeds against blogging and Twitter, or about how Google is making us stupid. To me, Yglesias' blog embodies the dual capability of this streaming mentality. On one hand, looking at his economic and political pieces, you can see a commentator reflecting on incoming news, but also accumulating specific knowledge, and applying this growing knowledge. Despite his youth, his political pieces reflect a wide reading and knowledge in politics. His financial pieces likewise reflect (and document) his growing expertise in interpreting the US economy. This is where I see one amazing benefit of the internet - it enables amazingly fast learning of specific knowledge. This goes as well for applications like Twitter, which can organize communities of like-minded people and enable sharing of information.

But Yglesias' education reporting shows the "dark side" of blogging. He is not a teacher or a parent, has not even attended any public schools, and yet he leverages his blogger credibility to write about the deficiencies of public education and the benefits of a particular approach (the No Excuses model). He reads a report on KIPP and interprets it in a way that doesn't reflect any knowledge of how parenting works, how KIPP works, or even how educational research works. His recent posts on education reflect a facility and fluency with language ("labor force success," "bourgeois modes of behavior") but a lack of sophistication or knowledge when it comes to the common characteristics of public education for the poorest or lowest scoring students, the dimensions of choice for a curriculum, or even the difference between things that parents can teach through explicit instruction and the things children must learn through modeling or simply maturing. There is science on each of these, as well as some common sense gained through experience; Yglesias has neither.

It is this side of blogging which I see as reflecting an oral culture. Within an oral culture, it is difficult to evaluate the individual claims of a speaker; rather, we must judge the overall credibility of the speaker. Within an oral culture, it is difficult to bridge different kinds of evidence, trying to differentiate exactly what makes KIPP's approach different from that of traditional public schools. Within an oral culture, there is less emphasis on a historical approach, where people might ask, "Have people tried elements of No Excuses before? What were their results?"

The interesting thing is that blogging is not a unitary, oral culture activity. Some blogging involves deep explorations of a topic, developing a theme or a content area over time. But the kind of blogging that Yglesias does, at least in education, does not seem to match this. It is memory-less, in that he is repeating the same things that David Brooks wrote about Harlem Promise Academy two years ago, without reading any of the responses.

Thursday, June 02, 2011

The role of theory in science

One final thought about evolutionary psych, in the wake of the Kanazawa debacle.
I spend a lot of time thinking about how to get generally smart and educated people to understand the nature of science. Despite many people's eagerness to dismiss young earth creationists as either deluded or cranks, I think their beliefs are built upon a misconception of science that is remarkably prevalent. When strongly held beliefs meet scientific evidence, the beliefs generally win.

 At the end of my history of psychology course, we read Keith Stanovich's excellent "How to Think Straight About Psychology". It is an incredibly readable philosophy of science book, applied to psychology. But even at the end, I have students who I know still doubt evolution, or hold strikingly pseudoscientific or unscientific beliefs. This is not because they are stupid, or haven't studied enough, but because science is quite often counterintuitive, and in most of our school science curriculum, we don't really teach how science works as much as a large set of science facts. Not to diminish the power of facts, because you can't organize anything if there are no facts, but science is not just a series of facts.

Darwin's first writing of theory of evolution,
in 1837, 21 years before Origin was published
One main misconception about science concerns the relationship between theories and facts. Evolution is a theory. So is evolution as applied to psychology. What "work" do these theories do? I think a good way of looking at a theory is as a framework for past facts and a way of finding future facts. In science, we might replace facts with observations, or experiments, since observations and experiments are the root of all facts in science. So, how can we evaluate theories as good or bad? First, a theory has to explain some portion of the current set of facts. Darwin's theory explained his finches, and other animals, but it also fit in with a larger set of observations of animals by other explorers. And we can't ignore that Darwin's theory also fit with the growing set of observations in geology about the age of the earth, about dating different fossils, or even explaining coral reefs, a mystery of the time (described beautifully in Steven Johnson's "Where Good Ideas Come From"). Darwin's theory was a pretty good theory for explaining the morphology and current behavior of animals from around the world. But Darwin was dubious of his own theory, and delayed publishing it for nearly 20 years. Why? Partly because he knew the conflict it would stir with religion, but also because he wanted to collect more evidence. In other words, Darwin's theory predicted future facts and observations. If species change, and in a gradual way, and this way corresponds with environmental changes, then there should be more fossils of transitional forms. This is a key part of evolutionary theory, and of any theory. You should be able to find more facts that fit into the theory. In other words, a theory should generate hypotheses.

So where does that leave us with evolutionary psychology? Well, evolution explains a great deal of human and animal biology. How does it do with psychology? I am not a big fan of many of the evolutionary and genetic accounts of intelligence, partly because they do not square with my egalitarian beliefs, but also because, while they may explain some facts, they do a crappy job of predicting any new ones. If the IQ gap between blacks and whites is genetic and fixed, why is there so much evidence that IQ can be changed? Why is there the Flynn effect?

While the popular imagination hears "evolutionary psych" and thinks of Kanazawa, or the Bell Curve, the theories of evolutionary psychology are many and diverse. Instead of disparaging the whole field, we should try to be more like scientists ourselves, and tease apart which individual claims of a theory have merit, and which don't. Unfortunately, this often means learning more about the set of facts that the science is trying to organize, or trusting the scientists who know those facts. Too often, instead of learning more, we look down the hole and see darkness, and assume that it is shallow, instead of asking those who have dug, who have reached, who have probed and prodded, to tell us how deep it goes.

Runtime thoughts: Psychology: The Unnatural-est Science of All

Two of my goals this summer are to write more and to run more. Since I don't run with headphones, it gives me a good chance to turn over some ideas in my head. Even if no one reads them, I figured regular writing here after my runs would help me get the writing juices going.

One of the main ways that I think of science is that it is an unnatural way of thinking. What does this mean? I have previously thought of the pieces of this as the collection of shortcuts and biases that human thought entails. To take just one: once we get an idea, it is very hard to dislodge, because we only search for evidence that confirms our idea. Further, when we see evidence against our beliefs, we tend to minimize or ignore this evidence. This phenomenon, called the confirmation bias, shows how in our minds we fudge the difference between facts, ideas, and beliefs. It also makes science really hard to do, because science is supposed to be built on both positive and negative evidence. Science, while it is a search for truth, is done by human brains, which aren't necessarily built to search for truth, but just to search for "close enough." This makes science very hard. The history of science is littered with examples where we thought we had the truth ("we are the center of the universe!" "the earth is only 4000 years old" "The ghost of falling made that apple fall" "look at the pretty phlogiston in the fireplace") only to be revealed as quite silly in retrospect.
Recently, there was a provocative paper in a great journal called Behavioral and Brain Sciences, by Hugo Mercier and Dan Sperber, arguing that human reasoning evolved not to discover truth, but to persuade other people. The great thing about BBS is that they publish a long target article, followed by invited commentary from the community of experts. Basically the best, curated, edited comment feed in science. Of course, the bad thing about BBS is that they publish a long target article, followed by invited commentary from the community of experts, so an issue is often over a hundred pages long. And, it is subscription only.
Here is an excellent summary by Chris Mooney over at Discover, and a great comment feed, with Mercier chiming in to argue with some philosophers (the puns, they make themselves!). Mercier and Sperber make sense of a number of the heuristics and biases, and marshal some new experimental evidence, in making this argument (too meta... I'm melting...). For me, this has interesting implications for the history and philosophy of science. Science progresses when it makes better predictions about the world (Galileo, Copernicus, Kepler, and Newton ultimately won because they made more accurate predictions about the behavior of physical bodies). But human reasoning works by trying to convince other minds.
So why is psychology the most unnatural science? Because its predictions are not about the behavior of falling bodies, or planets, but rather about other people. If human reasoning evolved to convince other human minds, sometimes this can get in the way of understanding how the human mind works.