washingtonpost.com > Education > The Answer Sheet > “It is a miracle that curiosity survives formal education.” --Einstein > Learn everything you need to stay sane during the school years with veteran education writer Valerie Strauss and her guests.
Cognitive scientist Daniel Willingham, a psychology professor at the University of Virginia, takes a hard look at the Los Angeles Times project that evaluated teachers by using test score data. Willingham is the author of “Why Don’t Students Like School?”
After this was initially posted and several readers added their own comments, Willingham responded, with a correction and an explanation. I have included his addition at the end of this post.
By Daniel Willingham in The Washington Post
“It is dangerous to be right in matters on which established authorities are wrong,” Voltaire said. When it comes to education policy, being right has not looked dangerous in the last few days, but it has certainly looked futile.
In case you’ve been off the grid, the Los Angeles Times hired a researcher to analyze seven years of math and English standardized test scores from the LA school district. Then the Times published an article about the results, profiling some of the “best” and “worst” teachers by name. They’ve also constructed a searchable database that allows one to find the results for any teacher in the district. (Whether the database will be open-access is not clear to me.)
The results are based on a “value-added” measure of teacher performance. The idea, roughly, is that one uses each student’s earlier test score as a statistical control for the later one, so that one measures learning (that is, change over time) rather than students’ absolute performance.
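The core idea can be sketched in a few lines of code. This is a deliberately simplified illustration, not the model the Times’ researcher actually used (real value-added models are far more elaborate); the teacher names and scores below are hypothetical:

```python
# Illustrative sketch of a value-added estimate (NOT the Times' actual model).
# Step 1: fit a least-squares line predicting the current score from the prior score.
# Step 2: a teacher's "value added" is the average residual of his or her students,
#         i.e., how far their actual scores land above or below the prediction.

def value_added(records):
    """records: list of (teacher, prior_score, current_score) tuples.
    Returns a dict mapping each teacher to the mean residual of their students."""
    n = len(records)
    priors = [r[1] for r in records]
    currents = [r[2] for r in records]
    mean_p = sum(priors) / n
    mean_c = sum(currents) / n
    # Simple linear regression: current ~ a + b * prior
    b = sum((p - mean_p) * (c - mean_c) for p, c in zip(priors, currents)) \
        / sum((p - mean_p) ** 2 for p in priors)
    a = mean_c - b * mean_p
    totals, counts = {}, {}
    for teacher, p, c in records:
        residual = c - (a + b * p)          # actual minus predicted score
        totals[teacher] = totals.get(teacher, 0.0) + residual
        counts[teacher] = counts.get(teacher, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

# Hypothetical data: Smith's students beat the prediction, Jones's fall short.
students = [
    ("Smith", 50, 65), ("Smith", 60, 75),
    ("Jones", 50, 55), ("Jones", 60, 65),
]
print(value_added(students))  # → {'Smith': 5.0, 'Jones': -5.0}
```

Even this toy version makes the controversy visible: the residual attributed to a teacher absorbs everything the prior score fails to capture, including which students ended up in that classroom.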
The writers of the Times article are either uninformed or disingenuous about the status of the value-added measures. They write: “Though controversial among teachers and others, the method has been increasingly embraced by education leaders and policymakers across the country, including the Obama administration.”
The “others” include most researchers looking into the matter.
Value-added models work well when you’re trying to evaluate a school. They work much less well when you’re trying to evaluate an individual class.
Here’s a thought experiment. You and I are equally good fourth-grade teachers, but I am more clever about looking professional in front of parents during back-to-school night. As a consequence, parents think I’m a better teacher. Who will have better value-added measures?
Arguably, I will. If I have a reputation as the better teacher, parents who are more involved in their child’s education will go to the principal and request me. If I have a principal who accedes to such requests, I’ll have a classroom with a higher proportion of kids with supportive, involved parents.
Value-added models assume that kids are assigned to classrooms at random. They aren’t.
This is just one problem with value-added models. There are others, which I’ve written about before. Were the writers for the Times unaware of such problems? Were the editors, who put the story on the front page?
I doubt it. I think their reasoning might be revealed in the story’s subheadline: “A Times analysis, using data largely ignored by the LAUSD, looks at which educators help students learn, and which hold them back.” LAUSD is the Los Angeles Unified School District.
I’m guessing that the editors at the Times are frustrated by the LAUSD’s inaction on teacher evaluation (or on school quality in general), and that they are trying to goad it into doing something.
How should teachers and their unions respond to this?
The head of the Los Angeles teachers' union has called for a boycott of the paper. Righteous indignation is a natural response to injustice, but it’s ineffective and it can slide all too easily into a victim mentality and excessive talk of “what they are doing to us.”
A classmate of mine in graduate school studied negotiation and went to work for the U.S. State Department. He was part of a team working on negotiating water rights between Israel and Jordan.
He was describing the positions of each country in the negotiations when I interrupted him, saying that one of the countries’ bargaining position was predicated on a completely false history of the region.
He said “Dan, you’re thinking that who is right and who is wrong has some bearing on negotiations. It doesn’t. All that matters is each party’s negotiating position and their negotiating strength.”
Teachers and teachers unions would, I believe, do well to bear this in mind.
When it comes to value-added measures, teachers and unions are right. The models aren’t reliable enough to evaluate individual teachers. But right now that doesn’t matter much.
The mood today is that something has to be done about incompetent teachers. We’ve seen that mood in New York City and Washington, D.C., and now we’re seeing it in Los Angeles.
We’re also seeing it at the federal level. Education Secretary Arne Duncan said that publishing individual teachers’ scores is just fine.
The people who feel that something must be done are right. In most districts there is no mechanism to ensure that incompetent teachers are not teaching.
I have said before that if teachers didn’t take on the job of evaluating teachers themselves, someone else would do the job for them. The fact that the method they are using is inadequate is important, and should be pointed out, but it’s not enough.
No one knows better than teachers how to evaluate teachers. This is the time to do more than cry foul. This is the time for the teachers’ unions to make teacher evaluation their top priority. If they don’t, others will.
Now, it may be too late.
Willingham posted the following in the comments section after a number of readers reacted to the above piece:
As a couple of you pointed out here and others pointed out in private emails to me, I was in error on the random assignment question. I was thinking about the consequences of students not being randomly assigned to classrooms—which is a problem, the seriousness of which is controversial—and I simply didn’t do my homework to make sure that I was right about how that’s treated in the models. I apologize for the error.
I stand by my larger point that VA models are not ready to be used in high-stakes personnel decisions, and I think I’m correct in characterizing that as the opinion of most people who are developing and testing the models.
I didn’t mean for this blog to be about the limitations of VA models, which is why I gave just one example. What I meant to emphasize was that teachers and their unions are unhappy that politicians and now institutions--the LA Times--are imposing evaluation schemes of which they (teachers and union officials) disapprove. But this imposition is a consequence of the profession failing to regulate itself. Indeed, when I hear complaints that teachers don’t get enough respect, that’s one of the things I think of; other professions do a much better job of protecting their own status, and one of the ways they do that is by providing some assurance of the quality of their members. The teaching profession can take that job seriously itself, or its members can watch while someone else does it.
Posted by: DanielTWillingham | August 17, 2010 4:05 PM