The Los Angeles Times teacher-rankings bombshell has ignited a firestorm of controversy: a little light and a lot of heat.
By Charles Kerchner - Research professor, Claremont Graduate University - in The Huffington Post
August 23, 2010 -- Pundits have lined up and taken sides. Education secretary Arne Duncan has supported the pending release of data on 6,000 Los Angeles Unified School District teachers, ranking them by the "value-added" shown in their students' test scores. United Teachers Los Angeles is predictably outraged, and its president, A. J. Duffy, has called for members to boycott The Times. American Federation of Teachers president Randi Weingarten met with the paper's editorial board to urge them not to publish or post the list.
She probably should have saved her breath. There is little doubt that the Times is intent on publishing the names. But it shouldn't. Here's why:
First, there is a difference between public officials and public employees. Public officials are fair game for just about anything, from their expense accounts to their sex lives. Traditionally, journalism has had a different relationship with public employees. We recognize that exposing rogue cops and racist firefighters falls within the purview of journalism, but we haven't seen their performance rankings listed. That's considered an internal personnel matter, just as it is with employees in the private sector.
There is a reason for privacy. It's the same reason that none of the academic public policy researchers and statisticians in Los Angeles has published this test score analysis even though we have had access to the underlying data for years: personal ethics and the institutional review boards that govern research involving human subjects would not allow it. Jason Felch, one of the writers of the teacher evaluation story, wrote in response to a blog post: "It was not an academic publication, it was investigative reporting done in the public interest with public records."
Not good enough. I understand shaming mayors, school board members, even superintendents, but publicly shaming teachers with a test they were never told would be used to evaluate them doesn't pass the "all the news that's fit to print" test. There is a public interest in exposing the school district's inadequate evaluation system. There is no public interest in shaming teachers. It's just mean-spirited.
Second, the data and the method used are highly prone to error. I support using value-added assessment and refining its techniques. I cheered when I first saw it used about two decades ago. At last, I thought, here was a method that offered the promise of recognizing teachers who taught poor kids who had struggled in school and with whom schools had struggled. The statistical method holds out the possibility of leveling the playing field so that a teacher in Pacific Palisades and one in Boyle Heights can be compared on how much their students learn, not on where they started.
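To make the core idea concrete, here is a minimal sketch of one simple value-added calculation, a so-called covariate adjustment model. Every number and teacher name below is invented for illustration; real models, including the one Buddin built for the Times, adjust for far more than a single prior score.

```python
# A toy sketch of the core value-added idea, with invented data:
# regress each student's current score on their prior-year score,
# then average the residuals by teacher. A positive average means
# the teacher's students scored higher than their prior scores
# predicted. Real models adjust for many more factors.
import numpy as np

# Hypothetical records: (teacher, prior-year score, current score)
records = [
    ("Teacher A", 400, 460), ("Teacher A", 420, 470),
    ("Teacher B", 700, 720), ("Teacher B", 680, 705),
    ("Teacher C", 550, 595), ("Teacher C", 560, 600),
]
prior = np.array([r[1] for r in records], dtype=float)
current = np.array([r[2] for r in records], dtype=float)

# Fit current = a + b * prior by ordinary least squares.
b, a = np.polyfit(prior, current, deg=1)
residuals = current - (a + b * prior)

# A teacher's "value added" is the mean residual of their students.
for teacher in ("Teacher A", "Teacher B", "Teacher C"):
    idx = [i for i, r in enumerate(records) if r[0] == teacher]
    print(teacher, round(float(residuals[idx].mean()), 2))
```

The appeal is exactly what the paragraph above describes: the comparison is against where each student started, not against an absolute bar.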
But the statistics are prone to error and need to be used in combination with other indicators to gain an accurate picture of teacher effectiveness. To begin with, value-added calculations are no better than the data used to calculate them: garbage in, garbage out, as statisticians say. LAUSD student data records are notoriously prone to mistakes. I have analyzed thousands of LAUSD student records and remember well the task of seeing that data from one year matched the next, and that students had actually taken classes from the teachers listed in their records. I am sure that Richard Buddin did a careful job with the 1.5 million student records he analyzed, but I wouldn't bet that the data were as clean as they need to be to call out the names of individual teachers.
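For readers who want a sense of what that record-matching work looks like, here is a small sketch with hypothetical column names: it checks whether each score record links to an enrollment record at all, and whether the two agree on who the teacher was.

```python
# A sketch of the consistency checks described above, using
# invented data and hypothetical column names. Unlinked or
# mismatched records silently corrupt any value-added estimate
# built on top of them.
import pandas as pd

scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "teacher_on_score_file": ["Smith", "Smith", "Jones", "Lee"],
})
enrollment = pd.DataFrame({
    "student_id": [1, 2, 3],  # student 4 has no enrollment row
    "teacher_of_record": ["Smith", "Jones", "Jones"],
})

merged = scores.merge(enrollment, on="student_id", how="left")
unlinked = merged["teacher_of_record"].isna()
mismatched = (~unlinked) & (merged["teacher_on_score_file"]
                            != merged["teacher_of_record"])
print(f"{unlinked.sum()} unlinked record(s), {mismatched.sum()} mismatch(es)")
```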
The second source of error comes from the statistical techniques used in calculating value-added measures. There is much controversy among academic statisticians about which of the many value-added calculation techniques yields the best results. As with other powerful statistical methods, the answers one gets depend on the techniques used. At the very least, we should know how sensitive the results are to the techniques used and the assumptions made during the calculations. Buddin's technical analysis, which is impenetrable to the lay reader, doesn't give us much help.
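A toy, self-contained illustration of that sensitivity: the same six invented student records from the value-added sketch above, ranked under two common simplified specifications, a gain score model and a covariate adjustment model. With these particular numbers, the two models do not even agree on which teacher is best.

```python
# Two simplified value-added specifications applied to the same
# invented data. A "gain score" model credits raw growth; a
# "covariate adjustment" model credits growth relative to what a
# prior-score regression predicts. They can rank teachers
# differently, which is the sensitivity at issue.
import numpy as np

records = [
    ("Teacher A", 400, 460), ("Teacher A", 420, 470),
    ("Teacher B", 700, 720), ("Teacher B", 680, 705),
    ("Teacher C", 550, 595), ("Teacher C", 560, 600),
]
prior = np.array([r[1] for r in records], dtype=float)
current = np.array([r[2] for r in records], dtype=float)
teachers = ("Teacher A", "Teacher B", "Teacher C")

# Model 1: gain score -- mean raw gain, relative to the overall mean.
gains = current - prior
gain_effect = {t: gains[[i for i, r in enumerate(records) if r[0] == t]].mean()
               - gains.mean() for t in teachers}

# Model 2: covariate adjustment -- mean residual from regressing
# current score on prior score.
b, a = np.polyfit(prior, current, deg=1)
residuals = current - (a + b * prior)
resid_effect = {t: residuals[[i for i, r in enumerate(records) if r[0] == t]].mean()
                for t in teachers}

rank = lambda d: sorted(d, key=d.get, reverse=True)
print("Gain-score ranking:          ", rank(gain_effect))
print("Covariate-adjustment ranking:", rank(resid_effect))
# With these invented numbers, Teacher A is ranked first by one
# model and last by the other.
```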
The propensity of value-added techniques to produce errors has been recognized for a long time, and a recent report from the U.S. Department of Education's Institute of Education Sciences concluded that the error rate could be upward of 30 percent. Moreover, rankings tend to be unstable from year to year: this year's highly ranked teacher might be poorly ranked next year, and vice versa.
I suspect that these limitations will have no effect at all on the Times' decision to publish the teachers' names, and that teachers will be angry. I am too.
But my anger would not be directed at the Times. Even if the newspaper overstepped the line between public and private, the school board, the district, and United Teachers Los Angeles are the culpable parties in this little drama. They dithered for decades, avoiding the question of a robust teacher evaluation system. More seriously, they failed to put in place a system of reliable data feedback so that schools and teachers could get smarter about how well they are teaching.
In one respect, the continuing newspaper series has made public what educators and parents, teachers and unionists, have known for decades. Some teachers are much more effective than others. It has also laid bare the education system's dogged refusal to build on that knowledge. Shame on them.