Themes in the News for the week of Feb. 7-11, 2011 by UCLA IDEA | http://bit.ly/dSIvid
02-11-2011 - Imagine yourself a parent who has just received an amazing tool for gauging the best schools, principals and teachers—a Zagat guide to education. Can the guide help you find that elusive quality called “teacher effectiveness”? What do you find there, and can you trust it?
First, know that there is a 50-50 chance that an entry in your new “value-added” teacher-rating guide is inaccurate—that some “highly effective teachers” may be quite average and that some lower-ranked teachers are among the best at the school. Restaurants, at least, are rated across many dimensions—cost, food quality, service, presentation, atmosphere and more—and even then we can be disappointed when we show up for a meal that doesn’t suit our tastes. Further, the Zagat guide to restaurants openly relies on opinions from many different sources. Not so your one-dimensional teacher guide, which purports to be scientific.
A new report has confirmed fears that the teacher ratings the Los Angeles Times published last summer were inaccurate. University of Colorado researchers Derek Briggs and Ben Domingue used the same data as the Times and concluded that an unacceptable number of the ratings stood a good chance of being wrong (Washington Post, Los Angeles Times, Thoughts on Public Education, KPCC, Education Week).
For example, when the researchers looked at student reading scores, they found that more than half of the teachers could have received different effectiveness ratings than the Times reported. In math, their analysis differed for almost 40 percent of teacher ratings. Notably, they found changes at both extremes of the spectrum, from most effective to least effective. Using an alternative model for reading scores, the researchers found that 8.1 percent of the teachers the Times rated ineffective could be effective, and that 12.6 percent of the Times’ effective teachers might be ineffective. That’s a lot of room for error, considering that one purpose of value-added formulas is to drive overall school improvement by promoting the practices of highly ranked teachers and re-training or firing lower-ranked ones.
“It may well be the case that all value-added models are flawed, but some are more flawed than others,” lead researcher Briggs was quoted in the Washington Post. “One of the things we show in our report is that the choice of what ‘control’ variables are included or excluded from the model can matter a great deal to inferences about teacher effectiveness.”
Since receiving the Colorado report, the Times has stood by its decision to publish the ratings—even going so far as to claim that the new analysis agrees with the paper’s original representation. In fact, the researchers strenuously disagree (Voice of San Diego).
smf: The researchers’ point-by-point ‘strenuous disagreement’ is here.
see also 4LAKidsNews: smf’s 2¢
Last year, legislation in New York, Tennessee, Colorado and other states altered teacher evaluations to include some measure of student test performance. Washington is currently considering similar legislation. This new report should give pause to state and federal legislators and policymakers who are eager to adopt value-added analysis as a primary means of evaluating and firing teachers (Washington Post blog).
Everyone agrees that improving academic achievement is an aspect of teacher effectiveness. But beyond that, there is wide disagreement about how to measure improvement and how much responsibility an individual teacher bears for it. There are also other tools for judging teacher effectiveness, such as parent and student surveys, principal observations, the quality of classwork, reviews of lesson plans, and National Board certification processes (Washington Post blog). These methods, however, offer only a steady build-up of school environments where teachers can do their best work. They promise no quick, easy, or cheap solutions, nor a handy tool that convinces parents that school reformers are getting tough.