Saturday, September 18, 2010


Who Are The Real Education “Reformers”?


by Richard Kahlenberg, from Taking Note: A Century Foundation Group blog

September 17, 2010. All sorts of people are interested in education reform – very few are content with the status quo. Yet in the press, only those who embrace a particular type of reform get the label. To be a “reformer” you have to embrace ideas that teachers and their unions don’t like – ideas such as non-unionized charter schools and teacher pay based on test scores.

Consider, for example, a recent article in the New York Times depicting the battle in three New York state Senate primary races. On one side were hedge fund managers and supporters of non-unionized charter schools, who were identified as favoring “education reform” on four occasions, “school reform” on another, and simply “reform” on yet another. Opponents of charter schools were never given that label, even though teacher unions and others who doubt the charter track record in fact favor lots of reforms – such as teacher peer review to weed out bad educators; rigorous national standards; expanded pre-K programs; reducing economic and racial isolation in schools; and on and on.

What’s particularly galling in the Times story is that in any other context, it is doubtful the paper would have applied the good-guy “reformer” label to a group of extremely wealthy hedge fund managers who wrote enormous checks to influence the political process, while withholding any positive label from a grassroots effort by workers to resist change that they thought would be harmful both to them and to their clients (schoolchildren). (Reality check: research finds only 17 percent of charter schools outperform regular public schools.)

Fortunately, rank-and-file voters appear to see through this false labeling. In New York, all three so-called “reform” candidates lost.

See N.Y. Times, “Local Losses Show Hurdles for School Reformers” (September 15, 2010): Despite donations from Wall Street investors, three charter-school advocates lost their New York State Senate races by large margins.


Learning from the Los Angeles Times


by Gordon Macinnes, from Taking Note: A Century Foundation Group blog

September 17, 2010. The Los Angeles Times ignited a local firestorm by publishing its rankings of six thousand teachers in the Los Angeles Unified School District (LAUSD) on August 30. Its reporting team applied the “value-added method” (VAM) to seven years of test results for students in third through fifth grades, connected those results to classroom teachers, and graded teachers on a spectrum from “most” to “least” effective. If a student’s performance on the California fifth-grade math test jumped eleven or more percentile points from last year’s fourth-grade math test, the teacher was labeled “most” effective; if it fell by eleven or more points, the teacher landed on the “least” effective list.
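As a rough illustration, the threshold rule just described can be sketched in a few lines. This is a deliberate simplification, not the Times’ actual statistical model, and the function name and the “middle” label are hypothetical:

```python
# Illustrative sketch of the eleven-percentile-point threshold rule reported
# above. The Times' actual value-added analysis is a more involved statistical
# model; this toy version only shows the classification logic.

def classify_change(last_year_pct, this_year_pct, threshold=11):
    """Label a year-over-year change in a student's percentile rank."""
    change = this_year_pct - last_year_pct
    if change >= threshold:
        return "most effective"
    if change <= -threshold:
        return "least effective"
    return "middle"

# A teacher's published rating would aggregate such changes across all of her
# students over seven years of results, not a single student's score.
print(classify_change(40, 55))  # gain of 15 percentile points: "most effective"
print(classify_change(60, 45))  # drop of 15 points: "least effective"
```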

For all the controversy around VAM, a consensus is growing between hard-core pay-for-performance advocates and teacher union activists:

  • the current teacher evaluation system is close to useless, since just about every teacher is judged to be at least “satisfactory” if not “excellent”;
  • VAM is a potentially promising method that might improve teacher evaluation and development, but it requires additional research and refinement;
  • even when fine-tuned and more reliable, VAM should never be the sole measure of teacher effectiveness (the disagreement centers on whether VAM should count for 30 percent or 50 percent of a teacher’s evaluation); and
  • the other factors that should be incorporated into an improved teacher evaluation system are much squishier, since they rely on professional and personal judgments from classroom observations or analysis of student work, which are not uniform and quantifiable like standardized tests.

Remember these points of consensus in the analysis that follows.

Secretary Duncan supported the disclosure with the question, “What’s to hide?” Randi Weingarten, president of the American Federation of Teachers, declared she was “disturbed that teachers will now be unfairly judged by incomplete data masked as comprehensive evaluations.” The president of the local teachers union criticized the Los Angeles Times for “journalistic irresponsibility” in making public “deeply flawed judgments about a teacher’s effectiveness.”

The Los Angeles Times’ name-the-names disclosure accelerates and focuses the growing national discussion about the value-added model as a vehicle for improving teacher evaluation. Finally, proponents argue, we have data from standardized state tests that can be used to make evaluations of teacher effectiveness more objective. It seems hard to argue with using the results from uniform, validated tests to grade teachers as well as their students.

To get a better handle on that question, I took a look at the results for Fries Avenue Elementary School in Wilmington, the port district of Los Angeles, where I was promoted from sixth grade a long time ago (none of my teachers are around to be evaluated). I also checked how our rivals at Gulf Avenue Elementary, a few blocks away, performed. Here is what I learned:

  • Despite all the qualifiers offered by the Los Angeles Times about the incompleteness of VAM, and the warning by its respected RAND scholar/consultant that VAM should not be employed as the sole measure of a teacher’s effectiveness, a parent visiting the Los Angeles Times’ database would be given a single measure of their child’s teacher—results from the California assessments, plus a category for effectiveness. The reporting team argues that “no single number” is used to measure teachers. Correct. Two numbers are offered—reading and math results—plus the label. No other information is offered, and the reader would be hard-pressed to find any acknowledgment that other factors should be included to assess teacher performance fairly.

The Los Angeles Times follows the script of VAM advocates: “Well, of course, VAM is not yet sophisticated enough to be used as the evaluation standard for teachers, but let’s just take a look at how VAM works with the following teachers/schools.” In fact, the policies of the Obama administration require that states have no statutory or regulatory obstacle to tying teacher evaluations to standardized test results. This requirement was one of only four absolute preconditions for applying for Race to the Top. Subsequently, Education Secretary Arne Duncan offered the now-standard qualifier that he never intended that test results would be the only measure of teacher effectiveness.

  • The school profiles are clear about enrollment, economic status, and ethnicity, but confusing about how academic performance figures into the schools’ rankings. Fries and Gulf are almost identical: both are about 97 percent Latino, about 90 percent of students receive free or reduced-price lunch, and about 58 percent are English Learners, with their test scores almost as closely matched. Fries outperforms Gulf on reading by eight points, and Gulf is better at math by nine points, yet Gulf is ranked “4/10” on the California performance index while Fries is a “3/10.” Then, to further confuse parents, Fries is characterized as “more effective than average” at instruction, while Gulf is just “average.”
  • There is nothing random about student classroom assignments, yet random assignment is an essential prerequisite for reaching reliable conclusions about individual teachers. For a fuller discussion of the problems this creates, Bruce Baker’s “School Finance 101” blog is a valuable stop. Consider the poverty indicator, eligibility for free or reduced-price lunch. As dozens of evaluations have shown, the intensity of poverty in any school or classroom can affect outcomes significantly, so it would be useful to know the concentration of “free” versus “reduced” lunch students. Then, given that over half of all students are English Learners, we have no confirmation that they are randomly distributed among teachers. It makes a huge difference if one teacher has two English Learner students while another has ten or twelve. And there is no information at all (at least not in the Los Angeles Times profile) about students classified as disabled.

You can bet that classroom assignments next year will be anything but random. Many parents now will lobby the principal to have their child placed only in a classroom taught by a “most” or “more” effective teacher.

Another source of potential unreliability in the Los Angeles Times’ disclosure is that there is no way to determine the educational influence of other teachers, such as reading specialists or bilingual, special education, and English as a Second Language teachers who may offer “pull-out” or in-class tutoring. How should the contribution of these specialized teachers be measured? No one knows how to do that. There also may be summer or after-school programs, offered by the schools or by community organizations, that emphasize reading and math instruction and that some students take and others do not. This information is absent. Most importantly, there is no way to capture the influence of the home environment and the intensity of encouragement offered by parents.

  • The Los Angeles Times’ disclosures are limited to teachers who have taught long enough to have had at least sixty students take the state tests. One would expect that elementary schools, with their self-contained, grade-level classrooms, would have a high percentage of teachers evaluated. In fact, only about one-third of teachers at Gulf and Fries made the cut. This underscores a very large problem for pay-for-performance advocates: the vast majority of teachers do not teach a subject or a grade level that is tested. Bruce Baker’s analysis of New Jersey teacher certification and classroom assignments suggests that at least 80 percent could not be evaluated using standardized tests, and that the same proportion holds true in Illinois, Missouri, Wisconsin, and other states.

VAM advocates are not eager to emphasize this pattern. In most states (California is an exception), teachers from pre-kindergarten through third grade are excluded because no tests are given to establish a baseline until third grade. There is no way to use a high school exit examination in math to judge the contributions of teachers of algebra, algebra II, pre-calculus, and geometry. The same applies to science tests that consolidate physics, chemistry, biology, geology, and environmental studies in one test: no one would think it fair to use those results to judge the teacher of a biology course given two years before the test. In most states, at most grade levels, there are no tests for social studies or science. In no state are there tests for art, music, drama, dance, physical education, psychology, sociology, wellness, media studies, woodworking, computers, business, and so on.

The take-home lesson is that the value-added method is not ready for prime time. Yes, VAM can be used to help identify teachers who might need tailored support, by listing teachers whose students score in the bottom percentiles. This attention would be a part of teacher development and retention objectives, not public accountability.

If teacher unions are willing to have 30 percent of a teacher’s evaluation based on test results (when available), the hard question is how to fashion the remaining 70 percent. Observing teachers as they teach surely should be a part of that effort, but being fair and certain about the reliability of the observers is a problem that must be worked out locally. Analyzing samples of student essays, science experiments, math problem-solving, or spoken French might also be included, but, again, must be worked out locally. These suggestions will not be warmly received by those who believe that teaching is an easy, almost mechanical craft, the results of which can be fairly captured by a single test given once or twice a year.
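To make the arithmetic of such a split concrete, here is a minimal sketch assuming a 30 percent VAM weight and two other, equally weighted components. The component names, the 0–100 scales, and the equal split among the non-VAM measures are hypothetical, not anything unions or districts have agreed to:

```python
# Hypothetical composite teacher evaluation using the 30/70 split discussed
# above. Component names, 0-100 scales, and the equal weighting of the
# non-VAM measures are assumptions for illustration only.

def composite_score(vam_score, observation_score, student_work_score,
                    vam_weight=0.30):
    """Weighted evaluation: VAM score plus equally weighted other measures."""
    other = (observation_score + student_work_score) / 2
    return vam_weight * vam_score + (1 - vam_weight) * other

# Example: strong classroom observations and student work outweigh a
# middling VAM result.
print(round(composite_score(vam_score=50, observation_score=85,
                            student_work_score=75), 1))  # prints 71.0
```

Even this toy version shows why the squishier 70 percent matters: the fairness of the overall score turns on how reliably the observation and student-work components are judged.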

The Los Angeles Times has added to the evidence that, for the value-added method, it is back to the drawing board.
