Tuesday, September 03, 2013

U P D A T E D: GUESSES AND HYPE GIVE WAY TO DATA IN STUDY OF EDUCATION + smf’s 2¢

By GINA KOLATA | The New York Times | http://nyti.ms/17y4cuO

Tuesday, September 3, 2013 :: What works in science and math education?

Until recently, there had been few solid answers — just guesses and hunches, marketing hype and extrapolations from small pilot studies.

But now, a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.
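For readers who want the mechanics made concrete, here is a minimal sketch of that kind of random assignment, written in Python with invented school names and group labels; it illustrates the general method only and is not drawn from any actual Education Department study.

    import random

    # Hypothetical participating schools (names invented for illustration).
    schools = ["School A", "School B", "School C", "School D",
               "School E", "School F", "School G", "School H"]

    # The four study arms described above.
    arms = ["experimental", "standard", "placebo", "nothing"]

    random.seed(2013)        # fixed seed so the example is reproducible
    random.shuffle(schools)  # put the schools in random order

    # Deal the shuffled schools out round-robin so each arm gets
    # roughly the same number of schools.
    assignment = {arm: [] for arm in arms}
    for i, school in enumerate(schools):
        assignment[arms[i % len(arms)]].append(school)

    for arm, group in assignment.items():
        print(arm + ":", ", ".join(group))

Because the assignment is random rather than chosen by anyone, differences in later outcomes between the groups can be attributed to the programs themselves rather than to which schools happened to sign up for what.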

update :: just to confuse the issue further

Think Education is like Medicine?  Think again!

Commentary by James H. Nehring in EdWeek | http://bit.ly/17DqauE

Aug 28, 2013  ::  There are three bad ideas popular among education writers in the United States right now.
  • First is the idea that American public education should learn from the medical profession.
  • Second is the idea that better skills are the route to higher income.
  • And third is the instructional core, an idea that teaching consists of three elements—teacher, student, and content.

For each of these ideas, there is a better way that will set us on a more constructive path.

Take the idea that American education should learn from the medical profession. This is appealing because over the last hundred years, doctors—bolstered with rigorous medical education, high professional standards, scientific research, and a growing arsenal of powerful drugs—have shown amazing results in healing sick people. It seems logical that if we brought all the same elements to bear on teaching, we could produce similarly impressive results. Indeed, the extensive training of physicians is a useful model, but there's a problem. Education is not like medicine.

In medicine, a doctor treats one patient at a time for a physical or psychological malady. Educators, on the other hand, see large numbers of students all at once, for an extended period of time.

Doctors work mainly in the realm of the biological and chemical. Educators work mainly in the realm of behavior and attitudes.

If we want to compare education to medicine, we should look instead at the field of public health. Teaching children and adolescents is akin to what a community health professional faces in trying to get people to brush their teeth, eat less junk food, and exercise more. While this comparison is more apt, it is less appealing since the United States has epidemic rates of preventable diseases stemming from our poor habits regarding diet and exercise.

Public health in America is a disaster, no doubt for a host of complex reasons that go well beyond anything public health professionals have or have not done. (continue reading >>>)

The findings could be transformative, researchers say. For example, one conclusion from the new research is that the choice of instructional materials — textbooks, curriculum guides, homework, quizzes — can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher’s deficiencies.

So far, the office — the Institute of Education Sciences — has supported 175 randomized studies. Some have already concluded; among the findings are that one popular math textbook was demonstrably superior to three competitors, and that a highly touted computer-aided math-instruction program had no effect on how much students learned.

Other studies are under way.

Cognitive psychology researchers, for instance, are assessing an experimental math curriculum in Tampa, Fla.

The institute gives schools the data they need to start using methods that can improve learning. It has a What Works Clearinghouse — something like a mini Food and Drug Administration, but without enforcement power — that rates evidence behind various programs and textbooks, using the same sort of criteria researchers use to assess effectiveness of medical treatments. Without well-designed trials, such assessments are largely guesswork.

“It’s as if the medical profession worried about the administration of hospitals and patient insurance but paid no attention to the treatments that doctors gave their patients,” the institute’s first director, Grover J. Whitehurst, now of the Brookings Institution, wrote in 2012.

But the “what works” approach has another hurdle to clear: Most educators, including principals, superintendents, and curriculum supervisors, do not know the data exist, much less what they mean.

A survey by the Office of Management and Budget found that just 42 percent of school districts had heard of the clearinghouse. And there is no equivalent of an F.D.A. to approve programs for marketing, or health insurance companies to refuse to pay for treatments that do not work. Nor is it clear that data from rigorous studies will translate into the real world.

There can be many obstacles, says Anthony Kelly, a professor of educational psychology at George Mason University. Teachers may not follow the program, for example. “By all means, yes, we should do it,” he said. “But the issue is not to think that one method can answer all questions about education.” In this regard, other countries are no further along than the United States, researchers say.

They report that only Britain has begun to do the sort of randomized trials that are going on here, with the assistance of American researchers.

As Peter Tymms, the director of the International Performance Indicators in Primary Schools center at Durham University in England, wrote in an e-mail: “The wake-up call was a national realization, less than a decade ago,” that all the money spent on education reform “had almost no impact on basic skills.” Suddenly, scholars who had long argued for randomized trials began to be heard.

In the United States, the effort to put some rigor into education research began in 2002, when the Institute of Education Sciences was created and Dr. Whitehurst was appointed the director. “I found on arriving that the status of education research was poor,” Dr. Whitehurst said. “It was more humanistic and qualitative than crunching numbers and evaluating the impact.

“You could pick up an education journal,” he went on, “and read pieces that reflected on the human condition and that involved interpretations by the authors on what was going on in schools. It was more like the work a historian might do than what a social scientist might do.” At the time, the Education Department had sponsored exactly one randomized trial.

That was a study of Upward Bound, a program that was thought to improve achievement among poor children. The study found it had no effect. So Dr. Whitehurst brought in new people who had been trained in more rigorous fields, and invested in doctoral training programs to nurture a new generation of more scientific education researchers.

He faced heated opposition from some people in schools of education, he said, but he prevailed.

The studies are far from easy to do.

“It is an order of magnitude more complicated to do clinical trials in education than in medicine,” said F. Joseph Merlino, president of the 21st Century Partnership for STEM Education, an independent nonprofit organization. “In education, a lot of what is effective depends on your goal and how you measure it.” Then there is the problem of getting schools to agree to be randomly assigned to use an experimental program or not. “There is an art to doing it,” Mr. Merlino said.

“We don’t usually go and say, ‘Do you want to be part of an experiment?’ We say, ‘This is an important study; we have things to offer you.’”

As the Education Department’s efforts got going over the past decade, a pattern became clear, said Robert Boruch, a professor of education and statistics at the University of Pennsylvania.

Most programs that had been sold as effective had no good evidence behind them. And when rigorous studies were done, as many as 90 percent of programs that seemed promising in small, unscientific studies had no effect on achievement or actually made achievement scores worse.
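The arithmetic behind such a finding is simple in principle. Here is a small, hypothetical sketch in Python, comparing mean test scores in a treatment and a control group and reporting the difference with a rough 95 percent confidence interval; the numbers are invented, and real evaluations use far more elaborate statistical models, but the logic is the same: a difference whose interval comfortably includes zero is “no detectable effect.”

    import math

    # Invented post-test scores, for illustration only.
    treatment = [72, 68, 75, 70, 74, 69, 71, 73]
    control   = [71, 67, 74, 70, 73, 69, 70, 72]

    def mean(xs):
        return sum(xs) / len(xs)

    def variance(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    diff = mean(treatment) - mean(control)

    # Standard error of the difference between two independent means.
    se = math.sqrt(variance(treatment) / len(treatment) +
                   variance(control) / len(control))

    # Rough 95% interval (1.96 is the normal critical value; small samples
    # would properly use a t distribution).
    low, high = diff - 1.96 * se, diff + 1.96 * se
    print("difference in means: %.2f" % diff)
    print("approximate 95%% CI: [%.2f, %.2f]" % (low, high))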

For example, Michael Garet, the vice president of the American Institutes for Research, a behavioral and social science research group, led a study that instructed seventh-grade math teachers in a summer institute, helping them understand the math they teach — such as why you invert and multiply when dividing fractions. The teachers’ knowledge of math improved, but student achievement did not.
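(The invert-and-multiply rule itself has a one-line justification, sketched here in standard notation rather than anything drawn from the study’s materials: dividing by a fraction is multiplying by its reciprocal, because the reciprocal cancels the divisor.)

    \[
    \frac{a}{b} \div \frac{c}{d}
      \;=\; \frac{a/b}{c/d}
      \;=\; \frac{(a/b)\,(d/c)}{(c/d)\,(d/c)}
      \;=\; \frac{(a/b)\,(d/c)}{1}
      \;=\; \frac{a}{b}\cdot\frac{d}{c}.
    \]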

“The professional development had many features people think it should have — it was sustained over time, it involved opportunities to practice, it involved all the teachers in the school,” Dr. Garet said. “But the results were disappointing.” The findings were added to the What Works Clearinghouse.

“There was a joke going around that it was the ‘What Doesn’t Work’ Clearinghouse,” said John Easton, the current director of the Institute of Education Sciences.

Jon Baron, the president of the Coalition for Evidence-Based Policy, a nonprofit, nonpartisan organization, said the clearinghouse “shows why it is important to do rigorous evaluations.” “Most programs claim to be evidence-based,” he said, but most have no good evidence that they work.

Now, though, with a growing body of evidence on what works, researchers wonder how they can get educators and the public to pay attention.

“It’s fascinating what a secret this is,” said Robert Slavin, director of the Center for Research and Reform in Education at Johns Hopkins University. “If you talk to your seatmate on an airplane,” he continued, “100 times out of 100 they will not have heard of it. Invariably they will have loads of opinions about what schools should or shouldn’t do, and they are utterly unaware and uninterested in the idea that there is actual evidence.”

Educators often are not much better, Dr. Slavin said. Too often, they are swayed by marketing or anecdotes or the latest fad. And “invariably,” he added, “folks trying to sell a program will say there is evidence behind it,” even though that evidence is far from rigorous.

Dr. Merlino agreed. “A lot of districts go by the herd mentality,” he said, citing the example of a Singapore-based math program now in vogue that has never been rigorously compared with other programs and found to be better. “Personal anecdote trumps data.”

There are solutions, Dr. Slavin said. The federal government or states could require school districts to use programs that work — when sufficient data are available — or forfeit funds. But “there is very little political drive for that to happen,” he said. Yet he retains a grain of optimism because the Obama administration — as well as the Bush administration, which established the Institute of Education Sciences — says its goal is to enable schools to use programs that have been shown to work. “Sooner or later,” Dr. Slavin said, “this has to become consequential.”


2cents smf:  My first reaction is ‘here we go again’ – being data-driven-to-destruction – made even more unsettling by the content of the third paragraph:

…using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.

…which describes a double-blind randomized clinical trial where half the test subjects get the magic cure and the other half get the same-old/same-old.

The test subjects being actual children. Your children. My children. Our children.

  • What kind of strategy is anticipated or in place to recover the lost opportunity for the kids who got the short end of the deal – whether they got the magic cure that didn’t work or the placebo when the cure did? Especially when we know that student recovery is the only thing the system does worse than communicating with parents?
  • What is the cost in lost opportunity for how many kids?
  • What is the cost in money to do the remediation for the half of the test subjects who didn’t get the luck of the draw? And who pays for it?

Are parents and guardians informed that their children are test subjects in a clinical trial? Or is this a social experiment on the unknowing – like eugenics or any of the “scientific” social engineering experiments performed on inmates in the bad old days? The CIA experiments with torture, sensory deprivation and LSD (MK-ULTRA) come to mind. Or the Stateville Penitentiary Malaria Study.

I probably am overreacting.

Here’s another concern, based on Paragraph 4:

“…one conclusion from the new research is that the choice of instructional materials — textbooks, curriculum guides, homework, quizzes — can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher’s deficiencies.”

Once we get beyond the “bad teacher” bogeyman* – let’s apply what we seem to be learning here to the content of the LAUSD iPads for Everyone initiative.

Where is the evidence that the Pearson content – which, rather than being new, is actually a couple of years old (thus predating the Common Core) – is the right/best/‘least bad’ content?

How has it been tested against this new scientific paradigm of educational material selection?

_____

* OMG! What could be worse?
