Commentary by Kentaro Toyama | The Chronicle of Higher Education


James Yang for The Chronicle

May 19, 2015

In 2004, I moved to India to help found a new research lab for Microsoft. Based in Bangalore, it quickly became a hub for cutting-edge computer science. My own focus shifted with the move, and I began to explore applications of digital technologies for the socioeconomic growth of poor communities. India struggles to educate its billion-plus population, so during the five years that I was there, my team considered how computers, mobile phones, and other devices could aid learning.

Sadly, what we found was that even when technology tested well in experiments, the attempt to scale up its impact was limited by the availability of strong leadership, good teachers, and involved parents — all elements that are unfortunately in short supply in India’s vast but woefully underfunded government school system. In other words, the technology’s value was in direct proportion to the instructor’s capability.

Over time, I came to think of this as technology’s Law of Amplification: Technology helps education where it’s already doing well, does little for mediocre educational systems, and in dysfunctional schools can cause outright harm.

When I returned to the United States and took an academic post, I saw that the idea applies as much to higher education in America as it does to general education in India. This past semester, I taught an undergraduate course called "IT and Global Society." The students read about high-profile projects like One Laptop Per Child and the TED-Prize-winning Hole-in-the-Wall program. Proponents argue that students can overcome educational hurdles with low-cost digital devices, but rigorous research fails to show much educational impact of technology in and of itself, even when offered free.

My students — all undergrads and digital natives — were at first surprised that technology did so little for education. They had a deep sense that they benefited from digital tools. And they were right to have that feeling. As relatively well-off students enrolled at a good university, they were all but guaranteed a solid education; being able to download articles online and exchange emails with their professors amplified the fundamentals.

But their personal intuition didn’t always transfer to other contexts. In fact, even in their own lives, it was easy to show that technology by itself didn’t necessarily cause more learning. To drive this point home, I asked them a series of questions about their own experience:

"How many of you have ever tried to take a free course on the Internet?" Over half the class raised their hands.

"And how many completed it?" All the hands went down.

"Why didn’t you continue?" Most students said they didn’t get past two or three online lectures. Someone mentioned lack of peer pressure to continue. Another suggested it wasn’t worth it without the credits. One student said simply, "I’m lazy. Even in a regular class, I probably wouldn’t do my homework unless I felt the disapproval of the professor."

In effect, the students demonstrated an informal grasp of exactly what studies about educational technologies often find. So, if my tech-immersed undergraduates could intuit the limits of educational technology, why do educators, policy makers, and entrepreneurs keep falling for its false promise?

One problem is a widespread impression that Silicon Valley innovations are necessarily good for society. We confuse business success with social value, though the two often differ. Just for example, how is it that during the last four decades we have seen an explosion of incredible technologies, but America’s poverty rate hasn’t decreased and inequality has skyrocketed? Any idea that more technology in and of itself cures social ills is obviously flawed. Yet without a good framework for thinking about technology and society, it’s easy to get caught up in hype about new gadgets.

The Law of Amplification provides one such framework: At heart, it affirms that technology is a tool, which means that any positive effects depend on well-intentioned, capable people. But this also means that good outcomes are never guaranteed. What amplification predicts is that technological effects follow underlying social currents.

MOOCs offer a convenient example. Proponents cite the potential for MOOCs to lower the costs of education, based on the assumption that low-cost content is what is needed. Of course, the Internet offers dirt-cheap replicability, and it undeniably amplifies content producers’ ability to reach a mass audience. But if free content were all that was needed for an education, everyone with broadband connectivity would be an Ivy League Ph.D.

The real obstacle in education remains student motivation. Especially in an age of informational abundance, getting access to knowledge isn’t the bottleneck; mustering the will to master it is. And there, for good or ill, the main carrot of a college education is the certified degree and transcript, and the main stick is social pressure. Most students are seeking credentials that graduate schools and employers will take seriously and an environment in which they’re prodded to do the work. But neither of these things is cheaply available online.

Arizona State University’s recent partnership with edX to offer MOOCs is an attempt to do this, but if its student assessments fall short (or aren’t tied to verified identities), other universities and employers won’t accept them. And if the program doesn’t establish genuine rapport with students, then it won’t have the standing to issue credible nudges. (Automated text-message reminders to study will quickly become so much spam.) For technological amplification to lower the costs of higher education, it has to build on student motivation, and that motivation is tied not to content availability but to credentialing and social encouragement.

The Law of Amplification’s least appreciated consequence, however, is that technology on its own amplifies underlying socioeconomic inequalities. To begin with, the rich will always be able to afford more technology, and low-cost technology in no way solves that. There is no digital keeping up with the Joneses.

But even an equitable distribution of technology aggravates inequality. Students with poor high-school preparation will always find it hard to learn things their prep-school peers can ace. Low-income families will struggle to pay registration fees that wealthy households barely notice. Blue-collar workers doing hard manual labor may not have the energy to take evening courses that white-collar professionals think of as a hobby. And these things are even more true online than offline. Sure, educational technologies can lower costs for everyone, but it’s those with existing advantages who are best positioned to capitalize on them.

In fact, studies confirm exactly this: Well-educated men with office jobs disproportionately complete MOOCs, while lower-income young adults barely enroll. The primary effect of free online courses is to further educate an already well-educated group, which will pull away from less-educated others. The educational rich just get richer.

So what is to be done? Unfortunately, there is no technological fix, and that is perhaps the hardest lesson of amplification. More technology only magnifies socioeconomic disparities, and the only way to avoid that is nontechnological: Either resolve the underlying inequities first, or create policies that favor the less advantaged.

Kentaro Toyama is an associate professor at the University of Michigan’s School of Information, a fellow of the Dalai Lama Center for Ethics and Transformative Values at MIT, and the author of Geek Heresy: Rescuing Social Change From the Cult of Technology, published this month by PublicAffairs.
