Monday, October 10, 2011

most instructional technology sucks...

NYTimes | The Web site of Carnegie Learning, a company started by scientists at Carnegie Mellon University that sells classroom software, trumpets this promise: “Revolutionary Math Curricula. Revolutionary Results.”

The pitch has sounded seductive to thousands of schools across the country for more than a decade. But a review by the United States Department of Education last year would suggest a much less alluring come-on: Undistinguished math curricula. Unproven results.

The federal review of Carnegie Learning’s flagship software, Cognitive Tutor, said the program had “no discernible effects” on the standardized test scores of high school students. A separate 2009 federal look at 10 major software products for teaching algebra as well as elementary and middle school math and reading found that nine of them, including Cognitive Tutor, “did not have statistically significant effects on test scores.”

Amid a classroom-based software boom estimated at $2.2 billion a year, debate continues to rage over technology's effect on learning and how best to measure it. But it is hard to tell that from technology companies' promotional materials.

Many companies ignore well-regarded independent studies that test their products’ effectiveness. Carnegie’s Web site, for example, makes no mention of the 2010 review, by the Education Department’s What Works Clearinghouse, which analyzed 24 studies of Cognitive Tutor’s effectiveness but found that only four of those met high research standards. Some firms misrepresent research by cherry-picking results and promote surveys or limited case studies that lack the scientific rigor required by the clearinghouse and other authorities.

“The advertising from the companies is tremendous oversell compared to what they can actually demonstrate,” said Grover J. Whitehurst, a former director of the Institute of Education Sciences, the federal agency that includes What Works.

School officials, confronted with a morass of complicated and sometimes conflicting research, often buy products based on personal impressions, marketing hype or faith in technology for its own sake.

“They want the shiny new one,” said Peter Cohen, chief executive of Pearson School, a leading publisher of classroom texts and software. “They always want the latest, when other things have been proven the longest and demonstrated to get results.”

Carnegie, one of the most respected of the educational software firms, is hardly alone in overpromising or misleading buyers. The Web site of Houghton Mifflin Harcourt says that “based on scientific research, Destination Reading is a powerful early literacy and adolescent literacy program,” but it fails to mention that it was one of the products the Department of Education found in 2009 not to have statistically significant effects on test scores.

Similarly, Pearson’s Web site cites several studies of its own to support its claim that Waterford Early Learning improves literacy, without acknowledging the same 2009 study’s conclusion that it had little impact.

And Intel, in a Web document urging schools to buy computers for every student, acknowledges that “there are no longitudinal, randomized trials linking eLearning to positive learning outcomes.” Yet it nonetheless argues that research shows that technology can lead to more engaged and economically successful students, happier teachers and more involved parents.

“To compare this public relations analysis to a carefully constructed research study is laughable,” said Alex Molnar, professor of education at the National Education Policy Center at the University of Colorado. “They are selling their wares.”