You may not recognize names like Amy Cuddy, Kristina Durante, or Brian
Wansink but if you listen to NPR, watch TED talks, or read popular
online news sites or local and national outlets such as the New York
Times, you have probably stumbled across their work. They are among a
growing number of academics who have produced one or more exciting,
novel, too-amazing-to-be-true research studies that have caught the
attention of the media and have been widely disseminated through
American culture to the point that we may have internalized their
findings as fact. Yet their work has since been debunked, shown to be
unscientific and irreproducible. It is all part of what has been dubbed
the “replication crisis” in science. Since replication is one of the
basic tenets of science, failure to reproduce the results of a study
(especially after several attempts) indicates a lack of support for the
original findings. How does this happen time and time again, and what
does it say about science and the news media?
Case 1 – Amy Cuddy
Amy Cuddy’s famous study on how an assertive “power pose” could
elevate testosterone levels and increase a person’s confidence and
risk-taking was published in the prestigious Psychological Science,
one of the top journals in that field. Then a professor at Harvard
Business School, Cuddy went on to give the second-most-popular TED talk
ever, sign a book deal, and travel around the world commanding huge fees
on the lecture circuit based on the general theme of her study. In the
meantime, skeptical researchers Joe Simmons and Uri Simonsohn questioned the veracity of her claims, and Eva Ranehill and colleagues failed to replicate the results of the study. One of Cuddy’s co-authors, Dana Carney, has since withdrawn her support for the study, saying, “I do not believe the effects are real.” But Cuddy, having voluntarily left her academic position, still stands by her work.
In truth, the power pose study is not only a replication failure; it
is also a failure of peer review. No one needs particularly specialized
expertise to see some of the problems with the study. One glance at the
methods section of the paper reveals a sample size of 42, hardly large
enough to provide real statistical power. In addition, as in many
studies, specific subjective proxies
were used to indicate a much more general, supposedly objective,
finding. Here, risk taking was measured by participants’ willingness to
perform a certain gambling task. Yet one’s interest in gambling is not
necessarily directly proportional to one’s interest in other risky
activities. Further, participants’ levels of confidence were
self-reported on a scale of 1-5. Self-reporting is always error-prone,
because your level of “2” may not be equivalent to my level of “2.” And
yet, all of these subjective measurements are treated as concrete,
quantifiable data. Finally, the study assumed no cultural differences;
demonstrations of power or confidence might not be viewed as beneficial
and positive in other cultures the way they are assumed to be in
American culture.
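To put the sample-size complaint in concrete terms, here is a minimal power calculation in Python (using the statsmodels library). The even split of roughly 21 participants per condition and the “medium” effect size of d = 0.5 are illustrative assumptions, not figures taken from the paper; the point is simply how little statistical power a total sample of 42 provides.

    from statsmodels.stats.power import TTestIndPower

    # Illustrative assumptions: two conditions of ~21 participants each,
    # a "medium" standardized effect (Cohen's d = 0.5), and alpha = 0.05.
    analysis = TTestIndPower()
    power = analysis.solve_power(effect_size=0.5, nobs1=21, ratio=1.0, alpha=0.05)
    print(f"Approximate power with 21 per group: {power:.2f}")  # roughly 0.35

By the usual convention researchers aim for power of about 0.8, so under these assumptions a study of this size would miss a real effect of that magnitude roughly two times out of three.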
You can see how the reliability of the study deteriorates under
scrutiny. But no study is perfect. One of the biggest problems with this
study and many similar ones is not just how unreliable the results are,
but that the results are treated as generalizable to everyone
everywhere. If Cuddy had framed the results as provisional and
contingent upon certain assumptions and circumstances, then her
research might have been more defensible, but instead she presented her
shoddy science as universal, immutable fact. This practice appears to be
too widespread.