Imagine you're a scientist. You're interested in testing the
hypothesis that playing violent video games makes people more likely to
be violent in real life. This is a straightforward theory, but there are
still many, many different ways you could test it. First you have to
decide which games count as "violent". Does Super Mario Brothers count
because you kill Goombas? Or do you only count "realistic" games like
Call of Duty? Next you have to decide how to measure violent behaviour.
Real violence is rare and difficult to measure, so you'll probably need
to look at lower-level "aggressive" acts – but which ones?
Any scientific study in any domain, from astronomy to biology to
social science, contains countless decisions like this, large and small.
On a given project a scientist will probably end up trying many
different combinations of these choices, generating masses and masses of data.
The problem is that in the final published paper – the only thing you
or I ever get to read – you are likely to see only one result: the one
the researchers were looking for. This is because, in my experience,
scientists often leave complicating information out of published papers,
especially if it conflicts with the overall message they are trying to
get across.
In a large recent study,
around a third of scientists (33.7%) admitted to things like dropping
data points based on a "gut feeling" or selectively reporting results
that "worked" (that showed what their theories predicted). About 70%
said they had seen their colleagues doing this. If this is what they are
prepared to admit to a stranger researching the issue, the real numbers
are probably much, much higher.
It is almost impossible to overstate how big a problem this is for
science. It means that, looking at a given paper, you have almost no
idea how far the results genuinely reflect reality (hint: probably not far).
Pressure to be interesting
At this point, scientists probably sound pretty untrustworthy. But
the scientists aren't really the problem. The problem is the way
scientific research is published – specifically, the pressure all
scientists are under to be interesting.
This problem comes about because science, though mostly funded by
taxpayers, is published in academic journals you have to pay to read.
Like newspapers, these journals are run by private, for-profit
companies. And, like newspapers, they want to publish the most interesting, attention-grabbing articles.
This is particularly true of the most prestigious journals like Science and Nature.
What this means in practice is that journals don't like to publish
negative or mixed results – studies where you predicted you would find
something but actually didn't, or studies where you found a mix of
conflicting results.