The
BBC Reith Lectures in 1967 were given by Edmund Leach, a Cambridge
social anthropologist. “Men have become like gods,” Leach began. “Isn’t
it about time that we understood our divinity? Science offers us total
mastery over our environment and over our destiny, yet instead of
rejoicing we feel deeply afraid.”
That was nearly half a century ago, and yet Leach’s opening lines
could easily apply today. He was speaking before the internet had
been built and long before the human genome had been decoded, and so his
claim about men becoming “like gods” seems relatively modest compared
with the capabilities that molecular biology and computing have
subsequently bestowed upon us. Our science-based culture is the most
powerful in history, and it is ceaselessly researching, exploring,
developing and growing. But in recent times it also seems to have become
plagued by existential angst as the implications of human ingenuity
begin to be (dimly) glimpsed.
The title that Leach chose for his Reith Lectures – A Runaway World?
– captures our zeitgeist too. We are increasingly
fretful about a world that seems to be running out of control, largely
(but not solely) because of information technology and what the life
sciences are making possible. But we seek consolation in the thought
that “it was always thus”: people felt alarmed about steam in George
Eliot’s time and got worked up about electricity, the telegraph and the
telephone as they arrived on the scene. The reassuring implication is
that we weathered those technological storms, and so we will weather
this one too. Humankind will muddle through.
But in the last five years or so even that cautious, pragmatic
optimism has begun to erode. There are several reasons for this loss of
confidence. One is the sheer vertiginous pace of technological change.
Another is that the new forces at loose in our society – particularly
information technology and the life sciences – are potentially more
far-reaching in their implications than steam or electricity ever were.
And, thirdly, we have begun to see startling advances in these fields
that have forced us to recalibrate our expectations.
A classic example is the field of artificial intelligence (AI),
defined as the quest to enable machines to do things that would require
intelligence if performed by a human. For as long as most of us can
remember, AI in that sense was always 20 years away from the date of
prediction. Maybe it still is. But in the last few years we have seen
that the combination of machine learning, powerful algorithms, vast
processing power and so-called “Big Data” can enable machines to do very
impressive things – real-time language translation, for example, or
driving cars safely through complex urban environments – that seemed
implausible even a decade ago.
And this, in turn, has led to a renewal of excited speculation about
the possibility – and the existential risks – of the “intelligence
explosion” that would be caused by inventing a machine that was capable
of recursive self-improvement. This possibility was first raised in 1965
by the British cryptographer IJ Good, who famously wrote: “The first
ultraintelligent machine is the last invention that man need ever make,
provided that the machine is docile enough to tell us how to keep it
under control.” Fifty years later, thinkers like Nick Bostrom and Murray Shanahan are taking the idea seriously.