The term “robotics revolution” evokes images of the future: a
not-too-distant future, perhaps, but an era surely distinct from the
present. In fact, that revolution is already well under way. Today,
military robots appear on battlefields, drones fill the skies,
driverless cars take to the roads, and “telepresence robots” allow
people to manifest themselves halfway around the world from their actual
location. But the exciting, even seductive appeal of these
technological advances has overshadowed deep, sometimes uncomfortable
questions about what increasing human-robot interaction will mean for
society.
Robotic technologies that collect, interpret, and respond
to massive amounts of real-world data on behalf of governments,
corporations, and ordinary people will unquestionably advance human
life. But they also have the potential to produce dystopian outcomes. We
are hardly on the brink of the nightmarish futures conjured by
Hollywood movies such as The Matrix or The Terminator,
in which intelligent machines attempt to enslave or exterminate humans.
But those dark fantasies contain a seed of truth: the robotic future
will involve dramatic tradeoffs, some so significant that they could
lead to a collective identity crisis over what it means to be human.
This
is a familiar warning when it comes to technological innovations of all
kinds. But there is a crucial distinction between what’s happening now
and the last great breakthrough in robotic technology, when
manufacturing automatons began to appear on factory floors during the
late twentieth century. Back then, clear boundaries separated industrial
robots from humans: protective fences isolated robot workspaces,
ensuring minimal contact between man and machine, and humans and robots
performed wholly distinct tasks without interacting.
Such barriers
have been breached, not only in the workplace but also in the wider
society: robots now share the formerly human-only commons, and humans
will increasingly interact socially with a diverse ecosystem of robots.
The trouble is that the rich traditions of moral thought that guide
human relationships have no equivalent when it comes to robot-to-human
interactions. And of course, robots themselves have no innate drive to
avoid ethical transgressions regarding, say, privacy or the protection
of human life. How robots interact with people depends to a great degree
on how much their creators know or care about such issues, and robot
creators tend to be engineers, programmers, and designers with little
training in ethics, human rights, privacy, or security. In the United
States, hardly any of the academic engineering programs that grant
degrees in robotics require the in-depth study of such fields.