Technology is, in other words, enabling criminals to target anyone
anywhere and, due to democratization, increasingly at scale. Emerging
bio-, nano-, and cyber-technologies are becoming more and more
accessible. The political scientist Daniel Deudney has a word for what
can result: “omniviolence.” The ratio of killers to killed, or “K/K
ratio,” is falling. For example, computer scientist Stuart Russell has
vividly described
how a small group of malicious agents might engage in omniviolence: “A
very, very small quadcopter, one inch in diameter, can carry a one- or
two-gram shaped charge,” he says. “You can order them from a drone
manufacturer in China. You can program the code to say: ‘Here are
thousands of photographs of the kinds of things I want to target.’ A
one-gram shaped charge
can punch a hole in nine millimeters of steel, so presumably you can
also punch a hole in someone’s head. You can fit about three million of
those in a semi-tractor-trailer. You can drive up I-95 with three trucks
and have 10 million weapons attacking New York City. They don’t have to
be very effective; only 5 or 10% of them have to find the target.”
Manufacturers will be producing millions of these drones, available for
purchase just as with guns now, Russell points out, “except millions of
guns don’t matter unless you have a million soldiers. You need only
three guys to write the program and launch.” In this scenario, the K/K
ratio could be perhaps 3/1,000,000, assuming that 10 percent of the drones
find a target and that each carries only a single one-gram shaped charge.
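To make that arithmetic explicit: 10 million drones with a 10 percent hit rate and one charge apiece would mean roughly a million victims for three attackers. Here is a minimal back-of-the-envelope sketch in Python, using only the figures from Russell's quote (the variable names and the one-victim-per-hit assumption are mine):

```python
# Back-of-the-envelope check of the K/K ratio in Russell's scenario.
# Figures come from the quote above; the 10 percent hit rate is the upper
# end of his "5 or 10%" estimate, and "attackers" is his "three guys to
# write the program and launch."

drones = 10_000_000      # three trucks, roughly 3 million drones each
hit_rate = 0.10          # fraction of drones that find a target
victims_per_hit = 1      # a single one-gram shaped charge per drone (assumed one victim)
attackers = 3            # people needed to program and launch

killed = int(drones * hit_rate * victims_per_hit)
kk_ratio = attackers / killed

print(f"Estimated victims: {killed:,}")                        # 1,000,000
print(f"K/K ratio: {attackers}/{killed:,} = {kk_ratio:.0e}")   # 3e-06
```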
That’s
completely—and horrifyingly—unprecedented. The terrorist or psychopath
of the future, however, will have not just the Internet or drones—called
“slaughterbots” in a video
from the Future of Life Institute—but also synthetic biology,
nanotechnology, and advanced AI systems at their disposal. These tools
make wreaking havoc across international borders trivial, which raises
the question: Will emerging technologies make the state system obsolete?
It’s hard to see why not. What justifies the existence of the state,
English philosopher Thomas Hobbes argued, is a “social contract.” People
give up certain freedoms in exchange for state-provided security,
whereby the state acts as a neutral “referee” that can intervene when
people get into disputes, punish people who steal and murder, and
enforce contracts signed by parties with competing interests.
The
trouble is that if anyone anywhere can attack anyone anywhere else,
then states will become—and are becoming—unable to satisfy their primary
duty as referee. It’s a trend toward anarchy, “the war of all against
all,” as Hobbes put it—in other words, a condition of everyone living in
constant fear of being harmed by their neighbors. Indeed, in a recent paper, “The Vulnerable World Hypothesis,” published in Global Policy,
the Oxford philosopher Nick Bostrom argues that the only way to defend
against a global catastrophe is to employ a universal and invasive
surveillance system, what he calls a “High-tech Panopticon.” Sound
dystopian? It sure does to me. “Creating and operating the High-tech
Panopticon would require substantial investment,” Bostrom writes, “but
thanks to the falling price of cameras, data transmission, storage, and
computing, and the rapid advances in AI-enabled content analysis, it may
soon become both technologically feasible and affordable.” Bostrom is
well aware of the downsides: corrupt actors in a state could exploit this
surveillance for totalitarian ends, or hackers could blackmail
unsuspecting victims. Yet the fact is that it may still be a better
option than suffering one global catastrophe after another.
How can societies counter omniviolence? One strategy
could be a superintelligent machine—essentially, an extremely powerful
algorithm—that’s specifically designed to govern fairly. We could then
put the algorithm in political charge and, insofar as it governs as
something like a “Philosopher King,” not worry constantly about the data
collected being misused or abused. Of course, this is a fantastical
proposal. Even the real-world use of AI in the justice system is fraught
with problems.
But at this point, do we have a better idea for preventing the collapse
of the state system under the weight of widespread technological
empowerment?