Sunday, November 16, 2014
the myth of AI
edge | A lot of us were appalled a few years ago when the American Supreme
Court decided, out of the blue, to decide a question it hadn't been
asked to decide, and declare that corporations are people. That's a
cover for making it easier for big money to have an influence in
politics. But there's another angle to it, which I don't think has been
considered as much: the tech companies, which are becoming the most
profitable, the fastest rising, the richest companies, with the most
cash on hand, are essentially people for a different reason than that.
They might be people because the Supreme Court said so, but they're
essentially algorithms.
If you look at companies like Google or Amazon, and many others, they
do a little bit of device manufacture, but the only reason they do so is
to create a channel between people and algorithms. And the algorithms run
on these big cloud computer facilities.
The distinction between a corporation and an algorithm is fading.
Does that make an algorithm a person? Here we have this interesting
confluence between two totally different worlds. We have the world of
money and politics and the so-called conservative Supreme Court, with
this other world of what we can call artificial intelligence, which is a
movement within the technical culture to find an equivalence between
computers and people. In both cases, there's an intellectual tradition
that goes back many decades. Previously they'd been separated; they'd
been worlds apart. Now, suddenly they've been intertwined.
The idea that computers are people has a long and storied history. It
goes back to the very origins of computers, and even from before.
There's always been a question about whether a program is something
alive or not since it intrinsically has some kind of autonomy at the
very least, or it wouldn't be a program. There has been a domineering
subculture—that's been the most wealthy, prolific, and influential
subculture in the technical world—that for a long time has not only
promoted the idea that there's an equivalence between algorithms and
life, and certain algorithms and people, but a historical determinism
that we're inevitably making computers that will be smarter and better
than us and will take over from us.
That mythology, in turn, has spurred a reactionary, perpetual spasm
from people who are horrified by what they hear. You'll have a figure
say, "The computers will take over the Earth, but that's a good thing,
because people had their chance and now we should give it to the
machines." Then you'll have other people say, "Oh, that's horrible, we
must stop these computers." Most recently, some of the most beloved and
respected figures in the tech and science world, including Stephen
Hawking and Elon Musk, have taken that position of: "Oh my God, these
things are an existential threat. They must be stopped."
In the past, all kinds of different figures have proposed that this
kind of thing will happen, using different terminology. Some of them
like the idea of the computers taking over, and some of them don't. What
I'd like to do here today is propose that the whole basis of the
conversation is itself askew, and confuses us, and does real harm to
society and to our skills as engineers and scientists.
A good starting point might be the latest round of anxiety about
artificial intelligence, which has been stoked by some figures who I
respect tremendously, including Stephen Hawking and Elon Musk. And the
reason it's an interesting starting point is that it's one entry point
into a knot of issues that can be understood in a lot of different ways,
but it might be the right entry point for the moment, because it's the
one that's resonating with people.
The usual sequence of thoughts you have here is something like:
"so-and-so," who's a well-respected expert, is concerned that the
machines will become smart, they'll take over, they'll destroy us,
something terrible will happen. They're an existential threat, whatever
scary language there is. My feeling about that is it's a kind of
non-optimal, silly way of expressing anxiety about where technology is
going. The particular thing about it that isn't optimal is the way it
talks about an end of human agency.
But it's a call for increased human agency, so in that sense maybe
it's functional. Still, I want to go a little deeper into it by proposing that
the biggest threat of AI is probably the one that's due to AI not
actually existing, to the idea being a fraud, or at least such a poorly
constructed idea that it's phony. In other words, what I'm proposing is
that if AI were a real thing, then it probably would be less of a threat
to us than it is as a fake thing.
What do I mean by AI being a fake thing? That it adds a layer of
religious thinking to what otherwise should be a technical field. Now,
if we talk about the particular technical challenges that AI researchers
might be interested in, we end up with something that sounds a little
duller and makes a lot more sense.
By CNu at November 16, 2014
Labels: egregores