The big question that I'm asking myself these days is how can we make
a human artificial intelligence? Something that is not a machine, but
rather a cyber culture that we can all live in as humans, with a human
feel to it. I don't want to think small—people talk about robots and
stuff—I want this to be global. Think Skynet. But how would you make
Skynet something that's really about the human fabric?
The first thing you have to ask is what's the magic of the current AI? Where is it wrong and where is it right?
The good magic is that it has something called the credit assignment
function. What that lets you do is take stupid neurons, these little
linear functions, and figure out, in a big network, which ones are doing
the work and encourage them more. It's a way of taking a random bunch
of things that are all hooked together in a network and making them
smart by giving them feedback about what works and what doesn't. It
sounds pretty simple, but it's got some complicated math around it.
That's the magic that makes AI work.
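To make this concrete, here is a minimal sketch (not from the talk, purely illustrative) of credit assignment over a pool of "stupid neurons": random linear functions whose mixing weights are nudged up or down by error feedback, so the network learns which units are doing the work. All the names and numbers here are invented for the example.

```python
import random

random.seed(0)

# A pool of "stupid neurons": random linear functions a*x + b.
neurons = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]

def neuron_out(n, x):
    a, b = n
    return a * x + b

# Credit: how much each neuron currently contributes to the answer.
credit = [0.0] * len(neurons)

def predict(x):
    return sum(c * neuron_out(n, x) for c, n in zip(credit, neurons))

# The function the network should learn.
def target(x):
    return 3.0 * x + 1.0

# Credit assignment: after each example, move every neuron's weight in
# proportion to how much that neuron's output contributed to the error
# (a stochastic gradient step on the mixing weights).
lr = 0.01
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
for epoch in range(2000):
    for x in xs:
        err = predict(x) - target(x)
        for i, n in enumerate(neurons):
            credit[i] -= lr * err * neuron_out(n, x)

print(abs(predict(0.5) - target(0.5)))  # small: the mixture fits the target
```

The individual units stay dumb throughout; all the intelligence is in the feedback loop that decides which units to encourage.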
The bad part of that is, because those little neurons are stupid, the
things that they learn don't generalize very well. If it sees something
that it hasn't seen before, or if the world changes a little bit, it's
likely to make a horrible mistake. It has absolutely no sense of
context. In some ways, it's as far from Wiener's original notion of
cybernetics as you can get because it's not contextualized: it's this
little idiot savant.
But imagine that you took away these limitations of current AI.
Instead of using dumb neurons, you used things that embedded some
knowledge. Maybe instead of linear neurons, you used neurons that were
functions in physics, and you tried to fit physics data. Or maybe you
put in a lot of stuff about humans and how they interact with each
other, the statistics and characteristics of that. When you do that and
add this credit assignment function, using the set of things you know
about (either physics or humans) plus a bunch of data to reinforce the
functions that are working, you get an AI that works extremely well and
can generalize.
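Here is a hypothetical illustration of that contrast, not taken from the talk: if you know the physical form of free fall, d = ½gt², then fitting the data means estimating a single parameter, and a couple of noisy points suffice. The measurements below are invented for the example.

```python
# Noisy drop measurements: time (s) and distance fallen (m).
times = [0.5, 1.0, 1.5]
dists = [1.27, 4.85, 11.1]

# Physics knowledge built in: d = 0.5 * g * t**2, with g the only
# unknown. One-parameter least squares has a closed form.
num = sum(d * 0.5 * t**2 for d, t in zip(dists, times))
den = sum((0.5 * t**2) ** 2 for t in times)
g = num / den

print(round(g, 1))  # close to 9.8 from just three noisy points
```

A generic network would need far more data to discover the quadratic shape that the physics-aware model gets for free.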
In physics, you can take a couple of noisy data points and get
something that's a beautiful description of a phenomenon because you're
putting in knowledge about how physics works. That's in huge contrast to
normal AI, which takes millions of training examples and is very
sensitive to noise. The same is true of the work we've done with humans,
where you can put in knowledge about how people come together and how
fads happen. Suddenly, you find you can detect fads and predict trends
in spectacularly accurate and efficient ways.
Human behavior is determined as much by the patterns of our culture
as by rational, individual thinking. These patterns can be described
mathematically and used to make accurate predictions. We’ve taken this
new science of “social physics” and expanded upon it, making it
accessible and actionable by developing a platform that uses big data to
build a predictive, computational theory of human behavior.
The idea of a credit assignment function, reinforcing “neurons” that
work, is the core of current AI. And if you make those little neurons
that get reinforced smarter, the AI gets smarter. So, what would happen
if the neurons were people? People have lots of capabilities; they know
lots of things about the world; they can perceive things in a human way.
What would happen if you had a network of people where you could
reinforce the ones that were helping and maybe discourage the ones that
weren't?
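One way to picture such a network, sketched here as a toy (the people, forecasts, and update rule are all invented for illustration), is multiplicative-weights credit assignment: each person starts with equal credit, and whoever gets things wrong sees their influence shrink.

```python
# People as the "neurons" of the network.
people = ["alice", "bob", "carol"]
weights = {p: 1.0 for p in people}

def group_forecast(forecasts):
    """The network's answer: a credit-weighted vote of its members."""
    total = sum(weights.values())
    return sum(weights[p] * forecasts[p] for p in people) / total

def give_credit(forecasts, outcome, eta=0.5):
    """Reinforce the helpful members, discourage the rest: anyone who
    got the outcome wrong has their weight multiplied down."""
    for p in people:
        loss = abs(forecasts[p] - outcome)  # 0 = right, 1 = wrong
        weights[p] *= (1 - eta) ** loss

# Toy rounds of (individual forecasts, actual outcome).
rounds = [
    ({"alice": 1, "bob": 0, "carol": 1}, 1),
    ({"alice": 0, "bob": 1, "carol": 0}, 0),
    ({"alice": 1, "bob": 0, "carol": 0}, 0),
]
for forecasts, outcome in rounds:
    give_credit(forecasts, outcome)

print(weights)  # carol, always right, ends with the most credit
```

The group forecast then automatically leans on whoever has been helping, which is exactly the credit assignment function applied to people instead of linear units.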