NYTimes | Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.
Except it’s not your mother. The voice on the other end of the phone call just sounds deceptively like her.
It is actually a computer-synthesized voice, a tour de force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.
Such a situation is still science fiction — but just barely. It is also the future of crime.
The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”
The irony, of course, is that this year the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security.
But there is a downside.
“The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of “Future Crimes.” He added, “This is not about Matthew Broderick hacking from his basement,” a reference to the 1983 movie “WarGames.”
The alarm about malevolent use of advanced artificial intelligence technologies was sounded earlier this year by James R. Clapper, the Director of National Intelligence. In his annual review of security threats, Mr. Clapper underscored the point that while A.I. systems would make some things easier, they would also expand the vulnerabilities of the online world.