HuffPo | Artificial intelligence poses an "extinction risk" to human civilisation, an Oxford University professor has said.
Almost everything about the development of genuine AI is uncertain, Stuart Armstrong of the Future of Humanity Institute said in an interview with The Next Web.
That includes when we might develop it, how such a thing could come about and what it means for human society.
But without more research and careful study, it's possible that we could be opening a Pandora's box. That is exactly the sort of thing the Future of Humanity Institute, a multidisciplinary research hub tasked with asking the "big questions" about the future, concerns itself with.
"One of the things that makes AI risk scary is that it’s one of the
few that is genuinely an extinction risk if it were to go bad. With a
lot of other risks, it’s actually surprisingly hard to get to an
extinction risk," Armstrong told The Next Web.