Video - Dr Strangelove survival plan.
Guardian | Designer viruses, potent new weapons, hurtling asteroids... all have the potential to obliterate humanity. So how do scientists plan for such catastrophes? Scientists have good reason to be wary of fanciful speculation over the safety of their experiments, but some academics claim there are valuable lessons to be learned from the LHC experience, ones that could save us from more realistic catastrophes before the century is out. Nick Bostrom, director of the Future of Humanity Institute at Oxford University, says that advances in fields such as weapons technology, artificial intelligence and synthetic biology (which has already given researchers the tools to create viruses from scratch) could lead to what he calls "existential threats". These are catastrophes that play out on an unprecedented scale, ones that have the potential to bring an end to the human story, either by wiping us out completely, or by "permanently and drastically destroying our future potential".
The creation of a lethal synthetic virus that kills on a global scale is but one potential risk that Bostrom highlights. Breakthroughs in physics could lead to new weapons that increase the dangers of war, he says, while advances in computing could see the advent of machines that can improve their own intelligence, and surpass that of humans. Even attempts to manipulate the atmosphere to combat global warming might backfire and trigger a global disaster.
Bostrom says the LHC should be seen as a test case, used by society to learn how to deal with events and technologies that may genuinely threaten our existence in the future. "So far, we haven't done very well, but events surrounding the LHC could stimulate us into getting our act together for next time, when the threats need to be taken more seriously," he says. "I think the danger from particle accelerators is extremely small, but there will be other areas that will cause major existential risks and we need to learn how to deal with these situations in a rational way."
Existential threats are nothing new. Schoolchildren learn that an asteroid strike wiped out three quarters of Earth's species 65m years ago and promptly ended the reign of the dinosaurs. There have been at least four other mass extinctions, each one the result of an epic natural disaster. The point that intrigues researchers such as Bostrom is that society is bad at identifying dangers such as these, and even worse at preparing for them. In an essay published in the Journal of Evolution and Technology in 2002, Bostrom expressed dismay at how little research has been done on serious threats to humanity, writing: "There is more scholarly work on the life-habits of the dung fly than on existential risks." Little has changed since, he says.