Saturday, August 01, 2009

missing the singularity

H+ | For more than a decade the artificial life community has developed metrics for computer power, complex systems intelligence, and a broader metaphysics that is distinctly different from what we hear from most advocates of "The Singularity movement." When contrasted with theoretical Singularity works like Nick Bostrom's "Are You Living in a Computer Simulation?", artificial life challenges Singularity thinking in a number of ways. I would like to offer two particular challenges.

1: Survival is a far better metric of intelligence than replicating human intelligence, and...

2: There are a number of examples of systems that are, in terms of survival, vastly more intelligent than humans.

These challenges have not come through a priori philosophical posturing but from years of simulation and the iterative understanding and discourse that have emerged within the artificial life community.

The primacy of human intelligence is one of the last and greatest myths of the anthropomorphic divide -- the division between humans and all other (living) things. Like most fallacies, it provides careers and countless treatises regarding paradoxes that can be explored at great length, leading to the warm and fuzzy conclusion that the human is still on top. If only it were so.

First Insight: Survival is intelligence.
When choosing a metric for survival intelligence, I was drawn to Teddy Roosevelt's accounts of hunting big game in the early 1900s. Roosevelt's analysis concerned the size (or caliber) of bullet required to stop a large animal. I was interested in an analogous measure: the number of humans required to stop a vastly complex system. If there were to be a similar caliber of intelligence based on stopping such a system, why not make it a human-centric metric? To paraphrase Roosevelt:

It took but ten humans to slay this system.

Due to the rough nature of the approximation, I employed a base-10 logarithmic scale. If it took one human to slay the system, the survival intelligence value would be zero. If it took ten humans, the value would be one. If it took a hundred humans, the value would be two.
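To make the scale concrete, here is a minimal sketch in Python that assumes only the base-10 rule above; the function name and the sample counts are mine, purely for illustration:

```python
import math

def survival_intelligence(humans_to_stop: int) -> float:
    """Survival intelligence as the base-10 logarithm of the number
    of humans required to stop ('slay') the system."""
    if humans_to_stop < 1:
        raise ValueError("the metric assumes at least one human")
    return math.log10(humans_to_stop)

# Reproduces the values described above: 1 -> 0, 10 -> 1, 100 -> 2.
for count in (1, 10, 100):
    print(count, survival_intelligence(count))
```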

My second insight comes from the need to normalize the definition of simulation. When the physicist, the biologist, the lawyer, or the accountant goes to work, there is no bright, glaring light shining down on them, constantly reminding them that what they are doing is not, in fact, reality but something conducted within the broad constraints that have historically and intellectually been applied to them. Through my editorial duties with Biota.org, I raised the idea that simulation authors should stop drawing a sharp division between what they do and reality. What was needed instead was a pluralistic view of simulation. The definition I offered was simple:

Second Insight: A simulation is any environment with applied constraints.
This definition showed that nearly everything was fair game for simulation analysis. The legal system, the road system, the health care system, the financial system, even the internet could be analyzed and parametrized with the insights from studying simulations.
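One way to read the definition literally is as a data structure: an environment's state plus the constraints applied to it. The sketch below is only an illustration of that reading, with hypothetical names that come from me rather than from Biota.org:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A constraint is any predicate over the environment's state.
Constraint = Callable[[Dict[str, float]], bool]

@dataclass
class Simulation:
    """An environment (its state) together with its applied constraints."""
    state: Dict[str, float]
    constraints: List[Constraint] = field(default_factory=list)

    def satisfies_constraints(self) -> bool:
        return all(check(self.state) for check in self.constraints)

# Under this reading, a road system is a simulation: vehicle positions and
# speeds are the environment, speed limits and lane rules are the constraints.
road_system = Simulation(
    state={"vehicle_speed_kph": 95.0},
    constraints=[lambda s: s["vehicle_speed_kph"] <= 110.0],
)
print(road_system.satisfies_constraints())  # True
```

Under this reading, the legal system, the road system, or the financial system differ only in what state they track and which constraints they apply.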

Combining the survival metric of intelligence with the idea that nearly anything is fair game for it, let's explore a couple of examples.
