Wednesday, February 23, 2011

it's me or it's chaos!

Time | There's been virtually no reliable information coming out of Tripoli, but a source close to the Gaddafi regime I did manage to get hold of told me the already terrible situation in Libya will get much worse. Among other things, Gaddafi has ordered security services to start sabotaging oil facilities. They will start by blowing up several oil pipelines, cutting off flow to Mediterranean ports. The sabotage, according to the insider, is meant to serve as a message to Libya's rebellious tribes: It's either me or chaos.

Two weeks ago this same man had told me the uprisings in Tunisia and Egypt would never touch Libya. Gaddafi, he said, had a tight lock on all of the major tribes, the same ones that have kept him in power for the past 41 years. The man of course turned out to be wrong, and everything he now has to say about Gaddafi's intentions needs to be taken in that context.

The source went on to tell me that Gaddafi's desperation has a lot to do with the fact that he can now count only on the loyalty of his own tribe, the Qadhadhfa. As for the army, as of Monday he commands the loyalty of only about 5,000 troops. They are his elite forces, their officers all handpicked. Among them is the 32nd Brigade, the unit commanded by his second-youngest son, Khamis. (The total strength of the regular Libyan army is 45,000.)

My Libyan source said that Gaddafi has told people around him that he knows he cannot retake Libya with the forces he has. But what he can do is make the rebellious tribes and army officers regret their disloyalty, turning Libya into another Somalia. "I have the money and arms to fight for a long time," Gaddafi reportedly said.

As part of the same plan to turn the tables, Gaddafi ordered the release from prison of the country's Islamic militant prisoners, hoping they will act on their own to sow chaos across Libya. Gaddafi envisages them attacking foreigners and rebellious tribes. Couple that with a shortage of food supplies, and any chance for the rebels to replace Gaddafi will be remote.

My Libyan source said that in order to understand Gaddafi's state of mind we need to understand that he feels deeply betrayed by the media, which he blames for sparking the revolt. In particular, he blames the Qatari TV station al-Jazeera, and is convinced it targeted him for purely political motivations. He also feels betrayed by the West because it has only encouraged the revolt. Over the weekend, he warned several European embassies that if he falls, the consequence will be a flood of African immigration that will "swamp" Europe.

Pressed, my Libyan source acknowledged that Gaddafi is a desperate, irrational man, and that his threats to turn Libya into another Somalia may at this point be mostly bluff. On the other hand, if Gaddafi in fact enjoys the loyalty of the troops he thinks he has, he could very well take Libya to the brink of civil war, if not over it.

wikileaks cables portray libyan profligacy


Video - Interesting take on Libyan first family.

NYTimes | After New Year’s Day 2009, Western media reported that Seif al-Islam el-Qaddafi, a son of the Libyan leader Col. Muammar el-Qaddafi, had paid Mariah Carey $1 million to sing just four songs at a bash on the Caribbean island of St. Barts.

In the newspaper he controlled, Seif indignantly denied the report — the big spender, he said, was his brother, Muatassim, Libya’s national security adviser, according to an American diplomatic cable from the capital, Tripoli.

It was Muatassim, too, the cable said, who had demanded $1.2 billion in 2008 from the chairman of Libya’s national oil corporation, reportedly to establish his own militia. That would let him keep up with yet another brother, Khamis, commander of a special-forces group that “effectively serves as a regime protection unit.”

As the Qaddafi clan conducts a bloody struggle to hold on to power in Libya, cables obtained by WikiLeaks offer a vivid account of the lavish spending, rampant nepotism and bitter rivalries that have defined what a 2006 cable called “Qadhafi Incorporated,” using the State Department’s preferred choice among the multiple spellings of the troubled first family’s name.

The glimpses of the clan’s antics in recent years that have reached Libyans despite Col. Qaddafi’s tight control of the media have added to the public anger now boiling over. And the tensions between siblings could emerge as a factor in the chaos in the oil-rich African country.

Though the Qaddafi children are described as jockeying for position as their father ages — three sons fought to profit from a new Coca-Cola franchise — they have been well taken care of, cables say. “All of the Qaddafi children and favorites are supposed to have income streams from the National Oil Company and oil service subsidiaries,” one cable from 2006 says.

A year ago, a cable reported that proliferating scandals had sent the clan into a “tailspin” and “provided local observers with enough dirt for a Libyan soap opera.” Muatassim had repeated his St. Barts New Year’s fest, this time hiring the pop singers Beyoncé and Usher. An unnamed “local political observer” in Tripoli told American diplomats that Muatassim’s “carousing and extravagance angered some locals, who viewed his activities as impious and embarrassing to the nation.”

curious counter-narrative...,

Guardian | Who among the first evangelists of the internet foresaw this? When they gushingly described the still emerging technology as "transformational", it was surely the media or information, rather than political, landscapes they had in mind. And yet now it is the hard ground of the Middle East, not just our reading habits or entertainment options, that is changing before our eyes – thanks, at least in part, to the internet.

Take the Tunisia uprising that started it all. Those close to it insist a crucial factor was not so much the WikiLeaks revelations of presidential corruption that I mentioned here last week, but Facebook. It was on Facebook that the now legendary Bouazizi video – showing a vegetable seller burning himself to death – was posted, and on Facebook that subsequent demonstrations were organised. Who knows, if the people of Tunis one day build a Freedom Square, perhaps they'll make room for a statue of Mark Zuckerberg. If that sounds fanciful, note the Egyptian newborns named simply "Facebook". (Not that we should get carried away with the notion of internet as liberator: dictators have found it useful, too.)

But what about the rest of us, those unlikely ever to go online to organise an insurrection? What has been the transformative effect on us? Or to borrow the title of the latest of many books chewing on this question, how is the internet changing the way you think?

Given the subject I thought it wise to engage in a little light crowd-sourcing, floating that question on Twitter. As if to vindicate the "wisdom of crowds" thesis often pressed by internet cheerleaders, the range of responses mirrored precisely the arguments raised in the expert essays collected by editor John Brockman in the new book.

There are the idealists, grateful for a tool that has enabled them to think globally. They are now plugged into a range of sources, access to which would once have required effort, expense and long delays. It's not just faraway information that is within reach, but faraway people – activists are able to connect with like-minded allies on the other side of the world. As Newsnight's Paul Mason noted recently: "During the early 20th century people would ride hanging on the undersides of train carriages across borders just to make links like these."

It's this possibility of cross-border collaboration that has the internet gurus excited, as they marvel at open-source efforts such as the Linux computer operating system, with knowledge traded freely across the globe. Richard Dawkins even imagines a future when such co-operation is so immediate, so reflexive, that our combined intelligence comes to resemble a single nervous system: "A human society would effectively become one individual," he writes.

arab democratic revolution far from over


Video - Amy Goodman speaks with Marwan Bishara, senior political analyst at Al Jazeera English, and MIT Professor Emeritus Noam Chomsky.

Guardian | It is self-evidently democratic. To be sure, other factors, above all the socio-economic, greatly fuelled it, but the concentration on this single aspect of it, the virtual absence of other factional or ideological slogans has been striking. Indeed, so striking that, some now say, this emergence of democracy as an ideal and politically mobilising force amounts to nothing less than a "third way" in modern Arab history. The first was nationalism, nourished by the experience of European colonial rule and all its works, from the initial great carve-up of the "Arab nation" to the creation of Israel, and the west's subsequent, continued will to dominate and shape the region. The second, which only achieved real power in non-Arab Iran, was "political Islam", nourished by the failure of nationalism.

And it is doubly revolutionary. First, in the very conduct of the revolution itself, and the sheer novelty and creativity of the educated and widely apolitical youth who, with the internet as their tool, kindled it. Second, and more conventionally, in the depth, scale and suddenness of the transformation in a vast existing order that it seems manifestly bound to wreak.

Arab, yes – but not in the sense of the Arabs going their own way again. Quite the reverse. No other such geopolitical ensemble has so long boasted such a collection of dinosaurs, such inveterate survivors from an earlier, totalitarian era; no other has so completely missed out on the waves of "people's power" that swept away the Soviet empire and despotisms in Latin America, Asia and Africa. In rallying at last to this now universal, but essentially western value called democracy, they are in effect rejoining the world, catching up with history that has left them behind.

If it was in Tunis that the celebrated "Arab street" first moved, the country in which – apart from their own – Arabs everywhere immediately hoped that it would move next was Egypt. That would amount to a virtual guarantee that it would eventually come to them all. For, most pivotal, populous and prestigious of Arab states, Egypt was always a model, sometimes a great agent of change, for the whole region. It was during the nationalist era, after President Nasser's overthrow of the monarchy in 1952, that it most spectacularly played that role. But in a quieter, longer-term fashion, it was also the chief progenitor, through the creation of the Muslim Brotherhood, of the "political Islam" we know today, including – in both the theoretical basis as well as substantially in personnel – the global jihad and al-Qaida that were to become its ultimate, deviant and fanatical descendants.

But third, and most topically, it was also the earliest and most influential exemplar of the thing which, nearly 60 years on, the Arab democratic revolution is all about. Nasser did seek the "genuine democracy" that he held to be best fitted for the goals of his revolution. But, for all its democratic trappings, it was really a military-led, though populist, autocracy from the very outset; down the years it underwent vast changes of ideology, policy and reputation, but, forever retaining its basic structures, it steadily degenerated into that aggravated, arthritic, deeply oppressive and immensely corrupt version of its original self over which Hosni Mubarak presided. With local variations, the system replicated itself in most Arab autocracies, especially the one-time revolutionary ones like his, but in the older, traditional monarchies too.

And, sure enough, Egypt's "street" did swiftly move, and in nothing like the wild and violent manner that the image of the street in action has always tended to conjure up in anxious minds. As a broad and manifestly authentic expression of the people's will, it accomplished the first, crucial stage of what surely ranks as one of the most exemplary, civilised uprisings in history. The Egyptians feel themselves reborn, the Arab world once more holds Egypt, "mother of the world", in the highest esteem. And finally – after much artful equivocation as they waited to see whether the pharaoh, for 30 years the very cornerstone of their Middle East, had actually fallen – President Obama and others bestowed on them the unstinting official tributes of the west.

These plaudits raise the great question: if the Arabs are now rejoining the world what does it mean for the world?

Tuesday, February 22, 2011

the mushroom in christian art

In The Mushroom in Christian Art, author John A. Rush uses an artistic motif to define the nature of Christian art, establish the identity of Jesus, and expose the motive for his murder. Covering Christian art from 200 CE (common era) to the present, the author reveals that Jesus, the Teacher of Righteousness mentioned in the Dead Sea Scrolls, is a personification of the Holy Mushroom, Amanita muscaria. The mushroom, Rush argues, symbolizes numerous mind-altering substances—psychoactive mushrooms, cannabis, henbane, and mandrake—used by the early, more experimentally minded Christian sects.

Drawing on primary historical sources, Rush traces the history—and face—of Jesus as being constructed and codified only after 325 CE. The author relates Jesus’s life to a mushroom typology, discovering its presence, disguised, in early Christian art. In the process, he reveals the ritual nature of the original Christian cults, rites, and rituals, including mushroom use. The book authoritatively uncovers Jesus’s message of peace, love, and spiritual growth and proposes his murder as a conspiracy by powerful reactionary forces who would replace that message with the oppressive religious-political system that endures to this day. Rush’s use of the mushroom motif as a springboard for challenging mainstream views of Western religious history is both provocative and persuasive.

the neuronal replicator hypothesis


MIT Press | We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cognitive architectures. Replicators overcome some limitations of selectionist models of neuronal search. Hebbian learning is combined with replication to structure exploration on the basis of associations learned in the past. Neuromodulatory gating of sets of bistable neurons allows patterns of activation to be copied with mutation. If the probability of copying a set is related to the utility of that set, then an evolutionary algorithm can be implemented at rapid timescales in the brain. Populations of neuronal replicators can undertake a more rapid and stable search than can be achieved by serial modification of a single solution. Hebbian learning added to neuronal replication allows a powerful structuring of variability capable of learning the location of a global optimum from multiple previously visited local optima. Replication of solutions can solve the problem of catastrophic forgetting in the stability-plasticity dilemma. In short, neuronal replication is essential to explain several features of flexible cognition. Predictions are made for the experimental validation of the neuronal replicator hypothesis.
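The core loop is easier to see in miniature. Below is a toy Python sketch of the kind of evolutionary algorithm the abstract describes, with activity patterns reduced to bitstrings, "utility" to a made-up scoring function, and copying-with-mutation to fitness-proportional sampling; the target pattern, population size and mutation rate are illustrative assumptions, not the authors' neural model.

```python
import random

# Toy stand-in for a neuronal activity pattern: a bitstring.
# Utility = number of bits matching a hidden target (a stand-in
# for whatever payoff the brain would be tracking).
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def utility(pattern):
    return sum(p == t for p, t in zip(pattern, TARGET))

def mutate(pattern, rate=0.05):
    # Imperfect copying: each "neuron state" flips with small probability.
    return [1 - b if random.random() < rate else b for b in pattern]

def step(population):
    # Probability of copying a pattern grows with its utility
    # (fitness-proportional selection, as in the abstract).
    weights = [utility(p) + 1 for p in population]
    parents = random.choices(population, weights=weights, k=len(population))
    return [mutate(p) for p in parents]

population = [[random.randint(0, 1) for _ in range(len(TARGET))]
              for _ in range(30)]
for _ in range(50):
    population = step(population)

print(max(utility(p) for p in population))  # usually finds the optimum, 8
```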

hebbian theory

Wikipedia | Hebbian theory describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in 1949, it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states:
Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
The theory is often summarized as "cells that fire together, wire together", a simplified and figurative way of putting it. It attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Such learning is known as Hebbian learning.
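The rule itself fits in one line of code. Here is a minimal numpy sketch; the learning rate and input statistics are illustrative assumptions, and the plain rule grows weights without bound, which is why stabilised variants such as Oja's rule are used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_step(w, x, eta=0.01):
    # dw = eta * (postsynaptic activity) * (presynaptic activity):
    # connections between co-active cells get stronger.
    y = w @ x
    return w + eta * y * x

w = rng.normal(scale=0.1, size=4)
for _ in range(100):
    # Inputs 0 and 1 tend to fire together; inputs 2 and 3 are just noise.
    x = rng.normal(size=4) + np.array([1.0, 1.0, 0.0, 0.0])
    w = hebbian_step(w, x)

print(w)  # the weights onto the two co-active inputs dominate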

Monday, February 21, 2011

a really elaborate hardware sales pitch...,

The Economist | Four years in the making, Watson is the brainchild of David Ferrucci, head of the DeepQA project at IBM’s research centre in Yorktown Heights, New York. Dr Ferrucci and his team have been using search, semantics and natural-language processing technologies to improve the way computers handle questions and answers in plain English. That is easier said than done. In parsing a question, a computer has to decide what is the verb, the subject, the object, the preposition as well as the object of the preposition. It must disambiguate words with multiple meanings, by taking into account any context it can recognise. When people talk among themselves, they bring so much contextual awareness to the conversation that answers become obvious. “The computer struggles with that,” says Dr Ferrucci.
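To get a feel for that parsing step, here is a minimal sketch using the open-source spaCy library; it is purely illustrative (spaCy is not IBM's DeepQA pipeline) and the example question is invented.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Which president signed the treaty in Paris?")
for token in doc:
    # dep_ is the grammatical role (subject, object, preposition...);
    # head is the word this token attaches to.
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
```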

Another problem for the computer is copying the facility the human brain has to use experience-based short-cuts (heuristics) to perform tasks. Computers have to do this using lengthy step-by-step procedures (algorithms). According to Dr Ferrucci, it would take two hours for one of the fastest processors to answer a simple natural-language question. To stand any chance of winning, contestants on “Jeopardy!” have to hit the buzzer with a correct answer within three seconds. For that reason, Watson was endowed with no fewer than 2,880 POWER7 processor cores spread over 90 Power 750 servers. Flat out, the machine can perform 80 trillion calculations a second. For comparison’s sake, a modern PC can manage around 100 billion calculations a second.

For the contest, Watson had to rely entirely on its own resources. That meant no searching the internet for answers or asking humans for help. Instead, it used more than 100 different algorithms to parse the natural-language questions and interrogate the 15 trillion bytes of trivia stored in its memory banks—equivalent to 200m pages of text. In most cases, Watson could dredge up answers quicker than either of its two human rivals. When it was not sure of the answer, the computer simply shut up rather than risk losing the bet. That way, it avoided impulsive behaviour that cost its opponents points.

Your correspondent finds it rather encouraging that a machine has beaten the best in the business. After all, getting a computer to converse with humans in their own language has been an elusive goal of artificial intelligence for decades. Making it happen says more about human achievement than anything spooky about machine dominance. And should a machine manage the feat without the human participants in the conversation realising they are not talking to another person, then the machine would pass the famous test for artificial intelligence devised in 1950 by Alan Turing, the British mathematician famous for his part in cracking the German Enigma ciphers during the second world war.

It is only a matter of time before a computer passes the Turing Test. It will not be Watson, but one of its successors doubtless will. Ray Kurzweil, a serial innovator, engineer and prognosticator, believes it will happen by 2029. He notes that it was only five years after the massive and hugely expensive Deep Blue beat Mr Kasparov in 1997 that Deep Fritz was able to achieve the same level of performance by combining the power of just eight personal computers. In part, that was because of the inexorable effects of Moore’s Law, which halves the cost of a given amount of computing power roughly every 18 months. It was also due to the vast improvements in pattern-recognition software used to make the crucial tree-pruning decisions that determine successful moves and countermoves in chess.
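That five-year figure is easy to sanity-check: at a halving every 18 months, five years compounds to roughly a tenfold drop in cost. A two-line check (the 18-month halving period is the article's premise):

```python
# Cost multiplier after `months`, if cost halves every 18 months.
def cost_factor(months, halving_period=18.0):
    return 0.5 ** (months / halving_period)

print(cost_factor(60))  # five years -> ~0.10, i.e. ~10x cheaper computing
```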

ephaptic coupling

Cordis | Researchers long believed that neurons in the brain communicated only through physical connections known as synapses. However, EU-funded neuroscientists have uncovered strong evidence that neurons also communicate with each other through weak electric fields, a finding that could help us understand how biophysics gives rise to cognition.

The study, published in the journal Nature Neuroscience, was funded in part by the EUSYNAPSE ('From molecules to networks: understanding synaptic physiology and pathology in the brain through mouse models') project, which received EUR 8 million under the 'Life sciences, genomics and biotechnology for health' Thematic area of the EU's Sixth Framework Programme (FP6).

Lead author Dr Costas Anastassiou, a postdoctoral scholar at the California Institute of Technology (Caltech) in the US, and his colleagues explain how the brain is an intricate network of individual nerve cells, or neurons, that use electrical and chemical signals to communicate with one another.

Every time an electrical impulse races down the branch of a neuron, a tiny electric field surrounds that cell. A few neurons are like individuals talking to each other and having small conversations. But when they all fire together, it's like the roar of a crowd at a sports game.

That 'roar' is the summation of all the tiny electric fields created by organised neural activity in the brain. While it has long been recognised that the brain generates weak electrical fields in addition to the electrical activity of firing nerve cells, these fields were considered epiphenomena - superfluous side effects.

Little was known about these weak fields because they are usually too weak to measure at the level of individual neurons, whose dimensions are measured in millionths of a metre (microns). The researchers therefore set out to determine whether these weak fields have any effect on neurons.

Experimentally, measuring such weak fields emanating from or affecting a small number of brain cells was no easy task. Extremely small electrodes were used in close proximity to a cluster of rat neurons to look for 'local field potentials', the electric fields generated by neuron activity. The result? They were successful in measuring fields as weak as one millivolt (one thousandth of a volt).

Commenting on the results, Dr Anastassiou says: 'Because it had been so hard to position that many electrodes within such a small volume of brain tissue, the findings of our research are truly novel. Nobody had been able to attain this level of spatial and temporal resolution.'

What they found was surprising. 'We observed that fields as weak as one millivolt per millimetre robustly alter the firing of individual neurons, and increase the so-called "spike-field coherence" - the synchronicity with which neurons fire with relationship to the field,' he says.

Sunday, February 20, 2011

you won't find consciousness in the brain


Video - How Does The Brain Produce Consciousness


NewScientist | MOST neuroscientists, philosophers of the mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the preciseness of correlations between indirectly observed neural activity and different mental functions, states or experiences.

This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, that argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained."

Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness.

This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.

Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.

This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H2O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails because the levels at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff, on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels.

We cannot therefore conclude that when we see what seem to be neural correlates of consciousness that we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object. Biophysical science explains how the light gets in but not how the gaze looks out.

Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity, invoking quantum coherence (the cytoskeletal microtubules of Stuart Hameroff at the University of Arizona, and Roger Penrose at the University of Oxford), electromagnetic fields (Johnjoe McFadden, University of Surrey), or rhythmic discharges in the brain (the late Francis Crick).

These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct. My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate.

And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion".

There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.

I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurement, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience.

Science begins when we escape our first-person subjective experience

Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms. To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist then, light is not in itself bright or colourful, it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically. In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog.

Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.

Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness.

Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.

particles that flock


Video - Video made to be used in the explanation of experiments being carried out at the CERN LHC

ScientificAmerican | In its first six months of operation, the Large Hadron Collider near Geneva has yet to find the Higgs boson, solve the mystery of dark matter or discover hidden dimensions of spacetime. It has, however, uncovered a tantalizing puzzle, one that scientists will take up again when the collider restarts in February following a holiday break. Last summer physicists noticed that some of the particles created by their proton collisions appeared to be synchronizing their flight paths, like flocks of birds. The findings were so bizarre that “we’ve spent all the time since [then] convincing ourselves that what we were seeing was real,” says Guido Tonelli, a spokesperson for CMS, one of two general-purpose experiments at the LHC.

The effect is subtle. When proton collisions result in the release of more than 110 new particles, the scientists found, the emerging particles seem to fly in the same direction. The high-energy collisions of protons in the LHC may be uncovering “a new deep internal structure of the initial protons,” says Frank Wilczek of the Massachusetts Institute of Technology, winner of a Nobel Prize for his explanation of the action of gluons. Or the particles may have more interconnections than scientists had realized. “At these higher energies [of the LHC], one is taking a snapshot of the proton with higher spatial and time resolution than ever before,” Wilczek says.

When seen with such high resolution, protons, according to a theory developed by Wilczek and his colleagues, consist of a dense medium of gluons—massless particles that act inside the protons and neutrons, controlling the behavior of quarks, the constituents of all protons and neutrons. “It is not implausible,” Wilczek says, “that the gluons in that medium interact and are correlated with one another, and these interactions are passed on to the new particles.”

If confirmed by other LHC physicists, the phenomenon would be a fascinating new finding about one of the most common particles in our universe and one scientists thought they understood well. Full-monty at arXiv.

Saturday, February 19, 2011

more than a feeling...,

Wired | Natural selection has nothing to worry about.

Let’s begin with energy efficiency. One of the most remarkable facts about the human brain is that it requires less energy (12 watts) than a light bulb. In other words, that loom of a trillion synapses, exchanging ions and neurotransmitters, costs less to run than a little incandescence. Or look at Deep Blue: when the machine was operating at full speed, it was a fire hazard, and required specialized heat-dissipating equipment to keep it cool. Meanwhile, Kasparov barely broke a sweat.

The same lesson applies to Watson. I couldn’t find reliable information on its off-site energy consumption, but suffice to say it required many tens of thousands of times as much energy as all the human brains on stage combined. While this might not seem like a big deal, evolution long ago realized that we live in a world of scarce resources. Evolution was right. As computers become omnipresent in our lives — I’ve got one dissipating heat in my pocket right now — we’re going to need to figure out how to make them more efficient. Fortunately, we’ve got an ideal prototype locked inside our skull.

The second thing Watson illustrates is the power of metaknowledge, or the ability to reflect on what we know. As Vaughan Bell pointed out a few months ago, this is Watson’s real innovation:

Answering this question needs pre-existing knowledge and, computationally, two main approaches. One is constraint satisfaction, which finds which answer is the ‘best fit’ to a problem that doesn’t have a mathematically exact solution; and the other is a local search algorithm, which indicates when further searching is unlikely to yield a better result – in other words, when to quit computing and give an answer – because you can always crunch more data.
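The second idea is simple to sketch. Below is a toy hill-climber with a "quit when improvement stalls" rule; it is my own construction, not Watson's code, and the `patience` threshold and scoring function are arbitrary assumptions.

```python
import random

def local_search(score, start, neighbor, patience=25):
    """Keep the best answer seen so far; stop once `patience`
    successive tries fail to improve it -- the 'when to quit
    computing and give an answer' rule."""
    best, best_score = start, score(start)
    stale = 0
    while stale < patience:
        candidate = neighbor(best)
        s = score(candidate)
        if s > best_score:
            best, best_score, stale = candidate, s, 0
        else:
            stale += 1
    return best, best_score

# Toy use: maximise a noisy one-dimensional function.
score = lambda x: -(x - 3.2) ** 2 + 0.1 * random.random()
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
print(local_search(score, 0.0, neighbor))  # ends near x = 3.2
```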

Our brain comes preprogrammed with metaknowledge: We don’t just know things — we know we know them, which leads to feelings of knowing. I’ve written about this before, but one of my favorite examples of such feelings is when a word is on the tip of the tongue. Perhaps it occurs when you run into an old acquaintance whose name you can’t remember, although you know that it begins with the letter J. Or perhaps you struggle to recall the title of a recent movie, even though you can describe the plot in perfect detail.

What’s interesting about this mental hiccup is that, even though the mind can’t remember the information, it’s convinced that it knows it. We have a vague feeling that, if we continue to search for the missing word, we’ll be able to find it. (This is a universal experience: The vast majority of languages, from Afrikaans to Hindi to Arabic, even rely on tongue metaphors to describe the tip-of-the-tongue moment.) But here’s the mystery: If we’ve forgotten a person’s name, then why are we so convinced that we remember it? What does it mean to know something without being able to access it?

This is where feelings of knowing prove essential. The feeling is a signal that we can find the answer, if only we keep on thinking about the question. And these feelings aren’t just relevant when we can’t remember someone’s name.

what is watson?


Video - IBM researcher discusses the technology behind its language-parsing machine.

Friday, February 18, 2011

a tipping point is nearing

American Thinker | We are facing a tipping point. There will soon be a crisis affecting US citizens beyond any experienced since the Great Depression. And it may happen within the year. This past week three awful developments put a dagger into the hope for a growth-led recovery, which held promise of possibly averting a debt and currency implosion crushing the American economy.

The first was a little-noticed, but tragic, series of events in the newly elected House of Representatives. The speaker, Mr. Boehner, had given the task of fashioning the majority's spending cut agenda to Representative Paul Ryan (R-Wisconsin), a rising conservative star representing the vocal wing of fiscal conservatives in the House. Promising to cut $100 billion of government spending, Mr. Boehner spoke before the elections of the urgency to produce immediately when Republicans took control.

The second awful development to occur last week was the employment report from the Labor Department, describing employment conditions in the U.S. economy in January 2011. The report was packed with statistics, all pointing to anemic growth with a modest pickup in manufacturing employment. The little-noticed (though not by the bond market) aspect of the report was the "benchmark" revisions, an annual attempt to make the total picture more accurate than simply adding up all the monthly change numbers. This year's benchmark revisions showed two alarming things: a decline from previously reported employment in December 2010 of nearly 500,000 jobs, and a reduction in the workforce of a similar amount.

The third development of the last week which received much less press than the Egyptian crisis is the "new normal" in Social Security. The CBO released a report disclosing that the net cash flow for the Social Security trust fund -- excluding interest received from the book entry bonds it holds in U.S. debt -- will be negative $56 billion in 2011, and even more negative in every year thereafter. This is the train wreck that was supposed to happen in 2020. It is upon us now. Any limp action by conservatives to bring this program into solvency can be expected only to slow the raging river of red ink this behemoth program (along with its twin Godzilla, Medicare) spills on U.S. citizens. With no political will to fix them, these "entitlements" will obligate Americans to borrow more and more money from China -- to honor promises we simply refuse to admit we can't keep.

So why do these developments argue for a crisis of Great Depression proportions? Because they speak unequivocally of our pathway to insolvency, and the potential of currency failure via hyperinflation, despite the hopes of conservatives and market participants to see a halt of such direction. Housing prices, the foundation of so much of private citizen debt loads, are destined for stagnation -- not inflation -- as the supply of homes is far greater than the demand -- 11% of the nation's homes stand empty today. When the world begins to recognize that there is no fix for America's borrowings, a fast and brutal exodus from our currency and bonds can send us a shock in mere weeks or months.

Unlike the Great Depression, however, we will enter such a shock in a weakened state, with few producers among us and record mountains of debt. More cataclysmic is the specter of inadequate food, as less than 4% of us farm, and those that do may cease to be as productive or may not accept devalued currency as payment, should the tipping point be crossed. Corn and wheat prices in the U.S. have nearly doubled in less than 12 months, using our rapidly evaporating currency as the medium of exchange.

the youth unemployment bomb

BloombergBW | In Tunisia, the young people who helped bring down a dictator are called hittistes—French-Arabic slang for those who lean against the wall. Their counterparts in Egypt, who on Feb. 1 forced President Hosni Mubarak to say he won't seek reelection, are the shabab atileen, unemployed youths. The hittistes and shabab have brothers and sisters across the globe. In Britain, they are NEETs—"not in education, employment, or training." In Japan, they are freeters: an amalgam of the English word freelance and the German word Arbeiter, or worker. Spaniards call them mileuristas, meaning they earn no more than 1,000 euros a month. In the U.S., they're "boomerang" kids who move back home after college because they can't find work. Even fast-growing China, where labor shortages are more common than surpluses, has its "ant tribe"—recent college graduates who crowd together in cheap flats on the fringes of big cities because they can't find well-paying work.

In each of these nations, an economy that can't generate enough jobs to absorb its young people has created a lost generation of the disaffected, unemployed, or underemployed—including growing numbers of recent college graduates for whom the post-crash economy has little to offer. Tunisia's Jasmine Revolution was not the first time these alienated men and women have made themselves heard. Last year, British students outraged by proposed tuition increases—at a moment when a college education is no guarantee of prosperity—attacked the Conservative Party's headquarters in London and pummeled a limousine carrying Prince Charles and his wife, Camilla Parker Bowles. Scuffles with police have repeatedly broken out at student demonstrations across Continental Europe. And last March in Oakland, Calif., students protesting tuition hikes walked onto Interstate 880, shutting it down for an hour in both directions.

More common is the quiet desperation of a generation in "waithood," suspended short of fully employed adulthood. At 26, Sandy Brown of Brooklyn, N.Y., is a college graduate and a mother of two who hasn't worked in seven months. "I used to be a manager at a Duane Reade [drugstore] in Manhattan, but they laid me off. I've looked for work everywhere and I can't find nothing," she says. "It's like I got my diploma for nothing."

While the details differ from one nation to the next, the common element is failure—not just of young people to find a place in society, but of society itself to harness the energy, intelligence, and enthusiasm of the next generation. Here's what makes it extra-worrisome: The world is aging. In many countries the young are being crushed by a gerontocracy of older workers who appear determined to cling to the better jobs as long as possible and then, when they do retire, demand impossibly rich private and public pensions that the younger generation will be forced to shoulder.

In short, the fissure between young and old is deepening. "The older generations have eaten the future of the younger ones," former Italian Prime Minister Giuliano Amato told Corriere della Sera. In Britain, Employment Minister Chris Grayling has called chronic unemployment a "ticking time bomb." Jeffrey A. Joerres, chief executive officer of Manpower (MAN), a temporary-services firm with offices in 82 countries and territories, adds, "Youth unemployment will clearly be the epidemic of this next decade unless we get on it right away. You can't throw in the towel on this."

The highest rates of youth unemployment are found in the Middle East and North Africa, at roughly 24 percent each, according to the International Labor Organization. Most of the rest of the world is in the high teens—except for South and East Asia, the only regions with single-digit youth unemployment. Young people are nearly three times as likely as adults to be unemployed. Fist tap Ed.

bahrain's crackdown threatens u.s. interests


Video - Ruling Sunni family cracks down hard on non-violent Shiite protesters

WaPo | FOR A DECADE, the ruling al-Khalifa family of Bahrain has been claiming to be leading the country toward democracy - an assertion frequently endorsed by the United States. On Thursday, the regime demolished that policy and any pretense about its real, autocratic nature. It dispatched its security forces to assault and violently disperse peaceful pro-democracy demonstrators who were camped in Manama's Pearl Square. At least four people were killed and 230 injured in the predawn raid.

The brutality is unlikely to restore stability to the Persian Gulf nation, even in the short term - and it poses a direct threat to vital interests of the United States. The U.S. 5th Fleet is based in Bahrain and plays an important role in providing security to the Gulf and in containing nearby Iran. Not only is the crackdown likely to weaken rather than strengthen an allied government, but the United States cannot afford to side with a regime that violently represses the surging Arab demand for greater political freedom.

Bahrain is the first of the Arab world's monarchies to experience major unrest in what is becoming a region-wide upheaval - and with good reason. The Khalifa family and ruling elite, who are Sunni, preside over a population that is 70 percent Shiite, and the majority is disenfranchised, excluded from leading roles in the government or security forces. Ten years ago, the ruling family launched a cautious reform process, instituting a parliament with limited powers. But in the last year it has moved in reverse. Last summer two dozen Shiite opposition leaders were arrested and charged under terrorism laws. Many other activists were rounded up, and a human rights group was taken over by the government.

The Obama administration failed to react forcefully to those abuses, which set the stage for this week's uprising by thousands of demonstrators from both the Shiite and Sunni communities. In December, visiting Secretary of State Hillary Rodham Clinton heaped praise on the government for "the progress it is making on all fronts" and minimized the political prosecutions, describing "the glass as half full."

web of popularity achieved by bullying


Video - Wonder Woman, the intentional antithesis of Superman.

NYTimes | New research suggests that the road to high school popularity can be treacherous, and that students near the top of the social hierarchy are often both perpetrators and victims of aggressive behavior involving their peers.

The latest findings, being published this month in The American Sociological Review, offer a fascinating glimpse into the social stratification of teenagers. The new study, along with related research from the University of California, Davis, also challenges the stereotypes of both high school bully and victim.

Highly publicized cases of bullying typically involve chronic harassment of socially isolated students, but the latest studies suggest that various forms of teenage aggression and victimization occur throughout the social ranks as students jockey to improve their status.

The findings contradict the notion of the school bully as maladjusted or aggressive by nature. Instead, the authors argue that when it comes to mean behavior, the role of individual traits is “overstated,” and much of it comes down to concern about status.

“Most victimization is occurring in the middle to upper ranges of status,” said the study’s author, Robert Faris, an assistant professor of sociology at U.C. Davis. “What we think often is going on is that this is part of the way kids strive for status. Rather than going after the kids on the margins, they might be targeting kids who are rivals.”

Educators and parents are often unaware of the daily stress and aggression with which even socially well-adjusted students must cope.

“It may be somewhat invisible,” Dr. Faris said. “The literature on bullying has so focused on this one dynamic of repeated chronic antagonism of socially isolated kids that it ignores these other forms of aggression.”

Thursday, February 17, 2011

what becomes of science when the wells run dry?

The Scientist | The practice and funding of science may change drastically when humanity enters an era of energy crisis, in which cheap oil is but a distant memory. While the most hyperbolic doomsayers posit catastrophic scenarios of oil shortage, global conflict, and severe deprivation, the truth is that long before society downsizes in the face of energy scarcity, climate change, resource depletion, and population growth, the way science is done and the role of research in society will likely change drastically.

One of the main ways that the average scientist will feel the effects of oil shortages will be as everyone does: by an enormous inflation in the cost of doing business. Most scientific research is expensive not just in terms of dollars, but also in terms of energy. On average, for each dollar researchers spend today, the energy equivalent of about a cup of oil is used. A $1 million grant can consume the equivalent of about 1,100 barrels of oil. In the future, the same amount of dollars will buy significantly less research, and scientists will have to become much more efficient and inventive in doing research.
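Those two figures are roughly consistent with each other; a quick check, assuming a 42-gallon US oil barrel and 16 cups to the gallon:

```python
# Does "about a cup of oil per dollar" square with
# "about 1,100 barrels per $1 million grant"?
CUPS_PER_BARREL = 42 * 16        # 672 cups in a 42-gallon barrel
cups = 1_100 * CUPS_PER_BARREL   # cups of oil in 1,100 barrels
print(cups / 1_000_000)          # ~0.74 cups per dollar -- "about a cup"
```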

Far-flung research projects, particularly common among ecologists and other natural scientists, will become much less affordable, and trips to distant scientific meetings prohibitively expensive. Electronic conferencing will become the norm.

The nature of interaction within the scientific community may change as well. Like the competitive atmosphere already experienced in developing countries, limited resources may lead groups to be less open and to actively exclude other groups.

In a time of energy scarcity, societal priorities will also shift, and science will be justified and supported based on the perception of how it is helping solve mounting societal problems. While today basic science is often considered intellectually superior and more elegant than applied science, in coming decades, applied science will become dominant, as research becomes required to preserve the functioning of ecosystems and the services they provide.

Natural scientists, especially those in the field of ecology, will have a critical role to play in this bleak future, in which the human economy depends much more on ecological systems. With transport and global trade hobbled, people will have to depend to a greater extent on nearby ecosystems, both natural and agricultural. Highly productive ecosystems have enormous economic value. The natural asset value of the Mississippi delta, for example, has been estimated to be as high as $1.4 trillion. Research on these natural communities will receive more attention, as more food, fuel, and fiber will have to be coaxed from nature in a sustainable way.

is the world producing enough food?

NYTimes | Food inflation has returned for many of the same old reasons: the demand for meat has returned with the recovery of middle-income economies; the price of oil is up, which both raises the cost of food production and transport, and stokes the diversion of food crops into biofuel production. Speculators are taking pounds of flesh in the commodity exchanges. And, of course, freak weather has disrupted production in key export zones.

But what makes the weather matter? This is hardly the first La Niña weather cycle, after all. Every human civilization has understood the need to plan for climate’s vicissitudes. Over the centuries, societies developed the tools of grain stores, crop diversification and "moral economies" to guarantee the poor access to food in times of crisis.

Global economic liberalization discarded these buffers in favor of lean lines of trade. Safety nets and storage became inefficient and redundant – if crops failed in one part of the world, the market would always provide from another.

Climate change turns this thinking on its head. A shock in one corner of the world now ripples to every other. The economic architecture that promised efficiency has instead made us all more vulnerable. Little has changed in this crucial respect since the last food crisis. But this isn’t simply a rerun of 2008.
Image - Rising food prices caused protests in Karak, Jordan, in January. (Photo: Muhammad Hamed/Reuters)

While the global recession has turned a corner for some people in some countries, unemployment remains stubbornly high for many, and hunger has trailed it. There are 75 million more people undernourished now than in 2008. At the same time, governments are cutting back on entitlement programs for the poor as part of austerity drives to fight inflation.

Urban families are unable to afford food and fuel, and governments are unresponsive to their plight. Under such circumstances, as Egyptians know too well, food prices and climate change are revolution’s kindling.

world bank warns of soaring food prices

Guardian | The World Bank has given a stark warning of the impact of the rising cost of food, saying an estimated 44 million people had been pushed into poverty since last summer by soaring commodity prices.

Robert Zoellick, the Bank's president, said food prices had risen by almost 30% in the past year and were within striking distance of the record levels reached during 2008.

"Global food prices are rising to dangerous levels and threaten tens of millions of poor people around the world," Zoellick said. "The price hike is already pushing millions of people into poverty, and putting stress on the most vulnerable, who spend more than half of their income on food."

According to the latest edition of Food Price Watch, the World Bank's food price index was up by 15% between October 2010 and January 2011, is 29% above its level a year earlier, and only 3% below its 2008 peak.

Wheat prices have seen the most pronounced increases, doubling between June last year and January 2011, while maize prices were up 73%.

The bank said that fewer people had fallen into poverty than in 2008 because of two factors: good harvests in many African countries kept prices stable, and the increase in the price of rice – a key part of the diet for many of the world's poor – was moderate.

Wednesday, February 16, 2011

freedom to interdict your freedom to connect


Video - Anonymous response to Twitter subpoena.

Guardian | The US secretary of state, Hillary Clinton, praised the role of social networks such as Twitter in promoting freedom – at the same time as the US government was in court seeking to invade the privacy of Twitter users.

Lawyers for civil rights organisations appeared before a judge in Alexandria, Virginia, battling against a US government order to disclose the details of private Twitter accounts in the WikiLeaks row, including that of the Icelandic MP Birgitta Jonsdottir.

The move against Twitter has turned into a constitutional clash over the protection of individual rights to privacy in the digital age.

Clinton, in a speech in Washington, cited the positive role that Twitter, Facebook and other social networks played in uprisings in Tunisia and Egypt. In a stirring defence of the internet, she spoke of the "freedom to connect".

The irony of the Clinton speech coming on the day of the court case was not lost on the constitutional lawyers battling against the government in Alexandria. The lawyers also cited the Tunisian and Egyptian examples. Aden Fine, who represents the American Civil Liberties Union, one of the leading civil rights groups in the country, said: "It is very alarming that the government is trying to get this information about individuals' communications. But, also, above all, they should not be able to do this in secret."

still m.s.m.


Video - Still D.R.E.

HuffPo | It's not necessarily those who tweet the most or have the most followers who determine Twitter's trending topics, but the mainstream media, says new research from HP.

"You might expect the most prolific tweeters or those with most followers would be most responsible for creating such trends," said Bernardo Huberman, HP Senior Fellow and director of HP Labs' Social Computing Research Group, in a statement. "We found that mainstream media play a role in most trending topics and actually act as feeders of these trends. Twitter users then seem to be acting more as filter and amplifier of traditional media in most cases."

The subject of the tweet is the major determinant of whether the tweet's topic trends--largely as a result of retweeting. Thirty-one percent of trending topics are retweets, according to the HP research. Using data from Twitter's search function over 40 days, the researchers used 16.32 million tweets to identify the top 22 users that were generating the most retweets for trending topics. "Of those 22, 72% were Twitter streams run by mainstream media outfits such as CNN, the New York Times, El Pais and the BBC," HP wrote. "Although popular, most of these sites have millions of followers fewer than highly-followed tweeters such as Ashton Kutcher, Barack Obama or Lady Gaga."
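
As a rough illustration of the counting involved (not HP's actual pipeline – the record format and field names here are hypothetical), a short Python sketch that measures the retweet share of trending-topic tweets and ranks the most-retweeted source accounts might look like this:

from collections import Counter

# Hypothetical tweet records for trending topics: (author, retweeted_from, topic),
# where retweeted_from is None for an original tweet. Illustrative data only.
tweets = [
    ("alice", "cnn", "#egypt"),
    ("bob", "cnn", "#egypt"),
    ("carol", None, "#egypt"),
    ("dave", "nytimes", "#wikileaks"),
]

# Share of trending-topic tweets that are retweets (HP reported about 31%).
retweets = [t for t in tweets if t[1] is not None]
print(len(retweets) / len(tweets))

# Rank the accounts being retweeted most often; in HP's data the top of
# this list was dominated by mainstream media accounts such as CNN and the BBC.
sources = Counter(source for _, source, _ in retweets)
print(sources.most_common(5))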

While they also found that most trending topics don't survive beyond 40 minutes or so, the most dominant, longest-lasting trends were those that took hold across a diverse audience.

The researchers hinted that it remains to be seen whether social media can alter the public agenda.

how we use social media in an emergency

Mashable | The use of social media during national and international crises, both natural and political, is something that Mashable has followed with great interest over the past few years.

As a culture, we started becoming more aware of the power of social media during times of crisis, like when the Iran election in 2009 caused a furor, both on the ground and on Twitter. More recently, the Internet and social media played an important role in spreading news about the earthquake in Haiti and political revolution in Egypt.

But what about other kinds of natural disasters or crime? Can social media be used to good effect then?

In 2009, two girls trapped in a storm water drain used Facebook to ask for help rather than calling emergency services from their mobile phones. At the time, authorities were concerned about the girls’ seemingly counterintuitive action.

However, according to new research from the American Red Cross, the Congressional Management Foundation and other organizations, social media could stand to play a larger and more formal role in emergency response. In fact, almost half the respondents in a recent survey said they would use social media in the event of a disaster to let relatives and friends know they were safe.

future soldier - world-wide mind

NYTimes | Imagine, Michael Chorost proposes, that four police officers on a drug raid are connected mentally in a way that allows them to sense what their colleagues are seeing and feeling. Tony Vittorio, the captain, is in the center room of the three-room drug den.

He can sense that his partner Wilson, in the room on his left, is not feeling danger or arousal and thus has encountered no one. But suddenly Vittorio feels a distant thump on his chest. Sarsen, in the room on the right, has been hit with something, possibly a bullet fired from a gun with a silencer.

Vittorio glimpses a flickering image of a metallic barrel pointed at Sarsen, who is projecting overwhelming shock and alarm. By deducing how far Sarsen might have gone into the room and where the gunman is likely to be standing, Vittorio fires shots into the wall that will, at the very least, distract the gunman and allow Sarsen to shoot back. Sarsen is saved; the gunman is dead.

That scene, from his new book, “World Wide Mind,” is an example of what Mr. Chorost sees as “the coming integration of humanity, machines, and the Internet.” The prediction is conceptually feasible, he tells us, something that technology does not yet permit but that breaks no known physical laws.

Mr. Chorost also wrote “Rebuilt,” about his experience with deafness and his decision to get a cochlear implant in 2001. In that eloquent and thoughtful book, he refers to himself as a cyborg: He has a computer in his skull, which, along with a second implant three years ago, artificially restores his hearing. In “World Wide Mind,” he writes, “My two implants make me irreversibly computational, a living example of the integration of humans and computers.”

He takes off from his own implanted computer to imagine a world where people are connected by such devices. The implanted computer would work something like his BlackBerry, he explains, in that it would let people “be effortlessly aware of what their friends and colleagues are doing.” It would let each person know what the others “are seeing and feeling, thus enabling much richer forms of communication.”

Cool. Maybe. But beginning with privacy issues, the hazards are almost countless.

In discussing one of them, he cites the work of Dr. John Ratey, a professor of psychiatry at Harvard who believes people can be physically addicted to e-mail. “Each e-mail you open gives you a little hit of dopamine,” Mr. Chorost writes, “which you associate with satiety. But it’s just a little hit. The effect wears off quickly, leaving you wanting another hit.” Fist tap Nana.

Tuesday, February 15, 2011

the man who knew too little...,

Wired | How did Barr, a man with long experience in security and intelligence, come to spend his days as a CEO e-stalking clients and their wives on Facebook? Why did he start performing “reconnaissance” on the largest nuclear power company in the United States? Why did he suggest pressuring corporate critics to shut up, even as he privately insisted that corporations “suck the lifeblood out of humanity”? And why did he launch his ill-fated investigation into Anonymous, one which may well have destroyed his company and damaged his career?

Thanks to his leaked e-mails, the downward spiral is easy enough to retrace. Barr was under tremendous pressure to bring in cash, pressure which began on Nov. 23, 2009.

“A” players attract “A” players
That’s when Barr started the CEO job at HBGary Federal. Its parent company, the security firm HBGary, wanted a separate firm to handle government work and the clearances that went with it, and Barr was brought in from Northrop Grumman to launch the operation.

In an e-mail announcing Barr’s move, HBGary CEO Greg Hoglund told his company that “these two are A+ players in the DoD contracting space and are able to ‘walk the halls’ in customer spaces. Some very big players made offers to Ted and Aaron last week, and instead they chose HBGary. This reflects extremely well on our company. ‘A’ players attract ‘A’ players.”

Barr at first loved the job. In December, he sent an e-mail at 1:30 am; it was the “3rd night in a row I have woken up in the middle of the night and can’t sleep because my mind is racing. It’s nice to be excited about work, but I need some sleep.”

Barr had a huge list of contacts, but turning those contacts into contracts for government work with a fledgling company proved challenging. Less than a year into the job, HBGary Federal looked like it might go bust.

On Oct. 3, 2010, HBGary CEO Greg Hoglund told Aaron that “we should have a pow-wow about the future of HBGary Federal. [HBGary President] Penny and I both agree that it hasn’t really been a success… You guys are basically out of money and none of the work you had planned has come in.”

Aaron agreed. “This has not worked out as any of us have planned to date and we are nearly out of money,” he said.

While he worked on government contracts, Barr drummed up a little business doing social media training for corporations using, in one of his slides, a bit of research into one Steven Paul Jobs.

what a tangled web they weave...,

Independent | The computer hackers' collective Anonymous has uncovered a proposal by a consortium of private contractors to attack and discredit WikiLeaks.

Last week Anonymous volunteers broke into the servers of HBGary Federal, a security company that sells investigative services to companies, and posted thousands of the firm's emails onto the internet.

The attack was in revenge for claims by the company's chief executive Aaron Barr that he had successfully infiltrated the shadowy cyber protest network and discovered details of its leadership and structure.

Hacktivists, journalists and bloggers have since pored over the emails and discovered what appears to be a proposal that was intended to be pitched to the Bank of America to sabotage WikiLeaks and discredit journalists who are sympathetic to the whistle-blowing website.

The PowerPoint presentation claims that a trio of internet security companies – HBGary Federal, Palantir Technologies and Berico Technologies – are already prepared to attack WikiLeaks, which is rumoured to be getting ready to release a cache of potentially embarrassing information on the Bank of America.

The presentation, which has been seen by The Independent, recommends a multi-pronged assault on WikiLeaks, including deliberately submitting false documents to the website to undermine its credibility, mounting cyber attacks to expose the identities of those who leak to WikiLeaks, and going after sympathetic journalists.

One of those mentioned is Glenn Greenwald, a pro-WikiLeaks reporter in the US. Writing on Salon.com, Greenwald stated that his initial reaction was "to scoff at its absurdity".

"But after learning a lot more over the last couple of days," he added, "I now take this more seriously – not in terms of my involvement but the broader implications this story highlights. For one thing, it turns out that the firms involved are large, legitimate and serious, and do substantial amounts of work for both the US government and the nation's largest private corporations."

A separate email written by Mr Barr to a Palantir employee suggests that security companies should track and intimidate people who donate to WikiLeaks. Security firms, Mr Barr wrote, "need to get people to understand that if they support the organisation we will come after them. Transaction records are easily identifiable."

The Bank of America does not seem to have directly solicited the services of HBGary Federal. Instead, the proposal was pitched to Hunton and Williams, a law firm that represents the bank.

A Bank of America spokesman denied any knowledge of the proposals: "We've never seen the presentation, never evaluated it, and have no interest in it." A spokesman for Hunton and Williams declined to comment. HBGary Federal has acknowledged in a statement that it was hit by a cyber attack but has suggested the documents posted online could have been falsified.

However, the two other security firms named in the presentation have not denied the authenticity of the documents. Instead, both Berico and Palantir issued angry statements distancing themselves from HBGary Federal and severing ties with the firm.

But a statement from Anonymous claimed the presentation showed how sections of corporate America were "entangled in highly dubious and most likely illegal activities, including a smear campaign against WikiLeaks, its supportive journalists, and adversaries of the US Chamber of Commerce and Bank of America".

tool-user

WaPo | A feud between a security contracting firm and a group of guerrilla computer hackers has spilled over onto K Street, as stolen e-mails reveal plans for a dirty-tricks-style campaign against critics of the U.S. Chamber of Commerce.

The tale began this month when a global hackers collective known as Anonymous broke into the computers of HBGary Federal, a California security firm, and dumped tens of thousands of internal company e-mails onto the Internet.

The move was in retaliation for assertions by HBGary Federal chief executive Aaron Barr that he had identified leaders of the hackers' group, which has actively supported the efforts of anti-secrecy Web site WikiLeaks to obtain and disclose classified documents.

The e-mails revealed, among other things, a series of often-dubious counterintelligence proposals aimed at enemies of Bank of America and the chamber. The proposals included distributing fake documents and launching cyber-attacks.

The chamber has adamantly denied any knowledge of the "abhorrent" proposals, including some contained in a sample blueprint outlined for Hunton & Williams, a law and lobbying firm that works for the chamber. The business group said in a statement Monday that the proposal "was not requested by the Chamber, it was not delivered to the Chamber and it was never discussed with anyone at the Chamber."

Two other security firms named in the e-mails, Berico Technologies and Palantir Technologies, also have issued statements distancing themselves from the plans. HBGary Federal and Hunton & Williams declined to comment.

The hacked e-mails suggest that the three security firms worked with Hunton & Williams in hopes of landing a $2 million contract to assist the chamber. Some of the e-mails, which were highlighted by the liberal Web site ThinkProgress on Monday, seem to suggest that the chamber had been apprised of the efforts. The chamber denied any such knowledge.

On Nov. 16, for example, Barr suggests in an e-mail to Berico that his company had spoken "directly" to the chamber despite the lack of a signed contract.

Other e-mails describe Hunton & Williams lawyer Bob Quackenboss as the "key client contact operationally" with the chamber and make references to a demonstration session that had "sold the Chamber in the first place."

On Dec. 1, a Palantir engineer summarized a meeting with Hunton & Williams, saying the law firm "was looking forward to briefing the results to the Chamber to get them to pony up the cash for Phase II." The proposed meeting was set to take place this past Monday, according to the e-mail.

"While many questions remain in the unfolding ChamberLeaks controversy, what's clear is that this multitude of emails clearly contradicts the Chamber's claim that they were 'not aware of these proposals until HBGary's e-mails leaked,' " ThinkProgress reporter Scott Keyes wrote in a blog post.

One Nov. 29 e-mail contains presentations and memos outlining how a potential counterintelligence program against chamber critics might work. The documents are written under the logo of Team Themis, which was the joint project name adopted by the three technology firms.

tool...,

Switched | Based on e-mails he sent before beginning his mission, it's clear that Barr's motives were profit-driven from the start. A social media fanatic, Barr firmly believed that he could use data from sites like Facebook and LinkedIn to identify any hacker in the world, including members of Anonymous. "Hackers may not list the data, but hackers are people too so they associate with friends and family," Barr wrote in an e-mail to a colleague at HBGary Federal. "Those friends and family can provide key indicators on the hacker without them releasing it...." He even wanted to give a talk at this year's B-Sides security conference, titled "Who Needs NSA when we have Social Media?" But, long-term security implications aside, Barr knew exactly what he would do once he obtained data on Anonymous' members. "I will sell it," he wrote.

Using several aliases, Barr began regularly dropping in on Anonymous' Internet Relay Chat (IRC) forums, and, after setting up fake Facebook and Twitter accounts, attempted to unearth the members' true identities via social media. Putting real names to screen names, however, wasn't easy. Barr's techniques included matching timecodes: when someone shared something in the Anonymous IRC, he would check a suspected Twitter handle for any follow-up activity in the next few seconds. More matches lessened the likelihood of coincidence. By the time he concluded his research, he believed he had successfully identified 80 to 90 percent of Anonymous' leaders -- all thanks to information that was publicly available.
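
The timecode matching Barr relied on is, in effect, a crude correlation test. Here is a minimal sketch of the idea, with hypothetical timestamps and an arbitrary ten-second window (none of this is Barr's actual tooling):

from datetime import datetime, timedelta

# Hypothetical timestamps: posts by one IRC screen name and by one
# suspected Twitter handle over the same period. Illustrative data only.
irc_times = [datetime(2011, 1, 5, 20, 14, 2), datetime(2011, 1, 5, 21, 3, 40)]
twitter_times = [datetime(2011, 1, 5, 20, 14, 9), datetime(2011, 1, 6, 9, 0, 0)]

def correlation_score(irc_times, twitter_times, window_seconds=10):
    # Fraction of IRC posts followed by Twitter activity within the window.
    # More matches lessen the likelihood of coincidence, as described above;
    # a careful analysis would also estimate the chance-match rate from each
    # account's overall posting frequency.
    window = timedelta(seconds=window_seconds)
    hits = sum(
        1 for t_irc in irc_times
        if any(t_irc <= t_tw <= t_irc + window for t_tw in twitter_times)
    )
    return hits / len(irc_times)

print(correlation_score(irc_times, twitter_times))  # 0.5 for this toy data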

Some of his colleagues at HBGary, however, soon became uneasy with the direction that Barr was taking his investigation. In exchanges with his coder, he insisted that he was not aiming to get anyone arrested, but simply wanted to prove the efficacy of his statistical analysis. In an e-mail to another colleague, though, the coder complained that Barr made many of his claims based not on statistics, but on his "best gut feeling." Others, meanwhile, feared retribution from Anonymous, and with good reason.

Though Barr insisted that he wouldn't reveal the names of Anonymous' leaders at a meeting with the FBI, the group didn't take any chances, and launched a devastating counter-offensive against both Barr and his company. Barr's e-mails were leaked, his Twitter account hijacked, and his iPad, apparently, wiped clean. HBGary, meanwhile, suffered a DDoS attack that crippled its site.

The attack on the company was so bad that at one point, HBGary President Penny Leavy dove into Anonymous' IRC, in an attempt to reason with them. The members asked her why Barr was meeting with the FBI. She insisted he just wanted their business, and had no interest in toppling Anonymous. She, in turn, asked what they demanded. "Simple: fire Aaron, have him admit defeat in a public statement," a member responded. "We won't bother you further after this, but what we've done can't be taken back. Realize that, and for the company's sake, dispose of Aaron." The group later hacked an e-mail account belonging to Leavy's husband, and is threatening to post it online.

Anderson concludes his piece by examining what the saga says about Anonymous, whose members he describes as "young, technically sophisticated, brash, and crassly juvenile." After what happened to HBGary and Barr, he writes, it's become difficult to write off Anonymous' attacks "as the harmless result of a few mask-wearing buffoons."

But perhaps the most intriguing character in this drama is Barr himself. His e-mails shed some light on the inner workings of a company man who seems philosophically divided. Like Anonymous, he once supported WikiLeaks, until the organization began leaking diplomatic cables last fall. The document dump led Barr to conclude that "they [WikiLeaks] are a menace," and fueled his antipathy toward Anonymous, which he saw as a group driven not by principle, but by power.

In another message, he declared that corporations "suck the lifeblood out of humanity," but acknowledged that they serve a purpose, and affirmed his belief that some secrets are better left unexposed. "Its [sic] all about power," Barr wrote. "The Wikileaks and Anonymous guys think they are doing the people justice by without much investigation or education exposing information or targeting organizations? BS. Its about trying to take power from others and give it to themeselves [sic]. I follow one law. Mine."

Monday, February 14, 2011

why do we sleep?


Video - Jay Electronica Dimethyltryptamine

Physorg | While we can more or less abstain from some basic biological urges—for food, drink, and sex—we can’t do the same for sleep. At some point, no matter how much espresso we drink, we just crash. And every animal that’s been studied, from the fruit fly to the frog, also exhibits some sort of sleep-like behavior. (Paul Sternberg, Morgan Professor of Biology, was one of the first to show that even a millimeter-long worm called a nematode falls into some sort of somnolent state.) But why do we—and the rest of the animal kingdom—sleep in the first place?

“We spend so much of our time sleeping that it must be doing something important,” says David Prober, assistant professor of biology and an expert on how genes and neurons regulate sleep. Yes, we snooze in order to rest and recuperate, but what that means at the molecular, genetic, or even cellular level remains a mystery. “Saying that we sleep because we’re tired is like saying we eat because we’re hungry,” Prober says. “That doesn’t explain why it’s better to eat some foods rather than others and what those different kinds of foods do for us.”

No one knows exactly why we slumber, Prober says, but there are four main hypotheses. The first is that sleeping allows the body to repair cells damaged by metabolic byproducts called free radicals. The production of these highly reactive substances increases during the day, when metabolism is faster. Indeed, scientists have found that the expression of genes involved in fixing cells gets kicked up a notch during sleep. This hypothesis is consistent with the fact that smaller animals, which tend to have higher metabolic rates (and therefore produce more free radicals), tend to sleep more. For example, some mice sleep for 20 hours a day, while giraffes and elephants only need two- to three-hour power naps.

Another idea is that sleep helps replenish fuel, which is burned while awake. One possible fuel is ATP, the all-purpose energy-carrying molecule, which creates an end product called adenosine when burned. So when ATP is low, adenosine is high, which tells the body that it’s time to sleep. While a postdoc at Harvard, Prober helped lead some experiments in which zebrafish were given drugs that prevented adenosine from latching onto receptor molecules, causing the fish to sleep less. But when given drugs with the opposite effect, they slept more. He has since expanded on these studies at Caltech.

Sleep might also be a time for your brain to do a little housekeeping. As you learn and absorb information throughout the day, you’re constantly generating new synapses, the junctions between neurons through which brain signals travel. But your skull has limited space, so bedtime might be when superfluous synapses are cleaned out.

And finally, during your daily slumber, your brain might be replaying the events of the day, reinforcing memory and learning. Thanos Siapas, associate professor of computation and neural systems, is one of several scientists who have done experiments that suggest this explanation for sleep. He and his colleagues looked at the brain activity of rats while the rodents ran through a maze and then again while they slept. The patterns were similar, suggesting the rats were reliving their day while asleep.

what is reality?


Video - BBC Horizon Documentary What is Reality?

There is a strange and mysterious world that surrounds us, a world largely hidden from our senses. The quest to explain the true nature of reality is one of the great scientific detective stories.

It starts with Jacobo Konigsberg talking about the discovery of the top quark at Fermilab. Frank Wilczek was then featured, explaining some particle physics theory at his country shack using bits of fruit. Anton Zeilinger showed us the double slit experiment, and then Seth Lloyd showed us the world's most powerful quantum computer, which has some problems. Lloyd has some interesting ideas about the universe being like a quantum computer.

Lenny Susskind then made an appearance to tell us how he had discovered the holographic principle after passing an interesting hologram in the corridor. The holographic principle was illustrated by projecting an image of Lenny onto himself. Max Tegmark then drew some of his favourite equations onto a window and told us that reality is maths, before he himself dissolved into equations.

The most interesting part of the program was a feature about an experiment to construct a holometer at Fermilab, described by one of the project leaders, Craig Hogan. The holometer is a laser interferometer inspired by the noise produced at gravitational wave detectors such as LIGO. It is hoped that if the holographic principle is correct, this experiment will detect its effects.

Clues have been pieced together from deep within the atom, from the event horizon of black holes, and from the far reaches of the cosmos. It may be that we are part of a cosmic hologram, projected from the edge of the universe. Or that we exist in an infinity of parallel worlds. Your reality may never look quite the same again.

the illusion of reality


Video - BBC Atom Documentary The Illusion of Reality

BBC Atom | Al-Khalili discovers that there might be parallel universes in which different versions of us exist, and finds out that empty space isn't empty at all, but seething with activity.

Sunday, February 13, 2011

ua experts determine age of book nobody can read

UANews | While enthusiasts across the world pored over the Voynich manuscript, one of the most mysterious writings ever found – penned by an unknown author in a language no one understands – a research team at the UA solved one of its biggest mysteries: When was the book made?

University of Arizona researchers have cracked one of the puzzles surrounding what has been called "the world's most mysterious manuscript" – the Voynich manuscript, a book filled with drawings and writings nobody has been able to make sense of to this day.

Using radiocarbon dating, a team led by Greg Hodgins in the UA's department of physics has found the manuscript's parchment pages date back to the early 15th century, making the book a century older than scholars had previously thought.
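
For readers curious how a radiocarbon measurement turns into an age: the estimate rests on the exponential decay of carbon-14 in the parchment. A minimal sketch of the standard conversion follows (the measured value below is invented for illustration, and converting the result to calendar years requires a separate calibration step, which the UA team would also have performed):

import math

# Conventional radiocarbon ages use the Libby mean life of 8033 years
# (half-life 5568 yr divided by ln 2). fraction_modern is the sample's
# measured 14C content relative to the modern standard -- invented here.
LIBBY_MEAN_LIFE = 8033.0
fraction_modern = 0.93  # illustrative value only

age_bp = -LIBBY_MEAN_LIFE * math.log(fraction_modern)
print(round(age_bp))  # ~583 radiocarbon years before present (i.e., before 1950)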

This tome makes the "Da Vinci Code" look downright lackluster: Rows of text scrawled on visibly aged parchment, flowing around intricately drawn illustrations depicting plants, astronomical charts and human figures bathing in – perhaps – the fountain of youth. At first glance, the "Voynich manuscript" appears to be not unlike any other antique work of writing and drawing.

An alien language

But a second, closer look reveals that nothing here is what it seems. Alien characters, some resembling Latin letters, others unlike anything used in any known language, are arranged into what appear to be words and sentences, except they don't resemble anything written – or read – by human beings.

Hodgins, an assistant research scientist and assistant professor in the UA's department of physics with a joint appointment at the UA's School of Anthropology, is fascinated with the manuscript.

"Is it a code, a cipher of some kind? People are doing statistical analysis of letter use and word use – the tools that have been used for code breaking. But they still haven't figured it out."

A chemist and archaeological scientist by training, Hodgins works for the NSF Arizona Accelerator Mass Spectrometry, or AMS, Laboratory, which is shared between physics and geosciences. His team was able to nail down the time when the Voynich manuscript was made.

Currently owned by the Beinecke Rare Book and Manuscript Library of Yale University, the manuscript was discovered in the Villa Mondragone near Rome in 1912 by antique book dealer Wilfrid Voynich while sifting through a chest of books offered for sale by the Society of Jesus. Voynich dedicated the remainder of his life to unveiling the mystery of the book's origin and deciphering its meanings. He died 18 years later, without having wrested any of its secrets from the book.
