Wednesday, February 23, 2011

facing factors he can no longer control...,


Video - Part 1 of a long, gassy 9 part Farrakhan-type tirade...,

The Independent | So he will go down fighting. That's what Muammar Gaddafi told us last night, and most Libyans believe him. This will be no smooth flight to Riyadh or a gentle trip to a Red Sea holiday resort. Raddled, cowled in desert gowns, he raved on.

He had not even begun to use bullets against his enemies – a palpable lie – and "any use of force against the authority of the state shall be punished by death", in itself a palpable truth which Libyans knew all too well without the future tense of Gaddafi's threat. On and on and on he ranted. Like everything Gaddafi, it was very impressive – but went on far too long.

He cursed the people of Benghazi who had already liberated their city – "just wait until the police return to restore order", this desiccated man promised without a smile. His enemies were Islamists, the CIA, the British and the "dogs" of the international press. Yes, we are always dogs, aren't we? I was long ago depicted in a Bahraini newspaper cartoon (Crown Prince, please note) as a rabid dog, worthy of liquidation. But like Gaddafi's speeches, that's par for the course. And then came my favourite bit of the whole Gaddafi exegesis last night: HE HADN'T EVEN BEGUN TO USE VIOLENCE YET!

So let's erase all the YouTubes and Facebooks and the shooting and blood and gouged corpses from Benghazi, and pretend it didn't happen. Let's pretend that the refusal to give visas to foreign correspondents has actually prevented us from hearing the truth. Gaddafi's claim that the protesters in Libya – the millions of demonstrators – "want to turn Libya into an Islamic state" is exactly the same nonsense that Mubarak peddled before the end in Egypt, the very same nonsense that Obama and La Clinton have suggested. Indeed, there were times last night when Gaddafi – in his vengefulness, his contempt for Arabs, for his own people – began to sound very like the speeches of Benjamin Netanyahu. Was there some contact between these two rogues, one wondered, that we didn't know about?

In many ways, Gaddafi's ravings were those of an old man, his fantasies about his enemies – "rats who have taken tablets" who included "agents of Bin Laden" – were as disorganised as the scribbled notes on the piece of paper he held in his right hand, let alone the green-covered volume of laws from which he kept quoting. It was not about love. It was about the threat of execution. "Damn those" trying to stir unrest against Libya. It was a plot, an international conspiracy. "Your children are dying – but for what?" He would fight "until the last drop of my blood with the Libyan people is behind me". America was the enemy (much talk of Fallujah), Israel was the enemy, Sadat was an enemy, colonial fascist Italy was the enemy. Among the heroes and friends was Gaddafi's grandfather, "who fell a martyr in 1911" against the Italian enemy.

Dressed in brown burnous and cap and gown, Gaddafi's appearance last night raised some odd questions. Having kept the international media – the "dogs" in question – out of Libya, he allowed the world to observe a crazed nation: YouTube and blogs of terrible violence versus state television pictures of an entirely unhinged dictator justifying what he had either not seen on YouTube or hadn't been shown. And there's an interesting question here: dictators and princes who let the international press into their countries – Messrs Ben Ali/Mubarak/Saleh/Prince Salman – are permitting it to film their own humiliation. Their reward is painful indeed. But sultans like Gaddafi who keep the journos out fare little better.

The hand-held immediacy of the mobile phone, the intimacy of sound and the crack of gunfire are in some ways more compelling than the edited, digital film of the networks. Exactly the same happened in Gaza when the Israelis decided, Gaddafi-like, to keep foreign journalists out of their 2009 bloodletting: the bloggers and YouTubers (and Al Jazeera) simply gave us a reality we didn't normally experience from the "professional" satellite boys. Perhaps, in the end, it takes a dictator with his own monopoly on cameras to tell the truth. "I will die as a martyr," Gaddafi said last night. Almost certainly true.

it's me or it's chaos!

Time | There's been virtually no reliable information coming out of Tripoli, but a source close to the Gaddafi regime I did manage to get hold of told me the already terrible situation in Libya will get much worse. Among other things, Gaddafi has ordered security services to start sabotaging oil facilities. They will start by blowing up several oil pipelines, cutting off flow to Mediterranean ports. The sabotage, according to the insider, is meant to serve as a message to Libya's rebellious tribes: It's either me or chaos.

Two weeks ago this same man had told me the uprisings in Tunisia and Egypt would never touch Libya. Gaddafi, he said, had a tight lock on all of the major tribes, the same ones that have kept him in power for the past 41 years. The man of course turned out to be wrong, and everything he now has to say about Gaddafi's intentions needs to be taken in that context.

The source went on to tell me that Gaddafi's desperation has a lot to do with the fact that he now can only count on the loyalty of his tribe, the Qadhadhfa. And as for the army, as of Monday he only has the loyalty of approximately 5,000 troops. They are his elite forces, the officers all handpicked. Among them is the unit commanded by his second youngest son Khamis, the 32nd Brigade. (The total strength of the regular Libyan army is 45,000.)

My Libyan source said that Gaddafi has told people around him that he knows he cannot retake Libya with the forces he has. But what he can do is make the rebellious tribes and army officers regret their disloyalty, turning Libya into another Somalia. "I have the money and arms to fight for a long time," Gaddafi reportedly said.

As part of the same plan to turn the tables, Gaddafi ordered the release from prison of the country's Islamic militant prisoners, hoping they will act on their own to sow chaos across Libya. Gaddafi envisages them attacking foreigners and rebellious tribes. Couple that with a shortage of food supplies, and any chance for the rebels to replace Gaddafi will be remote.

My Libyan source said that in order to understand Gaddafi's state of mind we need to understand that he feels deeply betrayed by the media, which he blames for sparking the revolt. In particular, he blames the Qatari TV station al-Jazeera, and is convinced it targeted him for purely political motivations. He also feels betrayed by the West because it has only encouraged the revolt. Over the weekend, he warned several European embassies that if he falls, the consequence will be a flood of African immigration that will "swamp" Europe.

Pressed, my Libyan source acknowledged Gaddafi is a desperate, irrational man, and his threats to turn Libya into another Somalia at this point may be mostly bluffing. On the other hand, if Gaddafi in fact enjoys the loyalty of the troops he thinks he has, he very well could take Libya to the brink of civil war, if not over it.

wikileaks cables portray libyan profligacy


Video - Interesting take on Libyan first family.

NYTimes | After New Year’s Day 2009, Western media reported that Seif al-Islam el-Qaddafi, a son of the Libyan leader Col. Muammar el-Qaddafi, had paid Mariah Carey $1 million to sing just four songs at a bash on the Caribbean island of St. Barts.

In the newspaper he controlled, Seif indignantly denied the report — the big spender, he said, was his brother, Muatassim, Libya’s national security adviser, according to an American diplomatic cable from the capital, Tripoli.

It was Muatassim, too, the cable said, who had demanded $1.2 billion in 2008 from the chairman of Libya’s national oil corporation, reportedly to establish his own militia. That would let him keep up with yet another brother, Khamis, commander of a special-forces group that “effectively serves as a regime protection unit.”

As the Qaddafi clan conducts a bloody struggle to hold onto power in Libya, cables obtained by WikiLeaks offer a vivid account of the lavish spending, rampant nepotism and bitter rivalries that have defined what a 2006 cable called “Qadhafi Incorporated,” using the State Department’s preference among the multiple spellings for Libya’s troubled first family.

The glimpses of the clan’s antics in recent years that have reached Libyans despite Col. Qaddafi’s tight control of the media have added to the public anger now boiling over. And the tensions between siblings could emerge as a factor in the chaos in the oil-rich African country.

Though the Qaddafi children are described as jockeying for position as their father ages — three sons fought to profit from a new Coca-Cola franchise — they have been well taken care of, cables say. “All of the Qaddafi children and favorites are supposed to have income streams from the National Oil Company and oil service subsidiaries,” one cable from 2006 says.

A year ago, a cable reported that proliferating scandals had sent the clan into a “tailspin” and “provided local observers with enough dirt for a Libyan soap opera.” Muatassim had repeated his St. Barts New Year’s fest, this time hiring the pop singers Beyoncé and Usher. An unnamed “local political observer” in Tripoli told American diplomats that Muatassim’s “carousing and extravagance angered some locals, who viewed his activities as impious and embarrassing to the nation.”

curious counter-narrative...,

Guardian | Who among the first evangelists of the internet foresaw this? When they gushingly described the still emerging technology as "transformational", it was surely the media or information, rather than political, landscapes they had in mind. And yet now it is the hard ground of the Middle East, not just our reading habits or entertainment options, that is changing before our eyes – thanks, at least in part, to the internet.

Take the Tunisia uprising that started it all. Those close to it insist a crucial factor was not so much the WikiLeaks revelations of presidential corruption that I mentioned here last week, but Facebook. It was on Facebook that the now legendary Bouazizi video – showing a vegetable seller burning himself to death – was posted, and on Facebook that subsequent demonstrations were organised. Who knows, if the people of Tunis one day build a Freedom Square, perhaps they'll make room for a statue of Mark Zuckerberg. If that sounds fanciful, note the Egyptian newborns named simply "Facebook". (Not that we should get carried away with the notion of internet as liberator: dictators have found it useful, too.)

But what about the rest of us, those unlikely ever to go online to organise an insurrection? What has been the transformative effect on us? Or to borrow the title of the latest of many books chewing on this question, how is the internet changing the way you think?

Given the subject I thought it wise to engage in a little light crowd-sourcing, floating that question on Twitter. As if to vindicate the "wisdom of crowds" thesis often pressed by internet cheerleaders, the range of responses mirrored precisely the arguments raised in the expert essays collected by editor John Brockman in the new book.

There are the idealists, grateful for a tool that has enabled them to think globally. They are now plugged into a range of sources, access to which would once have required effort, expense and long delays. It's not just faraway information that is within reach, but faraway people – activists are able to connect with like-minded allies on the other side of the world. As Newsnight's Paul Mason noted recently: "During the early 20th century people would ride hanging on the undersides of train carriages across borders just to make links like these."

It's this possibility of cross-border collaboration that has the internet gurus excited, as they marvel at open-source efforts such as the Linux computer operating system, with knowledge traded freely across the globe. Richard Dawkins even imagines a future when such co-operation is so immediate, so reflexive, that our combined intelligence comes to resemble a single nervous system: "A human society would effectively become one individual," he writes.

arab democratic revolution far from over


Video - Amy Goodman speaks with Marwan Bishara, senior political analyst at Al Jazeera English, and MIT Professor Emeritus Noam Chomsky.

Guardian | It is self-evidently democratic. To be sure, other factors, above all the socio-economic, greatly fuelled it, but the concentration on this single aspect of it, the virtual absence of other factional or ideological slogans has been striking. Indeed, so striking that, some now say, this emergence of democracy as an ideal and politically mobilising force amounts to nothing less than a "third way" in modern Arab history. The first was nationalism, nourished by the experience of European colonial rule and all its works, from the initial great carve-up of the "Arab nation" to the creation of Israel, and the west's subsequent, continued will to dominate and shape the region. The second, which only achieved real power in non-Arab Iran, was "political Islam", nourished by the failure of nationalism.

And it is doubly revolutionary. First, in the very conduct of the revolution itself, and the sheer novelty and creativity of the educated and widely apolitical youth who, with the internet as their tool, kindled it. Second, and more conventionally, in the depth, scale and suddenness of the transformation in a vast existing order that it seems manifestly bound to wreak.

Arab, yes – but not in the sense of the Arabs going their own way again. Quite the reverse. No other such geopolitical ensemble has so long boasted such a collection of dinosaurs, such inveterate survivors from an earlier, totalitarian era; no other has so completely missed out on the waves of "people's power" that swept away the Soviet empire and despotisms in Latin America, Asia and Africa. In rallying at last to this now universal, but essentially western value called democracy, they are in effect rejoining the world, catching up with history that has left them behind.

If it was in Tunis that the celebrated "Arab street" first moved, the country in which – apart from their own – Arabs everywhere immediately hoped that it would move next was Egypt. That would amount to a virtual guarantee that it would eventually come to them all. For, most pivotal, populous and prestigious of Arab states, Egypt was always a model, sometimes a great agent of change, for the whole region. It was during the nationalist era, after President Nasser's overthrow of the monarchy in 1952, that it most spectacularly played that role. But in a quieter, longer-term fashion, it was also the chief progenitor, through the creation of the Muslim Brotherhood, of the "political Islam" we know today, including – in both the theoretical basis as well as substantially in personnel – the global jihad and al-Qaida that were to become its ultimate, deviant and fanatical descendants.

But third, and most topically, it was also the earliest and most influential exemplar of the thing which, nearly 60 years on, the Arab democratic revolution is all about. Nasser did seek the "genuine democracy" that he held to be best fitted for the goals of his revolution. But, for all its democratic trappings, it was really a military-led, though populist, autocracy from the very outset; down the years it underwent vast changes of ideology, policy and reputation, but, forever retaining its basic structures, it steadily degenerated into that aggravated, arthritic, deeply oppressive and immensely corrupt version of its original self over which Hosni Mubarak presided. With local variations, the system replicated itself in most Arab autocracies, especially the one-time revolutionary ones like his, but in the older, traditional monarchies too.

And, sure enough, Egypt's "street" did swiftly move, and in nothing like the wild and violent manner that the image of the street in action has always tended to conjure up in anxious minds. As a broad and manifestly authentic expression of the people's will, it accomplished the first, crucial stage of what surely ranks as one of the most exemplary, civilised uprisings in history. The Egyptians feel themselves reborn, the Arab world once more holds Egypt, "mother of the world", in the highest esteem. And finally – after much artful equivocation as they waited to see whether the pharaoh, for 30 years the very cornerstone of their Middle East, had actually fallen – President Obama and others bestowed on them the unstinting official tributes of the west.

These plaudits raise the great question: if the Arabs are now rejoining the world what does it mean for the world?

Tuesday, February 22, 2011

the mushroom in christian art

In The Mushroom in Christian Art, author John A. Rush uses an artistic motif to define the nature of Christian art, establish the identity of Jesus, and expose the motive for his murder. Covering Christian art from 200 CE (common era) to the present, the author reveals that Jesus, the Teacher of Righteousness mentioned in the Dead Sea Scrolls, is a personification of the Holy Mushroom, Amanita muscaria. The mushroom, Rush argues, symbolizes numerous mind-altering substances—psychoactive mushrooms, cannabis, henbane, and mandrake—used by the early, more experimentally minded Christian sects.

Drawing on primary historical sources, Rush traces the history—and face—of Jesus as being constructed and codified only after 325 CE. The author relates Jesus’s life to a mushroom typology, discovering its presence, disguised, in early Christian art. In the process, he reveals the ritual nature of the original Christian cults, rites, and rituals, including mushroom use. The book authoritatively uncovers Jesus’s message of peace, love, and spiritual growth and proposes his murder as a conspiracy by powerful reactionary forces who would replace that message with the oppressive religious-political system that endures to this day. Rush’s use of the mushroom motif as a springboard for challenging mainstream views of Western religious history is both provocative and persuasive.

the neuronal replicator hypothesis


MIT Press | We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cognitive architectures. Replicators overcome some limitations of selectionist models of neuronal search. Hebbian learning is combined with replication to structure exploration on the basis of associations learned in the past. Neuromodulatory gating of sets of bistable neurons allows patterns of activation to be copied with mutation. If the probability of copying a set is related to the utility of that set, then an evolutionary algorithm can be implemented at rapid timescales in the brain. Populations of neuronal replicators can undertake a more rapid and stable search than can be achieved by serial modification of a single solution. Hebbian learning added to neuronal replication allows a powerful structuring of variability capable of learning the location of a global optimum from multiple previously visited local optima. Replication of solutions can solve the problem of catastrophic forgetting in the stability-plasticity dilemma. In short, neuronal replication is essential to explain several features of flexible cognition. Predictions are made for the experimental validation of the neuronal replicator hypothesis.
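
A minimal sketch may help make the algorithm concrete. This toy (plain Python, with an invented bit-pattern encoding, fitness function and parameters, none of it from the paper) shows only the core loop: utility-weighted copying with mutation.

```python
import random

# Toy neuronal replicator: bit patterns stand in for patterns of activation
# across a set of bistable neurons. Copying is noisy (mutation), and the
# chance that a pattern gets copied is proportional to its utility.

PATTERN_LEN = 16
POP_SIZE = 20
MUTATION_RATE = 0.05

def utility(pattern):
    # Hypothetical utility: agreement with an arbitrary target pattern.
    target = [i % 2 for i in range(PATTERN_LEN)]
    return sum(1 for a, b in zip(pattern, target) if a == b)

def copy_with_mutation(pattern):
    # "Copying with mutation": each bit may flip during the copy.
    return [1 - b if random.random() < MUTATION_RATE else b for b in pattern]

def step(population):
    # Utility-proportional selection of which patterns get copied.
    weights = [utility(p) + 1e-9 for p in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    return [copy_with_mutation(p) for p in parents]

population = [[random.randint(0, 1) for _ in range(PATTERN_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(50):
    population = step(population)
print("best utility after 50 generations:", max(utility(p) for p in population))
```

The paper's further machinery (Hebbian structuring of variability, neuromodulatory gating, bistable neuron dynamics) is not captured in this sketch.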

hebbian theory

Wikipedia | Hebbian theory describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in 1949, it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states:
Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.
The theory is often summarized as "cells that fire together, wire together", a simplified and figurative way of putting the theory. It attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Such learning is known as Hebbian learning.
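
The rule is simple enough to write down directly. A minimal sketch of the "fire together, wire together" update, with made-up activity vectors and learning rate:

```python
import numpy as np

# Hebb's rule: the weight between a presynaptic and a postsynaptic unit
# grows in proportion to how often the two are active together:
#     dw = eta * pre * post

eta = 0.01                        # learning rate (arbitrary, for illustration)
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([1.0, 1.0])       # postsynaptic activity
W = np.zeros((post.size, pre.size))

for _ in range(100):              # repeated, persistent co-activation
    W += eta * np.outer(post, pre)

print(W)  # weights grow only where both cells were active together
```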

Monday, February 21, 2011

a really elaborate hardware sales pitch...,

The Economist | Four years in the making, Watson is the brainchild of David Ferrucci, head of the DeepQA project at IBM’s research centre in Yorktown Heights, New York. Dr Ferrucci and his team have been using search, semantics and natural-language processing technologies to improve the way computers handle questions and answers in plain English. That is easier said than done. In parsing a question, a computer has to decide what is the verb, the subject, the object, the preposition as well as the object of the preposition. It must disambiguate words with multiple meanings, by taking into account any context it can recognise. When people talk among themselves, they bring so much contextual awareness to the conversation that answers become obvious. “The computer struggles with that,” says Dr Ferrucci.
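
As a rough illustration of what "taking context into account" means computationally (a crude Lesk-style overlap heuristic, not anything IBM has described), disambiguation can be as simple as picking the sense whose dictionary gloss shares the most words with the question:

```python
# Toy word-sense disambiguation by context overlap (a crude Lesk-style
# heuristic). The senses, glosses and question are invented for illustration.

SENSES = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "the sloping land alongside a river or stream",
}

def disambiguate(senses, question):
    context = set(question.lower().split())
    scores = {sense: len(context & set(gloss.lower().split()))
              for sense, gloss in senses.items()}
    return max(scores, key=scores.get), scores

question = "This river city grew up around a bend where boats could reach the bank"
best, scores = disambiguate(SENSES, question)
print(best, scores)  # the river sense wins on shared context words
```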

Another problem for the computer is copying the facility the human brain has to use experience-based short-cuts (heuristics) to perform tasks. Computers have to do this using lengthy step-by-step procedures (algorithms). According to Dr Ferrucci, it would take two hours for one of the fastest processors to answer a simple natural-language question. To stand any chance of winning, contestants on “Jeopardy!” have to hit the buzzer with a correct answer within three seconds. For that reason, Watson was endowed with no fewer than 2,880 POWER7 processor cores spread over 90 Power 750 servers. Flat out, the machine can perform 80 trillion calculations a second. For comparison’s sake, a modern PC can manage around 100 billion calculations a second.

For the contest, Watson had to rely entirely on its own resources. That meant no searching the internet for answers or asking humans for help. Instead, it used more than 100 different algorithms to parse the natural-language questions and interrogate the 15 trillion bytes of trivia stored in its memory banks—equivalent to 200m pages of text. In most cases, Watson could dredge up answers quicker than either of its two human rivals. When it was not sure of the answer, the computer simply shut up rather than risk losing the bet. That way, it avoided impulsive behaviour that cost its opponents points.

Your correspondent finds it rather encouraging that a machine has beaten the best in the business. After all, getting a computer to converse with humans in their own language has been an elusive goal of artificial intelligence for decades. Making it happen says more about human achievement than anything spooky about machine dominance. And should a machine manage the feat without the human participants in the conversation realising they are not talking to another person, then the machine would pass the famous test for artificial intelligence devised in 1950 by Alan Turing, a British mathematician famous for cracking the Enigma and Lorenz ciphers during the second world war.

It is only a matter of time before a computer passes the Turing Test. It will not be Watson, but one of its successors doubtless will. Ray Kurzweil, a serial innovator, engineer and prognosticator, believes it will happen by 2029. He notes that it was only five years after the massive and hugely expensive Deep Blue beat Mr Kasparov in 1997 that Deep Fritz was able to achieve the same level of performance by combining the power of just eight personal computers. In part, that was because of the inexorable effects of Moore’s Law halving the price/performance of computing every 18 months. It was also due to the vast improvements in pattern-recognition software used to make the crucial tree-pruning decisions that determine successful moves and countermoves in chess.
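
The Deep Blue-to-Deep Fritz arithmetic is easy to check. A back-of-the-envelope sketch, assuming the 18-month halving holds exactly:

```python
# Back-of-the-envelope check of the Moore's-Law claim above: if
# price/performance halves every 18 months, how much cheaper was the
# same computation by the time Deep Fritz appeared?

years = 2002 - 1997                    # Deep Blue to Deep Fritz
doubling_period = 1.5                  # years per halving of price/performance
doublings = years / doubling_period    # ~3.3
improvement = 2 ** doublings           # ~10x more computing per dollar

print(f"{doublings:.1f} doublings -> roughly {improvement:.0f}x better price/performance")
# That order-of-magnitude gain, plus smarter tree pruning, is how eight PCs
# could match a one-off supercomputer five years later.
```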

ephaptic coupling

Cordis | Researchers have long believed that neurons in the brain communicate through physical connections known as synapses. However, EU-funded neuroscientists have uncovered strong evidence that neurons also communicate with each other through weak electric fields, a finding that could help us understand how biophysics gives rise to cognition.

The study, published in the journal Nature Neuroscience, was funded in part by the EUSYNAPSE ('From molecules to networks: understanding synaptic physiology and pathology in the brain through mouse models') project, which received EUR 8 million under the 'Life sciences, genomics and biotechnology for health' Thematic area of the EU's Sixth Framework Programme (FP6).

Lead author Dr Costas Anastassiou, a postdoctoral scholar at the California Institute of Technology (Caltech) in the US, and his colleagues explain how the brain is an intricate network of individual nerve cells, or neurons, that use electrical and chemical signals to communicate with one another.

Every time an electrical impulse races down the branch of a neuron, a tiny electric field surrounds that cell. A few neurons are like individuals talking to each other and having small conversations. But when they all fire together, it's like the roar of a crowd at a sports game.

That 'roar' is the summation of all the tiny electric fields created by organised neural activity in the brain. While it has long been recognised that the brain generates weak electrical fields in addition to the electrical activity of firing nerve cells, these fields were considered epiphenomena - superfluous side effects.

Nothing was known about these weak fields because, in fact, they are usually too weak to measure at the level of individual neurons; their dimensions are measured in millionths of a metre (microns). Therefore, the researchers decided to determine whether these weak fields have any effect on neurons.

Experimentally, measuring such weak fields emanating from or affecting a small number of brain cells was no easy task. Extremely small electrodes were used in close proximity to a cluster of rat neurons to look for 'local field potentials', the electric fields generated by neuron activity. The result? They were successful in measuring fields as weak as one millivolt (one thousandth of a volt).

Commenting on the results, Dr Anastassiou says: 'Because it had been so hard to position that many electrodes within such a small volume of brain tissue, the findings of our research are truly novel. Nobody had been able to attain this level of spatial and temporal resolution.'

What they found was surprising. 'We observed that fields as weak as one millivolt per millimetre robustly alter the firing of individual neurons, and increase the so-called "spike-field coherence" - the synchronicity with which neurons fire with relationship to the field,' he says.
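
Spike-field coherence is a standard signal-processing quantity: the coherence between a binned spike train and the local field potential. A minimal sketch with synthetic data (not the study's actual analysis), using SciPy:

```python
import numpy as np
from scipy.signal import coherence

# Synthetic illustration of spike-field coherence: spikes that tend to fire
# near the peaks of a slow field oscillation are coherent with that field.
# (Toy data only -- not the study's analysis.)

fs = 1000.0                                  # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
field = np.sin(2 * np.pi * 8 * t)            # an 8 Hz "field" oscillation

# Spike probability modulated by the field, plus background firing.
rate = 0.01 * (1 + 0.8 * field)
spikes = (np.random.rand(t.size) < rate).astype(float)

f, Cxy = coherence(spikes, field, fs=fs, nperseg=1024)
print("coherence near 8 Hz:", Cxy[np.argmin(np.abs(f - 8))])
```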

Sunday, February 20, 2011

you won't find consciousness in the brain


How Does The Brain Produce Consciousness


NewScientist | MOST neuroscientists, philosophers of the mind and science journalists feel the time is near when we will be able to explain the mystery of human consciousness in terms of the activity of the brain. There is, however, a vocal minority of neurosceptics who contest this orthodoxy. Among them are those who focus on claims neuroscience makes about the preciseness of correlations between indirectly observed neural activity and different mental functions, states or experiences.

This was well captured in a 2009 article in Perspectives on Psychological Science by Harold Pashler from the University of California, San Diego, and colleagues, that argued: "...these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained."

Believers will counter that this is irrelevant: as our means of capturing and analysing neural activity become more powerful, so we will be able to make more precise correlations between the quantity, pattern and location of neural activity and aspects of consciousness.

This may well happen, but my argument is not about technical, probably temporary, limitations. It is about the deep philosophical confusion embedded in the assumption that if you can correlate neural activity with consciousness, then you have demonstrated they are one and the same thing, and that a physical science such as neurophysiology is able to show what consciousness truly is.

Many neurosceptics have argued that neural activity is nothing like experience, and that the least one might expect if A and B are the same is that they be indistinguishable from each other. Countering that objection by claiming that, say, activity in the occipital cortex and the sensation of light are two aspects of the same thing does not hold up because the existence of "aspects" depends on the prior existence of consciousness and cannot be used to explain the relationship between neural activity and consciousness.

This disposes of the famous claim by John Searle, Slusser Professor of Philosophy at the University of California, Berkeley: that neural activity and conscious experience stand in the same relationship as molecules of H2O to water, with its properties of wetness, coldness, shininess and so on. The analogy fails as the level at which water can be seen as molecules, on the one hand, and as wet, shiny, cold stuff on the other, are intended to correspond to different "levels" at which we are conscious of it. But the existence of levels of experience or of description presupposes consciousness. Water does not intrinsically have these levels.

We cannot therefore conclude that when we see what seem to be neural correlates of consciousness that we are seeing consciousness itself. While neural activity of a certain kind is a necessary condition for every manifestation of consciousness, from the lightest sensation to the most exquisitely constructed sense of self, it is neither a sufficient condition of it, nor, still less, is it identical with it. If it were identical, then we would be left with the insuperable problem of explaining how intracranial nerve impulses, which are material events, could "reach out" to extracranial objects in order to be "of" or "about" them. Straightforward physical causation explains how light from an object brings about events in the occipital cortex. No such explanation is available as to how those neural events are "about" the physical object. Biophysical science explains how the light gets in but not how the gaze looks out.

Many features of ordinary consciousness also resist neurological explanation. Take the unity of consciousness. I can relate things I experience at a given time (the pressure of the seat on my bottom, the sound of traffic, my thoughts) to one another as elements of a single moment. Researchers have attempted to explain this unity, invoking quantum coherence (the cytoskeletal micro-tubules of Stuart Hameroff at the University of Arizona, and Roger Penrose at the University of Oxford), electromagnetic fields (Johnjoe McFadden, University of Surrey), or rhythmic discharges in the brain (the late Francis Crick).

These fail because they assume that an objective unity or uniformity of nerve impulses would be subjectively available, which, of course, it won't be. Even less would this explain the unification of entities that are, at the same time, experienced as distinct. My sensory field is a many-layered whole that also maintains its multiplicity. There is nothing in the convergence or coherence of neural pathways that gives us this "merging without mushing", this ability to see things as both whole and separate.

And there is an insuperable problem with a sense of past and future. Take memory. It is typically seen as being "stored" as the effects of experience which leave enduring changes in, for example, the properties of synapses and consequently in circuitry in the nervous system. But when I "remember", I explicitly reach out of the present to something that is explicitly past. A synapse, being a physical structure, does not have anything other than its present state. It does not, as you and I do, reach temporally upstream from the effects of experience to the experience that brought about the effects. In other words, the sense of the past cannot exist in a physical system. This is consistent with the fact that the physics of time does not allow for tenses: Einstein called the distinction between past, present and future a "stubbornly persistent illusion".

There are also problems with notions of the self, with the initiation of action, and with free will. Some neurophilosophers deal with these by denying their existence, but an account of consciousness that cannot find a basis for voluntary activity or the sense of self should conclude not that these things are unreal but that neuroscience provides at the very least an incomplete explanation of consciousness.

I believe there is a fundamental, but not obvious, reason why that explanation will always remain incomplete - or unrealisable. This concerns the disjunction between the objects of science and the contents of consciousness. Science begins when we escape our subjective, first-person experiences into objective measurement, and reach towards a vantage point the philosopher Thomas Nagel called "the view from nowhere". You think the table over there is large, I may think it is small. We measure it and find that it is 0.66 metres square. We now characterise the table in a way that is less beholden to personal experience.

Science begins when we escape our first-person subjective experience

Thus measurement takes us further from experience and the phenomena of subjective consciousness to a realm where things are described in abstract but quantitative terms. To do its work, physical science has to discard "secondary qualities", such as colour, warmth or cold, taste - in short, the basic contents of consciousness. For the physicist then, light is not in itself bright or colourful, it is a mixture of vibrations in an electromagnetic field of different frequencies. The material world, far from being the noisy, colourful, smelly place we live in, is colourless, silent, full of odourless molecules, atoms, particles, whose nature and behaviour is best described mathematically. In short, physical science is about the marginalisation, or even the disappearance, of phenomenal appearance/qualia, the redness of red wine or the smell of a smelly dog.

Consciousness, on the other hand, is all about phenomenal appearances/qualia. As science moves from appearances/qualia and toward quantities that do not themselves have the kinds of manifestation that make up our experiences, an account of consciousness in terms of nerve impulses must be a contradiction in terms. There is nothing in physical science that can explain why a physical object such as a brain should ascribe appearances/qualia to material objects that do not intrinsically have them.

Material objects require consciousness in order to "appear". Then their "appearings" will depend on the viewpoint of the conscious observer. This must not be taken to imply that there are no constraints on the appearance of objects once they are objects of consciousness.

Our failure to explain consciousness in terms of neural activity inside the brain inside the skull is not due to technical limitations which can be overcome. It is due to the self-contradictory nature of the task, of which the failure to explain "aboutness", the unity and multiplicity of our awareness, the explicit presence of the past, the initiation of actions, the construction of self are just symptoms. We cannot explain "appearings" using an objective approach that has set aside appearings as unreal and which seeks a reality in mass/energy that neither appears in itself nor has the means to make other items appear. The brain, seen as a physical object, no more has a world of things appearing to it than does any other physical object.

particles that flock


Video - Made to be used in the explanation of experiments being carried out at the CERN LHC.

ScientificAmerican | In its first six months of operation, the Large Hadron Collider near Geneva has yet to find the Higgs boson, solve the mystery of dark matter or discover hidden dimensions of spacetime. It has, however, uncovered a tantalizing puzzle, one that scientists will take up again when the collider restarts in February following a holiday break. Last summer physicists noticed that some of the particles created by their proton collisions appeared to be synchronizing their flight paths, like flocks of birds. The findings were so bizarre that “we’ve spent all the time since [then] convincing ourselves that what we were seeing was real,” says Guido Tonelli, a spokesperson for CMS, one of two general-purpose experiments at the LHC.

The effect is subtle. When proton collisions result in the release of more than 110 new particles, the scientists found, the emerging particles seem to fly in the same direction. The high-energy collisions of protons in the LHC may be uncovering “a new deep internal structure of the initial protons,” says Frank Wilczek of the Massachusetts Institute of Technology, winner of a Nobel Prize for his explanation of the action of gluons. Or the particles may have more interconnections than scientists had realized. “At these higher energies [of the LHC], one is taking a snapshot of the proton with higher spatial and time resolution than ever before,” Wilczek says.
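
Operationally, "flying in the same direction" is measured as a two-particle angular correlation: for high-multiplicity events, histogram the azimuthal-angle difference between all particle pairs and look for an excess at small separations. A toy sketch on fabricated events, just to show the bookkeeping (real analyses also work in delta-eta and apply mixed-event corrections):

```python
import numpy as np

# Toy two-particle azimuthal correlation on fabricated events. Real CMS
# analyses bin in (delta-eta, delta-phi) with mixed-event corrections; this
# only shows the pair-counting step.

rng = np.random.default_rng(0)

def event(n_particles, correlated=False):
    if correlated:
        # A fraction of particles share a common direction (flock-like).
        common = rng.uniform(0, 2 * np.pi)
        phis = np.where(rng.random(n_particles) < 0.3,
                        rng.normal(common, 0.3, n_particles),
                        rng.uniform(0, 2 * np.pi, n_particles))
    else:
        phis = rng.uniform(0, 2 * np.pi, n_particles)
    return phis % (2 * np.pi)

def delta_phi_hist(phis, bins=36):
    i, j = np.triu_indices(phis.size, k=1)
    dphi = np.abs(phis[i] - phis[j])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)   # fold into [0, pi]
    return np.histogram(dphi, bins=bins, range=(0, np.pi))[0]

uncorr = sum(delta_phi_hist(event(120)) for _ in range(200))
corr = sum(delta_phi_hist(event(120, correlated=True)) for _ in range(200))
print("excess of near-side (small delta-phi) pairs:", corr[0] - uncorr[0])
```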

When seen with such high resolution, protons, according to a theory developed by Wilczek and his colleagues, consist of a dense medium of gluons—massless particles that act inside the protons and neutrons, controlling the behavior of quarks, the constituents of all protons and neutrons. “It is not implausible,” Wilczek says, “that the gluons in that medium interact and are correlated with one another, and these interactions are passed on to the new particles.”

If confirmed by other LHC physicists, the phenomenon would be a fascinating new finding about one of the most common particles in our universe and one scientists thought they understood well. Full-monty at arXiv.

Saturday, February 19, 2011

more than a feeling...,

Wired | Natural selection has nothing to worry about.

Let’s begin with energy efficiency. One of the most remarkable facts about the human brain is that it requires less energy (12 watts) than a light bulb. In other words, that loom of a trillion synapses, exchanging ions and neurotransmitters, costs less to run than a little incandescence. Or look at Deep Blue: when the machine was operating at full speed, it was a fire hazard, and required specialized heat-dissipating equipment to keep it cool. Meanwhile, Kasparov barely broke a sweat.

The same lesson applies to Watson. I couldn’t find reliable information on its off-site energy consumption, but suffice to say it required many tens of thousands of times as much energy as all the human brains on stage combined. While this might not seem like a big deal, evolution long ago realized that we live in a world of scarce resources. Evolution was right. As computers become omnipresent in our lives — I’ve got one dissipating heat in my pocket right now — we’re going to need to figure out how to make them more efficient. Fortunately, we’ve got an ideal prototype locked inside our skull.

The second thing Watson illustrates is the power of metaknowledge, or the ability to reflect on what we know. As Vaughan Bell pointed out a few months ago, this is Watson’s real innovation:

Answering this question needs pre-existing knowledge and, computationally, two main approaches. One is constraint satisfaction, which finds which answer is the ‘best fit’ to a problem which doesn’t have a mathematically exact solution; and the other is a local search algorithm, which indicates when further searching is unlikely to yield a better result – in other words, when to quit computing and give an answer – because you can always crunch more data.
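
A hedged sketch of the second ingredient, a local search that knows when to quit, stripped down to a toy hill-climb with a patience-based stopping rule (the objective and parameters are invented):

```python
import random

# Toy local search with a stopping rule: keep proposing small changes to a
# candidate answer; quit when further searching is unlikely to yield a
# better result (no improvement for `patience` consecutive proposals).

def score(x):
    # Hypothetical "best fit" objective with no exact solution; higher is better.
    return -(x - 3.7) ** 2

def local_search(start, patience=50, step_size=0.1):
    best, best_score = start, score(start)
    stale = 0
    while stale < patience:
        candidate = best + random.uniform(-step_size, step_size)
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
            stale = 0           # progress: keep computing
        else:
            stale += 1          # another proposal without progress
    return best, best_score

answer, fit = local_search(0.0)
print(f"settled on {answer:.2f} with score {fit:.4f}")
```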

Our brain comes preprogrammed with metaknowledge: We don’t just know things — we know we know them, which leads to feelings of knowing. I’ve written about this before, but one of my favorite examples of such feelings is when a word is on the tip of the tongue. Perhaps it occurs when you run into an old acquaintance whose name you can’t remember, although you know that it begins with the letter J. Or perhaps you struggle to recall the title of a recent movie, even though you can describe the plot in perfect detail.

What’s interesting about this mental hiccup is that, even though the mind can’t remember the information, it’s convinced that it knows it. We have a vague feeling that, if we continue to search for the missing word, we’ll be able to find it. (This is a universal experience: The vast majority of languages, from Afrikaans to Hindi to Arabic, even rely on tongue metaphors to describe the tip-of-the-tongue moment.) But here’s the mystery: If we’ve forgotten a person’s name, then why are we so convinced that we remember it? What does it mean to know something without being able to access it?

This is where feelings of knowing prove essential. The feeling is a signal that we can find the answer, if only we keep on thinking about the question. And these feelings aren’t just relevant when we can’t remember someone’s name.

what is watson?


Video - IBM researcher discusses the technology behind its language-parsing machine.

Friday, February 18, 2011

a tipping point is nearing

American Thinker | We are facing a tipping point. There will soon be a crisis affecting US citizens beyond any experienced since the Great Depression. And it may happen within the year. This past week three awful developments put a dagger into the hope for a growth-led recovery, which held promise of possibly averting a debt and currency implosion crushing the American economy.

The first was a little-noticed, but tragic, series of events in the newly elected House of Representatives. The speaker, Mr. Boehner, had given the task of fashioning the majority's spending cut agenda to Representative Paul Ryan (R-Wisconsin), a rising conservative star representing the vocal wing of fiscal conservatives in the House. Promising to cut $100 billion of government spending, Mr. Boehner spoke before the elections of the urgency to produce immediately when Republicans took control.

The second awful development to occur last week was the employment report from the Labor Department, describing employment conditions in the U.S. economy in January, 2011. The report was packed with statistics, all pointing to anemic growth with a modest pickup in manufacturing employment. The little-noticed (not by the bond market) aspect of the report was the "benchmark" revisions, an attempt to get the total picture more accurate each year than simply adding up all the monthly change numbers. This year's benchmark revisions showed two alarming things: a decline from previously reported employment in December 2010 of nearly 500,000 jobs, and a reduction in the workforce of a similar amount.

The third development of the last week which received much less press than the Egyptian crisis is the "new normal" in Social Security. The CBO released a report disclosing that the net cash flow for the Social Security trust fund -- excluding interest received from the book entry bonds it holds in U.S. debt -- will be negative $56 billion in 2011, and for every year hence even more so. This is the train wreck that was supposed to happen in 2020. It is upon us now. Any limp action by conservatives to bring this program into solvency can be expected only to slow the raging river of red ink this behemoth program (along with its twin Godzilla, Medicare) spills on U.S. citizens. With no political will to fix them, these "entitlements" will obligate Americans to borrow more and more money from China--to honor promises we simply refuse to admit we can't keep.

So why do these developments argue for a crisis of Great Depression proportions? Because they speak unequivocally of our pathway to insolvency, and the potential of currency failure via hyperinflation, despite the hopes of conservatives and market participants to see a halt of such direction. Housing prices, the foundation of so much of private citizen debt loads, are destined for stagnation -- not inflation -- as the supply of homes is far greater than the demand -- 11% of the nation's homes stand empty today. When the world begins to recognize that there is no fix for America's borrowings, a fast and brutal exodus from our currency and bonds can send us a shock in mere weeks or months.

Unlike the Great Depression, however, we will enter such a shock in a weakened state, with few producers among us and record mountains of debt. More cataclysmic is the specter of inadequate food, as less than 4% of us farm, and those that do may cease to be as productive or may not accept devalued currency as payment, should the tipping point be crossed. Corn and wheat prices in the U.S. have nearly doubled in less than 12 months, using our rapidly evaporating currency as the medium of exchange.

the youth unemployment bomb

BloombergBW | In Tunisia, the young people who helped bring down a dictator are called hittistes—French-Arabic slang for those who lean against the wall. Their counterparts in Egypt, who on Feb. 1 forced President Hosni Mubarak to say he won't seek reelection, are the shabab atileen, unemployed youths. The hittistes and shabab have brothers and sisters across the globe. In Britain, they are NEETs—"not in education, employment, or training." In Japan, they are freeters: an amalgam of the English word freelance and the German word Arbeiter, or worker. Spaniards call them mileuristas, meaning they earn no more than 1,000 euros a month. In the U.S., they're "boomerang" kids who move back home after college because they can't find work. Even fast-growing China, where labor shortages are more common than surpluses, has its "ant tribe"—recent college graduates who crowd together in cheap flats on the fringes of big cities because they can't find well-paying work.

In each of these nations, an economy that can't generate enough jobs to absorb its young people has created a lost generation of the disaffected, unemployed, or underemployed—including growing numbers of recent college graduates for whom the post-crash economy has little to offer. Tunisia's Jasmine Revolution was not the first time these alienated men and women have made themselves heard. Last year, British students outraged by proposed tuition increases—at a moment when a college education is no guarantee of prosperity—attacked the Conservative Party's headquarters in London and pummeled a limousine carrying Prince Charles and his wife, Camilla, Duchess of Cornwall. Scuffles with police have repeatedly broken out at student demonstrations across Continental Europe. And last March in Oakland, Calif., students protesting tuition hikes walked onto Interstate 880, shutting it down for an hour in both directions.

More common is the quiet desperation of a generation in "waithood," suspended short of fully employed adulthood. At 26, Sandy Brown of Brooklyn, N.Y., is a college graduate and a mother of two who hasn't worked in seven months. "I used to be a manager at a Duane Reade [drugstore] in Manhattan, but they laid me off. I've looked for work everywhere and I can't find nothing," she says. "It's like I got my diploma for nothing."

While the details differ from one nation to the next, the common element is failure—not just of young people to find a place in society, but of society itself to harness the energy, intelligence, and enthusiasm of the next generation. Here's what makes it extra-worrisome: The world is aging. In many countries the young are being crushed by a gerontocracy of older workers who appear determined to cling to the better jobs as long as possible and then, when they do retire, demand impossibly rich private and public pensions that the younger generation will be forced to shoulder.

In short, the fissure between young and old is deepening. "The older generations have eaten the future of the younger ones," former Italian Prime Minister Giuliano Amato told Corriere della Sera. In Britain, Employment Minister Chris Grayling has called chronic unemployment a "ticking time bomb." Jeffrey A. Joerres, chief executive officer of Manpower (MAN), a temporary-services firm with offices in 82 countries and territories, adds, "Youth unemployment will clearly be the epidemic of this next decade unless we get on it right away. You can't throw in the towel on this."

The highest rates of youth unemployment are found in the Middle East and North Africa, at roughly 24 percent each, according to the International Labor Organization. Most of the rest of the world is in the high teens—except for South and East Asia, the only regions with single-digit youth unemployment. Young people are nearly three times as likely as adults to be unemployed. Fist tap Ed.

bahrain's crackdown threatens u.s. interests


Video - Ruling Sunni family cracks down hard on non-violent Shiite protesters

WaPo | FOR A DECADE, the ruling al-Khalifa family of Bahrain has been claiming to be leading the country toward democracy - an assertion frequently endorsed by the United States. On Thursday, the regime demolished that policy and any pretense about its real, autocratic nature. It dispatched its security forces to assault and violently disperse peaceful pro-democracy demonstrators who were camped in Manama's Pearl Square. At least four people were killed and 230 injured in the predawn raid.

The brutality is unlikely to restore stability to the Persian Gulf nation, even in the short term - and it poses a direct threat to vital interests of the United States. The U.S. 5th Fleet is based in Bahrain and plays an important role in providing security to the Gulf and in containing nearby Iran. Not only is the crackdown likely to weaken rather than strengthen an allied government, but the United States cannot afford to side with a regime that violently represses the surging Arab demand for greater political freedom.

Bahrain is the first of the Arab world's monarchies to experience major unrest in what is becoming a region-wide upheaval - and with good reason. The Khalifa family and ruling elite, who are Sunni, preside over a population that is 70 percent Shiite, and the majority is disenfranchised, excluded from leading roles in the government or security forces. Ten years ago, the ruling family launched a cautious reform process, instituting a parliament with limited powers. But in the last year it has moved in reverse. Last summer two dozen Shiite opposition leaders were arrested and charged under terrorism laws. Many other activists were rounded up, and a human rights group was taken over by the government.

The Obama administration failed to react forcefully to those abuses, which set the stage for this week's uprising by thousands of demonstrators from both the Shiite and Sunni communities. In December, visiting Secretary of State Hillary Rodham Clinton heaped praise on the government for "the progress it is making on all fronts" and minimized the political prosecutions, describing "the glass as half full."

web of popularity achieved by bullying


Video - Wonder Woman, the intentional antithesis of Superman.

NYTimes | New research suggests that the road to high school popularity can be treacherous, and that students near the top of the social hierarchy are often both perpetrators and victims of aggressive behavior involving their peers.

The latest findings, being published this month in The American Sociological Review, offer a fascinating glimpse into the social stratification of teenagers. The new study, along with related research from the University of California, Davis, also challenges the stereotypes of both high school bully and victim.

Highly publicized cases of bullying typically involve chronic harassment of socially isolated students, but the latest studies suggest that various forms of teenage aggression and victimization occur throughout the social ranks as students jockey to improve their status.

The findings contradict the notion of the school bully as maladjusted or aggressive by nature. Instead, the authors argue that when it comes to mean behavior, the role of individual traits is “overstated,” and much of it comes down to concern about status.

“Most victimization is occurring in the middle to upper ranges of status,” said the study’s author, Robert Faris, an assistant professor of sociology at U.C. Davis. “What we think often is going on is that this is part of the way kids strive for status. Rather than going after the kids on the margins, they might be targeting kids who are rivals.”

Educators and parents are often unaware of the daily stress and aggression with which even socially well-adjusted students must cope.

“It may be somewhat invisible,” Dr. Faris said. “The literature on bullying has so focused on this one dynamic of repeated chronic antagonism of socially isolated kids that it ignores these other forms of aggression.”

Thursday, February 17, 2011

what becomes of science when the wells run dry?

The Scientist | The practice and funding of science may change drastically when humanity enters an era of energy crisis, in which cheap oil is but a distant memory. While the most hyperbolic doomsayers posit catastrophic scenarios of oil shortage, global conflict, and severe deprivation, the truth is that long before society downsizes in the face of energy scarcity, climate change, resource depletion, and population growth, the way science is done and the role of research in society will likely change drastically.

One of the main ways that the average scientist will feel the effects of oil shortages will be as everyone does: by an enormous inflation in the cost of doing business. Most scientific research is expensive not just in terms of dollars, but also in terms of energy. On average, for each dollar researchers spend today, the energy equivalent of about a cup of oil is used. A $1 million grant can consume the equivalent of about 1,100 barrels of oil. In the future, the same amount of dollars will buy significantly less research, and scientists will have to become much more efficient and inventive in doing research.
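
The cup-per-dollar and 1,100-barrel figures can be checked against each other. A rough sketch, assuming a US cup of about 0.24 litres and a 159-litre barrel (the article does not say exactly which "cup" it means):

```python
# Rough consistency check of "a cup of oil per research dollar".
# Assumptions (not from the article): US cup ~= 0.2366 litres,
# oil barrel = 42 US gallons ~= 158.99 litres.

CUP_L = 0.2366
BARREL_L = 158.99
grant_dollars = 1_000_000

# A full cup per dollar would imply:
litres = grant_dollars * CUP_L
print("at 1 cup/dollar:", round(litres / BARREL_L), "barrels")    # ~1,488

# The article's 1,100 barrels corresponds to a bit under a cup per dollar:
cups_per_dollar = 1_100 * BARREL_L / (grant_dollars * CUP_L)
print("implied cups/dollar:", round(cups_per_dollar, 2))          # ~0.74
```

Either way, "about a cup per dollar" and "about 1,100 barrels per $1 million" are the same order of magnitude.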

Far-flung research projects, particularly common among ecologists and other natural scientists, will also become much less affordable. Trips to distant scientific meetings will become prohibitively expensive. Electronic conferencing will become the norm.

The nature of interaction within the scientific community may change as well. Like the competitive atmosphere already experienced in developing countries, limited resources may lead groups to be less open and to actively exclude other groups.

In a time of energy scarcity, societal priorities will also shift, and science will be justified and supported based on the perception of how it is helping solve mounting societal problems. While today basic science is often considered intellectually superior and more elegant than applied science, in coming decades, applied science will become dominant, as research becomes required to preserve the functioning of ecosystems and the services they provide.

Natural scientists, especially those in the field of ecology, will have a critical role to play in this bleak future, in which the human economy depends much more on ecological systems. With transport and global trade hobbled, people will have to depend to a greater extent on nearby ecosystems, both natural and agricultural. Highly productive ecosystems have enormous economic value. The natural asset value of the Mississippi delta, for example, has been estimated to be as high as $1.4 trillion. Research on these natural communities will receive more attention, as more food, fuel, and fiber will have to be coaxed from nature in a sustainable way.

is the world producing enough food?

NYTimes | Food inflation has returned for many of the same old reasons: the demand for meat has returned with the recovery of middle-income economies; the price of oil is up, which both raises the cost of food production and transport, and stokes the diversion of food crops into biofuel production. Speculators are taking pounds of flesh in the commodity exchanges. And, of course, freak weather has disrupted production in key export zones.

But what makes the weather matter? This is hardly the first La Niña weather cycle, after all. Every human civilization has understood the need to plan for climate’s vicissitudes. Over the centuries, societies developed the tools of grain stores, crop diversification and "moral economies" to guarantee the poor access to food in times of crisis.

Global economic liberalization discarded these buffers in favor of lean lines of trade. Safety nets and storage became inefficient and redundant – if crops failed in one part of the world, the market would always provide from another.

Climate change turns this thinking on its head. A shock in one corner of the world now ripples to every other. The economic architecture that promised efficiency has instead made us all more vulnerable. Little has changed in this crucial respect since the last food crisis. But this isn’t simply a rerun of 2008.
Photo - Rising food prices caused protests in Karak, Jordan, in January. (Muhammad Hamed/Reuters)

While the global recession has turned a corner for some people in some countries, unemployment remains stubbornly high for many, and hunger has trailed it. There are 75 million people more undernourished now than in 2008. At the same time, governments are cutting back on entitlement programs for the poor as part of austerity drives to fight inflation.

Urban families are unable to afford food and fuel, and governments are unresponsive to their plight. Under such circumstances, as Egyptians know too well, food prices and climate change are revolution’s kindling.
