Tuesday, August 03, 2010

cosmic rays not background after all?

LiveScience | A puzzling pattern in the cosmic rays bombarding Earth from space has been discovered by an experiment buried deep under the ice of Antarctica.

Cosmic rays are highly energetic particles streaming in from space that are thought to originate in the distant remnants of dead stars.

But it turns out these particles are not arriving uniformly from all directions. The new study detected an overabundance of cosmic rays coming from one part of the sky, and a lack of cosmic rays coming from another.

This odd pattern was detected by the IceCube Neutrino Observatory, an experiment still under construction that is actually intended to detect other exotic particles called neutrinos. In fact, scientists have gone out of their way to try to block out all signals from cosmic rays in order to search for the highly elusive neutrinos, which are much harder to find.

Yet in sifting through their cosmic-ray data to try to separate it from possible neutrino signals, the researchers noticed the intriguing pattern. Fist tap Nana.

Monday, August 02, 2010

human race will be extinct within 100 years

DailyMail | As the scientist who helped eradicate smallpox, he certainly knows a thing or two about extinction.

And now Professor Frank Fenner, emeritus professor of microbiology at the Australian National University, has predicted that the human race will be extinct within the next 100 years.

He has claimed that the human race will be unable to survive a population explosion and ‘unbridled consumption’.

Fenner told The Australian newspaper that 'Homo sapiens will become extinct, perhaps within 100 years.'

'A lot of other animals will, too,' he added.

'It's an irreversible situation. I think it's too late. I try not to express that because people are trying to do something, but they keep putting it off.'

Since humans entered an unofficial scientific period known as the Anthropocene - the time since industrialisation - we have had an effect on the planet that rivals any ice age or comet impact, he said.

Fenner, 95, has won awards for his work in helping eradicate the variola virus that causes smallpox and has written or co-written 22 books.

He announced the eradication of the disease to the World Health Assembly in 1980 and it is still regarded as one of the World Health Organisation's greatest achievements.

He was also heavily involved in helping to control Australia's myxomatosis problem in rabbits.

Last year, official UN figures estimated the world’s population at 6.8 billion. It is predicted to exceed seven billion by the end of 2011.

Fenner blames the onset of climate change for the human race’s imminent demise.

He said: 'We'll undergo the same fate as the people on Easter Island.

'Climate change is just at the very beginning. But we're seeing remarkable changes in the weather already.'

'The Aborigines showed that without science and the production of carbon dioxide and global warming, they could survive for 40,000 or 50,000 years.

‘But the world can't. The human species is likely to go the same way as many of the species that we've seen disappear.'

the limits of the coded world

NYTimes | To my mind the philosopher who gave the most complete answer to this question was Immanuel Kant. In Kant’s view, the main mistake philosophers before him had made when considering how humans could have accurate knowledge of the world was to forget the necessary difference between our knowledge and the actual subject of that knowledge. At first glance, this may not seem like a very easy thing to forget; for example, what our eyes tell us about a rainbow and what that rainbow actually is are quite different things. Kant argued that our failure to grasp this difference was further reaching and had greater consequences than anyone could have thought.

The belief that our empirical exploration of the world and of the human brain could ever eradicate human freedom is an error.

Taking again the example of the rainbow, Kant would argue that while most people would grant the difference between the range of colors our eyes perceive and the refraction of light that causes this optical phenomenon, they would still maintain that more careful observation could indeed bring one to know the rainbow as it is in itself, apart from its sensible manifestation. This commonplace understanding, he argued, was at the root of our tendency to fall profoundly into error, not only about the nature of the world, but about what we were justified in believing about ourselves, God, and our duty to others.

The problem was that while our senses can only ever bring us verifiable knowledge about how the world appears in time and space, our reason always strives to know more than appearances can show it. This tendency of reason to know ever more is, and always has been, a good thing. It is why humankind is always curious, always progressing to greater and greater knowledge and accomplishments. But if not tempered by a respect for its limits and an understanding of its innate tendencies to overreach, reason can lead us into error and fanaticism.

Let’s return to the example of the experiment predicting the monkeys’ decisions. What the experiment tells us is nothing other than that the monkeys’ decision-making process moves through the brain, and that our technology allows us to get a reading of that activity faster than the monkeys’ brain can put it into action. From that relatively simple outcome, we can now see what an unjustified series of rather major conundrums we had drawn. And the reason we drew them was because we unquestioningly translated something unknowable — the stretch of time including the future of the monkeys’ as yet undecided and unperformed actions — into a neat scene that just needed to be decoded in order to be experienced. We treated the future as if it had already happened and hence as a series of events that could be read and narrated.

From a Kantian perspective, with this simple act we allowed reason to override its boundaries, and as a result we fell into error. The error we fell into was, specifically, to believe that our empirical exploration of the world and of the human brain could ever eradicate human freedom.

This, then, is why, as “irresistible” as their logic might appear, none of the versions of Galen Strawson’s “Basic Argument” against ultimate moral responsibility, which he outlined in The Stone last week, have any relevance for human freedom or responsibility. According to this logic, responsibility must be illusory, because in order to be responsible at any given time an agent must also be responsible for how he or she became how he or she is at that time, which initiates an infinite regress, because at no point can an individual be responsible for all the genetic and cultural forces that have produced him or her as he or she is. But this logic is nothing other than a philosophical version of the code of codes; it assumes that the sum history of forces determining an individual exists as a kind of potentially legible catalog.

The point to stress, however, is that this catalog is not even legible in theory, for to be known it assumes a kind of knower unconstrained by time and space, a knower who could be present from every possible perspective at every possible deciding moment in an agent’s history and prehistory. Such a knower, of course, could only be something along the lines of what the monotheistic traditions call God. But as Kant made clear, it makes no sense to think in terms of ethics, or responsibility, or freedom when talking about God; to make ethical choices, to be responsible for them, to be free to choose poorly, all of these require precisely the kind of being who is constrained by the minimal opacity that defines our kind of knowing.

As much as we owe the nature of our current existence to the evolutionary forces Darwin first discovered, or to the cultures we grow up in, or to the chemical states affecting our brain processes at any given moment, none of this impacts on our freedom. I am free because neither science nor religion can ever tell me, with certainty, what my future will be and what I should do about it. The dictum from Sartre that Strawson quoted thus gets it exactly right: I am condemned to freedom. I am not free because I can make choices, but because I must make them, all the time, even when I think I have no choice to make.

your move: the limits of free will

NYTimes | You may have heard of determinism, the theory that absolutely everything that happens is causally determined to happen exactly as it does by what has already gone before — right back to the beginning of the universe. You may also believe that determinism is true. (You may also know, contrary to popular opinion, that current science gives us no more reason to think that determinism is false than that determinism is true.) In that case, standing on the steps of the store, it may cross your mind that in five minutes’ time you’ll be able to look back on the situation you’re in now and say truly, of what you will by then have done, “Well, it was determined that I should do that.” But even if you do fervently believe this, it doesn’t seem to be able to touch your sense that you’re absolutely morally responsible for what you do next.

The case of the Oxfam box, which I have used before to illustrate this problem (you must choose between spending your last note on a cake and putting it in the charity collector’s tin), is relatively dramatic, but choices of this type are common. They occur frequently in our everyday lives, and they seem to prove beyond a doubt that we are free and ultimately morally responsible for what we do. There is, however, an argument, which I call the Basic Argument, which appears to show that we can never be ultimately morally responsible for our actions. According to the Basic Argument, it makes no difference whether determinism is true or false. We can’t be ultimately morally responsible either way.

The argument goes like this.

(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

The key move is (3). Why can’t you be ultimately responsible for the way you are in any respect at all? In answer, consider an expanded version of the argument.

(a) It’s undeniable that the way you are initially is a result of your genetic inheritance and early experience.

(b) It’s undeniable that these are things for which you can’t be held to be in any way responsible (morally or otherwise).

(c) But you can’t at any later stage of life hope to acquire true or ultimate moral responsibility for the way you are by trying to change the way you already are as a result of genetic inheritance and previous experience.

(d) Why not? Because both the particular ways in which you try to change yourself, and the amount of success you have when trying to change yourself, will be determined by how you already are as a result of your genetic inheritance and previous experience.

(e) And any further changes that you may become able to bring about after you have brought about certain initial changes will in turn be determined, via the initial changes, by your genetic inheritance and previous experience.

There may be all sorts of other factors affecting and changing you. Determinism may be false: some changes in the way you are may come about as a result of the influence of indeterministic or random factors. But you obviously can’t be responsible for the effects of any random factors, so they can’t help you to become ultimately morally responsible for how you are.
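Stated compactly, the regress in the Basic Argument looks like this. This is a schematic restatement in notation of my own, not Strawson's: R(x) reads "the agent is ultimately responsible for x," a is the action, and s_0 through s_N are the mental states behind it traced backward, bottoming out in g, the genetic inheritance and early experience of premises (a) and (b).

```latex
% Schematic of the Basic Argument's regress (illustrative notation, not Strawson's).
\begin{align*}
  R(a)        &\rightarrow R(s_0)     && \text{by (1)--(2)}\\
  R(s_n)      &\rightarrow R(s_{n+1}) && \text{for each } n < N,\ \text{by (c)--(e)}\\
  \neg R(s_N) &                       && \text{where } s_N = g,\ \text{by (a)--(b)}\\
  \therefore\ \neg R(a) &             && \text{modus tollens back down the chain}
\end{align*}
```

Responsibility for the action would have to be inherited, step by step, from states no one is responsible for, so it never gets a foothold. Note that Egginton's objection in the piece above is not to this logic but to the premise that the chain of states exists as a legible catalog at all.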

Sunday, August 01, 2010

warsocialism's epic failure

HuffPo | Throughout much of the twentieth century, great powers had vied with one another to create new, or more effective, instruments of coercion. Military innovation assumed many forms. Most obviously, there were the weapons: dreadnoughts and aircraft carriers, rockets and missiles, poison gas, and atomic bombs -- the list is a long one. In their effort to gain an edge, however, nations devoted equal attention to other factors: doctrine and organization, training systems and mobilization schemes, intelligence collection and war plans.

All of this furious activity, whether undertaken by France or Great Britain, Russia or Germany, Japan or the United States, derived from a common belief in the plausibility of victory. Expressed in simplest terms, the Western military tradition could be reduced to this proposition: war remains a viable instrument of statecraft, the accoutrements of modernity serving, if anything, to enhance its utility.

Grand Illusions

That was theory. Reality, above all the two world wars of the last century, told a decidedly different story. Armed conflict in the industrial age reached new heights of lethality and destructiveness. Once begun, wars devoured everything, inflicting staggering material, psychological, and moral damage. Pain vastly exceeded gain. In that regard, the war of 1914-1918 became emblematic: even the winners ended up losers. When fighting eventually stopped, the victors were left not to celebrate but to mourn. As a consequence, well before Fukuyama penned his essay, faith in war’s problem-solving capacity had begun to erode. As early as 1945, among several great powers -- thanks to war, now great in name only -- that faith disappeared altogether.

Among nations classified as liberal democracies, only two resisted this trend. One was the United States, the sole major belligerent to emerge from the Second World War stronger, richer, and more confident. The second was Israel, created as a direct consequence of the horrors unleashed by that cataclysm. By the 1950s, both countries subscribed to this common conviction: national security (and, arguably, national survival) demanded unambiguous military superiority. In the lexicon of American and Israeli politics, “peace” was a codeword. The essential prerequisite for peace was for any and all adversaries, real or potential, to accept a condition of permanent inferiority. In this regard, the two nations -- not yet intimate allies -- stood apart from the rest of the Western world.

So even as they professed their devotion to peace, civilian and military elites in the United States and Israel prepared obsessively for war. They saw no contradiction between rhetoric and reality.

Yet belief in the efficacy of military power almost inevitably breeds the temptation to put that power to work. “Peace through strength” easily enough becomes “peace through war.” Israel succumbed to this temptation in 1967. For Israelis, the Six Day War proved a turning point. Plucky David defeated, and then became, Goliath. Even as the United States was flailing about in Vietnam, Israel had evidently succeeded in definitively mastering war.

A quarter-century later, U.S. forces seemingly caught up. In 1991, Operation Desert Storm, George H.W. Bush’s war against Iraqi dictator Saddam Hussein, showed that American troops, like Israeli soldiers, knew how to win quickly, cheaply, and humanely. Generals like H. Norman Schwarzkopf persuaded themselves that their brief desert campaign against Iraq had replicated -- even eclipsed -- the battlefield exploits of such famous Israeli warriors as Moshe Dayan and Yitzhak Rabin. Vietnam faded into irrelevance.

For both Israel and the United States, however, appearances proved deceptive. Apart from fostering grand illusions, the splendid wars of 1967 and 1991 decided little. In both cases, victory turned out to be more apparent than real. Worse, triumphalism fostered massive future miscalculation.

Saturday, July 31, 2010

crusader or ego-tripper?

Telegraph | His supporters regard the convicted computer hacker as a crusader who dedicates his life to exposing the unvarnished truth, no matter what the consequences.

Detractors regard him as a man driven more by his own ego and a desire to cock a snook at the Establishment he has spent much of his life fighting.

The Australian has admitted in the past that he came close to a breakdown after being tried on charges of illegal hacking in the mid-1990s, which also led to a bitter break-up with the mother of his baby son.

Mr Assange was spared a jail sentence in 1995 after admitting 25 charges of hacking into computer networks including the Canadian communications firm Nortel.

Ken Day, who headed the Operation Weather investigation into Mr Assange’s circle of hackers, told The Daily Telegraph: "Ego is a big part of who he is.

“The challenge to win. I think that's important to him as a person. He wouldn't give up on a system he was trying to break into; he was very persistent.”

white house implores assange to desist - why should he?


Video - Julian Assange - I Enjoy Exposing People Who Abuse Their Power

Guardian | The White House has implored WikiLeaks to stop posting secret Afghanistan war documents.

President Obama's spokesman, Robert Gibbs, said the war logs jeopardised national security and put the lives of Afghan informants and US soldiers at risk.

"I think it's important that no more damage be done to our national security," Gibbs told NBC's Today show today.

The WikiLeaks editor-in-chief Julian Assange said yesterday that the website had contacted the White House — with the New York Times acting as intermediary — to offer US government officials the chance to go through the documents to make sure no innocent people were identified. But the White House did not respond to the approach, he said.

Assange dismissed allegations that innocent people or informants had been put in danger by the publication of the documents.

US defence secretary Robert Gates and Admiral Mike Mullen, chairman of the joint chiefs of staff, called the release of the documents deeply damaging and potentially life-threatening for Afghan informants.

"Mr Assange can say whatever he likes about the greater good he thinks he and his source are doing, but the truth is they might already have on their hands the blood of some young soldier or that of an Afghan family," Mullen said.

But Assange also has supporters in the US. Peter Scheer, executive director of the First Amendment Coalition, argues that Wikileaks has become a journalistic necessity.

It is the result, he believes, of the US supreme court's failure to support journalists in their attempts to protect their confidential sources. He writes: "Wikileaks, in short, is a response to journalists' loss of control over their information."

Though Gates has told reporters that the documents offer little insight into current policies and events, Scheer says the stories extracted from the raw data by The Guardian, the New York Times and Der Spiegel "shed new light on the role of Pakistani intelligence, the extent of civilian casualties, Taliban military capabilities and other matters."

deep state reaction to narrative game change


Video - Gates asks FBI to catch leaker.

NYTimes | Defense Secretary Robert M. Gates on Thursday denounced the disclosure this week of 75,000 classified documents about the Afghanistan war by the Web site WikiLeaks, asserting that the security breach had endangered lives and damaged the ability of others to trust the United States government to protect their secrets.

Speaking to reporters at the Pentagon, Mr. Gates portrayed the documents as “a mountain of raw data and individual impressions, most several years old” that offered little insight into current policies and events. Still, he said, the disclosures — which include some identifying information about Afghans who have helped the United States — have “potentially dramatic and grievously harmful consequences.”

“The battlefield consequences of the release of these documents are potentially severe and dangerous for our troops, our allies and Afghan partners, and may well damage our relationships and reputation in that key part of the world,” he said. “Intelligence sources and methods, as well as military tactics, techniques and procedures, will become known to our adversaries.”

The Times has taken care not to publish information that would harm national security interests or disclose anything that was likely to put lives at risk or jeopardize military or antiterrorist operations, withholding any names of operatives in the field and informants cited in the reports. It also has not linked to the archives of raw material.

Mr. Gates said the documents’ disclosure had prompted a rethinking of a trend nearly two decades old, dating from the Persian Gulf war of 1991, of trying to make intelligence information more accessible to troops in combat situations so they can respond rapidly to developments.

“We endeavor to push access to sensitive battlefield information down to where it is most useful — on the front lines — where as a practical matter there are fewer restrictions and controls than at rear headquarters,” he said. “In the wake of this incident, it will be a real challenge to strike the right balance between security and providing our frontline troops the information they need.”

Friday, July 30, 2010

genome surprise!!!

Wired | The ebola virus is one of the nastiest pathogens known to man. It corrodes blood vessels and stops clotting, leaving most of its human victims bleeding to death through their pores. And guinea pigs — along with opossums, wallabies and insect-eating bats — have it in their genes.

A genomic hunt for virus genes traced sequences to Ebola and the closely related Marburg virus in no fewer than six vertebrate species. Echoes of the less-gruesome borna virus family appeared in 13 species, including humans. The genes appear to have been mixed in about 40 million years ago, and have stuck around ever since.

“Some of these sequences have been conserved,” and that’s almost certainly not a coincidence, said cell biologist Ann Marie Skalka of the Fox Chase Cancer Center. “We speculate that some of these must have provided an evolutionary advantage.”

Skalka specializes in RNA viruses, which, unlike DNA viruses, carry their genetic material as single strands of RNA rather than DNA.

Retroviruses insert DNA copies of themselves into the genomes of infected cells. They hijack the cell’s function and, should the cell survive, leave pieces of themselves behind. Retroviral leftovers have accumulated for hundreds of millions of years in animal genomes; they account for about 8 percent of the human genome.

RNA viruses, however, were long thought to leave no leftovers. They float outside a cell’s chromosomes, hijacking its machinery from afar and ostensibly leaving genomes intact. But that assumption proved wrong. Fist tap Nana.

identical cells? not so much..,

The Scientist | Genetically identical cells may differ far more than previously believed. In a study published this week in Science, researchers report striking variation in levels of gene expression among individual, genetically identical E. coli cells, seemingly the result of simple chance.

"The paper is quite rich," said Sanjay Tyagi, a molecular biologist at New Jersey Medical School who was not involved in the research. "People think that if an organism has a particular genotype, it determines its phenotype -- that there's a one-to-one relationship," said Tyagi. "But as it turns out, [differences in gene expression] can arise just from chance."

In traditional gene expression studies, researchers grind up a population of cells, then identify overall amounts of gene products from the resulting mixture. Researchers at Harvard University instead studied cells one by one, still calculating averages but also capturing variation in the population with single molecule sensitivity -- and found cells expressing genes at wildly different levels. "It's single molecules meet systems biology," said Sunney Xie, senior author on the paper and a chemical biologist at Harvard University.
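The qualitative point (chance alone scattering expression levels across a clonal population) is easy to reproduce in a toy simulation. The sketch below is illustrative only and is not the Harvard group's method: it assumes the textbook "bursting" picture of expression, in which transcription events arrive at random and each burst yields a geometrically distributed number of protein copies.

```python
import numpy as np

# Toy model of expression noise in genetically identical cells.
# Illustrative only -- not the analysis from the Science paper above.
rng = np.random.default_rng(0)

def protein_count(burst_rate=10, mean_burst_size=5.0):
    """Protein copies in one cell: Poisson bursts x geometric burst sizes."""
    bursts = rng.poisson(burst_rate)                      # transcription events
    sizes = rng.geometric(1.0 / mean_burst_size, bursts)  # proteins per burst
    return int(sizes.sum())

# 1,000 "cells" with identical parameters -- the same genotype.
cells = [protein_count() for _ in range(1000)]
mean = np.mean(cells)
cv = np.std(cells) / mean  # coefficient of variation: spread from chance alone
print(f"mean {mean:.0f} proteins/cell, CV {cv:.2f}, range {min(cells)}-{max(cells)}")
```

Every simulated cell runs exactly the same parameters, yet the counts typically spread over a several-fold range; that is the one-to-one genotype-to-phenotype picture breaking down from sampling noise alone.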

oceans in manmade peril

YahooLiveScience | One hundred days ago Thursday, the oil rig Deepwater Horizon began spewing oil into the Gulf of Mexico. Profound as the injury to the Gulf ecosystem from the leak of millions of barrels of oil is, it is only one of many threats to the Earth's oceans that, many experts say, could change the makeup of the oceans as we know them and wipe out a large portion of marine life.

The waters of the Gulf were already heavily fished, and the Gulf has been home to an oxygen-depleted dead zone generated by agricultural runoff rich in nutrients.

The Gulf and the rest of the world's waters also face the uncertain and potentially devastating effects of climate change. Warming ocean temperatures reduce the water's oxygen content, and rising atmospheric carbon dioxide is altering the basic chemistry of the ocean, making it more acidic. There is no shortage of evidence that both of these effects have begun to wreak havoc on certain important creatures.

Human beings created these problems, largely in the two centuries since the Industrial Revolution, but for some researchers, they bring to mind the ancient past. The Earth has seen several mass extinctions, including five that annihilated more than half the planet's species. Experts now believe Earth is in the midst of a sixth event, the first one caused by humans.

"Today the synergistic effects of human impacts are laying the groundwork for a comparably great Anthropocene mass extinction in the oceans, with unknown ecological and evolutionary consequences," Jeremy Jackson of the Scripps Institution of Oceanography at the University of California, San Diego, wrote in a 2008 article published in the journal Proceedings of the National Academy of Sciences.

ocean life support dwindling

The Scientist | Phytoplankton, which are responsible for half of the world's primary production and are the basis of all marine ecosystems, have been declining for more than 100 years, perhaps the result of rising sea temperatures, according to a study published in this week's Nature -- a cause for concern about the health of the Earth's oceans.

"It is troubling," said marine scientist David Siegel of the University of California, Santa Barbara, who was not involved in the research. With data dating back to the late 1800s, "this paper finds a long-term trend that's huge," he said. "The phytoplankton community has undoubtedly been changing."

Phytoplankton productivity lies at the base of the marine food web, supporting all ocean life and contributing to global geochemical processes, including the carbon cycle. Through photosynthetic activities, phytoplankton reduce atmospheric carbon dioxide. Satellite data from the last few decades has suggested that phytoplankton might be on the decline.

Thursday, July 29, 2010

organizing for anti-capitalist transition


Video - David Harvey on the crisis today.

davidharvey | The historical geography of capitalist development is at a key inflexion point in which the geographical configurations of power are rapidly shifting at the very moment when the temporal dynamic is facing very serious constraints. Three percent compound growth (generally considered the minimum satisfactory growth rate for a healthy capitalist economy) is becoming less and less feasible to sustain without resort to all manner of fictions (such as those that have characterized asset markets and financial affairs over the last two decades). There are good reasons to believe that there is no alternative to a new global order of governance that will eventually have to manage the transition to a zero-growth economy. If that is to be done in an equitable way, then there is no alternative to socialism or communism. In the late 1990s, the World Social Forum became the center for articulating the theme “another world is possible.” It must now take up the task of defining how another socialism or communism is possible and how the transition to these alternatives is to be accomplished. The current crisis offers a window of opportunity to reflect on what might be involved.

The current crisis originated in the steps taken to resolve the crisis of the 1970s. These steps included:

(a) the successful assault upon organized labor and its political institutions while mobilizing global labor surpluses, instituting labor-saving technological changes and heightening competition. The result has been global wage repression (a declining share of wages in total GDP almost everywhere) and the creation of an even vaster disposable labor reserve living under marginal conditions.

(b) undermining previous structures of monopoly power and displacing the previous stage of (nation state) monopoly capitalism by opening up capitalism to far fiercer international competition. Intensifying global competition translated into lower non-financial corporate profits. Uneven geographical development and inter-territorial competition became key features in capitalist development, opening the way towards the beginnings of a hegemonic shift of power particularly but not exclusively towards East Asia.

(c) utilizing and empowering the most fluid and highly mobile form of capital – money capital – to reallocate capital resources globally (eventually through electronic markets) thus sparking deindustrialization in traditional core regions and new forms of (ultra-oppressive) industrialization and natural resource and agricultural raw material extractions in emergent markets. The corollary was to enhance the profitability of financial corporations and to find new ways to globalize and supposedly absorb risks through the creation of fictitious capital markets.

(d) At the other end of the social scale, this meant heightened reliance on “accumulation by dispossession” as a means to augment capitalist class power. The new rounds of primitive accumulation against indigenous and peasant populations were augmented by asset losses of the lower classes in the core economies (as witnessed by the sub-prime housing market in the US which foisted a huge asset loss particularly upon African American populations).

(e) The augmentation of otherwise sagging effective demand by pushing the debt economy (governmental, corporate and household) to its limits (particularly in the USA and the UK but also in many other countries from Latvia to Dubai).

(f) Compensating for anemic rates of return in production by the construction of whole series of asset market bubbles, all of which had a Ponzi character, culminating in the property bubble that burst in 2007-8. These asset bubbles drew upon finance capital and were facilitated by extensive financial innovations such as derivatives and collateralized debt obligations.

The political forces that coalesced and mobilized behind these transitions had a distinctive class character and clothed themselves in the vestments of a distinctive ideology called neoliberalism. The ideology rested upon the idea that free markets, free trade, personal initiative and entrepreneurialism were the best guarantors of individual liberty and freedom and that the “nanny state” should be dismantled for the benefit of all. But the practice entailed that the state must stand behind the integrity of financial institutions, thus introducing (beginning with the Mexican and developing-country debt crisis of 1982) “moral hazard” big time into the financial system. The state (local and national) also became increasingly committed to providing a “good business climate” to attract investments in a highly competitive environment. The interests of the people were secondary to the interests of capital, and in the event of a conflict between them, the interests of the people had to be sacrificed (as became standard practice in IMF structural adjustment programs from the early 1980s onwards). The system that has been created amounts to a veritable form of communism for the capitalist class.

These conditions varied considerably, of course, depending upon what part of the world one inhabited, the class relations prevailing there, the political and cultural traditions and how the balance of political-economic power was shifting.

So how can the left negotiate the dynamics of this crisis? At times of crisis, the irrationality of capitalism becomes plain for all to see. Surplus capital and surplus labor exist side by side with seemingly no way to put them back together in the midst of immense human suffering and unmet needs. In midsummer of 2009, one third of the capital equipment in the United States stood idle, while some 17 per cent of the workforce were either unemployed, enforced part-timers or “discouraged” workers. What could be more irrational than that!

Can capitalism survive the present trauma? Yes. But at what cost? This question masks another. Can the capitalist class reproduce its power in the face of the raft of economic, social, political, geopolitical and environmental difficulties? Again, the answer is a resounding “yes.” But the mass of the people will have to surrender the fruits of their labour to those in power, to surrender many of their rights and their hard-won asset values (in everything from housing to pension rights), and to suffer environmental degradations galore, to say nothing of serial reductions in their living standards, which means starvation for many of those already struggling to survive at rock bottom. Class inequalities will increase (as we already see happening). All of that may require more than a little political repression, police violence and militarized state control to stifle unrest.

Since much of this is unpredictable and since the spaces of the global economy are so variable, then uncertainties as to outcomes are heightened at times of crisis. All manner of localized possibilities arise for either nascent capitalists in some new space to seize opportunities to challenge older class and territorial hegemonies (as when Silicon Valley replaced Detroit from the mid-1970s onwards in the United States) or for radical movements to challenge the reproduction of an already destabilized class power. To say that the capitalist class and capitalism can survive is not to say that they are predestined to do so nor does it say that their future character is given. Crises are moments of paradox and possibilities.

So what will happen this time around? If we are to get back to three percent growth, then this means finding new and profitable global investment opportunities for $1.6 trillion in 2010, rising to closer to $3 trillion by 2030. This contrasts with the $0.15 trillion in new investment needed in 1950 and the $0.42 trillion needed in 1973 (the dollar figures are inflation-adjusted). Real problems of finding adequate outlets for surplus capital began to emerge after 1980, even with the opening up of China and the collapse of the Soviet Bloc. The difficulties were in part resolved by the creation of fictitious markets where speculation in asset values could take off unhindered. Where will all this investment go now?
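Harvey's dollar figures are plain compound-interest arithmetic, and they check out. A quick verification (the 3 percent rate is his; the assumption that the required investment outlet compounds at that same rate is implicit in his numbers):

```python
# Checking Harvey's compound-growth figures: future = present * (1 + r)**years.
# Assumption (mine): the surplus seeking profitable outlets compounds at the
# same 3% he cites as the minimum healthy growth rate.
def compound(present, rate, years):
    return present * (1 + rate) ** years

print(compound(1.6, 0.03, 20))        # 2010 -> 2030: ~2.9, his "closer to $3 trillion"
print((0.42 / 0.15) ** (1 / 23) - 1)  # 1950 -> 1973 implied rate: ~4.6% per year
```

At 3 percent the required outlet doubles roughly every 23 years, which is why the absolute size of the problem balloons even while the rate sounds modest.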

f**k you buddy redux


Video - Adam Curtis BBC Documentary The Trap - Fuck You Buddy.

Wikipedia | In this episode, Curtis examines the rise of game theory during the Cold War and the way in which its mathematical models of human behaviour filtered into economic thought. The programme traces the development of game theory with particular reference to the work of John Nash, who believed that all humans were inherently suspicious and selfish creatures that strategised constantly. Using this as his first premise, Nash constructed logically consistent and mathematically verifiable models, for which he won the Bank of Sweden Prize in Economic Sciences, commonly referred to as the Nobel Prize in Economics. He invented system games reflecting his beliefs about human behaviour, including one he called "Fuck You Buddy" (later published as "So Long Sucker"), in which the only way to win was to betray your playing partner, and it is from this game that the episode's title is taken. These games were internally coherent and worked correctly as long as the players obeyed the ground rules that they should behave selfishly and try to outwit their opponents, but when RAND's analysts tried the games on their own secretaries, they instead chose not to betray each other, but to cooperate every time. This did not, in the eyes of the analysts, discredit the models, but instead proved that the secretaries were unfit subjects.
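The gap between the model's prediction and the secretaries' behaviour shows up in the simplest game of this family. The sketch below encodes a generic prisoner's dilemma, not the actual rules of "Fuck You Buddy"/"So Long Sucker" (those turn on shifting coalitions); it demonstrates mechanically why the analysts expected betrayal: mutual defection is the only Nash equilibrium, even though mutual cooperation pays both players more.

```python
from itertools import product

# A generic prisoner's dilemma -- illustrative, not Nash's actual game.
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),  # mutual cooperation: what RAND's secretaries chose
    (C, D): (0, 5),  # the sucker's payoff vs. the temptation to betray
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection
}

def is_nash(profile):
    """True if no player gains by unilaterally changing their own move."""
    for player in (0, 1):
        for alternative in (C, D):
            deviation = list(profile)
            deviation[player] = alternative
            if payoffs[tuple(deviation)][player] > payoffs[profile][player]:
                return False
    return True

print([p for p in product((C, D), repeat=2) if is_nash(p)])
# -> [('defect', 'defect')] -- the only equilibrium, though (C, C) pays more
```

Within the model, defection is the only self-consistent strategy; the secretaries were "unfit subjects" only in the sense that they declined to play the game the theorists had assumed.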

What was not known at the time was that Nash was suffering from paranoid schizophrenia and, as a result, was deeply suspicious of everyone around him—including his colleagues—and was convinced that many were involved in conspiracies against him. It was this mistaken view of people as a whole that formed the basis for his theories. Footage of an older and wiser Nash was shown in which he acknowledges that his paranoid views of other people at the time were false.

Curtis examines how game theory was used to create the USA's nuclear strategy during the Cold War. Because no nuclear war occurred, it was believed that game theory had been correct in dictating the creation and maintenance of a massive American nuclear arsenal—because the Soviet Union had not attacked America with its nuclear weapons, the supposed deterrent must have worked. Game theory during the Cold War is a subject Curtis examined in more detail in the To The Brink of Eternity part of his first series, Pandora's Box, and he reuses much of the same archive material in doing so.

A separate strand in the documentary is the work of R.D. Laing, whose work in psychiatry led him to model familial interactions using game theory. His conclusion was that humans are inherently selfish and shrewd, spontaneously generating stratagems during everyday interactions. Laing's theories became more developed when he concluded that some forms of mental illness were merely artificial labels, used by the state to suppress individual suffering. This belief became a staple tenet of counterculture during the 1960s. Reference is made to the Rosenhan experiment, in which bogus patients, surreptitiously self-presenting at a number of American psychiatric institutions, were falsely diagnosed as having mental disorders, while institutions, informed that they were to receive bogus patients, "identified" numerous supposed imposters who were actually genuine patients. The results of the experiment were a disaster for American psychiatry, because they destroyed the idea that psychiatrists were a privileged elite able to genuinely diagnose, and therefore treat, mental illness.

All these theories tended to support the beliefs of what were then fringe economists such as Friedrich von Hayek, whose economic models left no room for altruism, but depended purely on self-interest, leading to the formation of public choice theory. In an interview, the economist James M. Buchanan decries the notion of the "public interest", asking what it is and suggesting that it consists purely of the self-interest of the governing bureaucrats. Buchanan also proposes that organisations should employ managers who are motivated only by money. He describes those who are motivated by other factors—such as job satisfaction or a sense of public duty—as "zealots".

As the 1960s became the 1970s, the theories of Laing and the models of Nash began to converge, producing a widespread popular belief that the state (a surrogate family) was purely and simply a mechanism of social control which calculatedly kept power out of the hands of the public. Curtis shows that it was this belief that allowed the theories of Hayek to look credible, and underpinned the free-market beliefs of Margaret Thatcher, who sincerely believed that by dismantling as much of the British state as possible—and placing former national institutions into the hands of public shareholders—a form of social equilibrium would be reached. This was a return to Nash's work, in which he proved mathematically that if everyone was pursuing their own interests, a stable, yet perpetually dynamic, society could result.

The episode ends with the suggestion that this mathematically modelled society is run on data—performance targets, quotas, statistics—and that it is these figures combined with the exaggerated belief in human selfishness that has created "a cage" for Western humans. The precise nature of the "cage" is to be discussed in the next episode.

Wednesday, July 28, 2010

unsustainable culture's "undesirable" fitness


Video - Ice Cube My Summer Vacation

NewScientist | From feckless fathers and teenaged mothers to so-called feral kids, the media seems to take a voyeuristic pleasure in documenting the lives of the "underclass". Whether they are inclined to condemn or sympathise, commentators regularly ask how society got to be this way. There is seldom agreement, but one explanation you are unlikely to hear is that this kind of "delinquent" behaviour is a sensible response to the circumstances of a life constrained by poverty. Yet that is exactly what some evolutionary biologists are now proposing.

There is no reason to view the poor as stupid or in any way different from anyone else, says Daniel Nettle of the University of Newcastle in the UK. All of us are simply human beings, making the best of the hand life has dealt us. If we understand this, it won't just change the way we view the lives of the poorest in society, it will also show how misguided many current efforts to tackle society's problems are - and it will suggest better solutions.

Evolutionary theory predicts that if you are a mammal growing up in a harsh, unpredictable environment where you are susceptible to disease and might die young, then you should follow a "fast" reproductive strategy - grow up quickly, and have offspring early and close together so you can ensure leaving some viable progeny before you become ill or die. For a range of animal species there is evidence that this does happen. Now research suggests that humans are no exception. Fist tap Chauncey deVega.
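The evolutionary logic here is an expected-value calculation over an uncertain lifespan. The toy model below uses made-up parameters (they are not Nettle's data) to show the crossover: when the annual death rate is low, a "slow" schedule of later, better-resourced reproduction wins; as extrinsic mortality climbs, the "fast" schedule of early reproduction dominates.

```python
# Toy life-history model (illustrative parameters, not Nettle's data).
# Expected surviving offspring = sum over breeding ages of
#   P(parent alive at that age) * offspring per year * P(offspring viable).

def expected_fitness(first_age, last_age, per_year, viability, death_rate):
    return sum(
        (1 - death_rate) ** age * per_year * viability
        for age in range(first_age, last_age + 1)
    )

for death_rate in (0.01, 0.05, 0.10):
    fast = expected_fitness(16, 30, per_year=0.5, viability=0.5,
                            death_rate=death_rate)  # early, many, low investment
    slow = expected_fitness(28, 42, per_year=0.4, viability=0.9,
                            death_rate=death_rate)  # late, fewer, high investment
    winner = "fast" if fast > slow else "slow"
    print(f"annual death rate {death_rate:.0%}: "
          f"fast={fast:.2f}, slow={slow:.2f} -> {winner} wins")
```

On these invented numbers the slow strategy wins at a 1 percent annual death rate and the fast one takes over by 5 percent: the behaviour tracks the environment, not any deficiency of the agent, which is exactly the point being proposed.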

Video - Johnny Cash General Lee Dukes of Hazard moonshine mobile.

the arctic and shell

royaldutchshellplc | Back in the early-to-mid 1980s, Shell Oil began to take a serious look at the hydrocarbon production potential of the Chukchi Sea. They formed an operating division and staffed it with management, albeit with no staff except secretarial support for the managers. (These managers were referred to as ‘managers without portfolio’.)

Shell’s exploration and production research division began to investigate the regions of the Chukchi Sea where potential lease sales were likely to occur. Of interest were the basic oceanographic parameters: water depths, bottom topography, sea-floor surface geology, ice-pack characteristics, etc. The basic surveys revealed some interesting results. The sea floor was covered fairly deeply with soft sediment (marine mud), but the topography was highly unusual. Acoustic mapping revealed an ocean bottom scarred by all sorts of crisscrossing trenches, from a meter or so in depth to almost 20 meters in depth.

At first these surface trenches were thought to be remnant features from the last ice age. But they appeared to be relatively young. Arguments about their age and origin could not be settled with then-available data. So Shell obtained the cooperation of the US Coast Guard and ‘borrowed’ one of their icebreakers for a summer to do some detailed ocean-bottom surveys and surface mapping. In addition, a pattern of acoustic buoys was left on the ocean bottom.

The next year the icebreaker, with its complement of Shell ‘boffins’, remapped the ocean-bottom surface and set about looking for the acoustic buoys. They found some; many were never found. Those that were found had been displaced. And the topography of the ocean bottom had changed completely. The old bottom topography was gone, replaced by a new one sculpted by the dragging of the previous winter’s pressure ridges across the ocean bottom by surface winds and ocean currents. Some of the new trenches were almost 20 meters deep.

This news came as a very rude surprise to Shell’s leaders. Shell’s head office management turned to its talent pool at its research labs for an answer. Surely there must be an answer. Shell management wanted an engineering solution to the problem posed to development of oil and gas reserves by the Arctic ice.

After some consideration it was recognized that the ice sheet itself posed problems, but those problems could probably be handled with creative design features for platforms or man-made islands. The pressure ridges, however, were a whole different problem. Their size and extent made them a force of nature that could not be defeated. In shallow waters it might be possible to build rock-and-gravel production ‘islands’ that could be repaired after each winter’s battering by the ice. In deeper waters, however, constructing such islands was not feasible. Man-made platforms of some sort would be required.

Short of the use of small nuclear weapons to ‘vaporize’ the problem posed by pressure ridges there was no ‘rational or practical’ engineering solution.

retailers pay more to get cargo (no guarantees)

NYTimes | The grills shaped like kegs and toolboxes, ordered for a Father’s Day promotion at Cost Plus World Market, arrived too late for the holiday. At the Container Store, platinum-color hangers, advertised in a summer sale catalog, were delivered days after the sale began. At True Value Hardware, the latecomers were fans and portable chairs.

Fighting for freight, retailers are outbidding each other to score scarce cargo space on ships, paying two to three times last year’s freight rates — in some cases, the highest rates in five years. And still, many are getting merchandise weeks late.

The problems stem from 2009, when stores slashed inventory. With little demand for shipping, ocean carriers took ships out of service: more than 11 percent of the global shipping fleet was idle in spring 2009, according to AXS-Alphaliner, an industry consultant.

Carriers also moved to “slow steaming,” traveling at slower and more fuel-efficient speeds, while the companies producing containers, the typically 20- or 40-foot boxes in which most consumer companies ship goods, essentially stopped making them.
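The fuel economics behind slow steaming are worth a line. A standard naval-architecture rule of thumb, not a figure from the article, is that a ship's fuel burn per hour scales roughly with the cube of its speed; since voyage time grows only linearly as speed falls, fuel per voyage scales with speed squared:

```python
# Slow-steaming arithmetic under the cube-law rule of thumb (an approximation;
# the article quotes no figures). Fuel/hour ~ speed**3 and hours ~ 1/speed,
# so fuel per voyage ~ speed**2. The speeds below are typical, assumed values.
full_speed, slow_speed = 24.0, 19.0   # knots

fuel_ratio = (slow_speed / full_speed) ** 2  # fuel per voyage vs. full speed
time_ratio = full_speed / slow_speed         # voyage duration vs. full speed
print(f"fuel per voyage: {fuel_ratio:.0%} of full-speed burn")  # ~63%
print(f"voyage time: {time_ratio:.2f}x longer")                 # ~1.26x
```

The catch is capacity: slower rotations tie up more ships and more containers to deliver the same weekly volume, which is part of why space became so scarce the moment demand returned.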

“All my customers, they’re having a terrible time,” said Steven L. Horton, principal at Horton Global Strategies, which negotiates freight contracts for companies. “With the increased cost and them not knowing if they’re even going to get the space or equipment, it’s a weekly battle.”

served with no regard....,


Video - Bell California outrageous municipal gangsterism...,

Tuesday, July 27, 2010

crises of capitalism


homesteads for future tax income?

NYTimes | Give away land to make money?

It hardly sounds like a prudent scheme. But in a bit of déjà vu, that is exactly what this small Nebraska city aims to do.

Beatrice was a starting point for the Homestead Act of 1862, the federal law that handed land to pioneering farmers. Back then, the goal was to settle the West. The goal of Beatrice’s “Homestead Act of 2010” is, in part, to replenish city coffers.

The calculus is simple, if counterintuitive: hand out city land now to ensure property tax revenues in the future.

“There are only so many ball fields a place can build,” Tobias J. Tempelmeyer, the city attorney, said the other day as he stared out at grassy lots, planted with lonely mailboxes, that the city is working to get rid of. “It really hurts having all this stuff off the tax rolls.”
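The "simple calculus" of the giveaway reduces to a break-even computation. A rough sketch with hypothetical numbers (the article publishes none): the city forgoes the lot's sale value, but a built-on lot returns to the tax rolls and stops generating mowing bills.

```python
# Break-even for a municipal land giveaway (all numbers hypothetical; the
# article does not publish Beatrice's figures).
lot_value = 10_000      # sale value the city forgoes by giving the lot away ($)
house_value = 120_000   # assessed value of the home a homesteader builds ($)
tax_rate = 0.02         # assumed effective property-tax rate
mowing_saved = 300      # annual upkeep the city no longer pays ($/year)

annual_gain = (lot_value + house_value) * tax_rate + mowing_saved
print(f"${annual_gain:,.0f} per year; forgone value recouped in "
      f"{lot_value / annual_gain:.1f} years")  # ~3.4 years on these assumptions
```

On assumptions in that range the giveaway pays for itself within a few years, and everything after that is the replenished-coffers stream the city attorney is counting on.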

Around the nation, cities and towns facing grim budget circumstances are grasping at unlikely — some would say desperate — means to bolster their shrunken tax bases. Like Beatrice, places like Dayton, Ohio, and Grafton, Ill., are giving away land for nominal fees or for nothing in the hope that it will boost the tax rolls and cut the lawn-mowing bills.

Analysts say that this year and next, city budgets will reach their most dismal points of the recession, largely because of lag time inherent in the way taxes are collected and distributed.

Despite signs of a recovery, if a slow one, in other elements of the economy, it may be years away for many municipalities. Between now and 2012, America’s cities are likely to experience shortfalls totaling $55 billion to $85 billion, according to a survey by the National League of Cities, because of slumping revenues from property taxes and sales taxes and reduced support from state governments.
