Monday, November 10, 2014

which cheek did jesus turn?


tandfonline |  In portraiture, subjects are mostly depicted with a greater portion of the left side of their face (left hemiface) facing the viewer. This bias may be induced by the right hemisphere's dominance for emotional expression and agency. Since negative emotions are particularly portrayed by the left hemiface, and since asymmetrical hemispheric activation may induce alterations of spatial attention and action-intention, we posited that paintings of the painful and cruel crucifixion of Jesus would be more likely to show his left hemiface than observed in portraits of other people. By analyzing depictions of Jesus's crucifixion from book and art gallery sources, we determined that a significantly greater percentage of these crucifixion pictures showed the left hemiface of Jesus facing the viewer than found in other portraits. In addition to the facial expression and hemispatial attention-intention hypotheses, there are other biblical explanations that may account for this strong bias, and these alternatives will have to be explored in future research.

In portraits, most subjects are depicted with their head rotated rightward, with more of the left than right side of the subject's face being shown. For example, in the largest study of facial portraiture, McManus and Humphrey (1973) studied 1474 portraits and found a 60% bias to portray a greater portion of the subjects’ left than right hemiface. Nicholls, Clode, Wood, and Wood (1999) found the same left hemiface bias even when accounting for the handedness of the painter.
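As a quick illustration of how strong that 60% figure is against a chance (50/50) baseline, here is a minimal Python sketch; it is our illustration, not from either paper, and the count of 884 is simply 60% of 1474, rounded:

from scipy.stats import binomtest

n_portraits = 1474                      # portraits in McManus & Humphrey (1973)
n_left = round(0.60 * n_portraits)      # ~884 showing more of the left hemiface

# Two-sided test of the observed split against a 50/50 null hypothesis.
result = binomtest(n_left, n_portraits, p=0.5, alternative="two-sided")
print(f"left-hemiface portraits: {n_left}/{n_portraits}")
print(f"p-value under a 50/50 null: {result.pvalue:.1e}")   # vanishingly small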
Multiple theories have been proposed in an attempt to explain the genesis of this left hemiface bias in portraits. One hypothesis is that the right hemisphere is dominant for mediating facial emotional expressions. In an initial study, Buck and Duffy (1980) reported that patients with right hemisphere damage were less capable of facially expressing emotions than those with left hemisphere damage when viewing slides of familiar people, unpleasant scenes, and unusual pictures. These right-left hemispheric differences in facial expressiveness have been replicated in studies involving the spontaneous and voluntary expression of emotions in stroke patients with focal lesions (Borod, Kent, Koff, Martin, & Alpert, 1988; Borod, Koff, Lorch, & Nicholas, 1985; Borod, Koff, Perlman Lorch, & Nicholas, 1986; Richardson, Bowers, Bauer, Heilman, & Leonard, 2000).
Hemispheric asymmetries are even reported in more “naturalistic” settings outside the laboratory. For example, Blonder, Burns, Bowers, Moore, and Heilman (1993) videotaped interviews with patients and spouses in their homes and found that patients with right hemisphere damage were rated as less facially expressive than left hemisphere-damaged patients and normal control subjects. These lesion studies suggest that the right hemisphere has a dominant role in mediating emotional facial expressions. Whereas corticobulbar fibers that innervate the forehead are bilateral, the contralateral hemisphere primarily controls the lower face. Thus, these lesion studies suggest that the left hemiface below the forehead, which is innervated by the right hemisphere, may be more emotionally expressive.
This right hemisphere-left hemiface dominance postulate has been further supported by studies of normal subjects portraying emotional facial expressions. For example, Borod et al. (1988) asked subjects to portray emotions, either by verbal command or visual imitation. The judges who rated these facial expressions ranked the left face as expressing stronger emotions. Sackeim and Gur (1978) showed normal subjects photographs of normal people facially expressing their emotions and asked participants to rate the intensity of the emotion being expressed. However, before showing these pictures of people making emotional faces, Sackeim and Gur altered the photographs. They either paired the left hemiface with a mirror image of this photograph's left hemiface to form a full face made up of two left hemifaces or formed full faces from right hemifaces. Normal participants found that the composite photographs of the left hemiface were more emotionally expressive than those made from the right hemiface. Triggs, Ghacibeh, Springer, and Bowers (2005) administered transcranial magnetic stimulation (TMS) to the motor cortex of 50 subjects during contraction of bilateral orbicularis oris muscles and analyzed motor evoked potentials (MEPs). They found that the MEPs elicited in the left lower face were larger than those in the right lower face, and thus the left face might appear to be more emotionally expressive because it is more richly innervated.
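To make the Sackeim and Gur composite procedure concrete, here is a minimal sketch in Python with Pillow; the filename and the assumption of a centered, even-width frontal photograph are ours, and this is an illustration rather than the authors' actual method:

from PIL import Image, ImageOps

face = Image.open("face.jpg")            # hypothetical centered frontal photo
w, h = face.size                         # assume w is even for a clean split

# In a non-mirrored photograph, the subject's LEFT hemiface occupies the
# viewer's RIGHT half of the image.
left_hemiface = face.crop((w // 2, 0, w, h))
mirrored = ImageOps.mirror(left_hemiface)     # flipped copy for the other half

composite = Image.new(face.mode, (w, h))
composite.paste(mirrored, (0, 0))             # mirrored hemiface on viewer's left
composite.paste(left_hemiface, (w // 2, 0))   # original hemiface on viewer's right
composite.save("left_left_composite.jpg")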
Another reason portraits often have the subjects rotated to the right may be related to the organization of the viewer's brain. Both lesion studies (e.g., Adolphs, Damasio, Tranel, & Damasio, 1996; Bowers, Bauer, Coslett, & Heilman, 1985; DeKosky, Heilman, Bowers, & Valenstein, 1980) and physiological and functional imaging studies (e.g., Davidson & Fox, 1982; Puce, Allison, Asgari, Gore, & McCarthy, 1996; Sergent, Ohta, & Macdonald, 1992) have revealed that the right hemisphere is dominant for the recognition of emotional facial expressions and the recognition of previously viewed faces (Hilliard, 1973; Jones, 1979). In addition, studies of facial recognition and the recognition of facial emotional expressions have demonstrated that facial pictures shown in the left visual field and left hemispace are better recognized than those viewed on the right (Conesa, Brunold-Conesa, & Miron, 1995). Since the right hemisphere is dominant for facial recognition and the perception of facial emotions when viewing faces, the normal viewer of portraits may attend more to the left than right visual hemispace and hemifield. When the head of a portrait is turned to the right and the observer focuses on the middle of the face (midsagittal plane), more of the subject's face would fall in the viewer's left visual hemispace and thus be more likely to project to the right hemisphere.
Agency is another concept that may influence the direction of facial deviation in portraiture. Chatterjee, Maher, Gonzalez Rothi, and Heilman (1995) demonstrated that when right-handed individuals view a scene with more than one figure, they are more likely to see the left figure as being the active agent and the right figure as being the recipient of action or the patient. From this perspective the artist is the agent, and perhaps he or she is more likely to paint the left hemiface of the subject, which from the artist's perspective is more to the right, the position of the patient. Support for this agency hypothesis comes from studies in which individuals rated traits of left- versus right-profiled patients, and found that those with the right cheek exposed were considered more “active” (Chatterjee, 2002).
Taking this background information into account and applying it to depictions of the crucifixion of Jesus Christ highlights the various influences on profile painting in portraiture. Specifically, we confirm the predilection to display the left hemiface in portraiture and predict the same in portraits of Jesus's crucifixion.
The strongest artistic portrayals of a patient being subjected to cruel and painful agents are images of the crucifixion of Jesus. The earliest depiction of Christ on the cross dates back to around 420 AD. As Christianity existed for several centuries before that, this seems to be a late onset for this type of art. Because of the strong focus on Christ's resurrection and the disgrace of his agony and death, art historians postulate that there was a hesitation for early followers to show Christ on the cross. The legalization of Christianity also may have lifted the stigma. Based on the artwork still in existence from that period, Jesus was often pictured alive during the crucifixion scene. Several centuries later, from the end of the seventh century to the beginning of the eighth century, Christ is more often shown dead on the cross (Harries, 2005).

Sunday, November 09, 2014

vote all you want, the secret government won't change....,


bostonglobe |  The voters who put Barack Obama in office expected some big changes. From the NSA’s warrantless wiretapping to Guantanamo Bay to the Patriot Act, candidate Obama was a defender of civil liberties and privacy, promising a dramatically different approach from his predecessor.

But six years into his administration, the Obama version of national security looks almost indistinguishable from the one he inherited. Guantanamo Bay remains open. The NSA has, if anything, become more aggressive in monitoring Americans. Drone strikes have escalated. Most recently it was reported that the same president who won a Nobel Prize in part for promoting nuclear disarmament is spending up to $1 trillion modernizing and revitalizing America’s nuclear weapons.

Why did the face in the Oval Office change but the policies remain the same? Critics tend to focus on Obama himself, a leader who perhaps has shifted with politics to take a harder line. But Tufts University political scientist Michael J. Glennon has a more pessimistic answer: Obama couldn’t have changed policies much even if he tried.

Though it’s a bedrock American principle that citizens can steer their own government by electing new officials, Glennon suggests that in practice, much of our government no longer works that way. In a new book, “National Security and Double Government,” he catalogs the ways that the defense and national security apparatus is effectively self-governing, with virtually no accountability, transparency, or checks and balances of any kind. He uses the term “double government”: There’s the one we elect, and then there’s the one behind it, steering huge swaths of policy almost unchecked. Elected officials end up serving as mere cover for the real decisions made by the bureaucracy.

Glennon cites the example of Obama and his team being shocked and angry to discover upon taking office that the military gave them only two options for the war in Afghanistan: The United States could add more troops, or the United States could add a lot more troops. Hemmed in, Obama added 30,000 more troops.

Glennon’s critique sounds like an outsider’s take, even a radical one. In fact, he is the quintessential insider: He was legal counsel to the Senate Foreign Relations Committee and a consultant to various congressional committees, as well as to the State Department. “National Security and Double Government” comes favorably blurbed by former members of the Defense Department, State Department, White House, and even the CIA. And he’s not a conspiracy theorist: Rather, he sees the problem as one of “smart, hard-working, public-spirited people acting in good faith who are responding to systemic incentives”—without any meaningful oversight to rein them in.

How exactly has double government taken hold? And what can be done about it? Glennon spoke with Ideas from his office at Tufts’ Fletcher School of Law and Diplomacy. This interview has been condensed and edited.  Fist tap Arnach.

another fake bin laden story?


paulcraigroberts |  Osama bin Laden died in December 2001 of renal failure and other health problems, having denied in his last recorded video any responsibility for 9/11, instead directing Americans to look inside their own government. The FBI itself has stated that there is no evidence that Osama bin Laden is responsible for 9/11. Bin Laden’s obituary appeared in numerous foreign and Arabic press outlets, and also on Fox News. No one can survive renal failure for a decade, and no dialysis machine was found in the alleged Abbottabad compound of bin Laden, who allegedly was murdered by SEALs a decade after his obituary notices.

Additionally, no one among the crew of the ship from which the White House reported bin Laden was buried at sea saw any such burial, and the sailors sent messages home to that effect. Somehow a burial was held onboard a ship on which there are constant watches and crew on alert at all hours, and no one witnessed it.

Additionally, the White House story of the alleged murder of bin Laden changed twice within the first 24 hours. The claim that Obama and his government watched the action transmitted live from cameras on the SEALs’ helmets was quickly abandoned, despite the release of a photo of the Obama regime intently focused on a TV set and alleged to be watching the live action. No video of the deed was ever released. To date there is no evidence whatsoever in behalf of the Obama regime’s claim. Not one tiny scrap. Just unsubstantiated self-serving claims.

Additionally, as I have made available on my website, witnesses interviewed by Pakistan TV reported that only one helicopter landed in Abbottabad and that when the occupants of the helicopter returned from the alleged bin Laden compound, the helicopter exploded on takeoff and there were no survivors. In other words, there was no bin Laden corpse to deliver to the ship that did not witness a burial and no SEAL hero to return who allegedly murdered an unarmed bin Laden. Moreover, the BBC interviewed residents in Abbottabad, including those next door to the alleged “bin Laden compound,” and all say that they knew the person who lived there and it was not bin Laden.

Any SEAL who was so totally stupid as to kill the unarmed “Terror Mastermind” would probably have been court-martialed for incompetency. Look at the smiling face of the man Who Killed Bin Laden. He thinks that his claim that he murdered a man makes him a hero, a powerful comment on the moral degeneracy of Americans.

dissension in the ranks?


theguardian |  Robert O’Neill, a highly decorated 38-year-old veteran from Butte, Montana, now retired after 17 years in the forces, also said the al-Qaida leader died afraid and that US military leaders had not wanted him captured alive.

Once proud of their reputation as the “quiet professionals”, the elite Seals have been thrust into the media spotlight by their squabbling. The men who have fuelled controversy about how much the public should know about the bin Laden raid face hostility over their breaking of a traditional vow of silence, and criticism from former comrades in arms.

“Vets need to sack up. We will bash each other for no fucking reason,” O’Neill told freelance journalist Alex Quade, in recordings broadcast on CNN. “Every marine that gets out, every Ranger that gets out, every, every army guy that writes a book, they’re lauded as heroes. You do it as a Seal and you’re a fucking villain.”

Controversy has already swirled around Matt Bissonnette, another member of the 23-man team, who under the pen name Mark Owen in 2012 published No Easy Day, a book about the raid. The manuscript was not cleared by the Pentagon and he is still under investigation for leaking classified material.

Both O’Neill and Bissonnette were apparently the targets of a fierce letter of criticism sent to all former and serving Seals by rear admiral Brian Losey and force master chief Michael Magaraci. It did not mention them by name, but there were few doubts about its target.

Any mission is successful because of teamwork by hundreds of unnamed comrades, not a few shots by the men on the ground, and if those men claim public recognition for their role they betray the others, the letter argued.

“Any real credit to be rendered is about the incredible focus, commitment and teamwork of this diverse network, and the years of hard work undertaken with little individual public credit,” the letter, dated October 31, said. “We do not abide wilful or selfish disregard for our core values in return for public notoriety and financial gain, which only diminishes otherwise honourable service, courage and sacrifice.”

O’Neill says he shot Bin Laden twice in the head. He had featured in an Esquire magazine piece last year named only as “The Shooter”, but was planning to unmask himself in an interview with Fox News this month. A special forces website scooped the news channel, naming O’Neill in its report on the senior officers’ letter.

It unleashed attacks and questions about the ex-Seal’s account but the veteran, in an echo of his former commanders, claimed he was not interested in fame. “The most important thing that I have learned in the last two years is, to me it doesn’t matter any more if I am ‘The Shooter,’” O’Neill said in comments recorded before his name was made public.

“The team got him. It was a successful mission. Regardless of the negativity that comes with it, I don’t give a fuck. We got him. We brought him out, and we lived. And that obviously will go down historically, but I don’t care if I’m ‘The Shooter’. And there are people who think I’m not. So whatever.”

Saturday, November 08, 2014

consanguinity and reproductive health among arabs

reproductive-health journal |  Socio-cultural factors, such as maintenance of family structure and property, ease of marital arrangements, better relations with in-laws, and financial advantages relating to dowry seem to play a crucial role in the preference for consanguinity in Arab populations [3]. Consanguineous marriages are generally thought to be more stable than marriages between non-relatives, though there are no studies comparing divorce rates of consanguineous and non-consanguineous marriages among Arabs. It is generally believed that the husband's family would side with the consanguineous wife in marital disputes since she is considered part of the extended family. When there are children with disabilities, more family members share in caring for these children. Contrary to what is commonly thought, consanguinity in the Arab World is not confined to Muslim communities. Several other communities, including the Lebanese, Jordanian, and Palestinian Christian populations, have also practiced consanguinity, but to a lesser extent than Muslims [4-7].
Consanguinity rates show wide variations among Arab countries, as well as within the same country (Table 1, Additional file 1). However, reports from Arab countries on consanguinity rates may sometimes include marriages between third cousins or more distant relatives within the consanguineous category. Although this discrepancy affects the total consanguinity rate, it does not markedly alter the average inbreeding coefficient. Therefore, for comparison of consanguinity rates among populations, two parameters are best used: the mean inbreeding coefficient (F) and the rate of marriage between first cousins. However, Arab societies have a long tradition of consanguinity, and the cumulative estimate of F may exceed the value calculated for a single generation [8].
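To make these two parameters concrete: Wright's inbreeding coefficient for the offspring of first cousins is 1/16, and a population's mean F is simply the frequency-weighted average across marriage types. The following minimal Python sketch uses invented marriage-type frequencies, not figures from the paper, to show how a mean F in the range reported for Arab populations arises:

# Wright's F by marriage type: first cousins 1/16, first cousins once
# removed 1/32, second cousins 1/64, unrelated 0.
marriage_types = {                        # name: (assumed frequency, F)
    "first cousins":              (0.25, 1 / 16),
    "first cousins once removed": (0.05, 1 / 32),
    "second cousins":             (0.10, 1 / 64),
    "unrelated":                  (0.60, 0.0),
}

mean_F = sum(freq * f for freq, f in marriage_types.values())
print(f"mean inbreeding coefficient F = {mean_F:.4f}")    # ~0.0188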
Secular changes in consanguinity rates have been noticed in some Arab populations. In Jordan [9], Lebanon [5], Bahrain [10], and among Palestinians [11-13], the frequency of consanguineous marriage is decreasing. Several factors may be playing a role in decreasing consanguinity rates in Arab countries. Amongst these factors are rising levels of female education, declining fertility resulting in fewer suitable relatives to marry, greater mobility from rural to urban settings, and the improving economic status of families. Moreover, genetic diseases may be feared more now that infectious diseases are on the decline as causes of severe morbidity and mortality.
Generally, the highest rates of marriage to close relatives are consistently reported in the more traditional rural areas and among the poorest and least educated in society [8]. Reports from some Arab countries have shown that consanguinity rates are lower in urban when compared to rural settings. Urban versus rural first-cousin marriage rates were 10% and 15% in Algeria [14], 8.3% and 17.2% in Egypt [15], and 29.8% and 37.9% in Jordan [6], respectively. Likewise, the mean inbreeding coefficient was lower in urban as compared to rural settings in Syria (0.0203 versus 0.0265) [16]. In Jordan, it was evident that the higher the level of education of the female partner, the lower the consanguinity rate: only 12% of university-educated females married their first cousins, whereas 25% of university-educated males did so [6]. Similar trends of lower consanguinity rates among educated women, but not educated men, were noticed in Yemen [17] and Tunisia [18].
On the other hand, social, religious, cultural, political and economic factors still play roles in favoring consanguineous marriages among the new generations just as strongly as they did among the older generations, particularly in rural areas. Consanguinity rates seem to be increasing at a higher pace in Qatar [19], Yemen [17], the United Arab Emirates (UAE) [20], and Tlemcen in Algeria [14]. In Morocco, a study indicated an increasing consanguinity rate from the previous (21.5%) to the present (25.4%) generation [21], while another study indicated a decreasing consanguinity rate [22]. Consanguinity rates are not declining in some Arab countries because it is generally accepted that the social advantages of consanguinity outweigh the disadvantages [23], and consanguinity is regarded as a deeply rooted cultural trend. It is believed that the practice of consanguinity has significant social and economic advantages. Consanguineous marriages among Arabs are respected because it is thought that they promote family stability, simplify financial premarital negotiations, offer a greater compatibility between the spouses and other family members, offer a lesser risk of hidden financial and health issues, and maintain the family land possessions [3,24,25]. Among 390 women attending reproductive health clinics in Jordan, consanguinity was protective against violence during pregnancy [26]. In all cases, reports on secular trends in consanguinity need to be treated with some caution because in countries where consanguinity is favored, major regional and ethnic differences in prevalence are commonly observed [3].

the problem of inbreeding in islam

pjmedia |  There is a dire phenomenon rising in Europe that is crippling entire societies and yet the continent sleeps, refusing not only to confront the destructive elephant in the room, but also to admit its very existence. The troubling reality being referred to is the widespread practice of Muslim inbreeding and the birth defects and social ills that it spawns.
The tragic effect of the left’s control of the boundaries of debate is that any discussion about vital issues such as these marks an individual as an “Islamophobe” and a “racist.” A person who dares to point at the pathology of inbreeding in the Muslim community is accused of whipping up hatred against Muslim people. But all of this could not be further from the truth. To fight against inbreeding anywhere is to defend humanity and to defend innocent babies from birth defects. Fighting against this Islamic practice stems from a pro-Muslim calling, since identifying destructive ideologies and practices in Islam enables the protection of the Muslim people from harm.
Massive inbreeding among Muslims has been going on since their prophet allowed first-cousin marriages more than 50 generations (1,400 years) ago. For many Muslims, therefore, intermarriage is regarded as being part of their religion. In many Muslim communities, it is a source of social status to marry one’s daughter or son to his or her cousin. Intermarriage also ensures that wealth is kept within the family. Islam’s strict authoritarianism plays a large role as well: keeping daughters and sons close gives families more power to control and decide their choices and lifestyles.
Westerners have a historical tradition of being ready to fight and die for their country. Muslims, on the other hand, are bound together less by patriotism, but mainly by family relations and religion. Intermarrying to protect the family and community from outside non-Islamic influence is much more important to Muslims living in a Western nation than integrating into that nation and supporting it.
Today, 70 percent of all Pakistanis are inbred and in Turkey the rate is between 25 and 30 percent (Jyllands-Posten, 27/2 2009, “More stillbirths among immigrants”). A rough estimate reveals that close to half of everybody living in the Arab world is inbred. A large percentage of blood-related parents come from families where intermarriage has been a tradition for generations.
A BBC investigation in Britain several years ago revealed that at least 55% of the Pakistani community in Britain was married to a first cousin. The Times of India affirmed that “this is thought to be linked to the probability that a British Pakistani family is at least 13 times more likely than the general population to have children with recessive genetic disorders.”
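That "13 times more likely" figure is roughly what textbook population genetics predicts. With a recessive allele at frequency q and parental inbreeding coefficient F, the frequency of affected births is q^2 + Fq(1-q), so the risk relative to random mating is 1 + F(1-q)/q. In this minimal Python sketch, the allele frequency q = 0.005 is an illustrative assumption of ours, not a figure from the BBC or Times of India reports:

def relative_risk(q, F):
    """Risk of a homozygous-recessive birth relative to random mating."""
    # Affected frequency under inbreeding is q**2 + F*q*(1-q); divide by q**2.
    return 1 + F * (1 - q) / q

# First-cousin parents (F = 1/16) and an illustrative rare allele (q = 0.005):
print(f"relative risk: {relative_risk(0.005, 1/16):.1f}x")   # ~13.4x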

did inbreeding shape the course of human evolution?



newscientist |  TALK about an inauspicious beginning. For thousands of years our ancestors lived in small, isolated populations, leaving them severely inbred, according to a new genetic analysis. The inbreeding may have caused a host of health problems, and it is likely that small populations were a barrier to the development of complex technologies.
In recent years, geneticists have read the genomes of long-dead humans and extinct relatives like Neanderthals. David Reich of Harvard Medical School in Boston has now sequenced the Neanderthal genome and that of another extinct human, the Denisovan, to an unprecedented degree of accuracy. He presented his findings at a Royal Society meeting on ancient DNA in London on 18 November.
Describing the genomes as "nearly error-free", Reich says both species were severely inbred due to small populations. "Archaic populations had low genetic diversity, really extraordinarily low," he said. "It's among the lowest diversity of any organism in the animal kingdom."
One Neanderthal, whose DNA Reich obtained from a toe bone, had almost no diversity in about one-eighth of the genome: both copies of each gene were identical. That suggests the individual's parents were half-siblings.
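The half-sibling inference follows from standard pedigree arithmetic: a child's expected inbreeding coefficient is F = sum over common ancestors A of (1/2)^(g1+g2+1) * (1 + F_A), where g1 and g2 are the generations separating each parent from A. This minimal Python sketch of the textbook formula (our illustration, not Reich's actual analysis) shows why identity across about one-eighth of the genome points to half-sibling parents:

def inbreeding(common_ancestors):
    """common_ancestors: list of (g1, g2, F_A), one tuple per shared ancestor."""
    return sum(0.5 ** (g1 + g2 + 1) * (1 + F_A)
               for g1, g2, F_A in common_ancestors)

# Half-sibling parents share one parent, one generation away on each side:
print(inbreeding([(1, 1, 0.0)]))                  # 0.125 -> one-eighth of genome
# Compare first-cousin parents (two shared grandparents, two generations each):
print(inbreeding([(2, 2, 0.0), (2, 2, 0.0)]))     # 0.0625 = 1/16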
That's in line with previous evidence of small populations, says Chris Stringer of the Natural History Museum in London. "In the distant past, human populations were probably only in the thousands or at best tens of thousands, and lived locally, exchanging mates only with their nearest neighbours."
Our genomes still carry traces of these small populations. A 2010 study concluded that our ancestors 1.2 million years ago had a population of just 18,500 individuals, spread over a vast area (PNAS, doi.org/dv75x8).
Fossils suggest the inbreeding took its toll, says Erik Trinkaus of Washington University in St Louis, Missouri. Those he has studied have a range of deformities, many of which are rare in modern humans. He thinks such deformities were once much more common (PLoS ONE, doi.org/p6r).
Despite the impact on health, it is unclear whether inbreeding could have killed off the Neanderthals and Denisovans. More likely is the effect of small populations on culture and technology, says Mark Thomas of University College London. Larger populations retain more knowledge and find ways to improve technologies. This "cumulative culture" is unique to humans, but it could only emerge in reasonably large populations. In small populations, knowledge is easily lost, which explains why skills like bone-working show up and then vanish, says Trinkaus.

gene-centrism vs. multi-level selection

guardian |  A disagreement between the twin giants of genetic theory, Richard Dawkins and EO Wilson, is now being fought out by rival academic camps in an effort to understand how species evolve.
The learned spat was prompted by the publication of a searingly critical review of Wilson's new book, The Social Conquest of Earth, in Prospect magazine this month. The review, written by Dawkins, author of the popular and influential books The Selfish Gene, The Blind Watchmaker and The God Delusion, has prompted more letters and online comment than any other article in the recent history of the magazine, and attacks Wilson's theory "as implausible and as unsupported by evidence".
"I am not being funny when I say of Edward Wilson's latest book that there are interesting and informative chapters on human evolution, and on the ways of social insects (which he knows better than any man alive), and it was a good idea to write a book comparing these two pinnacles of social evolution, but unfortunately one is obliged to wade through many pages of erroneous and downright perverse misunderstandings of evolutionary theory," Dawkins writes.
The Oxford evolutionary biologist, 71, has also infuriated many readers by listing other established academics who, he says, are on his side when it comes to accurately representing the mechanism by which species evolve. Wilson, in a short piece penned promptly in response to Dawkins's negative review, was also clearly annoyed by this attempt to outflank him.
"In any case," Wilson writes, "making such lists is futile. If science depended on rhetoric and polls, we would still be burning objects with phlogiston [a mythical fire-like element] and navigating with geocentric maps."
Wilson, 83, is a Harvard professor of evolutionary biology who became famous in the early 1970s with his study of social species in his books The Insect Societies and Sociobiology. He is internationally acknowledged as "the father of sociobiology" and is the world's leading authority on ants.
For lay spectators, the row is a symptom of the long and controversial evolution of the very idea of evolution. At root it is a dispute about whether natural selection, the theory of "the survival of the fittest" first put forward by Charles Darwin in 1859, occurs only to preserve the single gene. Wilson is an advocate of "multi-level selection theory", a development of the idea of "kin selection", which holds that other biological, social and even environmental priorities may be behind the process.

Friday, November 07, 2014

the $9 billion witness; meet jpmorgan chase's worst nightmare


rollingstone |   She tried to stay quiet, she really did. But after eight years of keeping a heavy secret, the day came when Alayne Fleischmann couldn't take it anymore. 
"It was like watching an old lady get mugged on the street," she says. "I thought, 'I can't sit by any longer.'" 

Fleischmann is a tall, thin, quick-witted securities lawyer in her late thirties, with long blond hair, pale-blue eyes and an infectious sense of humor that has survived some very tough times. She's had to struggle to find work despite some striking skills and qualifications, a common symptom of a not-so-common condition called being a whistle-blower.

Fleischmann is the central witness in one of the biggest cases of white-collar crime in American history, possessing secrets that JPMorgan Chase CEO Jamie Dimon late last year paid $9 billion (not $13 billion as regularly reported – more on that later) to keep the public from hearing.

Back in 2006, as a deal manager at the gigantic bank, Fleischmann first witnessed, then tried to stop, what she describes as "massive criminal securities fraud" in the bank's mortgage operations.
Thanks to a confidentiality agreement, she's kept her mouth shut since then. "My closest family and friends don't know what I've been living with," she says. "Even my brother will only find out for the first time when he sees this interview." 

Six years after the crisis that cratered the global economy, it's not exactly news that the country's biggest banks stole on a grand scale. That's why the more important part of Fleischmann's story is in the pains Chase and the Justice Department took to silence her.

She was blocked at every turn: by asleep-on-the-job regulators like the Securities and Exchange Commission, by a court system that allowed Chase to use its billions to bury her evidence, and, finally, by officials like outgoing Attorney General Eric Holder, the chief architect of the crazily elaborate government policy of surrender, secrecy and cover-up. "Every time I had a chance to talk, something always got in the way," Fleischmann says.

why would omidyar sponsor journalism probing global capitalism?


rall |  Just over one year ago, billionaire eBay cofounder Pierre Omidyar issued one of the most dramatic announcements America’s beleaguered journalists had experienced in their lifetimes. After decades of closing newspapers, shrinking newsrooms, vanishing foreign bureaus and the near extinction of investigative reporting due to brutal, relentless budget-cutting, Omidyar would endow a new company, First Look Media, with a staggeringly large sum of cash – $250 million – to be deployed in the service of a breathtakingly ambitious attempt to reinvent advocacy journalism in everything from investigations of financial corruption to sports coverage.

Even better, from the standpoint of progressives living in the political wilderness since the rise and fall of George McGovern, First Look Media would be edited by leftist pundits and advocacy journalists: the legal columnist Glenn Greenwald, to whom former NSA contractor Edward Snowden leaked more than a million classified US government documents; the documentarian Laura Poitras, also intimately involved in the Snowden saga; and the respected anti-militarism critic Jeremy Scahill.

As some cynics opined, it all sounded too good to be true. (Disclosure: for just shy of a month earlier this year, I worked for Pando Daily.) Why would a billionaire like Omidyar bankroll a bunch of antiestablishment types like the financial reporter Matt Taibbi – hired away from Rolling Stone – whose mission in life is in large part to undermine global capitalism?

Although it’s too soon to declare First Look dead and gone, and Omidyar claims to be as committed to his utopian company as ever, things have gone from bad to worse over the last year. Omidyar’s $250 million pledge shrank to $50 million. The mission to fund hard-hitting journalism and commentary was recast as, among other things, possibly a “platform” expected to generate significant revenue. Tales of shrinking budgets, diminished expectations, dwindling ambitions and staffers leaving after complaining of managerial incompetence appeared with increasing frequency in the trade press.

omidyar's version of events...,


firstlook |  Matt Taibbi, who joined First Look Media just seven months ago, left the company on Tuesday. His departure—which he describes as a refusal to accept a work reassignment, and the company describes as a resignation—was the culmination of months of contentious disputes with First Look founder Pierre Omidyar, chief operating officer Randy Ching, and president John Temple over the structure and management of Racket, the digital magazine Taibbi was hired to create. Those disputes were exacerbated by a recent complaint from a Racket employee about Taibbi’s behavior as a manager.

The departure of the popular former Rolling Stone writer is a serious setback for First Look in its first year of operations. Last January, Omidyar announced with great fanfare that he would personally invest $250 million in the company to build “a general interest news site that will cover topics ranging from entertainment and sports to business and the economy” incorporating multiple “digital magazines” as well as a “flagship news site.”

One year later, First Look still has only one such magazine, The Intercept.

Omidyar has publicly and privately pledged multiple times that First Look will never interfere with the stories produced by its journalists. He has adhered to that commitment with both The Intercept and Racket, and Taibbi has been clear that he was free to shape Racket’s journalism fully in his image. His vision was a hard-hitting, satirical magazine in the style of the old Spy that would employ Taibbi’s facility for merciless ridicule, humor, and parody to attack Wall Street and the corporate world. First Look was fully behind that vision.

Taibbi’s dispute with his bosses instead centered on differences in management style and the extent to which First Look would influence the organizational and corporate aspects of his role as editor-in-chief. Those conflicts were rooted in a larger and more fundamental culture clash that has plagued the project from the start: A collision between the First Look executives, who by and large come from a highly structured Silicon Valley corporate environment, and the fiercely independent journalists who view corporate cultures and management-speak with disdain. That divide is a regular feature in many newsrooms, but it was exacerbated by First Look’s avowed strategy of hiring exactly those journalists who had cultivated reputations as anti-authoritarian iconoclasts.

The Intercept, through months of disagreements and negotiations with First Look over the summer, was able to resolve most of these conflicts; as a result, it now has a sizable budget, operational autonomy, and a team of talented journalists, editors, research specialists, and technologists working collaboratively and freely in the manner its founders always envisioned.

When First Look was launched last October, it was grounded in two principles: one journalistic, the other organizational. First, journalists would enjoy absolute editorial freedom and journalistic independence. Second, the newsroom would avoid rigid top-down hierarchies and instead would be driven by the journalists and their stories.

nouveau petit elite honeypot for out of pocket journalists?


nymag |  The confusion inherent to any start-up has been exacerbated by Omidyar’s ruminative style. This spring, he went through a period of deep thinking, highlighted by a summit with news-industry veterans at a hotel he part-owns in Laguna Beach, California. Under “Chatham House Rules,” no one was to talk directly about what was said. “He’s a true believer, I believe,” says Ken Doctor, a media analyst who attended. Many of those who have heard Omidyar and his aides, at that summit and other meetings, have come away thinking his plans sounded naïve and not fully baked. Sandy Rowe, a former editor of the Oregonian who was brought on as a consultant, says the fuzzy vision gives Omidyar flexibility. “This is a man who, since he said he would put down this $250 million, has never said, ‘Here is my plan.’ ”

The absence of a plan, however, contributed to dissension within First Look, and chatter began to emanate from behind its wall of operational secrecy. There was an East Coast–West Coast feud, a divide between the journalists and the technologists. Omidyar’s loyalists out in California and Hawaii grumbled as Greenwald traveled the world, promoting a book, picking up awards, and speaking out of turn. Poitras, meanwhile, was immersed in finishing a documentary on Snowden. There was an internal battle over budgets, which stalled hiring and hindered journalistic output. The Intercept initially published at a piddling rate. In June, the three co-founders of the Intercept and Taibbi wrote a joint letter to Omidyar demanding freedom to proceed with their expansion.

Omidyar then published a blog post saying he had “definitely rethought some of our original ideas and plans.” Instead of quick expansion, he announced that First Look would be in “planning, start-up, and experimental mode for at least the next few years,” focusing its immediate efforts on the Intercept and Racket while working to develop new journalistic technology and design with a team in San Francisco. He also appointed a confidant as First Look’s editorial boss: the former Civil Beat editor John Temple. “I think that the message,” Temple told me in August, “is that we’re not trying hard enough if we’re not failing a little bit, if we’re not saying things that don’t bear fruit.”
The shift proved beneficial to the Intercept, which is no longer under the day-to-day management of its founders. Omidyar lured editor John Cook away from Gawker to run the site, and after a publication pause and a redesign, it has been gaining momentum, breaking big stories about the NSA’s surveillance of American Muslim leaders and the seemingly arbitrary standards of the government’s terrorist-screening system. The latter disclosure reportedly came from a leaker other than Snowden; the FBI recently searched the home of a government contractor suspected of being the source.

The factional conflicts within Omidyar’s enterprise, however, seem far from settled. In August, Temple spoke enthusiastically about Racket, which he said had broadened its focus to include political topics. But as its launch date neared, Taibbi disappeared from the company amid disputes with First Look higher-ups. Omidyar announced Taibbi was leaving and that First Look would now “turn our focus to exploring next steps” for Racket, a project that a spokeswoman said had cost him $2 million over its eight months of development. In the wake of the tumultuous departure, the Intercept published a remarkable inside account describing “months of contentious disputes” between Taibbi and his superiors over his management, including a complaint from an employee that he was “verbally abusive.” But the journalists did not spare Omidyar from blame, describing what they called “a collision between the First Look executives, who by and large come from a highly structured Silicon Valley corporate environment, and the fiercely independent journalists who view corporate cultures and management-speak with disdain.” The iconoclasts even questioned Omidyar’s “avowed strategy” of hiring “anti-authoritarian iconoclasts.”

Even before the turmoil, Temple hinted that a strategic reconsideration was under way. “It will be more complex,” he told me, “than an organization of iconoclasts.” He says that Omidyar sees journalism as “the third phase of his professional life,” bringing together his technology experience and philanthropy, and is prepared to be patient, even if it perplexes outsiders. Temple says there is no incongruity between Omidyar’s communitarian ideals and his financing of an insurgency. “It’s not all about civility,” Temple says. “It’s about having a healthy and open society.” There’s a tangible insight buried in that amorphous sentiment: Omidyar’s interest in journalism is mechanistic. He wants to aggregate to himself the power to declassify and to bring about the “greater good,” as he defines it.

In October, the founding Intercept gang — minus Omidyar — got together for a party at Mayday Space, a loft in a graffitied section of Bushwick. The Snowden saga had entered its Redford-and-Hoffman phase with the premiere of Poitras’s documentary Citizenfour, which was partly financed by Skoll’s Participant Media and looks destined for Oscar consideration. A DJ spun songs next to a huge propaganda-style poster reading WHISTLE-BLOWER! KNOW YOUR PLACE … SHUT YOUR FACE. Smokers congregated on the balcony, which had a distant view of the Empire State Building, lit red. Greenwald hinted of further scoops. “Stay tuned, is all I can say,” he told me.
Greenwald says that he and Omidyar plan to finally meet later this month, when they will appear at a very different sort of gathering: an invite-only event called Newsgeist, co-sponsored by Google and the Knight Foundation. Billed as an “unconference,” it has no agenda other than “reimagining the future of the news.” Greenwald told me “top editors, executives, moguls, and founders” are expected to attend, including Dean Baquet of the New York Times. I asked the organizer from Google about other attendees and speakers, but he said he could disclose no further details, to “protect the privacy and security of our invited guests.” It seems that the Newsgeist is very hush-hush.

no journalist has been luckier...,

rollingstone |  Today is my last day at Rolling Stone. As of this week, I’m leaving to work for First Look Media, the new organization that’s already home to reporters like Glenn Greenwald, Jeremy Scahill and Laura Poitras.

I’ll have plenty of time to talk about the new job elsewhere. But in this space, I just want to talk about Rolling Stone, and express my thanks. Today is a very bittersweet day for me. As excited as I am about the new opportunity, I’m sad to be leaving this company.

More than 15 years ago, Rolling Stone sent a reporter, Brian Preston, to do a story on the eXile, the biweekly English-language newspaper I was editing in Moscow at the time with Mark Ames. We abused the polite Canadian Preston terribly – I think we thought we were being hospitable – and he promptly went home and wrote a story about us that was painful, funny and somewhat embarrassingly accurate. Looking back at that story now, in fact, I’m surprised that Rolling Stone managing editor Will Dana gave me a call years later, after I’d returned to the States.

I remember when Will called, because it was such an important moment in my life. I was on the American side of Niagara Falls, walking with friends, when my cell phone rang. Night had just fallen and when Will invited me to write a few things in advance of the 2004 presidential election, I nearly walked into the river just above the Falls.

At the time, I was having a hard time re-acclimating to life in America and was a mess personally. I was broke and having anxiety attacks. I specifically remember buying three cans of corned beef hash with the last dollars of available credit on my last credit card somewhere during that period. Anyway I botched several early assignments for the magazine, but Will was patient and eventually brought me on to write on a regular basis.

It was my first real job and it changed my life. Had Rolling Stone not given me a chance that year, God knows where I’d be – one of the ideas I was considering most seriously at the time was going to Ukraine to enroll in medical school, of all things.

In the years that followed, both Will and editor/publisher Jann S. Wenner were incredibly encouraging and taught me most of what I now know about this business. It’s been an amazing experience.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...