Sunday, April 15, 2012

the wealth defense industry

alternet | In 2005, Citigroup offered its high net-worth clients in the United States a concise statement of the threats they and their money faced.

The report told them they were the leaders of a “plutonomy,” an economy driven by the spending of its ultra-rich citizens. “At the heart of plutonomy is income inequality,” which is made possible by “capitalist-friendly governments and tax regimes.”

The danger, according to Citigroup’s analysts, is that “personal taxation rates could rise – dividends, capital gains, and inheritance taxes would hurt the plutonomy.”

But the ultra-rich already knew that. In fact, even as America’s income distribution has skewed to favor the upper classes, the very richest have successfully managed to reduce their overall tax burden. Look no further than Republican presidential contender Mitt Romney, who in 2010 paid 13.9 percent of his $21.6 million income in taxes, the same tax rate as an individual who earned a mere $8,500 to $34,500.

How is that possible? How can a country make so much progress toward equality on other fronts – race, gender, sexual orientation and disability – but run the opposite way in its policy on taxing the rich?

In 2004, the American Political Science Association (APSA) tried to answer that very question. The explanation they came up with viewed the problem as a classic case of democratic participation: While the poor have overwhelming numbers, the wealthy have higher rates of political participation, more advanced skills and greater access to resources and information. In short, APSA said, the wealthy use their social capital to offset their minority status at the ballot box.

But this explanation has one major flaw. Regardless of the Occupy movement’s rhetoric, most of the growth in the wealth gap has actually gone to a tiny sliver of the 1% – the top one-tenth of it, or even the top one-one-hundredth.

Even more shockingly, that 1 percent of the 1% has shifted its tax burden not to the middle class or poor, but to rich households in the 85th to 99th percentile range. In 2007, the effective income tax rate for the richest 400 Americans was below 17 percent, while the “mass affluent” 1% paid nearly 24 percent. Disparities in Social Security taxes were even greater, with the merely rich paying 12.4 percent of their income and the super-rich paying only one-one-thousandth of a percent.
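The Social Security disparity follows mechanically from the payroll tax's wage cap: the 12.4 percent rate applies only to income up to a capped wage base, so the effective rate collapses as income rises. A minimal sketch of the arithmetic (the 2007 taxable wage base of $97,500 is used; the hypothetical incomes are illustrative):

```python
# Effective Social Security tax rate under a wage cap.
# Assumptions: 12.4% combined employee+employer rate,
# 2007 taxable wage base of $97,500.
RATE = 0.124
WAGE_BASE = 97_500

def effective_ss_rate(income: float) -> float:
    """The tax applies only to income up to the cap."""
    return RATE * min(income, WAGE_BASE) / income

print(f"{effective_ss_rate(90_000):.4%}")        # merely affluent: the full 12.4%
print(f"{effective_ss_rate(1_000_000_000):.4%}") # super-rich: roughly 0.001%
```

For a billion-dollar income the effective rate works out to roughly a thousandth of a percent, which is the scale of the disparity the article describes.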

It’s one thing for the poor to lose the democratic participation game, but APSA has no explanation for why the majority of the upper class – which has no shortage of government-influencing social capital – should fall so far behind the very top earners. (Of course, relative to middle- and lower-class earners, they’ve done just fine.)

For a better explanation, we need to look more closely at the relationship between wealth and political power. I propose an updated theory of “oligarchy,” the same lens developed by Plato and Aristotle when they studied the same problem in their own times.

A quick review

First, let’s review what we think we know about power in America.

We begin with a theory of “democratic pluralism,” which posits that democracy is basically a tug-of-war with different interest groups trying to pull government policy toward an outcome. In this framework, the rich are just one group among many competing “special interests.”

Of course, it’s hard not to notice that some groups can tug better than others. So in the 1950s, social scientists, like C. Wright Mills, author of The Power Elite, developed another theory of “elites” – those who wield more pull thanks to factors like education, social networks and ethnicity. In this view, wealth is just one of many factors that might help someone become the leader of a major business or gain a government position, thereby joining the elite.

But neither theory explains how the super-rich are turning public policy to their benefit even at the expense of the moderately rich. The mass affluent vastly outnumber the super-rich, and the super-rich aren’t necessarily better-educated, more skilled or more able to participate in politics; nor do the super-rich dominate the top posts of American government – our representatives tend to come from the slightly lower rungs of the upper class, the very people who are losing the tax battle.

Also, neither theory takes into account the unique power that comes with enormous wealth – the kind found in that one-tenth of the 1%. Whether or not the super-rich hold any official position in business or government, they remain powerful.

Only when we separate wealth from all other kinds of power can we begin to understand why our tax system looks the way it does – and, by extension, how the top one-tenth of 1% of the income distribution has distorted American democracy.

Enormous wealth is the heart of oligarchy.

So what’s an oligarchy?

Across all political spectrums, oligarchs are people (never corporations or other organizations) who command massive concentrations of material resources (that is, wealth) that can be deployed to defend or enhance their own property and interests, even if they don’t own those resources personally. Without this massive concentration of wealth, there are no oligarchs.

In any society, of course, an extremely unequal wealth distribution provokes conflict. Oligarchy is the politics of the defense of this wealth, propagated by the richest members of society.

Wealth defense can take many forms. In ancient Greece and Rome, the wealthiest citizens cooperated to run institutionalized states that defended their property rights. In Suharto’s Indonesia, a single oligarch led a despotic regime that mostly used state power to support other oligarchs. In medieval Europe, the rich built castles and raised private armies to defend themselves against each other and deter peasants tempted by their masters’ vaults. In all of these cases oligarchs are directly engaged in rule. They literally embody the law and play an active role in coercion as part of their wealth defense strategy.

Contemporary America (along with other capitalist states) instead houses a kind of “civil oligarchy.” The big difference is that property rights are now guaranteed by the impersonal laws of an armed state. Because of that guarantee, oligarchs can be disarmed for the first time in history and no longer need to rule directly; they need only submit to the rule of law for this modern “civil” arrangement to work. When oligarchs do enter government, it is more for vanity than to rule as or for oligarchs. Good examples are New York City Mayor Michael Bloomberg, former presidential candidate Ross Perot and former Massachusetts Governor Mitt Romney.

Another feature of American oligarchy is that it allows oligarchs to hire skilled professionals, middle- and upper-class worker bees, to labor year-round as salaried, full-time political advocates and defenders of the oligarchy. Unlike those backing ordinary politicians, the oligarchs’ professional forces require no ideological invigoration to keep going. In other words, they function as a very well-paid mercenary army.

Whatever views and interests may divide the very rich, they are united in being materially focused and materially empowered. The social and political tensions associated with extreme wealth bond oligarchs together even if they never meet, and set in motion the complex dynamics of wealth defense. Oligarchs do overlap with each other in certain social circles that theorists of the elite worked hard to map. But such networks are not vital to their power and effectiveness. Oligarchic theory requires no conspiracies or backroom deals. It is the minions oligarchs hire who provide structure and continuity to America’s civil oligarchy.

Saturday, April 14, 2012

pool near U.S. city contains more radioactive cesium than released by fukushima, chernobyl and all nuclear bomb tests COMBINED

zerohedge | The spent fuel pools at Fukushima are currently the top short-term threat to humanity.

But fuel pools in the United States store an average of ten times more radioactive fuel than is stored at Fukushima, have virtually no safety features, and are vulnerable to accidents and terrorist attacks.

If the water drains out for any reason, it will cause a fire in the fuel rods, as the zirconium metal jacket on the outside of the rods could very well catch fire within hours or days of being exposed to air. (Even a large solar flare could knock out the water-circulation systems for the pools.)

The pools are also filling up fast, according to the Nuclear Regulatory Commission.

The New York Times notes that squeezing more rods into pools may increase the risk of fire:

The reactor operators have squeezed spent fuel more tightly into the pools, raising the heat load and, according to some analyses, raising the risk of fire if the pools were ever drained.

Indeed, the fuel pools and rods at Fukushima appear to have “boiled,” caught fire and/or exploded soon after the earthquake knocked out power systems.

Robert Alvarez – a nuclear expert and a former special assistant to the United States Secretary of Energy – notes that there have also been many incidents within the U.S. involving fuel pools.

Friday, April 13, 2012

hide not slide...,

WSJ | U.S. securities regulators are conducting a wide-ranging investigation into the complex relationships between rapid-fire trading firms and stock exchanges, according to the official overseeing some 20 probes into computerized trading.

The inquiry into ownership and other ties is part of a broader probe into whether high-speed traders have unfair advantages over other investors, according to people familiar with the matter.

"We're interested in understanding the ownership structure and history" of the firms, said Daniel Hawke, head of the Securities and Exchange Commission's market abuse enforcement unit, in an interview. "How the firm got started, who was behind it and who wrote the [computer] code that might be at issue."

One such area under SEC scrutiny is the use of routing and trading instructions, known as order types. Many investors use relatively simple order types, such as limit orders, which specify a price at which an investor is willing to buy or sell a stock.

But exchanges also offer more sophisticated order types commonly used by rapid-fire traders that could potentially give them an edge over other investors, according to industry experts. Some allow computer-driven traders to hide orders and prevent them from routing to other exchanges, where the traders may have less control over the order execution.

One order type, called "Hide Not Slide" and offered by the exchange operator Direct Edge Holdings LLC, is among those being scrutinized by the SEC, according to people familiar with the matter. The agency is also looking at a similar order type offered by computerized stock exchange BATS Global Markets Inc., the people said.

Other exchanges also offer order types that share similar characteristics. Representatives of Direct Edge and BATS declined to comment.

The SEC is examining whether such order types unfairly allow high-speed traders to jump ahead of other investors in an exchange's "order book," or the queue of buy and sell orders that are typically ranked by price and when they were received, according to people familiar with the matter.
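The “order book” described here is, at most exchanges, a price-time priority queue: orders are ranked first by price, then by arrival time. A toy sketch of that ranking for the buy side (a hypothetical model for illustration, not any exchange's actual matching engine):

```python
import heapq

class BuyBook:
    """Toy price-time priority book for buy orders (illustrative only)."""

    def __init__(self):
        # Entries are (-price, seq, order_id): heapq pops the smallest tuple,
        # so negating price makes the highest bid pop first, and the arrival
        # sequence number breaks ties in favor of the earlier order.
        self._heap = []
        self._seq = 0

    def add(self, price, order_id):
        self._seq += 1
        heapq.heappush(self._heap, (-price, self._seq, order_id))

    def best(self):
        """Order at the front of the queue: best price, then earliest arrival."""
        return self._heap[0][2]

book = BuyBook()
book.add(10.00, "A")  # first to arrive, at $10.00
book.add(10.01, "B")  # better price jumps ahead of A
book.add(10.01, "C")  # same price as B, but later: queues behind B
print(book.best())    # "B"
```

The SEC's question, in these terms, is whether special order types let some participants effectively improve their position in this queue without improving their price.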

Another area of focus for the SEC is the rebates some traders earn from exchanges even as other investors pay fees to complete trades, say people familiar with exchange operations and the SEC probes.

The SEC stepped up its scrutiny of these high-speed trading firms and exchanges after the May 6, 2010 "flash crash," when computerized trading triggered a 9% selloff within minutes.

Some aspects of the SEC's inquiry are tied to at least one whistleblower, according to people familiar with the matter.

One of the most prominent lines of inquiry involves BATS, which last month pulled its initial public offering after halting trading due to what it described as a software glitch. On the morning BATS shares began trading, The Wall Street Journal disclosed details of the SEC's investigation into whether superfast-trading firms have exploited their links to BATS and other exchanges to gain an unfair advantage over other investors.

high frequency trading is cuckoo

BusinessSpectator | In the Australian Stock Exchange’s Sydney data room, which is about the size of a big lounge room, there are six “cuckoos”. These are the banks of servers installed by high frequency traders.

They sit against the wall opposite the ASX servers and each is connected directly into the host by a fat fibre optic pipe. Each cable is precisely the same length by agreement with the ASX so that none gets an advantage; if one server is closer to the input, its cable is looped around to lengthen it.

Think about that: one less metre of optic fibre – carrying data at roughly 200 million metres per second, about two-thirds of the speed of light in a vacuum – would give one share trader an unfair advantage over the rest. It suggests that something pretty quick is going on.
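Whatever the exact propagation speed, the arithmetic behind the equal-length rule is straightforward: light in glass fibre travels at roughly two-thirds of its vacuum speed, so each extra metre of fibre adds on the order of five nanoseconds of one-way delay. A rough sketch (the fibre speed is an assumed approximation):

```python
# Rough latency cost of extra fibre length.
# Assumption: signal speed in glass fibre is ~2e8 m/s
# (about two-thirds of the vacuum speed of light).
FIBRE_SPEED_M_PER_S = 2.0e8

def latency_ns(extra_metres: float) -> float:
    """Added one-way delay, in nanoseconds, for extra fibre length."""
    return extra_metres / FIBRE_SPEED_M_PER_S * 1e9

print(latency_ns(1.0))  # 5.0 ns per extra metre
```

Five nanoseconds per metre is small in human terms but decisive when competing algorithms react on microsecond timescales, which is why the ASX loops the shorter cables.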

The question is whether it’s fair to the rest of us; whether those six parasites with their suckers fastened directly into the heart of the ASX should be allowed to get away with it.

The ASX is no longer a regulator, just a business, so it says that if the practice is legal and it pays a fee – not to mention a handy rent in the data room – then it can’t and won’t stop them.

For global regulators it’s actually too late: high-frequency trading accounts for as much as 70 per cent of the volume on American stock exchanges, including the NYSE; the time to control it was ten years ago.

What do the computers and their algorithms do? Well, as my relatively low-frequency brain can understand it, these machines constantly monitor order flow into the ASX servers and the sophisticated programs can pick up patterns that indicate when a reasonably large order has been placed. What they then do, in effect, is “front-run” – that is, they buy ahead of the order and make a small spread selling into it.

In other words, by operating at the speed of light they can “feel” a buy order coming, dart in front of it, and ensure that the buyers pay a little bit more than they were going to, without noticing a thing.

These operators begin each day owning no shares and end each day in the same position but they make a lot of money by doing thousands of trades every day: it’s a high volume, low margin business.
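Those "high volume, low margin" economics can be put in rough numbers: thousands of round trips a day, each capturing a spread of a cent or so on a few hundred shares, compound into substantial daily revenue. All figures below are assumptions for illustration, not measured HFT data:

```python
# Illustrative only: every figure here is an assumption, not measured data.
trades_per_day = 10_000    # round trips, flat at the open and close
shares_per_trade = 200
captured_spread = 0.01     # dollars per share captured per round trip

daily_revenue = trades_per_day * shares_per_trade * captured_spread
print(f"${daily_revenue:,.0f} per day")  # one-cent edges, at scale
```

Under these assumed figures a one-cent edge yields $20,000 a day, which is the sense in which the margin is tiny but the volume makes it pay.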

It’s not known how much money the HFT traders make, but whatever it is they weren’t making it 10-15 years ago, and stockmarket returns have not gone up in that time, so whatever they make has come out of someone else’s pocket.

That someone, of course, is you.

where has all the trading gone?

cnbc | It’s one of the biggest mysteries on Wall Street. How can stocks be in their fourth year of a bull market and trading activity be so low?

During March, average daily volume in equity shares was at its lowest level since December 2007, according to new data from Credit Suisse. This is the same month that marked the three-year anniversary of the bull market that caused the Standard & Poor's 500 to double from its March 2009 credit-crisis low.

Credit Suisse tried to solve the riddle by blaming the growing popularity of options and futures markets, a drop in high frequency trading and stock splits.

“There’s no way to sugar-coat it: Volumes are down and trending lower,” wrote Ana Avramovic of Credit Suisse, in a note to clients. “A growing preference for other asset classes may be drawing money away from equities.”

Daily equity volume in March was 6.59 billion shares a day, the lowest since a sub-6 billion volume month in December 2007, according to Credit Suisse. (The firm adjusted December 2011’s low figures to account for the holiday-skewed week.)

Avramovic noted that options and futures volume set a record in 2011, as investors hungry to add risk looked for a place where they could use leverage.

“The options market has been breaking records for nine straight years and the shift has been a growing field that provides protection and leverage,” said Pete Najarian, co-founder of TradeMonster.com, an options and equity brokerage.

The Credit Suisse analyst also notes that high-frequency trading, which accounts for half of all market activity, has been on the decline since last summer. What’s more, Citigroup alone accounted for 7 percent of all volume in the second half of 2009 before its 1-for-10 reverse split, according to the report.

But the answer may be simpler than this. After two vicious bear markets in a decade, the average investor simply doesn’t trust this market anymore.

“There is no fresh money going into the markets,” said Doug Kass of Seabreeze Partners. “Why should we be surprised the retail investor is not there? We’ve had two huge drawdowns in stocks since 2000, a flash crash two years ago and real incomes are stagnating.”

Thursday, April 12, 2012

fear of a black republican..,

aljazeera | In the southern US state of Mississippi nearly 40 per cent of the population is black. But during the state's March 13 Republican primary, only two per cent of the voters were African Americans - and it was a similar picture throughout other Republican contests.

For years Republican leaders have said they would work hard to increase African American support for their party, but so far, those efforts appear to be a failure.

When asked during an interview why the Republican party is poison for so many African Americans the only black candidate to run for the 2012 Republican ticket, Herman Cain, said it was because "...they have been brainwashed into not being open-minded, not even considering a conservative point of view."

A documentary titled Fear of A Black Republican was just released here in Washington DC and it posed a similar question: Does the Republican party really want more black people?

So why has the Republican party not been successful in reaching out to black voters? And why is it also losing Hispanic voters?

To discuss these issues, we are joined by James Braxton Peterson, the director of Africana studies at Lehigh University; Ana Navarro, the National Hispanic co-chair for both John McCain and Jon Huntsman; and Kevin Williams, the director of the film, Fear of a Black Republican.

"I think there is a core cultural competency challenge here that the Republican party... has lost its sense and its ability to connect with communities of colour. You've got to be competent about the cultural issues that are important to these communities and then we can talk about the different strategies for trying to secure their votes."

James Braxton Peterson, analyst on black politics


LOSING THE BLACK VOTE:

  • The Republican party currently has little support among black voters
  • Republicans promoted the abolition of slavery, gaining black votes in the party's early decades
  • President Dwight Eisenhower got 39 per cent of the black vote in 1956
  • Richard Nixon was the last Republican candidate to get significant black support, winning 32 per cent of the black vote in his loss to John F Kennedy in 1960
  • Since the early 1960s black voters have overwhelmingly voted for the Democratic party
  • President Lyndon Johnson got 94 per cent support from black voters after he signed the Civil Rights Act in 1964
  • In the 2008 US presidential election 95 per cent of African Americans voted for Obama; John McCain received just 4 per cent of the black vote

the taint of social darwinism

NYTimes | Given the well-known Republican antipathy to evolution, President Obama’s recent description of the Republican budget as an example of “social Darwinism” may be a canny piece of political labeling. In the interests of historical accuracy, however, it should be clearly recognized that “social Darwinism” has very little to do with the ideas developed by Charles Darwin in “On the Origin of Species.” Social Darwinism emerged as a movement in the late 19th century, and has had waves of popularity ever since, but its central ideas owe more to the thought of a luminary of that time, Herbert Spencer, whose writings are (to understate) no longer widely read.

Spencer, who coined the phrase “survival of the fittest,” thought about natural selection on a grand scale. Conceiving selection in pre-Darwinian terms — as a ruthless process, “red in tooth and claw” — he viewed human culture and human societies as progressing through fierce competition. Provided that policymakers do not take foolish steps to protect the weak, those people and those human achievements that are fittest — most beautiful, noble, wise, creative, virtuous, and so forth — will succeed in a fierce competition, so that, over time, humanity and its accomplishments will continually improve. Late 19th-century dynastic capitalists, especially the American “robber barons,” found this vision profoundly congenial. Their contemporary successors like it for much the same reasons, just as some adolescents discover an inspiring reinforcement of their self-image in the writings of Ayn Rand.

Although social Darwinism has often been closely connected with ideas in eugenics (pampering the weak will lead to the “decline of the race”) and with theories of racial superiority (the economic and political dominance of people of North European extraction is a sign that some racial groups are intrinsically better than others), these are not central to the position.

The heart of social Darwinism is a pair of theses: first, people have intrinsic abilities and talents (and, correspondingly, intrinsic weaknesses), which will be expressed in their actions and achievements, independently of the social, economic and cultural environments in which they develop; second, intensifying competition enables the most talented to develop their potential to the full, and thereby to provide resources for a society that make life better for all. It is not entirely implausible to think that doctrines like these stand behind a vast swath of Republican proposals, including the recent budget, with its emphasis on providing greater economic benefits to the rich, transferring the burden to the middle class and poor, and especially in its proposals for reducing public services. Fuzzier versions of the theses have pervaded Republican rhetoric for the past decade (and even longer).

Wednesday, April 11, 2012

mitX aims to enhance education

MIT | Speaking to students in Simmons Hall on Wednesday evening, Provost L. Rafael Reif and Chancellor Eric Grimson explained how MITx — the Institute’s new initiative for online learning — could not only bring aspects of MIT’s education to the world at large, but also enhance and enrich the educational experience of students on campus.

“Our goal is to use this as a platform to strengthen the residential student experience,” Grimson said. The informal meeting was one of a series of sessions with undergraduates and graduate students to explain MITx, as well as to gather input and suggestions on “how you would imagine using this, and what you would like to see,” he said.

The MITx concept, Reif explained, grew out of a series of meetings over the last five years in which faculty, administrators and student representatives explored “ways we could use technology to enhance what we do on campus.”

MITx’s pilot class is now under way: 6.002x, Introduction to Circuits and Electronics. So far, more than 120,000 people worldwide have signed up for the course, which is now about halfway through its semester, Grimson said. At least 20,000 of those students have been actively keeping up with the course’s lectures, exercises and online tests.

For MIT’s own students, Reif said, materials developed for MITx classes will add an extra dimension to their on-campus experience. In some cases, he said, MITx might replace existing lecture components with online versions students could watch at their own pace, repeating segments if necessary to make sure they understand. Instead of a classroom experience defined by passive listening, MIT students might actually “experience increased face-to-face interactions” with each other and with instructors, Reif said.

When MIT students used online lectures and exercises, instructors would also have access to data about which sections students sailed through and which were more challenging. When students got to recitation sections, teaching assistants would have a clearer sense of which topics they needed to focus on, Grimson said.

MITx itself is an experiment, Reif said, and will be closely monitored so that course materials and the interactive experience can be improved as needed. The software platform developed for MITx is open-source and will eventually be released to the public, so that beneficial modifications made or suggested by anyone around the world can be incorporated into the platform.

“We believe this is the future,” Reif said of interactive online education as a supplement to campus learning. The software platform will be offered to other institutions for use in their classes — providing an alternative to a trend toward online educational materials developed by for-profit companies, he added. Fist tap Dale.

personal informatics for self-regulated learning

quantifiedself | Although the internet has fundamentally changed the speed and the scale of accessing information, that change has not seen such an impact in traditional forms of education. With popular new efforts like the video and exercise resource Khan Academy and online courses from Stanford (now spinning off into sites like Udemy and Coursera), people are talking about a revolution of personalized education – learners will be able to use computer-delivered content to learn at their own pace, whether supplementing schoolwork, developing job skills, or pursuing a hobby.

How personal informatics can help learning

There’s a problem here: learning on one’s own is not easy. Researchers have repeatedly found that people hold misconceptions about how to study well. For instance, rereading a passage gives the illusion of effective learning, when in reality quizzing oneself on the same material is far more effective for retention. Even then, people can misjudge which items they will or will not be able to remember later.

The process of self-regulated learning works best when people accurately self-assess their learning and use that information to determine learning strategies and choose among resources. This reflective process fits well into the framework of personal informatics used already for applications like keeping up with one’s finances or making personal healthcare decisions.

For most people, their only experience quantifying learning is through grades on assignments and tests. While these can allow some level of reflection, the feedback loop is usually not tight enough. We are unable to fix our mistakes, making grades feel less like an opportunity for improvement and more like a final judgment.

How personal learning data can be collected

With computer-based practice, there is a great opportunity for timely personalized feedback. Several decades of research in the learning sciences have developed learner models for estimating a person’s knowledge of a topic based on their actions in a computer-based practice environment, often called an intelligent tutoring system. For example, a learner model for a physics tutor may predict the error rate of responses in defining potential energy as a step in a physics problem; the predicted error rate decreases over the number of opportunities to use that skill, indicating learning (as seen in learning-curve data from the PSLC DataShop). Such systems can not only track progress and give feedback but also make suggestions for effective learning strategies. Fist tap Dale.
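The learning curves such models produce are often summarized by the "power law of practice": the predicted error rate falls off as a power function of the number of practice opportunities. A sketch of that relationship (the coefficients are arbitrary illustrations, not values fitted to any real tutor data):

```python
# Power-law learning curve: error rate falls with practice opportunities.
# The coefficients are illustrative, not fitted to real tutor data.
def predicted_error_rate(opportunity: int, initial: float = 0.6,
                         decay: float = 0.5) -> float:
    """Predicted error rate on the nth opportunity to use a skill (n >= 1)."""
    return initial * opportunity ** (-decay)

# Error rates across the first five opportunities: decreasing, i.e. learning.
curve = [round(predicted_error_rate(n), 3) for n in range(1, 6)]
print(curve)
```

A tutoring system can fit such a curve per skill and per learner, which is exactly the kind of tight, personal feedback loop that grades alone do not provide.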

3 major publishers sue open-education textbook startup

Chronicle | Open-education resources have been hailed as a trove of freely available information that can be used to build textbooks at virtually no cost. But a copyright lawsuit filed last month presents a potential roadblock for the burgeoning movement.

A group of three large academic publishers has sued the start-up Boundless Learning in federal court, alleging that the young company, which produces open-education alternatives to printed textbooks, has stolen the creative expression of their authors and editors, violating their intellectual-property rights. The publishers Pearson, Cengage Learning, and Macmillan Higher Education filed their joint complaint last month in the U.S. District Court for the Southern District of New York.

The publishers’ complaint takes issue with the way the upstart produces its open-education textbooks, which Boundless bills as free substitutes for expensive printed material. To gain access to the digital alternatives, students select the traditional books assigned in their classes, and Boundless pulls content from an array of open-education sources to knit together a text that the company claims is as good as the designated book. The company calls this mapping of printed book to open material “alignment”—a tactic the complaint said creates a finished product that violates the publishers’ copyrights.

“Notwithstanding whatever use it claims to make of ‘open source educational content,’ Defendant distributes ‘replacement textbooks’ that are created from, based upon, and overwhelmingly similar to Plaintiffs’ textbooks,” the complaint reads.

The complaint attempts to distance itself from attacking the legitimacy of open-education resources, but goes on to argue that Boundless is building its business model by stealing.

“Whether in the lecture hall or in a textbook, anyone is obviously free to teach the subjects biology, economics, or psychology, and can do so using, creating, and refining the pedagogical materials they think best, whether consisting of ‘open source educational content’ or otherwise,” it reads. “But by making unauthorized ‘shadow-versions’ of Plaintiffs’ copyrighted works, Defendant teaches only the age-old business model of theft.” Fist tap Dale.

let's not derail MIT from its path to excellence!

The Tech | MIT is the finest research institution in the world, in no small part because of its unwavering commitment to recruiting, admitting, and hiring the best talent in the world, even if that talent comes from less-advantaged or atypical backgrounds. Periodically examining the mechanisms by which the Institute pursues its mission is essential, but those examinations must be grounded in both data and an understanding of the MIT ethos. Brandon Briscoe’s execrable and intellectually dishonest rant against diversity and inclusion at the Institute is neither, serving as a disheartening call to take MIT in precisely the wrong direction. By mischaracterizing MIT’s admission and hiring processes as a de facto quota system, Briscoe effects a brilliant takedown of a straw man of his own creation and manages to cast aspersions on the intellect of every MIT-affiliated woman and underrepresented minority, all based on little more than a few sloppy citations and the courage of his own biased convictions.

Fundamentally, Briscoe draws a false dilemma between diversity and merit. Unlike wannabe peer institutions, the Institute neither kowtows to pedigree nor slavishly adheres to test scores and GPAs. Briscoe would have MIT break this tradition and emulate the admission process of second-rate institutions, namely by picking the best test scores, GPAs, and AP scores out of a hat. But what do these variables actually measure? An SAT score is a better measure of the wealth of one’s parents than of eventual success in college or the labor market. And how can we honestly claim that GPA and AP classes are a fair measure of merit given the gross disparities across schools, which are more racially segregated today than they were before the Civil Rights legislation Briscoe applauds? In our broken and disparate system, the hardest-working, highest-achieving student in a terrible school wouldn’t stand a chance against a middle-of-the-road student in an exceptional school without the incorporation of context into admissions decisions. Similarly, what do we do about the application of the utterly capable female student who is steered away from AP science classes by her counselors, teachers, and parents? Given these realities, it is clear that Briscoe’s idea of a blind meritocracy is antithetical to his simultaneous calls for fairness and equality. How would it be fair to discount the star student because he went to a bad school? How would it be equitable to ignore the potential of the young woman because societal forces told her she couldn’t be something?

Briscoe refers to discrimination as “past,” which is another egregious oversight. Silly claims of “post-racial” America notwithstanding, racial discrimination is still widespread, as is gender discrimination. We don’t have to look too hard to see evidence of bias. We know that blacks and Latinos were targeted for expensive, dangerous subprime loans during the housing boom and we know that black and Latino homeowners were targeted for foreclosure action during the bust. We know that because of the sad reality of racial segregation, black and Latino children are served by unsatisfactory schools. We know that women are penalized in the workplace for having children when men are not. We know that young girls are discouraged from pursuing careers in science and engineering. The idea that America solved all of these issues with the Civil Rights Movement and the Women’s Rights Movement is a fantasy largely perpetuated by those who have something to gain from race and gender discrimination.

Even at MIT, discrimination persists: Briscoe’s article proves that ipso facto. We need not rely on his words, however. The 2011 Report on the Status of Women Faculty in the Schools of Science and Engineering at MIT, for example, is both a story of amazing strides in gender equality at MIT and a sobering report of the prejudice that female faculty face even in 2011. Yet, Briscoe uses the fact that in one year the engineering school hired more women than men as evidence of systematic discrimination against men. He supports this point by noting that women are a minority of MIT graduate students and engineers nationally. The questionable nature of this argument could not be more obvious. Comparing one year of data to the summed result of decades of systematic discrimination against women would be laughable if it were not such a perfect example of the cognitive dissonance driving much of sexism.

Unfortunately, this fallacious argument is just the tip of the iceberg in Briscoe’s article, which takes an intellectually ugly turn when Briscoe reimagines President Hockfield’s decidedly uncontroversial remarks at a Martin Luther King Jr. Day celebration as a claim that MIT is “reserving job positions for certain racial groups” and acting “necessarily at the expense of white and Asian men.” These interpretations are, of course, both false and contrary to Hockfield’s meaning, yet Briscoe practically presents the latter as a direct quotation. Briscoe follows this up with a poor man’s legal analysis of affirmative action based on Title VII of the Civil Rights Act, declaring that MIT is flouting Federal law. Somehow Briscoe manages to leave out all of the modern case law (cf. Grutter v. Bollinger) that directly contradicts his claims.

At one point, Briscoe brings up the aforementioned Report on the Status of Women Faculty, cherry-picks the one quotation within reasonable Hamming distance of supporting his argument, and distorts it until it does. He makes unfounded claims about women being overrepresented on committees and brazenly quotes a woman out of context, implying that she claimed that her gender is an advantage in certain fields at MIT. This is false. Upon actually reading the report, it’s clear that the person was not speaking about her experience within MIT. As a matter of fact, the very same paragraph contains a quote from another woman who says: “[My] field is brutal and sexist. You talk to senior colleagues and they want to talk about anything but science — life, how you look.”

is MIT heading in the wrong direction with affirmative action?

The Tech | A key question brought up at the recent MIT Diversity Summit and the annual MLK Jr. breakfast was how MIT can balance excellence with diversity. It has been commonly noted that students and faculty alike perceive tension within the Institute between the frequent appeals for increased diversity, and the culture of hard work and meritocracy that make MIT what it is. This question received heavy emphasis in the 2010 Report on the Initiative for Faculty Race and Diversity. One of the final statements of that report was that, “While almost everyone at MIT would like the Institute to be an institution of merit and inclusion, it will be difficult to reach this ideal if race and ethnicity are ignored and presumed irrelevant.”

For the good of the Institute, I feel compelled to rephrase this — while almost everyone at MIT would like the Institute to be an institution of merit and inclusion, it will be difficult to reach this ideal if race, ethnicity, and gender continue to play such a big role in the social engineering agenda of the administration of MIT.

This agenda, actively pursued across the Institute with the goals of dramatically increasing the number of women and underrepresented minorities in the student body and faculty at MIT, and thereby attempting to increase nationwide participation by the same groups in STEM fields, is well-intentioned. But it is eroding not only the meritocracy at MIT, but also the quality of the experience that these same female and minority students and faculty have here.

To anyone who claims that MIT’s affirmative action policies focus only on outreach recruiting, provide no preference in admissions, faculty hiring, or positions, and therefore do not discriminate: please explain the following. Last spring, a gloating announcement was made by the interim dean of the School of Engineering stating that, for the first time ever, more women than men were hired for faculty positions that year. Compare this with the fact that in 2011 women comprised only 26 percent of the graduate student body in the MIT School of Engineering, and only 11 percent of career engineers nationally. Unless we conclude that the female student and postdoc engineering population is vastly more qualified than their male peers, which we have no reason to believe, clearly there is more going on at MIT than just “attracting” more female faculty. The same can be said for racial and ethnic considerations.

There is more concrete evidence of the way in which affirmative action at MIT really works. At the MLK Jr. breakfast this year, President Hockfield stated, “We need to engineer a set of underlying institutional mechanisms, expectations, habits, and rhythms that make diversity and inclusion simply part of what we work on here, every day.” She then went on to point out that, as reported by MIT News, the School of Science is identifying new funds to expand its pool of URM faculty. Wait a second — last time I checked, reserving job positions for certain racial groups is blatantly against federal law. Title VII of the Civil Rights Act of 1964 prohibits not only intentional discrimination, but also practices that have the effect of discriminating against individuals because of their race in any aspect of employment, including hiring and firing, recruitment, and training and apprenticeship programs. Can you imagine the outrage if President Hockfield stated that the School of Science was raising funding specifically for hiring more white faculty?

MIT claims to be a fair, equitable, inclusive, and merit-based institution. Yet, when the powers that be at this institute essentially declare that, “We are doing everything we can to admit, hire, and promote more women and underrepresented minorities, necessarily at the expense of white and Asian men” — and we compare this to the definition of discrimination: “Treatment or consideration of, or making a distinction in favor of, a person based on the group, class, or category to which that person belongs rather than on individual merit,” then how is MIT not being discriminatory and hypocritical?

Hizmet: the Gulen Schools Movement

Townhall | The charter school movement was presented to the American people as a way to have more parental control over public school education. Charter schools are public schools financed by local taxpayers and federal grants.

Charter schools are able to hire and fire teachers, administrators and staff and avoid control by education department bureaucrats and the teachers unions. No doubt there are some good charter schools, but loose controls have allowed a very different kind of school to emerge.

Charter schools have opened up a path for foreigners to run schools at the expense of the U.S. taxpayers, without much news coverage. One of the few breakthroughs in the media was a June 7, 2011, front-page article in The New York Times, which carried over to two full inside pages, about the many charter schools run by a secretive and powerful sect from Turkey called the Gulen Movement.

Headed by a Turkish preacher named Fethullah Gulen who had already founded a network of schools in 100 other countries, this movement opened its first U.S. charter school in 1999. Gulen's schools spread rapidly after he figured out how to work our system and get the U.S. taxpayers to import and finance his recruitment of followers for his worldwide religious and social movement.

The Gulen Movement now operates the largest charter school network in the United States. It has at least 135 schools, teaching more than 45,000 students in at least 26 states, financed by millions of U.S. taxpayer dollars a year.

The principals and school board members are usually Turkish men. Hundreds of Turkish teachers (referred to as "international teachers") and administrators have been admitted to the United States, often using H-1B visas, after claiming that qualified Americans cannot be found.

In addition, the Gulen Movement has nurtured a close-knit network of businesses and organizations run by Turkish immigrants. These include the big contractors who built or renovated the schools, plus a long list of vendors selling school lunches, uniforms, after-school programs, web design, teacher training, and special education materials. Fist tap Big Don.

Tuesday, April 10, 2012

asia: the growing hub of scientific research

asianscientist | The scientific global village has a new member: Asia. The shifting face of science reflects the strides made by Asian nations in recognizing R&D as a valuable industry.
  • Asia-8’s R&D expenditure is second only to the US, surpassing the EU-27
  • One-third of all scientific researchers worldwide are Asian
  • One-quarter of the world’s publications are from Asia
  • China’s scientific publishing output may overtake the US in 2013
The Asian research landscape is dynamic and burgeoning, with its researchers making significant contributions in academic publications, research & development, and high-technology manufacturing and exports.

The emerging Asia-8 economies (China, India, Japan, Malaysia, Singapore, South Korea, Taiwan and Thailand) are currently leading this change in status quo, driving a shift from the traditional hubs of research in the US and countries in the EU-27.

Asian R&D Spending Has Risen

An important measure of an industry’s growth, Asian R&D expenditure has risen significantly, with China’s spending now accounting for US$100 billion of the US$1.1 trillion worldwide total in 2007.

In figures from the 2010 National Science Foundation Key Science and Engineering Indicators, spending by the Asia-8 economies has now reached second place behind the US, surpassing that of the EU-27. Overall, R&D growth in the US and Europe has plateaued, averaging 5-6 percent annually over the period 1996–2007, whereas the R&D growth rates of Asian economies during the same period often exceeded 10 percent, with Chinese spending growing at 20 percent annually since 1999.
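Those growth rates compound quickly. As a rough illustration (a sketch using only the percentages quoted above, not any additional data), the doubling time of spending at a constant annual growth rate r is log(2)/log(1 + r):

```python
import math

def doubling_time(annual_rate):
    """Years for spending to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

# Rates quoted above: ~5-6% for the US/Europe, >10% for Asian
# economies, and 20% for China since 1999.
for label, rate in [("US/Europe (5.5%)", 0.055),
                    ("Asian economies (10%)", 0.10),
                    ("China (20%)", 0.20)]:
    print(f"{label}: doubles in ~{doubling_time(rate):.1f} years")
```

At 20 percent annual growth, spending doubles roughly every 4 years, versus roughly every 13 years at the US/European rate, which is why the gap closes so quickly.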

Reflecting an increase in private spending by domestic and foreign firms as well as public R&D spending, Asia-8 member Singapore has nearly doubled its spending between 1996 and 2007 from 1.37 to 2.61 percent of its GDP. This unprecedented growth is part of the island nation’s policy designed to raise its competitiveness through the development of a knowledge-intensive economy.

One Out Of Three Researchers Are Asian

Other signs of a shift in research can be observed by the distribution of researcher nationalities. Asia now contributes nearly one-third of the 5.8 million researchers worldwide.

The combined share of the world’s researchers from South Korea, Taiwan, China, and Singapore rose from 16 percent in 2003 to 31 percent in 2007, driven mostly by China’s rapid growth in R&D. In contrast, the combined share of US and EU researchers declined from 51 to 49 percent, and Japan’s share dropped from 17 to 12 percent.

There has also been a surge in a new generation of researchers from Asia, with 1.5 million students in China alone currently enrolled in postgraduate programs. This number is an increase of 57 percent over the previous year, according to the Chinese National Bureau of Statistics (NBS). In total, China had 31 million students in higher education institutions in 2010, an increase of 35 percent compared to 2005, and almost double the 2002 figure.

Publication Output From Asia Is Increasing

Another metric – publishing output – indicates that the world’s scientific hub is slowly shifting east. Between 1995 and 2007, the growth rate in science and engineering article output from the mature economies of the US (0.7 percent) and EU (1.9 percent) plateaued, in contrast to the rapidly developing science bases of the Asia-8 countries (9.0 percent) and China (16.5 percent).

Although the UK and US together still accounted for 38 percent of publications in 2004-2008, this figure is down from 45 percent in the previous five years. By contrast, the Asia-8, China, and Japan now account for 22 percent of the world’s total academic publication output. Singapore’s output, though small in comparison, more than tripled between 1996 and 2008, from 2,620 to 8,506 papers.
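The Singapore figures quoted above imply a steady compound growth rate; a quick Python check, using only the numbers in this passage:

```python
# Singapore's science and engineering paper output, as quoted above.
start_papers, end_papers = 2620, 8506
years = 2008 - 1996

growth_factor = end_papers / start_papers  # overall multiple over the period
cagr = growth_factor ** (1 / years) - 1    # implied compound annual growth rate

print(f"Growth factor: {growth_factor:.2f}x over {years} years")
print(f"Implied compound annual growth rate: {cagr:.1%}")
```

The implied rate is roughly 10 percent a year sustained for twelve years, consistent with the Asia-8 output growth rate cited earlier.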

China and Spain have now edged Australia and Switzerland out of the top ten publishing nations for the most recent five-year period.

chinese academy of sciences has big plans for nation's research

nature | Chemist Bai Chunli, President of the Chinese Academy of Sciences (CAS), talks to Nature about science in China and his vision for the prestigious institution.

What is the CAS's role in shaping science policy in China?

As the top think-tank for central government, the CAS advises on science policies and priority areas of research. In the past few decades, it has been instrumental in planning and instigating major initiatives, such as the establishment of scientific funding systems and major national research and development projects.

The CAS has also been a testing ground for reforms in policies and infrastructures. Under the Knowledge Innovation Program, for instance, reform and restructuring of the academy led to significant improvement in scientific output. And as part of Innovation 2020, the CAS will strive to boost national innovation capacity.

How will you deal with the relationship between basic research and applied science?

The CAS is committed to building a strong capacity for basic research, allowing for sufficient funding and research freedom. It will reduce the frequency of research evaluation while improving its quality; for research directly related to the public interest, evaluation will be based on national needs and socioeconomic benefits.

How will the CAS accelerate the conversion of basic research to products?

First, the CAS will strengthen its ties with the industrial sector by setting up joint research and development centres and working with industry on major national projects. We will house incubation programmes for promising business ideas.

Second, we will promote collaboration with provincial governments and set up regional research programmes. Finally, the CAS will establish incentives to encourage patents and their commercialization, and will improve its management and supporting infrastructure to better protect intellectual-property rights.

China's output of scientific papers has increased rapidly in recent years, but the impact of those papers is still relatively low. How do you propose to remedy that?

The quantity and quality of papers published by the CAS have increased significantly in the past decades — although, admittedly, the overall quality of papers in China needs to be improved. The CAS will continue to encourage its scientists to take on challenges in frontier research areas, and will support risky and long-term projects.

Meanwhile, our evaluation system, which is largely based on the number and quality of papers, will shift towards assessing the quality of innovation, its actual contribution to society and its state of development.

How do you want to cultivate the relationship between scientists in China and in the rest of the world?

The CAS will consolidate its collaborations with developed nations, and further promote cooperation with developing nations, especially China's neighbours. It will strive to set up long-term, strategic partnerships with first-rate research institutions, international science organizations and multinational research and development corporations. The CAS encourages its scientists to participate in international research projects and to take up positions in international organizations. We also warmly welcome scientists from other nations to visit and work in the academy.

There have been growing calls for reforms of the allocation and management of science funding. What is your position on that?

From a relatively low level, science in China has made significant progress in the past few decades. This is due to the efforts of the Chinese scientific community as well as government administrations.

failed states run on pig sh*t...,



bloomberg | Mexico, one of three Latin American nations that use nuclear power, is abandoning plans to build as many as 10 new reactors and will focus on natural gas-fired electricity plants after boosting discoveries of the fuel.

The country, which found evidence of trillions of cubic feet of gas in the past year, is “changing all its decisions, amid the very abundant existence of natural-gas deposits,” Energy Minister Jordy Herrera said in a Nov. 1 interview. Mexico will seek private investment of about $10 billion during five years to expand its natural gas pipeline network, he said.

Mexico, Latin America’s second-largest economy, is boosting estimated gas reserves after Petroleos Mexicanos discovered new deposits in deep waters of the Gulf of Mexico and shale gas in the border state of Coahuila. The country was considering nuclear power as part of plans to boost capacity by almost three-quarters to 86 gigawatts within 15 years, from about 50 gigawatts, and now prefers gas for cost reasons, he said.

“This is a very good decision by the Mexican government,” said James Williams, an economist at WTRG Economics, an energy research firm in London, Arkansas. With a power generation project based on gas “you can build multiple plants at a much lower cost and much faster pace than a nuclear facility.”

Nations around the world are also reconsidering plans for increasing their reliance on nuclear power after the March 11 earthquake in Japan that wrecked the Fukushima Dai-Ichi plant, causing a loss of cooling, the meltdown of three reactors and the worst atomic disaster since the leak at Chernobyl in 1986.

Monday, April 09, 2012

bearing in mind that chernobyl caused the collapse of the soviet union...,



akiomatsumura | Japan’s former Ambassador to Switzerland, Mr. Mitsuhei Murata, was invited to speak at the Public Hearing of the Budgetary Committee of the House of Councilors on March 22, 2012, on the Fukushima nuclear power plant accident. Before the Committee, Ambassador Murata strongly stated that if the crippled building of reactor unit 4—with 1,535 fuel rods in the spent fuel pool 100 feet (30 meters) above the ground—collapses, not only will it cause a shutdown of all six reactors but will also affect the common spent fuel pool containing 6,375 fuel rods, located some 50 meters from reactor 4. In both cases the radioactive rods are not protected by a containment vessel; dangerously, they are open to the air. This would certainly cause a global catastrophe like we have never before experienced. He stressed that the responsibility of Japan to the rest of the world is immeasurable. Such a catastrophe would affect us all for centuries. Ambassador Murata informed us that the total number of spent fuel rods at the Fukushima Daiichi site, excluding the rods in the pressure vessels, is 11,421 (396+615+566+1,535+994+940+6,375).

I asked top spent-fuel pools expert Mr. Robert Alvarez, former Senior Policy Adviser to the Secretary and Deputy Assistant Secretary for National Security and the Environment at the U.S. Department of Energy, for an explanation of the potential impact of the 11,421 rods.

I received an astounding response from Mr. Alvarez [updated 4/5/12]:
In recent times, more information about the spent fuel situation at the Fukushima-Dai-Ichi site has become known. It is my understanding that of the 1,532 spent fuel assemblies in pool No. 4, 304 assemblies are fresh and unirradiated. This then leaves 1,231 irradiated spent fuel rods in pool No. 4, which contain roughly 37 million curies (~1.4E+18 Becquerel) of long-lived radioactivity. The No. 4 pool is about 100 feet above ground, is structurally damaged, and is exposed to the open elements. If an earthquake or other event were to cause this pool to drain, this could result in a catastrophic radiological fire involving nearly 10 times the amount of Cs-137 released by the Chernobyl accident.

The infrastructure to safely remove this material was destroyed, as it was at the other three reactors. Spent reactor fuel cannot be simply lifted into the air by a crane as if it were routine cargo. In order to prevent severe radiation exposures, fires and possible explosions, it must be transferred at all times in water and heavily shielded structures into dry casks. As this has never been done before, the removal of the spent fuel from the pools at the damaged Fukushima-Dai-Ichi reactors will require a major and time-consuming re-construction effort and will be charting unknown waters. Despite the enormous destruction caused at the Dai-Ichi site, dry casks holding a smaller amount of spent fuel appear to be unscathed.

Based on U.S. Energy Department data, assuming a total of 11,138 spent fuel assemblies are being stored at the Dai-Ichi site, nearly all of which are in pools, they contain roughly 336 million curies (~1.2 E+19 Bq) of long-lived radioactivity. About 134 million curies is Cesium-137 — roughly 85 times the amount of Cs-137 released at the Chernobyl accident as estimated by the U.S. National Council on Radiation Protection (NCRP). The total spent reactor fuel inventory at the Fukushima-Dai-Ichi site contains nearly half of the total amount of Cs-137 estimated by the NCRP to have been released by all atmospheric nuclear weapons testing, Chernobyl, and world-wide reprocessing plants (~270 million curies or ~9.9 E+18 Becquerel).

It is important for the public to understand that reactors that have been operating for decades, such as those at the Fukushima-Dai-Ichi site have generated some of the largest concentrations of radioactivity on the planet.
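The arithmetic behind these figures is easy to verify. A short Python sketch checks the rod total and the curie-to-becquerel conversions using only numbers quoted in this passage (1 curie = 3.7 × 10^10 becquerels):

```python
CI_TO_BQ = 3.7e10  # becquerels per curie

# Spent fuel rods per pool, as listed by Ambassador Murata above
pools = [396, 615, 566, 1535, 994, 940, 6375]
total_rods = sum(pools)
print(total_rods)  # 11421, matching the stated total

# Curie figures from the passage, converted to becquerels
pool4_bq = 37e6 * CI_TO_BQ   # ~1.4e18 Bq, as quoted for pool No. 4
site_bq = 336e6 * CI_TO_BQ   # ~1.2e19 Bq, as quoted for the whole site
print(f"{pool4_bq:.1e} Bq, {site_bq:.1e} Bq")
```

Both conversions round to the becquerel values given in the quote, so the curie and becquerel figures are internally consistent.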
Many of our readers might find it difficult to appreciate the actual meaning of the figure, yet we can grasp what 85 times more Cesium-137 than Chernobyl would mean. It would destroy the world environment and our civilization. This is not rocket science, nor does it connect to the pugilistic debate over nuclear power plants. This is an issue of human survival. Fist tap Dale.

keep it moving, keep it moving, nothing to see here...,

Smithsonian | Recent research supports the conclusions of a controversial environmental study released 40 years ago: The world is on track for disaster. So says Australian physicist Graham Turner, who revisited perhaps the most groundbreaking academic work of the 1970s, The Limits to Growth.

Written by MIT researchers for an international think tank, the Club of Rome, the study used computers to model several possible future scenarios. The business-as-usual scenario estimated that if human beings continued to consume more than nature was capable of providing, global economic collapse and precipitous population decline could occur by 2030.

However, the study also noted that unlimited economic growth was possible, if governments forged policies and invested in technologies to regulate the expansion of humanity’s ecological footprint. Prominent economists disagreed with the report’s methodology and conclusions. Yale’s Henry Wallich opposed active intervention, declaring that limiting economic growth too soon would be “consigning billions to permanent poverty.”

Turner compared real-world data from 1970 to 2000 with the business-as-usual scenario. He found the predictions nearly matched the facts. “There is a very clear warning bell being rung here,” he says. “We are not on a sustainable trajectory.”

Sunday, April 08, 2012

a universe of self-replicating code

kurzweilai | What we’re missing now, on another level, is not just biology, but cosmology. People treat the digital universe as some sort of metaphor, just a cute word for all these products: the universe of Apple, the universe of Google, the universe of Facebook, which collectively constitute the digital universe, and we can only see it in human terms, asking what it does for us.

We’re missing a tremendous opportunity. We’re asleep at the switch because it’s not a metaphor. In 1945 we actually did create a new universe. This is a universe of numbers with a life of their own, that we only see in terms of what those numbers can do for us. Can they record this interview? Can they play our music? Can they order our books on Amazon? If you cross the mirror in the other direction, there really is a universe of self-reproducing digital code. When I last checked, it was growing by five trillion bits per second. And that’s not just a metaphor for something else. It actually is. It’s a physical reality.

[GEORGE DYSON:] When I started looking at the beginnings of the modern digital universe — at the origin of this two-dimensional address matrix — I became interested in the question of what had been done with it at the beginning. Of course, one of the things was the work on the hydrogen bomb.

Another thing that surprised and delighted me was to find that a Norwegian-Italian mathematical biologist and viral geneticist, Nils Aall Barricelli, had tried to come to Princeton in 1951, as soon as he heard this machine was being built. He had trouble getting a visa, so he finally shows up in early 1953 when the machine is running, and immediately begins these experiments, to see if he could inoculate this two-dimensional matrix with random strings of one-dimensional numbers that can self-replicate and cross-breed, and do all the things that we know that code does in biology, and see what happens.

And he observed. He was an observational biologist. He saw all sorts of behavior that he read all sorts of biological implications into. He was way too far ahead of the time, so no one paid attention and this was forgotten.

We now live in a world where everything he dreamed of really did happen. And, for some reason, von Neumann never publicized Barricelli’s work. I don’t know if there was a personal rivalry or what happened, but von Neumann died, and his papers on self-reproducing automata were published posthumously [edited by Arthur W. Burks] and there was no mention of Barricelli. Part of it was this fear that it really would provoke the public. They called computers “electronic brains” at that time. It was scary enough that we might be building machines that would think. But the idea of producing artificial life was even more Frankenstein-like. I think that’s one reason we never heard about that.

Just as we later worried about recombinant DNA, what if these things escaped? What would they do to the world? Could this be the end of the world as we know it if these self-replicating numerical creatures got loose?

But, we now live in a world where they did get loose — a world increasingly run by self-replicating strings of code. Everything we love and use today is, in a lot of ways, self-reproducing exactly as Turing, von Neumann, and Barricelli prescribed. It’s a very symbiotic relationship: the same way life found a way to use the self-replicating qualities of these polynucleotide molecules to the great benefit of life as a whole, there’s no reason life won’t use the self-replicating abilities of digital code, and that’s what’s happening. If you look at what people like Craig Venter and the thousand less-known companies are doing, we’re doing exactly that, from the bottom up.

What’s, in a way, missing in today’s world is more biology of the Internet. More people like Nils Barricelli to go out and look at what’s going on, not from a business or legal point of view, but just to observe what’s going on.