Saturday, June 07, 2014

history shows that mass spying is always aimed at crushing dissent



During the Vietnam War, the NSA spied on Senator Frank Church because of his criticism of the war. The NSA also spied on Senator Howard Baker.
Senator Church – the head of a congressional committee investigating COINTELPRO – warned in 1975:
[NSA's] capability at any time could be turned around on the American people, and no American would have any privacy left, such is the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. [If a dictator ever took over, the N.S.A.] could enable it to impose total tyranny, and there would be no way to fight back.
This is, in fact, what’s happened …

Indeed, American constitutional law experts say that the NSA is doing to the American people today exactly what King George did to the Colonists … using “general warrant”-type spying.

And it is clear that the government is using its massive spy programs in order to track those who question government policies. See this, this, this and this.

Todd Gitlin – chair of the PhD program in communications at Columbia University, and a professor of journalism and sociology – notes:

Under the Freedom of Information Act, the Partnership for Civil Justice Fund (PCJF) has unearthed documents showing that, in 2011 and 2012, the Department of Homeland Security (DHS) and other federal agencies were busy surveilling and worrying about a good number of Occupy groups — during the very time that they were missing actual warnings about actual terrorist actions.

From its beginnings, the Occupy movement was of considerable interest to the DHS, the FBI, and other law enforcement and intelligence agencies, while true terrorists were slipping past the nets they cast in the wrong places. In the fall of 2011, the DHS specifically asked its regional affiliates to report on “Peaceful Activist Demonstrations, in addition to reporting on domestic terrorist acts and ‘significant criminal activity.’”

Aware that Occupy was overwhelmingly peaceful, the federally funded Boston Regional Intelligence Center (BRIC), one of 77 coordination centers known generically as “fusion centers,” was busy monitoring Occupy Boston daily. As the investigative journalist Michael Isikoff recently reported, they were not only tracking Occupy-related Facebook pages and websites but “writing reports on the movement’s potential impact on ‘commercial and financial sector assets.’”

It was in this period that the FBI received the second of two Russian police warnings about the extremist Islamist activities of Tamerlan Tsarnaev, the future Boston Marathon bomber. That city’s police commissioner later testified that the federal authorities did not pass any information at all about the Tsarnaev brothers on to him, though there’s no point in letting the Boston police off the hook either. The ACLU has uncovered documents showing that, during the same period, they were paying close attention to the internal workings of…Code Pink and Veterans for Peace.

***

In Alaska, Alabama, Florida, Mississippi, Tennessee, and Wisconsin, intelligence was not only pooled among public law enforcement agencies, but shared with private corporations — and vice versa.

Nationally, in 2011, the FBI and DHS were, in the words of Mara Verheyden-Hilliard, executive director of the Partnership for Civil Justice Fund, “treating protests against the corporate and banking structure of America as potential criminal and terrorist activity.” Last December, using FOIA, PCJF obtained 112 pages of documents (heavily redacted) revealing a good deal of evidence for what might otherwise seem like an outlandish charge: that federal authorities were, in Verheyden-Hilliard’s words, “functioning as a de facto intelligence arm of Wall Street and Corporate America.” Consider these examples from PCJF’s summary of federal agencies working directly not only with local authorities but on behalf of the private sector:

• “As early as August 19, 2011, the FBI in New York was meeting with the New York Stock Exchange to discuss the Occupy Wall Street protests that wouldn’t start for another month. By September, prior to the start of the OWS, the FBI was notifying businesses that they might be the focus of an OWS protest.”

• “The FBI in Albany and the Syracuse Joint Terrorism Task Force disseminated information to… [22] campus police officials… A representative of the State University of New York at Oswego contacted the FBI for information on the OWS protests and reported to the FBI on the SUNY-Oswego Occupy encampment made up of students and professors.”

• An entity called the Domestic Security Alliance Council (DSAC), “a strategic partnership between the FBI, the Department of Homeland Security, and the private sector,” sent around information regarding Occupy protests at West Coast ports [on Nov. 2, 2011] to “raise awareness concerning this type of criminal activity.” The DSAC report contained “a ‘handling notice’ that the information is ‘meant for use primarily within the corporate security community. Such messages shall not be released in either written or oral form to the media, the general public or other personnel…’ Naval Criminal Investigative Services (NCIS) reported to DSAC on the relationship between OWS and organized labor.”

• DSAC gave tips to its corporate clients on “civil unrest,” which it defined as running the gamut from “small, organized rallies to large-scale demonstrations and rioting.” ***

• The FBI in Anchorage, Jacksonville, Tampa, Richmond, Memphis, Milwaukee, and Birmingham also gathered information and briefed local officials on wholly peaceful Occupy activities.

• In Jackson, Mississippi, FBI agents “attended a meeting with the Bank Security Group in Biloxi, MS with multiple private banks and the Biloxi Police Department, in which they discussed an announced protest for ‘National Bad Bank Sit-In-Day’ on December 7, 2011.” Also in Jackson, “the Joint Terrorism Task Force issued a ‘Counterterrorism Preparedness’ alert” that, despite heavy redactions, notes the need to ‘document…the Occupy Wall Street Movement.’”

***

In 2010, the American Civil Liberties Union of Tennessee learned … that the Tennessee Fusion Center was “highlighting on its website map of ‘Terrorism Events and Other Suspicious Activity’ a recent ACLU-TN letter to school superintendents. The letter encourages schools to be supportive of all religious beliefs during the holiday season.”

***

Consider an “intelligence report” from the North Central Texas fusion center, which in a 2009 “Prevention Awareness Bulletin” described, in the ACLU’s words, “a purported conspiracy between Muslim civil rights organizations, lobbying groups, the anti-war movement, a former U.S. Congresswoman, the U.S. Treasury Department, and hip hop bands to spread tolerance in the United States, which would ‘provide an environment for terrorist organizations to flourish.’”

***

And those Virginia and Texas fusion centers were hardly alone in expanding the definition of “terrorist” to fit just about anyone who might oppose government policies. According to a 2010 report in the Los Angeles Times, the Justice Department Inspector General found that “FBI agents improperly opened investigations into Greenpeace and several other domestic advocacy groups after the Sept. 11 terrorist attacks in 2001, and put the names of some of their members on terrorist watch lists based on evidence that turned out to be ‘factually weak.’” The Inspector General called “troubling” what the Los Angeles Times described as “singling out some of the domestic groups for investigations that lasted up to five years, and were extended ‘without adequate basis.’”

Subsequently, the FBI continued to maintain investigative files on groups like Greenpeace, the Catholic Worker, and the Thomas Merton Center in Pittsburgh, cases where (in the politely put words of the Inspector General’s report) “there was little indication of any possible federal crimes… In some cases, the FBI classified some investigations relating to nonviolent civil disobedience under its ‘acts of terrorism’ classification.”  Fist tap Big Don.

vodafone reveals that other governments use them like a 3rd world nsa


WaPo | Britain’s Vodafone revealed Friday that several governments are collecting surveillance data directly from its networks without any legal review and publicly urged more safeguards against such unfettered access to the private communications of its customers.

The declarations, made by the world’s second-largest cellular carrier, show that the type of access to telecommunications networks enjoyed by the U.S. National Security Agency also occurs in other countries where legal protections almost certainly are lower. Vodafone’s networks span much of Europe and parts of Africa and Asia.

The company said that voice, Internet and other data could be collected without any court review in “a small number” of nations. Although the company does not name them, news reports suggested that one is Britain, whose GCHQ intelligence agency is a close partner of the NSA in filtering the world’s Internet traffic.

“It is a healthy reminder that no amount of legal reform in the United States will solve the problem if there isn’t an international solution,” said Peter Eckersley, director of technology projects for the Electronic Frontier Foundation, a civil liberties group that is based in San Francisco.

Vodafone’s statements, coming in the company’s first report on data demands made by authorities in the countries where it operates, were unusually pointed, detailed and sober by the standards of the “transparency reports” issued by a growing number of companies since the revelations by former NSA contractor Edward Snowden.

The Vodafone report includes an 88-page annex detailing laws and experiences in 29 nations where, collectively, government agencies have made millions of data requests of the company.

In several of those countries — South Africa, Turkey, Egypt and others — publishing even such rudimentary totals of requests is prohibited by law. The report merely summarizes the legal standards there rather than quantifying the extent of government data collection.

“Refusal to comply with a country’s laws is not an option,” the company said in its report. “If we do not comply with a lawful demand for assistance, governments can remove our licence to operate, preventing us from providing services to our customers. Our employees who live and work in the country concerned may also be at risk of criminal sanctions, including imprisonment.”

Friday, June 06, 2014

sons of wichita


WaPo |  Much of “Sons of Wichita” is spent tracing the two-decade legal battle between the siblings over control of Koch Industries — Charles and David on one side vs. Bill and, occasionally, Frederick. The fight became so nasty that the brothers at one point hired private investigators to dig up dirt on one another. Bill’s investigators “pilfered trash from the homes and offices of Charles, David, and three of their lawyers, bribing janitors and trash collectors,” Schulman writes.

The family war played out before jurors in a Topeka, Kan., courtroom in 1998, in the case of Koch v. Koch Industries, during which David broke down in tears on the stand recounting his twin’s attempt to assert control over the company. When Charles and David prevailed, Bill told reporters that he would appeal, adding, “These guys are crooks.”

The brothers would not reconcile until 2001, meeting for dinner at Bill’s Palm Beach mansion to sign a final settlement that divvied up their father’s property. It was the first time they had shared a meal in almost 20 years.

For those who follow the Kochs and their political activities, “Sons of Wichita” does not provide major revelations about how they operate. But Schulman lays out a cogent narrative of Charles Koch’s political evolution, starting as a member of his father’s John Birch Society. When an acquaintance visited the family home in the 1960s, Charles Koch blanched when he saw him carrying a worn copy of Ernest Hemingway’s “The Sun Also Rises.” Hemingway “was a communist,” Charles explained to the guest, who had to leave the book on the stoop outside.

After Fred Koch’s death in 1967, Charles broke with the Birch Society over its support for the Vietnam War. He embraced libertarianism and began bankrolling the movement, helping launch a new think tank, the Cato Institute, in 1977. It was the beginning of what would grow into a far-flung constellation of academic programs, think tanks and politically active nonprofits that now make up the Koch network. (The term “Kochtopus” was coined early on by a libertarian critic who charged Charles with trying to buy the party.)

As his political involvement deepened, Charles initially regarded the GOP with disdain.

what economics can learn from theology about human beings...,


journaltalk |  It seems to me that the dominant narrative of mainstream economics in the past few decades has been one of conquest. If economists engaged with ‘foreigners’ from other fields, it was usually only because they wanted to colonize them. After all, economics may quite well be the last social science where the word imperialism is treated with affection. George Stigler endorsed the field as “an imperial science” that had “been aggressive in addressing central problems in a considerable number of neighboring social disciplines, and without any invitations,” offering an eschatological vision in which he praised “Heinrich Gossen, a high priest of the theory of utility-maximizing behavior” and heralded “the spread of the economists’ theory of behavior to the entire domain of the social sciences” (Stigler 1984, 311-313). Similarly, Gary Becker (1997/1993, 52) argued that “The rational choice model provides the most promising basis presently available for a unified approach to the analysis of the social world by scholars from different social sciences.”

Yet behind this imperialistic rhetoric there has also been a growing feeling of frustration: despite all the battles, economists’ rational proposals, chiseled to perfection, are often ignored. What’s worse, the very methodological foundations of economic science seem to be crumbling as it spreads over an ever growing territory—just like in the case of the (temporarily) eternal imperial Rome (Cullenberg, Amariglio, and Ruccio 2001). Today, the paths to truths seem to be winding and numerous, and some economists are finally willing to admit that unrealistic assumptions are likely to lead to unrealistic, and irrelevant, worlds.

The core of the trouble with mainstream economics is, I believe, its vision of a utility maximizing human being—the infamous Max U (McCloskey 2010, 297; Lipka 2013). How can we overcome the flatness of the Beckerian-Stiglerian framework? It will perhaps sound daring to economists who have pride in the practicality of their science when I suggest that the place to ask for help is—take a deep breath—theology.

in the u.s., 42% believe in a creationist view of human origins...,


gallup |  More than four in 10 Americans continue to believe that God created humans in their present form 10,000 years ago, a view that has changed little over the past three decades. Half of Americans believe humans evolved, with the majority of these saying God guided the evolutionary process. However, the percentage who say God was not involved is rising.

This latest update is from Gallup's Values and Beliefs survey conducted May 8-11. Gallup first asked the three-part question about human origins in 1982.

The percentage of the U.S. population choosing the creationist perspective as closest to their own view has fluctuated in a narrow range between 40% and 47% since the question's inception. There is little indication of a sustained downward trend in the proportion of the U.S. population who hold a creationist view of human origins. At the same time, the percentage of Americans who adhere to a strict secularist viewpoint -- that humans evolved over time, with God having no part in this process -- has doubled since 1999.

Religiousness, Age, Education Related to Americans' Views
Historically, Americans' views on the origin of humans have been related to their religiousness, education, and age.
  • Religiousness relates most strongly to these views, which is not surprising, given that this question deals directly with God's role in human origins. The percentage of Americans who accept the creationist viewpoint ranges from 69% among those who attend religious services weekly to 23% among those who seldom or never attend.
  • Educational attainment is also related to these attitudes, with belief in the creationist perspective dropping from 57% among Americans with no more than a high school education to less than half that (27%) among those with a college degree. Those with college degrees are, accordingly, much more likely to choose one of the two evolutionary explanations.
  • Younger Americans -- who are typically less religious than their elders -- are less likely to choose the creationist perspective than are older Americans. Americans aged 65 and older -- the most religious of any age group -- are most likely to choose the creationist perspective.

Thursday, June 05, 2014

are you ready for nuclear war?


paulcraigroberts |  Pay close attention to Steven Starr’s guest column, “The Lethality of Nuclear Weapons.” http://www.paulcraigroberts.org/2014/05/30/lethality-nuclear-weapons/ Washington thinks nuclear war can be won and is planning for a first strike on Russia, and perhaps China, in order to prevent any challenge to Washington’s world hegemony.

The plan is far advanced, and the implementation of the plan is underway. As I have reported previously, US strategic doctrine was changed and the role of nuclear missiles was elevated from a retaliatory role to an offensive first strike role. US anti-ballistic missile (ABM) bases have been established in Poland on Russia’s frontier, and other bases are planned. When completed Russia will be ringed with US missile bases.

Anti-ballistic missiles, known as “star wars,” are weapons designed to intercept and destroy ICBMs. In Washington’s war doctrine, the US hits Russia with a first strike, and whatever retaliatory force Russia might have remaining is prevented from reaching the US by the shield of ABMs.

The reason Washington gave for the change in war doctrine is the possibility that terrorists might obtain a nuclear weapon with which to destroy an American city. This explanation is nonsensical. Terrorists are individuals or a group of individuals, not a country with a threatening military. To use nuclear weapons against terrorists would destroy far more than the terrorists and be pointless as a drone with a conventional missile would suffice.

The reason Washington gave for the ABM base in Poland is to protect Europe from Iranian ICBMs. Washington and every European government knows that Iran has no ICBMs and that Iran has not indicated any intent to attack Europe.

No government believes Washington’s reasons. Every government realizes that Washington’s reasons are feeble attempts to hide the fact that it is creating the capability on the ground to win a nuclear war.

The Russian government understands that the change in US war doctrine and the US ABM bases on its borders are directed at Russia and are indications that Washington plans a first strike with nuclear weapons on Russia.

China has also understood that Washington has similar intentions toward China. As I reported several months ago, in response to Washington’s threat China called the world’s attention to China’s ability to destroy the US should Washington initiate such a conflict.

a metaphor for how we explain acts of violence?

slate |  It’s so much easier to talk about Slender Man than about the girl who was stabbed 19 times over the weekend. Horrific violence sends us reeling, hunting for significance and explanations, veering down side streets to avoid our own loss of power. We seek out skeleton key–like details we can use to unlock the newest awful narrative. We want to know why it happened.

In the wake of the Isla Vista, California, shootings in May, many people pored over the reasons for Elliot Rodger’s rampage. Did all those men and women die because of guns? Mental illness? Misogyny? Hollywood representations of college hedonism? Or, wait—were we focusing too hard on the psychology of a maniac? Whatever the precise mix of factors ultimately was, everyone on the Internet had a different theory. The #YesAllWomen hashtag proliferated across Twitter like a house fire, situating Rodger on a sexist continuum that drew in everyday examples: being catcalled, being groped at a bar. As a productive and necessary conversation unfolded, and a lot of men woke up to the realities of misogyny, others asked whether there wasn’t something unseemly in how writers were shaping the tragedy, reducing its convolutions to tidy arguments about pet causes. And then more people countered that these arguments matter.

A vacuum of meaning opens up behind atrocity, and people fill it by looking at the facts through their personal viewfinders. That is why there’s a kind of poetic justice to the two girls attributing their acts to Slender Man. He’s the perfect metaphor for the becauses we collectively brainstorm—an Internet phantom who looks a little different to everyone. Slender Man is not an explanation for anything—he’s a bogey with elastic limbs who can look like a normal guy or a tentacled nightmare—but, it seems, he’s better than no reason at all.  

the slender man

wikipedia |  The Slender Man (also known as Slender Man or Slenderman) is a fictional character that originated as an Internet meme created by Something Awful forums user Eric Knudsen (a.k.a. "Victor Surge") in 2009. It is depicted as resembling a thin, unnaturally tall man with a blank and usually featureless face, wearing a black suit. Stories of the Slender Man commonly feature him stalking, abducting, or traumatizing people, particularly children.[1] The Slender Man is not confined to a single narrative, but appears in many disparate works of fiction, mostly composed online.[2]

Origin 
The Slender Man was created on a thread in the Something Awful Internet forum begun on June 8, 2009, with the goal of editing photographs to contain supernatural entities. On June 10, a forum poster with the user name "Victor Surge" contributed two black and white images of groups of children, to which he added a tall, thin spectral figure wearing a black suit.[3][4] Previous entries had consisted solely of photographs; however, Surge supplemented his submission with snatches of text, supposedly from witnesses, describing the abductions of the groups of children, and giving the character the name, "The Slender Man":
We didn't want to go, we didn't want to kill them, but its persistent silence and outstretched arms horrified and comforted us at the same time…
1983, photographer unknown, presumed dead.
One of two recovered photographs from the Stirling City Library blaze. Notable for being taken the day which fourteen children vanished and for what is referred to as “The Slender Man”. Deformities cited as film defects by officials. Fire at library occurred one week later. Actual photograph confiscated as evidence.
1986, photographer: Mary Thomas, missing since June 13th, 1986.[4]
These additions effectively transformed the photographs into a work of fiction. Subsequent posters expanded upon the character, adding their own visual or textual contributions.[3][4]

Wednesday, June 04, 2014

where business interests ARE political interests...,


billmoyers |  In this excerpt from Winner-Take-All Politics: How Washington Made the Rich Richer — and Turned Its Back on the Middle Class, authors Jacob S. Hacker and Paul Pierson explain the significance of the Powell Memorandum, a call-to-arms for American corporations written by Virginia lawyer (and future U.S. Supreme Court justice) Lewis Powell to a neighbor working with the U.S. Chamber of Commerce.

 In the fall of 1972, the venerable National Association of Manufacturers (NAM) made a surprising announcement: It planned to move its main offices from New York to Washington, D.C. As its chief, Burt Raynes, observed:
We have been in New York since before the turn of the century, because we regarded this city as the center of business and industry. But the thing that affects business most today is government. The interrelationship of business with business is no longer so important as the interrelationship of business with government. In the last several years, that has become very apparent to us.[1]
To be more precise, what had become very apparent to the business community was that it was getting its clock cleaned. Used to having broad sway, employers faced a series of surprising defeats in the 1960s and early 1970s. As we have seen, these defeats continued unabated when Richard Nixon won the White House. Despite electoral setbacks, the liberalism of the Great Society had surprising political momentum. “From 1969 to 1972,” as the political scientist David Vogel summarizes in one of the best books on the political role of business, “virtually the entire American business community experienced a series of political setbacks without parallel in the postwar period.” In particular, Washington undertook a vast expansion of its regulatory power, introducing tough and extensive restrictions and requirements on business in areas from the environment to occupational safety to consumer protection.[2]

In corporate circles, this pronounced and sustained shift was met with disbelief and then alarm. By 1971, future Supreme Court justice Lewis Powell felt compelled to assert, in a memo that was to help galvanize business circles, that the “American economic system is under broad attack.” This attack, Powell maintained, required mobilization for political combat: “Business must learn the lesson . . . that political power is necessary; that such power must be assiduously cultivated; and that when necessary, it must be used aggressively and with determination—without embarrassment and without the reluctance which has been so characteristic of American business.” Moreover, Powell stressed, the critical ingredient for success would be organization: “Strength lies in organization, in careful long-range planning and implementation, in consistency of action over an indefinite period of years, in the scale of financing available only through joint effort, and in the political power available only through united action and national organizations.”[3]

Powell was just one of many who pushed to reinvigorate the political clout of employers. Before the policy winds shifted in the ’60s, business had seen little need to mobilize anything more than a network of trade associations. It relied mostly on personal contacts, and the main role of lobbyists in Washington was to troll for government contracts and tax breaks. The explosion of policy activism, and rise of public interest groups like those affiliated with Ralph Nader, created a fundamental challenge. And as the 1970s progressed, the problems seemed to be getting worse. Powell wrote in 1971, but even after Nixon swept to a landslide reelection the following year, the legislative tide continued to come in. With Watergate leading to Nixon’s humiliating resignation and a spectacular Democratic victory in 1974, the situation grew even more dire. “The danger had suddenly escalated,” Bryce Harlow, senior Washington representative for Procter & Gamble and one of the engineers of the corporate political revival was to say later. “We had to prevent business from being rolled up and put in the trash can by that Congress.”[4]

Powell, Harlow, and others sought to replace the old boys’ club with a more modern, sophisticated, and diversified apparatus — one capable of advancing employers’ interests even under the most difficult political circumstances. They recognized that business had hardly begun to tap its potential for wielding political power. Not only were the financial resources at the disposal of business leaders unrivaled. The hierarchical structures of corporations made it possible for a handful of decision-makers to deploy those resources and combine them with the massive but underutilized capacities of their far-flung organizations. These were the preconditions for an organizational revolution that was to remake Washington in less than a decade — and, in the process, lay the critical groundwork for winner-take-all politics.

the memo that spawned right-wing think tanks, lobbies, and the contemporary "corporations as persons" movement...,


reclaimdemocracy | Introduction - In 1971, Lewis Powell, then a corporate lawyer and member of the boards of 11 corporations, wrote a memo to his friend Eugene Sydnor, Jr., the Director of the U.S. Chamber of Commerce. The memorandum was dated August 23, 1971, two months prior to Powell’s nomination by President Nixon to the U.S. Supreme Court.

The Powell Memo did not become available to the public until long after his confirmation to the Court. It was leaked to Jack Anderson, a liberal syndicated columnist, who stirred interest in the document when he cited it as reason to doubt Powell’s legal objectivity. Anderson cautioned that Powell “might use his position on the Supreme Court to put his ideas into practice…in behalf of business interests.”

Though Powell’s memo was not the sole influence, the Chamber and corporate activists took his advice to heart and began building a powerful array of institutions designed to shift public attitudes and beliefs over the course of years and decades. The memo influenced or inspired the creation of the Heritage Foundation, the Manhattan Institute, the Cato Institute, Citizens for a Sound Economy, Accuracy in Academe, and other powerful organizations. Their long-term focus began paying off handsomely in the 1980s, in coordination with the Reagan Administration’s “hands-off business” philosophy.

Most notable about these institutions was their focus on education, shifting values, and movement-building — a focus we share, though often with sharply contrasting goals.*  (See our endnote for more on this.)

So did Powell’s political views influence his judicial decisions? The evidence is mixed. Powell did embrace expansion of corporate privilege and wrote the majority opinion in First National Bank of Boston v. Bellotti, a 1978 decision that effectively invented a First Amendment “right” for corporations to influence ballot questions. On social issues, he was a moderate, whose votes often surprised his backers.

lying without opposition: reagan's veto of the fairness doctrine laid the groundwork for the partisan peasant right wing...,


latimes |  President Reagan, intensifying the debate over whether the nation's broadcasters must present opposing views of controversial issues, has vetoed legislation to turn into law the 38-year-old "fairness doctrine," the White House announced Saturday.

The doctrine, instituted by the Federal Communications Commission as public policy in 1949, requires the nation's radio and television stations to "afford reasonable opportunity for the discussion of conflicting views on issues of public importance."

"This type of content-based regulation by the federal government is, in my judgment, antagonistic to the freedom of expression guaranteed by the First Amendment," Reagan said in his veto message. "In any other medium besides broadcasting, such federal policing of the editorial judgment of journalists would be unthinkable."

Staunch Opposition
The legislation had been staunchly opposed not only by the Administration, but also by the nation's broadcasters, who maintain that the FCC policy is an unconstitutional intrusion that has a chilling effect on their operations.

Opponents also contend that the explosive growth of the telecommunications industry in recent years makes the fairness doctrine obsolete. In his veto message, Reagan noted that the FCC has concluded "that the doctrine is an unnecessary and detrimental regulatory mechanism."

The legislation containing the doctrine passed the House on a 302-102 vote on June 3 and had been approved by the Senate in April on a 59-31 vote.

If the measure does not become law, the fairness doctrine and its obligations still will remain in effect as FCC policy. However, supporters have been seeking to codify the regulation for fear that the FCC could act to repeal it--particularly in light of a federal appeals court ruling last year that concluded that the doctrine was not a law, leaving its enforcement up to the FCC.

Former FCC Chairman Mark S. Fowler had pressed for repeal of the fairness doctrine and, the June 22 issue of Broadcasting magazine said, helped to write Reagan's veto message.

In 1985 the FCC, under Fowler's leadership, issued a report on the doctrine calling it constitutionally "suspect" and said that "if it were up to the commission, it would hold the doctrine unconstitutional."

what about the ratings agencies? REDUX (originally posted 6/23/13)


rollingstone | That's what "they" always say about the financial crisis and the teeming rat's nest of corruption it left behind. Everybody else got plenty of blame: the greed-fattened banks, the sleeping regulators, the unscrupulous mortgage hucksters like spray-tanned Countrywide ex-CEO Angelo Mozilo.

But what about the ratings agencies? Isn't it true that almost none of the fraud that's swallowed Wall Street in the past decade could have taken place without companies like Moody's and Standard & Poor's rubber-stamping it? Aren't they guilty, too?

Man, are they ever. And a lot more than even the least generous of us suspected.

Thanks to a mountain of evidence gathered for a pair of major lawsuits by the San Diego-based law firm Robbins Geller Rudman & Dowd, documents that for the most part have never been seen by the general public, we now know that the nation's two top ratings companies, Moody's and S&P, have for many years been shameless tools for the banks, willing to give just about anything a high rating in exchange for cash.

In incriminating e-mail after incriminating e-mail, executives and analysts from these companies are caught admitting their entire business model is crooked.

"Lord help our fucking scam . . . this has to be the stupidest place I have worked at," writes one Standard & Poor's executive. "As you know, I had difficulties explaining 'HOW' we got to those numbers since there is no science behind it," confesses a high-ranking S&P analyst. "If we are just going to make it up in order to rate deals, then quants [quantitative analysts] are of precious little value," complains another senior S&P man. "Let's hope we are all wealthy and retired by the time this house of card[s] falters," ruminates one more.

Ratings agencies are the glue that ostensibly holds the entire financial industry together. These gigantic companies – also known as Nationally Recognized Statistical Rating Organizations, or NRSROs – have teams of examiners who analyze companies, cities, towns, countries, mortgage borrowers, anybody or anything that takes on debt or creates an investment vehicle.

Their primary function is to help define what's safe to buy, and what isn't. A triple-A rating is to the financial world what the USDA seal of approval is to a meat-eater, or virginity is to a Catholic. It's supposed to be sacrosanct, inviolable: According to Moody's own reports, AAA investments "should survive the equivalent of the U.S. Great Depression."

a little living-memory, partisan, political dot-connecting to get you through the hump...,

Why the Republican National Debt is $12 Trillion

post-gazette |  OK, the beast is starving. Now what? That's the question confronting Republicans. But they're refusing to answer, or even to engage in any serious discussion about what to do.

For readers who don't know what I'm talking about: Ever since Ronald Reagan, the GOP has been run by people who want a much smaller government. In the famous words of the activist Grover Norquist, conservatives want to get the government "down to the size where we can drown it in the bathtub."

But there has always been a political problem with this agenda. Voters may say that they oppose big government, but the programs that actually dominate federal spending -- Medicare, Medicaid and Social Security -- are very popular. So how can the public be persuaded to accept large spending cuts?

The conservative answer, which evolved in the late 1970s, would be dubbed "starving the beast" during the Reagan years. The idea -- propounded by many members of the conservative intelligentsia, from Alan Greenspan to Irving Kristol -- was basically that sympathetic politicians should engage in a game of bait-and-switch. Rather than proposing unpopular spending cuts, Republicans would push through popular tax cuts, with the deliberate intention of worsening the government's fiscal position. Spending cuts could then be sold as a necessity rather than a choice, the only way to eliminate an unsustainable budget deficit.

And the deficit came. True, more than half of this year's budget deficit is the result of the Great Recession, which has both depressed revenues and required a temporary surge in spending to contain the damage. But even when the crisis is over, the budget will remain deeply in the red, largely as a result of George W. Bush-era tax cuts and unfunded wars. In addition, the combination of an aging population and rising medical costs will, unless something is done, lead to explosive debt growth after 2020.

Tuesday, June 03, 2014

calling an ordinary problem a "disease" leads to bigger problems

NYTimes | There’s plenty of blame to go around for this mess. But broadening our definition of disease probably made all of this possible.

My friend and colleague Dr. Beth Tarini, a health services researcher at the University of Michigan, published a study last year that examined how parents react when given a diagnosis of GERD for their infants. Dr. Tarini and her colleagues randomly chose certain parents to be told that an infant with symptoms of reflux had GERD or, instead, “a problem.” Half of each of these groups were also told that medications were ineffective.

Parents who were told that their infant had GERD were significantly more interested in having their child put on medication, even when they were told that medication was ineffective. Parents of infants who were not labeled with GERD were not interested in medication once they were told it didn’t work.

Words matter. Studies have shown that once people with high blood pressure are labeled “hypertensive,” they are significantly more likely to be absent from work, regardless of whether treatment was begun. Many diseases have become so much broader in definition that they now encompass huge swaths of the public.

When statins were first approved, they were used to treat people with very high levels of cholesterol. Their benefit was thought to be clear in that population. Last year, however, the release of new guidelines meant that more than 87 percent of all men age 60 to 75 would be recommended to be on statins, as would more than 53 percent of women in the same age group. Nearly every single African-American man over 65 would be recommended to be on the drug.

The American Academy of Pediatrics released guidelines a number of years ago recommending that children as young as 8 years old be treated with medication for an LDL cholesterol level above 190. Many think this is going too far. No one knows the long-term consequences of being on such drugs for decades.

Allowing the medicalization of normal variations in physiology to be transformed into “treatable conditions” is leading to unintended consequences. We’re spending billions of dollars on treatments that might not, or don’t, work. We’re making people worry when they don’t have to. And we may be causing actual health problems in the process.

As Dr. Tarini puts it, “Our job as doctors is to make sick patients healthy, not to make healthy patients sick.”

internet addiction?


scientificamerican |  How do we size up such an addiction? One way is to look at chemistry and the brain’s wiring. Drugs and behaviors are viewed as triggers for the same chemical changes in the brain. Researchers are also testing substance-abuse treatment drugs in experimental trials for Internet addiction and gambling. And the DSM-5 has a new behavioral addictions category, of which gambling is now a part, moved from its past classification as an “impulse-control disorder.” The APA has thus hinted that behaviors can be addictive in medical-speak.

Another way to look at addictions, however, is to look at the symptoms and consequences. You could diagnose addictions differently—alcohol, Internet gaming, etc.—or you could call them expressions of a single condition: an addiction syndrome. Each addiction is viewed as a manifestation of this syndrome, driven by circumstance and inherent traits. The syndrome model buckets addictions into one category with a set of symptoms and a spectrum of severity. More than a habit, it’s the consequence that defines the addiction.

A third way is to rethink an addiction like Internet gaming as the development of a new worldview. An addiction often starts off as an innocuous experience. The experience triggers a series of pleasurable feelings but it also plants a series of memories. Taken to an extreme, what an addict wants is the recreation of the memory, an alternate reality. To simply abstain from whatever it is that is addictive is to deny a worldview. The body serves as a medium for the known route (the drug or behavior) that is the ticket to the desired world (the alternate reality). Of course, there are very real chemical changes that happen in an addict’s brain. But this alternate way of looking at addiction illustrates that it is a process, not a condition, and that circumstance influences chemistry.

And thus, the final question: Who decides what matters?

Over 400 years ago, to be addicted was simply to have a strong inclination toward substances or behaviors. It was a choice. But over time, addiction came to mean an inclination that was less about choice and more about lack of control. Deviance then became a problem that could be fixed through religious discourse, medicine and social pressures. Today, there’s a psychiatric manual.

The DSM wields power. It’s gone from a 130-page manual in 1952 to a 900-page bestseller that competed with J. K. Rowling and Dan Brown on Amazon’s top-selling list of 2013 before settling in at #12. The book is used as a treatment guide for picking out the right mental condition, providing the basis for insurance claims.

Monday, June 02, 2014

hitler attempted that dog-breeding approach...,


medicalexpress | A single-letter change in the genetic code is enough to generate blond hair in humans, in dramatic contrast to our dark-haired ancestors. A new analysis by Howard Hughes Medical Institute (HHMI) scientists has pinpointed that change, which is common in the genomes of Northern Europeans, and shown how it fine-tunes the regulation of an essential gene.

"This particular genetic variation in humans is associated with blond hair, but it isn't associated with eye color or other pigmentation traits," says David Kingsley, an HHMI investigator at Stanford University who led the study. "The specificity of the switch shows exactly how independent color changes can be encoded to produce specific traits in humans." Kingsley and his colleagues published their findings in the June 1, 2014, issue of the journal Nature Genetics.

Kingsley says a handful of genes likely determine hair color in humans; however, the precise molecular basis of the trait remains poorly understood. But Kingsley's discovery of the genetic hair-color switch didn't begin with a deep curiosity about golden locks. It began with fish.

islam INFINITELY preferable to these backward-assed lynch mobs...,


butterfliesandwheels |  Last Sunday, a 45 year old woman, Christine Jemeli Koech, was accused of witchcraft. A neighbour claimed that Koech, a mother of six, had been responsible for her child’s illness. A local mob stormed Koech’s house early in the morning while she was asleep. They murdered her and burnt her body. This gruesome practice of lynching continues in the East African country of Kenya.

According to media reports, the neighbour has been arrested but the people who carried out the killing are still at large. Witch burning is common in Kenya and in other parts of the region. Men and women accused of bewitching people are executed by a lynch mob. Some years ago, a graphic video of ‘witches’ being burnt in Kenya was circulated on the internet. It attracted international outrage and condemnation.

It drew the attention of the world to the scale of the problem in Kenya and in other parts of Africa. People in Kenya engage in witch burning with apparent impunity. People who attack and lynch ‘witches’ more often than not get away with their crimes. This has to stop. The government of Kenya needs to take a proactive, rather than its current reactive, approach to combating witchcraft accusations and the burning of witches in the country.

Saturday, May 31, 2014

understanding creationism


pandasthumb |  In this short series, David MacMillan explains how misinformation and misconceptions allow creationists to maintain their beliefs even in the face of overwhelming evidence to the contrary. A former creationist blogger and writer, Mr. MacMillan earned his BS degree in physics from the University of North Alabama and now works as a technical writer when he isn’t frequenting the PT comment boards. Since leaving creationism, he has written several columns discussing the public dialogue between creation and evolution. This series will outline the core beliefs creationists use as the basis for their reasoning while pointing out the challenges faced in re-educating against creationist misconceptions.

1. Introduction and overview: Philosophy of pseudoscience
During my tenure as an active young-earth creationist, I never once heard other creationists accurately describe what evolutionary theory is or how it is supposed to work. Nor did I understand it myself. Creationists often seem familiar with a lot of scientific terminology, but their understanding is filled with gross misinformation. Thus, a host of misconceptions is believed and taught throughout creationist circles, making it almost impossible for actual evidence to really sink in.

There are plenty of comprehensive lists of creationist claims with exhaustive refutations, such as the TalkOrigins archive. Rather than try to replicate those, I will attempt to explain why creationist claims persist in the face of contrary evidence, even when individuals are otherwise well-educated. To do so, I’m going to go over the major areas where creationists get the science itself completely wrong. My list doesn’t represent all such misconceptions, of course. These are the misconceptions I personally recall hearing or using myself. I’ve chosen not to provide specific examples of each misconception from the creationist literature, though they are all easy to find. Citations for my explanations can be found online by anyone who wants to see them; this series is not about any particular facts so much as it’s about how false beliefs are used to support false conclusions.

We understand the theory of evolution to be a series of conclusions drawn from over a century of research, predictions, and discoveries. This theory allows us to understand the mechanisms in biology and make further predictions about the sort of evidence we will uncover in the future. Its predictive power is vital to success in real-life applications like medicine, genetic engineering, and agriculture.

However, creationists don’t see it the same way. Creationists artificially classify medicine, genetic research, and agriculture as “operational science,” and believe that those disciplines function in a different way than research in evolutionary biology. They understand the theory of evolution, along with mainstream geology and a variety of other disciplines, as a philosophical construct created for the express purpose of explaining life on Earth apart from divine intervention. Thus, they approach the concept of evolution from a defensive position; they believe it represents an attack on all religious faith.

This defensive posture is reflected in nearly all creationist literature, even in the less overt varieties such as intelligent-design creationism. It dictates responses. When creationists see a particular argument or explanation about evolution, their initial reaction is to ask, “How does this attack the truth of God as Creator? What philosophical presuppositions are dictating beliefs here? How can I challenge those underlying assumptions and thus demonstrate the truth?” Recognizing this basis for creationist arguments is a helpful tool for understanding why such otherwise baffling arguments are proposed.

In reality, we understand that although various philosophical implications may be constructed around evolution, it is not driven by any atheistic philosophy. The fundamental principle undergirding the theory of evolution is the same as the fundamental principle behind all science: that hypotheses can be tested and confirmed by prediction. But creationists instead insist that evolution arises out of explicitly atheistic axioms. This series will look at the arguments and objections which flow from this worldview in six different areas.

Creationists accept certain aspects of variation, adaptation, and speciation, but they artificially constrain the mechanism for adaptation to produce an imagined barrier between “microevolution” and “macroevolution” (Part 2). They conceptualize evolutionary adaptation as a series of individual changes, missing the entire mechanism provided by the population as a whole (Part 3). They make the extraordinary claim that no transitional fossils exist, simply by redefining “transitional” into something that could not possibly exist (Part 4). Creationists attempt to rewrite the last two centuries of scientific progress in order to avoid dealing with the multiple lines of evidence all independently affirming common descent and deep time (Part 5). They have far-reaching misapprehensions concerning microbiology and DNA (Part 6). On top of all this, they assign ethical and moral failings to evolutionary science in order to make evolution seem dangerous and anti-religion (Part 7). I will address each of these topics in the coming posts.

dna doesn't determine race, society does...,

psmag |  Genes certainly reflect geography, but unlike geography, human genetic differences don’t fall along obvious natural boundaries that might define races. As my Washington University colleague Alan Templeton has shown, by objective genetic definitions of race, human races don’t exist. Writing in Studies in History and Philosophy of Biological and Biomedical Sciences, Templeton notes that “Human populations certainly show genetic differences across geographical space, but this does not necessarily mean that races exist in humans.” For an objective, biological definition of race, this genetic differentiation has to occur “across sharp boundaries and not as gradual changes.” Templeton examined two genetic definitions of race that are commonly applied by biologists to vertebrate species. In both cases, races clearly exist in chimpanzees, our nearest relatives, but not in humans.

One natural definition of race is a group whose members are genetically much more similar to each other than they are to other groups. Putting a number on what counts as “much more” is a somewhat arbitrary exercise, but Templeton found that the genetic differentiation between populations of chimpanzees is over seven times greater than the genetic differentiation between broad geographical populations of humans. Furthermore, the level of genetic differentiation between human populations falls well below the threshold that biologists typically use to define races in non-human species.
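The comparison above rests on a standard population-genetics statistic for genetic differentiation; a minimal sketch of Wright's fixation index (F_ST), the kind of measure used in such analyses, is below. The allele frequencies are hypothetical illustrations, not data from Templeton's study; the 0.25 cutoff is the threshold conventionally cited for recognizing races (subspecies) in non-human vertebrates.

```python
# Sketch of Wright's fixation index F_ST for a single biallelic locus,
# comparing equal-sized populations. Illustrative only; real analyses
# average over many loci and weight by sample size.

def heterozygosity(p):
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

def fst(freqs):
    """F_ST = (H_T - H_S) / H_T: how much of the total genetic variation
    lies between populations rather than within them."""
    p_bar = sum(freqs) / len(freqs)             # pooled allele frequency
    h_t = heterozygosity(p_bar)                 # total expected heterozygosity
    h_s = sum(heterozygosity(p) for p in freqs) / len(freqs)  # mean within-pop
    return (h_t - h_s) / h_t

# Hypothetical allele frequencies in three geographical populations:
mild = [0.48, 0.50, 0.52]       # populations barely differentiated
strong = [0.10, 0.50, 0.90]     # populations strongly differentiated

print(fst(mild))    # tiny value, far below the conventional 0.25 threshold
print(fst(strong))  # large value, the chimpanzee-like pattern
```

The point of the statistic is that "much more similar within than between" becomes a number: low F_ST means most variation is found inside every population, which is the human pattern Templeton describes.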

Races could also be defined by genetic branches on the family tree. For most of us, this is the most intuitive definition of race. It’s one that, at first glance, is consistent with recent human evolution: After originating in Africa, part of our species branched out first into Asia and Europe, and then to the rest of the world. We should thus expect different geographical populations to be distinct genetic limbs on our species’ recent evolutionary tree.

But as it turns out, our species’ family history is not so arboreal. Geneticists have methods for measuring the “treeness” of genetic relationships between populations. Templeton found that the genetic relationships between human populations don’t have a very tree-like structure, while chimpanzee populations do. Rather than a family tree with distinct racial branches, humans have a family trellis that lacks clear genetic boundaries between different groups.

These findings reflect our unusual recent evolutionary history. Unlike the distinct populations of chimps, humans continued to exchange both goods and genes with each other even as they rapidly settled an enormous geographical range. Those ongoing contacts, plus the fact that we were a small, genetically homogeneous species to begin with, has resulted in relatively close genetic relationships, despite our worldwide presence. The DNA differences between humans increase with geographical distance, but boundaries between populations are, as geneticists Kenneth Weiss and Jeffrey Long put it, “multilayered, porous, ephemeral, and difficult to identify.” Pure, geographically separated ancestral populations are an abstraction: “There is no reason to think that there ever were isolated, homogeneous parental populations at any time in our human past.”

the gene is obsolete...,

psmag | In the aftermath of the Human Genome Project, biologists are struggling with the definition of a gene, but why should this matter to anyone else? It matters because the molecular concept of the gene that has dominated biomedical research for the last half-century is increasingly ill-suited for our efforts to understand the role of genetics in human biology. Giving a physical meaning to the concept of a gene was a triumph of 20th-century biology, but as it turns out, this scientific success hasn’t solved the problems we hoped it would.

The Human Genome Project was conceived as part of a research program to develop a set of clear molecular explanations for our biology. The idea was to inventory all of our genes and assign each of them a function; with this annotated inventory in hand, we would possess a molecular explanation of our genetic underpinnings and discover druggable target genes for specific diseases. While this gene-focused approach has been successful in many cases, it’s increasingly clear that we will never understand the role of genetics in our biology by merely making an annotated inventory of those DNA entities that we call genes.

Life isn’t so simple, and perhaps Wilhelm Johannsen’s more agnostic definition of a gene is a better match to the mixed bag of genetic elements in our genomes. The molecular concept of a gene was supposed to explain the influence of our DNA on our biology, our behaviors, and our ailments. That explanation is much more elusive than we hoped, and the role of DNA in our lives is more complex and subtle than we expected.
