Wednesday, June 13, 2018

Elites Have No Skin In The Game


mises  |  To review Skin in the Game is a risky undertaking. The author has little use for book reviewers who, he tells us, “are bad middlemen. … Book reviews are judged according to how plausible and well-written they are; never in how they map the book (unless of course the author makes them responsible for misrepresentations).”

The risk is very much worth undertaking, though, because Skin in the Game is an excellent book, filled with insights. These insights stress a central antithesis. Irresponsible people, with what C.D. Broad called “clever silly” intellectuals prominent among them, defend reckless policies that impose risks on others but not on themselves. They have no “skin in the game,” and in this, for Taleb, lies their chief defect.

Interventionist foreign policy suffers from this defect. “A collection of people classified as interventionistas … who promoted the Iraq invasion of 2003, as well as the removal of the Libyan leader in 2011, are advocating the imposition of additional such regime change on another batch of countries, which includes Syria, because it has a ‘dictator’. So we tried that thing called regime change in Iraq, and failed miserably. … But we satisfied the objective of ‘removing a dictator.’ By the same reasoning, a doctor would inject a patient with ‘moderate’ cancer cells to improve his cholesterol numbers, and proudly claim victory after the patient is dead, particularly if the postmortem showed remarkable cholesterol readings.”

But what has this to do with risk? The fallacy of the interventionists, Taleb tells us, is that they disregard the chance that their schemes will fail to work as planned. A key theme of Taleb’s work is that uncertain outcomes mandate caution.

“And when a blowup happens, they invoke uncertainty, something called a Black Swan (a high-impact unexpected event), … not realizing that one should not mess with a system if the results are fraught with uncertainty, or, more generally, should avoid engaging in an action with a big downside if one has no idea of the outcomes.”

The same mistaken conception of risk affects economic policy. “For instance, bank blowups came in 2008 because of the accumulation of hidden and asymmetric risks in the system: bankers, master risk transferors, could make steady money from a certain class of concealed explosive risks, use academic risk models that don’t work except on paper … then invoke uncertainty after a blowup … and keep past income — what I have called the Bob Rubin trade.”

Instead of relying on mathematical models, economists should realize that the free market works. Why use misguided theory to interfere with success in practice? “Under the right market structure, a collection of idiots produces a well-functioning market. … Friedrich Hayek has been, once again, vindicated. Yet one of the most cited ideas in history, that of the invisible hand, appears to be the least integrated into the modern psyche.”

Upsetting a complex system like the free market can have disastrous consequences. Given this truth, libertarianism is the indicated course of action. “We libertarians share a minimal set of beliefs, the central one being to substitute the rule of law for the rule of authority. Without necessarily realizing it, libertarians believe in complex systems.”

Tuesday, June 12, 2018

"Privacy" Isn't What's Really At Stake...,


NewYorker  |  The question about national security and personal convenience is always: At what price? What do we have to give up? On the criminal-justice side, law enforcement is in an arms race with lawbreakers. Timothy Carpenter was allegedly able to orchestrate an armed-robbery gang in two states because he had a cell phone; the law makes it difficult for police to learn how he used it. Thanks to lobbying by the National Rifle Association, federal law prohibits the National Tracing Center from using a searchable database to identify the owners of guns seized at crime scenes. Whose privacy is being protected there?

Most citizens feel glad for privacy protections like the one in Griswold, but are less invested in protections like the one in Katz. In “Habeas Data,” Farivar analyzes ten Fourth Amendment cases; all ten of the plaintiffs were criminals. We want their rights to be observed, but we also want them locked up.

On the commercial side, are the trade-offs equivalent? The market-theory expectation is that if there is demand for greater privacy then competition will arise to offer it. Services like Signal and WhatsApp already do this. Consumers will, of course, have to balance privacy with convenience. The question is: Can they really? The General Data Protection Regulation went into effect on May 25th, and privacy-advocacy groups in Europe are already filing lawsuits claiming that the policy updates circulated by companies like Facebook and Google are not in compliance. How can you ever be sure who is eating your cookies?

Possibly the discussion is using the wrong vocabulary. “Privacy” is an odd name for the good that is being threatened by commercial exploitation and state surveillance. Privacy implies “It’s nobody’s business,” and that is not really what Roe v. Wade is about, or what the E.U. regulations are about, or even what Katz and Carpenter are about. The real issue is the one that Pollak and Martin, in their suit against the District of Columbia in the Muzak case, said it was: liberty. This means the freedom to choose what to do with your body, or who can see your personal information, or who can monitor your movements and record your calls—who gets to surveil your life and on what grounds.

As we are learning, the danger of data collection by online companies is not that they will use it to try to sell you stuff. The danger is that that information can so easily fall into the hands of parties whose motives are much less benign. A government, for example. A typical reaction to worries about the police listening to your phone conversations is the one Gary Hart had when it was suggested that reporters might tail him to see if he was having affairs: “You’d be bored.” They were not, as it turned out. We all may underestimate our susceptibility to persecution. “We were just talking about hardwood floors!” we say. But authorities who feel emboldened by the promise of a Presidential pardon or by a Justice Department that looks the other way may feel less inhibited about invading the spaces of people who belong to groups that the government has singled out as unpatriotic or undesirable. And we now have a government that does that. 


Smarter ____________ WILL NOT Take You With Them....,



nautilus  |  When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.

It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.

The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.

 

Proposed Policies For Advancing Embryonic Cell Germline-Editing Technology


niskanencenter |  In a previous post, I touched on the potential social and ethical consequences that will likely emerge in the wake of Dr. Shoukhrat Mitalipov’s recent experiment in germline-edited embryos. The short version: there’s probably no stopping the genetic freight train. However, there are steps we can take to minimize the potential costs, while capitalizing on the many benefits these advancements have to offer us. To do that, we need to turn our attention away from the hyperbolic rhetoric of “designer babies” and focus on the near-term practical considerations—mainly, how we will govern the research, development, and application of these procedures.
Before addressing the policy concerns, however, it’s important to understand the fundamentals of what is being discussed in this debate. In the previous blog, I noted the difference between somatic cell editing and germline editing—one of the major ethical faultlines in this issue space. In order to have a clear perspective of the future possibilities, and current limitations, of genetic modification, let’s briefly examine how CRISPR actually works in practice. 

CRISPR stands for “clustered regularly interspaced short palindromic repeats”—a reference to segments of DNA that function as a defense used by bacteria to ward off foreign infections. That defense system essentially targets specific patterns of DNA in a virus, bacteria, or other threat and destroys it. This approach uses Cas9—an RNA-guided protein—to search through a cell’s genetic material until it finds a genetic sequence that matches the sequence programmed into its guide RNA. Once it finds its target, the protein cleaves the two strands of the DNA helix. Repair enzymes can then heal the gap in the broken DNA, or the gap can be filled using new genetic information introduced into the sequence. Conceptually, we can think of CRISPR as the geneticist’s variation of a “surgical laser knife, which allows a surgeon to cut out precisely defective body parts and replace them with new or repaired ones.”
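The targeting step is, conceptually, just pattern matching. A minimal Python sketch of the idea (an illustrative toy only, with a shortened guide and a simplified “NGG” check; real Cas9 targeting involves 20-nt guides, strand orientation, and binding energetics that this ignores):

```python
# Toy model of CRISPR target search: scan a DNA string for sites matching
# the guide sequence that are followed by an "NGG" PAM motif (any base, G, G).
# Illustrative only -- not a bioinformatics tool.

def find_target_sites(dna: str, guide: str) -> list[int]:
    """Return start indices where `guide` matches and an NGG PAM follows."""
    hits = []
    n = len(guide)
    for i in range(len(dna) - n - 2):
        site_matches = dna[i:i + n] == guide
        pam_follows = dna[i + n + 1:i + n + 3] == "GG"  # skip the "N" base
        if site_matches and pam_follows:
            hits.append(i)
    return hits

dna = "TTACGGATCCGGAAGGTTCCAGGATCCGGAAGGTTCCTGGCC"
guide = "GATCCGGAAGGTTCC"  # shortened 15-nt guide for readability
print(find_target_sites(dna, guide))  # → [5, 22]
```

Once bound at such a site, Cas9 cuts; the cell’s repair machinery then closes the break, with or without new genetic material.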

The technology is still cutting edge, and most researchers are still trying to get a handle on the technical difficulties associated with its use. Right now, we’re still in the Stone Age of genetic research. Even though we’ve made significant advancements in recent years, we’re still a long, long way from editing the IQs of our children on demand. That technology is much further in the future, and some doubt that we’ll ever be able to “program” inheritable traits into our individual genomes. In short, don’t expect any superhumanly intelligent, disease-resistant super soldiers any time soon.

The Parallels Between Artificial Intelligence and Genetic Modification
There are few technologies that inspire fantastical embellishments in popular media as much as the exaggerations surrounding genetic modification. In fact, the only technology that comes close to comparison—and indeed, actually parallels the rhetoric quite closely—is artificial intelligence (AI).

Monday, June 11, 2018

Office 365 CRISPR Editing Suite


gizmodo |  The gene-editing technology CRISPR could very well one day rid the world of its most devastating diseases, allowing us to simply edit away the genetic code responsible for an illness. One of the things standing in the way of turning that fantasy into reality, though, is the problem of off-target effects. Now Microsoft is hoping to use artificial intelligence to fix this problem. 

You see, CRISPR is fawned over for its precision. More so than earlier genetic technologies, it can accurately target and alter a tiny fragment of genetic code. But it’s still not always as accurate as we’d like it to be. Estimates of how often this happens vary, but at least some of the time, CRISPR makes changes to DNA it was intended to leave alone. Depending on what those changes are, they could inadvertently result in new health problems, such as cancer.

Scientists have long been working on ways to fine-tune CRISPR so that fewer of these unintended effects occur. Microsoft thinks that artificial intelligence might be one way to do it. Working with computer scientists and biologists from research institutions across the U.S., the company has developed a new tool called Elevation that predicts off-target effects when editing genes with CRISPR.

It works like this: If a scientist is planning to alter a specific gene, they enter its name into Elevation. The CRISPR system is made up of two parts, a protein that does the cutting and a synthetic guide RNA designed to match a DNA sequence in the gene they want to edit. Different guides can have different off-target effects depending on how they are used. Elevation will suggest which guide is least likely to result in off-target effects for a particular gene, using machine learning to figure it out. It also provides general feedback on how likely off-target effects are for the gene being targeted. The platform bases its learning both on Microsoft research and publicly available data about how different genetic targets and guides interact. 
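In spirit, guide ranking can be pictured with a much cruder stand-in: score each candidate guide by how many near-match sites it has elsewhere in the genome, then prefer the guide with the fewest. The sketch below is a hypothetical mismatch-counting toy, not Elevation’s actual machine-learning model:

```python
# Toy guide ranking: a guide with fewer near-match sites elsewhere in the
# genome has fewer chances to cut in the wrong place. (Elevation uses trained
# machine-learning models; this mismatch count is only a conceptual stand-in.)

def mismatches(a: str, b: str) -> int:
    """Number of positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def off_target_risk(guide: str, genome: str, max_mismatch: int = 3) -> int:
    """Count sites within `max_mismatch` of the guide (excluding exact matches)."""
    n = len(guide)
    return sum(
        1
        for i in range(len(genome) - n + 1)
        if 0 < mismatches(guide, genome[i:i + n]) <= max_mismatch
    )

def rank_guides(guides: list[str], genome: str) -> list[str]:
    """Order candidate guides from least to most predicted off-target risk."""
    return sorted(guides, key=lambda g: off_target_risk(g, genome))

genome = "ACGTACGTAAGTACGA"
print(rank_guides(["ACGTACGT", "TTTTTTTT"], genome))  # → ['TTTTTTTT', 'ACGTACGT']
```

A real predictor also has to weigh which mismatches matter (position and base identity both change cutting likelihood), which is where the machine learning comes in.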

The work is detailed in a paper published Wednesday in the journal Nature Biomedical Engineering. The tool is publicly available for researchers to use for free. It works alongside a tool released by Microsoft in 2016 called Azimuth that predicts on-target effects.

There is lots of debate over how problematic the off-target effects of CRISPR really are, as well as over how to fix them. Microsoft’s new tool, though, will certainly be a welcome addition to the toolbox. Over the past year, Microsoft has doubled down on efforts to use AI to attack health care problems.

Who Will Have Access To Advanced Reproductive Technology?



futurism |  In November 2017, a baby named Emma Gibson was born in the state of Tennessee. Her birth, to a 25-year-old woman, was fairly typical, but one aspect made her story unique: she was conceived 24 years prior from anonymous donors, when Emma’s mother was just a year old.  The embryo had been frozen for more than two decades before it was implanted into her mother’s uterus and grew into the baby who would be named Emma.

Most media coverage hailed Emma’s birth as a medical marvel, an example of just how far reproductive technology has come in allowing people with fertility issues to start a family.

Yet, the news held a small detail that gave others pause. The organization that provided baby Emma’s embryo to her parents, the National Embryo Donation Center (NEDC), has policies that state they will only provide embryos to married, heterosexual couples, in addition to several health requirements. Single women and non-heterosexual couples are not eligible.

In other industries, this policy would likely be labeled discriminatory. But for reproductive procedures in the United States, such a policy is completely legal. Because insurers do not consider reproductive procedures to be medically necessary, the U.S. is one of the few developed nations without formal regulations or ethical requirements for fertility medicine. This loose legal climate also gives providers the power to provide or deny reproductive services at will.

The future of reproductive technology has many excited about its potential to allow biological birth for those who might not otherwise have been capable of it. Experiments going on today, such as testing functional 3D-printed ovaries and incubating animal fetuses in artificial wombs, seem to suggest that future is well on its way, that fertility medicine has already entered the realm of what was once science fiction.

Yet, who will have access to these advances? Current trends seem to suggest that this will depend on the actions of regulators and insurance agencies, rather than the people who are affected the most.

Cognitive Enhancement In The Context Of Neurodiversity And Psychopathology


ama-assn |  In the basement of the Bureau International des Poids et Mesures (BIPM) headquarters in Sèvres, France, a suburb of Paris, there lies a piece of metal that has been secured since 1889 in an environmentally controlled chamber under three bell jars. It represents the world standard for the kilogram, and all other kilo measurements around the world must be compared and calibrated to this one prototype. There is no such standard for the human brain. Search as you might, there is no brain that has been pickled in a jar in the basement of the Smithsonian Museum or the National Institutes of Health or elsewhere in the world that represents the standard to which all other human brains must be compared. Given that this is the case, how do we decide whether any individual human brain or mind is abnormal or normal? To be sure, psychiatrists have their diagnostic manuals. But when it comes to mental disorders, including autism, dyslexia, attention deficit hyperactivity disorder, intellectual disabilities, and even emotional and behavioral disorders, there appears to be substantial uncertainty concerning when a neurologically based human behavior crosses the critical threshold from normal human variation to pathology.

A major cause of this ambiguity is the emergence over the past two decades of studies suggesting that many disorders of the brain or mind bring with them strengths as well as weaknesses. People diagnosed with autism spectrum disorder (ASD), for example, appear to have strengths related to working with systems (e.g., computer languages, mathematical systems, machines) and in experiments are better than control subjects at identifying tiny details in complex patterns [1]. They also score significantly higher on the nonverbal Raven’s Matrices intelligence test than on the verbal Wechsler Scales [2]. A practical outcome of this new recognition of ASD-related strengths is that technology companies have been aggressively recruiting people with ASD for occupations that involve systemizing tasks such as writing computer manuals, managing databases, and searching for bugs in computer code [3].

Valued traits have also been identified in people with other mental disorders. People with dyslexia have been found to possess global visual-spatial abilities, including the capacity to identify “impossible objects” (of the kind popularized by M. C. Escher) [4], process low-definition or blurred visual scenes [5], and perceive peripheral or diffused visual information more quickly and efficiently than participants without dyslexia [6]. Such visual-spatial gifts may be advantageous in jobs requiring three-dimensional thinking such as astrophysics, molecular biology, genetics, engineering, and computer graphics [7, 8]. In the field of intellectual disabilities, studies have noted heightened musical abilities in people with Williams syndrome, the warmth and friendliness of individuals with Down syndrome, and the nurturing behaviors of persons with Prader-Willi syndrome [9, 10]. Finally, researchers have observed that subjects with attention deficit hyperactivity disorder (ADHD) and bipolar disorder display greater levels of novelty-seeking and creativity than matched controls [11-13].

Such strengths may suggest an evolutionary explanation for why these disorders are still in the gene pool. A growing number of scientists are suggesting that psychopathologies may have conferred specific evolutionary advantages in the past as well as in the present [14]. The systemizing abilities of individuals with autism spectrum disorder might have been highly adaptive for the survival of prehistoric humans. As autism activist Temple Grandin, who herself has autism, surmised: “Some guy with high-functioning Asperger’s developed the first stone spear; it wasn’t developed by the social ones yakking around the campfire” [15].

Sunday, June 10, 2018

Cognitive Enhancement of Other Species?



singularityhub |  Science fiction author David Brin popularized the concept of animal “uplift” in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans. 

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
 
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own and others of which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

 

The Use of Clustered, Regularly Inter-spaced, Short, Palindromic Repeats


fortunascorner | “CRISPRs are elements of an ancient system that protects bacteria and other single-celled organisms from viruses, acquiring immunity to them by incorporating genetic elements from the virus invaders,” Mr. Wadhwa wrote.  “And this bacterial antiviral defense serves as an astonishingly cheap, simple, elegant way to quickly edit the DNA of any organism in the lab.  To set up a CRISPR editing capability, a lab only needs to order an RNA fragment (costing about $10) and purchase off-the-shelf chemicals and enzymes for $30 or less.”
 
“Because CRISPR is cheap and easy to use, it has both revolutionized and democratized genetic research,” Mr. Wadhwa observes.  “Hundreds, if not thousands, of labs are now experimenting with CRISPR-based editing projects.” And access to the World Wide Web provides instantaneous know-how for a would-be terrorist bent on killing hundreds of millions of people.  As Mr. Wadhwa warns, “though a nuclear weapon can cause tremendous, long-lasting damage, the ultimate biological doomsday machine is bacteria, because they can spread so quickly and quietly.”
 
“No one is prepared for an era, when editing DNA is as easy as editing a Microsoft Word document.”
 
This observation, and warning, is why current scientific efforts are aimed at developing a vaccine for the plague and, hopefully, courses of action against any number of doomsday biological weapons.  With the proliferation of drones as a potential method of delivery, the threat seems overwhelming.  Even if we succeeded in ridding the world of the cancer known as militant Islam, there would still be the demented soul bent on killing as many people as possible in the shortest amount of time, no matter if their doomsday bug kills them as well.  That’s why the research currently being done on the plague is so important.
 
As the science fiction/horror writer Stephen King once wrote, “God punishes us for what we cannot imagine.”

The Ghettoization of Genetic Disease


gizmodo |  Today in America, if you are poor, you are also more likely to suffer from poor health. Low socioeconomic status—and the lack of access to healthcare that often accompanies it—has been tied to mental illness, obesity, heart disease and diabetes, to name just a few. 

Imagine now, that in the future, being poor also meant you were more likely than others to suffer from major genetic disorders like cystic fibrosis, Tay–Sachs disease, and muscular dystrophy. That is a future, some experts fear, that may not be all that far off.

Most genetic diseases are non-discriminating, blind to either race or class. But for some parents, prenatal genetic testing has turned what was once fate into choice. There are tests that can screen for hundreds of disorders, including rare ones like Huntington’s disease and 1p36 deletion syndrome. Should a prenatal diagnosis bring news of a genetic disease, parents can either arm themselves with information on how best to prepare, or make the difficult decision to terminate the pregnancy. That is, if they can pay for it. Without insurance, the costs of a single prenatal test can range from a few hundred dollars up to $2,000. 

And genome editing, should laws ever be changed to allow for legally editing a human embryo in the United States, could also be a far-out future factor. It’s difficult to imagine how much genetically engineering an embryo might cost, but it’s a safe bet that it won’t be cheap.

“Reproductive technology is technology that belongs to certain classes,” Laura Hercher, a genetic counselor and professor at Sarah Lawrence College, told Gizmodo. “Restricting access to prenatal testing threatens to turn existing inequalities in our society into something biological and permanent.”

Hercher raised this point earlier this month in the pages of Genome magazine, in a piece provocatively titled, “The Ghettoization of Genetic Disease.” Within the genetics community, it caused quite a stir. It wasn’t that no one had ever considered the idea. But for a community of geneticists and genetic counselors focused on how to help curb the impact of devastating diseases, it was a difficult thing to see articulated in writing.

Prenatal testing is a miraculous technology that has drastically altered the course of a woman’s pregnancy since it was first developed in the 1960s. The more recent advent of noninvasive prenatal tests made the procedure even less risky and more widely available. Today, most women are offered screenings for diseases like Down syndrome that result from an abnormal number of chromosomes, and targeted testing of the parents can hunt for inherited disease traits like Huntington’s at risk of being passed on to a child, as well.

But there is a dark side to this miracle of modern medicine, which is that choice is exclusive to those who can afford and access it.


Saturday, June 09, 2018

Genetics in the Madhouse: The Unknown History of Human Heredity


nature  |  Who founded genetics? The line-up usually numbers four. William Bateson and Wilhelm Johannsen coined the terms genetics and gene, respectively, at the turn of the twentieth century. In 1910, Thomas Hunt Morgan began showing genetics at work in fruit flies (see E. Callaway Nature 516, 169; 2014). The runaway favourite is generally Gregor Mendel, who, in the mid-nineteenth century, crossbred pea plants to discover the basic rules of heredity.

Bosh, says historian Theodore Porter. These works are not the fount of genetics, but a rill distracting us from a much darker source: the statistical study of heredity in asylums for people with mental illnesses in late-eighteenth- and early-nineteenth-century Britain, wider Europe and the United States. There, “amid the moans, stench, and unruly despair of mostly hidden places where data were recorded, combined, and grouped into tables and graphs”, the first systematic theory of mental illness as hereditary emerged.

For more than 200 years, Porter argues in Genetics in the Madhouse, we have failed to recognize this wellspring of genetics — and thus to fully understand this discipline, which still dominates many individual and societal responses to mental illness and diversity.

The study of heredity emerged, Porter argues, not as a science drawn to statistics, but as an international endeavour to mine data for associations to explain mental illness. Few recall most of the discipline’s early leaders, such as the French psychiatrist, or ‘alienist’, Étienne Esquirol, and the physician John Thurnam, who made the York Retreat in England a “model of statistical recording”. Better-known figures, such as statistician Karl Pearson and zoologist Charles Davenport — both ardent eugenicists — come later.

Inevitably, study methods changed over time. The early handwritten correlation tables and pedigrees of patients gave way to more elaborate statistical tools, genetic theory and today’s massive gene-association studies. Yet the imperatives and assumptions of that scattered early network of alienists remain intact in the big-data genomics of precision medicine, asserts Porter. And whether applied in 1820 or 2018, this approach too readily elevates biology over culture and statistics over context — and opens the door to eugenics.

Tipping point for large-scale social change


sciencedaily  |  According to a new paper published in Science, there is a quantifiable answer: Roughly 25% of people need to take a stand before large-scale social change occurs. This idea of a social tipping point applies to standards in the workplace and any type of movement or initiative.
Online, people develop norms about everything from what type of content is acceptable to post on social media, to how civil or uncivil to be in their language. We have recently seen how public attitudes can and do shift on issues like gay marriage, gun laws, or race and gender equality, as well as what beliefs are or aren't publicly acceptable to voice.

During the past 50 years, many studies of organizations and community change have attempted to identify the critical size needed for a tipping point, based purely on observation. These studies have speculated that tipping points can range anywhere between 10% and 40%.

The problem for scientists has been that real-world social dynamics are complicated, and it isn't possible to replay history in precisely the same way to accurately measure how outcomes would have been different if an activist group had been larger or smaller.

"What we were able to do in this study was to develop a theoretical model that would predict the size of the critical mass needed to shift group norms, and then test it experimentally," says lead author Damon Centola, Ph.D., associate professor at the University of Pennsylvania's Annenberg School for Communication and the School of Engineering and Applied Science.

Drawing on more than a decade of experimental work, Centola has developed an online method to test how large-scale social dynamics can be changed.

In this study, "Experimental Evidence for Tipping Points in Social Convention," co-authored by Joshua Becker, Ph.D., Devon Brackbill, Ph.D., and Andrea Baronchelli, Ph.D., 10 groups of 20 participants each were given a financial incentive to agree on a linguistic norm. Once a norm had been established, a group of confederates -- a coalition of activists that varied in size -- then pushed for a change to the norm.

When a minority group pushing change was below 25% of the total group, its efforts failed. But when the committed minority reached 25%, there was an abrupt change in the group dynamic, and very quickly the majority of the population adopted the new norm. In one trial, a single person accounted for the difference between success and failure.
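The dynamic described above — an entrenched convention overturned once a committed minority crosses a threshold — can be illustrated with a toy "naming game" simulation. This is a hedged sketch, not the authors' actual experimental protocol or model: the population size, round count, memory rule, and norm labels here are all illustrative assumptions.

```python
import random

def naming_game(n=100, committed_frac=0.25, rounds=20000, seed=1):
    """Toy naming game: a committed minority pushes norm 'B' against
    an established norm 'A'. Committed agents never change their norm.
    Returns the fraction of non-committed agents who end up on 'B'."""
    random.seed(seed)
    n_committed = int(n * committed_frac)
    # Memories: committed agents hold only 'B'; everyone else starts on 'A'.
    mem = [{'B'} if i < n_committed else {'A'} for i in range(n)]
    committed = [i < n_committed for i in range(n)]
    for _ in range(rounds):
        s, l = random.sample(range(n), 2)        # pick speaker and listener
        word = random.choice(sorted(mem[s]))     # speaker utters a known norm
        if word in mem[l]:
            # Successful coordination: both collapse to the shared norm,
            # unless an agent is committed (activists never switch).
            if not committed[s]:
                mem[s] = {word}
            if not committed[l]:
                mem[l] = {word}
        elif not committed[l]:
            # Failed coordination: listener remembers the new norm too.
            mem[l].add(word)
    flipped = sum(1 for i in range(n)
                  if not committed[i] and mem[i] == {'B'})
    return flipped / (n - n_committed)
```

Running this with `committed_frac` swept from, say, 0.10 to 0.30 tends to show the qualitative pattern the study reports: below some threshold the activist norm makes little headway, while above it the rest of the population converges on the new convention — though the exact tipping value depends on the memory and interaction rules assumed.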

Friday, June 08, 2018

Status Update On The CIA Democrats



WSWS |  The Democratic Party has made a strategic decision to bypass candidates from its progressive wing and recruit former members of the military and intelligence agencies to compete with Republicans in the upcoming midterm elections. The shift away from liberal politicians to center-right government agents and military personnel is part of a broader plan to rebuild the party so it better serves the interests of its core constituents, Wall Street, big business, and the foreign policy establishment. Democrat leaders want to eliminate left-leaning candidates who think the party should promote issues that are important to working people and replace them with career bureaucrats who will be more responsive to the needs of business. The ultimate objective of this organization-remake is to create a center-right superparty comprised almost entirely of trusted allies from the national security state who can be depended on to implement the regressive policies required by their wealthy contributors. 

The busiest primary day of the US congressional election season saw incumbent Democrats and Republicans winning renomination easily, while in contests for open congressional seats the Democratic Party continued its push to select first-time candidates drawn from the national-security apparatus.

On the ballot Tuesday were the nominations for 85 congressional seats—one-fifth of the US House of Representatives—together with five state governorships and five US Senate seats.

Of the five Senate seats, only one is thought competitive, in Montana, where incumbent two-term Democrat Jon Tester will face Republican State Auditor Matt Rosendale, who has the support of the national party, President Trump and most ultra-right groups. Trump carried Montana by a sizeable margin in 2016.

Republican Senator Roger Wicker of Mississippi won renomination and faces only a token Democratic opponent, while three Democratic incumbents, Robert Menendez of New Jersey, Martin Heinrich of New Mexico and Dianne Feinstein of California, won their primaries Tuesday and are expected to win reelection easily.

Among the five governorships where nominations were decided Tuesday, Republicans are heavily favored in Alabama and South Dakota and Democrats in California and New Mexico, with only Iowa considered a somewhat competitive race. Republican Kim Reynolds, the lieutenant governor who succeeded Terry Branstad after Trump appointed him US ambassador to China, will face millionaire businessman Fred Hubbell, who defeated a Bernie Sanders-backed candidate, nurses’ union leader Cathy Glasson, to win the Democratic nomination.

The most significant results on Tuesday came in the congressional contests, particularly in the 20 or so seats that are either open due to a retirement or closely contested, based on past results.

Perhaps most revealing was the outcome in New Jersey, where the Democratic Party is seriously contesting all five Republican-held seats. The five Democratic candidates selected in Tuesday’s primary include four whose background lies in the national-security apparatus and a fifth, State Senator Jeff Van Drew, who is a fiscal and cultural conservative. Van Drew opposed gay marriage in the state legislature and has good relations with the National Rifle Association.



Hillary Clinton's Transformative Impact on Society: Africans Sold at Libyan Slave Markets


usatoday |  'We came, we saw, he died,' she joked. But overthrowing Gadhafi was a humanitarian and strategic debacle that now limits our options on North Korea. 

Black Africans are being sold in open-air slave markets, and it’s Hillary Clinton’s fault. But you won’t hear much about that from the news media or the foreign-policy pundits, so let me explain.

Footage from Libya, released recently by CNN, showed young men from sub-Saharan Africa being auctioned off as farm workers in slave markets.

And how did we get to this point? As the BBC reported back in May, “Libya has been beset by chaos since NATO-backed forces overthrew long-serving ruler Col. Moammar Gadhafi in October 2011.”

And who was behind that overthrow? None other than then-Secretary of State Hillary Clinton.

Under President George W. Bush in 2003, the United States negotiated an agreement with Libyan strongman Gadhafi. The deal: He would give up his weapons of mass destruction peacefully, and we wouldn’t try to depose him.

That seemed a good deal at the time, but the Obama administration didn’t stick to it. Instead, in an operation spearheaded by Clinton, the United States went ahead and toppled him anyway.

The overthrow turned out to be a debacle. Libya exploded into chaos and civil war, and refugees flooded Europe, destabilizing governments there. But at the time, Clinton thought it was a great triumph — "We came, we saw, he died,” she joked about Gadhafi’s overthrow — and adviser Sidney Blumenthal encouraged her to tout her "successful strategy" as evidence of her fitness for the highest office in the land.

It’s surprising the extent to which Clinton has gotten a pass for this debacle, which represents a humanitarian and strategic failure of the first order. (And, of course, the damage is still compounding: How likely is North Korea’s Kim Jong Un to give up his nuclear weapons after seeing the worthlessness of U.S. promises to Gadhafi?)


Thursday, June 07, 2018

Storytelling IS What Distinguishes The Obamas From Other Primates...,


NewYorker |  Barack Obama was a writer before he became a politician, and he saw his Presidency as a struggle over narrative. “We’re telling a story about who we are,” he instructed his aide Ben Rhodes early in the first year of his first term. He said it again in his last months in office, on a trip to Asia—“I mean, that’s our job. To tell a really good story about who we are”—adding that the book he happened to be reading argued for storytelling as the trait that distinguishes us from other primates. Obama’s audience was both the American public and the rest of the world. His characteristic rhetorical mode was to describe and understand both sides of a divide—black and white, liberal and conservative, Muslim and non-Muslim—before synthesizing them into a unifying story that seemed to originate in and affirm his own.

At the heart of Obama’s narrative was a belief that progress, in the larger scheme of things, was inevitable, and this belief underscored his position on every issue from marriage equality to climate change. His idea of progress was neither the rigid millennial faith of Woodrow Wilson nor Bush’s shallow God-blessed optimism. It was human-scale and incremental. Temperamentally the opposite of zealous, he always acknowledged our human imperfection—his Nobel Peace Prize lecture was a Niebuhrian meditation on the tragic necessity of force in affairs of state. But, whatever the setbacks of the moment, he had faith that the future belonged to his expansive vision and not to the narrow, backward-pointing lens of his opponents.

This progressive story emerged in Obama’s account of his own life, in his policies, and in his speeches. Many of them were written by Rhodes, who joined the campaign as a foreign-policy speechwriter in mid-2007, when he was twenty-nine; rose to become a deputy national-security adviser; accompanied Obama on every trip overseas but one; stayed to the last day of the Presidency; and even joined the Obamas on the flight to their first post-Presidential vacation, in Palm Springs, wanting to ease the loneliness of their sudden return to private life. Today, Rhodes still works alongside Obama.

The journalistic cliché of a “mind meld” doesn’t capture the totality of Rhodes’s identification with the President. He came to Obama with an M.F.A. in fiction writing from New York University and a few years on the staff of a Washington think tank. He became so adept at anticipating Obama’s thoughts and finding Obamaesque words for them that the President made him a top foreign-policy adviser, with a say on every major issue. Rhodes’s advice mostly took the form of a continuous effort to understand and apply the President’s thinking. His decade with Obama blurred his own identity to the vanishing point, and he was sensitive enough—unusually so for a political operative—to fear losing himself entirely in the larger story. Meeting Obama was a fantastic career opportunity and an existential threat.

Wednesday, June 06, 2018

Barack Hussein Obama Worst Thing To Happen To Black Folks Since The End Of Jim Crow?


Counterpunch |  A New York Times article on May 30 entitled “How Trump’s Election Shook Obama: ‘What if We Were Wrong?’” provided an opportunity to indulge in this sordid pastime. According to one of his aides, after the election Obama speculated that the cosmopolitan internationalism of enlightened intellectuals like him had been responsible for the stunning outcome. “Maybe we pushed too far,” he said. “Maybe people just want to fall back into their tribe.” In other words, we were too noble and forward-thinking for the benighted masses, who want nothing more than to remain submerged in their comforting provincial identities. We were too ambitious and idealistic for our flawed compatriots.

“Sometimes I wonder whether I was 10 or 20 years too early,” Obama sighed. The country hadn’t been ready for the first black president and his lofty post-racial vision.

These quotations are all the evidence one needs to understand what goes on in the mind of someone like Barack Obama.

In fact, the last quotation is revealing enough in itself: it alone suggests the stupefying dimensions of Obama’s megalomania. It is hardly news that Obama is a megalomaniac, but what is moderately more interesting is the contemptible and deluded nature of his megalomania. (In some cases, after all, egomania might be justified. I could forgive Noam Chomsky for being an egomaniac—if he were one, which his self-effacing humility shows is far from the case.) Obama clearly sees himself as the culmination of the Civil Rights Movement—he who participated in no sit-ins, no Freedom Rides, no boycotts or harrowing marches in the Deep South, who suffered no police brutality or nights in jail, who attended Harvard Law and has enjoyed an easy and privileged adulthood near or in the corridors of power. This man who has apparently never taken a courageous and unpopular moral stand in his life decided long ago that it was his historic role to bring the struggles of SNCC and the SCLC, of Ella Baker and Bob Moses, of A. Philip Randolph and Martin Luther King, Jr. to their fruition—by sailing into the Oval Office on the wave of millions of idealistic supporters, tireless and selfless organizers. With his accession to power, and that of such moral visionaries as Lawrence Summers, Hillary Clinton, Timothy Geithner, Eric Holder, Arne Duncan, Robert Gates, and Samantha Power, MLK’s dream was at last realized.

Obama was continuing in the tradition of Abraham Lincoln and the abolitionists when his administration deported more than three million undocumented immigrants and broke up tens of thousands of immigrant families. He was being an inspiring idealist when he permitted arms shipments to Israel in July and August 2014 in the midst of the Gaza slaughter—because, as he said with characteristic eloquence and moral insight, “Israel has a right to defend itself” (against children and families consigned to desperate poverty in an open-air prison).

He was being far ahead of his time, a hero of both civil rights and enlightened globalism, when he presided over “the greatest disintegration of black wealth in recent memory” by doing nothing to halt the foreclosure crisis or hold anyone accountable for the damage it caused. Surely it was only irrational traditions of tribalism that got Trump elected, and not, say, the fact that Obama’s administration was far more friendly to the banking sector than George H. W. Bush’s was, as shown for instance by the (blatantly corrupt) hiring of financial firms’ representatives to top positions in the Justice Department.

William Jefferson Clinton Don't Say GAPING!!!


mediaite |  Former President Bill Clinton appeared on the Today show Monday for an interview about his upcoming novel, and he faced the type of questioning that has become common practice in the aftermath of the Me Too movement: a challenge of his treatment of Monica Lewinsky, the woman with whom he had his infamous West Wing affair.

NBC News’s Craig Melvin kicked things off by asking Clinton how he would have approached the accusations lobbed against him if he were president in 2018, noting some have recently said he should’ve resigned in 1998.

“I don’t think it would be an issue because people would be using the facts instead of the imagined facts,” Clinton said. “If the facts were the same today, I wouldn’t [resign].”

“A lot of the facts have been conveniently omitted to make the story work, I think partly because they are frustrated that they got all these serious allegations against the current occupant of the Oval Office, and his voters don’t seem to care,” Clinton said. “I think I did the right thing, I defended the Constitution.”

“You think this president’s been given a pass, with regards to the women who have come forward and accused him of sexual misconduct?” Melvin asked.

“No. But it hasn’t gotten anything like the coverage you would expect,” Clinton said.

The former president continued that he likes the Me Too movement, saying “it’s way overdue.” He added, “That doesn’t mean I agree with everything.”

Melvin confronted Clinton with a line from the former White House intern’s op-ed in Vanity Fair in which she accused the president of taking advantage of her.

“Looking back on what happened then, through the lens of Me Too now, do you think differently, or feel more responsibility?” Melvin pressed.

“No, I felt terrible then, and I came to grips with it,” Clinton said.

“Did you ever apologize to her?” Melvin asked.

“Yes,” Clinton said. “And nobody believes that I got out of that for free. I left the White House $16 million in debt. But you, typically, have ignored gaping facts in describing this and I bet you don’t even know them. This was litigated 20 years ago. Two-thirds of the American people sided with me. They were not insensitive to that.”



Tuesday, June 05, 2018

You Can't Make This Shit Up!

Sign The Petition

BEFORE: Trump THREATENED reporters with violence at his campaign rallies.

AND NOW: He wants to PUNISH liberal reporters by revoking their press credentials!

That means Trump could BAN MSNBC reporters and force us into only watching FOX NEWS. EW!

Trump has a backup plan to DESTROY MSNBC:

The Sinclair Broadcasting Group is buying up local T.V. news stations nationwide.
They absolutely ADORE Trump.
If they have their way, they’ll broadcast GOP propaganda to 72% of U.S. viewers!

We’d ALL be forced to watch pro-Trump propaganda instead of the objective and fact-based news like on MSNBC.

It would be like watching Fox News in your home 24/7. AWFUL!
Please don’t let your favorite reporters down!

- The Progressive Turnout Project
Progressive Turnout Project
P.O. Box 617614
Chicago, IL 60661

One Contrived Narrative To Bind Them All...,


steemit |  MSNBC host Joy Reid still has a job. Despite blatantly lying about time-traveling hackers bearing responsibility for bigoted posts a decade ago in her then-barely-known blog, despite her reportedly sparking an FBI investigation on false pretenses, despite her colleagues at MSNBC being completely fed up with how the network is handling the controversy surrounding her, her career just keeps trundling forward like a bullet-riddled zombie.

To be clear, I do not particularly care that Joy Reid has done any of these things. I write about war, nuclear escalations and the sociopathy of US government agencies which kill millions of people; I don't care that Joy Reid is or was a homophobe, and I don't care that she lied to cover it up. The war agendas that MSNBC itself promotes on a daily basis are infinitely worse than either of these things, and if that isn't obvious to you it's because military propaganda has caused you to compartmentalize yourself out of an intellectually honest understanding of what war is.

What is interesting to me, however, is the fact that Reid's bosses are protecting her career so adamantly. Both by refusing to fire her, and by steering the conversation into being about her controversial blog posts rather than the fact that she told a spectacular lie in an attempt to cover them up, Reid is being propped up despite this story constantly re-emerging and making new headlines with new embarrassing details, and despite her lack of any discernible talent or redeeming personal characteristics. This tells us something important about what is going on in the world.

It is not difficult to find someone to read from a teleprompter for large amounts of money. What absolutely is difficult is finding someone who is willing to deceive and manipulate to advance the agendas of the privileged few day after day. Who else would be willing to spend all day on Twitter smearing everyone to the left of Hillary Clinton while still claiming to stand on the political left?

Who else would advance the point-blank lie about "17 intelligence agencies" having declared Russia guilty in US election meddling months after that claim had been famously and virally debunked? Who else would publicly claim that Edward Snowden's NSA leaks did not benefit anyone besides Russia? Who else could oligarchs like Comcast CEO Brian L. Roberts, whose company controls MSNBC, count on to consistently advance his agendas?

While it's easy to find someone you can count on to advance one particular lie at one particular time, it is difficult to find someone you can be absolutely certain will lie for you day after day, year after year, through election cycles and administration changes and new war agendas and changing political climates. A lot of the people who used to advance perspectives which ran against the grain of the political orthodoxy at MSNBC like Phil Donahue, Ed Schultz and Dylan Ratigan have vanished from the airwaves never to return, while reporters who consistently keep their heads down and toe the line for the Democratic establishment like Chris Hayes, Rachel Maddow and Joy Reid are richly rewarded and encouraged to remain.

Monday, June 04, 2018

Our Civil War is Actually The Kochtopus vs. The Vampire Squid


economicnoise |  Two or more sides disagree on who runs the country. And they can’t settle the question through elections because they don’t even agree that elections are how you decide who’s in charge.  That’s the basic issue here. Who decides who runs the country? When you hate each other but accept the election results, you have a country. When you stop accepting election results, you have a countdown to a civil war.

The Mueller investigation is about removing President Trump from office and overturning the results of an election. We all know that. But it’s not the first time they’ve done this. The first time a Republican president was elected this century, they said he didn’t really win. The Supreme Court gave him the election. There’s a pattern here.

What do sure odds of the Democrats rejecting the next Republican president really mean? It means they don’t accept the results of any election that they don’t win. It means they don’t believe that transfers of power in this country are determined by elections.

That’s a civil war.

There’s no shooting. At least not unless you count the attempt to kill a bunch of Republicans at a charity baseball game practice. But the Democrats have rejected our system of government.

This isn’t dissent. It’s not disagreement. You can hate the other party. You can think they’re the worst thing that ever happened to the country. But then you work harder to win the next election. When you consistently reject the results of elections that you don’t win, what you want is a dictatorship.
Your very own dictatorship.

The only legitimate exercise of power in this country, according to Democrats, is its own. Whenever Republicans exercise power, it’s inherently illegitimate. The Democrats lost Congress. They lost the White House. So what did they do? They began trying to run the country through Federal judges and bureaucrats. Every time that a Federal judge issues an order saying that the President of the United States can’t scratch his own back without his say so, that’s the civil war.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...