Saturday, June 04, 2016

little things mean everything...,


thescientist |  Little things mean a lot. To any biologist, this time-worn maxim is old news. But it’s worth revisiting. As several articles in this issue of The Scientist illustrate, how researchers define and examine the “little things” does mean a lot.

Consider this month’s cover story, “Noncoding RNAs Not So Noncoding,” by TS correspondent Ruth Williams. Combing the human genome for open reading frames (ORFs), sequences bracketed by start and stop codons, yielded a protein-coding count somewhere in the neighborhood of 24,000. That left a lot of the genome relegated to the category of junk—or, later, to the tens of thousands of mostly mysterious long noncoding RNAs (lncRNAs). But because they had only been looking for ORFs that were 300 nucleotides or longer (i.e., coding for proteins at least 100 amino acids long), genome probers missed so-called short ORFs (sORFs), which encode small peptides. “Their diminutive size may have caused these peptides to be overlooked, their sORFs to be buried in statistical noise, and their RNAs to be miscategorized, but it does not prevent them from serving important, often essential functions, as the micropeptides characterized to date demonstrate,” writes Williams.

How little things work definitely informs another field of life science research: synthetic biology. As the functions of genes and gene networks are sussed out, bioengineers are using the information to design small, synthetic gene circuits that enable them to better understand natural networks. In “Synthetic Biology Comes into Its Own,” Richard Muscat summarizes the strides made by synthetic biologists over the last 15 years and offers an optimistic view of how such networks may be put to use in the future. And to prove him right, just as we go to press, a collaborative group led by one of syn bio’s founding fathers, MIT’s James Collins, has devised a paper-based test for Zika virus exposure that relies on a freeze-dried synthetic gene circuit that changes color upon detection of RNAs in the viral genome. The results are ready in a matter of hours, not the days or weeks current testing takes, and the test can distinguish Zika from dengue virus. “What’s really exciting here is you can leverage all this expertise that synthetic biologists are gaining in constructing genetic networks and use it in a real-world application that is important and can potentially transform how we do diagnostics,” commented one researcher about the test.

RNA-Targeting CRISPR


thescientist |  Much attention paid to the bacterial CRISPR/Cas9 system has focused on its uses as a gene-editing tool. But there are other CRISPR/Cas systems. Researchers from MIT and the National Center for Biotechnology Information (NCBI) last year identified additional CRISPR proteins. One of these proteins, C2c2, appeared to be an RNA-cleaving—rather than a DNA-targeting—enzyme, the researchers reported at the time. Now, the same group has established that C2c2 indeed cleaves single-stranded RNA (ssRNA), providing the first example of a CRISPR/Cas system that exclusively targets RNA. The team’s latest results, published today (June 2) in Science, confirm the diversity of CRISPR systems and point to the possibility of precise in vivo RNA editing.

“This protein does what we expected, performing RNA-guided cleavage of cognate RNA with high specificity, and can be programmed to cleave any RNA at will,” study coauthor Eugene Koonin, of the NCBI and the National Library of Medicine, told The Scientist.

“I am very excited about the paper,” said Gene Yeo, an RNA researcher at the University of California, San Diego, who was not involved in the work. “The community was expecting to find native RNA CRISPR systems, so it’s great that one of these has now been characterized.”

non-coding rna not so non-coding...,


thescientist |  In 2002, a group of plant researchers studying legumes at the Max Planck Institute for Plant Breeding Research in Cologne, Germany, discovered that a 679-nucleotide RNA believed to function in a noncoding capacity was in fact a protein-coding messenger RNA (mRNA).1 It had been classified as a long (or large) noncoding RNA (lncRNA) by virtue of being more than 200 nucleotides in length. The RNA, transcribed from a gene called early nodulin 40 (ENOD40), contained short open reading frames (ORFs)—putative protein-coding sequences bookended by start and stop codons—but the ORFs were so short that they had previously been overlooked. When the Cologne collaborators examined the RNA more closely, however, they found that two of the ORFs did indeed encode tiny peptides: one of 12 and one of 24 amino acids. Sampling the legumes confirmed that these micropeptides were made in the plant, where they interacted with a sucrose-synthesizing enzyme.

Five years later, another ORF-containing mRNA that had been posing as a lncRNA was discovered in Drosophila.2,3 After performing a screen of fly embryos to find lncRNAs, Yuji Kageyama, then of the National Institute for Basic Biology in Okazaki, Japan, suppressed each transcript’s expression. “Only one showed a clear phenotype,” says Kageyama, now at Kobe University. Because embryos missing this particular RNA lacked certain cuticle features, giving them the appearance of smooth rice grains, the researchers named the RNA “polished rice” (pri).

Turning his attention to how the RNA functioned, Kageyama thought he should first rule out the possibility that it encoded proteins. But he couldn’t. “We actually found it was a protein-coding gene,” he says. “It was an accident—we are RNA people!” The pri gene turned out to encode four tiny peptides—three of 11 amino acids and one of 32—that Kageyama and colleagues showed are important for activating a key developmental transcription factor.4

Since then, a handful of other lncRNAs have switched to the mRNA ranks after being found to harbor micropeptide-encoding short ORFs (sORFs)—those less than 300 nucleotides in length. And given the vast number of documented lncRNAs—most of which have no known function—the chance of finding others that contain micropeptide codes seems high.
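
The size cutoff that hid these sORFs can be made concrete with a toy scanner. The sketch below is illustrative only: real ORF finders also search the reverse strand, allow alternative start codons, and apply statistical filters. It simply walks each of the three reading frames, pairing each ATG with the next in-frame stop codon and keeping hits at or below a threshold.

```python
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_nt=9, max_nt=300):
    """Scan one strand in all three frames for ORFs up to max_nt long.

    Returns (start_index, end_index, length_nt) tuples, where end_index
    is the position just past the stop codon. Thresholds are illustrative.
    """
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == START:
                j = i + 3
                while j + 3 <= len(seq):
                    if seq[j:j+3] in STOPS:
                        length = j + 3 - i
                        if min_nt <= length <= max_nt:
                            orfs.append((i, j + 3, length))
                        break
                    j += 3
            i += 3
    return orfs

# A 12-amino-acid micropeptide needs only a 39-nt ORF (12 codons + stop),
# far below the classic 300-nt cutoff:
toy = "CC" + "ATG" + "GCT" * 11 + "TAA" + "GG"
print(find_orfs(toy))  # [(2, 41, 39)]
```

Raising `min_nt` to 300 — the historical convention described above — makes this 39-nt ORF vanish from the output, which is exactly how the micropeptides stayed hidden.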

The hunt for these tiny treasures is now on, but it’s a challenging quest. After all, there are good reasons why these itty-bitty peptides and their codes went unnoticed for so long.

Friday, June 03, 2016

The Genome Project - Write


NYTimes |  “By focusing on building the 3Gb of human DNA, HGP-write would push current conceptual and technical limits by orders of magnitude and deliver important scientific advances,” they write, referring to three gigabases, the three billion letters in the human genome.

Scientists already can change DNA in organisms or add foreign genes, as is done to make medicines like insulin or genetically modified crops. New “genome editing” tools, like one called Crispr, are making it far easier to re-engineer an organism’s DNA blueprint.

But George Church, a professor of genetics at Harvard Medical School and one of the organizers of the new project, said that if the changes desired are extensive, at some point it becomes easier to synthesize the needed DNA from scratch.

“Editing doesn’t scale very well,” he said. “When you have to make changes to every gene in the genome it may be more efficient to do it in large chunks.”

Besides Dr. Church, the other organizers of the project are Jef Boeke, director of the Institute for Systems Genetics at NYU Langone Medical Center; Andrew Hessel, a futurist at the software company Autodesk; and Nancy J. Kelley, who works raising money for projects. The paper in Science lists a total of 25 authors, many of them involved in DNA engineering.

Autodesk, which has given $250,000 to the project, is interested in selling software to help biologists design DNA sequences to make organisms perform particular functions. Dr. Church is a founder of Gen9, a company that sells made-to-order strands of DNA.

Dr. Boeke of N.Y.U. is leading an international project to synthesize the complete genome of yeast, which has 12 million base pairs. It would be the largest genome synthesized to date, though still much smaller than the human genome.

Is Everything You Think You Know About AI Wrong?


WaPo |  consumers are already seeing our machine learning research reflected in the sudden explosion of digital personal assistants like Siri, Alexa and Google Now — technologies that are very good at interpreting voice-based requests but aren't capable of much more than that. These "narrow AI" systems have been designed with a specific purpose in mind: to help people do the things regular people do, whether it's looking up the weather or sending a text message.

Narrow, specialized AI is also what companies like IBM have been pursuing. It includes, for example, algorithms to help radiologists pick out tumors much more accurately by "learning" all the cancer research we've ever done and by "seeing" millions of sample X-rays and MRIs. These robots act much more like glorified calculators — they can ingest way more data than a single person could hope to do with his or her own brain, but they still operate within the confines of a specific task like cancer diagnosis. These robots are not going to be launching nuclear missiles anytime soon. They wouldn't know how, or why. And the more pervasive this type of AI becomes, the more we'll understand about how best to build the next generation of robots.

So who is going to lose their job?
Partly because we're better at designing these limited AI systems, some experts predict that high-skilled workers will adapt to the technology as a tool, while lower-skill jobs are the ones that will see the most disruption. When the Obama administration studied the issue, it found that as many as 80 percent of jobs currently paying less than $20 an hour might someday be replaced by AI.

"That's over a long period of time, and it's not like you're going to lose 80 percent of jobs and not reemploy those people," Jason Furman, a senior economic advisor to President Obama, said in an interview. "But [even] if you lose 80 percent of jobs and reemploy 90 percent or 95 percent of those people, it's still a big jump up in the structural number not working. So I think it poses a real distributional challenge."
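
Furman's point is back-of-the-envelope arithmetic. Treating the figures quoted above purely as illustrative inputs (80 percent of such jobs displaced, 90 to 95 percent of displaced workers reemployed), the residue is still several percent of those workers:

```python
# Illustrative arithmetic only, using the percentages quoted above.
displaced = 0.80  # share of low-wage jobs someday replaced by AI
for reemployed in (0.90, 0.95):
    not_working = displaced * (1 - reemployed)
    print(f"reemploy {reemployed:.0%} -> {not_working:.0%} structurally not working")
# reemploy 90% -> 8% structurally not working
# reemploy 95% -> 4% structurally not working
```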

Policymakers will need to come up with inventive ways to meet this looming jobs problem. But the same estimates also hint at a way out: Higher-earning jobs stand to be less negatively affected by automation. Compared to the low-wage jobs, roughly a third of those who earn between $20 and $40 an hour are expected to fall out of work due to robots, according to Furman. And only a sliver of high-paying jobs, about 5 percent, may be subject to robot replacement.

Those numbers might look very different if researchers were truly on the brink of creating sentient AI that can really do all the same things a human can. In this hypothetical scenario, even high-skilled workers might have more reason to fear. But the fact that so much of our AI research right now appears to favor narrow forms of artificial intelligence at least suggests we could be doing a lot worse.

Thursday, June 02, 2016

tomorrowland



genes: convenient tokens of our time


ecodevoevo |  Our culture, like any culture, creates symbols to use as tokens as we go about our lives. Tokens are reassuring or explanatory symbols, and we naturally use them in the manipulation of the various resources that culture is often about. Nowadays, a central token is the gene.

Genes are proffered as the irrefutable ubiquitous cause of things, the salvation, the explanation, in ways rather similar to the way God and miracles are proffered by religion.  Genes conveniently lead to manipulation by technology, and technology sells in our industrial culture. Genes are specific rather than vague, are enumerable, can be seen as real core 'data' to explain the world.  Genes are widely used as ultimate blameworthy causes, responsible for disease which comes to be defined as what happens when genes go 'wrong'.  Being literally unseen, like angels, genes can take on an aura of pervasive power and mystery.  The incantation by scientists is that if we can only be enabled to find them we can even cure them (with CRISPR or some other promised panacea), exorcising their evil. All of this invocation of fundamental causal tokens is particulate enough to be marketable for grants and research proposals, great for publishing in journals and for news media to gawk at in wonder. Genes provide impressively mysterious tokens for scientists to promise almost to create miracles by manipulating.  Genes stand for life's Book of Truth, much as sacred texts have traditionally done and, for many, still do.

Genes provide fundamental symbolic tokens in theories of life--its essence, its evolution, of human behavior, of good and evil traits, of atoms of causation from which everything follows. They lurk in the background, responsible for all good and evil.  So in our age in human history, it is not surprising that reports of finding genes 'for' this or that have unbelievable explanatory panache.  It's not a trivial aspect of this symbolic role that people (including scientists) have to take others' word for what they claim as insights. 


genes without prominence...,


inference-review |  The cell is a complex dynamic system in which macromolecules such as DNA and the various proteins interact within a free energy flux provided by nutrients. Its phenotypes can be represented by quasi-stable attractors embedded in a multi-dimensional state space whose dimensions are defined by the activities of the cell’s constituent proteins.1
 
This is the basis for the dynamical model of the cell.

The current molecular genetic or machine model of the cell, on the other hand, is predicated on the work of Gregor Mendel and Charles Darwin. Mendel framed the laws of inheritance on the basis of his experimental work on pea plants. The first law states that inheritance is a discrete and not a blending process: crossing purple and white flowered varieties produces some offspring with white and some with purple flowers, but generally not intermediately colored offspring.2 Mendel concluded that whatever was inherited had a material or particulate nature; it could be segregated.3

According to the machine cell model, those particles are genes or sequences of nucleobases in the genomic DNA. They constitute Mendel’s units of inheritance. Gene sequences are transcribed, via messenger RNA, to proteins, which are folded linear strings of amino acids called peptides. The interactions between proteins are responsible for phenotypic traits. This assumption relies on two general principles affirmed by Francis Crick in 1958, namely the sequence hypothesis and the central dogma.4 The sequence hypothesis asserts that the sequence of bases in the genomic DNA determines the sequence of amino acids in the peptide and the three-dimensional structure of the folded peptide. The central dogma states that the sequence hypothesis represents a flow of information from DNA to the proteins and rules out a flow in reverse.
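
The sequence hypothesis and the central dogma can be mirrored in a few lines: information flows one way, from DNA sequence to peptide sequence. The sketch below is deliberately minimal — its codon table covers only the handful of codons used in the example, not the full genetic code.

```python
# One-way flow of the central dogma:
# DNA -> (transcription) -> mRNA -> (translation) -> peptide.
# Tiny illustrative codon table; the real table has 64 entries.
CODON_TABLE = {
    "AUG": "M", "UUU": "F", "GGU": "G", "GCU": "A",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def transcribe(dna_coding_strand):
    """Transcription: the mRNA mirrors the coding strand, with U for T."""
    return dna_coding_strand.upper().replace("T", "U")

def translate(mrna):
    """Translation: read codons in order until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i+3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

mrna = transcribe("ATGTTTGGTGCTTAA")
print(mrna)             # AUGUUUGGUGCUUAA
print(translate(mrna))  # MFGA
```

Note there is no function mapping a peptide back to DNA — the dogma's "no reverse flow" is baked into the structure of the sketch.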

In 1961, the American biologist Christian Anfinsen demonstrated that when the enzyme ribonuclease was denatured, it lost its activity, but regained it on re-naturing. Anfinsen concluded from the kinetics of re-naturation that the amino acid sequence of the peptide determined how the peptide folded.5 He did not cite Crick’s 1958 paper or the sequence hypothesis, although he had apparently read the first and confirmed the second.

The central dogma and the sequence hypothesis proved to be wonderful heuristic tools with which to conduct bench work in molecular biology.

The machine model recognizes cells to be highly regulated entities; genes are responsible for that regulation through gene regulatory networks (GRNs).6 Gene sequences provide all the information needed to build and regulate the cell.

Both a naturalist and an experimentalist, Darwin observed that breeding populations exhibit natural variations. Limited resources mean a struggle for existence. Individuals become better and better adapted to their environments. This process is responsible for both small adaptive improvements and dramatic changes. Darwin insisted evolution was, in both cases, gradual, and predicted that intermediate forms between species should be found both in the fossil record and in existing populations. Today, these ideas are part of the modern evolutionary synthesis, a term coined by Julian Huxley in 1942.7 Like the central dogma, it has been subject to controversy, despite its early designation as the set of principles under which all of biology is conducted.8

The modern synthesis, we now understand, does not explain trans-generational epigenetic inheritance, consciousness, and niche construction.9 It is possible that the concept of the gene and the claim that evolution depends on genetic diversity may both need to be modified or replaced.

This essay is a step towards describing biology as a science founded on the laws of physics. It is a step in the right direction.

Wednesday, June 01, 2016

the constructal law in intelligence?



Stuart Kauffman's 1993 book, Origins of Order, is a technical treatise on his life's work in Mathematical Biology. Kauffman greatly extends Alan Turing's early work in the field. The intended audience is other mathematical and theoretical biologists, and the book is chock-full of advanced mathematics. Of particular note, Origins of Order seems to be Kauffman's only published work in which he states his experimental results about the interconnection between complex systems and neural networks.

Kauffman explains that a complex system tuned with particular parameters is a neural network. I cannot overstate the importance of the preceding sentence. The implication is that one basis for intelligence, biological neural networks, can spontaneously self-generate given the correct starting parameters. Kauffman provides the mathematics to do this, discusses his experimental results, and points out that the parameters in question are an attractor state.
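
Kauffman's actual mathematics is far richer than any toy can convey, but the flavor of his model — a random Boolean network whose deterministic dynamics must eventually fall into a cyclic attractor — can be sketched briefly. Everything below (node count, wiring, seeds) is an arbitrary illustrative choice, not his experimental setup.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Kauffman-style random Boolean network: n nodes, each reading k inputs
    through a randomly chosen Boolean function (a lookup table of 2**k bits)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        out = []
        for node in range(n):
            idx = 0
            for src in inputs[node]:
                idx = (idx << 1) | state[src]
            out.append(tables[node][idx])
        return tuple(out)
    return step

def attractor_length(step, state):
    """Iterate until a state repeats; the repeating cycle is the attractor."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

step = random_boolean_network(n=12, k=2, seed=42)
start = tuple(random.Random(1).randint(0, 1) for _ in range(12))
print("attractor length:", attractor_length(step, start))
```

Because the state space is finite and the update rule deterministic, every trajectory is guaranteed to land on some cycle — the attractors that, in Kauffman's account, play the role of stable cell types or tuned network behaviors.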

What's the meaning of life? Physics.


nationalgeographic |  According to your book, physics describes the actions or tendencies of every living thing—and inanimate ones as well. Does that mean we can unite all behavior under physics?

Absolutely. Our narrow definition of the discipline is something that’s happened in the past hundred years, thanks to the immense impact of Albert Einstein and atomic physics and relativity at the turn of the [20th] century.

But we need to go back farther. In Latin, nature—physics—means “everything that happens.”

One thing that came directly from Charles Darwin is that humans are part of nature, along with all the other animate beings. Therefore all the things that we make—our tools, our homes, our technologies—are natural as well. It’s all part of the same thing.

In your magazine and on your TV channel, we see many animals doing this—extending their reach with tools, with intelligence, with social organization. Everything is naturally interconnected.

Your new book is premised on a law of physics that you formulated in 1996. The Constructal Law says there’s a universal evolutionary tendency toward design in nature, because everything is composed of systems that change and evolve to flow more easily.

That’s correct. But I would specify and say that the tendency is toward evolving freely—changing on the go in order to provide greater and greater ease of movement. That’s physics, stage four—more precise, more specific expressions of the same idea.

Flow systems are everywhere. They describe the ways that animals move and migrate, the ways that river deltas form, the ways that people build fires. In each case, they evolve freely to reduce friction and to flow better—to improve themselves and minimize their mistakes or imperfections. Blood flow and water flow essentially evolve the same way.

Why does life exist?


quantamagazine |  Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Tuesday, May 31, 2016

your brain is not a computer...,


aeon |  No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
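
The encoding the author describes is easy to verify directly. Here are the actual bit patterns of those three bytes:

```python
# The three bytes the passage describes, written out as bit patterns:
for ch in "dog":
    print(ch, format(ord(ch), "08b"))
# d 01100100
# o 01101111
# g 01100111

# Side by side, those three bytes are the word itself:
print(bytes("dog", "ascii"))  # b'dog'
```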

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

the minecraft generation


NYTimes |  Since its release seven years ago, Minecraft has become a global sensation, captivating a generation of children. There are over 100 million registered players, and it’s now the third-best-selling video game in history, after Tetris and Wii Sports. In 2014, Microsoft bought Minecraft — and Mojang, the Swedish game studio behind it — for $2.5 billion.

There have been blockbuster games before, of course. But as Jordan’s experience suggests — and as parents peering over their children’s shoulders sense — Minecraft is a different sort of phenomenon.

For one thing, it doesn’t really feel like a game. It’s more like a destination, a technical tool, a cultural scene, or all three put together: a place where kids engineer complex machines, shoot videos of their escapades that they post on YouTube, make art and set up servers, online versions of the game where they can hang out with friends. It’s a world of trial and error and constant discovery, stuffed with byzantine secrets, obscure text commands and hidden recipes. And it runs completely counter to most modern computing trends. Where companies like Apple and Microsoft and Google want our computers to be easy to manipulate — designing point-and-click interfaces under the assumption that it’s best to conceal from the average user how the computer works — Minecraft encourages kids to get under the hood, break things, fix them and turn mooshrooms into random-number generators. It invites them to tinker.

In this way, Minecraft culture is a throwback to the heady early days of the digital age. In the late ’70s and ’80s, the arrival of personal computers like the Commodore 64 gave rise to the first generation of kids fluent in computation. They learned to program in Basic, to write software that they swapped excitedly with their peers. It was a playful renaissance that eerily parallels the embrace of Minecraft by today’s youth. As Ian Bogost, a game designer and professor of media studies at Georgia Tech, puts it, Minecraft may well be this generation’s personal computer.

At a time when even the president is urging kids to learn to code, Minecraft has become a stealth gateway to the fundamentals, and the pleasures, of computer science. Those kids of the ’70s and ’80s grew up to become the architects of our modern digital world, with all its allures and perils. What will the Minecraft generation become?

“Children,” the social critic Walter Benjamin wrote in 1924, “are particularly fond of haunting any site where things are being visibly worked on. They are irresistibly drawn by the detritus generated by building, gardening, housework, tailoring or carpentry.”

blockchain 'smart contracts' to disrupt lawyers


afr |  Among the blockchain cognoscenti, everyone is talking about Ethereum.

A rival blockchain and virtual currency to bitcoin, Ethereum allows for the programming of "smart contracts", or computer code which facilitates or enforces a set of rules. Ethereum was first described by the programmer Vitalik Buterin in late 2013; the first full public version of the platform was released in February.

Commercial lawyers are watching the arrival of Ethereum closely given the potential for smart contracts in the future to disintermediate their highly lucrative role in drafting and exchanging paper contracts. Smart contracts are currently being used to digitise business rules, but may soon move to codify legal agreements.

The innovation has been made possible because Ethereum provides developers with a more liberal "scripting language" than bitcoin. This is allowing companies to create their own private blockchains and build applications. Already, apps for music distribution, sports betting and a new type of financial auditing are being tested.
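
A smart contract is, at bottom, state plus methods that refuse rule-breaking transactions. Real Ethereum contracts are written in the platform's own scripting languages and executed on the blockchain; the toy escrow below only mirrors the concept in Python, and every name in it is invented for illustration.

```python
# Toy "contract": rules enforced by code rather than by an intermediary.
# This is a conceptual sketch, not Ethereum code.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self, sender, value):
        # Rule: only the buyer may fund, once, with the exact amount.
        if sender != self.buyer or value != self.amount or self.funded:
            raise ValueError("deposit rejected")
        self.funded = True

    def release(self, sender):
        # Rule: only the buyer can release funds, and only after funding.
        if sender != self.buyer or not self.funded or self.released:
            raise ValueError("release rejected")
        self.released = True
        return (self.seller, self.amount)

deal = Escrow(buyer="alice", seller="bob", amount=10)
deal.deposit("alice", 10)
print(deal.release("alice"))  # ('bob', 10)
```

The disruption lawyers anticipate follows from the structure: once the rules live in code, there is no drafting, exchanging or adjudicating of a paper counterpart — the method either executes or raises.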

Some of the world's largest technology companies, from Microsoft to IBM, are lining up to work with Ethereum, while the R3 CEV banking consortium has also been trialling its technology as it tests blockchain-style applications for the banking industry including trading commercial paper. Banks are interested in blockchain because distributed ledgers can remove intermediaries and speed up transactions, thereby reducing costs. But if banks move business to blockchains in the future, financial services lawyers will need to begin re-drafting into digital form the banking contracts that underpin the capital markets.

The global director of IBM Blockchain Labs, Nitin Gaur, who was in Sydney last week, says he is a "huge fan" of Ethereum, pointing to its "rich ecosystem of developers". He predicts law to be among the industries disrupted by the technology.

Monday, May 30, 2016

the selfish gene turns forty...,


theguardian |  It’s 40 years since Richard Dawkins suggested, in the opening words of The Selfish Gene, that, were an alien to visit Earth, the question it would pose to judge our intellectual maturity was: “Have they discovered evolution yet?” We had, of course, by the grace of Charles Darwin and a century of evolutionary biologists who had been trying to figure out how natural selection actually worked. In 1976, The Selfish Gene became the first real blockbuster popular science book, a poetic mark in the sand to the public and scientists alike: this idea had to enter our thinking, our research and our culture.

The idea was this: genes strive for immortality, and individuals, families, and species are merely vehicles in that quest. The behaviour of all living things is in service of their genes; hence, metaphorically, they are selfish. Before this, it had been proposed that natural selection was honing the behaviour of living things to promote the continuance through time of the individual creature, or family, or group or species. But in fact, Dawkins said, it was the gene itself that was trying to survive, and it just so happened that the best way for it to survive was in concert with other genes in the impermanent husk of an individual.

This gene-centric view of evolution also began to explain one of the oddities of life on Earth – the behaviour of social insects. What is the point of a drone bee, doomed to remain childless and in the service of a totalitarian queen? Suddenly it made sense that, with the gene itself steering evolution, the fact that the drone shared its DNA with the queen meant that its servitude guaranteed not the individual's survival, but the endurance of the genes they share. Or as the biologist JBS Haldane put it: “Would I lay down my life to save my brother? No, but I would to save two brothers or eight cousins.”
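Haldane's quip is simple arithmetic over coefficients of relatedness, the idea later formalised as Hamilton's rule. The sketch below (an illustrative assumption, not from the article) works it out: in a diploid species a full sibling carries half your genes on average and a first cousin one eighth, so sacrificing yourself breaks even only when the rescued relatives carry at least one full genome's worth of your genes.

```python
# Average coefficients of relatedness in a diploid species.
RELATEDNESS = {"self": 1.0, "sibling": 0.5, "cousin": 0.125}

def gene_copies_saved(relation, count):
    """Expected fraction of your genome preserved by saving `count` relatives."""
    return RELATEDNESS[relation] * count

def sacrifice_breaks_even(relation, count):
    """Haldane's test: does saving these relatives preserve at least as many
    of your genes as saving yourself (cost = 1.0 genome) would?"""
    return gene_copies_saved(relation, count) >= RELATEDNESS["self"]
```

Two brothers (2 × 0.5 = 1.0) or eight cousins (8 × 0.125 = 1.0) exactly cover the cost of one self, which is the whole joke of the quote.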

These ideas were espoused by only a handful of scientists in the middle decades of the 20th century – notably Bob Trivers, Bill Hamilton, John Maynard Smith and George Williams. In The Selfish Gene, Dawkins did not merely recapitulate them; he made an impassioned argument for the reality of natural selection. Previous attempts to explain the mechanics of evolution had been academic and rooted in maths. Dawkins walked us through it in prose. Many great popular science books followed – Stephen Hawking’s A Brief History of Time, Steven Pinker’s The Blank Slate, and, currently, The Vital Question by Nick Lane.

For many of us, The Selfish Gene was our first proper taste of evolution. I don’t remember it being a controversial subject in my youth. In fact, I don’t remember it being taught at all. Evolution, Darwin and natural selection were largely absent from my secondary education in the late 1980s. The national curriculum, introduced in the UK in 1988, included some evolution, but before 1988 its presence in schools was far from universal. As an aside, in my opinion the subject is taught bafflingly minimally and late in the curriculum even today; evolution by natural selection is crucial to every aspect of the living world. In the words of the geneticist Theodosius Dobzhansky: “Nothing in biology makes sense except in the light of evolution.”


Sunday, May 29, 2016

a guide to being human in the 21st century


themonkeytrap |  I currently teach a class called Reality 101 at the University of Minnesota. It is a 15-week exploration of ‘the human ecosystem’ – what drives us, what powers us, and what we are doing. Only when viewed through such an ecological lens can ‘better’ choices be made by individuals, who in turn impact societies. Our situation cannot be described in an hour, but this was my latest and best attempt. The talk is 60 percent new relative to prior talks: I start with brief summaries of energy, economy, behavior, and environment, followed by a listing of 25 of the current ‘flawed assumptions’ underpinning modern human culture. I close with a list of 20 new ways of thinking about one’s future – for consideration, and possibly to work toward – for a young person alive this century. It is my opinion that we need more pro-social, pro-future players on the gameboard, whatever their beliefs and priorities might be. Two books should be finished this year, and I will post a note here about progress.

Saturday, May 28, 2016

Why Granny Goodness is Imperial Corporatism's Next Choice for CEO





telesur |  The 2016 election campaign is remarkable not only for the rise of Donald Trump and Bernie Sanders but also for the resilience of an enduring silence about a murderous self-bestowed divinity. A third of the members of the United Nations have felt Washington's boot, overturning governments, subverting democracy, imposing blockades and boycotts. Most of the presidents responsible have been liberal – Truman, Kennedy, Johnson, Carter, Clinton, Obama.

The breathtaking record of perfidy is so mutated in the public mind, wrote the late Harold Pinter, that it “never happened … Nothing ever happened. Even while it was happening it wasn't happening. It didn't matter. It was of no interest. It didn't matter …” Pinter expressed a mock admiration for what he called “a quite clinical manipulation of power worldwide while masquerading as a force for universal good. It's a brilliant, even witty, highly successful act of hypnosis.”

Take Obama. As he prepares to leave office, the fawning has begun all over again. He is “cool.” One of the more violent presidents, Obama gave full rein to the Pentagon war-making apparatus of his discredited predecessor. He prosecuted more whistleblowers – truth-tellers – than any president. He pronounced Chelsea Manning guilty before she was tried. Today, Obama runs an unprecedented worldwide campaign of terrorism and murder by drone.

History was declared over, class was abolished and gender promoted as feminism; lots of women became New Labour MPs. They voted on the first day of Parliament to cut the benefits of single parents, mostly women, as instructed. A majority voted for an invasion that produced 700,000 Iraqi widows.

The equivalent in the US are the politically correct warmongers at the New York Times, Washington Post, and network TV who dominate political debate. I watched a furious debate on CNN about Trump's infidelities. It was clear, they said, a man like that could not be trusted in the White House. No issues were raised. Nothing on the 80 percent of Americans whose income has collapsed to 1970s levels. Nothing on the drift to war. The received wisdom seems to be “hold your nose” and vote for Clinton: anyone but Trump. That way, you stop the monster and preserve a system gagging for another war.

granny goodness won't say how much the vampire squid put into her son-in-law...,


theintercept |  When Hillary Clinton’s son-in-law sought funding for his new hedge fund in 2011, he found financial backing from one of the biggest names on Wall Street: Goldman Sachs chief executive Lloyd Blankfein.

The fund, called Eaglevale Partners, was founded by Chelsea Clinton’s husband, Marc Mezvinsky, and two of his partners. Blankfein not only personally invested in the fund, but allowed his association with it to be used in the fund’s marketing.

The investment did not turn out to be a savvy business decision. Earlier this month, Mezvinsky was forced to shutter one of the investment vehicles he launched under Eaglevale, called Eaglevale Hellenic Opportunity, after losing 90 percent of its money betting on the Greek recovery. The flagship Eaglevale fund has also lost money, according to the New York Times.

There has been minimal reporting on the Blankfein investment in Eaglevale Partners, which is a private fund that faces few disclosure requirements. At a campaign rally in downtown San Francisco on Thursday, I attempted to ask Hillary Clinton if she knew the amount that Blankfein invested in her son-in-law’s fund.

why young people are right about hillary clinton...,


rollingstone |  This is why her shifting explanations and flippant attitude about the email scandal are almost more unnerving than the ostensible offense. She seems confident that just because her detractors are politically motivated, as they always have been, that they must be wrong, as they often were.

But that's faulty thinking. My worry is that Democrats like Hillary have been saying, "The Republicans are worse!" for so long that they've begun to believe it excuses everything. It makes me nervous to see Hillary supporters like law professor Stephen Vladeck arguing in the New York Times that the real problem wasn't anything Hillary did, but that the Espionage Act isn't "practical."

If you're willing to extend the "purity" argument to the Espionage Act, it's only a matter of time before you get in real trouble. And even if it doesn't happen this summer, Democrats may soon wish they'd picked the frumpy senator from Vermont who probably checks his restaurant bills to make sure he hasn't been undercharged.

But in the age of Trump, winning is the only thing that matters, right? In that case, there's plenty of evidence suggesting Sanders would perform better against a reality TV free-coverage machine like Trump than would Hillary Clinton. This would largely be due to the passion and energy of young voters.

Young people don't see the Sanders-Clinton race as a choice between idealism and incremental progress. The choice they see is between an honest politician, and one who is so profoundly a part of the problem that she can't even see it anymore.

They've seen in the last decades that politicians who promise they can deliver change while also taking the money, mostly just end up taking the money.

And they're voting for Sanders because his idea of an entirely voter-funded electoral "revolution" that bars corporate money is, no matter what its objective chances of success, the only practical road left to break what they perceive to be an inexorable pattern of corruption.

Young people aren't dreaming. They're thinking. And we should listen to them.

corporate america bought hillary clinton for $21million


NYPost |  "Follow the money.” That telling phrase, which has come to summarize the Watergate scandal, has been a part of the lexicon since 1976. It’s shorthand for political corruption: At what point do “contributions” become bribes, “constituent services” turn into quid pro quos and “charities” become slush funds?

Ronald Reagan was severely criticized in 1989 when, after he left office, he was paid $2 million for a couple of speeches in Japan. “The founding fathers would have been stunned that an occupant of the highest office in this land turned it into bucks,” sniffed a Columbia professor.

So what would Washington and Jefferson make of Hillary Rodham Clinton? Mandatory financial disclosures released this month show that, in just the two years from April 2013 to March 2015, the former first lady, senator and secretary of state collected $21,667,000 in “speaking fees,” not to mention the cool $5 mil she corralled as an advance for her 2014 flop book, “Hard Choices.”

Throw in the additional $26,630,000 her ex-president husband hoovered up in personal-appearance “honoraria,” and the nation can breathe a collective sigh of relief that the former first couple — who, according to Hillary, were “dead broke” when they left the White House in 2001 with some of the furniture in tow — can finally make ends meet.

No wonder Donald Trump calls her “crooked Hillary.”

A look at Mrs. Clinton’s speaking venues and the whopping sums she’s received since she left State gives us an indication who’s desperate for a place at the trough — and whom another Clinton administration might favor.

First off, there’s Wall Street and the financial-services industry. Democratic champions of the Little Guy are always in bed with the Street — they don’t call Barack Obama “President Goldman Sachs” for nothing, but Mrs. Clinton has room for Bob and Carol and Ted and Alice and their 10 best friends. Multiple trips to Goldman Sachs. Morgan Stanley. Deutsche Bank. Kohlberg Kravis Roberts. UBS Wealth Management.

As the character of Che Guevara sings in “Evita”: “And the money kept rolling in.” And all at the bargain price of $225,000 a pop . . . to say what? We don’t know, because Hillary won’t release the transcripts.

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...