
Sunday, September 19, 2021

The Selfish Gene Is Actually A Crippling, Zero-Sum Theory Of Evolution

aeon  |  In late summer of 1976, two colleagues at Oxford University Press, Michael Rodgers and Richard Charkin, were discussing a book on evolution soon to be published. It was by a first-time author, a junior zoology don in town, and had been given an initial print run of 5,000 copies. As the two publishers debated the book’s fate, Charkin confided that he doubted it would sell more than 2,000 copies. Rodgers, the editor who had acquired the manuscript, proposed a bet: he would pay Charkin £1 for every 1,000 copies the book fell short of 5,000, and Charkin would buy Rodgers a pint of beer for every 1,000 copies over 5,000. Today the book is one of OUP’s most successful titles, having sold more than a million copies in dozens of languages across four editions. That book was Richard Dawkins’s The Selfish Gene, and Charkin is ‘holding back payment in the interests of [Rodgers’s] health and wellbeing’.

In the decades following that bet, The Selfish Gene has come to play a unique role in evolutionary biology, simultaneously influential and contentious. At the heart of the disagreements lay the book’s advocacy of what has become known as the gene’s-eye view of evolution. To its supporters, the gene’s-eye view presents an unrivalled introduction to the logic of natural selection. To its critics, ‘selfish genes’ is a dated metaphor that paints a simplistic picture of evolution while failing to incorporate recent empirical findings. To me, it is one of biology’s most powerful thinking tools. However, as with all tools, in order to make the most of it, you must understand what it was designed to do.

When Charles Darwin first introduced his theory of evolution by natural selection in 1859, he had in mind a theory about individual organisms. In Darwin’s telling, individuals differ in how long they live and how good they are at attracting mates; if the traits that enhance these strengths are heritable, they will become more abundant over time. The gene’s-eye view discussed by Dawkins introduces a shift in perspective that might seem subtle at first, but which comes with rather radical implications.

The idea emerged from the tenets of population genetics in the 1920s and ’30s, which showed that evolution could be described mathematically as changes over time in the frequency of genetic variants, known as alleles. Population genetics was an integral part of the modern synthesis of evolution: it married Darwin’s idea of gradual evolutionary change with a working theory of inheritance, based on Gregor Mendel’s discovery that genes are transmitted as discrete entities. Under this framework, evolution is captured by tracking the rise and fall of alleles in a population over time.
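
To make that concrete, here is a minimal sketch (my illustration, not something from the article) of selection in the language of population genetics: one locus, two alleles, and the standard haploid selection update for the allele frequency. The fitness values are invented for the example.

```python
# Minimal haploid selection model: evolution as change in allele frequency.
# Fitness values w_A and w_a are illustrative, not from the article.

def next_frequency(p, w_A=1.05, w_a=1.0):
    """Frequency of allele A after one generation of selection."""
    mean_fitness = p * w_A + (1 - p) * w_a
    return p * w_A / mean_fitness

p = 0.01                      # allele A starts rare
for generation in range(500):
    p = next_frequency(p)
print(f"frequency of A after 500 generations: {p:.3f}")  # climbs toward 1.0
```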

The gene’s-eye view took this a step further, arguing that biologists are always better off thinking about evolution and natural selection in terms of genes rather than organisms. This is because organisms lack the evolutionary longevity required to be the central unit in evolutionary explanations. They are too temporary on an evolutionary timescale, a unique combination of genes and environment – here in this generation but gone in the next. Genes, in contrast, pass on their structure intact from one generation to the next (mutation and recombination aside), and so only they possess the required longevity. Visible traits, the argument goes, such as the particular fur of a polar bear or the flower of an orchid (known as phenotypes), exist not for the good of the organism but for the good of the genes. The genes, and not the organism, are the ultimate beneficiaries of natural selection.

This approach has also been called selfish-gene thinking, because natural selection is conceptualised as a struggle between genes for transmission to the next generation, typically waged through how they affect the fitness of the organism in which they reside. In an after-dinner speech at a conference banquet, Dawkins once summarised the key argument in limerick form:

An itinerant selfish gene
Said: ‘Bodies a-plenty I’ve seen.
You think you’re so clever,
But I’ll live for ever.
You’re just a survival machine.’

In this telling, evolution is the process by which immortal selfish genes housed in transient organisms struggle for representation in future generations. Moving beyond the poetry and making the point more formally, Dawkins argued that evolution involves two entities playing complementary roles: replicators and vehicles. Replicators are the entities of which copies are made and which are transmitted faithfully from one generation to the next; in practice, this usually means genes. Vehicles are where replicators are bundled together: the vehicle is the entity that actually comes into contact with the external environment and interacts with it. The most common kind of vehicle is the organism, such as an animal or a plant, though it can also be a cell, as in the case of cancer.
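
As a rough sketch of the replicator/vehicle distinction (a toy model of mine, not Dawkins’s formalism), one can simulate vehicles that last a single generation while the genes they carry are copied onward in proportion to vehicle fitness:

```python
# Toy replicator/vehicle simulation. Vehicles (organisms) are discarded each
# generation; replicators (genes) persist by being copied into new vehicles
# in proportion to the fitness of the vehicle they helped build.
# The fitness rule is hypothetical: each '1' allele adds a small advantage.
import random

def fitness(genome):
    return 1.0 + 0.1 * sum(genome)

# 100 vehicles, each carrying 5 binary genes.
population = [[random.randint(0, 1) for _ in range(5)] for _ in range(100)]

for generation in range(50):
    weights = [fitness(g) for g in population]
    # The old vehicles vanish; their genomes seed the next generation.
    population = [list(g) for g in random.choices(population, weights=weights, k=100)]

# The '1' alleles, favoured by selection, rise in frequency across generations.
print("frequency of '1' alleles:", sum(map(sum, population)) / 500)
```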

Cell Signaling Neither Random Nor Chaotic - Just Exceedingly Complicated

quanta |  Back in 2000, when Michael Elowitz of the California Institute of Technology was still a grad student at Princeton University, he accomplished a remarkable feat in the young field of synthetic biology: He became one of the first to design and demonstrate a kind of functioning “circuit” in living cells. He and his mentor, Stanislas Leibler, inserted a suite of genes into Escherichia coli bacteria that induced controlled swings in the cells’ production of a fluorescent protein, like an oscillator in electronic circuitry.
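
For readers curious what such a circuit looks like, the Elowitz-Leibler design, now known as the repressilator, is a ring of three genes in which each represses the next. A simplified, protein-only version of its dynamics can be sketched in a few lines (the parameter values are illustrative, not the published ones):

```python
# Simplified protein-only repressilator: three genes in a ring, each
# repressing the next via a Hill function. Integrated with Euler's method.
# beta (max synthesis rate) and n (Hill coefficient) are illustrative;
# the ring only oscillates when repression is sufficiently steep (here n = 4).

beta, n, dt = 10.0, 4.0, 0.01
p = [1.0, 1.5, 2.0]           # protein levels; a slightly asymmetric start

for step in range(20000):
    # gene i is repressed by the protein of gene (i - 1) % 3
    dp = [beta / (1.0 + p[i - 1] ** n) - p[i] for i in range(3)]
    p = [p[i] + dt * dp[i] for i in range(3)]
    if step % 2000 == 0:
        print(f"t={step * dt:6.1f}  " + "  ".join(f"p{i}={x:5.2f}" for i, x in enumerate(p)))
```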

It was a brilliant illustration of what the biologist and Nobel laureate François Jacob called the “logic of life”: a tightly controlled flow of information from genes to the traits that cells and other organisms exhibit.

But this lucid vision of circuit-like logic, which worked so elegantly in bacteria, too often fails in more complex cells. “In bacteria, single proteins regulate things,” said Angela DePace, a systems biologist at Harvard Medical School. “But in more complex organisms, you get many proteins involved in a more analog fashion.”

Recently, by looking closely at the protein interactions within one key developmental pathway that shapes the embryos of humans and other complex animals, Elowitz and his co-workers have caught a glimpse of what the logic of complex life is really like. This pathway is a riot of molecular promiscuity that would make a libertine blush, where the component molecules can unite in many different combinations. It might seem futile to hope that this chaotic dance could convey any coherent signal to direct the fate of a cell. Yet this sort of helter-skelter coupling among biomolecules may be the norm, not some weird exception. In fact, it may be why multicellular life works at all.

“Biological cell-cell communication circuits, with their families of promiscuously interacting ligands and receptors, look like a mess and use an architecture that is the opposite of what we synthetic biologists might have designed,” Elowitz said.

Yet this apparent chaos of interacting components is really a sophisticated signal-processing system that can extract information reliably and efficiently from complicated cocktails of signaling molecules. “Understanding cells’ natural combinatorial language could allow us to control [them] with much greater specificity than we have now,” he said.
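
A toy calculation (my sketch, not the model from Elowitz’s work) can show how such promiscuous signaling still carries specific messages: many ligands activate many receptors, yet cells with different receptor profiles read the same ligand cocktail differently. The real pathway’s responses are nonlinear and competitive; a linear weighted sum is the simplest caricature:

```python
# Hypothetical activity weights: activity[i][j] is how strongly ligand i
# drives receptor type j. Every ligand touches every receptor (promiscuity).
activity = [
    [1.0, 0.2, 0.6],   # ligand L1
    [0.3, 1.0, 0.1],   # ligand L2
    [0.7, 0.4, 1.0],   # ligand L3
]

def response(ligand_mix, receptor_profile):
    """Total pathway activity in a cell with the given receptor abundances."""
    return sum(l * a * r
               for l, row in zip(ligand_mix, activity)
               for a, r in zip(row, receptor_profile))

cocktail = [1.0, 0.5, 0.2]         # one ligand cocktail...
cell_A = [1.0, 0.1, 0.0]           # ...read by a cell rich in receptor R1
cell_B = [0.0, 0.1, 1.0]           # ...and by a cell rich in receptor R3
print(response(cocktail, cell_A))  # ~1.37
print(response(cocktail, cell_B))  # ~0.93: same message, different meaning
```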

The emerging picture does more than reconfigure our view of what biomolecules in our cells are up to as they build an organism — what logic they follow to create complex life. It might also help us understand why living things are able to survive at all in the face of an unpredictable environment, and why that randomness permits evolution rather than frustrating it. And it could explain why molecular medicine is often so hard: why many candidate drugs don’t do what we hoped, and how we might make ones that do.

The Computational Complexity Of A Single Biological Neuron

quanta |  Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, in what are known as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with the nodes modeled after real neurons — or, at least, after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are known to be more complex than artificial ones. But by how much?
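
For reference, the perceptron-style neuron at the root of today’s deep networks is almost trivially simple: a weighted sum and a threshold. A sketch, with made-up weights:

```python
def perceptron(inputs, weights, bias):
    """The 1950s neuron model: fire (1) if the weighted input sum crosses 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Made-up inputs and weights, purely for illustration.
print(perceptron([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.9], bias=-1.0))  # -> 1
```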

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.

Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.

Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. “This paper really helps force the issue of thinking about that more carefully and grappling with to what extent you can make those analogies,” he said.

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a pyramidal neuron from a rat’s cortex, a type of neuron with distinct trees of dendritic branches at its top and bottom. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer, increasing the number of layers until the network could predict the simulated neuron’s output from its input with 99% accuracy at millisecond resolution. The deep neural network successfully reproduced the neuron’s input-output function with at least five — but no more than eight — artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
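
In outline, the setup looks something like the following PyTorch sketch. The layer count, channel widths, kernel size and random input here are placeholders; the authors’ actual model, a temporally convolutional network trained on simulated synaptic input, differs in detail.

```python
import torch
import torch.nn as nn

n_synapses, width, n_layers = 128, 256, 7   # illustrative; the paper found 5-8 layers suffice

layers, in_ch = [], n_synapses
for _ in range(n_layers):
    # kernel_size=35 with padding=17 preserves the sequence length
    layers += [nn.Conv1d(in_ch, width, kernel_size=35, padding=17), nn.ReLU()]
    in_ch = width
layers.append(nn.Conv1d(in_ch, 1, kernel_size=1))  # one output value per millisecond
net = nn.Sequential(*layers)

# Placeholder input: 1 second of synaptic activity at 1 ms resolution.
x = torch.rand(1, n_synapses, 1000)
y = net(x)            # predicted somatic response, shape (1, 1, 1000)
print(y.shape)
```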
