Sunday, June 05, 2016

NASA, Jesus, and Templeton...,


HuffPo |  More of NASA’s astrobiology strategy for the next decade can be found in its latest roadmap: Astrobiology Strategy 2015. Lindsay Hays of the California Institute of Technology’s Jet Propulsion Laboratory is editor-in-chief.

Microbes are given some attention in a section titled: “How Does Our Ignorance About Microbial Life on Earth Hinder Our Understanding of the Limits to Life?” Curiously, however, there’s not a word in the entire 256-page document (including the glossary) about the existence of viruses — the biggest part of the biosphere — let alone their consortial and persistent nature, when the new thinking in science is “virus first” and that persistence may be just as crucial to life as replication.
Templeton last year also awarded $5.4M for origin of life investigations to the Foundation for Applied Molecular Evolution, with funds being administered by FAME synthetic biologist Steve Benner (who once quipped, “If you don’t have a theory of life, you can’t find aliens — unless they shoot you in the leg with a ray gun.”), and $5.6M to ELSI, the Japanese government’s earth science institute in Tokyo, for its ELSI Origins Network, headed by astrophysicist Piet Hut, also of the Institute for Advanced Study in Princeton.
Steve Benner is listed as a reviewer on NASA’s latest roadmap and is on the editorial board of Astrobiology Journal, whose senior editors include NAI’s new chief Penny Boston as well as ISSOL (International Society for the Study of the Origin of Life) president Dave Deamer.

Astrobiology Journal is put together in the Kennewick, Washington home of Sherry Cady, a geologist who serves as editor in chief, and her husband Lawrence P. Cady, a fiction writer who serves as the journal’s managing editor and copy editor — according to LP Cady. The magazine is one of 80 of Mary Ann Liebert Inc.’s “authoritative” journals and has close ties to other NASA-funded scientists who serve as reviewers.
If anything substantive is likely to happen as a result of (or in spite of) Templeton funding on origin of life, I would expect it to come from Steve Benner’s project, which includes people like George E. Fox, who collaborated early on with Carl Woese on Archaea, and Niles Lehman, a recipient of Harry Lonsdale origin-of-life research funding — plus Benner himself and eight others.
On the other hand, I have serious reservations about the NASA award of $1.1M of public funds to CTI. Whatever happened to the separation of church and state?

Saturday, June 04, 2016

little things mean everything...,


thescientist |  Little things mean a lot. To any biologist, this time-worn maxim is old news. But it’s worth revisiting. As several articles in this issue of The Scientist illustrate, how researchers define and examine the “little things” does mean a lot.

Consider this month’s cover story, “Noncoding RNAs Not So Noncoding,” by TS correspondent Ruth Williams. Combing the human genome for open reading frames (ORFs), sequences bracketed by start and stop codons, yielded a protein-coding count somewhere in the neighborhood of 24,000. That left a lot of the genome relegated to the category of junk—or, later, to the tens of thousands of mostly mysterious long noncoding RNAs (lncRNAs). But because they had only been looking for ORFs that were 300 nucleotides or longer (i.e., coding for proteins at least 100 amino acids long), genome probers missed so-called short ORFs (sORFs), which encode small peptides. “Their diminutive size may have caused these peptides to be overlooked, their sORFs to be buried in statistical noise, and their RNAs to be miscategorized, but it does not prevent them from serving important, often essential functions, as the micropeptides characterized to date demonstrate,” writes Williams.
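
To make the 300-nucleotide cutoff concrete, here is a minimal Python sketch (mine, not the article's) that scans one reading frame of a transcript for ORFs and flags anything shorter than 300 nucleotides as a candidate sORF; the sequence is an invented example.

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(seq, frame=0):
    # Scan one reading frame for start..stop spans (a toy ORF caller).
    orfs = []
    i = frame
    while i < len(seq) - 2:
        if seq[i:i+3] == START:
            for j in range(i + 3, len(seq) - 2, 3):
                if seq[j:j+3] in STOPS:
                    orfs.append((i, j + 3))  # span includes the stop codon
                    break
        i += 3
    return orfs

transcript = "ATGGCTGCTTAAATGGCGGCGGCGTAG"  # made-up sequence
for start, end in find_orfs(transcript):
    length = end - start
    kind = "sORF (<300 nt)" if length < 300 else "ORF"
    print(kind, start, end, "->", length // 3 - 1, "amino acids")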

How little things work definitely informs another field of life science research: synthetic biology. As the functions of genes and gene networks are sussed out, bioengineers are using the information to design small, synthetic gene circuits that enable them to better understand natural networks. In “Synthetic Biology Comes into Its Own,” Richard Muscat summarizes the strides made by synthetic biologists over the last 15 years and offers an optimistic view of how such networks may be put to use in the future. And to prove him right, just as we go to press, a collaborative group led by one of syn bio’s founding fathers, MIT’s James Collins, has devised a paper-based test for Zika virus exposure that relies on a freeze-dried synthetic gene circuit that changes color upon detection of RNAs in the viral genome. The results are ready in a matter of hours, not the days or weeks current testing takes, and the test can distinguish Zika from dengue virus. “What’s really exciting here is you can leverage all this expertise that synthetic biologists are gaining in constructing genetic networks and use it in a real-world application that is important and can potentially transform how we do diagnostics,” commented one researcher about the test.

RNA-Targeting CRISPR


thescientist |  Much attention paid to the bacterial CRISPR/Cas9 system has focused on its uses as a gene-editing tool. But there are other CRISPR/Cas systems. Researchers from MIT and the National Center for Biotechnology Information (NCBI) last year identified additional CRISPR proteins. One of these proteins, C2c2, seemed to be a putative RNA-cleaving—rather than a DNA-targeting—enzyme, the researchers reported at the time. Now, the same group has established that C2c2 indeed cleaves single-stranded RNA (ssRNA), providing the first example of a CRISPR/Cas system that exclusively targets RNA. The team’s latest results, published today (June 2) in Science, confirm the diversity of CRISPR systems and point to the possibility of precise in vivo RNA editing.

“This protein does what we expected, performing RNA-guided cleavage of cognate RNA with high specificity, and can be programmed to cleave any RNA at will,” study coauthor Eugene Koonin, of the NCBI and the National Library of Medicine, told The Scientist.

“I am very excited about the paper,” said Gene Yeo, an RNA researcher at the University of California, San Diego, who was not involved in the work. “The community was expecting to find native RNA CRISPR systems, so it’s great that one of these has now been characterized.”

non-coding rna not so non-coding...,


thescientist |  In 2002, a group of plant researchers studying legumes at the Max Planck Institute for Plant Breeding Research in Cologne, Germany, discovered that a 679-nucleotide RNA believed to function in a noncoding capacity was in fact a protein-coding messenger RNA (mRNA).1 It had been classified as a long (or large) noncoding RNA (lncRNA) by virtue of being more than 200 nucleotides in length. The RNA, transcribed from a gene called early nodulin 40 (ENOD40), contained short open reading frames (ORFs)—putative protein-coding sequences bookended by start and stop codons—but the ORFs were so short that they had previously been overlooked. When the Cologne collaborators examined the RNA more closely, however, they found that two of the ORFs did indeed encode tiny peptides: one of 12 and one of 24 amino acids. Sampling the legumes confirmed that these micropeptides were made in the plant, where they interacted with a sucrose-synthesizing enzyme.

Five years later, another ORF-containing mRNA that had been posing as a lncRNA was discovered in Drosophila.2,3 After performing a screen of fly embryos to find lncRNAs, Yuji Kageyama, then of the National Institute for Basic Biology in Okazaki, Japan, suppressed each transcript’s expression. “Only one showed a clear phenotype,” says Kageyama, now at Kobe University. Because embryos missing this particular RNA lacked certain cuticle features, giving them the appearance of smooth rice grains, the researchers named the RNA “polished rice” (pri).

Turning his attention to how the RNA functioned, Kageyama thought he should first rule out the possibility that it encoded proteins. But he couldn’t. “We actually found it was a protein-coding gene,” he says. “It was an accident—we are RNA people!” The pri gene turned out to encode four tiny peptides—three of 11 amino acids and one of 32—that Kageyama and colleagues showed are important for activating a key developmental transcription factor.4

Since then, a handful of other lncRNAs have switched to the mRNA ranks after being found to harbor micropeptide-encoding short ORFs (sORFs)—those less than 300 nucleotides in length. And given the vast number of documented lncRNAs—most of which have no known function—the chance of finding others that contain micropeptide codes seems high.

The hunt for these tiny treasures is now on, but it’s a challenging quest. After all, there are good reasons why these itty-bitty peptides and their codes went unnoticed for so long.

Friday, June 03, 2016

The Genome Project - Write


NYTimes |  “By focusing on building the 3Gb of human DNA, HGP-write would push current conceptual and technical limits by orders of magnitude and deliver important scientific advances,” they write, referring to three gigabases, the three billion letters in the human genome.

Scientists already can change DNA in organisms or add foreign genes, as is done to make medicines like insulin or genetically modified crops. New “genome editing” tools, like one called Crispr, are making it far easier to re-engineer an organism’s DNA blueprint.

But George Church, a professor of genetics at Harvard Medical School and one of the organizers of the new project, said that if the changes desired are extensive, at some point it becomes easier to synthesize the needed DNA from scratch.

“Editing doesn’t scale very well,” he said. “When you have to make changes to every gene in the genome it may be more efficient to do it in large chunks.”

Besides Dr. Church, the other organizers of the project are Jef Boeke, director of the Institute for Systems Genetics at NYU Langone Medical Center; Andrew Hessel, a futurist at the software company Autodesk; and Nancy J. Kelley, who works raising money for projects. The paper in Science lists a total of 25 authors, many of them involved in DNA engineering.

Autodesk, which has given $250,000 to the project, is interested in selling software to help biologists design DNA sequences to make organisms perform particular functions. Dr. Church is a founder of Gen9, a company that sells made-to-order strands of DNA.

Dr. Boeke of N.Y.U. is leading an international project to synthesize the complete genome of yeast, which has 12 million base pairs. It would be the largest genome synthesized to date, though still much smaller than the human genome.

Is Everything You Think You Know About AI Wrong?


WaPo |  consumers are already seeing our machine learning research reflected in the sudden explosion of digital personal assistants like Siri, Alexa and Google Now — technologies that are very good at interpreting voice-based requests but aren't capable of much more than that. These "narrow AI" have been designed with a specific purpose in mind: To help people do the things regular people do, whether it's looking up the weather or sending a text message.

Narrow, specialized AI is also what companies like IBM have been pursuing. It includes, for example, algorithms to help radiologists pick out tumors much more accurately by "learning" all the cancer research we've ever done and by "seeing" millions of sample X-rays and MRIs. These robots act much more like glorified calculators — they can ingest way more data than a single person could hope to do with his or her own brain, but they still operate within the confines of a specific task like cancer diagnosis. These robots are not going to be launching nuclear missiles anytime soon. They wouldn't know how, or why. And the more pervasive this type of AI becomes, the more we'll understand about how best to build the next generation of robots.

So who is going to lose their job?
Partly because we're better at designing these limited AI systems, some experts predict that high-skilled workers will adapt to the technology as a tool, while lower-skill jobs are the ones that will see the most disruption. When the Obama administration studied the issue, it found that as many as 80 percent of jobs currently paying less than $20 an hour might someday be replaced by AI.

"That's over a long period of time, and it's not like you're going to lose 80 percent of jobs and not reemploy those people," Jason Furman, a senior economic advisor to President Obama, said in an interview. "But [even] if you lose 80 percent of jobs and reemploy 90 percent or 95 percent of those people, it's still a big jump up in the structural number not working. So I think it poses a real distributional challenge."

Policymakers will need to come up with inventive ways to meet this looming jobs problem. But the same estimates also hint at a way out: Higher-earning jobs stand to be less negatively affected by automation. Compared to the low-wage jobs, roughly a third of those who earn between $20 and $40 an hour are expected to fall out of work due to robots, according to Furman. And only a sliver of high-paying jobs, about 5 percent, may be subject to robot replacement.

Those numbers might look very different if researchers were truly on the brink of creating sentient AI that can really do all the same things a human can. In this hypothetical scenario, even high-skilled workers might have more reason to fear. But the fact that so much of our AI research right now appears to favor narrow forms of artificial intelligence at least suggests we could be doing a lot worse.

Thursday, June 02, 2016

tomorrowland



genes: convenient tokens of our time


ecodevoevo |  Our culture, like any culture, creates symbols to use as tokens as we go about our lives. Tokens are reassuring or explanatory symbols, and we naturally use them in the manipulations for various resources that culture is often about.  Nowadays, a central token is the gene.

Genes are proffered as the irrefutable ubiquitous cause of things, the salvation, the explanation, in ways rather similar to the way God and miracles are proffered by religion.  Genes conveniently lead to manipulation by technology, and technology sells in our industrial culture. Genes are specific rather than vague, are enumerable, can be seen as real core 'data' to explain the world.  Genes are widely used as ultimate blameworthy causes, responsible for disease which comes to be defined as what happens when genes go 'wrong'.  Being literally unseen, like angels, genes can take on an aura of pervasive power and mystery.  The incantation by scientists is that if we can only be enabled to find them we can even cure them (with CRISPR or some other promised panacea), exorcising their evil. All of this invocation of fundamental causal tokens is particulate enough to be marketable for grants and research proposals, great for publishing in journals and for news media to gawk at in wonder. Genes provide impressively mysterious tokens for scientists to promise almost to create miracles by manipulating.  Genes stand for life's Book of Truth, much as sacred texts have traditionally done and, for many, still do.

Genes provide fundamental symbolic tokens in theories of life--its essence, its evolution, of human behavior, of good and evil traits, of atoms of causation from which everything follows. They lurk in the background, responsible for all good and evil.  So in our age in human history, it is not surprising that reports of finding genes 'for' this or that have unbelievable explanatory panache.  It's not a trivial aspect of this symbolic role that people (including scientists) have to take others' word for what they claim as insights. 


genes without prominence...,


inference-review |  The cell is a complex dynamic system in which macromolecules such as DNA and the various proteins interact within a free energy flux provided by nutrients. Its phenotypes can be represented by quasi-stable attractors embedded in a multi-dimensional state space whose dimensions are defined by the activities of the cell’s constituent proteins.1
 
This is the basis for the dynamical model of the cell.

The current molecular genetic or machine model of the cell, on the other hand, is predicated on the work of Gregor Mendel and Charles Darwin. Mendel framed the laws of inheritance on the basis of his experimental work on pea plants. The first law states that inheritance is a discrete and not a blending process: crossing purple and white flowered varieties produces some offspring with white and some with purple flowers, but generally not intermediately colored offspring.2 Mendel concluded that whatever was inherited had a material or particulate nature; it could be segregated.3

According to the machine cell model, those particles are genes or sequences of nucleobases in the genomic DNA. They constitute Mendel’s units of inheritance. Gene sequences are transcribed, via messenger RNA, to proteins, which are folded linear strings of amino acids called peptides. The interactions between proteins are responsible for phenotypic traits. This assumption relies on two general principles affirmed by Francis Crick in 1958, namely the sequence hypothesis and the central dogma.4 The sequence hypothesis asserts that the sequence of bases in the genomic DNA determines the sequence of amino acids in the peptide and the three-dimensional structure of the folded peptide. The central dogma states that the sequence hypothesis represents a flow of information from DNA to the proteins and rules out a flow in reverse.
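
As a toy illustration of the sequence hypothesis (a sketch added here, not part of the essay), a few lines of Python can carry a DNA sequence through transcription and translation; the codon table below is deliberately tiny, whereas the real genetic code has 64 entries.

# DNA -> mRNA -> peptide, with a toy codon table.
CODE = {"AUG": "Met", "UUU": "Phe", "GCU": "Ala", "GGC": "Gly", "UAA": "STOP"}

def transcribe(coding_strand):
    return coding_strand.replace("T", "U")  # mRNA mirrors the coding strand, T -> U

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODE.get(mrna[i:i+3], "???")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate(transcribe("ATGTTTGCTGGCTAA")))  # ['Met', 'Phe', 'Ala', 'Gly']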

In 1961, the American biologist Christian Anfinsen demonstrated that when the enzyme ribonuclease was denatured, it lost its activity, but regained it on re-naturing. Anfinsen concluded from the kinetics of re-naturation that the amino acid sequence of the peptide determined how the peptide folded.5 He did not cite Crick’s 1958 paper or the sequence hypothesis, although he had apparently read the first and confirmed the second.

The central dogma and the sequence hypothesis proved to be wonderful heuristic tools with which to conduct bench work in molecular biology.

The machine model recognizes cells to be highly regulated entities; genes are responsible for that regulation through gene regulatory networks (GRNs).6 Gene sequences provide all the information needed to build and regulate the cell.

Both a naturalist and an experimentalist, Darwin observed that breeding populations exhibit natural variations. Limited resources mean a struggle for existence. Individuals become better and better adapted to their environments. This process is responsible for both small adaptive improvements and dramatic changes. Darwin insisted evolution was, in both cases, gradual, and predicted that intermediate forms between species should be found both in the fossil record and in existing populations. Today, these ideas are part of the modern evolutionary synthesis, a term coined by Julian Huxley in 1942.7 Like the central dogma, it has been subject to controversy, despite its early designation as the set of principles under which all of biology is conducted.8

The modern synthesis, we now understand, does not explain trans-generational epigenetic inheritance, consciousness, and niche construction.9 It is possible that the concept of the gene and the claim that evolution depends on genetic diversity may both need to be modified or replaced.

This essay is a step towards describing biology as a science founded on the laws of physics. It is a step in the right direction.

Wednesday, June 01, 2016

the constructal law in intelligence?



Stuart Kauffman's 1993 book, Origins of Order, is a technical treatise on his life's work in Mathematical Biology. Kauffman greatly extends Alan Turing's early work in Mathematical Biology. The intended audience is other mathematical and theoretical biologists. It's chock full of advanced mathematics. Of particular note, Origins of Order seems to be Kauffman's only published work in which he states his experimental results about the interconnection between complex systems and neural networks.

Kauffman explains that a complex system tuned with particular parameters is a neural network.  I cannot overstate the importance of that sentence. The implication is that one basis for intelligence, biological neural networks, can spontaneously self-generate given the correct starting parameters. Kauffman provides the mathematics to do this, discusses his experimental results, and points out that the parameters in question are an attractor state.
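
A small system of the kind Kauffman analyzes can be simulated in a few lines. The Python sketch below is my own illustration, not code from Origins of Order: N nodes, each reading K randomly chosen inputs through a random Boolean rule, iterated from a random state until it falls onto a cyclic attractor. The values of N, K and the seed are arbitrary choices for the demonstration.

import random

random.seed(1)
N, K = 8, 2  # 8 nodes, 2 inputs per node
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Each node looks up its next value from its random Boolean rule.
    new = []
    for n in range(N):
        idx = 0
        for bit in inputs[n]:
            idx = (idx << 1) | state[bit]
        new.append(tables[n][idx])
    return tuple(new)

state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:  # iterate until a state repeats
    seen[state] = t
    state = step(state)
    t += 1
print("transient length:", seen[state])
print("attractor length:", t - seen[state])  # period of the cycle the network settles into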

What's the meaning of life? Physics.


nationalgeographic |  According to your book, physics describes the actions or tendencies of every living thing—and inanimate ones as well. Does that mean we can unite all behavior under physics?

Absolutely. Our narrow definition of the discipline is something that’s happened in the past hundred years, thanks to the immense impact of Albert Einstein and atomic physics and relativity at the turn of the [20th] century.

But we need to go back farther. In Latin, nature—physics—means “everything that happens.”

One thing that came directly from Charles Darwin is that humans are part of nature, along with all the other animate beings. Therefore all the things that we make—our tools, our homes, our technologies—are natural as well. It’s all part of the same thing.

In your magazine and on your TV channel, we see many animals doing this—extending their reach with tools, with intelligence, with social organization. Everything is naturally interconnected.

Your new book is premised on a law of physics that you formulated in 1996. The Constructal Law says there’s a universal evolutionary tendency toward design in nature, because everything is composed of systems that change and evolve to flow more easily.

That’s correct. But I would specify and say that the tendency is toward evolving freely—changing on the go in order to provide greater and greater ease of movement. That’s physics, stage four—more precise, more specific expressions of the same idea.

Flow systems are everywhere. They describe the ways that animals move and migrate, the ways that river deltas form, the ways that people build fires. In each case, they evolve freely to reduce friction and to flow better—to improve themselves and minimize their mistakes or imperfections. Blood flow and water flow essentially evolve the same way.

Why does life exist?


quantamagazine |  Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.
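
The starting point for results like England's is a standard relation of stochastic thermodynamics, stated here for orientation rather than quoted from the article: for a system coupled to a heat bath at inverse temperature β = 1/(k_B T), the probability of a trajectory relative to that of its time-reverse satisfies

P[forward] / P[reverse] = exp(β Q),

where Q is the heat released to the bath along the forward trajectory. Coarse-graining this over all paths linking two macroscopic states is what gives England his bound: the more irreversible a transformation is, the more heat it must, on average, dissipate.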

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Tuesday, May 31, 2016

your brain is not a computer...,


aeon |  No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.
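
As a quick check on the example, the actual bit patterns standing for d, o and g under ASCII/UTF-8 can be printed with a few lines of Python (an illustration added here, not part of the essay):

word = "dog"
for byte in word.encode("ascii"):
    print(chr(byte), format(byte, "08b"), hex(byte))
# d 01100100 0x64
# o 01101111 0x6f
# g 01100111 0x67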

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

the minecraft generation


NYTimes |  Since its release seven years ago, Minecraft has become a global sensation, captivating a generation of children. There are over 100 million registered players, and it’s now the third-best-selling video game in history, after Tetris and Wii Sports. In 2014, Microsoft bought Minecraft — and Mojang, the Swedish game studio behind it — for $2.5 billion.

There have been blockbuster games before, of course. But as Jordan’s experience suggests — and as parents peering over their children’s shoulders sense — Minecraft is a different sort of phenomenon.

For one thing, it doesn’t really feel like a game. It’s more like a destination, a technical tool, a cultural scene, or all three put together: a place where kids engineer complex machines, shoot videos of their escapades that they post on YouTube, make art and set up servers, online versions of the game where they can hang out with friends. It’s a world of trial and error and constant discovery, stuffed with byzantine secrets, obscure text commands and hidden recipes. And it runs completely counter to most modern computing trends. Where companies like Apple and Microsoft and Google want our computers to be easy to manipulate — designing point-and-click interfaces under the assumption that it’s best to conceal from the average user how the computer works — Minecraft encourages kids to get under the hood, break things, fix them and turn mooshrooms into random-number generators. It invites them to tinker.

In this way, Minecraft culture is a throwback to the heady early days of the digital age. In the late ’70s and ’80s, the arrival of personal computers like the Commodore 64 gave rise to the first generation of kids fluent in computation. They learned to program in Basic, to write software that they swapped excitedly with their peers. It was a playful renaissance that eerily parallels the embrace of Minecraft by today’s youth. As Ian Bogost, a game designer and professor of media studies at Georgia Tech, puts it, Minecraft may well be this generation’s personal computer.

At a time when even the president is urging kids to learn to code, Minecraft has become a stealth gateway to the fundamentals, and the pleasures, of computer science. Those kids of the ’70s and ’80s grew up to become the architects of our modern digital world, with all its allures and perils. What will the Minecraft generation become?

“Children,” the social critic Walter Benjamin wrote in 1924, “are particularly fond of haunting any site where things are being visibly worked on. They are irresistibly drawn by the detritus generated by building, gardening, housework, tailoring or carpentry.”

blockchain 'smart contracts' to disrupt lawyers


afr |  Among the blockchain cognoscenti, everyone is talking about Ethereum.

A rival blockchain and virtual currency to bitcoin, Ethereum allows for the programming of "smart contracts", or computer code which facilitates or enforces a set of rules. Ethereum was first described by the programmer Vitalik Buterin in late 2013; the first full public version of the platform was released in February.

Commercial lawyers are watching the arrival of Ethereum closely given the potential for smart contracts in the future to disintermediate their highly  lucrative role in drafting and exchanging paper contracts. Smart contracts are currently being used to digitise business rules, but may soon move to codify legal agreements.

The innovation has been made possible because Ethereum provides developers with a more liberal "scripting language" than bitcoin. This is allowing companies to create their own private blockchains and build applications. Already, apps for music distribution, sports betting and a new type of financial auditing are being tested.

Some of the world's largest technology companies, from Microsoft to IBM, are lining up to work with Ethereum, while the R3 CEV banking consortium has also been trialling its technology as it tests blockchain-style applications for the banking industry including trading commercial paper. Banks are interested in blockchain because distributed ledgers can remove intermediaries and speed up transactions, thereby reducing costs. But if banks move business to blockchains in the future, financial services lawyers will need to begin re-drafting into digital form the banking contracts that underpin the capital markets.

The global director of IBM Blockchain Labs, Nitin Gaur, who was in Sydney last week, says he is a "huge fan" of Ethereum, pointing to its "rich ecosystem of developers". He predicts law to be among the industries disrupted by the technology.

Monday, May 30, 2016

the selfish gene turns forty...,


theguardian |  It’s 40 years since Richard Dawkins suggested, in the opening words of The Selfish Gene, that, were an alien to visit Earth, the question it would pose to judge our intellectual maturity was: “Have they discovered evolution yet?” We had, of course, by the grace of Charles Darwin and a century of evolutionary biologists who had been trying to figure out how natural selection actually worked. In 1976, The Selfish Gene became the first real blockbuster popular science book, a poetic mark in the sand to the public and scientists alike: this idea had to enter our thinking, our research and our culture.

The idea was this: genes strive for immortality, and individuals, families, and species are merely vehicles in that quest. The behaviour of all living things is in service of their genes; hence, metaphorically, they are selfish. Before this, it had been proposed that natural selection was honing the behaviour of living things to promote the continuance through time of the individual creature, or family, or group or species. But in fact, Dawkins said, it was the gene itself that was trying to survive, and it just so happened that the best way for it to survive was in concert with other genes in the impermanent husk of an individual.

This gene-centric view of evolution also began to explain one of the oddities of life on Earth – the behaviour of social insects. What is the point of a drone bee, doomed to remain childless and in the service of a totalitarian queen? Suddenly it made sense that, with the gene itself steering evolution, the fact that the drone shared its DNA with the queen meant that its servitude guarantees not the individual’s survival, but the endurance of the genes they share. Or as the Anglo-Indian biologist JBS Haldane put it: “Would I lay down my life to save my brother? No, but I would to save two brothers or eight cousins.”
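
Haldane's arithmetic is the informal version of Hamilton's rule, added here for context (it is not spelled out in the Guardian piece): an allele for altruism can spread when

r B > C,

where r is the genetic relatedness between actor and recipient (1/2 for a full sibling, 1/8 for a first cousin), B is the benefit to the recipient and C is the cost to the actor. Two brothers or eight cousins is exactly the break-even point, since 2 × 1/2 = 8 × 1/8 = 1.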

These ideas were espoused by only a handful of scientists in the middle decades of the 20th century – notably Bob Trivers, Bill Hamilton, John Maynard Smith and George Williams. In The Selfish Gene, Dawkins did not merely recapitulate them; he made an impassioned argument for the reality of natural selection. Previous attempts to explain the mechanics of evolution had been academic and rooted in maths. Dawkins walked us through it in prose. Many great popular science books followed – Stephen Hawking’s A Brief History of Time, Steven Pinker’s The Blank Slate, and, currently, The Vital Question by Nick Lane.

For many of us, The Selfish Gene was our first proper taste of evolution. I don’t remember it being a controversial subject in my youth. In fact, I don’t remember it being taught at all. Evolution, Darwin and natural selection were largely absent from my secondary education in the late 1980s. The national curriculum, introduced in the UK in 1988, included some evolution, but before 1988 its presence in schools was far from universal. As an aside, in my opinion the subject is taught bafflingly minimally and late in the curriculum even today; evolution by natural selection is crucial to every aspect of the living world. In the words of the Russian scientist Theodosius Dobzhansky: “Nothing in biology makes sense except in the light of evolution.”


Sunday, May 29, 2016

a guide to being human in the 21st century


themonkeytrap |  I currently teach a class called Reality 101 at the University of Minnesota.  It is a 15-week exploration of ‘the human ecosystem’ – what drives us, what powers us and what we are doing. Only when viewed through such an ecological lens can ‘better’ choices be made by individuals, who in turn impact societies.  Our situation cannot be described in an hour, but this was my latest and best attempt. The talk is 60% new from prior talks – I start with brief summaries of energy, economy, behavior and environment, followed by a listing of 25 of the current ‘flawed assumptions’ underpinning modern human culture.  I close with a list of 20 new ways of thinking about one’s future – for consideration, and possibly to work towards – for a young person alive this century.  It is my opinion we need more pro-social, pro-future players on the gameboard, whatever their beliefs and priorities might be.  Two books should be finished this year and I will post a note here about progress.

Saturday, May 28, 2016

Why Granny Goodness is Imperial Corporatism's Next Choice for CEO





telesur |  The 2016 election campaign is remarkable not only for the rise of Donald Trump and Bernie Sanders but also for the resilience of an enduring silence about a murderous self-bestowed divinity. A third of the members of the United Nations have felt Washington's boot, overturning governments, subverting democracy, imposing blockades and boycotts. Most of the presidents responsible have been liberal – Truman, Kennedy, Johnson, Carter, Clinton, Obama.

The breathtaking record of perfidy is so mutated in the public mind, wrote the late Harold Pinter, that it “never happened … Nothing ever happened. Even while it was happening it wasn't happening. It didn't matter. It was of no interest. It didn't matter …”. Pinter expressed a mock admiration for what he called “a quite clinical manipulation of power worldwide while masquerading as a force for universal good. It's a brilliant, even witty, highly successful act of hypnosis.”

Take Obama. As he prepares to leave office, the fawning has begun all over again. He is “cool.” One of the more violent presidents, Obama gave full rein to the Pentagon war-making apparatus of his discredited predecessor. He prosecuted more whistleblowers – truth-tellers – than any president. He pronounced Chelsea Manning guilty before she was tried. Today, Obama runs an unprecedented worldwide campaign of terrorism and murder by drone.

History was declared over, class was abolished and gender promoted as feminism; lots of women became New Labour MPs. They voted on the first day of Parliament to cut the benefits of single parents, mostly women, as instructed. A majority voted for an invasion that produced 700,000 Iraqi widows.

The equivalent in the US are the politically correct warmongers on the New York Times, Washington Post, and network TV who dominate political debate. I watched a furious debate on CNN about Trump's infidelities. It was clear, they said, a man like that could not be trusted in the White House. No issues were raised. Nothing on the 80 per cent of Americans whose income has collapsed to 1970s levels. Nothing on the drift to war. The received wisdom seems to be “hold your nose” and vote for Clinton: anyone but Trump. That way, you stop the monster and preserve a system gagging for another war.

granny goodness won't say how much the vampire squid put into her son-in-law...,


theintercept |  When Hillary Clinton’s son-in-law sought funding for his new hedge fund in 2011, he found financial backing from one of the biggest names on Wall Street: Goldman Sachs chief executive Lloyd Blankfein.

The fund, called Eaglevale Partners, was founded by Chelsea Clinton’s husband, Marc Mezvinsky, and two of his partners. Blankfein not only personally invested in the fund, but allowed his association with it to be used in the fund’s marketing.

The investment did not turn out to be a savvy business decision. Earlier this month, Mezvinsky was forced to shutter one of the investment vehicles he launched under Eaglevale, called Eaglevale Hellenic Opportunity, after losing 90 percent of its money betting on the Greek recovery. The flagship Eaglevale fund has also lost money, according to the New York Times.

There has been minimal reporting on the Blankfein investment in Eaglevale Partners, which is a private fund that faces few disclosure requirements. At a campaign rally in downtown San Francisco on Thursday, I attempted to ask Hillary Clinton if she knew the amount that Blankfein invested in her son-in-law’s fund.

why young people are right about hillary clinton...,


rollingstone |  This is why her shifting explanations and flippant attitude about the email scandal are almost more unnerving than the ostensible offense. She seems confident that just because her detractors are politically motivated, as they always have been, they must be wrong, as they often were.

But that's faulty thinking. My worry is that Democrats like Hillary have been saying, "The Republicans are worse!" for so long that they've begun to believe it excuses everything. It makes me nervous to see Hillary supporters like law professor Stephen Vladeck arguing in the New York Times that the real problem wasn't anything Hillary did, but that the Espionage Act isn't "practical."

If you're willing to extend the "purity" argument to the Espionage Act, it's only a matter of time before you get in real trouble. And even if it doesn't happen this summer, Democrats may soon wish they'd picked the frumpy senator from Vermont who probably checks his restaurant bills to make sure he hasn't been undercharged.

But in the age of Trump, winning is the only thing that matters, right? In that case, there's plenty of evidence suggesting Sanders would perform better against a reality TV free-coverage machine like Trump than would Hillary Clinton. This would largely be due to the passion and energy of young voters.

Young people don't see the Sanders-Clinton race as a choice between idealism and incremental progress. The choice they see is between an honest politician, and one who is so profoundly a part of the problem that she can't even see it anymore.

They've seen in the last decades that politicians who promise they can deliver change while also taking the money, mostly just end up taking the money.

And they're voting for Sanders because his idea of an entirely voter-funded electoral "revolution" that bars corporate money is, no matter what its objective chances of success, the only practical road left to break what they perceive to be an inexorable pattern of corruption.

Young people aren't dreaming. They're thinking. And we should listen to them.

corporate america bought hillary clinton for $21 million


NYPost |  "Follow the money.” That telling phrase, which has come to summarize the Watergate scandal, has been a part of the lexicon since 1976. It’s shorthand for political corruption: At what point do “contributions” become bribes, “constituent services” turn into quid pro quos and “charities” become slush funds?

Ronald Reagan was severely criticized in 1989 when, after he left office, he was paid $2 million for a couple of speeches in Japan. “The founding fathers would have been stunned that an occupant of the highest office in this land turned it into bucks,” sniffed a Columbia professor.

So what would Washington and Jefferson make of Hillary Rodham Clinton? Mandatory financial disclosures released this month show that, in just the two years from April 2013 to March 2015, the former first lady, senator and secretary of state collected $21,667,000 in “speaking fees,” not to mention the cool $5 mil she corralled as an advance for her 2014 flop book, “Hard Choices.”

Throw in the additional $26,630,000 her ex-president husband hoovered up in personal-appearance “honoraria,” and the nation can breathe a collective sigh of relief that the former first couple — who, according to Hillary, were “dead broke” when they left the White House in 2001 with some of the furniture in tow — can finally make ends meet.

No wonder Donald Trump calls her “crooked Hillary.”

A look at Mrs. Clinton’s speaking venues and the whopping sums she’s received since she left State gives us an indication who’s desperate for a place at the trough — and whom another Clinton administration might favor.

First off, there’s Wall Street and the financial-services industry. Democratic champions of the Little Guy are always in bed with the Street — they don’t call Barack Obama “President Goldman Sachs” for nothing, but Mrs. Clinton has room for Bob and Carol and Ted and Alice and their 10 best friends. Multiple trips to Goldman Sachs. Morgan Stanley. Deutsche Bank. Kohlberg Kravis Roberts. UBS Wealth Management.

As the character of Che Guevara sings in “Evita”: “And the money kept rolling in.” And all at the bargain price of $225,000 a pop . . . to say what? We don’t know, because Hillary won’t release the transcripts.

Friday, May 27, 2016

Entropy and the Self-Organization of Information and Value


mdpi |  Adam Smith, Charles Darwin, Rudolf Clausius, and Léon Brillouin considered certain “values” as key quantities in their descriptions of market competition, natural selection, thermodynamic processes, and information exchange, respectively. None of those values can be computed from elementary properties of the particular object they are attributed to, but rather values represent emergent, irreducible properties. In this paper, such values are jointly understood as information values in certain contexts. For this aim, structural information is distinguished from symbolic information. While the first can be associated with arbitrary physical processes or structures, the latter requires conventions which govern encoding and decoding of the symbols which form a message. As a value of energy, Clausius’ entropy is a universal measure of the structural information contained in a thermodynamic system. The structural information of a message, in contrast to its meaning, can be evaluated by Shannon’s entropy of communication. Symbolic information is found only in the realm of life, such as in animal behavior, human sociology, science, or technology, and is often cooperatively valuated by competition. Ritualization is described here as a universal scenario for the self-organization of symbols by which symbolic information emerges from structural information in the course of evolution processes. Emergent symbolic information exhibits the novel fundamental code symmetry which prevents the meaning of a message from being reducible to the physical structure of its carrier. While symbols turn arbitrary during the ritualization transition, their structures preserve information about their evolution history. 
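
For reference (my gloss, not part of the abstract), the two entropies being contrasted are Shannon's

H = − Σ_i p_i log₂ p_i   (bits per symbol),

where p_i is the probability of the i-th symbol in a message, and Clausius' thermodynamic entropy, defined through dS = δQ_rev / T for reversible heat exchange at temperature T.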

Entropy: Special Issue "Information and Self-Organization"


yet another horrifying trick the umbrella corp. has up its sleeve for the unnecessariat...,


WaPo |  “It’s hard to imagine worse news for public health in the United States,” Lance Price, director of the Antibiotic Resistance Action Center and a George Washington University professor, said in a statement Thursday about the Pennsylvania case. “We may soon be facing a world where CRE infections are untreatable.”

Scientists rang the alarm bells about the gene in November, but not enough attention was paid. “Now we find that this gene has made its way into pigs and people in the U.S.," Price said. "If our leaders were waiting to act until they could see the cliff’s edge—I hope this opens their eyes to the abyss that lies before us.”

Scientists and public health officials have long warned that if the resistant bacteria continue to spread, treatment options could be seriously limited. Routine operations could become deadly. Minor infections could become life-threatening crises. Pneumonia could be more and more difficult to treat.

Already, doctors had been forced to rely on colistin as a last-line defense against antibiotic-resistant bacteria. The drug is hardly ideal. It is more than half a century old and can seriously damage a patient’s kidneys. And yet, because doctors have run out of weapons to fight a growing number of infections that evade more modern antibiotics, it has become a critical tool in fighting off some of the most tenacious infections.

the Company has long known how to wield this cull-tech against the unnecessariat...,


theeconomist |  “PLEASURE is oft a visitant; but pain clings cruelly,” wrote John Keats. Nowadays pain can often be shrugged off: opioids, a class of drugs that includes morphine and other derivatives of the opium poppy, can dramatically ease the agony of broken bones, third-degree burns or terminal cancer. But the mismanagement of these drugs has caused a pain crisis (see article). It has two faces: one in America and a few other rich countries; the other in the developing world.

In America for decades doctors prescribed too many opioids for chronic pain in the mistaken belief that the risks were manageable. Millions of patients became hooked. Nearly 20,000 Americans died from opioid overdoses in 2014. A belated crackdown is now forcing prescription-opioid addicts to endure withdrawal symptoms, buy their fix on the black market or turn to heroin—which gives a similar high (and is now popular among middle-aged Americans with back problems).

In the developing world, by contrast, even horrifying pain is often untreated. More than 7m people die yearly of cancer, HIV, accidents or war wounds with little or no pain relief. Four-fifths of humanity live in countries where opioids are hard to obtain; they use less than a tenth of the world’s morphine, the opioid most widely used for trauma and terminal pain.

Opioids are tricky. Take too much, or mix them with alcohol or sleeping pills, and you may stop breathing. Long-term patients often need more and more. But for much acute pain, and certainly for the terminally ill, they are often the best treatment. And they are cheap: enough morphine to soothe a cancer patient for a month should cost just $2-5.

In poor countries many people think of pain as inevitable, as it has been for most of human existence. So they seldom ask for pain relief, and seldom get it if they do. The drug war declared by America in the 1970s has made matters worse. It led to laws that put keeping drugs out of the wrong hands ahead of getting them into the right ones. The UN says both goals matter. But through the 1980s and 1990s, as the war on drugs raged, it preached about the menace of illegal highs with barely a whisper about the horror of unrelieved pain.

Thursday, May 26, 2016

after the precariat, the unnecessariat: the humans who are superfluous to corporations


boingboing |  The heroin epidemic in America has a death toll comparable to the AIDS epidemic at its peak, but this time, there's no movement coalescing to argue for the lives of the economically sidelined, financially ruined dying thousands -- while the AIDS epidemic affected a real community of mutual support, the heroin epidemic specifically strikes down people whose communities are already gone.

The Occupy movement rallied around the idea of the "precariat," the downwardly mobile former members of the middle class who were one layoff or shift-reduction away from economic ruination. Below the precariat is the unnecessariat, people who are a liability to the modern economic consensus, whom no corporation has any use for, except as a source of revenue from predatory loans, government subsidized "training" programs, and private prisons. 

The precariat benefits from Obamacare, able to pay for coverage despite pre-existing conditions; the unnecessariat suffers under Obamacare, forced to pay into the system before going through the same medical bankruptcies they'd have endured in order to get the coverage they need to survive another day. 

You're likely to be in the unnecessariat if you live in a county that has high levels of addiction and suicide -- the same counties that poll highest for Trump. 

Corporations have realized humanity's long nightmare of a race of immortal, transhuman superbeings who view us as their inconvenient gut-flora. The unnecessariat are an expanding class, and if you're not in it yet, there's no reason to think you won't land there tomorrow. 

UNNECESSARIAT [Anne Amnesia/More Crows] 

physics makes aging inevitable, not biology?



nautilus |  Four years ago, I published a book called Life’s Ratchet, which explains how molecular machines create order in our cells. My main concern was how life avoids a descent into chaos. To my great surprise, soon after the book was published, I was contacted by researchers who study biological aging. At first I couldn’t see the connection. I knew nothing about aging except for what I had learned from being forced to observe the process in my own body.

Then it dawned on me that by emphasizing the role of thermal chaos in animating molecular machines, I encouraged aging researchers to think more about it as a driver of aging. Thermal motion may seem beneficial in the short run, animating our molecular machines, but could it be detrimental in the long run? After all, in the absence of external energy input, random thermal motion tends to destroy order.

This tendency is codified in the second law of thermodynamics, which dictates that everything ages and decays: Buildings and roads crumble; ships and rails rust; mountains wash into the sea. Lifeless structures are helpless against the ravages of thermal motion. But life is different: Protein machines constantly heal and renew their cells.

In this sense, life pits biology against physics in mortal combat. So why do living things die? Is aging the ultimate triumph of physics over biology? Or is aging part of biology itself?

Wednesday, May 25, 2016

complexity beyond the fibonacci sequence


thesciencexplorer |  Sunflowers have long been included with pineapples, artichokes, and pine cones as one of nature’s stunning examples of the Fibonacci sequence — a set in which each number is the sum of the previous two (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, ...). 

The numbers appear on the giant flower’s head, where the seeds arrange themselves in spirals. Count the spirals turning clockwise and counterclockwise and you will usually find a pair of numbers that sit side by side in the Fibonacci sequence.
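
The check that the volunteers' counts were put through can be sketched in a few lines of Python (my illustration, not the study's code): generate Fibonacci numbers and test whether the clockwise and counterclockwise spiral counts sit side by side in the sequence. The counts used below are invented for the example.

def fibs(limit):
    # Fibonacci numbers up to limit.
    seq, a, b = [], 1, 1
    while a <= limit:
        seq.append(a)
        a, b = b, a + b
    return seq

def adjacent_fibonacci(cw, ccw):
    # True if the two spiral counts are consecutive Fibonacci numbers.
    seq = fibs(max(cw, ccw))
    return any(pair in ((cw, ccw), (ccw, cw)) for pair in zip(seq, seq[1:]))

print(adjacent_fibonacci(34, 55))  # True  -> a conforming seedhead
print(adjacent_fibonacci(34, 56))  # False -> one of the exceptions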

Alan Turing first speculated sunflower seedheads adhered to the Fibonacci sequence, but sadly died before accumulating enough data to test his theory.

Four years ago, the Museum of Science and Industry in Manchester, UK picked up where Turing left off. Data on sunflower diversity were lacking, so the museum crowdsourced the problem. Members of the public were invited to grow their own sunflowers and submit photographs and spiral counts.

In a study just published in the journal Royal Society Open Science, researchers who verified the counts on 657 sunflowers provided by citizen scientists reported that one in five flowers did not conform to the Fibonacci sequence.

Some of the non-conforming seedheads approximated Fibonacci sequences, and others approximated even more complex mathematical patterns.

These exceptions to the rule have piqued the interest of the researchers, who wrote: “this paper provides a testbed against which a new generation of mathematical models can and should be built.”
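
To make the conforming case concrete, here is a minimal Python sketch (my own illustration, not code from the study) of the check involved: a seedhead conforms if its clockwise and counterclockwise spiral counts sit side by side in the Fibonacci sequence. The counts below are made-up examples, not data from the 657 flowers.

def fibonacci_up_to(limit):
    # Fibonacci numbers 1, 1, 2, 3, 5, ... not exceeding `limit`
    seq = [1, 1]
    while seq[-1] + seq[-2] <= limit:
        seq.append(seq[-1] + seq[-2])
    return seq

def is_fibonacci_pair(clockwise, counterclockwise, limit=10000):
    # True if the two spiral counts are adjacent Fibonacci numbers
    fib = fibonacci_up_to(limit)
    adjacent = set(zip(fib, fib[1:]))
    return (clockwise, counterclockwise) in adjacent or (counterclockwise, clockwise) in adjacent

print(is_fibonacci_pair(34, 55))   # True: a typical conforming seedhead
print(is_fibonacci_pair(33, 55))   # False: a pair that would count as non-conforming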

the great transbiological leap of digitalization...,



sciencenews |   Before anybody even had a computer, Claude Shannon figured out how to make computers worth having.

As an electrical engineering graduate student at MIT, Shannon played around with a “differential analyzer,” a crude forerunner to computers. But for his master’s thesis, he was more concerned with relays and switches in electrical circuits, the sorts of things found in telephone exchange networks. In 1937 he produced, in the words of mathematician Solomon Golomb, “one of the greatest master’s theses ever,” establishing the connection between symbolic logic and the math for describing such circuitry. Shannon’s math worked not just for telephone exchanges or other electrical devices, but for any circuits, including the electronic circuitry that in subsequent decades would make digital computers so powerful.

It’s a fitting time to celebrate Shannon’s achievements, on the occasion of the centennial of his birth (April 30, 1916) in Petoskey, Michigan. Based on the pervasive importance of computing in society today, it wouldn’t be crazy to call the time since then “Shannon’s Century.”

“It is no exaggeration,” wrote Golomb, “to refer to Claude Shannon as the ‘father of the information age,’ and his intellectual achievement as one of the greatest of the twentieth century.”

Shannon is best known for creating an entirely new scientific field — information theory — in a pair of papers published in 1948. His foundation for that work, though, was built a decade earlier, in his thesis. There he devised equations that represented the behavior of electrical circuitry. How a circuit behaves depends on the interactions of relays and switches that can connect (or not) one terminal to another. Shannon sought a “calculus” for mathematically representing a circuit’s connections, allowing scientists to design circuits effectively for various tasks. (He provided examples of the circuit math for an electronic combination lock and some other devices.)
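
As a rough illustration of the idea behind that calculus (my own toy example in Python, not Shannon's notation or one of his circuits): switches wired in series behave like a logical AND and switches wired in parallel like a logical OR, so a network's behavior can be tabulated like any Boolean expression.

from itertools import product

def series(*switches):
    # a series path conducts only if every switch in it is closed (logical AND)
    return all(switches)

def parallel(*branches):
    # a parallel group conducts if any branch conducts (logical OR)
    return any(branches)

def network(a, b, c):
    # a made-up network: switch a in series with the parallel pair (b, c)
    return series(a, parallel(b, c))

# the network's "truth table": whether it conducts for every switch setting
for a, b, c in product([False, True], repeat=3):
    print(f"a={a!s:5} b={b!s:5} c={c!s:5} -> conducts: {network(a, b, c)}")

Designing a circuit then becomes a matter of writing down the Boolean expression you want and wiring the switches to match it, which is essentially what digital logic design still does at scale.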

information is physical, even in quantum systems...,



sciencenews |  Information may seem ethereal, given how easily we forget phone numbers and birthdays. But scientists say it is physical, and if a new study is correct, that goes for quantum systems, too.

Although pages of text or strings of bits seem easily erased with the press of a button, the act of destroying information has tangible physical impact, according to a principle proposed in 1961 by physicist Rolf Landauer. Deleting information is associated with an increase in entropy, or disorder, resulting in the release of a certain amount of heat for each erased bit. Even the most efficient computer would still output heat when irreversibly scrubbing out data.

This principle has been verified experimentally for systems that follow the familiar laws of classical physics. But the picture has remained fuzzy for quantum mechanical systems, in which particles can be in multiple states at once and their fates may be linked through the spooky process of quantum entanglement.

Now a team of scientists reports April 13 in Proceedings of the Royal Society A that Landauer’s principle holds even in that wild quantum landscape. “Essentially what they’ve done is test [this principle] at a very detailed and quantitative way,” says physicist John Bechhoefer of Simon Fraser University in Burnaby, Canada, who was not involved with the research. “And they’re showing that this works in a quantum system, which is a really important step.” Testing Landauer’s principle in the quantum realm could be important for understanding the fundamental limits of quantum computers, Bechhoefer says. 
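
For a sense of scale, Landauer's bound works out to k_B * T * ln(2) joules of heat per erased bit. A quick back-of-the-envelope sketch (my own illustration, using an assumed room temperature, not a figure from the new paper):

import math

BOLTZMANN = 1.380649e-23   # Boltzmann constant, in joules per kelvin

def landauer_limit(temperature_kelvin):
    # minimum heat released per irreversibly erased bit: k_B * T * ln(2)
    return BOLTZMANN * temperature_kelvin * math.log(2)

room_temperature = 300.0   # kelvin, an assumed value for illustration
per_bit = landauer_limit(room_temperature)
print(f"minimum heat per erased bit at {room_temperature:.0f} K: {per_bit:.2e} J")   # roughly 2.9e-21 J
print(f"erasing a gigabyte (8e9 bits): {per_bit * 8e9:.2e} J")                       # roughly 2.3e-11 J

The numbers are tiny compared with what real hardware dissipates, which is why the bound matters mainly as a fundamental limit, including, if the new result holds, for quantum computers.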

a connection between dark energy and time?


thescienceexplorer |  What the heck is dark energy? Physicists have been trying to explain dark energy — the mysterious repulsive force that pushes everything in the universe apart — for years. And even though it makes up nearly 70 percent of all energy in the universe, the only reason we know it exists is its influence on other objects. It has never been directly detected.

But dark energy may play a hand in another fundamental quantity of physics. According to a recent paper published in the journal Physical Review E, a team of researchers has postulated that in some cases, dark energy might cause time to propagate forward.

When physicists first peered into the depths of the cosmos, they expected to find that the universe's expansion was slowing down because of the collective gravity of all the matter produced after the big bang. However, they discovered something rather surprising: everything is speeding up.

As far as we know, the universe operates according to the laws of physics, and almost all the laws are time-reversible, meaning that things look exactly the same whether time runs backwards or forwards.

But why does time have an arrow pointing from the past to the present to the future?

It likely comes down to one very important tenet of physics that is not time-reversible — the second law of thermodynamics. It states that as time moves forward, the amount of disorder in the universe always increases. For this reason, it is currently accepted that the second law is the source of time’s arrow.
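
A toy way to see why disorder runs only one way (my own sketch, the classic Ehrenfest urn model, not anything from the paper): start all the particles of a gas on one side of a box, let one randomly chosen particle hop to the other side at each step, and the statistical entropy of the arrangement almost always climbs toward its maximum and then stays near it. The reverse film, in which the gas spontaneously gathers on one side, is overwhelmingly improbable, and that asymmetry is the arrow the second law supplies.

import math
import random

N = 100        # number of particles (an arbitrary choice)
STEPS = 500
left = N       # start with every particle in the left half: the most ordered macrostate

def entropy(n_left):
    # Boltzmann entropy, in units of k_B, of the macrostate with n_left particles on the left:
    # S = ln(number of microstates) = ln(C(N, n_left))
    return math.log(math.comb(N, n_left))

random.seed(0)
for step in range(STEPS + 1):
    if step % 100 == 0:
        print(f"step {step:3d}: {left:3d} particles on the left, S/k_B = {entropy(left):.2f}")
    # pick one particle uniformly at random and move it to the other side
    if random.random() < left / N:
        left -= 1
    else:
        left += 1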

Tuesday, May 24, 2016

doing the most: eavesdropping neuronal snmp < hacking dreams


ted |  Now, if you were interested in studying dreams, I would recommend starting first by just looking at people's thoughts when they are awake, and this is what I do. So I am indeed a neuroscientist, but I study the brain in a very non-traditional way, partially inspired by my background. Before I became a neuroscientist, I was a computer hacker. I used to break into banks and government institutes to test their security. And I wanted to use the same techniques that hackers use to look inside black boxes when I wanted to study the brain, looking from the inside out.
4:34
Now, neuroscientists study the brain using one of two typical methods. Some of them look at the brain from the outside using imaging techniques like EEG or fMRI. And the problem there is that the signal is very blurry, coarse. So others look at the brain from the inside, where they stick electrodes inside the brain and listen to brain cells speaking their own language. This is very precise, but this obviously can be done only with animals. Now, if you were to peek inside the brain and listen to it speak, what you would see is that it has this electrochemical signal that you can translate to sound, and this sound is the common currency of the brain. It sounds something like this.
5:17
(Clicking)
5:21
So I wanted to use this in humans, but who would let you do that? Patients who undergo brain surgery. So I partner with neurosurgeons across the globe who employ this unique procedure where they open the skull of patients, they stick electrodes in the brain to find the source of the problem, and finding the source can take days or sometimes weeks, so this gives us a unique opportunity to eavesdrop on the brains of patients while they are awake and behaving and they have their skull open with electrodes inside.
6:02
So now that we do that, we want to find what makes those cells active, what makes them tick. So what we do is we run studies like this one. This is Linda, one of our patients. She is sitting here and watching those clips.
6:16
(Video) ... can't even begin to imagine.
