Showing posts with label fractal unfolding.

Sunday, June 17, 2018

Musean Hypernumbers


archive.is |  Musean hypernumbers are an algebraic concept envisioned by Charles A. Musès (1919–2000) to form a complete, integrated, connected, and natural number system.[1][2][3][4][5] Musès sketched certain fundamental types of hypernumbers and arranged them in ten "levels", each with its own associated arithmetic and geometry.
Mostly criticized for lack of mathematical rigor and unclear defining relations, Musean hypernumbers are often perceived as unfounded mathematical speculation. This impression was not helped by Musès' outspoken confidence in their applicability to fields far beyond what one might expect from a number system, including consciousness, religion, and metaphysics.
The term "M-algebra" was used by Musès for investigation into a subset of his hypernumber concept (the 16 dimensional conic sedenions and certain subalgebras thereof), which is at times confused with the Musean hypernumber level concept itself. The current article separates this well-understood "M-algebra" from the remaining controversial hypernumbers, and lists certain applications envisioned by the inventor.

"M-algebra" and "hypernumber levels"[edit]

Musès was convinced that the basic laws of arithmetic on the reals are in direct correspondence with a concept where numbers could be arranged in "levels", where fewer arithmetical laws would be applicable with increasing level number.[3] However, this concept was not developed much further beyond the initial idea, and defining relations for most of these levels have not been constructed.
Higher-dimensional numbers built on the first three levels were called "M-algebra"[6][7] by Musès if they yielded a distributive multiplication, unit element, and multiplicative norm. It contains certain kinds of octonions and the historical quaternions (except A. MacFarlane's hyperbolic quaternions) as subalgebras. A proof of completeness of M-algebra has not been provided.

Conic sedenions / "16 dimensional M-algebra"

The term "M-algebra" (after C. Musès[6]) refers to number systems that are vector spaces over the reals, whose bases consist in roots of −1 or +1, and which possess a multiplicative modulus. While the idea of such numbers was far from new and contains many known isomorphic number systems (like e.g. split-complex numbers or tessarines), certain results from 16 dimensional (conic) sedenions were a novelty. Musès demonstrated the existence of a logarithm and real powers in number systems built to non-real roots of +1.
 

Friday, June 15, 2018

Are Space And Time Quantized?


Forbes |  Throughout the history of science, one of the prime goals of making sense of the Universe has been to discover what's fundamental. Many of the things we observe and interact with in the modern, macroscopic world are composed of, and can be derived from, smaller particles and the underlying laws that govern them. The idea that everything is made of elements dates back thousands of years, and has taken us from alchemy to chemistry to atoms to subatomic particles to the Standard Model, including the radical concept of a quantum Universe.

But even though there's very good evidence that all of the fundamental entities in the Universe are quantum at some level, that doesn't mean that everything is both discrete and quantized. So long as we still don't fully understand gravity at a quantum level, space and time might still be continuous at a fundamental level. Here's what we know so far.

Quantum mechanics is the idea that, if you go down to a small enough scale, everything that contains energy, whether it's massive (like an electron) or massless (like a photon), can be broken down into individual quanta. You can think of these quanta as energy packets, which sometimes behave as particles and other times behave as waves, depending on what they interact with.

Everything in nature obeys the laws of quantum physics, and our "classical" laws that apply to larger, more macroscopic systems can always (at least in theory) be derived, or emerge, from the more fundamental quantum rules. But not everything is necessarily discrete, or capable of being confined to a localized region of space.


Figure: The energy level differences in Lutetium-177. Note how there are only specific, discrete energy levels that are acceptable. While the energy levels are discrete, the positions of the electrons are not.

If you have the conduction band of a metal, for example, and ask "where is this electron that occupies the band," there's no discreteness there. The electron can be anywhere, continuously, within the band. A free photon can have any wavelength and energy; no discreteness there. Just because something is quantized, or fundamentally quantum in nature, doesn't mean everything about it must be discrete.
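
A pair of standard formulas makes the contrast concrete: an electron bound in hydrogen can only occupy discrete energy levels, while a free photon's energy varies continuously with wavelength.

    E_n = -\frac{13.6\ \text{eV}}{n^2},\quad n = 1, 2, 3, \ldots \qquad \text{(bound electron: discrete)}
    E_\gamma = \frac{hc}{\lambda},\quad \lambda \in (0, \infty) \qquad \text{(free photon: continuous)}

Quantum rules dictate which values are allowed; they do not force every quantity to come in discrete steps.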

The idea that space (or space and time, since they're inextricably linked by Einstein's theories of relativity) could be quantized goes way back to Heisenberg himself. Famous for the Uncertainty Principle, which fundamentally limits how precisely we can measure certain pairs of quantities (like position and momentum), Heisenberg realized that certain quantities diverged, or went to infinity, when you tried to calculate them in quantum field theory.
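
For scale, the regime where quantum effects on spacetime itself are generally expected to become important is set by the Planck units, combining the constants of quantum mechanics (ħ), gravity (G), and relativity (c):

    \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}, \qquad t_P = \frac{\ell_P}{c} \approx 5.4 \times 10^{-44}\ \text{s}

Whether space and time are actually discrete at that scale is precisely the open question; the Planck length only marks where the known theories stop being trustworthy.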

Wednesday, May 16, 2018

DIY DNA Tinkering...,


NYTimes  |  If nefarious biohackers were to create a biological weapon from scratch — a killer that would bounce from host to host to host, capable of reaching millions of people, unrestrained by time or distance — they would probably begin with some online shopping.

A site called Science Exchange, for example, serves as a Craigslist for DNA, a commercial ecosystem connecting almost anyone with online access and a valid credit card to companies that sell cloned DNA fragments.


Mr. Gandall, the Stanford fellow, often buys such fragments — benign ones. But the workarounds for someone with ill intent, he said, might not be hard to figure out.

Biohackers will soon be able to forgo these companies altogether with an all-in-one desktop genome printer: a device much like an inkjet printer that employs the letters A, G, T and C — the bases of the genetic alphabet — instead of the color model CMYK.

A similar device already exists for institutional labs, called BioXp 3200, which sells for about $65,000. But at-home biohackers can start with DNA Playground from Amino Labs, an Easy Bake genetic oven that costs less than an iPad, or The Odin’s Crispr gene-editing kit for $159.

Tools like these may be threatening in the wrong hands, but they also helped Mr. Gandall start a promising career.

At age 11, he picked up a virology textbook at a church book fair. Before he was old enough for a driver’s permit, he was urging his mother to shuttle him to a research job at the University of California, Irvine.

He began dressing exclusively in red polo shirts to avoid the distraction of choosing outfits. He doodled through high school — correcting biology teachers — and was kicked out of a local science fair for what was deemed reckless home-brew genetic engineering.

Mr. Gandall barely earned a high-school diploma, he said, and was rebuffed by almost every college he applied to — but later gained a bioengineering position at Stanford University.


“Pretty ironic, after they rejected me as a student,” he said.

He moved to East Palo Alto — with 14 red polo shirts — into a house with three nonbiologists, who don’t much notice that DNA is cloned in the corner of his bedroom.

His mission at Stanford is to build a body of genetic material for public use. To his fellow biohackers, it’s a noble endeavor.

To biosecurity experts, it’s tossing ammunition into trigger-happy hands.

“There are really only two things that could wipe 30 million people off of the planet: a nuclear weapon, or a biological one,” said Lawrence O. Gostin, an adviser on pandemic influenza preparedness to the World Health Organization.

“Somehow, the U.S. government fears and prepares for the former, but not remotely for the latter. It baffles me.”


Wednesday, October 04, 2017

Access the Guardian Through a Raspberry Pi? Of Course...,


wikipedia |  In June 2014, Wolfram officially announced the Wolfram Language as a new general multi-paradigm programming language.[65] The documentation for the language was pre-released in October 2013 to coincide with the bundling of Mathematica and the Wolfram Language on every Raspberry Pi computer. While the Wolfram Language has existed for over 25 years as the primary programming language used in Mathematica, it was not officially named until 2014.[66] Wolfram's son, Christopher Wolfram, appeared on the program of SXSW giving a live-coding demonstration using the Wolfram Language,[67] and has blogged about Wolfram Language for Wolfram Research.[68]

On 8 December 2015, Wolfram published the book "An Elementary Introduction to the Wolfram Language" to introduce people with no knowledge of programming to the Wolfram Language and the kind of computational thinking it allows.[69] The release of the second edition of the book[70] coincided with a "CEO for hire" competition during the 2017 Collision tech conference.[71]

Both Stephen Wolfram and Christopher Wolfram were involved in helping create the alien language for the film Arrival, for which they used the Wolfram Language.[72][73][74]

An Introduction to the Wolfram Language Online

A New Kind of Science


wikipedia |  The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. Since its beginnings in the 1930s, computation has been primarily approached from two traditions: engineering, which seeks to build practical systems using computations; and mathematics, which seeks to prove theorems about computation. However, as recently as the 1970s, computing was described as being at the crossroads of mathematical, engineering, and empirical traditions.[2][3]

Wolfram introduces a third tradition that seeks to empirically investigate computation for its own sake: He argues that an entirely new method is needed to do so because traditional mathematics fails to meaningfully describe complex systems, and that there is an upper limit to complexity in all systems.[4]

Simple programs

The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases (after repeated iteration, applying the same simple set of rules to its own output, like a self-reinforcing cycle). This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in one and two dimensions; several varieties of substitution and network systems; primitive recursive functions; nested recursive functions; combinators; tag systems; register machines; and reversal-addition. For a program to qualify as simple, there are several requirements:
  1. Its operation can be completely explained by a simple graphical illustration.
  2. It can be completely explained in a few sentences of human language.
  3. It can be implemented in a computer language using just a few lines of code.
  4. The number of its possible variations is small enough so that all of them can be computed.
Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
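
To see how little room the definition leaves, here is a minimal sketch in Python of one of the book's central examples, the elementary cellular automaton Rule 30; the rule number, grid width, and step count are just illustrative choices:

    # One step of an elementary cellular automaton: each cell's new value is
    # looked up from the rule number, indexed by the (left, self, right) bits.
    def step(cells, rule=30):
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                          # a single "on" cell in the middle
    for _ in range(16):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

The whole program satisfies all four requirements above, yet the center column of Rule 30's output passes demanding statistical tests of randomness; that gap between the size of the definition and the richness of the behavior is what the preceding paragraph calls emergence.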

Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.

Mapping and mining the computational universe

In order to study simple rules and their often complex behaviour, Wolfram argues that it is necessary to systematically explore all of these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.

The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible. There are also some forms of production that are in some ways unique to this field of study, for example, the discovery of computational mechanisms that emerge in different systems but in bizarrely different forms.

Another kind of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.
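
A toy version of this survey methodology, under the loud assumption that counting distinct rows is an acceptable stand-in for a real complexity measure (it is not Wolfram's classification scheme), might look like this in Python:

    # Run all 256 elementary cellular automaton rules from the same seed and
    # rank them by a crude complexity proxy: how many distinct rows appear.
    def run(rule, width=64, steps=64):
        cells = [0] * width
        cells[width // 2] = 1
        rows = []
        for _ in range(steps):
            rows.append(tuple(cells))
            cells = [(rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                               + cells[(i + 1) % width])) & 1
                     for i in range(width)]
        return rows

    scores = {rule: len(set(run(rule))) for rule in range(256)}
    for rule, score in sorted(scores.items(), key=lambda kv: -kv[1])[:10]:
        print(f"rule {rule:3d}: {score} distinct rows")

Even this crude filter surfaces the famously complex rules, which is the point: exhaustive enumeration plus a simple measurement is already a workable expedition into the computational universe.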

Wolfram has since remarked: "A central lesson of A New Kind of Science is that there’s a lot of incredible richness out there in the computational universe. And one reason that’s important is that it means that there’s a lot of incredible stuff out there for us to 'mine' and harness for our purposes."[5]

Sunday, October 01, 2017

Quantum Criticality in Living Systems


phys.org  |  Stuart Kauffman, from the University of Calgary, and several of his colleagues have recently published a paper on the Arxiv server titled 'Quantum Criticality at the Origins of Life'. The idea of quantum criticality, and more generally quantum critical states, comes, perhaps not surprisingly, from solid state physics. It describes unusual electronic states that are balanced somewhere between conduction and insulation. More specifically, under certain conditions, current flow at the critical point becomes unpredictable. When it does flow, it tends to do so in avalanches that vary by several orders of magnitude in size.

Ferromagnetic metals, like iron, are one familiar example of a material that has a classical critical point. Above a Curie temperature of 1043 degrees K, the magnetization of iron is completely lost. In the narrow range approaching this point, however, thermal fluctuations in the electron spins that underlie the magnetic behavior extend over all length scales of the sample—that's the scale invariance we mentioned. In this case we have a continuous phase transition that is thermally driven, as opposed to being driven by something else like external pressure, magnetic field, or some kind of chemical influence.
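
That scale invariance has a standard quantitative form: as the temperature T approaches the critical temperature T_c (about 1043 K for iron), the correlation length of the spin fluctuations diverges and the magnetization vanishes as power laws,

    \xi \propto |T - T_c|^{-\nu}, \qquad M \propto (T_c - T)^{\beta} \quad (T < T_c),

with exponents (roughly ν ≈ 0.7 and β ≈ 0.37 for an isotropic three-dimensional ferromagnet) that depend only on symmetry and dimensionality, not on the microscopic details of the material.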

Quantum criticality, on the other hand, is usually associated with stranger electronic behaviors—things like high-temperature superconductivity or so-called heavy fermion metals like CeRhIn5. One strange behavior in the case of heavy fermions, for example, is the observation of large 'effective mass'—mass up to 1000 times normal—for the conduction electrons as a consequence of their narrow electronic bands. These kinds of phenomena can only be explained in terms of the collective behavior of highly correlated electrons, as opposed to more familiar theory based on decoupled electrons. 

Experimental evidence for critical points in materials like CeRhIn5 has only recently been found. In this case the so-called "Fermi surface," a three-dimensional map representing the collective energy states of all electrons in the material, was seen to have large instantaneous shifts at the critical points. When electrons across the entire Fermi surface are strongly coupled, unusual physics like superconductivity is possible.

The potential existence of quantum criticality in proteins is a new idea that will need some experimental evidence to back it up. Kauffman and his group eloquently describe the major differences between current flow in proteins as compared to metallic conductors. They note that in metals charges flow due to voltage differences. Here, an electric field accelerates electrons, while scattering on impurities dissipates their energy, fixing a constant average propagation velocity.
By contrast, this kind of mechanism would appear to be uncommon in biological systems. The authors note that charges entering a critically conducting biomolecule will be under the joint influence of the quantum Hamiltonian and the excessive decoherence caused by the environment. Currently a huge focus in quantum biology, this kind of conductance has been seen, for example, for excitons in light-harvesting systems. As might already be apparent here, the logical flow of the paper, at least to nonspecialists, quickly devolves into the more esoteric world of quantum Hamiltonians and niche concepts like 'Anderson localization.'

To try to catch a glimpse of what might be going on without becoming steeped in formalism I asked Luca Turin, who actually holds the patent for semiconductor structures using proteins as their active element, for his take on the paper. He notes that the question of how electrons get across proteins is one of the great unsolved problems in biophysics, and that the Kauffman paper points in a novel direction to possibly explain conduction. Quantum tunnelling (which is an essential process, for example, in the joint special ops of proteins of the respiratory chain) works fine over small distances. However, rates fall precipitously with distance. Traditional hole and electron transport mechanisms butt against the high bandgap and absence of obvious acceptor impurities. Yet at rest our body's fuel cell generates 100 amps of electron current.
 
In suggesting that biomolecules, or at least most of them, are quantum critical conductors, Kauffman and his group are claiming that their electronic properties are precisely tuned to the transition point between a metal and an insulator. An even stronger reading of this would have it that there is a universal mechanism of charge transport in living matter which can exist only in highly evolved systems. To back all this up the group took a closer look at the electronic structure of a few of our standard issue proteins like myoglobin, profilin, and apolipoprotein E.

In particular, they selected NMR spectra from the Protein Data Bank and used a technique known as the extended Hückel Hamiltonian method to calculate HOMO/LUMO orbitals for the proteins. For more comments on HOMO/LUMO orbital calculations you might look at our post on Turin's experiments on electron spin changes as a potential general mechanism of anesthesia. To fully appreciate what such calculations might imply in this case, we have to toss out another fairly abstract concept, namely, Hofstadter's butterfly.
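
For a feel of what a Hückel-type HOMO/LUMO calculation involves, here is a minimal sketch in Python, applied to benzene's six pi orbitals rather than to a protein; alpha and beta are the conventional Coulomb and resonance parameters, and the values below are illustrative placeholders, not the extended-Hückel parameterization used in the paper:

    # Toy Hueckel calculation for the benzene ring: build the one-electron
    # Hamiltonian, diagonalize it, and read off the HOMO/LUMO gap.
    import numpy as np

    n = 6                      # six carbon 2p orbitals in the ring
    alpha, beta = -11.0, -2.5  # eV; illustrative Hueckel parameters
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = alpha
        H[i, (i + 1) % n] = H[(i + 1) % n, i] = beta  # nearest-neighbor bonds

    levels = np.sort(np.linalg.eigvalsh(H))
    n_electrons = 6            # one pi electron per carbon
    homo = levels[n_electrons // 2 - 1]
    lumo = levels[n_electrons // 2]
    print(f"HOMO = {homo:.2f} eV, LUMO = {lumo:.2f} eV, gap = {lumo - homo:.2f} eV")

The same bookkeeping, at vastly larger matrix sizes and with distance-dependent couplings, underlies the protein spectra discussed above.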

Friday, September 29, 2017

Why the Future Doesn't Need Us


ecosophia |  Let’s start with the concept of the division of labor. One of the great distinctions between a modern industrial society and other modes of human social organization is that in the former, very few activities are taken from beginning to end by the same person. A woman in a hunter-gatherer community, as she is getting ready for the autumn tuber-digging season, chooses a piece of wood, cuts it, shapes it into a digging stick, carefully hardens the business end in hot coals, and then puts it to work getting tubers out of the ground. Once she carries the tubers back to camp, what’s more, she’s far more likely than not to take part in cleaning them, roasting them, and sharing them out to the members of the band.

A woman in a modern industrial society who wants to have potatoes for dinner, by contrast, may do no more of the total labor involved in that process than sticking a package in the microwave. Even if she has potatoes growing in a container garden out back, say, and serves up potatoes she grew, harvested, and cooked herself, odds are she didn’t make the gardening tools, the cookware, or the stove she uses. That’s division of labor: the social process by which most members of an industrial society specialize in one or another narrow economic niche, and use the money they earn from their work in that niche to buy the products of other economic niches.

Let’s say it up front: there are huge advantages to the division of labor.  It’s more efficient in almost every sense, whether you’re measuring efficiency in terms of output per person per hour, skill level per dollar invested in education, or what have you. What’s more, when it’s combined with a social structure that isn’t too rigidly deterministic, it’s at least possible for people to find their way to occupational specialties for which they’re actually suited, and in which they will be more productive than otherwise. Yet it bears recalling that every good thing has its downsides, especially when it’s pushed to extremes, and the division of labor is no exception.

Crackpot realism is one of the downsides of the division of labor. It emerges reliably whenever two conditions are in effect. The first condition is that the task of choosing goals for an activity is assigned to one group of people and the task of finding means to achieve those goals is left to a different group of people. The second condition is that the first group needs to be enough higher in social status than the second group that members of the first group need pay no attention to the concerns of the second group.

Consider, as an example, the plight of a team of engineers tasked with designing a flying car.  People have been trying to do this for more than a century now, and the results are in: it’s a really dumb idea. It so happens that a great many of the engineering features that make a good car make a bad aircraft, and vice versa; for instance, an auto engine needs to be optimized for torque rather than speed, while an aircraft engine needs to be optimized for speed rather than torque. Thus every flying car ever built—and there have been plenty of them—performed just as poorly as a car as it did as a plane, and cost so much that for the same price you could buy a good car, a good airplane, and enough fuel to keep both of them running for a good long time.

Engineers know this. Still, if you’re an engineer and you’ve been hired by some clueless tech-industry godzillionaire who wants a flying car, you probably don’t have the option of telling your employer the truth about his pet project—that is, that no matter how much of his money he plows into the project, he’s going to get a clunker of a vehicle that won’t be any good at either of its two incompatible roles—because he’ll simply fire you and hire someone who will tell him what he wants to hear. Nor do you have the option of sitting him down and getting him to face what’s behind his own unexamined desires and expectations, so that he might notice that his fixation on having a flying car is an emotionally charged hangover from age eight, when he daydreamed about having one to help him cope with the miserable, bully-ridden public school system in which he was trapped for so many wretched years. So you devote your working hours to finding the most rational, scientific, and utilitarian means to accomplish a pointless, useless, and self-defeating end. That’s crackpot realism.

You can make a great party game out of identifying crackpot realism—try it sometime—but I’ll leave that to my more enterprising readers. What I want to talk about right now is one of the most glaring examples of crackpot realism in contemporary industrial society. Yes, we’re going to talk about space travel again.

Thursday, September 28, 2017

Poised Realm Patent


google | CROSS-REFERENCE TO RELATED APPLICATIONS
  • [0001]
    This application claims the benefit of U.S. Provisional Application Nos. 61/367,781, filed Jul. 26, 2010; 61/367,779, filed Jul. 26, 2010; 61/416,723, filed Nov. 23, 2010; 61/420,720, filed Dec. 7, 2010; and 61/431,420, filed Jan. 10, 2011, all of which are incorporated herein by reference in their entirety.
  • BACKGROUND
  • [0002]
    1. Field of the Invention
  • [0003]
    The present invention relates to systems and uses of systems operating between fully quantum coherent and fully classical states. Non-limiting applications include drug discovery, computers, and artificial intelligence.
  • [0004]
    2. Background Description
  • [0005]
    Many physical systems having quantum degrees of freedom quickly decohere to classicity for all practical purposes. Thus, many designed systems consider only classical behaviors. One example is in the field of drug discovery, where traditional approaches to drug design consider the lock-and-key fitting of a molecule into an enzyme or receptor. Other designed systems are carefully set up to maintain full quantum coherence, for example, the qubits in a quantum computer. However, recent discoveries have indicated several systems in nature that have relatively slow decoherence. Birds are able to see magnetic field lines due to a quantum coherent chemical reaction in their retina. Light harvesting molecules are able to maintain quantum coherent electron transport for times much longer than the expected coherence time at room temperatures. The existence of such cases demonstrates that quantum coherence can exist at room temperature and in the presence of a water bath, and that evolution can ‘design’ quantum coherent structures to play certain biological roles. Thus, there is a need for new systems that utilize the unique properties that exist between full quantum coherence and classicity.
    SUMMARY OF THE INVENTION
    • [0006]
      Disclosed herein are various methods of classifying the state of a system, such as a molecule interacting with its environment, in terms of its degree of order, its degree of coherence, and/or its rate of coherence decay. Some embodiments include classifying only a single one of these variables whereas other embodiments include classifying two or all three of the variables. These methods include classifying the system in the course of creating systems that exist and/or operate at a specific point or region of a classification space described by the variables discussed above and all practical outcomes of such creation.
    • [0007]
      Disclosed herein is a quantum reservoir computer that includes a plurality of nodes, each node comprising at least one quantum degree of freedom that is coupled to at least one quantum degree of freedom in each other node; at least one input signal generator configured to produce at least one time-varying input signal that couples to the quantum degree of freedom; and a detector configured to receive a plurality of time-varying output signals that couple to the quantum degree of freedom.
    • [0008]
      Also disclosed herein is a method of drug discovery that includes selecting a biological target; screening a library of candidate molecules to identify a first subset of candidate molecules that bind to the biological target; determining the energy level spacing distribution of a quantum degree of freedom in each of the candidate molecules in the first subset; comparing the energy level spacing distribution to at least one pre-determined reference function; and selecting a second subset of molecules from the first subset as drug candidates based on the comparison.
    • [0009]
      Further disclosed herein is a method of drug discovery that includes selecting a biological target; screening a library of candidate molecules to identify a first subset of candidate molecules that bind to the biological target; determining the energy level spacing distribution of a quantum degree of freedom in each of the candidate molecules in the first subset; conducting an in vitro or in vivo assay for biological activity on each of the candidate molecules in the first subset; correlating the energy level spacing distribution with activity determined from the in vitro or in vivo assay; determining the energy level spacing distributions of a quantum degree of freedom in a new set of candidate molecules; comparing the energy level spacing distributions of the new set of candidate molecules with energy level spacing distributions that correlate with biological activity; and selecting as drug candidates, from the new set of candidate molecules, those molecules whose energy level spacing distributions exhibit a pre-determined level of similarity to the energy level spacing distributions that correlate with biological activity.
    • [0010]
      Further disclosed herein is a method of drug discovery that includes selecting a biological target; screening a library of candidate molecules to identify a first subset of candidate molecules that bind to the biological target; measuring decoherence decay of a quantum degree of freedom in each of the candidate molecules in the first subset; comparing the decoherence decay to at least one pre-determined reference function; and selecting a second subset of molecules from the first subset as drug candidates based on the comparison.
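
A hypothetical sketch of the comparison step these claims describe, assuming a Kolmogorov-Smirnov-style distance and the two standard Random Matrix Theory reference functions, Poisson (localized/insulating) and the Wigner surmise (metallic); the patent text specifies neither the metric nor the reference curves:

    # Compare a spectrum's nearest-neighbor level spacings to two reference
    # distributions; the synthetic spectrum here is a stand-in for computed
    # molecular energy levels.
    import numpy as np

    def spacing_samples(levels):
        levels = np.sort(np.asarray(levels, dtype=float))
        s = np.diff(levels)
        return s / s.mean()                # normalize to unit mean spacing

    def distance_to_reference(s, cdf):
        s = np.sort(s)
        empirical = np.arange(1, len(s) + 1) / len(s)
        return np.max(np.abs(empirical - cdf(s)))   # KS-style statistic

    poisson_cdf = lambda s: 1.0 - np.exp(-s)
    wigner_cdf = lambda s: 1.0 - np.exp(-np.pi * s**2 / 4.0)

    rng = np.random.default_rng(0)
    levels = np.cumsum(rng.exponential(size=500))   # stand-in spectrum
    s = spacing_samples(levels)
    print("distance to Poisson:", distance_to_reference(s, poisson_cdf))
    print("distance to Wigner :", distance_to_reference(s, wigner_cdf))

A molecule whose spacings sit closer to one reference curve than the other would then be kept or discarded according to the claimed pre-determined threshold.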

Sunday, December 11, 2016

the semiosis of evolution


springer |  Most contemporary evolutionary biologists consider perception, cognition, and communication just like any other adaptation to environmental selection pressures. A biosemiotic approach adds an unexpected turn to this Neo-Darwinian logic and focuses not so much on the evolution of semiosis as it does on the semiosis of evolution. What is meant here is that evolutionary forces are themselves semiotically constrained and contextualized. The effect of environmental conditions is always mediated by the responses of organisms, who select their developmental pathways and actions based on heritable or memorized past experience and a variety of external and internal signals. In particular, recognition and categorization of objects, learning, and communication (both intraspecific and interspecific) can change the evolutionary fate of lineages. Semiotic selection, an effect of choice upon other species (Maran and Kleisner 2010), active habitat preference (Lindholm 2015), making use of and reinterpreting earlier semiotic structures – known as semiotic co-option (Kleisner 2015) – and semiotic scaffolding (Hoffmeyer 2015; Kull 2015) are some further means by which semiosis makes evolution happen.

Semiotic processes are easily recognized in animals that communicate and learn, but it is difficult to find directly analogous processes in organisms without nerves and brains. Molecular biologists are used to talking about information transfer via cell-to-cell communication, DNA replication, RNA or protein synthesis, and signal transduction cascades within cells. However, these informational processes are difficult to compare with perception-related sign processes in animals because information requires interpretation by some agency, and it is not clear where the agency in cells is. In bacterial cells, all molecular processes appear deterministic, with every signal, such as the presence of a nutrient or toxin, launching a pre-defined cascade of responses targeted at confronting new conditions. These processes lack an element of learning during the bacterial life span, and thus cannot be compared directly with complex animal and human semiosis, where individual learning plays a decisive role.

The determinism of the molecular clockwork was summarized in the dogma that genes determine the phenotype and not the other way around. As a result, the Modern Synthesis (MS) theory presented evolution as a mechanical process that starts with blind random variation of the genome, and ends with automatic selection of the fittest phenotypes. Although this theory may explain quantitative changes in already existing features, it certainly cannot describe the emergence of new organs or signaling pathways. The main deficiency of such explanations is that the exact correspondence between genotypes and phenotypes is postulated a priori. In other words, MS was built like Euclidean geometry, where questioning the foundational axioms will make the whole system fall, like a house of cards.

The discipline of biosemiotics has generated a new platform for explaining biological evolution. It considers that evolution is semiosis, a process of continuous interpretation and re-interpretation of hereditary signs alongside other signs that originate in the environment or the body.

Friday, October 28, 2016

Computational Genomics F'Real...,


WSJ |  A quick riddle: what do 100 works of classic literature, a seed database from the nonprofit Crop Trust and the Universal Declaration of Human Rights have in common? All of them were recently converted from bits of digital data to strands of synthetic DNA. In addition to these weighty files, researchers at Microsoft and the University of Washington converted a high-definition music video of “This Too Shall Pass” by the alternative rock band OK Go. The video is an homage to Rube Goldberg-like contraptions, which bear more than a passing resemblance to the labyrinthine process of transforming data into the genetic instructions that shape all living things.

This recent data-to-DNA conversion, completed in July, totaled 200 megabytes—which would barely register on a 16-gigabyte iPhone. It’s not a huge amount of information, but it bested the previous DNA storage record, set by scientists at Harvard University, by a factor of about 10. To achieve this, researchers concocted a convoluted process to encode the data, store it in synthetic DNA and then use DNA sequencing machines to retrieve and, finally, decode the data. The result? The exact same files they began with.

Which raises the question: Why bother?

“We are seeing this explosion in the amount of data that needs to be stored,” says Karin Strauss, the principal Microsoft researcher on the project. “To continue storing this information, we need radical new approaches.” In an age of gargantuan, power-sucking data centers, the space-saving potential of data stored in DNA is staggering. “You can archive all the data on the internet in a shoebox,” says Luis Ceze, an associate professor of computer science and engineering at the University of Washington.
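
The core encoding idea is simple enough to sketch: with four bases, each nucleotide carries two bits, so any byte stream maps to a base sequence and back. A minimal Python illustration follows; real schemes, including the Microsoft/University of Washington work, add addressing, error correction, and constraints against long single-base runs, none of which is shown here:

    # Two bits per nucleotide: encode bytes as a DNA base string and back.
    BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
    BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}

    def encode(data: bytes) -> str:
        return "".join(BASE_FOR_BITS[(byte >> shift) & 0b11]
                       for byte in data
                       for shift in (6, 4, 2, 0))

    def decode(seq: str) -> bytes:
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for base in seq[i:i + 4]:
                byte = (byte << 2) | BITS_FOR_BASE[base]
            out.append(byte)
        return bytes(out)

    msg = b"This Too Shall Pass"
    assert decode(encode(msg)) == msg
    print(encode(msg)[:32], "...")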

Tuesday, June 07, 2016

quantum criticality at the origin of life


arxiv |  Why life persists at the edge of chaos is a question at the very heart of evolution. Here we show that molecules taking part in biochemical processes, from small molecules to proteins, are quantum mechanically critical. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Using tools from Random Matrix Theory we confirm that the energy level statistics of these biomolecules show the universal transitional distribution of the metal-insulator critical point and the wave functions are multifractals in accordance with the theory of Anderson transitions. The findings point to the existence of a universal mechanism of charge transport in living matter. The revealed bio-conductor material is neither a metal nor an insulator but a new quantum critical material which can exist only in highly evolved systems and has unique material properties.
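
The "universal transitional distribution" the abstract invokes sits between the two textbook reference cases for nearest-neighbor level spacings s (normalized to unit mean): Poisson statistics in the Anderson-localized (insulating) phase and the Wigner surmise in the metallic phase,

    P_{\text{Poisson}}(s) = e^{-s}, \qquad P_{\text{Wigner}}(s) = \frac{\pi s}{2}\, e^{-\pi s^{2}/4}.

At the metal-insulator critical point the spacing statistics follow neither curve but a third, scale-invariant form; the paper's claim is that biomolecular spectra land on that critical curve.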

Tuesday, November 24, 2015

control of fractal unfolding - impressive, most impressive...


Science | The war against malaria has a new ally: a controversial technology for spreading genes throughout a population of animals. Researchers report today that they have harnessed a so-called gene drive to efficiently endow mosquitoes with genes that should make them immune to the malaria parasite—and unable to spread it. On its own, gene drive won’t get rid of malaria, but if successfully applied in the wild, the method could help wipe out the disease, at least in some corners of the world. The approach “can bring us to zero [cases],” says Nora Besansky, a geneticist at the University of Notre Dame in South Bend, Indiana, who specializes in malaria-carrying mosquitoes. “The mosquitoes do their own work [and] reach places we can’t afford to go or get to.”

But testing that promise in the field may have to wait until a wider debate over gene drives is resolved. The essence of this long-discussed strategy for spreading a genetic trait, such as disease resistance, is to bias inheritance so that more than the expected half of a subsequent generation inherits it. The gene drive concept attracted new attention earlier this year, when geneticists studying fruit flies adapted a gene editing technology called CRISPR-Cas9 to help spread a mutation—and were startled to find it worked so well that the mutation reached almost all fly progeny. Their report, published this spring in Science (20 March, p. 1300), came out less than a year after an eLife paper discussed the feasibility of a CRISPR-Cas9 gene drive system but warned that it could disrupt ecosystems and wipe out populations of entire species.
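
To see how strong that inheritance bias is, consider a toy deterministic model, sketched in Python under stated assumptions: random mating, no fitness cost, and a homing efficiency c, so that a heterozygote transmits the drive allele with probability (1 + c)/2 instead of the Mendelian 1/2:

    # Deterministic spread of a gene-drive allele at frequency q.
    def next_freq(q, c=0.95):
        # Offspring carrying the drive: from drive/drive pairings (q*q) plus
        # heterozygous transmissions biased from 1/2 up to (1 + c)/2.
        return q * q + 2 * q * (1 - q) * (1 + c) / 2

    q = 0.01                   # release drive alleles at 1% frequency
    for gen in range(12):
        print(f"generation {gen:2d}: drive allele frequency = {q:.3f}")
        q = next_freq(q)

With c near the high efficiencies reported for CRISPR-based drives, the allele climbs from 1% to near fixation in roughly a dozen generations, which is why both the promise and the ecological risk are taken so seriously.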

A firestorm quickly erupted over the risks of experimenting with gene drives, never mind applying them in the field. The U.S. National Academy of Sciences (NAS) has convened a committee to weigh the risks and propose safeguards, and the authors of the eLife and Science papers have laid out guidelines for experiments (Science, 28 August, p. 927).
