Showing posts with label quantum. Show all posts

Saturday, June 03, 2023

The Collapse Of The Wave Function

wikipedia  |  In quantum mechanics, the measurement problem is the problem of how, or whether, wave function collapse occurs. The inability to observe such a collapse directly has given rise to different interpretations of quantum mechanics and poses a key set of questions that each interpretation must answer.

The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is describing what that "something" is, how a superposition of many possible values becomes a single measured value.

To express matters differently (paraphrasing Steven Weinberg),[1][2] the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum reality and classical reality?[3]

The views often grouped together as the Copenhagen interpretation are the oldest and, collectively, probably still the most widely held attitude about quantum mechanics.[4][5] N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.[6][7]

Generally, views in the Copenhagen tradition posit something in the act of observation which results in the collapse of the wave function. This concept, though often attributed to Niels Bohr, was due to Werner Heisenberg, whose later writings obscured many disagreements he and Bohr had had during their collaboration and that the two never resolved.[8][9] In these schools of thought, wave functions may be regarded as statistical information about a quantum system, and wave function collapse is the updating of that information in response to new data.[10][11] Exactly how to understand this process remains a topic of dispute.[12]

Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".[13][14][15][16]

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, a work later extended by Bryce DeWitt. However, proponents of the Everettian program have not yet reached a consensus regarding the correct way to justify the use of the Born rule to calculate probabilities.[17][18]

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.[19]

A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and obtains nonlinear terms. These nonlinear modifications are of stochastic nature and lead to a behaviour that for microscopic quantum objects, e.g. electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction as in the models of Diósi and Penrose. The main difference of objective-collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.[20]

The Ghirardi–Rimini–Weber (GRW) theory proposes that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years.[21] Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus. Because the GRW theory makes different predictions from orthodox quantum mechanics in some conditions, it is not an interpretation of quantum mechanics in a strict sense.
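
To see why the sheer number of particles does the work in the GRW picture, here is a rough back-of-the-envelope sketch (added for illustration; the per-particle rate follows the "once every hundred million years" figure above, and the Avogadro-scale particle count is an assumed order of magnitude for a macroscopic apparatus):

```python
# Back-of-the-envelope sketch: the GRW spontaneous-collapse rate scaled up to a
# macroscopic apparatus. The particle count is an assumed order of magnitude.

SECONDS_PER_YEAR = 3.15e7
rate_per_particle = 1 / (1e8 * SECONDS_PER_YEAR)   # ~3e-16 "hits" per particle per second
particles_in_apparatus = 1e23                      # Avogadro-scale number of particles

hits_per_second = rate_per_particle * particles_in_apparatus
print(f"{hits_per_second:.1e} hits per second")    # ~3e7: some particle localizes within tens of nanoseconds
```

So although any single particle waits on the order of a hundred million years between hits, an entangled apparatus of roughly 10^23 particles is localized almost instantly.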

Friday, June 02, 2023

Constructive Interference Patterns Give Rise To Unitary Conscious Experience

wikipedia  |  Smythies[27] defines the combination problem, also known as the subjective unity of perception, as "How do the brain mechanisms actually construct the phenomenal object?". Revonsuo[1] equates this to "consciousness-related binding", emphasizing the entailment of a phenomenal aspect. As Revonsuo explores in 2006,[28] there are nuances of difference beyond the basic BP1:BP2 division. Smythies speaks of constructing a phenomenal object ("local unity" for Revonsuo) but philosophers such as Descartes, Leibniz, Kant and James (see Brook and Raymont[29]) have typically been concerned with the broader unity of a phenomenal experience ("global unity" for Revonsuo) – which, as Bayne[30] illustrates may involve features as diverse as seeing a book, hearing a tune and feeling an emotion. Further discussion will focus on this more general problem of how sensory data that may have been segregated into, for instance, "blue square" and "yellow circle" are to be re-combined into a single phenomenal experience of a blue square next to a yellow circle, plus all other features of their context. There is a wide range of views on just how real this "unity" is, but the existence of medical conditions in which it appears to be subjectively impaired, or at least restricted, suggests that it is not entirely illusory.[31]

There are many neurobiological theories about the subjective unity of perception. Different visual features such as color, size, shape, and motion are computed by largely distinct neural circuits but we experience an integrated whole. The different visual features interact with each other in various ways. For example, shape discrimination of objects is strongly affected by orientation but only slightly affected by object size.[32] Some theories suggest that global perception of the integrated whole involves higher order visual areas.[33] There is also evidence that the posterior parietal cortex is responsible for perceptual scene segmentation and organization.[34] Bodies facing each other are processed as a single unit and there is increased coupling of the extrastriate body area (EBA) and the posterior superior temporal sulcus (pSTS) when bodies are facing each other.[35] This suggests that the brain is biased towards grouping humans in twos or dyads.[36]

Dennett[40] has proposed that our sense that our experiences are single events is illusory and that, instead, at any one time there are "multiple drafts" of sensory patterns at multiple sites. Each would only cover a fragment of what we think we experience. Arguably, Dennett is claiming that consciousness is not unified and there is no phenomenal binding problem. Most philosophers have difficulty with this position (see Bayne[30]) but some physiologists agree with it. In particular, the demonstration of perceptual asynchrony in psychophysical experiments by Moutoussis and Zeki,[48][49] when color is perceived before orientation of lines and before motion by 40 and 80 ms, respectively, constitutes an argument that, over these very short time periods, different attributes are consciously perceived at different times, leading to the view that at least over these brief periods of time after visual stimulation, different events are not bound to each other, leading to the view of a disunity of consciousness,[50] at least over these brief time intervals. Dennett's view might be in keeping with evidence from recall experiments and change blindness purporting to show that our experiences are much less rich than we sense them to be – what has been called the Grand Illusion.[51] However, few, if any, other authors suggest the existence of multiple partial "drafts". Moreover, also on the basis of recall experiments, Lamme[52] has challenged the idea that richness is illusory, emphasizing that phenomenal content cannot be equated with content to which there is cognitive access.

Dennett does not tie drafts to biophysical events. Multiple sites of causal convergence are invoked in specific biophysical terms by Edwards[53] and Sevush.[54] In this view the sensory signals to be combined in phenomenal experience are available, in full, at each of multiple sites. To avoid non-causal combination each site/event is placed within an individual neuronal dendritic tree. The advantage is that "compresence" is invoked just where convergence occurs neuro-anatomically. The disadvantage, as for Dennett, is the counter-intuitive concept of multiple "copies" of experience. The precise nature of an experiential event or "occasion", even if local, also remains uncertain.

The majority of theoretical frameworks for the unified richness of phenomenal experience adhere to the intuitive idea that experience exists as a single copy, and draw on "functional" descriptions of distributed networks of cells. Baars[55] has suggested that certain signals, encoding what we experience, enter a "Global Workspace" within which they are "broadcast" to many sites in the cortex for parallel processing. Dehaene, Changeux and colleagues[56] have developed a detailed neuro-anatomical version of such a workspace. Tononi and colleagues[57] have suggested that the level of richness of an experience is determined by the narrowest information interface "bottleneck" in the largest sub-network or "complex" that acts as an integrated functional unit. Lamme[52] has suggested that networks supporting reciprocal signaling rather than those merely involved in feed-forward signaling support experience. Edelman and colleagues have also emphasized the importance of re-entrant signaling.[58] Cleeremans[59] emphasizes meta-representation as the functional signature of signals contributing to consciousness.

In general, such network-based theories are not explicitly theories of how consciousness is unified, or "bound" but rather theories of functional domains within which signals contribute to unified conscious experience. A concern about functional domains is what Rosenberg[60] has called the boundary problem; it is hard to find a unique account of what is to be included and what excluded. Nevertheless, this is, if anything is, the consensus approach.

Within the network context, a role for synchrony has been invoked as a solution to the phenomenal binding problem as well as the computational one. In his book, The Astonishing Hypothesis,[61] Crick appears to be offering a solution to BP2 as much as BP1. Even von der Malsburg[62] introduces detailed computational arguments about object feature binding with remarks about a "psychological moment". The Singer group[63] also appear to be interested as much in the role of synchrony in phenomenal awareness as in computational segregation.

The apparent incompatibility of using synchrony to both segregate and unify might be explained by sequential roles. However, Merker[20] points out what appears to be a contradiction in attempts to solve the subjective unity of perception in terms of a functional (effectively meaning computational) rather than a local biophysical domain, in the context of synchrony.

Functional arguments for a role for synchrony are in fact underpinned by analysis of local biophysical events. However, Merker[20] points out that the explanatory work is done by the downstream integration of synchronized signals in post-synaptic neurons: "It is, however, by no means clear what is to be understood by 'binding by synchrony' other than the threshold advantage conferred by synchrony at, and only at, sites of axonal convergence onto single dendritic trees..." In other words, although synchrony is proposed as a way of explaining binding on a distributed, rather than a convergent, basis the justification rests on what happens at convergence. Signals for two features are proposed as bound by synchrony because synchrony effects downstream convergent interaction. Any theory of phenomenal binding based on this sort of computational function would seem to follow the same principle. The phenomenality would entail convergence, if the computational function does.

The assumption in many of the quoted models is that computational and phenomenal events, at least at some point in the sequence of events, parallel each other in some way. The difficulty remains in identifying what that way might be. Merker's[20] analysis suggests that either (1) both computational and phenomenal aspects of binding are determined by convergence of signals on neuronal dendritic trees, or (2) that our intuitive ideas about the need for "binding" in a "holding together" sense in both computational and phenomenal contexts are misconceived. We may be looking for something extra that is not needed. Merker, for instance, argues that the homotopic connectivity of sensory pathways does the necessary work.

 

BeeDee Gave Me A Gentle Reminder To Get Back On Topic

wikipedia  |  In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater amplitude (constructive interference) or lower amplitude (destructive interference) depending on whether the two waves are in phase or out of phase, respectively. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves, as well as with electrical waves in loudspeakers.

The word interference is derived from the Latin words inter which means "between" and fere which means "hit or strike", and was coined by Thomas Young in 1801.[1][2][3]

The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves.[4] If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal media (water and air are nearly ideal) energy is always conserved; at points of destructive interference the energy is stored in the elasticity of the medium. For example, when two pebbles are dropped into a pond we see an interference pattern, but the waves continue outward, and energy is only absorbed from the medium when they reach the shore.

Constructive interference occurs when the phase difference between the waves is an even multiple of π (0, 2π, 4π, …), whereas destructive interference occurs when the difference is an odd multiple of π (π, 3π, 5π, …). If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.
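
As a quick numerical check of these phase conditions (a sketch added here for illustration, not part of the quoted article), the following Python snippet superposes two equal-amplitude sine waves and prints the peak of the resultant for the in-phase, quadrature and anti-phase cases:

```python
import numpy as np

A = 1.0                        # amplitude of each wave
f = 5.0                        # an arbitrary frequency, Hz
t = np.linspace(0, 1, 10_000)  # one second of samples

for phi in (0.0, np.pi / 2, np.pi):        # phase difference between the two waves
    wave1 = A * np.sin(2 * np.pi * f * t)
    wave2 = A * np.sin(2 * np.pi * f * t + phi)
    resultant = wave1 + wave2              # principle of superposition
    # Peak amplitude: 2A when in phase (constructive), ~1.41A at 90 degrees,
    # and 0 when exactly out of phase (destructive).
    print(f"phase difference {phi:.2f} rad -> peak amplitude {resultant.max():.2f}")
```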

Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary; these nodal lines radiate outward from the region between the two sources.

Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly, as we can, for example, in water. Superposition in the EM field is an assumed and necessary requirement; fundamentally, two light beams pass through each other and continue on their respective paths. Light can be explained classically by the superposition of waves, but a deeper understanding of light interference requires knowledge of the wave-particle duality of light, which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally, the classical wave model, based on the Huygens–Fresnel principle, is taught as a basis for understanding optical interference; however, an explanation based on the Feynman path integral also exists, which takes quantum mechanical considerations into account.

 

 

Monday, May 08, 2023

Hameroff Was Talking About Living Water Forty Years Ago...,

hameroff  |  Biomolecules have evolved and flourished in aqueous environments, and basic interactions among biomolecules and their pervasive hosts, water molecules, are extremely important. The properties of intracellular water are controversial. Many authors believe that more than 90 percent of intracellular water is in the "bulk" phase, water as it exists in the oceans (Cooke and Kuntz, 1974; Schwan and Foster, 1977; Fung and McGaughy, 1979). This traditional view is challenged by others who feel that none of the water in living cells is bulk (Troshin, 1966; Cope, 1976; Negendank and Karreman, 1979). A middle position is assumed by those who feel that about half of "living" water is bulk and the other half "ordered" (Hinke, 1970; Clegg, 1976; Clegg, 1979; Horowitz and Paine, 1979). This group emphasizes the importance of "ordered" water to cellular structure and function.

Many techniques have been used to study this issue, but the results still require a great deal of interpretation. Nuclear magnetic resonance (NMR), neutron diffraction, heat capacity measurement, and diffusion studies are all inconclusive. Water appears to exist in both ordered and aqueous forms within cells. The critical issue is the relation between intracellular surfaces and water. Surfaces of all kinds are known to perturb adjacent water, but within cells it is unknown precisely how far from the surfaces ordering may extend. We know the surface area of the microtrabecular lattice and other cytoskeleton components is extensive (billions of square nanometers per cell) and that about one fifth of cell interiors consist of these components. Biologist James Clegg (1981) has extensively reviewed these issues. He concludes that intracellular water exists in three phases. 1) “Bound water” is involved in primary hydration, being within one or two layers from a biomolecular surface. 2) “Vicinal water” is ordered, but not directly bound to structures except other water molecules. This altered water is thought to extend 8 to 9 layers of water molecules from surfaces, a distance of about 3 nanometers. Garlid (1976, 1979) has shown that vicinal water has distinct solvent properties which differ from bulk water. Thus “borders” exist between water phases which partition solute molecules. 3) “Bulk water” extends beyond 3 nanometers from cytoskeletal surfaces (Figure 6.4).

Drost-Hansen (1973) described cooperative processes and phase transitions among vicinal water molecules. Clegg points out the potential implications of vicinal water on the function of enzymes which had previously been considered "soluble." Rather than floating freely in an aqueous soup, a host of intracellular enzymes appear instead to be bound to the MTL surface within the vicinal water phase. Significant advantages appear evident to such an arrangement: a sequence of enzymes which perform a sequence of reactions on a substrate would be much more efficient if bound on a surface in the appropriate order. Requirements for diffusion of the substrate, the most time-consuming step in enzymatic processes, would be minimal. Clegg presents extensive examples of associations of cytoplasmic enzymes which appear to be attached to, and regulated by, the MTL. These vicinal water multi-enzyme complexes may indeed be part of a cytoskeletal information processing system. Clegg conjectures that dynamic conformational activities within the cytoskeleton/MTL can selectively excite enzymes to their active states.

The polymerization of cytoskeletal polymers and other biomolecules appears to flow upstream against the tide of order proceeding to disorder which is decreed by the second law of thermodynamics. This apparent second law felony is explained by the activities of the water molecules involved (Gutfreund, 1972). Even in bulk aqueous solution, water molecules are somewhat ordered, in that each water molecule can form up to 4 hydrogen bonds with other water molecules. Motion of the water molecules (unless frozen) and reversible breaking and reforming of these hydrogen bonds maintain the familiar liquid nature of bulk water. Outer surfaces of biomolecules form more stable hydrogen bonding with water, "ordering" the water surrounding them. This results in a decrease in entropy (increased order) and increase in free energy: factors which would strongly inhibit the solubility of biomolecules if not for the effects of hydrophobic interactions. Hydrophobic groups (for example amino acids whose side groups are non-polar, that is, they have no charged or polar groups to form hydrogen bonds with water) tend to combine, or coalesce, for two main reasons: Van der Waals forces and exclusion of water.

Combination of hydrophobic groups "liberates" ordered water into free water, resulting in increased entropy and decreased free energy, factors which tend to drive reactions. The magnitude of the favorable free energy change for the combination of hydrophobic groups depends on their size and how well they fit together "sterically." A snug fit between groups will exclude more water from hydrophobic regions than will loose fits. Consequently, specific biological reactions can rely on hydrophobic interactions. Formation of tertiary and quaternary protein structure (including the assembly of microtubules and other cytoskeletal polymers) is largely regulated by hydrophobic interactions, and by the effect of hydrophobic regions on the energies of other bonding. A well-studied example of the assembly of protein subunits into a complex structure being accompanied by an increase in entropy (decrease in order) is the crystallization of the tobacco mosaic virus. When the virus assembles from its subunits, an increase in entropy occurs due to exclusion of water from the virus surface. Similar events promote the assembly of microtubules and other cytoskeletal elements.

The attractive forces which bind hydrophobic groups are distinctly different from other types of chemical bonds such as covalent bonds and ionic bonds. These forces are called Van der Waals forces after the Dutch chemist who described them in 1873. At that time, it had been experimentally observed that gas molecules failed to follow behavior predicted by the "ideal gas laws" regarding pressure, temperature and volume relationships. Van der Waals attributed this deviation to the volume occupied by the gas molecules and by attractive forces among the gas molecules. These same attractive forces are vital to the assembly of organic crystals, including protein assemblies. They consist of dipole-dipole attraction, "induction effect," and London dispersion forces. These hydrophobic Van der Waals forces are subtly vital to the assembly and function of important biomolecules.

Dipole-dipole attractions occur among molecules with permanent dipole moments. Only specific orientations are favored: alignments in which attractive, low energy arrangements occur as opposed to repulsive, high energy orientations. A net attraction between two polar molecules can result if their dipoles are properly configured. The “induction” effect occurs when a permanent dipole in one molecule can polarize electrons in a nearby molecule. The second molecule’s electrons are distorted so that their interaction with the dipole of the first molecule is attractive. The magnitude of the induced dipole attraction force was shown by Debye in 1920 to depend on the molecules’ dipole moments and their polarizability. Defined as the dipole moment induced by a standard field, polarizability also depends on the molecules’ orientation relative to that field. Subunits of protein assemblies like the tobacco mosaic virus have been shown to have high degrees of polarizability. London dispersion forces explain why all molecules, even those without intrinsic dipoles, attract each other. The effect was recognized by F. London in 1930 and depends on quantum mechanical motion of electrons. Electrons in atoms without permanent dipole moments (and “shared” electrons in molecules) have, on the average, a zero dipole, however “instantaneous dipoles” can be recognized. Instantaneous dipoles can induce dipoles in neighboring polarizable atoms or molecules. The strength of London forces is proportional to the square of the polarizability and inversely to the sixth power of the separation. Thus London forces can be significant only when two or more atoms or molecules are very close together (Barrow, 1966). Lindsay (1987) has observed that water and ions ordered on surfaces of biological macromolecules may have “correlated fluctuations” analogous to London forces among electrons. Although individually tenuous, these and other forces are the collective “glue” of dynamic living systems.
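
The scaling described above (a dispersion coefficient that grows as the square of the polarizability, and an attraction that falls off as the sixth power of the separation) can be sketched in a few lines; the constant below is an arbitrary placeholder added here only to illustrate relative magnitudes, not a value from the text:

```python
# Relative London dispersion energy, U(r) = -C6 / r**6, with C6 proportional to
# the square of the polarizability. k is an arbitrary illustrative constant.

def london_energy(alpha, r, k=1.0):
    c6 = k * alpha**2      # dispersion coefficient scales as polarizability squared
    return -c6 / r**6      # attraction falls off as the sixth power of separation

# Doubling the separation weakens the attraction by 2**6 = 64, which is why
# London forces only matter when atoms or molecules are nearly in contact.
print(london_energy(alpha=1.0, r=1.0) / london_energy(alpha=1.0, r=2.0))   # -> 64.0
```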

Sunday, December 25, 2022

Higher-Dimensional Incompetence Resulted In 3-Dimensional Imprisonment

Heaviside's vector calculus, also known as vector analysis, was developed in the late 19th century as a way to simplify and unify the mathematical treatment of physical phenomena involving vectors, such as those described by James Clerk Maxwell's equations of electromagnetism. At the time, Maxwell's equations were often expressed using quaternions, hypercomplex numbers with one real (scalar) component and three imaginary (vector) components. Quaternion algebra, developed by William Rowan Hamilton and used in parts of Maxwell's Treatise, was a more elaborate mathematical system that had been applied to physical phenomena, but it was eventually replaced by vector calculus because of the latter's relative simplicity and ease of use.

Quaternions required working in four dimensions, which made them more difficult to manipulate and interpret, especially for those who were not familiar with the notation. In contrast, vector calculus used the familiar three-dimensional coordinate system and involved only ordinary algebraic operations.

Vector calculus also provided a more intuitive way to represent and manipulate vectors, using familiar concepts such as magnitude and direction. As a result, it quickly gained widespread adoption and eventually replaced quaternions as the preferred method for expressing and solving problems involving vectors in physics and engineering. Heaviside's notation, which represents vectors with arrow (or boldface) symbols and separates the quaternion product into the scalar dot product and the vector cross product, is much easier to use and understand than the four-component quaternions it replaced.

While quaternions were primarily used in the study of electromagnetism, vector calculus could be used to represent any type of vector quantity, including displacement, velocity, acceleration, and force. This made it a more widely applicable tool for solving problems in many different fields of science and engineering.
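
To make the relationship concrete, here is a small sketch (added for illustration, not part of the original post): the Hamilton product of two "pure" quaternions (zero scalar part) bundles together exactly the dot and cross products that Heaviside and Gibbs later split into standalone vector operations.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions represented as (scalar, 3-vector) pairs."""
    s1, v1 = p
    s2, v2 = q
    scalar = s1 * s2 - np.dot(v1, v2)
    vector = s1 * v2 + s2 * v1 + np.cross(v1, v2)
    return scalar, vector

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

s, v = quat_mul((0.0, a), (0.0, b))
print(s, v)                            # -32.0 [-3.  6. -3.]
print(-np.dot(a, b), np.cross(a, b))   # the same numbers: -(a . b) and a x b
```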

Monday, September 12, 2022

Ultimate Computing: Biomolecular Consciousness And NanoTechnology

This book has been written by an anesthesiologist because of a confluence of two fascinations. The first is the nature of consciousness, which anesthesiologists routinely erase and restore in their patients. The second is a fifteen year trail of notions that would not go away. While a third year medical student in 1972, I spent a summer research elective in a cancer laboratory. For some reason I became fascinated and fixated by one particular question. When cells divided, the chromosomes were separated and daughter cell architecture established by wispy strands called mitotic spindles (“microtubules”) and cylindrical organelles called centrioles. Somehow, the centrioles and spindles “knew” when to move, where to go, and what to do. The uncanny guidance and orientation mechanism of these tiny biomolecular structures seemed to require some kind of motorized intelligence.

At about the same time, electron microscopy techniques were revealing the interior of all living cells to be densely filled with wispy strands, some of which were identical to mitotic spindles. Interconnected in dynamic parallel networks, these structures were thought to serve a purely supportive, or mechanical structural role and were collectively termed the “cytoskeleton.”

But several factors suggested that the cytoskeleton was more than the structural "bones" of the cell: it manipulated dynamic activities, orchestrating complex and highly efficient processes such as cell growth, mitosis and transport. Another factor was a lack of any other candidate for "real time" dynamic organization within cells. Long term blueprints and genetic information clearly resided in DNA and RNA, and membranes performed dynamic functions at cell surfaces. However, a mechanism for the moment-to-moment execution, organization, and activities within cells remained unknown.

Where was the nervous system within the cell? Was there a biological controller? 

This book is based on the premise that the cytoskeleton is the cell’s nervous system, the biological controller/computer. In the brain this implies that the basic levels of cognition are within nerve cells, that cytoskeletal filaments are the roots of consciousness. 

The small size and rapid conformational activities of cytoskeletal proteins are just beyond the resolution of current technologies, so their potential dynamics remain unexplored and a cytoskeletal controlling capability untested. Near future technologies will be able to function in the nanoscale (nano = 10⁻⁹; a nanometer is one billionth of a meter, a nanosecond one billionth of a second) and will hopefully resolve these questions. If indeed cytoskeletal dynamics are the texture of intracellular information processing, these same "nanotechnologies" should enable direct monitoring, decoding and interfacing between biological and technological information devices. This in turn could result in important biomedical applications and perhaps a merger of mind and machine: Ultimate Computing.

A thorough consideration of these ideas involves a number of disciplines, all of which are at least tangentially related to anesthesiology. These include biochemistry, cognitive science, computer science, engineering, mathematics, microbiology, molecular biology, pharmacology, philosophy, physics, physiology, and psychology. As an expert in none, but a dabbler in all, I hope true experts in these fields will find my efforts nevertheless interesting.

Starting from a cytoskeletal perspective, this book flings metaphors at the truth. Perhaps one or more will land on target, or at least come close.

Thursday, August 11, 2022

Physics Without Metaphysical Assumptions

quantumphysicslady |  Quantum physics also poses major challenges to realism. In 2017, Chinese physicists experimented with two photons, that is, bits of light. The specially-created photons were separated by 700 miles. The experiment showed that the photons were able to instantaneously coordinate their behavior. This phenomenon is called “quantum entanglement.”

According to Special Relativity, no signal can travel across a distance instantaneously. How can one photon instantaneously “know” what another photon over 700 miles away is doing? What kind of reality are we living in?

When people learn about quantum physics, they find out about specific oddities like quantum entanglement. But the most fundamental oddity is its most fundamental premise: Both matter and energy are mere vibrations in an invisible, undetectable medium called “fields.” How does our very impressive, very solid physical universe arise from vibrations in a kind of nothingness?

How does our reality arise from vibrations? [Image source: David Chalmers and Kelvin McQueen, “Consciousness and the Collapse of the Wave Function” (modified to omit a label) http://consc.net/slides/collapse.pdf]

But it gets worse. The vibrations represent a cornucopia of possible physical realities, only one of which becomes the solid reality that we perceive. (If this sentence is baffling to you, you are not alone. It condenses the heart of quantum physics, which is puzzling enough, into one sentence—which is a terrible idea. See the footnote below.**)

Do the oddities of quantum physics mean we must abandon realism? No. Many physicists have come up with interpretations of quantum physics that are based in realism. It’s also true that some of these interpretations describe very odd realities. For example, the Many Worlds Interpretation, perhaps the oddest, describes us as having infinite copies of ourselves in infinite numbers of universes. The desire to salvage realism may be an important reason that the Many Worlds Interpretation is gaining popularity among physicists. But there are other realistic interpretations of Special Relativity and quantum physics that are less mind-blowing.

Quantum physics undermines the solidity of the physical universe. When people learn that quantum physics is based on the proposition that matter and energy are, at bottom, vibrations of questionable reality which shake invisible, undetectable fields, the solidity of our universe loses some of its impressiveness. Thoughts pop up like: Could our physical universe have no more solidity than a dream?

To the dreamer, the houses, the cars, and the monsters all seem completely real. So, too, do the hallucinations of the psychotic and those of someone on LSD. Our minds are quite capable of creating solid reality without any necessity of an independent external world.

Quantum physics may also create wonderings about the mysterious invisible medium for the vibrations of matter and energy: Could the vibrations of matter and energy be vibrations in a new kind of energy that makes up consciousness?

In other words, some begin to entertain the notion of idealism.

Wednesday, August 10, 2022

More Thinking About The Fabric Of Reality

If you look at the graphic at the top of the article (Penrose tiling), you'll notice there are a bunch of points that are centers of rotational symmetry (you can rotate it by 2pi/N and get the same thing) and lines of reflection symmetry (you can mirror it over that line and get the same thing), but there is no translational symmetry (you can't slide it over in any direction and overlap with the original); this is a "quasicrystal" (in 2d).

Compare this to a grid of squares, which has reflection and rotation symmetry but also has translational symmetry; this is a true "crystal" (in 2d).

This article is treating a train of laser pulses as a "1d crystal" and if long/short pulses resemble a Fibonacci sequence treating it as a "1d quasicrystal". This seems to be noteworthy in that using such a structured pulse train provides some improvements in quantum computing when it's used to read/write (i.e. shine on) information (i.e. electron configuration) from atoms / small molecules (i.e. qubits)
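
For concreteness, that long/short pulse train can be generated by the Fibonacci substitution rule; the sketch below (an illustration added here, not code from the paper) produces the ordered-but-never-repeating sequence:

```python
def fibonacci_pulses(generations: int) -> str:
    """Build the aperiodic pulse train by the substitution rule L -> LS, S -> L."""
    seq = "L"                                    # start with a single long pulse
    for _ in range(generations):
        seq = "".join("LS" if c == "L" else "L" for c in seq)
    return seq

train = fibonacci_pulses(8)
print(train[:21])                             # LSLLSLSLLSLLSLSLLSLSL  (ordered, never periodic)
print(train.count("L") / train.count("S"))    # ratio of long to short pulses approaches the golden ratio
```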

The "2 time dimensions" thing is basically that a N-d "quasicrystal" is usually a pretty close approximation of an [N+M]-d "true crystal" projected down into N dimensions so the considering the higher dimension structure might make things easier by getting rid of transcendental numbers etc.

They could have just said "aperiodic laser pulses" are used. No need to introduce fantastical sounding terminology about multiple time dimensions, which seems to have been done quite deliberately.

The biggest and most important step is to make sure you drop any mysticism about what a "dimension" is. It's just a necessary component of identifying the location of something in some way. More than three "dimensions" is not just common but super common, to the point of mundanity. The location and orientation of a rigid object, a completely boring quantity, is six dimensional: three for space, three for the rotation. Add velocity in and it becomes 12 dimensional; the six previous and three each now for linear and rotational velocity. To understand "dimensions" you must purge ALL science fiction understanding and understand them not as exotic, but painfully mundane and boring. (They may measure something interesting, but that "interestingness" should be accounted to the thing being measured, not the "dimension". "Dimensions" are as boring as "inches" or "gallons".)
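
As a mundane illustration of the point (a sketch added here, not part of the comment), the state of a single rigid object is already a point in a 12-dimensional space:

```python
from dataclasses import dataclass

@dataclass
class RigidBodyState:
    position: tuple[float, float, float]          # 3 spatial coordinates
    orientation: tuple[float, float, float]       # 3 rotation angles (e.g. roll, pitch, yaw)
    velocity: tuple[float, float, float]          # 3 linear-velocity components
    angular_velocity: tuple[float, float, float]  # 3 rotational-velocity components

    def as_vector(self) -> list[float]:
        """Flatten the state into a single point in 12-dimensional space."""
        return [*self.position, *self.orientation, *self.velocity, *self.angular_velocity]

state = RigidBodyState((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 0.5, 0))
print(len(state.as_vector()))   # -> 12, and nothing exotic about any of it
```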

Next up, there is a very easy metaphor for us in the computing realm for the latest in QM and especially materials science. In our world, there is a certain way in which a "virtual machine" and a "machine" are hard to tell apart. A lot of things in the latest QM and materials science is building little virtual things that combine the existing simple QM primitives to build new systems. The simplest example of this sort of thing is a "hole". Holes do not "exist". They are where an electron is missing. But you can treat them as a virtual thing, and it can be difficult to tell whether or not that virtual thing is "real" or not, because it acts exactly like the "virtual" thing would if it were "real".

In this case, this system may mathematically behave like there is a second time dimension, and that's interesting, but it is "just" "simulating" it. It creates a larger system out of smaller parts that happens to match that behavior, but that doesn't mean there's "really" a second time dimension.

The weird and whacky things you hear coming out of QM and materials science are composite things being assembled out of normal mundane components in ways that allow them to "simulate" being some other interesting system, except when you're "simulating" at this low, basic level it essentially is just the thing being "simulated". But there's not necessarily anything new going on; it's still electrons and protons and neutrons and such, just arranged in interesting ways, just as, in the end, Quake or Tetris is "just" an interesting arrangement of NAND gates. There's no upper limit to how "interestingly" things can be arranged, but there's less "new" than meets the eye.

Unfortunately, trying to understand this through science articles is difficult, because they are still as addicted as ever to "woo woo" with the word dimensions, leaning in to the weirdness of QM and basically deliberately trying to instill mysticism at the incorrect level of the problem. (Personally, I still feel a lot of wonder about the world and enjoy learning more... but woo woo about what a "dimension" is is not the place for that.)


The Universe As A Holographic Quasicrystalline Tensor Network

ncatlab |  The AdS-CFT correspondence at its heart is the observation (Witten 98, Section 2.4) that the classical action functionals for various fields coupled to Einstein gravity on anti de Sitter spacetime are, when expressed as functions of the asymptotic boundary-values of the fields, of the form of generating functions for correlators/n-point functions of a conformal field theory on that asymptotic boundary, in a large N limit.

This is traditionally interpreted as a concrete realization of a vague “holographic principle” according to which quantum gravity in bulk spacetimes is controlled, in one way or other, by “boundary field theories” on effective spacetime boundaries, such as event horizons. The original and main motivation for the holographic principle itself was the fact that the apparent black hole entropy in Einstein gravity scales with the area of the event horizon instead of the black hole’s bulk volume (which is not even well-defined), suggesting that gravity encodes or is encoded by some boundary field theory associated with horizons; an idea that, in turn, seems to find a concrete realization in open/closed string duality in the vicinity of, more generally, black branes. The original intuition about holographic black hole entropy has meanwhile found remarkably detailed reflection in (mathematically fairly rigorous) analysis of holographic entanglement entropy, specifically via holographic tensor networks, which turn out to embody key principles of the AdS/CFT correspondence in the guise of quantum information theory, with concrete applications such as to quantum error correcting codes.

quantumgravityresearch |  Recent advances in AdS/CFT holography have found an analogue in discrete tensor networks of qubits. The {5,4} hyperbolic tiling allows for topological error correction. We review a simple 32 x 32 Hamiltonian from five maximally entangled physical qubits on the boundary edges of a pentagon, whose two-fold degenerate ground state leads to an emergent logical qubit in the bulk. The inflation rule of a holographic conformal quasicrystal is found to encode the holographic code rate that determines the ratio of logical qubits to physical qubits. Generalizing SU(2) qubits to twistors as conformal spinors of SU(2,2), an H3-symmetric 5-compound of cuboctahedral A3 = D3 root polytopes is outlined. Motivated by error correction in the Hamming code, the E8 lattice is projected to the H4-symmetric quasicrystal. The 4-dimensional 600-cell is found to contain five 24-cells associated with the D4 root polytope associated with Spin(4,4). Intersection with Sp(8,R) phase space identifies three generations of conformal symmetry with an axial U(1) symmetry. A lightning review of E8(-24) phenomenology with Spin(12,4) is pursued for gravity and the standard model with a notion of CDT-inspired discretized membranes in mind. Warm dark matter beyond the standard model is briefly articulated to stem from intersecting worldvolumes related to the Leech lattice associated with the Golay code, hinting at a monstrously supersymmetric M-theory in D=26+1. A new D=27+3 superalgebra is shown to contain membranes that can give a worldvolume description of M-theory and F-theory.


Sunday, August 07, 2022

Just A Different Way Of Thinking About The Fabric Of Reality

phys.org  |  By shining a laser pulse sequence inspired by the Fibonacci numbers at atoms inside a quantum computer, physicists have created a remarkable, never-before-seen phase of matter. The phase has the benefits of two time dimensions despite there still being only one singular flow of time, the physicists report July 20 in Nature.

This mind-bending property offers a sought-after benefit: Information stored in the phase is far more protected against errors than with alternative setups currently used in quantum computers. As a result, the information can exist without getting garbled for much longer, an important milestone for making quantum computing viable, says study lead author Philipp Dumitrescu.

The approach's use of an "extra" time dimension "is a completely different way of thinking about phases of matter," says Dumitrescu, who worked on the project as a research fellow at the Flatiron Institute's Center for Computational Quantum Physics in New York City. "I've been working on these theory ideas for over five years, and seeing them come actually to be realized in experiments is exciting."


The best way to understand their approach is by considering something else ordered yet non-repeating: "quasicrystals." A typical crystal has a regular, repeating structure, like the hexagons in a honeycomb. A quasicrystal still has order, but its patterns never repeat. (Penrose tiling is one example of this.) Even more mind-boggling is that quasicrystals are crystals from higher dimensions projected, or squished down, into lower dimensions. Those higher dimensions can even be beyond physical space's three dimensions: A 2D Penrose tiling, for instance, is a projected slice of a 5-D lattice.
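
The "projected from a higher dimension" idea can be made concrete in the simplest case: the sketch below (an illustration added here, not from the article) cuts a strip out of an ordinary 2-D square lattice and projects it onto a line of golden-ratio slope, which yields a 1-D quasicrystal with exactly two tile lengths arranged in the Fibonacci pattern, the one-dimensional analogue of obtaining a Penrose tiling from a 5-D lattice.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
e_par = np.array([phi, 1.0]) / np.sqrt(phi**2 + 1)    # direction of the "physical" line
e_perp = np.array([-1.0, phi]) / np.sqrt(phi**2 + 1)  # perpendicular ("internal") direction

# Acceptance window: the shadow of the unit square on the perpendicular direction.
corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]]) @ e_perp
lo, hi = corners.min(), corners.max()

# Keep every 2-D integer lattice point whose perpendicular component falls in the
# window, then project the survivors onto the line.
projected = []
for m in range(-50, 51):
    for n in range(-50, 51):
        x = np.array([m, n], dtype=float)
        if lo <= x @ e_perp < hi:
            projected.append(x @ e_par)

projected = np.sort(np.array(projected))
gaps = np.round(np.diff(projected), 6)
print(sorted(set(map(float, gaps))))   # exactly two tile lengths, their ratio is the golden ratio
tiles = "".join("L" if g == gaps.max() else "S" for g in gaps)
print(tiles[:21])                      # ordered but never periodic: the Fibonacci chain
```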

Dumitrescu spearheaded the study's theoretical component with Andrew Potter of the University of British Columbia in Vancouver, Romain Vasseur of the University of Massachusetts, Amherst, and Ajesh Kumar of the University of Texas at Austin. The experiments were carried out on a quantum computer at Quantinuum in Broomfield, Colorado, by a team led by Brian Neyenhuis.

The workhorses of the team's quantum computer are 10 atomic ions of an element called ytterbium. Each ion is individually held and controlled by electric fields produced by an ion trap, and can be manipulated or measured using laser pulses.

Each of those atomic ions serves as what scientists dub a quantum bit, or "qubit." Whereas traditional computers quantify information in bits (each representing a 0 or a 1), the qubits used by quantum computers leverage the strangeness of quantum mechanics to store even more information. Just as Schrödinger's cat is both dead and alive in its box, a qubit can be a 0, a 1 or a mashup—or "superposition"—of both. That extra information density and the way qubits interact with one another promise to allow quantum computers to tackle computational problems far beyond the reach of conventional computers.
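
As a minimal concrete picture of that "mashup" (a sketch added here for illustration, not code from the study), a qubit can be written as a two-component complex vector, and the Born rule turns its amplitudes into the probabilities a measurement would report:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

state = (ket0 + ket1) / np.sqrt(2)      # an equal superposition of 0 and 1
print(np.abs(state) ** 2)               # Born rule -> [0.5 0.5]: either outcome equally likely

# A relative phase between the amplitudes leaves these probabilities alone...
state_phase = (ket0 + np.exp(1j * np.pi / 3) * ket1) / np.sqrt(2)
print(np.abs(state_phase) ** 2)         # still [0.5 0.5]
# ...but it changes how the amplitudes interfere once the qubit is manipulated
# before measurement, which is the extra information quantum computers exploit.
```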

There's a big problem, though: Just as peeking in Schrödinger's box seals the cat's fate, so does interacting with a qubit. And that interaction doesn't even have to be deliberate. "Even if you keep all the atoms under tight control, they can lose their quantumness by talking to their environment, heating up or interacting with things in ways you didn't plan," Dumitrescu says. "In practice, experimental devices have many sources of error that can degrade coherence after just a few laser pulses."

The challenge, therefore, is to make qubits more robust. To do that, physicists can use "symmetries," essentially properties that hold up to change. (A snowflake, for instance, has rotational symmetry because it looks the same when rotated by 60 degrees.) One method is adding time symmetry by blasting the atoms with rhythmic laser pulses. This approach helps, but Dumitrescu and his collaborators wondered if they could go further. So instead of just one time symmetry, they aimed to add two by using ordered but non-repeating laser pulses.

More Of The Less...,

aeon  |  Calling these numbers imaginary came later, in the 1600s, when the philosopher René Descartes argued that, in geometry, any structure corresponding to imaginary numbers must be impossible to visualise or draw. By the 18th and 19th centuries, thinkers such as Leonhard Euler and Carl Friedrich Gauss had included imaginary numbers in their studies. They discussed complex numbers made up of a real number added to an imaginary number, such as 3+4i, and found that complex-valued mathematical functions have different properties than those that only produce real numbers.

Yet, they still had misgivings about the philosophical implications of such functions existing at all. The French mathematician Augustin-Louis Cauchy wrote that he was ‘abandoning’ the imaginary unit ‘without regret because we do not know what this alleged symbolism signifies nor what meaning to give to it.’

In physics, however, the oddness of imaginary numbers was disregarded in favour of their usefulness. For instance, imaginary numbers can be used to describe opposition to changes in current within an electrical circuit. They are also used to model some oscillations, such as those found in grandfather clocks, where pendulums swing back and forth despite friction. Imaginary numbers are necessary in many equations pertaining to waves, be they vibrations of a plucked guitar string or undulations of water along a coast. And these numbers hide within mathematical functions of sine and cosine, familiar to many high-school trigonometry students.
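
A small sketch of this kind of use (added here for illustration; the component values are hypothetical): the opposition of a series RC circuit to an alternating current is the complex impedance Z = R + 1/(jωC), and only its real-valued magnitude and phase ever appear on a meter.

```python
import numpy as np

R = 1_000.0              # resistance, ohms (hypothetical values)
C = 1e-6                 # capacitance, farads
omega = 2 * np.pi * 50   # angular frequency of a 50 Hz source

Z = R + 1 / (1j * omega * C)       # complex impedance of the series RC circuit
print(abs(Z))                      # magnitude, ~3336 ohms: sets the current amplitude
print(np.degrees(np.angle(Z)))     # ~-72.6 degrees: the current leads the voltage by minus this angle
```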

At the same time, in all these cases imaginary numbers are used as more of a bookkeeping device than a stand-in for some fundamental part of physical reality. Measurement devices such as clocks or scales have never been known to display imaginary values. Physicists typically separate equations that contain imaginary numbers from those that do not. Then, they draw some set of conclusions from each, treating the infamous i as no more than an index or an extra label that helps organise this deductive process. Unless the physicist in question is confronted with the tiny and cold world of quantum mechanics.

Quantum theory predicts the physical behaviour of objects that are either very small, such as electrons that make up electric currents in every wire in your home, or millions of times colder than the insides of your fridge. And it is chock-full of complex and imaginary numbers.

Imaginary numbers went from a problem seeking a solution to a solution that had just been matched with its problem

Emerging in the 1920s, only about a decade after Albert Einstein’s paradigm-shifting work on general relativity and the nature of spacetime, quantum mechanics complicated almost everything that physicists thought they knew about using mathematics to describe physical reality. One big upset was the proposition that quantum states, the fundamental way in which objects that behave according to the laws of quantum mechanics are described, are by default complex. In other words, the most generic, most basic description of anything quantum includes imaginary numbers.

In stark contrast to theories concerning electricity and oscillations, in quantum mechanics a physicist cannot look at an equation that involves imaginary numbers, extract a useful punchline, then forget all about them. When you set out to try and capture a quantum state in the language of mathematics, these seemingly impossible square roots of negative numbers are an integral part of your vocabulary. Eliminating imaginary numbers would severely limit how accurate a statement you could make.

The discovery and development of quantum mechanics upgraded imaginary numbers from a problem seeking a solution to a solution that had just been matched with its problem. As the physicist and Nobel laureate Roger Penrose noted in the documentary series Why Are We Here? (2017): ‘[Imaginary numbers] were there all the time. They’ve been there since the beginning of time. These numbers are embedded in the way the world works at the smallest and, if you like, most basic level.’

Sunday, July 03, 2022

Zeta Potential

research.colostate  |  Zeta potential is a physical property which is exhibited by any particle in suspension, macromolecule or material surface. It can be used to optimize the formulations of suspensions, emulsions and protein solutions, predict interactions with surfaces, and optimize the formation of films and coatings. Knowledge of the zeta potential can reduce the time needed to produce trial formulations. It can also be used as an aid in predicting long-term stability.

This introduction concentrates on the zeta potential of colloidal systems, with a density low enough such that if they remain dispersed, sedimentation is negligible.


Colloid Science
Three of the fundamental states of matter are solids, liquids and gases. If one of these states is finely dispersed in another then we have a 'colloidal system'. These materials have special properties that are of great practical importance.


There are various examples of colloidal systems that include aerosols, emulsions, colloidal suspensions and association colloids. In certain circumstances, the particles in a dispersion may adhere to one another and form aggregates of successively increasing size, which may settle out under the influence of gravity. An initially formed aggregate is called a floc and the process of its formation flocculation. The floc may or may not sediment or phase separate. If the aggregate changes to a much denser form, it is said to undergo coagulation. An aggregate usually separates out either by sedimentation (if it is more dense than the medium) or by creaming (if it is less dense than the medium). The terms flocculation and coagulation have often been used interchangeably. Usually coagulation is irreversible whereas flocculation can be reversed by the process of deflocculation.


Colloidal Stability and DLVO Theory

The scientists Derjaguin, Landau, Verwey and Overbeek developed a theory in the 1940s which dealt with the stability of colloidal systems. DLVO theory suggests that the stability of a particle in solution is dependent upon its total potential energy function VT.

This theory recognizes that VT is the balance of several competing contributions:

VT = VA + VR + VS

VS is the potential energy due to the solvent; it usually makes only a marginal contribution to the total potential energy over the last few nanometers of separation.

Much more important is the balance between VA and VR, the attractive and repulsive contributions. These are potentially much larger and operate over a much larger distance.

VA = -A/(12πD²)

where A is the Hamaker constant and D is the particle separation.

The repulsive potential VR is a far more complex function.

VR = 2πεaζ² exp(-κD)

where a is the particle radius, ε is the solvent permittivity, κ is a function of the ionic composition and ζ is the zeta potential.

DLVO theory suggests that the stability of a colloidal system is determined by the sum of these van der Waals attractive (VA) and electrical double layer repulsive (VR) forces that exist between particles as they approach each other due to the Brownian motion they are undergoing. Figure 2a shows the separate forces as a dotted line, and the sum of these forces as the solid line. This sum has a peak, and the theory proposes that particles that are initially separated are prevented from approaching each other because of the repulsive force. However if the particles are forced with sufficient energy to overcome that barrier, for example by increasing the temperature, the attractive force will pull them into contact where they adhere strongly and irreversibly together. Therefore if the particles have a sufficiently high repulsion, the dispersion will resist flocculation and the colloidal system will be stable.
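
A minimal numerical sketch of this balance follows (added for illustration; all parameter values are hypothetical, and the attraction term uses the sphere-sphere form VA = -Aa/(12D) rather than the flat-plate expression quoted above, so that both terms carry the same units of energy):

```python
import numpy as np

A = 1e-20          # Hamaker constant, J
eps = 7.0e-10      # permittivity of water, F/m
a = 1e-7           # particle radius, m
zeta = 0.03        # zeta potential, V (30 mV, a commonly quoted stability threshold)
kappa = 1e8        # inverse Debye length, 1/m (set by the ionic composition)
kT = 4.1e-21       # thermal energy at room temperature, J

D = np.linspace(2e-10, 5e-8, 2000)                 # surface-to-surface separation, m

V_attract = -A * a / (12 * D)                      # van der Waals attraction (equal spheres)
V_repulse = 2 * np.pi * eps * a * zeta**2 * np.exp(-kappa * D)   # double-layer repulsion
V_total = V_attract + V_repulse

i = V_total.argmax()
print(f"energy barrier ~ {V_total[i] / kT:.0f} kT at D ~ {D[i] * 1e9:.1f} nm")
# A barrier of many kT means Brownian collisions rarely carry the particles over
# it, so the dispersion resists flocculation and stays stable.
```

Lowering ζ or raising κ (by adding salt) shrinks the repulsive term and with it the barrier, which is when aggregation can set in.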

However if a repulsion mechanism does not exist then flocculation or coagulation will eventually take place. If the zeta potential is reduced (e.g. in high salt concentrations), there is a possibility of a "secondary minimum" being created, where a much weaker and potentially reversible adhesion between particles exists (figure 2 (b)). These weak flocs are sufficiently stable not to be broken up by Brownian motion, but may disperse under an externally applied force such as vigorous agitation.

Therefore, to maintain the stability of the colloidal system, the repulsive forces must be dominant. How can colloidal stability be achieved? There are two fundamental mechanisms that affect dispersion stability.

Steric repulsion - this involves polymers added to the system adsorbing onto the particle surface and preventing the particle surfaces coming into close contact. If enough polymer adsorbs, the thickness of the coating will be sufficient to keep particles separated by steric repulsions between the polymer layers, and at those separations the van der Waals forces are too weak to cause the particles to adhere.

Electrostatic or charge stabilization - this is the effect on particle interaction due to the distribution of charged species in the system.

Each mechanism has its benefits for particular systems. Steric stabilization is simple, requiring just the addition of a suitable polymer. However, it can be difficult to subsequently flocculate the system if this is required; the polymer can be expensive; and in some cases the polymer is undesirable, e.g. when a ceramic slip is cast and sintered, the polymer has to be 'burnt out', which causes shrinkage and can lead to defects.

Electrostatic or charge stabilization has the benefit that a system can be stabilized or flocculated simply by altering the concentration of ions in the system; this is a reversible process and is potentially inexpensive.

It has long been recognized that the zeta potential is a very good index of the magnitude of the interaction between colloidal particles, and measurements of zeta potential are commonly used to assess the stability of colloidal systems.
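As a rough illustration of how such a measurement is read in practice, the snippet below maps a zeta potential to a qualitative stability hint. The ±30 mV threshold is a widely used rule of thumb, not a value taken from the text above, and real systems also depend on particle size, Hamaker constant and ionic strength.

```python
def stability_hint(zeta_mV: float) -> str:
    """Qualitative reading of a measured zeta potential (in mV).
    The +/-30 mV threshold is a common rule of thumb, not a hard physical limit."""
    if abs(zeta_mV) > 30:
        return "likely electrostatically stabilized (strong repulsion)"
    return "prone to flocculation or coagulation (weak repulsion)"

print(stability_hint(45))    # likely electrostatically stabilized (strong repulsion)
print(stability_hint(-12))   # prone to flocculation or coagulation (weak repulsion)
```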

Friday, July 01, 2022

Quantum Physics And Engineered Viruses

MIT  | Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the researchers at MIT and Eni, the Italian energy company, achieved this new approach to solar energy not with high-tech materials or microchips — but by using genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT, Eni, and Italian universities.

Lloyd, the Nam Pyo Suh Professor in the Department of Mechanical Engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.
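As a rough illustration of the difference between random hopping and wave-like transport (a toy model, not the system studied in the paper), one can compare incoherent hopping on a chain of identical sites with a continuous-time quantum walk on the same chain: the coherent walk spreads ballistically, the incoherent one only diffusively. The chain length, coupling strength and evolution time below are arbitrary assumptions.

```python
# Toy comparison (not the model from the paper): an excitation on a chain of N
# identical "chromophore" sites with nearest-neighbour coupling/hopping rate J.
# Coherent transport is a continuous-time quantum walk; incoherent transport is
# classical hopping governed by a master equation on the same chain.
import numpy as np
from scipy.linalg import expm

N, J, t = 101, 1.0, 20.0                 # sites, coupling, time (arbitrary units) -- assumed
x = np.arange(N) - N // 2                # site index relative to the starting site
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # chain adjacency matrix

# Coherent case: |psi(t)> = exp(-iHt)|psi(0)> with H = -J*A, starting localized at the centre
psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0
p_quantum = np.abs(expm(1j * J * A * t) @ psi0) ** 2

# Incoherent case: dp/dt = L p with the graph-Laplacian generator L = J*(A - deg)
L = J * (A - np.diag(A.sum(axis=1)))
p_classical = expm(L * t) @ np.abs(psi0) ** 2

def rms_spread(p):                       # root-mean-square displacement from the start
    return np.sqrt(np.sum(p * x**2) - np.sum(p * x) ** 2)

print(f"RMS spread after t={t}: quantum walk ~ {rms_spread(p_quantum):.1f} sites, "
      f"classical hopping ~ {rms_spread(p_classical):.1f} sites")
```

With these assumed numbers the coherent spread grows roughly linearly in time while the incoherent spread grows roughly as the square root of time, which is the qualitative sense in which wave-like transport explores the sites faster than random hopping.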

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.

 

Random Mutation And Natural Selection Have Minimal Explanatory Usefulness

theguardian  |  Strange as it sounds, scientists still do not know the answers to some of the most basic questions about how life on Earth evolved. Take eyes, for instance. Where do they come from, exactly? The usual explanation of how we got these stupendously complex organs rests upon the theory of natural selection.

You may recall the gist from school biology lessons. If a creature with poor eyesight happens to produce offspring with slightly better eyesight, thanks to random mutations, then that tiny bit more vision gives them more chance of survival. The longer they survive, the more chance they have to reproduce and pass on the genes that equipped them with slightly better eyesight. Some of their offspring might, in turn, have better eyesight than their parents, making it likelier that they, too, will reproduce. And so on. Generation by generation, over unfathomably long periods of time, tiny advantages add up. Eventually, after a few hundred million years, you have creatures who can see as well as humans, or cats, or owls.

This is the basic story of evolution, as recounted in countless textbooks and pop-science bestsellers. The problem, according to a growing number of scientists, is that it is absurdly crude and misleading.

For one thing, it starts midway through the story, taking for granted the existence of light-sensitive cells, lenses and irises, without explaining where they came from in the first place. Nor does it adequately explain how such delicate and easily disrupted components meshed together to form a single organ. And it isn’t just eyes that the traditional theory struggles with. “The first eye, the first wing, the first placenta. How they emerge. Explaining these is the foundational motivation of evolutionary biology,” says Armin Moczek, a biologist at Indiana University. “And yet, we still do not have a good answer. This classic idea of gradual change, one happy accident at a time, has so far fallen flat.”

There are certain core evolutionary principles that no scientist seriously questions. Everyone agrees that natural selection plays a role, as do mutation and random chance. But how exactly these processes interact – and whether other forces might also be at work – has become the subject of bitter dispute. “If we cannot explain things with the tools we have right now,” the Yale University biologist Günter Wagner told me, “we must find new ways of explaining.”

In 2014, eight scientists took up this challenge, publishing an article in the leading journal Nature that asked “Does evolutionary theory need a rethink?” Their answer was: “Yes, urgently.” Each of the authors came from cutting-edge scientific subfields, from the study of the way organisms alter their environment in order to reduce the normal pressure of natural selection – think of beavers building dams – to new research showing that chemical modifications added to DNA during our lifetimes can be passed on to our offspring. The authors called for a new understanding of evolution that could make room for such discoveries. The name they gave this new framework was rather bland – the Extended Evolutionary Synthesis (EES) – but their proposals were, to many fellow scientists, incendiary.

 

Thursday, June 30, 2022

The Quantum Future Of Biology

royalsocietypublishing |  Biological systems are dynamical, constantly exchanging energy and matter with the environment in order to maintain the non-equilibrium state synonymous with living. Developments in observational techniques have allowed us to study biological dynamics on increasingly small scales. Such studies have revealed evidence of quantum mechanical effects, which cannot be accounted for by classical physics, in a range of biological processes. Quantum biology is the study of such processes, and here we provide an outline of the current state of the field, as well as insights into future directions.

1. Introduction

Quantum mechanics is the fundamental theory that describes the properties of subatomic particles, atoms, molecules, molecular assemblies and possibly beyond. Quantum mechanics operates on the nanometre and sub-nanometre scales and is at the basis of fundamental life processes such as photosynthesis, respiration and vision. In quantum mechanics, all objects have wave-like properties, and when they interact, quantum coherence describes the correlations between the physical quantities describing such objects due to this wave-like nature.

In photosynthesis, respiration and vision, the models that have been developed in the past are fundamentally quantum mechanical. They describe energy transfer and electron transfer in a framework based on surface hopping. The dynamics described by these models are often ‘exponential’ and follow from the application of Fermi’s Golden Rule [1,2]. As a consequence of averaging the rate of transfer over a large and quasi-continuous distribution of final states, the calculated dynamics no longer display coherences and interference phenomena. In photosynthetic reaction centres and light-harvesting complexes, oscillatory phenomena were observed in numerous studies performed in the 1990s and were typically ascribed to the formation of vibrational or mixed electronic–vibrational wavepackets. The reported detection of remarkably long-lived (660 fs and longer) electronic quantum coherence during excitation energy transfer in a photosynthetic system revived interest in the role of ‘non-trivial’ quantum mechanics in explaining the fundamental life processes of living organisms [3]. However, the idea that quantum phenomena, like coherence, may play a functional role in macroscopic living systems is not new. In 1932, 10 years after quantum physicist Niels Bohr was awarded the Nobel Prize for his work on atomic structure, he delivered a lecture entitled ‘Light and Life’ at the International Congress on Light Therapy in Copenhagen [4]. This raised the question of whether quantum theory could contribute to a scientific understanding of living systems. In attendance was an intrigued Max Delbrück, a young physicist who later helped to establish the field of molecular biology and won a Nobel Prize in 1969 for his discoveries in genetics [5].
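To illustrate the contrast drawn above between golden-rule (‘exponential’) transfer and coherent dynamics, here is a minimal two-site sketch; the coupling and rate values are arbitrary assumptions, and this is not the actual model used for photosynthetic complexes.

```python
# Minimal two-site donor/acceptor sketch contrasting the two descriptions above:
# an incoherent, golden-rule-style rate equation (smooth exponential relaxation)
# versus fully coherent evolution of the same pair of coupled, degenerate states
# (oscillatory population transfer). Parameter values are arbitrary assumptions.
import numpy as np

V = 1.0                          # electronic coupling between the two sites (hbar = 1)
k = 0.5                          # golden-rule transfer rate -- assumed
t = np.linspace(0.0, 10.0, 6)    # a few sample times

# Coherent: H = [[0, V], [V, 0]], start on site 1 -> acceptor population oscillates
p2_coherent = np.sin(V * t) ** 2

# Incoherent: dp2/dt = k*p1 - k*p2 with p1 + p2 = 1 -> exponential approach to 1/2
p2_rate = 0.5 * (1.0 - np.exp(-2.0 * k * t))

for ti, pc, pr in zip(t, p2_coherent, p2_rate):
    print(f"t = {ti:4.1f}   coherent P2 = {pc:.3f}   rate-equation P2 = {pr:.3f}")
```

The rate-equation curve rises smoothly and monotonically, while the coherent solution oscillates; averaging the coherent dynamics over a broad distribution of final states is what washes the oscillations out and recovers the exponential behaviour described above.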

All living systems are made up of molecules, and fundamentally all molecules are described by quantum mechanics. Traditionally, however, the vast separation of scales between systems described by quantum mechanics and those studied in biology, as well as the seemingly different properties of inanimate and animate matter, has maintained some separation between the two bodies of knowledge. Recently, developments in experimental techniques such as ultrafast spectroscopy [6], single molecule spectroscopy [7–11], time-resolved microscopy [12–14] and single particle imaging [15–18] have enabled us to study biological dynamics on increasingly small length and time scales, revealing a variety of processes necessary for the function of the living system that depend on a delicate interplay between quantum and classical physical effects.

Quantum biology is the application of quantum theory to aspects of biology for which classical physics fails to give an accurate description. In spite of this simple definition, there remains debate over the aims and role of the field in the scientific community. This article offers a perspective on where quantum biology stands today, and identifies potential avenues for further progress in the field.

2. What is quantum biology?

Biology, in its current paradigm, has had wide success in applying classical models to living systems. In most cases, subtle quantum effects on (inter)molecular scales do not play a determining role in overall biological function. Here, ‘function’ is a broad concept. For example: How do vision and photosynthesis work on a molecular level and on an ultrafast time scale? How does DNA, with stacked nucleotides separated by about 0.3 nm, deal with UV photons? How does an enzyme catalyse an essential biochemical reaction? How does our brain with neurons organized on a sub-nanometre scale deal with such an amazing amount of information? How do DNA replication and expression work? All these biological functions should, of course, be considered in the context of evolutionary fitness. The differences between a classical approximation and a quantum-mechanical model are generally thought to be negligible in these cases, even though at the basis every process is entirely governed by the laws of quantum mechanics. What happens at the ill-defined border between the quantum and classical regimes? More importantly, are there essential biological functions that ‘appear’ classical but in reality are not? The role of quantum biology is precisely to expose and unravel this connection.

Fundamentally, all matter—animate or inanimate—is quantum mechanical, being constituted of ions, atoms and/or molecules whose equilibrium properties are accurately determined by quantum theory. As a result, it could be claimed that all of biology is quantum mechanical. However, this definition does not address the dynamical nature of biological processes, or the fact that a classical description of intermolecular dynamics seems often sufficient. Quantum biology should, therefore, be defined in terms of the physical ‘correctness’ of the models used and the consistency in the explanatory capabilities of classical versus quantum mechanical models of a particular biological process.

As we investigate biological systems on nanoscales and larger, we find that there exist processes in biological organisms, detailed in this article, for which it is currently thought that a quantum mechanical description is necessary to fully characterize the behaviour of the relevant subsystem. While quantum effects are difficult to observe on macroscopic time and length scales, processes necessary for the overall function and therefore survival of the organism seem to rely on dynamical quantum-mechanical effects at the intermolecular scale. It is precisely the interplay between these time and length scales that quantum biology investigates with the aim to build a consistent physical picture.

Grand hopes for quantum biology may include a contribution to a definition and understanding of life, or to an understanding of the brain and consciousness. However, these problems are as old as science itself, and a better approach is to ask whether quantum biology can contribute to a framework in which we can repose these questions in such a way as to get new answers. The study of biological processes operating efficiently at the boundary between the realms of quantum and classical physics is already contributing to improved physical descriptions of this quantum-to-classical transition.

More immediately, quantum biology promises to give rise to design principles for biologically inspired quantum nanotechnologies, with the ability to perform efficiently at a fundamental level in noisy environments at room temperature and even make use of these ‘noisy environments’ to preserve or even enhance the quantum properties [19,20]. Through engineering such systems, it may be possible to test and quantify the extent to which quantum effects can enhance processes and functions found in biology, and ultimately answer whether these quantum effects may have been purposefully selected in the design of the systems. Importantly, however, quantum bioinspired technologies can also be intrinsically useful independently from the organisms that inspired them.
