Monday, June 05, 2023

Try Fitting Assembly/Constructor Theory Over Twistor Space

quantamagazine  |  Assembly theory started when Cronin asked why, given the astronomical number of ways to combine different atoms, nature makes some molecules and not others. It’s one thing to say that an object is possible according to the laws of physics; it’s another to say there’s an actual pathway for making it from its component parts. “Assembly theory was developed to capture my intuition that complex molecules can’t just emerge into existence because the combinatorial space is too vast,” Cronin said.

“We live in a recursively structured universe,” Walker said. “Most structure has to be built on memory of the past. The information is built up over time.”

Assembly theory makes the seemingly uncontroversial assumption that complex objects arise from combining many simpler objects. The theory says it’s possible to objectively measure an object’s complexity by considering how it got made. That’s done by calculating the minimum number of steps needed to make the object from its ingredients, which is quantified as the assembly index (AI).
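To make the assembly index concrete, here is a minimal sketch in Python for strings rather than molecules, under the usual toy convention that the building blocks are the individual characters, a "step" joins two fragments that are already available, and every fragment built along the way can be reused for free. The function and the brute-force search below are my own illustration, not Cronin's published molecular algorithm, which operates on bonds in molecular graphs rather than characters in strings.

from math import ceil, log2

def assembly_index(target: str) -> int:
    """Minimum number of pairwise joins needed to build `target` from its
    individual characters, when every fragment built so far can be reused."""
    n = len(target)
    if n <= 1:
        return 0
    # Only contiguous substrings of the target can appear in a minimal pathway.
    substrings = {target[i:j] for i in range(n) for j in range(i + 1, n + 1)}
    best = n - 1  # worst case: append one character at a time

    def lower_bound(pool):
        # Each join can at most double the longest fragment built so far.
        longest = max(len(f) for f in pool)
        return ceil(log2(n / longest)) if longest < n else 0

    def dfs(pool, steps):
        nonlocal best
        if target in pool:
            best = min(best, steps)
            return
        if steps + lower_bound(pool) >= best:
            return  # cannot beat the best pathway found so far
        candidates = {a + b for a in pool for b in pool} & substrings
        for frag in sorted(candidates - pool, key=len, reverse=True):
            dfs(pool | {frag}, steps + 1)

    dfs(frozenset(target), 0)  # the pool starts as the set of distinct characters
    return best

print(assembly_index("AAAAAAAA"))  # 3: AA, then AAAA, then AAAAAAAA
print(assembly_index("BANANA"))    # 4: NA, NANA, BA, then BA + NANA
print(assembly_index("ABCDEF"))    # 5: no repeated structure to reuse

The point of the toy model shows up in the outputs: strings with reusable internal structure have a much lower index than featureless strings of the same length, which is exactly the intuition behind treating the index as a complexity measure. In this convention the often-quoted example ABRACADABRA comes out at 7 joins, because the fragment ABRA only has to be built once.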

In addition, for a complex object to be scientifically interesting, there has to be a lot of it. Very complex things can arise from random assembly processes — for example, you can make proteinlike molecules by linking any old amino acids into chains. In general, though, these random molecules won’t do anything of interest, such as behaving like an enzyme. And the chances of getting two identical molecules in this way are vanishingly small.

Functional enzymes, however, are made reliably again and again in biology, because they are assembled not at random but from genetic instructions that are inherited across generations. So while finding a single, highly complex molecule doesn’t tell you anything about how it was made, finding many identical complex molecules is improbable unless some orchestrated process — perhaps life — is at work.

Assembly theory predicts that objects like us can’t arise in isolation — that some complex objects can only occur in conjunction with others. This makes intuitive sense; the universe could never produce just a single human. To make any humans at all, it had to make a whole bunch of us.

In accounting for specific, actual entities like humans in general (and you and me in particular), traditional physics is only of so much use. It provides the laws of nature, and assumes that specific outcomes are the result of specific initial conditions. In this view, we must have been somehow encoded in the first moments of the universe. But it surely requires extremely fine-tuned initial conditions to make Homo sapiens (let alone you) inevitable.

Assembly theory, its advocates say, escapes from that kind of overdetermined picture. Here, the initial conditions don’t matter much. Rather, the information needed to make specific objects like us wasn’t there at the outset but accumulates in the unfolding process of cosmic evolution — it frees us from having to place all that responsibility on an impossibly fine-tuned Big Bang. The information “is in the path,” Walker said, “not the initial conditions.”

Cronin and Walker aren’t the only scientists attempting to explain how the keys to observed reality might not lie in universal laws but in the ways that some objects are assembled or transformed into others. The theoretical physicist Chiara Marletto of the University of Oxford is developing a similar idea with the physicist David Deutsch. Their approach, which they call constructor theory and which Marletto considers “close in spirit” to assembly theory, considers which types of transformations are and are not possible.

“Constructor theory talks about the universe of tasks able to make certain transformations,” Cronin said. “It can be thought of as bounding what can happen within the laws of physics.” Assembly theory, he says, adds time and history into that equation.

To explain why some objects get made but others don’t, assembly theory identifies a nested hierarchy of four distinct “universes.”

In the Assembly Universe, all permutations of the basic building blocks are allowed. In the Assembly Possible, the laws of physics constrain these combinations, so only some objects are feasible. The Assembly Contingent then prunes the vast array of physically allowed objects by picking out those that can actually be assembled along possible paths. The fourth universe is the Assembly Observed, which includes just those assembly processes that have generated the specific objects we actually see.

[Figure: the nested Assembly Universe, Assembly Possible, Assembly Contingent and Assembly Observed. Merrill Sherman/Quanta Magazine; source: https://doi.org/10.48550/arXiv.2206.02279]

Assembly theory explores the structure of all these universes, using ideas taken from the mathematical study of graphs, or networks of interlinked nodes. It is “an objects-first theory,” Walker said, where “the things [in the theory] are the objects that are actually made, not their components.”

To understand how assembly processes operate within these notional universes, consider the problem of Darwinian evolution. Conventionally, evolution is something that “just happened” once replicating molecules arose by chance — a view that risks being a tautology, because it seems to say that evolution started once evolvable molecules existed. Instead, advocates of both assembly and constructor theory are seeking “a quantitative understanding of evolution rooted in physics,” Marletto said.

According to assembly theory, before Darwinian evolution can proceed, something has to select for multiple copies of high-AI objects from the Assembly Possible. Chemistry alone, Cronin said, might be capable of that — by narrowing down relatively complex molecules to a small subset. Ordinary chemical reactions already “select” certain products out of all the possible permutations because they have faster reaction rates.
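As a toy illustration of that last point, with made-up rate constants and nothing specific to assembly theory, the short Python sketch below integrates a single precursor A that can react through two channels; no external agent chooses, yet the faster channel ends up dominating the product pool.

# Kinetic "selection": one precursor, two competing channels A -> P1 and A -> P2.
k1, k2 = 5.0, 1.0                  # hypothetical rate constants, per unit time
A, P1, P2, dt = 1.0, 0.0, 0.0, 1e-4

for _ in range(200_000):           # forward-Euler integration of dA/dt = -(k1 + k2) * A
    r1, r2 = k1 * A * dt, k2 * A * dt
    A, P1, P2 = A - r1 - r2, P1 + r1, P2 + r2

print(f"fraction of product that is P1: {P1 / (P1 + P2):.3f}")  # 0.833 = k1 / (k1 + k2)

The branching ratio is fixed entirely by the rate constants, which is the sense in which ordinary chemistry already performs a crude, memoryless form of selection before anything Darwinian is available.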

The specific conditions in the prebiotic environment, such as temperature or catalytic mineral surfaces, could thus have begun winnowing the pool of life’s molecular precursors from among those in the Assembly Possible. According to assembly theory, these prebiotic preferences will be “remembered” in today’s biological molecules: They encode their own history. Once Darwinian selection took over, it favored those objects that were better able to replicate themselves. In the process, this encoding of history became stronger still. That’s precisely why scientists can use the molecular structures of proteins and DNA to make deductions about the evolutionary relationships of organisms.

Thus, assembly theory “provides a framework to unify descriptions of selection across physics and biology,” Cronin, Walker and colleagues wrote. “The ‘more assembled’ an object is, the more selection is required for it to come into existence.”

“We’re trying to make a theory that explains how life arises from chemistry,” Cronin said, “and doing it in a rigorous, empirically verifiable way.”

 

Sunday, June 04, 2023

Forget The Math And Just Enjoy The Mind-Bending Perspectival Ingenuity Of Twistor Space

wikipedia  |  In theoretical physics, twistor theory was proposed by Roger Penrose in 1967[1] as a possible path[2] to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes. Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences.[3]

Mathematically, projective twistor space is a 3-dimensional complex manifold, complex projective 3-space $\mathbb{CP}^{3}$. It has the physical interpretation of the space of massless particles with spin. It is the projectivisation of a 4-dimensional complex vector space, non-projective twistor space, which carries a Hermitian form of signature (2,2) and a holomorphic volume form. This can be understood most naturally as the space of chiral (Weyl) spinors for the conformal group of Minkowski space; it is the fundamental representation of the spin group of the conformal group. This definition can be extended to arbitrary dimensions, except that beyond dimension four one defines projective twistor space to be the space of projective pure spinors for the conformal group.[4][5]

In its original form, twistor theory encodes physical fields on Minkowski space into complex analytic objects on twistor space via the Penrose transform. This is especially natural for massless fields of arbitrary spin. In the first instance these are obtained via contour integral formulae in terms of free holomorphic functions on regions in twistor space. The holomorphic twistor functions that give rise to solutions of the massless field equations can be more deeply understood as Čech representatives of analytic cohomology classes on regions in twistor space. These correspondences have been extended to certain nonlinear fields, including self-dual gravity in Penrose's nonlinear graviton construction[6] and self-dual Yang–Mills fields in the so-called Ward construction;[7] the former gives rise to deformations of the underlying complex structure of regions in twistor space, and the latter to certain holomorphic vector bundles over regions in twistor space. These constructions have had wide applications, including, inter alia, the theory of integrable systems.[8][9][10]
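To make the "contour integral formulae" a little more concrete: in one common convention a twistor $Z=(\omega^{A},\pi_{A'})$ is incident with a space-time point $x$ when $\omega^{A}=i\,x^{AA'}\pi_{A'}$, and a massless spin-$n/2$ field (carrying $n$ symmetrized primed spinor indices) can be written, schematically and with conventions for signature and normalization varying between authors, as

$$\phi_{A'_{1}\cdots A'_{n}}(x) \;=\; \frac{1}{2\pi i}\oint \pi_{A'_{1}}\cdots\pi_{A'_{n}}\; f\!\big(i\,x^{BB'}\pi_{B'},\,\pi_{C'}\big)\;\pi_{E'}\,d\pi^{E'},$$

where $f$ is holomorphic on a suitable region of twistor space and homogeneous of degree $-n-2$ (so that the integrand has total homogeneity zero in $\pi$). Twistor functions that differ by pieces holomorphic on larger regions give the same field, which is the statement, made precise by the Čech language above, that the field depends only on a cohomology class.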

The self-duality condition is a major limitation for incorporating the full nonlinearities of physical theories, although it does suffice for Yang–Mills–Higgs monopoles and instantons (see ADHM construction).[11] An early attempt to overcome this restriction was the introduction of ambitwistors by Edward Witten[12] and by Isenberg, Yasskin & Green.[13] Ambitwistor space is the space of complexified light rays or massless particles and can be regarded as a complexification or cotangent bundle of the original twistor description. These apply to general fields but the field equations are no longer so simply expressed.

Twistorial formulae for interactions beyond the self-dual sector first arose from Witten's twistor string theory.[14] This is a quantum theory of holomorphic maps of a Riemann surface into twistor space. It gave rise to the remarkably compact RSV (Roiban, Spradlin & Volovich) formulae for tree-level S-matrices of Yang–Mills theories,[15] but its gravity degrees of freedom gave rise to a version of conformal supergravity limiting its applicability; conformal gravity is an unphysical theory containing ghosts, but its interactions are combined with those of Yang–Mills theory in loop amplitudes calculated via twistor string theory.[16]

Despite its shortcomings, twistor string theory led to rapid developments in the study of scattering amplitudes. One was the so-called MHV formalism,[17] loosely based on disconnected strings, which was later given a more basic foundation in terms of a twistor action for full Yang–Mills theory in twistor space.[18] Another key development was the introduction of BCFW recursion.[19] This has a natural formulation in twistor space[20][21] that in turn led to remarkable formulations of scattering amplitudes in terms of Grassmannian integral formulae[22][23] and polytopes.[24] These ideas have evolved more recently into the positive Grassmannian[25] and the amplituhedron.

Twistor string theory was extended first by generalising the RSV Yang–Mills amplitude formula, and then by finding the underlying string theory. The extension to gravity was given by Cachazo & Skinner,[26] and formulated as a twistor string theory for maximal supergravity by David Skinner.[27] Analogous formulae were then found in all dimensions by Cachazo, He & Yuan for Yang–Mills theory and gravity[28] and subsequently for a variety of other theories.[29] They were then understood as string theories in ambitwistor space by Mason & Skinner[30] in a general framework that includes the original twistor string and extends to give a number of new models and formulae.[31][32][33] As string theories they have the same critical dimensions as conventional string theory; for example the type II supersymmetric versions are critical in ten dimensions and are equivalent to the full field theory of type II supergravities in ten dimensions (this is distinct from conventional string theories that also have a further infinite hierarchy of massive higher spin states that provide an ultraviolet completion). They extend to give formulae for loop amplitudes[34][35] and can be defined on curved backgrounds.[36]

 

Penrose's "Missing" Link Between The Physics Of The Large And The Physics Of The Small

wikipedia  |  The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.[1][2][3]

Penrose's idea is inspired by quantum gravity, because it uses both the physical constants $\hbar$ and $G$. It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that this failure is not an objective physical process), and to the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real", while their mutual decoherence precludes subsequent observable interactions.

Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level".[1] He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse.[1] Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.[4][5] Recent work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.[6][7]
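For reference, the Schrödinger–Newton equation mentioned above couples ordinary Schrödinger evolution to the Newtonian potential sourced by the particle's own probability density; written here for a single particle of mass $m$, it reads

$$i\hbar\,\frac{\partial\psi(\boldsymbol{x},t)}{\partial t} \;=\; -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\boldsymbol{x},t)\;-\;Gm^{2}\!\int\frac{|\psi(\boldsymbol{y},t)|^{2}}{|\boldsymbol{x}-\boldsymbol{y}|}\,d^{3}y\;\psi(\boldsymbol{x},t),$$

and the stationary solutions of this nonlinear equation are the states Penrose proposes as the possible outcomes of the gravitationally induced collapse.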

Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his view, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location for far longer, potentially thousands of years or more, until the space-time curvature separation between its superposed states reaches the collapse threshold.[8][9]

In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[2]

Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (whose mass is close to a Planck mass). He has proposed an experiment to test this idea, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is split by a beam splitter and directed toward a tiny mirror tens of thousands of miles away, with further mirrors reflecting the photons back. A photon that strikes the tiny mirror displaces it slightly and pushes it back as it returns, so according to conventional quantum theory the mirror can persist in a superposition of two positions for a significant period of time, which would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will instead collapse to one location in about a second, allowing half the photons to reach the detector.[2]

However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough for achieving the desired delay has been proposed instead.[10]

 

Saturday, June 03, 2023

Why Quantum Mechanics Is An Inconsistent Theory

wikipedia  | The Diósi–Penrose model was introduced as a possible solution to the measurement problem, where the wave function collapse is related to gravity. The model was first suggested by Lajos Diósi when studying how possible gravitational fluctuations may affect the dynamics of quantum systems.[1][2] Later, following a different line of reasoning, R. Penrose arrived at an estimation for the collapse time of a superposition due to gravitational effects, which is the same (within an unimportant numerical factor) as that found by Diósi, hence the name Diósi–Penrose model. However, it should be pointed out that while Diósi gave a precise dynamical equation for the collapse,[2] Penrose took a more conservative approach, estimating only the collapse time of a superposition.[3]

It is well known that general relativity and quantum mechanics, our most fundamental theories for describing the universe, are not compatible, and their unification is still missing. The standard approach to overcoming this situation is to try to modify general relativity by quantizing gravity. Penrose suggests the opposite approach, what he calls the “gravitization of quantum mechanics”, in which quantum mechanics gets modified when gravitational effects become relevant.[3][4][9][11][12][13] The reasoning underlying this approach is the following: take a massive system prepared in a well-localized state in space. The state being well localized, the induced space–time curvature is well defined. According to quantum mechanics, because of the superposition principle, the system can be placed (at least in principle) in a superposition of two such well-localized states, which would lead to a superposition of two different space–times. The key idea is that, since the space–time metric should be well defined, nature “dislikes” these space–time superpositions and suppresses them by collapsing the wave function to one of the two localized states.

To set these ideas on a more quantitative ground, Penrose suggested that a way of measuring the difference between two space–times, in the Newtonian limit, is

$$\Delta E \;=\; \frac{1}{8\pi G}\int d^{3}x\,\big[\boldsymbol{a}_{1}(\boldsymbol{x})-\boldsymbol{a}_{2}(\boldsymbol{x})\big]^{2}, \qquad (9)$$

where $\boldsymbol{a}_{i}(\boldsymbol{x})$ ($i=1,2$) is the Newtonian gravitational acceleration at the point $\boldsymbol{x}$ when the system is localized around the $i$-th position. The acceleration can be written in terms of the corresponding gravitational potential $\Phi_{i}(\boldsymbol{x})$, i.e. $\boldsymbol{a}_{i}(\boldsymbol{x})=-\nabla\Phi_{i}(\boldsymbol{x})$. Using this relation in Eq. (9), together with the Poisson equation $\nabla^{2}\Phi_{i}(\boldsymbol{x})=4\pi G\,\mu_{i}(\boldsymbol{x})$, where $\mu_{i}(\boldsymbol{x})$ is the mass density when the state is localized around the $i$-th position, and its solution, one arrives at

$$\Delta E \;=\; \frac{G}{2}\int d^{3}x\,d^{3}y\;\frac{\big[\mu_{1}(\boldsymbol{x})-\mu_{2}(\boldsymbol{x})\big]\big[\mu_{1}(\boldsymbol{y})-\mu_{2}(\boldsymbol{y})\big]}{|\boldsymbol{x}-\boldsymbol{y}|}. \qquad (10)$$

The corresponding decay time can be obtained from the Heisenberg time–energy uncertainty relation:

$$\tau \;\simeq\; \frac{\hbar}{\Delta E}, \qquad (11)$$

which, apart from a numerical factor due simply to the use of different conventions, is exactly the same as the decay time derived in Diósi's model. This is why the two proposals are jointly named the Diósi–Penrose model.
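A rough order-of-magnitude check of Eq. (11), in Python. The masses and sizes below are my own illustrative choices, the energy difference is crudely approximated as $\Delta E \sim Gm^{2}/R$ for a body of mass $m$ and size $R$ displaced by roughly its own size, and all numerical prefactors are dropped.

# Order-of-magnitude Diosi-Penrose collapse times, tau ~ hbar / (G m^2 / R).
# Illustrative numbers only; factors of order one are deliberately ignored.
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s

def tau_dp(mass_kg, size_m):
    delta_E = G * mass_kg**2 / size_m   # crude gravitational self-energy scale
    return HBAR / delta_E               # Eq. (11), up to factors of order one

print(f"10-micron water droplet: {tau_dp(4e-12, 1e-5):.1e} s")    # ~1e-6 s
print(f"single nucleon:          {tau_dp(1.7e-27, 1e-15):.1e} s") # ~5e14 s, millions of years

The huge gap between those two numbers is what lets the model leave ordinary quantum experiments untouched while forbidding superpositions of anything macroscopic.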

More recently, Penrose suggested a new and quite elegant way to justify the need for a gravity-induced collapse, based on avoiding tensions between the superposition principle and the equivalence principle, the cornerstones of quantum mechanics and general relativity. To explain it, let us start by comparing the evolution of a generic state in the presence of a uniform gravitational acceleration $\boldsymbol{a}$. One way to perform the calculation, which Penrose calls the “Newtonian perspective”,[4][9] consists in working in an inertial frame with space–time coordinates $(t,\boldsymbol{x})$ and solving the Schrödinger equation in the presence of the potential $V(\boldsymbol{x})=-m\,\boldsymbol{a}\cdot\boldsymbol{x}$ (typically, one chooses the coordinates so that the acceleration is directed along the $z$ axis, in which case $V(z)=-maz$). Alternatively, because of the equivalence principle, one can go to the free-fall reference frame, with coordinates $(T,\boldsymbol{X})$ related to $(t,\boldsymbol{x})$ by $\boldsymbol{X}=\boldsymbol{x}-\tfrac{1}{2}\boldsymbol{a}t^{2}$ and $T=t$, solve the free Schrödinger equation in that frame, and then write the result in terms of the inertial coordinates $(t,\boldsymbol{x})$. This is what Penrose calls the “Einsteinian perspective”. The solution obtained in the Einsteinian perspective, $\psi_{\mathrm{E}}$, and the one obtained in the Newtonian perspective, $\psi_{\mathrm{N}}$, are related to each other by

$$\psi_{\mathrm{N}}(t,\boldsymbol{x}) \;=\; e^{\frac{i m}{\hbar}\left(-\frac{1}{6}a^{2}t^{3}\,+\,\boldsymbol{a}\cdot\boldsymbol{x}\,t\right)}\,\psi_{\mathrm{E}}(t,\boldsymbol{x}). \qquad (12)$$

Since the two wave functions differ only by an overall phase, they lead to the same physical predictions, so there is no problem in this situation, where the gravitational field always has a well-defined value. However, if the space–time metric is not well defined, we are in a situation where there is a superposition of a gravitational field corresponding to an acceleration $\boldsymbol{a}_{1}$ and one corresponding to an acceleration $\boldsymbol{a}_{2}$. This creates no problem as long as one sticks to the Newtonian perspective. When using the Einsteinian perspective, however, it implies a phase difference between the two branches of the superposition given by $\exp\!\left\{\tfrac{i m}{\hbar}\left[-\tfrac{1}{6}\left(a_{1}^{2}-a_{2}^{2}\right)t^{3}+\left(\boldsymbol{a}_{1}-\boldsymbol{a}_{2}\right)\cdot\boldsymbol{x}\,t\right]\right\}$. While the term in the exponent linear in time does not lead to any conceptual difficulty, the first term, proportional to $t^{3}$, is problematic, since it is a non-relativistic residue of the so-called Unruh effect: in other words, the two terms in the superposition belong to different Hilbert spaces and, strictly speaking, cannot be superposed. This is where the gravity-induced collapse plays a role, collapsing the superposition when the first term of the phase becomes too large.

The Collapse Of The Wave Function

wikipedia  |  In quantum mechanics, the measurement problem is the problem of how, or whether, wave function collapse occurs. The inability to observe such a collapse directly has given rise to different interpretations of quantum mechanics and poses a key set of questions that each interpretation must answer.

The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is to describe what that "something" is: how a superposition of many possible values becomes a single measured value.

To express matters differently (paraphrasing Steven Weinberg),[1][2] the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum reality and classical reality?[3]

The views often grouped together as the Copenhagen interpretation are the oldest and, collectively, probably still the most widely held attitude about quantum mechanics.[4][5] N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.[6][7]

Generally, views in the Copenhagen tradition posit something in the act of observation which results in the collapse of the wave function. This concept, though often attributed to Niels Bohr, was due to Werner Heisenberg, whose later writings obscured many disagreements he and Bohr had had during their collaboration and that the two never resolved.[8][9] In these schools of thought, wave functions may be regarded as statistical information about a quantum system, and wave function collapse is the updating of that information in response to new data.[10][11] Exactly how to understand this process remains a topic of dispute.[12]

Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".[13][14][15][16]

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, a work later extended by Bryce DeWitt. However, proponents of the Everettian program have not yet reached a consensus regarding the correct way to justify the use of the Born rule to calculate probabilities.[17][18]

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particle remains consistent with the predictions of the orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.[19]
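Concretely, the velocity field referred to above is given by the guidance equation; in its standard textbook form, for particles with positions $\mathbf{Q}_{k}$ and masses $m_{k}$,

$$\frac{d\mathbf{Q}_{k}}{dt} \;=\; \frac{\hbar}{m_{k}}\,\operatorname{Im}\!\left(\frac{\nabla_{k}\Psi}{\Psi}\right)\!\Big(\mathbf{Q}_{1},\dots,\mathbf{Q}_{N},t\Big),$$

and if the initial configuration is distributed according to $|\Psi|^{2}$ it remains so at all later times (equivariance), which is why the trajectories reproduce the statistical predictions of orthodox quantum mechanics.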

A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and obtains nonlinear terms. These nonlinear modifications are of a stochastic nature and lead to behaviour that, for microscopic quantum objects such as electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction, as in the models of Diósi and Penrose. The main difference between objective-collapse models and the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.[20]

The Ghirardi–Rimini–Weber (GRW) theory proposes that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years.[21] Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus. Because the GRW theory makes predictions that differ from orthodox quantum mechanics in some conditions, it is not an interpretation of quantum mechanics in a strict sense.
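To see why the "sheer number of particles" matters, here is a back-of-envelope estimate in Python; the hit rate is the commonly quoted GRW localisation rate of roughly $10^{-16}$ per second per nucleon, and the apparatus size is an illustrative choice of mine.

# GRW amplification: a single nucleon almost never localises spontaneously,
# but a macroscopic pointer containing ~10^23 nucleons does so almost at once.
LAMBDA_GRW = 1e-16          # spontaneous-localisation rate per particle, s^-1
N_POINTER  = 1e23           # nucleons in roughly 0.1 g of apparatus (illustrative)

print(f"mean wait, one particle:  {1 / LAMBDA_GRW:.1e} s")                 # ~1e16 s, hundreds of millions of years
print(f"mean wait, whole pointer: {1 / (LAMBDA_GRW * N_POINTER):.1e} s")   # ~1e-7 s

Because the pointer's particles are entangled, that first hit anywhere in it localises the whole apparatus, which is how the model recovers definite measurement outcomes.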
