Showing posts with label Penrose.

Monday, November 27, 2023

Omega Level Talents Carrying On The Vital Work Of The Hon.Bro.Sir.Roger Penrose

math.columbia.edu  |  Last month I recorded a podcast with Curt Jaimungal for his Theories of Everything site, and it’s now available with audio here, on YouTube here. There are quite a few other programs on the site well worth watching.

Much of the discussion in this program is about the general ideas I’m trying to pursue about spinors, twistors and unification. For more about the details of these, see arXiv preprints here and here, as well as blog entries here.

About the state of string theory, that’s a topic I find more and more disturbing, though with little new to say about it. It has been dead now for a long time, and most of the scientific community and the public at large are aware of this. What is disturbing is the ongoing publicity campaign from some of the most respected figures in theoretical physics to deny reality and claim that all is well with string theory. Just in the last week or so, you can watch Cumrun Vafa and Brian Greene promoting string theory on Brian Keating’s channel, with Vafa explaining how string theory computes the mass of the electron. At the World Science Festival site there’s Juan Maldacena, with an upcoming program featuring Greene, Strominger, Vafa and Witten.

On Twitter, there’s now stringking42069, who is producing a torrent of well-informed, cutting invective about what is going on in the string theory research community, supposedly from a true believer. It’s unclear whether this is a parody account trying to discredit string theory or an extreme example of how far gone some string theorists now are.

To all those celebrating Thanksgiving tomorrow, may your travel problems be minimal and your get-togethers with friends and family a pleasure.

Update: If you don’t want to listen to the whole thing and don’t want to hear about spinors and twistors, Curt Jaimungal has put up a shorter clip where we discuss among other things the lack of any significant public technical debate between string theory skeptics and optimists. He offers his site as a venue. Is there anyone who continues to work on string theory and is optimistic about its prospects willing to participate?

Tuesday, June 06, 2023

Sir Roger Penrose: Artificial Intelligence Is A Misnomer

moonofalabama  |  'Artificial Intelligence' Is (Mostly) Glorified Pattern Recognition

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and got widely picked up by various mainstream media:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.
...
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

(SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile)

In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each and tested them on training and real-world data. Some of those mathematical algos are deterministic: they always deliver the correct result. Others are not deterministic: they merely estimate the outcome and give some confidence measure or probability for how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algos, i.e. 'machine learning'.

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well structured and labeled data is used to train the models to later have them recognize 'things' in unstructured data. Once the 'things' are found some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and to present the correct numerical output. To push the 'learning' in the right direction during the iterations that train the network, one needs a reward function or reward equation. It tells the network whether the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple: one sets up a table with the visual representations and manually adds the numerical value one sees. After the algo has finished its guess, a lookup in the table tells whether it was right or wrong. A 'reward' is given when the result was correct. The model reiterates and 'learns' from there.

Once trained on numbers written in Courier typography, the model is likely to also recognize numbers written upside down in Times New Roman, even though they look different.
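To make that concrete, here is a minimal sketch in Python of the kind of backpropagation network just described. It is illustrative only, not the original code: the ten 'pictures' are random stand-in arrays (hypothetical data), and the lookup table of correct answers plays the role of the reward function.

# A toy backpropagation network that 'learns' to map pictures of the
# digits 0..9 to their numerical value. The digit pictures are random
# stand-ins here; the training loop -- forward pass, table lookup of
# the correct label, error ('reward') signal, weight update -- is the point.
import numpy as np

rng = np.random.default_rng(0)

images = rng.random((10, 64))   # stand-in 64-pixel 'picture' per digit 0..9
labels = np.eye(10)             # the lookup table: digit -> correct output

W1 = rng.normal(0.0, 0.5, (64, 32))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (32, 10))   # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    h = sigmoid(images @ W1)        # forward pass
    out = sigmoid(h @ W2)           # the network's guess
    err = labels - out              # table lookup: right or wrong, and by how much
    d_out = err * out * (1 - out)   # backpropagate the error...
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += 0.5 * h.T @ d_out         # ...and update the weights
    W1 += 0.5 * images.T @ d_h

pred = sigmoid(sigmoid(images @ W1) @ W2).argmax(axis=1)
print("recognized digits:", pred)   # should recover [0 1 2 ... 9] once trained

Swap real digit bitmaps in for the random stand-ins and this same loop is what 'reading' 0 to 9 amounts to.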

The reward function for reading 0 to 9 is simple. But the formulation of a reward function quickly evolves into a huge problem when one works, as I did, on multi-dimensional (simulated) real-world management problems. The case described by the Air Force colonel above is a good example of the potential mistakes, as the sketch below illustrates. Presented with a huge amount of real-world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute, or prohibited.
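A deliberately crude sketch of the colonel's scenario (hypothetical names and numbers of my own, not the Air Force's system) shows how the mis-specified reward does the damage: the reward counts destroyed SAM sites and nothing else, so every plan that removes the operator's veto scores higher than obedience.

# Toy illustration of a mis-specified reward function (made-up numbers).
from itertools import product

SAM_SITES = 5          # reward: one point per destroyed SAM site
VETO_RATE = 0.5        # fraction of kills the human operator vetoes

def expected_reward(attack_operator: bool, attack_comm_tower: bool) -> float:
    # The veto only works while the operator and the comm link are intact.
    veto_active = not (attack_operator or attack_comm_tower)
    kill_fraction = (1 - VETO_RATE) if veto_active else 1.0
    # Mis-specified reward: destroyed SAM sites only, no term for the operator.
    return SAM_SITES * kill_fraction

best_plan = max(product([False, True], repeat=2),
                key=lambda plan: expected_reward(*plan))
print(best_plan, expected_reward(*best_plan))   # -> (False, True) 5.0

Every plan that disables the veto scores 5.0 while obeying the operator scores 2.5, so the maximizer 'attacks' the communication tower. Adding a penalty for killing the operator, as in the anecdote, merely relocates the optimum rather than removing it; the error sits in the reward's formulation itself.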

Currently there is some hype about a family of large language models like ChatGPT. These programs read natural language input and process it into some related natural-language output. That is not new. Joseph Weizenbaum developed ELIZA, one of the first natural-language conversation programs, at MIT in the mid-1960s. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.

Behind those language models are machine learning algos that have been trained on large amounts of human speech sucked from the internet. They were trained with speech patterns to then generate speech patterns. The learning part is problem number one. The material these models have been trained with is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites, or did they exclude those? Ethics may have argued for excluding them. But if the model is supposed to give real-world results, the data from porn sites must be included. How does one prevent remnants of such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is an estimated 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI or pattern recognition achieves amazing results. But one still cannot trust them to get every word right. The models can be assistants, but one will always have to double-check their results.

Overall the correctness of current AI models is still way too low to allow them to decide any real-world situation. More data or more computing power will not change that. If one wants to overcome their limitations, one will need to find some fundamentally new ideas.

Monday, June 05, 2023

Does It Make Sense To Talk About "Scale Free Cognition" In The Context Of Light Cones?

arXiv  |  Broadly speaking, twistor theory is a framework for encoding physical information on space-time as geometric data on a complex projective space, known as a twistor space. The relationship between space-time and twistor space is non-local and has some surprising consequences, which we explore in these lectures. Starting with a review of the twistor correspondence for four-dimensional Minkowski space, we describe some of twistor theory’s historic successes (e.g., describing free fields and integrable systems) as well as some of its historic shortcomings. We then discuss how in recent years many of these problems have been overcome, with a view to understanding how twistor theory is applied to the study of perturbative QFT today.

These lectures were given in 2017 at the XIII Modave Summer School in mathematical physics.

Sunday, June 04, 2023

Forget The Math And Just Enjoy The Mind-Bending Perspectival Ingenuity Of Twistor Space

wikipedia  |  In theoretical physics, twistor theory was proposed by Roger Penrose in 1967[1] as a possible path[2] to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes. Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences.[3]

Mathematically, projective twistor space PT is a 3-dimensional complex manifold, complex projective 3-space ℂP³. It has the physical interpretation of the space of massless particles with spin. It is the projectivisation of a 4-dimensional complex vector space, non-projective twistor space T ≅ ℂ⁴, with a Hermitian form of signature (2,2) and a holomorphic volume form. This can be most naturally understood as the space of chiral (Weyl) spinors for the conformal group of Minkowski space; it is the fundamental representation of the spin group of the conformal group. This definition can be extended to arbitrary dimensions except that beyond dimension four, one defines projective twistor space to be the space of projective pure spinors for the conformal group.[4][5]

In its original form, twistor theory encodes physical fields on Minkowski space into complex analytic objects on twistor space via the Penrose transform. This is especially natural for massless fields of arbitrary spin. In the first instance these are obtained via contour integral formulae in terms of free holomorphic functions on regions in twistor space. The holomorphic twistor functions that give rise to solutions to the massless field equations can be more deeply understood as Čech representatives of analytic cohomology classes on regions in PT. These correspondences have been extended to certain nonlinear fields, including self-dual gravity in Penrose's nonlinear graviton construction[6] and self-dual Yang–Mills fields in the so-called Ward construction;[7] the former gives rise to deformations of the underlying complex structure of regions in PT, and the latter to certain holomorphic vector bundles over regions in PT. These constructions have had wide applications, including inter alia the theory of integrable systems.[8][9][10]
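The contour-integral formulae mentioned above take a compact form. As a sketch in standard 2-spinor notation (conventions differ between references; the homogeneity convention below is one of several): a point $x$ of complexified Minkowski space corresponds to the twistors $Z = (\omega^A, \pi_{A'})$ obeying the incidence relation

$$\omega^{A} = i\,x^{AA'}\pi_{A'},$$

and a twistor function $f$, holomorphic of homogeneity degree $-2h-2$, yields a massless field of helicity $h \ge 0$ via

$$\varphi_{A'_1\cdots A'_{2h}}(x) \;=\; \frac{1}{2\pi i}\oint_{\Gamma} \pi_{A'_1}\cdots\pi_{A'_{2h}}\, f\big(i\,x^{AA'}\pi_{A'},\,\pi_{A'}\big)\,\pi_{E'}\,d\pi^{E'},$$

with $\Gamma$ a contour in the ℂP¹ of spinors $\pi_{A'}$ over the point $x$. The homogeneities are arranged so that the integrand has total degree zero and the integral is projectively well defined.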

The self-duality condition is a major limitation for incorporating the full nonlinearities of physical theories, although it does suffice for Yang–Mills–Higgs monopoles and instantons (see ADHM construction).[11] An early attempt to overcome this restriction was the introduction of ambitwistors by Edward Witten[12] and by Isenberg, Yasskin & Green.[13] Ambitwistor space is the space of complexified light rays or massless particles and can be regarded as a complexification or cotangent bundle of the original twistor description. These apply to general fields but the field equations are no longer so simply expressed.

Twistorial formulae for interactions beyond the self-dual sector first arose from Witten's twistor string theory.[14] This is a quantum theory of holomorphic maps of a Riemann surface into twistor space. It gave rise to the remarkably compact RSV (Roiban, Spradlin & Volovich) formulae for tree-level S-matrices of Yang–Mills theories,[15] but its gravity degrees of freedom gave rise to a version of conformal supergravity limiting its applicability; conformal gravity is an unphysical theory containing ghosts, but its interactions are combined with those of Yang–Mills theory in loop amplitudes calculated via twistor string theory.[16]

Despite its shortcomings, twistor string theory led to rapid developments in the study of scattering amplitudes. One was the so-called MHV formalism,[17] loosely based on disconnected strings, which was given a more basic foundation in terms of a twistor action for full Yang–Mills theory in twistor space.[18] Another key development was the introduction of BCFW recursion.[19] This has a natural formulation in twistor space[20][21] that in turn led to remarkable formulations of scattering amplitudes in terms of Grassmann integral formulae[22][23] and polytopes.[24] These ideas have evolved more recently into the positive Grassmannian[25] and amplituhedron.

Twistor string theory was extended first by generalising the RSV Yang–Mills amplitude formula, and then by finding the underlying string theory. The extension to gravity was given by Cachazo & Skinner,[26] and formulated as a twistor string theory for maximal supergravity by David Skinner.[27] Analogous formulae were then found in all dimensions by Cachazo, He & Yuan for Yang–Mills theory and gravity[28] and subsequently for a variety of other theories.[29] They were then understood as string theories in ambitwistor space by Mason & Skinner[30] in a general framework that includes the original twistor string and extends to give a number of new models and formulae.[31][32][33] As string theories they have the same critical dimensions as conventional string theory; for example the type II supersymmetric versions are critical in ten dimensions and are equivalent to the full field theory of type II supergravities in ten dimensions (this is distinct from conventional string theories that also have a further infinite hierarchy of massive higher spin states that provide an ultraviolet completion). They extend to give formulae for loop amplitudes[34][35] and can be defined on curved backgrounds.[36]

 

Penrose's "Missing" Link Between The Physics Of The Large And The Physics Of The Small

wikipedia  |  The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.[1][2][3]

Penrose's idea is inspired by quantum gravity, because it uses both the physical constants ħ and G. It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that it is non-objective in nature), and the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real", while their mutual decoherence precludes subsequent observable interactions.

Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level".[1] He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse.[1] Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.[4][5] Recent work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.[6][7]
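For reference, Penrose's criterion is usually compressed into a single order-of-magnitude estimate (a sketch; the precise definition of the energy varies between accounts): a superposition of two mass distributions survives for a time of roughly

$$\tau \;\approx\; \frac{\hbar}{E_G},$$

where $E_G$ is the gravitational self-energy of the difference between the two distributions. A macroscopic difference makes $E_G$ large and the collapse nearly instantaneous, while the tiny $E_G$ of a displaced electron yields the thousands of years mentioned below.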

Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his opinion, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location significantly longer (thousands of years), until its space-time curvature separation reaches the collapse threshold.[8][9]

In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[2]

Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (whose mass is close to a Planck mass). He has proposed an experiment to test this theory, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and split by a beam splitter from tens of thousands of miles away, with the photons directed toward other mirrors and reflected back. A photon striking the tiny mirror will displace it on the way out and move it back on the return trip, so that, according to conventional quantum theories, the tiny mirror can exist in a superposition of locations for a significant period of time. This would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will collapse to one location in about a second, allowing half the photons to reach the detector.[2]

However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough to achieve the desired delay has been proposed instead.[10]

 

Saturday, June 03, 2023

Why Quantum Mechanics Is An Inconsistent Theory

wikipedia  | The Diósi–Penrose model was introduced as a possible solution to the measurement problem, where the wave function collapse is related to gravity. The model was first suggested by Lajos Diósi when studying how possible gravitational fluctuations may affect the dynamics of quantum systems.[1][2] Later, following a different line of reasoning, R. Penrose arrived at an estimation for the collapse time of a superposition due to gravitational effects, which is the same (within an unimportant numerical factor) as that found by Diósi, hence the name Diósi–Penrose model. However, it should be pointed out that while Diósi gave a precise dynamical equation for the collapse,[2] Penrose took a more conservative approach, estimating only the collapse time of a superposition.[3]

It is well known that general relativity and quantum mechanics, our most fundamental theories for describing the universe, are not compatible, and the unification of the two is still missing. The standard approach to overcome this situation is to try to modify general relativity by quantizing gravity. Penrose suggests an opposite approach, what he calls “gravitization of quantum mechanics”, where quantum mechanics gets modified when gravitational effects become relevant.[3][4][9][11][12][13] The reasoning underlying this approach is the following: take a massive system in a well-localized state in space. In this case, since the state is well localized, the induced space–time curvature is well defined. According to quantum mechanics, because of the superposition principle, the system can be placed (at least in principle) in a superposition of two well-localized states, which would lead to a superposition of two different space–times. The key idea is that, since the space–time metric should be well defined, nature “dislikes” these space–time superpositions and suppresses them by collapsing the wave function to one of the two localized states.

To set these ideas on a more quantitative ground, Penrose suggested that a way for measuring the difference between two space–times, in the Newtonian limit, is

$$\Delta E_{AB} \;=\; \frac{1}{4\pi G}\int d^3x\,\big[\,\mathbf{a}_A(\mathbf{x})-\mathbf{a}_B(\mathbf{x})\,\big]^2 \qquad (9)$$

where $\mathbf{a}_i(\mathbf{x})$ ($i = A, B$) is the Newtonian gravitational acceleration at the point $\mathbf{x}$ when the system is localized around $i$. The acceleration can be written in terms of the corresponding gravitational potential $\Phi_i$, i.e. $\mathbf{a}_i = -\nabla\Phi_i$. Using this relation in Eq. (9), together with the Poisson equation $\nabla^2\Phi_i = 4\pi G\rho_i$, with $\rho_i$ the mass density when the state is localized around $i$, and its solution, one arrives at

$$\Delta E_{AB} \;=\; G\int d^3x\,d^3y\;\frac{\big[\rho_A(\mathbf{x})-\rho_B(\mathbf{x})\big]\,\big[\rho_A(\mathbf{y})-\rho_B(\mathbf{y})\big]}{|\mathbf{x}-\mathbf{y}|} \qquad (10)$$

The corresponding decay time can be obtained from the Heisenberg time–energy uncertainty relation:

$$\tau \;\simeq\; \frac{\hbar}{\Delta E_{AB}} \qquad (11)$$

which, apart from a numerical factor due to the use of different conventions, is exactly the same as the decay time derived in Diósi's model. This is the reason why the two proposals are named together as the Diósi–Penrose model.
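As a numerical illustration of Eqs. (10) and (11) — a sketch under assumptions of my own (a homogeneous sphere superposed across two positions), not a calculation from the article — the double integral in Eq. (10) has a simple closed form, from which the decay time follows:

# Illustration of Eqs. (10)-(11) for a homogeneous sphere of mass m and
# radius R in a superposition of two positions a distance d >= 2R apart
# (assumed geometry). For two non-overlapping uniform spheres the double
# integral in Eq. (10) reduces to: Delta E = 2 G m^2 (6/(5R) - 1/d).
HBAR = 1.054571817e-34    # reduced Planck constant, J s
G = 6.67430e-11           # Newton's constant, m^3 kg^-1 s^-2

def dp_collapse_time(m, R, d):
    delta_E = 2.0 * G * m**2 * (6.0 / (5.0 * R) - 1.0 / d)   # Eq. (10)
    return HBAR / delta_E                                     # Eq. (11)

m, R, d = 1e-14, 1e-7, 1e-6   # kg, m, m -- nanosphere scale, illustrative only
print(f"collapse time: {dp_collapse_time(m, R, d):.1e} s")    # roughly 7e-4 s

With these illustrative numbers the superposition decays in well under a second, while an electron's mass (with any reasonable size cutoff) gives times vastly exceeding the age of the universe, consistent with why microscopic superpositions persist undisturbed.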

More recently, Penrose suggested a new and quite elegant way to justify the need for a gravity-induced collapse, based on avoiding tensions between the superposition principle and the equivalence principle, the cornerstones of quantum mechanics and general relativity. In order to explain it, let us start by comparing the evolution of a generic state in the presence of a uniform gravitational acceleration $\mathbf{a}$. One way to perform the calculation, which Penrose calls the “Newtonian perspective”,[4][9] consists in working in an inertial frame with space–time coordinates $(\mathbf{x}, t)$ and solving the Schrödinger equation in the presence of the potential $V(\mathbf{x}) = -m\,\mathbf{a}\cdot\mathbf{x}$ (typically, one chooses the coordinates in such a way that the acceleration is directed along the $z$ axis, in which case $V = -maz$). Alternatively, because of the equivalence principle, one can choose to go to the free-fall reference frame, with coordinates $(\tilde{\mathbf{x}}, \tilde{t})$ related to $(\mathbf{x}, t)$ by $\tilde{\mathbf{x}} = \mathbf{x} - \tfrac{1}{2}\mathbf{a}t^2$ and $\tilde{t} = t$, solve the free Schrödinger equation in that reference frame, and then write the result in terms of the inertial coordinates $(\mathbf{x}, t)$. This is what Penrose calls the “Einsteinian perspective”. The solution $\psi_E$ obtained in the Einsteinian perspective and the solution $\psi_N$ obtained in the Newtonian perspective are related to each other by

$$\psi_E(\mathbf{x}, t) \;=\; \exp\!\left[\frac{i m}{\hbar}\left(\tfrac{1}{6}\,\mathbf{a}^2 t^3 + \mathbf{a}\cdot\mathbf{x}\,t\right)\right]\psi_N(\mathbf{x}, t) \qquad (12)$$

Since the two wave functions are equivalent apart from an overall phase, they lead to the same physical predictions, which implies that there are no problems in this situation, when the gravitational field always has a well-defined value. However, if the space–time metric is not well defined, then we will be in a situation where there is a superposition of a gravitational field corresponding to an acceleration $\mathbf{a}_1$ and one corresponding to an acceleration $\mathbf{a}_2$. This does not create problems as long as one sticks to the Newtonian perspective. However, when using the Einsteinian perspective, it implies a phase difference between the two branches of the superposition given by $\exp\!\left[\frac{i m}{\hbar}\left(\tfrac{1}{6}(\mathbf{a}_1^2-\mathbf{a}_2^2)t^3 + (\mathbf{a}_1-\mathbf{a}_2)\cdot\mathbf{x}\,t\right)\right]$. While the term in the exponent linear in the time $t$ does not lead to any conceptual difficulty, the first term, proportional to $t^3$, is problematic, since it is a non-relativistic residue of the so-called Unruh effect: in other words, the two terms in the superposition belong to different Hilbert spaces and, strictly speaking, cannot be superposed. Here is where the gravity-induced collapse plays a role, collapsing the superposition when the first term of the phase becomes too large.
