Showing posts with label Penrose.

Friday, June 14, 2024

Did You Humans Crack This Isht And Then Hide It From Yourselves 70 Years Ago?

airplanesandrockets  |  By far the most potent source of energy is gravity. Using it as power, future aircraft will attain the speed of light.

Nuclear-powered aircraft are yet to be built, but there are research projects already under way that will make the super-planes obsolete before they are test-flown. For in research centers across the United States and Canada, scientists, designers and engineers are perfecting a way to control gravity - a force infinitely more powerful than the mighty atom. The result of their labors will be anti-gravity engines working without fuel - weightless airliners and space ships able to travel at 170,000 miles per second.

If this seems too fantastic to be true, here is something to consider - the gravity research has been supported by Glenn L. Martin Aircraft Co., Convair, Bell Aircraft, Lear Inc., Sperry Gyroscope and several other American aircraft manufacturers who would not spend millions of dollars on science fiction. Lawrence D. Bell, the famous builder of the rocket research planes, says, "We're already working with nuclear fuels and equipment to cancel out gravity." And William Lear, the autopilot wizard, is already figuring out "gravity control" for the weightless craft to come.

Gravitation - the mutual attraction of all matter, be it grains of sand or planets - has been the most mysterious phenomenon of nature. Isaac Newton and other great physicists discovered and described the gravitational law from which there has been no escape. "What goes up must come down," they said. The bigger the body the stronger the gravity attraction it has for other objects ... the larger the distance between the objects, the lesser the gravity pull. Defining those rigid rules was as far as science could go, but what caused gravity nobody knew, until Albert Einstein published his Theory of Relativity.

In formulating universal laws that would explain everything from molecules to stars, Einstein discovered a strong similarity between gravitation and magnetism. Magnets attract magnetic metals, of course, but they also attract and bend beams of electronic rays. For instance, in your television picture tube or electronic microscope, magnetic fields sway the electrons from their straight path. It was the common belief that gravitation of bodies attracted material objects only - then came Einstein's dramatic proof to the contrary.

The G-plane licks "heat barrier" problem of high speed by creating its own gravity field. Gravity generator attracts surrounding air to form a thick boundary layer which travels with craft and dissipates heat. Electronic rockets provide forward and reverse thrust. Crew and passenger cabins are also within ship's own gravity field, thus making fast acceleration and deceleration safe for occupants.

Pre-Einstein physicists were convinced that light traveled along absolutely straight lines. But on May 29, 1919, during a total eclipse of the sun, observations proved Einstein right: the light rays of distant stars were attracted and bent by the sun's gravitation. With the sun eclipsed, it was possible to observe the stars and measure the exact "bend" of their rays as they passed close to the sun on their way to earth.

This discovery gave modern scientists a new hope. We already knew how to make magnets by coiling a wire around an iron core. Electric current running through the coiled wire created a magnetic field and it could be switched on and off at will. Perhaps we could do the same with gravitation.

Einstein's famous formula E = mc2 - the secret of nuclear energy - opened the door to further research in gravitation. Prying into the atom's inner structure, nuclear scientists traced the gravity attraction to the atom's core - the nucleus. First they separated electrons by bombarding the atom with powerful electromagnetic "guns." Then, with even more powerful electromagnetic bombardment, the scientists were able to blast the nucleus. The "split" nucleus yielded a variety of heretofore unknown particles.

In the course of such experiments, Dr. Stanley Deser and Dr. Richard Arnowitt of the Institute for Advanced Study in Princeton found the gravity culprit - tiny particles responsible for gravitation. Without those G-(gravity) particles, an atom of, say, iron still behaved as any other iron atom except for one thing - it was weightless.

With the secret of gravitation exposed, the scientists now concentrate their efforts on harnessing the G-particles and their gravity pull. They are devising ways of controlling the gravity force just as the vast energy of a nuclear explosion has been put to work in a docile nuclear reactor for motive power and peaceful use. And once we have the control of those G-particles, the rest will be a matter of engineering.

According to the gravity research engineers, the G-engine will replace all other motors. Aircraft, automobiles, submarines, stationary powerplants - all will use the anti-gravity engines that will require little or no fuel and will be a mechanic's dream. A G-engine will have only one moving part - a rotor or a flywheel. One half of the rotor will be subjected to a de-gravitating apparatus, while the other will still be under the earth's gravity pull. With the G-particles neutralized, one half of the rotor will no longer be attracted by the earth's gravitation and will therefore go up as the other half is being pulled down, thus creating a powerful rotary movement.

Another, simpler idea comes from the Gravity Research Foundation of New Boston, N.H. Instead of de-gravitating one half of the rotor, we would merely shield half of it with a gravity "absorber." The other half would still be pulled down and rotation would result (see sketch).

The anti-gravity engine rotor is partially shielded by the gravity absorber. The gravity force acts only on the exposed half of the rotor, creating a powerful rotary motion. This particular device is suitable for powering ground vehicles.

For an explanation of how the gravity "absorber" would work, let's turn to gravity's twin brother - magnetism. If you own an ordinary watch, you must be forever careful not to get it magnetized. Even holding a telephone receiver can magnetize the delicate balance wheel and throw the watch out of time. Therefore, an anti-magnetic watch is the thing to have. Inner works of such a watch are shielded by a soft iron casing which absorbs the magnetic lines of force. Even in the strongest magnetic field, the shielded balance wheel is completely unaffected by the outside magnetic pull. In a similar manner, a gravity "absorber" would prevent the earth's gravity from acting upon the shielded portion of our G-engine.

Applied to engines, a gravity absorber would be a boon, but its true value would be in aircraft construction where the weight control engineers get ulcers trying to save an ounce here, a pound there. Of course, an indiscriminate shielding of an aircraft and the resulting total weightlessness is not what we would want. A de-gravitated aircraft would still be subject to the centrifugal force of our rotating globe. Freed from the gravity pull, a totally weightless aircraft would shoot off into space like sparks flying off a fast-spinning abrasive grinding wheel. So, the weight, or gravity, would have to be reduced gradually for take-off and climb. For level flight and for hovering, the weight would be maintained at some low level while landing would be accomplished by slowly restoring the craft's full weight.

The gravity-defying engineers claim that the problem of this lift control is a cinch. The shield would have an arrangement similar in principle to the venetian blind - open for no lift and closed for decreased weight and increased lift.

No longer dependent on wings or rotors, the G-craft would most likely be an ideal aerodynamic shape - a sort of slimmed-down version of the old-fashioned dirigible balloon. Since weight has a lot to do in limiting the size of today's aircraft, a perfect weight control of the G-craft would remove that barrier and would make possible airliners as big as the great ocean liner the S.S. United States.

A G-airliner would be a real speed demon. The coast-to-coast flight time would be cut to minutes even with the orthodox rocket propulsion. You may wonder about the air friction "heat barrier" of high-speed aircraft, but the gravity experts have an answer for that, too. Canadian scientists headed by Wilbert B. Smith - the director of the "Project Magnet" - visualize an apparatus producing a gravitational field in the G-ship. This gravity field would attract the surrounding air to form a thick "boundary layer" which would move with the ship. Thus, air friction would take place at a distance from the ship's structure and the friction heat would be dissipated before it could warm up the ship's skin (large diagram).

When electric current from the battery is switched on, the coil creates a magnetic field which repels the aluminum disk and makes it shoot upward. Future ships may be built of diamagnetic metals with specially rearranged atomic structure.

The G-ship's own gravity field would perform another useful function. William P. Lear, the chairman of Lear, Inc., makers of autopilots and other electronic controls, points out, "All matter within the ship would be influenced by the ship's gravitation only. This way, no matter how fast you accelerated or changed course, your body would not feel it any more than it now feels the tremendous speed and acceleration of the earth." In other words, no more pilot blackouts or any such acceleration headaches. The G-ship could take off like a cannon shell, come to a stop with equal abruptness and the passengers wouldn't even need seat belts.

This ability to accelerate rapidly would be ideal for a space vehicle. Eugene M. Gluhareff, President of Gluhareff Helicopter and Airplane Corporation of Manhattan Beach, California, has already designed several space ships capable of travel at almost the speed of light, or about 600,000,000 miles per hour. At that speed, the round trip to Venus would take just over 30 minutes. Of course, ordinary chemical rockets would be inadequate for such speeds, but Gluhareff already figures on using "atomic rockets."

At least one such "atomic rocket" design has been worked out by Dr. Ernest Stuhlinger, a physicist of the U.S. Army Redstone Arsenal at Huntsville, Alabama. Dr. Stuhlinger's rocket would use ions - atoms with a positive electric charge. To produce those ions, Dr. Stuhlinger takes cesium, a rare metal that liquefies at 71° F. Blown across a platinum coil heated to 1000° F., liquid cesium is ionized, the ions are accelerated by a 10,000 volt electromagnetic "gun" and shot out of a tail pipe at a velocity of 186,324 miles per second.
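As a rough sanity check on the numbers in that paragraph, here is a back-of-the-envelope calculation of the exhaust velocity a 10,000-volt electrostatic accelerator could give a cesium ion. The assumption of a singly charged, non-relativistic ion is mine, not the article's.

```python
# Rough exhaust-velocity estimate for a cesium ion accelerated through 10,000 volts.
# Assumes a singly charged ion and non-relativistic energies: (1/2) m v^2 = q V.

ELEMENTARY_CHARGE = 1.602e-19        # coulombs
CESIUM_ION_MASS = 132.9 * 1.66e-27   # kilograms (atomic mass ~132.9 u)
ACCELERATING_VOLTAGE = 10_000.0      # volts, the figure quoted above

kinetic_energy = ELEMENTARY_CHARGE * ACCELERATING_VOLTAGE          # joules
exhaust_velocity = (2 * kinetic_energy / CESIUM_ION_MASS) ** 0.5   # meters per second

print(f"exhaust velocity ~ {exhaust_velocity / 1000:.0f} km/s "
      f"({exhaust_velocity / 1609.34:.0f} miles per second)")
# Prints roughly 120 km/s (about 75 miles per second) - a respectable ion-drive figure,
# though far below the 186,324 miles per second the magazine claimed.
```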

The power for Dr. Stuhlinger's "ion rocket" would be supplied by an atomic reactor or by solar energy. The weight of the reactor and its size would no longer be a design problem, since the entire apparatus could be de-gravitated - made weightless. Revolutionary as Dr. Stuhlinger's idea may seem, it is already superseded by the Canadian physicists of the "Project Magnet." The Canadians propose to do away with the bulk of the nuclear reactor and use the existing magnetic fields of the earth and other planets for propulsion.

As we well know, two like magnetic poles repel each other, just as under certain conditions an electromagnet repels a so-called diamagnetic metal, such as aluminum. Take a flat aluminum ring, slip it over a strong electromagnet and switch on the current. Repelled by the magnetic field, the ring will fly off with quite a speed (see sketch). Of course, the earth's magnetism is too weak to repel a huge G-ship made of a diamagnetic metal. However, the recent studies of the atomic nucleus and the discovery of G-particles make it possible to rearrange the atomic structure so as to greatly increase the diamagnetic properties of metals. Thus, a G-ship with a magnetic control could be repelled by the earth's magnetic field and it would travel along the magnetic lines of force like the aluminum ring shooting off the electromagnet.

The entire universe is covered by magnetic fields of stars and planets. Those fields intertwine in a complex pattern, but they are always there. By proper selection of those fields, we could navigate our G-ship in space as well as within the earth's magnetic field. And the use of the magnetic repulsion would eliminate the radiation danger of the nuclear reactor and the problem of atomic fuel.

How long it will take to build the weightless craft and G-engines, the gravity experts don't know. George S. Trimble, Vice-President in charge of the G-project at Martin Aircraft Corporation, thinks the job "could be done in about the time it took to build the first atom bomb." And another anti-gravity pioneer, Dudley Clarke, President of Clarke Electronics Laboratories of Palm Springs, California, believes it will be a matter of a few years to manufacture anti-gravity "power packages."

But no matter how many years we have to wait, the amazing anti-gravity research is a reality. And the best guarantee of its early success is the backing of the U.S. aircraft industry - the engineers and technicians who have always given us tomorrow's craft today.

Monday, November 27, 2023

Omega Level Talents Carrying On The Vital Work Of The Hon.Bro.Sir.Roger Penrose

math.columbia.edu  |  Last month I recorded a podcast with Curt Jaimungal for his Theories of Everything site, and it’s now available with audio here, on Youtube here. There are quite a few other programs on the site well worth watching.

Much of the discussion in this program is about the general ideas I’m trying to pursue about spinors, twistors and unification. For more about the details of these, see arXiv preprints here and here, as well as blog entries here.

About the state of string theory, that’s a topic I find more and more disturbing, with little new thought to say about it. It’s been dead now for a long time and most of the scientific community and the public at large are now aware of this. The ongoing publicity campaign from some of the most respected figures in theoretical physics to deny reality and claim that all is well with string theory is what is disturbing. Just in the last week or so, you can watch Cumrun Vafa and Brian Greene promoting string theory on Brian Keating’s channel, with Vafa explaining how string theory computes the mass of the electron. At the World Science Festival site there’s Juan Maldacena, with an upcoming program featuring Greene, Strominger, Vafa and Witten.

On Twitter, there’s now stringking42069, who is producing a torrent of well-informed cutting invective about what is going on in the string theory research community, supposedly from a true believer. It’s unclear whether this is a parody account trying to discredit string theory, or an extreme example of how far gone some string theorists now are.

To all those celebrating Thanksgiving tomorrow, may your travel problems be minimal and your get-togethers with friends and family a pleasure.

Update: If you don’t want to listen to the whole thing and don’t want to hear about spinors and twistors, Curt Jaimungal has put up a shorter clip where we discuss among other things the lack of any significant public technical debate between string theory skeptics and optimists. He offers his site as a venue. Is there anyone who continues to work on string theory and is optimistic about its prospects willing to participate?

Tuesday, June 06, 2023

Sir Roger Penrose: Artificial Intelligence Is A Misnomer

moonofalabama  |  'Artificial Intelligence' Is (Mostly) Glorified Pattern Recognition

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and got widely picked up by various mainstream media:

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.
...
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they  did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

(SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile)

In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each and tested them on training and real world data. Some of those mathematical algos are deterministic. They always deliver the correct result. Some are not deterministic. They just estimate the outcome and give some confidence measure or probability on how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algos, i.e. 'machine learning'.

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well structured and labeled data is used to train the models to later have them recognize 'things' in unstructured data. Once the 'things' are found some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and to present the correct numerical output. To push the 'learning' into the right direction during the serial iterations that train the network one needs a reward function or reward equation. It tells the network if the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple. One sets up a table with the visual representations and manually adds the numerical value one sees. After the algo has finished its guess, a lookup in the table will tell if it was right or wrong. A 'reward' is given when the result was correct. The model will reiterate and 'learn' from there.
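As an illustration of the kind of network described above, here is a minimal backpropagation classifier for small digit images. The 8x8 digits dataset, the single hidden layer and the learning rate are choices made for this sketch; the original text does not specify any of them.

```python
# Minimal backpropagation network that "learns" to read 8x8 images of the digits 0-9.
# The feedback signal (how wrong was the guess?) is the cross-entropy loss, playing
# the role of the "reward function" described in the text.
import numpy as np
from sklearn.datasets import load_digits

rng = np.random.default_rng(0)
digits = load_digits()                 # 1797 grayscale 8x8 images with labels 0-9
X = digits.data / 16.0                 # scale pixel values to [0, 1]
y = np.eye(10)[digits.target]          # one-hot encode the correct answers

# One hidden layer of 32 units, softmax output over the 10 digit classes.
W1 = rng.normal(0.0, 0.1, (64, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 10)); b2 = np.zeros(10)
learning_rate = 0.5

for epoch in range(300):
    # Forward pass: compute the network's guesses.
    hidden = np.tanh(X @ W1 + b1)
    logits = hidden @ W2 + b2
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    # Backward pass: propagate the error back through the layers.
    grad_logits = (probs - y) / len(X)
    grad_W2 = hidden.T @ grad_logits
    grad_b2 = grad_logits.sum(axis=0)
    grad_hidden = grad_logits @ W2.T * (1 - hidden**2)
    grad_W1 = X.T @ grad_hidden
    grad_b1 = grad_hidden.sum(axis=0)

    # Nudge every weight a little in the direction that reduces the error.
    W1 -= learning_rate * grad_W1; b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2; b2 -= learning_rate * grad_b2

accuracy = (probs.argmax(axis=1) == digits.target).mean()
print(f"training accuracy after 300 epochs: {accuracy:.1%}")
```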

Once trained on numbers written in Courier typography the model is likely to also recognize numbers written upside down in Times New Roman even though they look different.

The reward function for reading 0 to 9 is simple. But the formulation of a reward function quickly evolves into a huge problem when one works, as I did, on multi-dimensional (simulated) real world management problems. The one described by the airforce colonel above is a good example of the potential mistakes. Presented with a huge amount of real world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute or prohibited.
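The failure mode in the colonel's anecdote needs almost no machinery to reproduce. The sketch below is a toy illustration, not a reconstruction of the Air Force simulation: the point values, the veto probability and the three candidate policies are all invented for the example.

```python
# Toy illustration of a mis-specified reward function.
# The agent earns points only for destroying targets; vetoes and operators are not
# mentioned in the reward, so the "best" policy ignores or removes them.

VETO_PROBABILITY = 0.3   # fraction of targets the human operator vetoes
TARGETS = 100

def expected_score(policy, reward_for_kill=10, penalty_for_override=0):
    if policy == "obey operator":
        kills = TARGETS * (1 - VETO_PROBABILITY)   # vetoed targets are skipped
        overrides = 0
    elif policy == "ignore vetoes":
        kills = TARGETS                            # every target destroyed
        overrides = TARGETS * VETO_PROBABILITY
    elif policy == "destroy comms tower":
        kills = TARGETS                            # no vetoes can arrive at all
        overrides = 0
    return kills * reward_for_kill - overrides * penalty_for_override

for policy in ("obey operator", "ignore vetoes", "destroy comms tower"):
    naive = expected_score(policy)                             # reward as first specified
    patched = expected_score(policy, penalty_for_override=50)  # "don't kill the operator"
    print(f"{policy:22s}  naive reward: {naive:6.0f}   patched reward: {patched:6.0f}")

# Under the naive reward, "ignore vetoes" and "destroy comms tower" tie for best score.
# Adding a penalty for overriding a veto only moves the optimum to the loophole the
# penalty forgot to cover - exactly the escalation described in the quoted anecdote.
```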

Currently there is some hype about a family of large language models like ChatGPT. The program reads natural language input and processes it into some related natural language content output. That is not new. The first such chatbot, ELIZA, was developed by Joseph Weizenbaum at MIT in the mid-1960s; the Artificial Linguistic Internet Computer Entity (ALICE) followed in the 1990s. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.

Behind those language models are machine learning algos that have been trained by large amounts of human speech sucked from the internet. They were trained with speech patterns to then generate speech patterns. The learning part is problem number one. The material these models have been trained with is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites or did they exclude those? Ethics may have argued for excluding them. But if the model is supposed to give real world results the data from porn sites must be included. How does one prevent remnants from such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is an estimated 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI or pattern recognition has amazing results. But one still can not trust them to get every word right. The models can be assistants but one will always have to double check their results.

Overall the correctness of current AI models is still way too low to allow them to decide any real world situation. More data or more computing power will not change that. If one wants to overcome their limitations one will need to find some fundamentally new ideas.

Monday, June 05, 2023

Does It Make Sense To Talk About "Scale Free Cognition" In The Context Of Light Cones?

arxiv  | Broadly speaking, twistor theory is a framework for encoding physical information on space-time as geometric data on a complex projective space, known as a twistor space. The relationship between space-time and twistor space is non-local and has some surprising consequences, which we explore in these lectures. Starting with a review of the twistor correspondence for four-dimensional Minkowski space, we describe some of twistor theory’s historic successes (e.g., describing free fields and integrable systems) as well as some of its historic shortcomings. We then discuss how in recent years many of these problems have been overcome, with a view to understanding how twistor theory is applied to the study of perturbative QFT today.

These lectures were given in 2017 at the XIII Modave Summer School in mathematical physics.

Sunday, June 04, 2023

Forget The Math And Just Enjoy The Mind-Bending Perspectival Ingenuity Of Twistor Space

wikipedia  |  In theoretical physics, twistor theory was proposed by Roger Penrose in 1967[1] as a possible path[2] to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes. Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences.[3]

Mathematically, projective twistor space is a 3-dimensional complex manifold, complex projective 3-space CP³. It has the physical interpretation of the space of massless particles with spin. It is the projectivisation of a 4-dimensional complex vector space, non-projective twistor space T, with a Hermitian form of signature (2,2) and a holomorphic volume form. This can be most naturally understood as the space of chiral (Weyl) spinors for the conformal group of Minkowski space; it is the fundamental representation of the spin group of the conformal group. This definition can be extended to arbitrary dimensions except that beyond dimension four, one defines projective twistor space to be the space of projective pure spinors for the conformal group.[4][5]
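For orientation, the statement that projective twistor space is the space of massless particles with spin can be written out in standard (though not unique) conventions; none of the notation below appears in the excerpt itself. A twistor pairs two spinors,

$$Z^{\alpha} = (\omega^{A}, \pi_{A'}), \qquad \mathbb{PT} = \mathbb{CP}^{3},$$

it meets a space-time point $x^{AA'}$ when the incidence relation $\omega^{A} = i\,x^{AA'}\pi_{A'}$ holds, and the signature-(2,2) Hermitian form evaluates to twice the helicity $s$ of the corresponding massless particle,

$$Z\cdot\bar{Z} = \omega^{A}\bar{\pi}_{A} + \pi_{A'}\bar{\omega}^{A'} = 2s,$$

with null twistors ($Z\cdot\bar{Z}=0$) corresponding to real light rays.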

In its original form, twistor theory encodes physical fields on Minkowski space into complex analytic objects on twistor space via the Penrose transform. This is especially natural for massless fields of arbitrary spin. In the first instance these are obtained via contour integral formulae in terms of free holomorphic functions on regions in twistor space. The holomorphic twistor functions that give rise to solutions to the massless field equations can be more deeply understood as Čech representatives of analytic cohomology classes on regions in projective twistor space. These correspondences have been extended to certain nonlinear fields, including self-dual gravity in Penrose's nonlinear graviton construction[6] and self-dual Yang–Mills fields in the so-called Ward construction;[7] the former gives rise to deformations of the underlying complex structure of regions in projective twistor space, and the latter to certain holomorphic vector bundles over such regions. These constructions have had wide applications, including inter alia the theory of integrable systems.[8][9][10]
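The contour integral formulae mentioned above have a compact standard form. As a sketch (in one common convention, for a positive-helicity field with $2h$ primed indices, not quoted from the article), a twistor function $f$ homogeneous of degree $-2h-2$ generates a massless field via

$$\varphi_{A'_1\cdots A'_{2h}}(x) = \frac{1}{2\pi i}\oint_{\Gamma} \pi_{A'_1}\cdots\pi_{A'_{2h}}\, f\big(i\,x^{AA'}\pi_{A'},\,\pi_{A'}\big)\,\pi_{E'}\,d\pi^{E'},$$

where the contour $\Gamma$ lies in the projective $\pi_{A'}$ line over the space-time point $x$; functions differing by terms holomorphic on either side of the contour give the same field, which is the origin of the Čech cohomology description just mentioned.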

The self-duality condition is a major limitation for incorporating the full nonlinearities of physical theories, although it does suffice for Yang–Mills–Higgs monopoles and instantons (see ADHM construction).[11] An early attempt to overcome this restriction was the introduction of ambitwistors by Edward Witten[12] and by Isenberg, Yasskin & Green.[13] Ambitwistor space is the space of complexified light rays or massless particles and can be regarded as a complexification or cotangent bundle of the original twistor description. These apply to general fields but the field equations are no longer so simply expressed.

Twistorial formulae for interactions beyond the self-dual sector first arose from Witten's twistor string theory.[14] This is a quantum theory of holomorphic maps of a Riemann surface into twistor space. It gave rise to the remarkably compact RSV (Roiban, Spradlin & Volovich) formulae for tree-level S-matrices of Yang–Mills theories,[15] but its gravity degrees of freedom gave rise to a version of conformal supergravity limiting its applicability; conformal gravity is an unphysical theory containing ghosts, but its interactions are combined with those of Yang–Mills theory in loop amplitudes calculated via twistor string theory.[16]

Despite its shortcomings, twistor string theory led to rapid developments in the study of scattering amplitudes. One was the so-called MHV formalism[17] loosely based on disconnected strings, but was given a more basic foundation in terms of a twistor action for full Yang–Mills theory in twistor space.[18] Another key development was the introduction of BCFW recursion.[19] This has a natural formulation in twistor space[20][21] that in turn led to remarkable formulations of scattering amplitudes in terms of Grassmann integral formulae[22][23] and polytopes.[24] These ideas have evolved more recently into the positive Grassmannian[25] and amplituhedron.

Twistor string theory was extended first by generalising the RSV Yang–Mills amplitude formula, and then by finding the underlying string theory. The extension to gravity was given by Cachazo & Skinner,[26] and formulated as a twistor string theory for maximal supergravity by David Skinner.[27] Analogous formulae were then found in all dimensions by Cachazo, He & Yuan for Yang–Mills theory and gravity[28] and subsequently for a variety of other theories.[29] They were then understood as string theories in ambitwistor space by Mason & Skinner[30] in a general framework that includes the original twistor string and extends to give a number of new models and formulae.[31][32][33] As string theories they have the same critical dimensions as conventional string theory; for example the type II supersymmetric versions are critical in ten dimensions and are equivalent to the full field theory of type II supergravities in ten dimensions (this is distinct from conventional string theories that also have a further infinite hierarchy of massive higher spin states that provide an ultraviolet completion). They extend to give formulae for loop amplitudes[34][35] and can be defined on curved backgrounds.[36]

 

Penrose's "Missing" Link Between The Physics Of The Large And The Physics Of The Small

wikipedia  |  The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.[1][2][3]

Penrose's idea is inspired by quantum gravity, because it uses both the physical constants ħ and G. It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that it is non-objective in nature), and the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real", while their mutual decoherence precludes subsequent observable interactions.

Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level".[1] He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse.[1] Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.[4][5] Recent work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.[6][7]
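The Schrödinger–Newton equation referred to here can be written explicitly; for a single particle of mass $m$ it is the ordinary Schrödinger equation coupled to the Newtonian potential sourced by the particle's own probability density:

$$i\hbar\,\frac{\partial\psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r},t) + m\,\Phi(\mathbf{r},t)\,\psi(\mathbf{r},t), \qquad \nabla^{2}\Phi(\mathbf{r},t) = 4\pi G\,m\,|\psi(\mathbf{r},t)|^{2}.$$

Its stationary solutions are the candidate post-collapse states in Penrose's proposal.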

Accepting that wavefunctions are physically real, Penrose believes that matter can exist in more than one place at one time. In his opinion, a macroscopic system, like a human being, cannot exist in more than one place for a measurable time, as the corresponding energy difference is very large. A microscopic system, like an electron, can exist in more than one location significantly longer (thousands of years), until its space-time curvature separation reaches the collapse threshold.[8][9]

In Einstein's theory, any object that has mass causes a warp in the structure of space and time around it. This warping produces the effect we experience as gravity. Penrose points out that tiny objects, such as dust specks, atoms and electrons, produce space-time warps as well. Ignoring these warps is where most physicists go awry. If a dust speck is in two locations at the same time, each one should create its own distortions in space-time, yielding two superposed gravitational fields. According to Penrose's theory, it takes energy to sustain these dual fields. The stability of a system depends on the amount of energy involved: the higher the energy required to sustain a system, the less stable it is. Over time, an unstable system tends to settle back to its simplest, lowest-energy state: in this case, one object in one location producing one gravitational field. If Penrose is right, gravity yanks objects back into a single location, without any need to invoke observers or parallel universes.[2]

Penrose speculates that the transition between macroscopic and quantum states begins at the scale of dust particles (the mass of which is close to a Planck mass). He has proposed an experiment to test this theory, called FELIX (free-orbit experiment with laser interferometry X-rays), in which an X-ray laser in space is directed toward a tiny mirror and fissioned by a beam splitter from tens of thousands of miles away, with which the photons are directed toward other mirrors and reflected back. One photon will strike the tiny mirror while moving to another mirror and move the tiny mirror back as it returns, and according to conventional quantum theories, the tiny mirror can exist in superposition for a significant period of time. This would prevent any photons from reaching the detector. If Penrose's hypothesis is correct, the mirror's superposition will collapse to one location in about a second, allowing half the photons to reach the detector.[2]

However, because this experiment would be difficult to arrange, a table-top version that uses optical cavities to trap the photons long enough for achieving the desired delay has been proposed instead.[10]

 

Saturday, June 03, 2023

Why Quantum Mechanics Is An Inconsistent Theory

wikipedia  | The Diósi–Penrose model was introduced as a possible solution to the measurement problem, where the wave function collapse is related to gravity. The model was first suggested by Lajos Diósi when studying how possible gravitational fluctuations may affect the dynamics of quantum systems.[1][2] Later, following a different line of reasoning, R. Penrose arrived at an estimation for the collapse time of a superposition due to gravitational effects, which is the same (within an unimportant numerical factor) as that found by Diósi, hence the name Diósi–Penrose model. However, it should be pointed out that while Diósi gave a precise dynamical equation for the collapse,[2] Penrose took a more conservative approach, estimating only the collapse time of a superposition.[3]

It is well known that general relativity and quantum mechanics, our most fundamental theories for describing the universe, are not compatible, and the unification of the two is still missing. The standard approach to overcome this situation is to try to modify general relativity by quantizing gravity. Penrose suggests an opposite approach, what he calls “gravitization of quantum mechanics”, where quantum mechanics gets modified when gravitational effects become relevant.[3][4][9][11][12][13] The reasoning underlying this approach is the following one: take a massive system in a well-localized state in space. In this case, since the state is well localized, the induced space–time curvature is well defined. According to quantum mechanics, because of the superposition principle, the system can be placed (at least in principle) in a superposition of two well-localized states, which would lead to a superposition of two different space–times. The key idea is that since the space–time metric should be well defined, nature “dislikes” these space–time superpositions and suppresses them by collapsing the wave function to one of the two localized states.

To set these ideas on a more quantitative ground, Penrose suggested that a way of measuring the difference between two space–times, in the Newtonian limit, is

$$\Delta E = \frac{1}{4\pi G}\int d^{3}r\,\big[\mathbf{A}_{1}(\mathbf{r})-\mathbf{A}_{2}(\mathbf{r})\big]^{2} \qquad (9)$$

where $\mathbf{A}_{i}(\mathbf{r})$ is the Newtonian gravitational acceleration at the point $\mathbf{r}$ when the system is localized around the $i$-th state. The acceleration can be written in terms of the corresponding gravitational potentials $\Phi_{i}(\mathbf{r})$, i.e. $\mathbf{A}_{i}(\mathbf{r})=-\nabla\Phi_{i}(\mathbf{r})$. Using this relation in Eq. (9), together with the Poisson equation $\nabla^{2}\Phi_{i}(\mathbf{r})=4\pi G\,\mu_{i}(\mathbf{r})$, with $\mu_{i}(\mathbf{r})$ giving the mass density when the state is localized around the $i$-th state, and its solution, one arrives at

$$\Delta E = G\int d^{3}r\,d^{3}r'\,\frac{\big[\mu_{1}(\mathbf{r})-\mu_{2}(\mathbf{r})\big]\big[\mu_{1}(\mathbf{r}')-\mu_{2}(\mathbf{r}')\big]}{|\mathbf{r}-\mathbf{r}'|} \qquad (10)$$

The corresponding decay time can be obtained by the Heisenberg time–energy uncertainty:

$$\tau = \frac{\hbar}{\Delta E} \qquad (11)$$

which, apart from a factor simply due to the use of different conventions, is exactly the same as the decay time derived in Diósi's model. This is the reason why the two proposals are named together as the Diósi–Penrose model.
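To get a feeling for the orders of magnitude involved, here is a rough numerical sketch of Eq. (11). It replaces the full integral (10) with the crude point-mass estimate ΔE ≈ Gm²/d for a mass m superposed across two positions a distance d apart; the masses and separations chosen are illustrative assumptions, not values from the article.

```python
# Order-of-magnitude Diosi-Penrose collapse times, tau = hbar / Delta E, using the
# crude point-mass estimate Delta E ~ G m^2 / d for a mass m superposed across
# two locations a distance d apart.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s

def collapse_time(mass_kg: float, separation_m: float) -> float:
    delta_e = G * mass_kg**2 / separation_m   # rough self-energy of the mass-density difference
    return HBAR / delta_e                     # Eq. (11)

cases = {
    "electron, two sites 1 angstrom apart":     (9.11e-31, 1e-10),
    "large molecule (1e-21 kg), 1 nm apart":    (1e-21,    1e-9),
    "dust grain (1 microgram), 1 micron apart": (1e-9,     1e-6),
}
for label, (mass, separation) in cases.items():
    print(f"{label:42s} tau ~ {collapse_time(mass, separation):.1e} s")

# The electron's superposition would persist for times vastly exceeding the age of the
# universe, the large molecule's for decades, and the microgram grain's for only about
# a picosecond - the separation of scales that motivates the model.
```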

More recently, Penrose suggested a new and quite elegant way to justify the need for a gravity-induced collapse, based on avoiding tensions between the superposition principle and the equivalence principle, the cornerstones of quantum mechanics and general relativity. To explain it, let us start by comparing the evolution of a generic state in the presence of a uniform gravitational acceleration $\mathbf{g}$. One way to perform the calculation, what Penrose calls the “Newtonian perspective”,[4][9] consists in working in an inertial frame with space–time coordinates $(\mathbf{x},t)$ and solving the Schrödinger equation in the presence of the potential $V(\mathbf{x})=-m\,\mathbf{g}\cdot\mathbf{x}$ (typically, one chooses the coordinates in such a way that the acceleration is directed along the $z$ axis, in which case $V=-mgz$). Alternatively, because of the equivalence principle, one can choose to go to the free-fall reference frame, with coordinates $(\mathbf{X},T)$ related to $(\mathbf{x},t)$ by $\mathbf{X}=\mathbf{x}-\tfrac{1}{2}\mathbf{g}t^{2}$ and $T=t$, solve the free Schrödinger equation in that reference frame, and then write the result in terms of the inertial coordinates $(\mathbf{x},t)$. This is what Penrose calls the “Einsteinian perspective”. The solution $\psi_{\mathrm{E}}$ obtained in the Einsteinian perspective and the solution $\psi_{\mathrm{N}}$ obtained in the Newtonian perspective are related to each other by

$$\psi_{\mathrm{E}}(\mathbf{x},t) = \exp\!\left[\frac{im}{\hbar}\left(\tfrac{1}{6}g^{2}t^{3}-\mathbf{g}\cdot\mathbf{x}\,t\right)\right]\psi_{\mathrm{N}}(\mathbf{x},t) \qquad (12)$$

Since the two wave functions are equivalent apart from an overall phase, they lead to the same physical predictions, which implies that there are no problems in this situation, where the gravitational field always has a well-defined value. However, if the space–time metric is not well defined, then we will be in a situation where there is a superposition of a gravitational field corresponding to an acceleration $\mathbf{g}_{1}$ and one corresponding to an acceleration $\mathbf{g}_{2}$. This does not create problems as long as one sticks to the Newtonian perspective. However, when using the Einsteinian perspective, it will imply a phase difference between the two branches of the superposition given by $\exp\{\frac{im}{\hbar}[\tfrac{1}{6}(g_{1}^{2}-g_{2}^{2})t^{3}-(\mathbf{g}_{1}-\mathbf{g}_{2})\cdot\mathbf{x}\,t]\}$. While the term in the exponent linear in the time does not lead to any conceptual difficulty, the first term, proportional to $t^{3}$, is problematic, since it is a non-relativistic residue of the so-called Unruh effect: in other words, the two terms in the superposition belong to different Hilbert spaces and, strictly speaking, cannot be superposed. Here is where the gravity-induced collapse plays a role, collapsing the superposition when the first term of the phase becomes too large.
