Showing posts with label scientific mystery. Show all posts

Saturday, June 03, 2023

The Collapse Of The Wave Function

wikipedia  |  In quantum mechanics, the measurement problem is the problem of how, or whether, wave function collapse occurs. The inability to observe such a collapse directly has given rise to different interpretations of quantum mechanics and poses a key set of questions that each interpretation must answer.

The wave function in quantum mechanics evolves deterministically according to the Schrödinger equation as a linear superposition of different states. However, actual measurements always find the physical system in a definite state. Any future evolution of the wave function is based on the state the system was discovered to be in when the measurement was made, meaning that the measurement "did something" to the system that is not obviously a consequence of Schrödinger evolution. The measurement problem is to describe what that "something" is: how a superposition of many possible values becomes a single measured value.

To express matters differently (paraphrasing Steven Weinberg),[1][2] the Schrödinger wave equation determines the wave function at any later time. If observers and their measuring apparatus are themselves described by a deterministic wave function, why can we not predict precise results for measurements, but only probabilities? As a general question: How can one establish a correspondence between quantum reality and classical reality?[3]
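To make the tension concrete, here is a minimal numerical sketch (ours, not Wikipedia's; the two-level Hamiltonian and the evolution time are arbitrary illustrative choices): the state evolves deterministically under the Schrödinger equation, yet the theory hands back only Born-rule probabilities for what a measurement will find.

```python
# Deterministic evolution vs. probabilistic measurement, in miniature.
import numpy as np

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]])            # toy two-level Hamiltonian

def evolve(psi0, t):
    """Schrodinger evolution: psi(t) = exp(-i H t / hbar) @ psi0."""
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t / hbar)) @ evecs.conj().T
    return U @ psi0

psi = evolve(np.array([1.0, 0.0], dtype=complex), t=0.7)   # now a superposition
born_probs = np.abs(psi) ** 2       # all the theory predicts about a measurement
print(born_probs, born_probs.sum()) # e.g. [0.585 0.415] 1.0
```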

The views often grouped together as the Copenhagen interpretation are the oldest and, collectively, probably still the most widely held attitude about quantum mechanics.[4][5] N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.[6][7]

Generally, views in the Copenhagen tradition posit something in the act of observation which results in the collapse of the wave function. This concept, though often attributed to Niels Bohr, was due to Werner Heisenberg, whose later writings obscured many disagreements he and Bohr had had during their collaboration and that the two never resolved.[8][9] In these schools of thought, wave functions may be regarded as statistical information about a quantum system, and wave function collapse is the updating of that information in response to new data.[10][11] Exactly how to understand this process remains a topic of dispute.[12]

Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".[13][14][15][16]

Hugh Everett's many-worlds interpretation attempts to solve the problem by suggesting that there is only one wave function, the superposition of the entire universe, and it never collapses—so there is no measurement problem. Instead, the act of measurement is simply an interaction between quantum entities, e.g. observer, measuring instrument, electron/positron etc., which entangle to form a single larger entity, for instance living cat/happy scientist. Everett also attempted to demonstrate how the probabilistic nature of quantum mechanics would appear in measurements, a work later extended by Bryce DeWitt. However, proponents of the Everettian program have not yet reached a consensus regarding the correct way to justify the use of the Born rule to calculate probabilities.[17][18]

De Broglie–Bohm theory tries to solve the measurement problem very differently: the information describing the system contains not only the wave function, but also supplementary data (a trajectory) giving the position of the particle(s). The role of the wave function is to generate the velocity field for the particles. These velocities are such that the probability distribution for the particles remains consistent with the predictions of orthodox quantum mechanics. According to de Broglie–Bohm theory, interaction with the environment during a measurement procedure separates the wave packets in configuration space, which is where apparent wave function collapse comes from, even though there is no actual collapse.[19]
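For reference, the velocity field mentioned above is usually written as the guidance equation (standard de Broglie–Bohm form, supplied here; it is not part of the excerpt):

```latex
\mathbf{v}_k \;=\; \frac{d\mathbf{Q}_k}{dt}
  \;=\; \frac{\hbar}{m_k}\,
        \operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)
        \Bigg|_{\mathbf{Q}_1(t),\dots,\mathbf{Q}_N(t)}
```

where the Q_k are the actual particle positions, the supplementary data the excerpt describes.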

A fourth approach is given by objective-collapse models. In such models, the Schrödinger equation is modified and acquires nonlinear terms. These nonlinear modifications are stochastic in nature and lead to behaviour that, for microscopic quantum objects such as electrons or atoms, is unmeasurably close to that given by the usual Schrödinger equation. For macroscopic objects, however, the nonlinear modification becomes important and induces the collapse of the wave function. Objective-collapse models are effective theories. The stochastic modification is thought to stem from some external non-quantum field, but the nature of this field is unknown. One possible candidate is the gravitational interaction, as in the models of Diósi and Penrose. The main difference of objective-collapse models compared to the other approaches is that they make falsifiable predictions that differ from standard quantum mechanics. Experiments are already getting close to the parameter regime where these predictions can be tested.[20]

The Ghirardi–Rimini–Weber (GRW) theory proposes that wave function collapse happens spontaneously as part of the dynamics. Particles have a non-zero probability of undergoing a "hit", or spontaneous collapse of the wave function, on the order of once every hundred million years.[21] Though collapse is extremely rare, the sheer number of particles in a measurement system means that the probability of a collapse occurring somewhere in the system is high. Since the entire measurement system is entangled (by quantum entanglement), the collapse of a single particle initiates the collapse of the entire measurement apparatus. Because the GRW theory makes different predictions from orthodox quantum mechanics in some conditions, it is not an interpretation of quantum mechanics in a strict sense.
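The "sheer number of particles" argument is easy to check numerically. A back-of-envelope sketch (our illustrative numbers, taking the quoted rate of roughly one hit per particle per hundred million years):

```python
# Mean waiting time for the first spontaneous "hit" in a system of N particles.
SECONDS_PER_YEAR = 3.156e7
rate_per_particle = 1 / (1e8 * SECONDS_PER_YEAR)   # hits per particle per second

for n_particles in (1, 1e6, 1e23):                 # lone atom .. macroscopic pointer
    mean_wait = 1 / (rate_per_particle * n_particles)
    print(f"N = {n_particles:.0e}: first collapse after ~{mean_wait:.1e} s")
# N = 1e+00: first collapse after ~3.2e+15 s (a hundred million years)
# N = 1e+23: first collapse after ~3.2e-08 s (effectively instantaneous)
```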

Friday, June 02, 2023

Constructive Interference Patterns Give Rise To Unitary Conscious Experience

wikipedia  |  Smythies[27] defines the combination problem, also known as the subjective unity of perception, as "How do the brain mechanisms actually construct the phenomenal object?". Revonsuo[1] equates this to "consciousness-related binding", emphasizing the entailment of a phenomenal aspect. As Revonsuo explores in 2006,[28] there are nuances of difference beyond the basic BP1:BP2 division. Smythies speaks of constructing a phenomenal object ("local unity" for Revonsuo) but philosophers such as Descartes, Leibniz, Kant and James (see Brook and Raymont[29]) have typically been concerned with the broader unity of a phenomenal experience ("global unity" for Revonsuo) – which, as Bayne[30] illustrates may involve features as diverse as seeing a book, hearing a tune and feeling an emotion. Further discussion will focus on this more general problem of how sensory data that may have been segregated into, for instance, "blue square" and "yellow circle" are to be re-combined into a single phenomenal experience of a blue square next to a yellow circle, plus all other features of their context. There is a wide range of views on just how real this "unity" is, but the existence of medical conditions in which it appears to be subjectively impaired, or at least restricted, suggests that it is not entirely illusory.[31]

There are many neurobiological theories about the subjective unity of perception. Different visual features such as color, size, shape, and motion are computed by largely distinct neural circuits but we experience an integrated whole. The different visual features interact with each other in various ways. For example, shape discrimination of objects is strongly affected by orientation but only slightly affected by object size.[32] Some theories suggest that global perception of the integrated whole involves higher order visual areas.[33] There is also evidence that the posterior parietal cortex is responsible for perceptual scene segmentation and organization.[34] Bodies facing each other are processed as a single unit and there is increased coupling of the extrastriate body area (EBA) and the posterior superior temporal sulcus (pSTS) when bodies are facing each other.[35] This suggests that the brain is biased towards grouping humans in twos or dyads.[36]

Dennett[40] has proposed that our sense that our experiences are single events is illusory and that, instead, at any one time there are "multiple drafts" of sensory patterns at multiple sites. Each would only cover a fragment of what we think we experience. Arguably, Dennett is claiming that consciousness is not unified and there is no phenomenal binding problem. Most philosophers have difficulty with this position (see Bayne[30]) but some physiologists agree with it. In particular, the demonstration of perceptual asynchrony in psychophysical experiments by Moutoussis and Zeki,[48][49] in which color is perceived before the orientation of lines by about 40 ms and before motion by about 80 ms, constitutes an argument that different attributes are consciously perceived at different times over these very short periods after visual stimulation, suggesting a disunity of consciousness,[50] at least over brief intervals. Dennett's view might be in keeping with evidence from recall experiments and change blindness purporting to show that our experiences are much less rich than we sense them to be – what has been called the Grand Illusion.[51] However, few, if any, other authors suggest the existence of multiple partial "drafts". Moreover, also on the basis of recall experiments, Lamme[52] has challenged the idea that richness is illusory, emphasizing that phenomenal content cannot be equated with content to which there is cognitive access.

Dennett does not tie drafts to biophysical events. Multiple sites of causal convergence are invoked in specific biophysical terms by Edwards[53] and Sevush.[54] In this view the sensory signals to be combined in phenomenal experience are available, in full, at each of multiple sites. To avoid non-causal combination each site/event is placed within an individual neuronal dendritic tree. The advantage is that "compresence" is invoked just where convergence occurs neuro-anatomically. The disadvantage, as for Dennett, is the counter-intuitive concept of multiple "copies" of experience. The precise nature of an experiential event or "occasion", even if local, also remains uncertain.

The majority of theoretical frameworks for the unified richness of phenomenal experience adhere to the intuitive idea that experience exists as a single copy, and draw on "functional" descriptions of distributed networks of cells. Baars[55] has suggested that certain signals, encoding what we experience, enter a "Global Workspace" within which they are "broadcast" to many sites in the cortex for parallel processing. Dehaene, Changeux and colleagues[56] have developed a detailed neuro-anatomical version of such a workspace. Tononi and colleagues[57] have suggested that the level of richness of an experience is determined by the narrowest information interface "bottleneck" in the largest sub-network or "complex" that acts as an integrated functional unit. Lamme[52] has suggested that networks supporting reciprocal signaling rather than those merely involved in feed-forward signaling support experience. Edelman and colleagues have also emphasized the importance of re-entrant signaling.[58] Cleeremans[59] emphasizes meta-representation as the functional signature of signals contributing to consciousness.

In general, such network-based theories are not explicitly theories of how consciousness is unified, or "bound" but rather theories of functional domains within which signals contribute to unified conscious experience. A concern about functional domains is what Rosenberg[60] has called the boundary problem; it is hard to find a unique account of what is to be included and what excluded. Nevertheless, this is, if anything is, the consensus approach.

Within the network context, a role for synchrony has been invoked as a solution to the phenomenal binding problem as well as the computational one. In his book The Astonishing Hypothesis,[61] Crick appears to be offering a solution to BP2 as much as BP1. Even von der Malsburg[62] introduces detailed computational arguments about object feature binding with remarks about a "psychological moment". The Singer group[63] also appears to be as interested in the role of synchrony in phenomenal awareness as in computational segregation.

The apparent incompatibility of using synchrony to both segregate and unify might be explained by sequential roles. However, Merker[20] points out what appears to be a contradiction in attempts to solve the subjective unity of perception in terms of a functional (effectively meaning computational) rather than a local biophysical domain, in the context of synchrony.

Functional arguments for a role for synchrony are in fact underpinned by analysis of local biophysical events. However, Merker[20] points out that the explanatory work is done by the downstream integration of synchronized signals in post-synaptic neurons: "It is, however, by no means clear what is to be understood by 'binding by synchrony' other than the threshold advantage conferred by synchrony at, and only at, sites of axonal convergence onto single dendritic trees..." In other words, although synchrony is proposed as a way of explaining binding on a distributed, rather than a convergent, basis the justification rests on what happens at convergence. Signals for two features are proposed as bound by synchrony because synchrony effects downstream convergent interaction. Any theory of phenomenal binding based on this sort of computational function would seem to follow the same principle. The phenomenality would entail convergence, if the computational function does.

The assumption in many of the quoted models is that computational and phenomenal events, at least at some point in the sequence of events, parallel each other in some way. The difficulty remains in identifying what that way might be. Merker's[20] analysis suggests either (1) that both computational and phenomenal aspects of binding are determined by convergence of signals on neuronal dendritic trees, or (2) that our intuitive ideas about the need for "binding" in a "holding together" sense in both computational and phenomenal contexts are misconceived. We may be looking for something extra that is not needed. Merker, for instance, argues that the homotopic connectivity of sensory pathways does the necessary work.

 

Wednesday, May 10, 2023

Viktor Schauberger And Implosion Technology

subtle.energy  |  We live in a realm of polarity. Polarity is observable in all things: up and down, night and day, big and small, etc. When it comes to harnessing natural forces for energy, humanity has recently become proficient in utilizing the power of explosion to move our vehicles, light our houses, and run our modern world. Currently, we look towards heat-based technologies that utilize steam, gas pressure, and atomic fission to fulfill the majority of our energy needs.

In our quest to expand our knowledge of mastering this form of explosive energy, we may have accidentally overlooked the potential of another viable energy form, found in the equal and opposite force of explosion: implosion. The renowned Austrian naturalist, scientist, inventor, author and researcher Viktor Schauberger noticed this oversight and initiated work to discover the promise that implosion power could hold for our civilization.

Viktor Schauberger noticed that the interactions of opposites often lead to a spiraling interchange between the extremes of polarity. For example, when a cold weather front meets currents of hot air, they spiral inward to form a hurricane or tornado. All things move between their extremes of polar opposites, or di-polarity, towards the polarity of greater perfection or destruction.

“Kapieren und kopieren,” or “comprehend and copy nature,” was Viktor’s motto, the method by which he gained his inspiration. He spent much of his time in the forest, making innovative contributions to the timber industry by improving the efficiency of log flumes through direct observation of the behavior of rivers. His deep understanding of water earned him the nickname “Water Wizard.”

Explosion vs. Implosion

Schauberger observed that if the driving force of movement was centrifugal, or spiraling outwards, it would tend towards being destructive. If the spin was concentrated inwardly, centripetal, the force would favor nourishment and growth. According to his work, centrifugence leads to friction, which leads to heat, which he associated with the intensification of gravity. Centripetence, the opposing force, would lead to cooling and a lack of friction, and therefore to levitation.

For example, in nature, hot lava flows deep under the earth’s crust, where gravity continually intensifies towards the planet’s center. However, when water vapor cools, it rises into the atmosphere and floats, essentially levitating over us in the form of puffy clouds. Somewhere in the middle, these forces converge. Water evaporates up in curling spirals, the earth’s crust is whirled away, melted down into the lava.

By using suction instead of pressure, with the proper applications, energy generation as we know it today could be revolutionized. Not only would this implosion energy be significantly cleaner than many of the leading energy options of today, it would also lend itself to greater longevity for the equipment used to generate it. Friction and heat are taxing on materials, causing machinery to break down more quickly and more waste to be generated.

Schauberger considered the choice to rely on combustion engines to be a great error. His belief was that the resources of the world are to be protected and that we are using them up at great cost, both economically and ecologically. Just as we preserve the body’s fuel, food, by keeping it in the freezer, we destroy its molecular bonds by cooking. The same, he argued, applies to mechanical fuel.

If we were to choose to include implosion technology or cooling power, we may be able to stabilize our dependence on natural resources and reach a new renaissance of clean, sustainable energy.

Looking Towards the Future

Schauberger also maintained that the key to overcoming gravity and achieving levitation can be found through implosion technology. Although this tenet is not currently accepted in modern scientific circles, recent mainstream media news reports confirm that someone in the cosmos, maybe even from our planet, has clearly mastered what is most likely levitation technology.


Viktor Schauberger

svpwiki  |  Viktor Schauberger (1885-1958) was an Austrian man who spent much of his life as a forestmaster in his native country, and as an inventor who had a deep understanding and appreciation for the life energy dynamics of what he quite clearly called water's life cycle. He put forth the scientific idea that water is indeed alive, and as such it can be sterile, immature or mature depending on the cluster size, treatment, motion and temperature of the water. An early invention was a wooden pipe to carry water. Schauberger believed that in order for water to mature it must not be exposed to sunlight and must be allowed to flow undisturbed, meaning able to move in a snakelike fashion. What is actually happening in a naturally coursing stream is that a longitudinal whirling flow forms along the length of the stream. Several books on his life and work, such as 'Living Energies' by Callum Coats, describe the gradient cycles in forests and rivers and their importance to the quality of the water contained therein.

'Understand and duplicate nature' was one of Viktor's most famous sayings, denoting his simple philosophy. In practice this deep understanding of the treatment of water led him to create several machines, such as the repulsine, that use the principle of vortex motion in their design. In the repulsine, air is passed through a narrow corrugated chamber created by two plates with impelling blades along the outer edge, so as to create a suction turbine. In one experiment Viktor plotted the resistance of three test pipes to water flow. He found that the resistance of the first, glass, increased at about a 40 degree angle on the chart. Copper was a little less, about a 30 degree angle. A spiraling copper pipe plotted a variation over various flow rates; however, at one point on the graph it drops below zero, denoting the 'sweet spot' where the flow rate, temperature and volume of the water all match up. Schauberger found that the ideal temperature for water is around +4°C, where it is at its densest, before it starts expanding from heat or expanding as it crystallizes into ice. So in his inventions the suction action would be used to further cool the water - if it can be controlled to stay in the sweet spot by its own suction action, then efficiency goes up considerably.
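The one independently checkable claim in this passage, that water is densest near +4°C, does hold up. A minimal sketch using Kell's 1975 empirical density fit (our choice of reference formula, not anything from Schauberger's work):

```python
# Density of air-free water (kg/m^3) vs temperature (deg C), Kell (1975) fit.
def water_density(t):
    num = (999.83952 + 16.945176 * t - 7.9870401e-3 * t**2
           - 46.170461e-6 * t**3 + 105.56302e-9 * t**4
           - 280.54253e-12 * t**5)
    return num / (1 + 16.897850e-3 * t)

# Scan 0..10 C in 0.01 C steps and report where the density peaks.
best = max((water_density(i / 100), i / 100) for i in range(0, 1001))
print(f"max density {best[0]:.3f} kg/m^3 at ~{best[1]:.2f} C")  # ~999.97 at ~3.98 C
```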

Viktor Schauberger was also an avid farmer. He devised a heart-shaped spiral plough in which soil is turned out in a longitudinal spiral as it passes through the blades of the plough. He also found that the copper content of the soil is important, and that using copper or copper-coated tools is much better than steel ones. The reasons involve the electrical charge of the water and how it interacts with dissolved minerals in the soil. [no source given]

Viktor Schauberger on his observation of Nature, particularly water:

My principal preoccupation was directed towards the conservation of the forest and wild game, and even in earliest youth my fondest desire was to understand Nature, and through such understanding to come closer to the truth; a truth that I was unable to discover either at school or in church.

In this quest I was thus drawn time and time again up into the forest. I could sit for hours on end and watch the water flowing by without ever becoming tired or bored. At the time I was still unaware that in water the greatest secret lay hidden. Nor did I know that water was the carrier of life or the ur-source of what we call consciousness. Without any preconceptions, I simply let my gaze fall on the water as it flowed past. It was only years later that I came to realise that running water attracts our consciousnesses like a magnet and draws a small part of it along in its wake. It is a force that can act so powerfully that one temporarily loses one’s consciousness and involuntarily falls asleep.

As time passed I began to play a game with water’s secret powers; I surrendered my so-called free consciousness and allowed the water to take possession of it for a while. Little by little this game turned into a profoundly earnest endeavour, because I realised that one could detach one’s own consciousness from the body and attach it to that of the water.

When my own consciousness was eventually returned to me, then the water’s most deeply concealed psyche often revealed the most extraordinary things to me. As a result of this investigation, a researcher was born who could dispatch his consciousness on a voyage of discovery. In this way I was able to experience things that had escaped other people’s notice, because they were unaware that a human being is able to send forth his free consciousness into those places the eyes cannot see.

By practising this blindfolded vision, I eventually developed a bond with mysterious Nature, whose essential being I then slowly learnt to perceive and understand. [Viktor Schauberger]

"Nature is not served by rigid laws, but by rhythmical, reciprocal processes. Nature uses none of the preconditions of the chemist or the physicist for the purposes of evolution. Nature excludes all fire on principle for purposes of growth; therefore all contemporary machines are unnatural and constructed according to false premises.

Nature avails herself of the bio-dynamic form of motion through which the biological prerequisite for the emergence of life is provided. Its purpose is to ur-procreate [re-create the primary, the essence of] ‘higher’ conditions of matter out of the originally inferior raw materials, which afford the evolutionally older, or the numerically greater rising generation, the possibility of a constant capacity to evolve, for without any growing and increasing reserves of energy there would be no evolution or development.

This results first and foremost in the collapse of the so-called Law of the Conservation of Energy, and in further consequence the Law of Gravity, and all other dogmatics lose any rational or practical basis." [Viktor Schauberger, from "Implosion" No. 81, reprinted in Nexus magazine, Apr-May 1996]

 

Sunday, May 07, 2023

Chemists Big Mad About Pollack's Structured Water Science

 ACS  |  You might call Gerald H. Pollack “the Teflon professor.”

Pollack, a bioengineering professor at the University of Washington, Seattle, has been the subject of savage criticism for his heterodox theories about water—yet he continues to enjoy great success.

In the past decade, Pollack claims to have amassed experimental evidence that in addition to ice, liquid, and gas, water can form a fourth, gel-like or liquid-crystalline phase, as well as store charge—a property that would violate the law of electroneutrality in bulk fluids. Most water and electrochemists dismiss his results, saying they can be entirely explained by invoking basic water chemistry, and the presence of impurities.

These weighty judgments don’t seem to have deterred Pollack’s supporters, however. Pollack has published numerous papers on his theories in respected journals, including Physical Review E and the ACS journals Langmuir and Journal of Physical Chemistry B. And this year, he received a $3.8 million grant from the National Institutes of Health’s new Transformative Research Projects Program (T-R01).

Pollack acknowledges that his research is controversial. “It’s impossible to break new ground without arousing controversy,” he tells C&EN. But, he adds, “I’ve somehow managed to stay funded.”

Despite—or perhaps because of—its ubiquity and central importance in biology, chemistry, and physics, water has long been steeped in controversy. In the 1960s, researchers debated the existence of polywater, a polymerized form of liquid water with high boiling point and viscosity. Polywater was eventually debunked, only to be replaced by the concept of water memory in the 1980s. This idea that liquid water can sustain ordered structures for long periods of time is one of the key tenets of homeopathy, a scientifically suspect concept, in which water supposedly “remembers” features of a solute even after repeated dilutions that remove all solute molecules. Water memory has also been debunked in the pages of Nature (1988, 334, 287).

In recent years, Pollack has moved outside the confines of the cell to the structure of water in general. In an annual faculty lecture at the University of Washington titled “Water, Energy, and Life: Fresh Views From the Water’s Edge,” which is also making the rounds on YouTube, Pollack describes what he calls an “exclusion zone,” where microspheres in a container of water pull away from a surface while an organized water gel thousands of layers thick forms. Any input, whether from sunlight or heat, puts energy into the system and increases the phenomenon, he says.

But as Pollack treads further into the territory of chemists, criticisms of his ideas have become more pointed. A recent paper of his in Langmuir, titled “Can Water Store Charge?” made the argument that pure water, hooked up to electrodes, will form large pH gradients that persist long after the current is turned off (Langmuir 2009, 25, 542). A firestorm ensued.

Until the early 2000s, most of Pollack’s publications centered on bioengineering topics such as the behavior of muscle proteins. But in 2001, he published the book “Cells, Gels, and the Engines of Life,” in which he dismantled the standard view of cells, including ion pumps and membrane channels. He posited instead that the water inside cells is a structured gel that plays a fundamental role in the organization and action of cellular structures.

Some reviewers took Pollack to task: University of Colorado, Boulder, biology professor Michael W. Klymkowsky criticized the book for an “overall style reminiscent of creationist writings” (Nat. Cell Bio. 2001, 3, E213). But some lauded the book’s fresh outlook. Harvard University bioengineering professor Donald Ingber described the book as a “nicely sculpted … polemic against complacency in the cell biology establishment” (Cell 2002, 109, 688).

 

Structured Water Science

pollacklab  |  Water has three phases – gas, liquid, and solid; but findings from our laboratory imply the presence of a surprisingly extensive fourth phase that occurs at interfaces. The formal name for this fourth phase is exclusion-zone water, aka EZ water. This finding may have profound implications for chemistry, physics, and biology.

The impact of surfaces on the contiguous aqueous phase is generally thought to extend no more than a few water-molecule layers. We find, however, that colloidal and molecular solutes are profoundly excluded from the vicinity of hydrophilic surfaces, to distances up to several hundred micrometers. Such large zones of exclusion have been observed next to many different hydrophilic surfaces, and many diverse solutes are excluded. Hence, the exclusion phenomenon appears to be quite general.​

To test whether the physical properties of the exclusion zone differ from those of bulk water, multiple methods have been applied. NMR, infrared, and birefringence imaging, as well as measurements of electrical potential, viscosity, and UV-VIS and infrared-absorption spectra, collectively reveal that the solute-free zone is a physically distinct, ordered phase of water. It is much like a liquid crystal. It can co-exist essentially indefinitely with the contiguous solute-containing phase. Indeed, this unexpectedly extensive zone may be a candidate for the long-postulated “fourth phase” of water considered by earlier scientists.

The energy responsible for building this charged, low entropy zone comes from light. We found that incident radiant energy including UV, visible, and near-infrared wavelengths induce exclusion-zone growth in a spectrally sensitive manner. IR is particularly effective. Five-minute exposure to radiation at 3.1 µm (corresponding to OH stretch) causes an exclusion-zone-width increase of up to three times. Apparently, incident photons cause some change in bulk water that predisposes constituent molecules to reorganize and build the charged, ordered exclusion zone. How this occurs is under study.​
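As a side calculation (ours, not the lab's), the 3.1 µm wavelength singled out above does sit squarely in the OH-stretch band, which is easy to confirm from the photon energy and wavenumber:

```python
# Photon energy and wavenumber at 3.1 um (OH stretch is ~3200-3600 cm^-1).
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J per electron-volt

wavelength = 3.1e-6                  # m
energy = h * c / wavelength          # photon energy, J
wavenumber_cm = 1e-2 / wavelength    # 1 / (lambda in cm)
print(f"{energy / eV:.2f} eV, {wavenumber_cm:.0f} cm^-1")  # ~0.40 eV, ~3226 cm^-1
```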

Photons from ordinary sunlight, then, may have an unexpectedly powerful effect that goes beyond mere heating. It may be that solar energy builds order and separates charge between the near-surface exclusion zone and the bulk water beyond — the separation effectively creating a battery. This light-induced charge separation resembles the first step of photosynthesis. Indeed, this light-induced action would seem relevant not only for photosynthetic processes, but also for all realms of nature involving water and interfaces.​

The work outlined above was selected in the first cohort of NIH Transformative R01 awards, which allowed deeper and broader exploration. It was also selected as the recipient of the 2008 University of Washington Annual Lectureship. Each year, out of the University’s 3,800 faculty members, one is chosen to receive this award. Viewable here, the lecture presents the material in a lively manner, accessible to non-experts.

The material now appears in a book, published 2013, entitled The Fourth Phase of Water: Beyond Solid, Liquid and Vapor. Sample chapters are freely accessible at www.ebnerandsons.com, which also contains published reviews. Reader reviews can be found on Amazon.com.

Many lectures and interviews on the material above can be found on the internet. Of interest are two TEDx talks. The original one presents an outline of the basic discoveries, designed for a lay audience. The second one, 2016, describes the relevance of EZ water for health.

Also of interest may be a short Discovery Channel piece that combines fourth phase water with snowboarding.

 

 

Saturday, May 06, 2023

You Already Know Three Times All The Oceans Are Locked In The Mantle...,

earth  |  Scientists from the University of Alabama have discovered a dense layer of ocean floor material that covers the boundary between the Earth’s core and mantle, according to a study published in the journal Science Advances.

This layer of ancient ocean floor was likely subducted underground as the Earth’s plates shifted, making it denser than the rest of the deep mantle, and it slows seismic waves reverberating beneath the surface. This ultra-low velocity zone (ULVZ) was previously seen only in isolated patches but has now been found to exist across a large region.

“Seismic investigations, such as ours, provide the highest resolution imaging of the interior structure of our planet, and we are finding that this structure is vastly more complicated than once thought,” said study lead author Dr. Samantha Hansen. “Our research provides important connections between shallow and deep Earth structure and the overall processes driving our planet.”

The layer is only tens of kilometers thick, vanishingly thin compared to the Earth’s dominant layers. It was detected through high-resolution imaging of seismic signals, which were used to map a variable layer of material across the study region. The properties of this anomalous core-mantle boundary coating include strong wave-speed reductions, which give the feature its name: ultra-low velocity zone.

“Analyzing thousands of seismic recordings from Antarctica, our high-definition imaging method found thin anomalous zones of material at the CMB everywhere we probed,” said study co-author Dr. Edward Garnero, a geophysicist at Arizona State University who co-led the research. “The material’s thickness varies from a few kilometers to tens of kilometers. This suggests we are seeing mountains on the core, in some places up to 5 times taller than Mt. Everest.”

These underground “mountains” are thought to be former oceanic seafloors that have sunk to the core-mantle boundary. They may play an important role in how heat escapes from the core, which powers the magnetic field. Additionally, material from the ancient ocean floors can also become entrained in mantle plumes or hot spots, which travel back to the surface through volcanic eruptions.

The discovery of this layer provides important insights into the structure and processes of our planet, and it underscores the importance of continued exploration and study of the Earth’s interior. 

“This is a really exciting result, and it provides a critical piece of information for understanding how the Earth works,” said Dr. Garnero. “It’s fascinating to think that we can learn so much about our planet just by listening to the echoes of earthquakes.”

The core-mantle boundary, located approximately 2,000 miles below Earth’s surface, is coated with an ultra-low velocity zone (ULVZ) that ranges from a few kilometers to tens of kilometers thick. This coating was discovered through a seismic network that collected data over three years during four trips to Antarctica.

The team, which included students and researchers from various countries, used 15 network stations buried in Antarctica that recorded seismic waves created by earthquakes from around the world to create an image of the Earth’s interior. The technique is similar to a medical scan of the body. The team was able to probe a large portion of the southern hemisphere in high resolution for the first time using this method.


The ULVZs are thought to be former oceanic seafloors that sank to the core-mantle boundary. Oceanic material is carried into the interior of the planet where two tectonic plates meet and one dives beneath the other, known as subduction zones. 

Accumulations of subducted oceanic material collect along the core-mantle boundary and are pushed by the slowly flowing rock in the mantle over geologic time. The distribution and variability of such material explains the range of observed ULVZ properties.

The ULVZs are comparable to mountains along the core-mantle boundary, with heights ranging from less than about 3 miles (5 km) to more than 25 miles (40 km), consistent with the comparison to peaks five times taller than Mt. Everest quoted above.

The discovery of the ULVZs and their potential implications for Earth’s heat and magnetic fields provides new insights into the planet’s inner workings, and underscores the importance of continued research in this field.

 

At Pythias' Oasis We Observe Underground Fresh Water Gushing Into The Ocean

peninsuladailynews  |  An underwater spring is gushing water into the ocean at unprecedented levels, giving researchers additional insights into plate tectonics but not — despite some news reports — any indications of an impending earthquake.

A seep of warm water about 50 miles off the coast of Newport, Ore., is spouting chemically distinct water into the ocean at rates not seen anywhere else in the world, but that doesn’t mean an earthquake is imminent, according to Evan Solomon, University of Washington oceanography professor.

“We’re not alarmed by the discovery,” said Solomon in a later interview. “The interesting thing about this site is the seep that we discovered has the highest flow rates of any we’ve seen.”

Solomon recently co-authored a study on the vent, and he said some news organizations have sensationalized the report’s findings, portraying the discovery as evidence the Cascadia Subduction Zone — a 600-mile fault line off the coast of the Pacific Northwest running from northern California up to British Columbia — was ready to blow.

That’s not the case, according to Solomon, who said he’s been answering emails from concerned citizens worried about an imminent earthquake.

But the vent, which has been dubbed “Pythia’s Oasis,” does give researchers more information on plate tectonics and what’s known as “locking,” he said.

Tectonic plates are massive pieces of the earth’s surface that rub up against each other, albeit incredibly slowly.

Locking is the region of the plate boundaries that stick, Solomon said, and that causes a build-up of stress that eventually erupts in an earthquake.

This particular vent was discovered about seven years ago, but Solomon said researchers believe it’s been active for at least 1,500 years. 

“Don’t freak out,” Solomon said. “The big thing is the seep site is exciting because the water flow rates are so high. It’s shedding a lot of light on the processes that lead to locking behavior in the subduction zone.”

While other known seeps release water at rates that amount to several centimeters per year, Pythia’s Oasis is spouting water at several kilometers a year, or roughly half a liter per second, which Solomon called “incredible.”
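The excerpt quotes both a flow speed ("several kilometers a year") and a volumetric rate ("roughly half a liter per second"). A quick sketch (taking "several" as 3 km/yr, our assumption purely for illustration) shows the vent cross-section implied by reading the two figures together:

```python
# Reconcile a flow speed with a volumetric rate via Q = v * A.
SECONDS_PER_YEAR = 3.156e7

speed = 3_000 / SECONDS_PER_YEAR     # 3 km/yr expressed in m/s (~9.5e-5 m/s)
volumetric_rate = 0.5e-3             # half a liter per second, in m^3/s

implied_area = volumetric_rate / speed
print(f"implied vent cross-section: ~{implied_area:.0f} m^2")   # ~5 m^2
```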

The seafloor at the oasis is about one kilometer deep, Solomon said, and it’s coming from the plate boundary which is estimated to be another four kilometers down.

What’s also unique about the seep is that what is flowing out is mostly water rather than mostly gas. The water, which is about 16 degrees Fahrenheit warmer than the surrounding water, is also chemically distinct.

Water flowing out of the oasis is partly freshwater, Solomon said, but not the kind that’s found on the surface. This freshwater is created when minerals in the sediment are chemically dehydrated, releasing the water bound up in their structure.

 

Tuesday, December 27, 2022

The Whole Idea Started From Maxwell's Equations

wikipedia |  Salvatore Cezar Pais is an American aerospace engineer and inventor, currently working for the United States Space Force. He formerly worked at the Naval Air Station Patuxent River. His patent applications on behalf of his employers have attracted international attention for their potential military and energy-producing applications, but also doubt about their feasibility, and speculation that they may be misinformation intended to mislead the United States' adversaries or a scam.[1]

Salvatore Pais received his advanced education at Case Western Reserve University in Ohio, graduating with an MS for a thesis titled "Design of an experiment for observation of thermocapillary convection phenomena in a simulated floating zone under microgravity conditions" in 1993.[2]

Pais received his PhD in mechanical and aerospace engineering from Case Western in 1999 on the subject of "Bubble generation under reduced gravity conditions for both co-flow and cross-flow configurations" for which he endured a number of parabolic flights in order to produce a low-gravity environment.[3] His doctoral advisers were Yasuhiro Kamotani and Simon Ostrach who had carried out spacelab experiments in low-gravity aboard the space shuttle STS-50 in 1992.[4] Pais's research was sponsored by NASA.[5]

Pais worked as a scientist, aerospace engineer, and inventor at the United States Navy's Naval Air Station Patuxent River, where the patent applications he filed on behalf of his employers attracted international attention for their futuristic-sounding technology.[1]

Pais left the Naval Air Warfare Center Aircraft Division (NAWCAD) in June 2019 and moved to the US Navy's Strategic Systems Programs organization. He transferred to the U.S. Air Force in 2021.[8]

His patent applications include:

  • A "piezoelectricity-induced room temperature superconductor" with the function of enabling "the transmission of electrical power with no losses."(2017).[9][10] The Institution of Engineering and Technology commented that no evidence was presented to show that the device worked and that the highest temperature superconductors so far created worked at around -70 °C.[11]
  • A "plasma compression fusion device" (2018),[7][12][13] described by Popular Mechanics as a "compact nuclear fusion reactor" that "seemingly stretch[es] the limits of science."[14]
  • An "electromagnetic field generator and method to generate an electromagnetic field" (2015), the principal stated application of which is to deflect asteroids that may hit the Earth. The patent is assigned to the US Secretary of the Navy.[15]
  • A "craft using an inertial mass reduction device" (2016), one embodiment of which could be a high speed "hybrid aerospace/undersea craft" able to "engineer the fabric of our reality at the most fundamental level",[6] the patent application for which was supported by the Naval Aviation Enterprise's chief technical officer on the grounds that the Chinese military were already developing similar technology.[1]
  • A "high frequency gravitational wave generator" that may be used "for advanced propulsion, asteroid disruption and/or deflection, and communications through solid objects."(2017).[16]

Testing on the feasibility of a High Energy Electromagnetic Field Generator (HEEMFG) occurred from October 2016 to September 2019, at a total cost of $508,000 over three years. The vast majority of expenditure was on salaries. The "Pais Effect" could not be proven and no further research was conducted.[8] Brett Tingley wrote for The Drive that "Despite every physicist we have spoken to over the better part of two years asserting that the 'Pais Effect' has no scientific basis in reality and the patents related to it were filled with pseudo-scientific jargon, NAWCAD confirmed they were interested enough in the patents to spend more than a half-million dollars over three years developing experiments and equipment to test Pais' theories".[8] Pais remained defiant regarding the veracity of his theories; in an email to The Drive he wrote that his work "culminates in the enablement of the Pais Effect...as far as the doubting SMEs [Subject Matter Experts] are concerned, my work shall be proven correct one fine day...".[8]

Even In Physics - Innovation Follows The Factory Floor

wikipedia  |  The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum.

The vacuum has, implicitly, all of the properties that a particle may have: spin,[18] or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is E₀ = ħω/2.

Summing over all possible oscillators at all points in space gives an infinite quantity. Since only differences in energy are physically measurable (with the notable exception of gravitation, which remains beyond the scope of quantum field theory), this infinity may be considered a feature of the mathematics rather than of the physics. This argument is the underpinning of the theory of renormalization. Dealing with infinite quantities in this way was a cause of widespread unease among quantum field theorists before the development in the 1970s of the renormalization group, a mathematical formalism for scale transformations that provides a natural basis for the process.
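For scale, the standard ideal-plate result (a textbook formula, stated here for reference rather than derived in the excerpt) gives the attractive Casimir pressure between parallel conducting plates at separation d as P = π²ħc/(240·d⁴). A quick numerical sketch:

```python
# Casimir pressure between ideal parallel plates vs. separation.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Magnitude of the Casimir pressure (Pa) for plate separation d (m)."""
    return math.pi**2 * hbar * c / (240 * d**4)

for d in (1e-6, 1e-7, 1e-8):   # 1 um, 100 nm, 10 nm
    print(f"d = {d:.0e} m -> P = {casimir_pressure(d):.3g} Pa")
# ~1.3e-3 Pa at 1 um, rising as 1/d^4 to ~1.3e5 Pa at 10 nm
```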

When the scope of the physics is widened to include gravity, the interpretation of this formally infinite quantity remains problematic. There is currently no compelling explanation as to why it should not result in a cosmological constant that is many orders of magnitude larger than observed.[19] However, since we do not yet have any fully coherent quantum theory of gravity, there is likewise no compelling reason as to why it should instead actually result in the value of the cosmological constant that we observe.[20]

The Casimir effect for fermions can be understood as the spectral asymmetry of the fermion operator (−1)F, where it is known as the Witten index.

Relativistic van der Waals force

Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha approaching infinity limit", and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates."[16] Casimir and Polder's original paper used this method to derive the Casimir–Polder force. In 1978, Schwinger, DeRadd, and Milton published a similar derivation for the Casimir effect between two parallel plates.[21] More recently, Nikolic proved from first principles of quantum electrodynamics that Casimir force does not originate from vacuum energy of electromagnetic field,[22] and explained in simple terms why the fundamental microscopic origin of Casimir force lies in van der Waals forces.[23]

Sunday, December 25, 2022

Higher-Dimensional Incompetence Resulted In 3-Dimensional Imprisonment

Heaviside's vector calculus, also known as vector analysis, was developed in the late 19th century as a way to simplify and unify the mathematical treatment of physical phenomena involving vectors, such as those described by James Clerk Maxwell's equations of electromagnetism. At the time, Maxwell's equations were typically expressed using quaternions, a number system whose elements have one real and three imaginary components. Quaternion algebra, developed by William Rowan Hamilton, was a more complex mathematical system that had been used to describe physical phenomena, but it was eventually replaced by vector calculus due to the latter's relative simplicity and ease of use.

Quaternions required working in four dimensions, which made them more difficult to use and interpret, especially for those who were not familiar with the notation.

In contrast, vector calculus provided a more intuitive and familiar way to represent and manipulate vectors, working in a three-dimensional coordinate system with concepts such as magnitude and direction. As a result, vector calculus quickly gained widespread adoption and eventually replaced quaternions as the preferred method for expressing and solving problems involving vectors in physics and engineering. Heaviside's notation, which represents vectors with arrows (or boldface) and combines them through the now-standard dot and cross products, is much easier to use and understand than quaternion notation.

While quaternions were primarily used in the study of electromagnetism, vector calculus could be used to represent any type of vector quantity, including displacement, velocity, acceleration, and force. This made it a more widely applicable tool for solving problems in many different fields of science and engineering.
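A small sketch makes the relationship concrete: for "pure" quaternions (zero real part), the Hamilton product bundles what later became the separate dot and cross products into a single operation. This is a standard identity; the code below is ours, purely for illustration.

```python
# Hamilton product of quaternions (w, x, y, z); for pure quaternions
# (0, u) * (0, v) = (-u . v, u x v), i.e. dot and cross in one package.
import numpy as np

def quat_mul(p, q):
    w1, v1 = p[0], np.array(p[1:], dtype=float)
    w2, v2 = q[0], np.array(q[1:], dtype=float)
    w = w1 * w2 - v1 @ v2
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return (float(w), *v.tolist())

u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(quat_mul((0, *u), (0, *v)))   # (-32.0, -3.0, 6.0, -3.0)
print(-u @ v, np.cross(u, v))       # -32.0 [-3.  6. -3.]  -- same content
```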

Maxwell Had 22 Quaternions - Heaviside Mangled And Crippled These Into 4 Classical Equations...,

In this video, we're looking at how there are two sides to every Maxwell equation, and therefore two ways of understanding each of Maxwell's equations.

Maxwell's equations of electromagnetism fall under the umbrella of classical physics, [NO THEY DO NOT!!!!] and describe how electric and magnetic fields are allowed to behave within our universe (assuming the equations are correct, of course). Electric and magnetic fields show how electrically charged and magnetic objects, respectively, exert forces on each other.

Each of Maxwell's equations is a differential equation that can be written in one of two forms - the differential form, and the integral form. In this video, we look at two of these equations, and how each of them has two variations. We begin by studying the first Maxwell equation, which says (in the differential form) that the divergence of any magnetic field is always equal to zero.

The physical interpretation of the above statement is that if we consider any closed volume of space, the net magnetic field passing either in or out of the region must always be zero. We can never have a scenario where more magnetic field enters a closed region of space than leaves it, or vice versa. The divergence of the magnetic field simply measures how much field is entering or leaving the volume overall, and this must be equal to zero.

Conversely, this same equation can be written in integral form (i.e. from a slightly different perspective). The integral equation says that the closed-surface integral of B.dS is equal to zero. B is once again the magnetic field, and dS is a small element of the surface surrounding the volume discussed above. This method breaks up the outer surface covering the volume into very small pieces, counts the amount of magnetic field passing through each surface element, and then adds up the contributions from all the elements making up the surface. This addition of contributions is given by the surface integral over the closed surface. In other words, the integral form of this Maxwell equation states the same thing as the differential form but looks at it from a slightly different perspective. Note: the integral must be a closed integral, i.e. there should be no holes or breaks in the surface.
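For reference, the two forms just described, together with the divergence theorem that converts one into the other (standard statements, added here rather than transcribed from the video):

```latex
\nabla \cdot \mathbf{B} = 0
\qquad\Longleftrightarrow\qquad
\oint_{S} \mathbf{B} \cdot d\mathbf{S}
  = \int_{V} \left( \nabla \cdot \mathbf{B} \right) dV = 0
```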

We also see a similar sort of thing with the second Maxwell equation, which looks at the behavior of electric fields. The differential form states that the divergence of the electric field is equal to the charge density divided by epsilon nought, the permittivity of free space. This therefore says that for any closed volume, the net amount of field entering or leaving the volume is directly related to the density of charge enclosed within the volume. Therefore if the net charge in the volume is zero, then the net field entering or leaving it is also zero. If the net charge is positive, the divergence is greater than zero, and if the net charge is negative, the divergence is less than zero.

The integral equation states that the sum of the electric field contributions to each of the small elements making up the area surrounding the volume is equal to the total charge enclosed within the surface, divided by epsilon nought. So once again this is looking at the same scenario from a slightly different perspective.
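Written out in the same way (again standard statements, supplied for reference):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\qquad\Longleftrightarrow\qquad
\oint_{S} \mathbf{E} \cdot d\mathbf{S} = \frac{Q_{\text{enc}}}{\varepsilon_0}
```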

Each Maxwell equation has these two ways of writing it, and one can easily convert from the differential form to the integral form if one knows vector calculus (the divergence theorem is what links the two). It is generally simple to move between these forms, and we can use whichever one is mathematically most convenient at any given time.

Saturday, December 24, 2022

Why Is Electrodynamics Constrained By Heaviside's Condensed Version Of Maxwell's Equations?

wikipedia |  Oliver Heaviside FRS[1] (/ˈhɛvisaɪd/; 18 May 1850 – 3 February 1925) was an English self-taught mathematician and physicist who invented a new technique for solving differential equations (equivalent to the Laplace transform), independently developed vector calculus, and rewrote Maxwell's equations in the form commonly used today. He significantly shaped the way Maxwell's equations are understood and applied in the decades following Maxwell's death. His formulation of the telegrapher's equations became commercially important during his own lifetime, after their significance went unremarked for a long while, as few others were versed at the time in his novel methodology.[2] Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science.[2]

Heaviside's uncle by marriage was Sir Charles Wheatstone (1802–1875), an internationally celebrated expert in telegraphy and electromagnetism, and the original co-inventor of the first commercially successful telegraph in the mid-1830s. Wheatstone took a strong interest in his nephew's education[5] and in 1867 sent him north to work with his older brother Arthur Wheatstone, who was managing one of Charles' telegraph companies in Newcastle-upon-Tyne.[4]: 53 

Two years later he took a job as a telegraph operator with the Danish Great Northern Telegraph Company laying a cable from Newcastle to Denmark using British contractors. He soon became an electrician. Heaviside continued to study while working, and by the age of 22 he published an article in the prestigious Philosophical Magazine on 'The Best Arrangement of Wheatstone's Bridge for measuring a Given Resistance with a Given Galvanometer and Battery'[6] which received positive comments from physicists who had unsuccessfully tried to solve this algebraic problem, including Sir William Thomson, to whom he gave a copy of the paper, and James Clerk Maxwell. When he published an article on the duplex method of using a telegraph cable,[7] he poked fun at R. S. Culley, the engineer in chief of the Post Office telegraph system, who had been dismissing duplex as impractical. Later in 1873 his application to join the Society of Telegraph Engineers was turned down with the comment that "they didn't want telegraph clerks". This riled Heaviside, who asked Thomson to sponsor him, and along with support of the society's president he was admitted "despite the P.O. snobs".[4]: 60 

In 1873 Heaviside had encountered Maxwell's newly published, and later famous, two-volume Treatise on Electricity and Magnetism. In his old age Heaviside recalled:

I remember my first look at the great treatise of Maxwell's when I was a young man... I saw that it was great, greater and greatest, with prodigious possibilities in its power... I was determined to master the book and set to work. I was very ignorant. I had no knowledge of mathematical analysis (having learned only school algebra and trigonometry which I had largely forgotten) and thus my work was laid out for me. It took me several years before I could understand as much as I possibly could. Then I set Maxwell aside and followed my own course. And I progressed much more quickly... It will be understood that I preach the gospel according to my interpretation of Maxwell.[8]

Undertaking research from home, he helped develop transmission line theory (also known as the "telegrapher's equations"). Heaviside showed mathematically that uniformly distributed inductance in a telegraph line would diminish both attenuation and distortion, and that, if the inductance were great enough and the insulation resistance not too high, the circuit would be distortionless in that currents of all frequencies would have equal speeds of propagation.[9] Heaviside's equations helped further the implementation of the telegraph.
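For reference, the telegrapher's equations and Heaviside's distortionless condition (standard forms, supplied here; R, L, G and C are the line's series resistance, series inductance, shunt conductance and shunt capacitance per unit length):

```latex
\frac{\partial V}{\partial x} = -\,R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -\,G\,V - C\,\frac{\partial V}{\partial t},
\qquad
\text{distortionless when } \frac{R}{L} = \frac{G}{C}.
```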

 
