arxiv | Broadly speaking, twistor theory is a framework for encoding physical information on space-time as geometric data on a complex projective space, known as a twistor space. The relationship between space-time and twistor space is non-local and has some surprising consequences, which we explore in these lectures. Starting with a review of the twistor correspondence for four-dimensional Minkowski space, we describe some of twistor theory’s historic successes (e.g., describing free fields and integrable systems) as well as some of its historic shortcomings. We then discuss how in recent years many of these problems have been overcome, with a view to understanding how twistor theory is applied to the study of perturbative QFT today.
These lectures were given in 2017 at the XIII Modave Summer School in mathematical physics.
quantamagazine | Assembly
theory started when Cronin asked why, given the astronomical number of
ways to combine different atoms, nature makes some molecules and not
others. It’s one thing to say that an object is possible according to
the laws of physics; it’s another to say there’s an actual pathway for
making it from its component parts. “Assembly theory was developed to
capture my intuition that complex molecules can’t just emerge into
existence because the combinatorial space is too vast,” Cronin said.
“We live in a recursively structured universe,” Walker said. “Most
structure has to be built on memory of the past. The information is
built up over time.”
Assembly theory makes the seemingly uncontroversial assumption that
complex objects arise from combining many simpler objects. The theory
says it’s possible to objectively measure an object’s complexity by
considering how it got made. That’s done by calculating the minimum
number of steps needed to make the object from its ingredients, which is
quantified as the assembly index (AI).
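To make the definition concrete, here is a toy sketch (in Python, illustrative only) that computes the assembly index of a short character string: single characters are the free building blocks, each concatenation of two already-available fragments counts as one step, and fragments built earlier can be reused. The real theory works on molecular bond graphs with far more sophisticated algorithms; this brute-force search is only feasible for tiny examples.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of pairwise joins needed to build `target` from its
    individual characters, reusing any fragment already made.
    Brute-force illustration of the definition; practical only for tiny strings."""
    best = [len(target) - 1]            # worst case: add one character per step

    def search(made: frozenset, steps: int):
        if steps >= best[0]:
            return                      # cannot beat the best assembly found so far
        if target in made:
            best[0] = steps
            return
        for a, b in product(made, repeat=2):
            piece = a + b
            # only keep fragments that actually occur in the target
            if piece in target and piece not in made:
                search(made | {piece}, steps + 1)

    search(frozenset(target), 0)        # start from the unique characters
    return best[0]

print(assembly_index("banana"))   # 4: na -> nana -> ba -> banana, reusing "na"
```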
In addition, for a complex object to be scientifically interesting,
there has to be a lot of it. Very complex things can arise from random
assembly processes — for example, you can make proteinlike molecules by
linking any old amino acids into chains. In general, though, these
random molecules won’t do anything of interest, such as behaving like an
enzyme. And the chances of getting two identical molecules in this way
are vanishingly small.
Functional enzymes, however, are made reliably again and again in
biology, because they are assembled not at random but from genetic
instructions that are inherited across generations. So while finding a
single, highly complex molecule doesn’t tell you anything about how it
was made, finding many identical complex molecules is improbable unless
some orchestrated process — perhaps life — is at work.
Assembly theory predicts that objects like us can’t arise in
isolation — that some complex objects can only occur in conjunction with
others. This makes intuitive sense; the universe could never produce
just a single human. To make any humans at all, it had to make a whole
bunch of us.
In accounting for specific, actual entities like humans in general
(and you and me in particular), traditional physics is only of so much
use. It provides the laws of nature, and assumes that specific outcomes
are the result of specific initial conditions. In this view, we must
have been somehow encoded in the first moments of the universe. But it
surely requires extremely fine-tuned initial conditions to make Homo sapiens (let alone you) inevitable.
Assembly theory, its advocates say, escapes from that kind of
overdetermined picture. Here, the initial conditions don’t matter much.
Rather, the information needed to make specific objects like us wasn’t
there at the outset but accumulates in the unfolding process of cosmic
evolution — it frees us from having to place all that responsibility on
an impossibly fine-tuned Big Bang. The information “is in the path,”
Walker said, “not the initial conditions.”
Cronin and Walker aren’t the only scientists attempting to explain
how the keys to observed reality might not lie in universal laws but in
the ways that some objects are assembled or transformed into others. The
theoretical physicist Chiara Marletto of the University of Oxford is developing a similar idea with the physicist David Deutsch. Their approach, which they call constructor theory
and which Marletto considers “close in spirit” to assembly theory,
considers which types of transformations are and are not possible.
“Constructor theory talks about the universe of tasks able to make
certain transformations,” Cronin said. “It can be thought of as bounding
what can happen within the laws of physics.” Assembly theory, he says,
adds time and history into that equation.
To explain why some objects get made but others don’t, assembly
theory identifies a nested hierarchy of four distinct “universes.”
In the Assembly Universe, all permutations of the basic building
blocks are allowed. In the Assembly Possible, the laws of physics
constrain these combinations, so only some objects are feasible. The
Assembly Contingent then prunes the vast array of physically allowed
objects by picking out those that can actually be assembled along
possible paths. The fourth universe is the Assembly Observed, which
includes just those assembly processes that have generated the specific
objects we actually see.
Assembly theory explores the structure of all these universes, using ideas taken from the mathematical study of graphs,
or networks of interlinked nodes. It is “an objects-first theory,”
Walker said, where “the things [in the theory] are the objects that are
actually made, not their components.”
To understand how assembly processes operate within these notional
universes, consider the problem of Darwinian evolution. Conventionally,
evolution is something that “just happened” once replicating molecules
arose by chance — a view that risks being a tautology, because it seems
to say that evolution started once evolvable molecules existed. Instead,
advocates of both assembly and constructor theory are seeking “a
quantitative understanding of evolution rooted in physics,” Marletto
said.
According to assembly theory,
before Darwinian evolution can proceed, something has to select for
multiple copies of high-AI objects from the Assembly Possible. Chemistry
alone, Cronin said, might be capable of that — by narrowing down
relatively complex molecules to a small subset. Ordinary chemical
reactions already “select” certain products out of all the possible
permutations because they have faster reaction rates.
The specific conditions in the prebiotic environment, such as
temperature or catalytic mineral surfaces, could thus have begun
winnowing the pool of life’s molecular precursors from among those in
the Assembly Possible. According to assembly theory, these prebiotic
preferences will be “remembered” in today’s biological molecules: They
encode their own history. Once Darwinian selection took over, it favored
those objects that were better able to replicate themselves. In the
process, this encoding of history became stronger still. That’s
precisely why scientists can use the molecular structures of proteins
and DNA to make deductions about the evolutionary relationships of
organisms.
Thus, assembly theory “provides a framework to unify descriptions of
selection across physics and biology,” Cronin, Walker and colleagues wrote. “The ‘more assembled’ an object is, the more selection is required for it to come into existence.”
“We’re trying to make a theory that explains how life arises from
chemistry,” Cronin said, “and doing it in a rigorous, empirically
verifiable way.”
stephenwolfram | Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language
as well—to give it what we might think of as “computational
superpowers”. It’s still very early days for all of this, but it’s
already very impressive—and one can begin to see how amazingly powerful
(and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be.
Back in January, I made the point that, as an LLM neural net, ChatGPT—for all its remarkable prowess in textually generating material “like” what it’s read from the web, etc.—can’t itself be expected to do actual nontrivial computations,
or to systematically produce correct (rather than just “looks roughly
right”) data, etc. But when it’s connected to the Wolfram plugin it can
do these things. So here’s my (very simple) first example from January,
but now done by ChatGPT with “Wolfram superpowers” installed:
It’s a correct result (which in January it wasn’t)—found by actual computation. And here’s a bonus: immediate visualization:
How did this work? Under the hood, ChatGPT is formulating a query for Wolfram|Alpha—then sending it to Wolfram|Alpha for computation,
and then “deciding what to say” based on reading the results it got
back. You can see this back and forth by clicking the “Used Wolfram” box
(and by looking at this you can check that ChatGPT didn’t “make
anything up”):
There are lots of nontrivial things going on here, on both the
ChatGPT and Wolfram|Alpha sides. But the upshot is a good, correct
result, knitted into a nice, flowing piece of text.
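Schematically, the loop described here looks something like the following Python sketch. The function names and prompts are placeholders of my own, not the actual plugin protocol; the point is the shape of the round trip: formulate a query, compute externally, then phrase the answer from the returned result.

```python
from typing import Callable

def answer_with_computation(
    question: str,
    ask_llm: Callable[[str], str],   # stand-in for the chat model
    compute: Callable[[str], str],   # stand-in for the Wolfram|Alpha call
) -> str:
    """Schematic of the back-and-forth described above (not the real plugin API)."""
    # 1. The model formulates a precise query from the user's question.
    query = ask_llm(f"Rewrite as a Wolfram|Alpha query: {question}")
    # 2. The query goes out for actual computation.
    result = compute(query)
    # 3. The model reads the result back and phrases the final answer,
    #    rather than guessing from its training data alone.
    return ask_llm(f"Question: {question}\nComputed result: {result}\n"
                   "Answer using only the computed result.")
```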
Let’s try another example, also from what I wrote in January:
In January, I noted that ChatGPT ended up just “making up” plausible (but wrong) data when given this prompt:
But now it calls the Wolfram plugin and gets a good, authoritative answer. And, as a bonus, we can also make a visualization:
Another example from back in January that now comes out correctly is:
If you actually try these examples, don’t be surprised if they work
differently (sometimes better, sometimes worse) from what I’m showing
here. Since ChatGPT uses randomness
in generating its responses, different things can happen even when you
ask it the exact same question (even in a fresh session). It feels “very
human”, but it is different from the solid
“right-answer-and-it-doesn’t-change-if-you-ask-it-again” experience that
one gets in Wolfram|Alpha and Wolfram Language.
Here’s an example where we saw ChatGPT (rather impressively) “having a
conversation” with the Wolfram plugin, after at first finding out that
it got the “wrong Mercury”:
One particularly significant thing here is that ChatGPT isn’t just
using us to do a “dead-end” operation like show the content of a
webpage. Rather, we’re acting much more like a true “brain implant” for
ChatGPT—where it asks us things whenever it needs to, and we give
responses that it can weave back into whatever it’s doing. It’s rather
impressive to see in action. And—although there’s definitely much more
polishing to be done—what’s already there goes a long way towards (among
other things) giving ChatGPT the ability to deliver accurate, curated
knowledge and data—as well as correct, nontrivial computations.
But there’s more too. We already saw examples where we were able to
provide custom-created visualizations to ChatGPT. And with our
computation capabilities we’re routinely able to make “truly original”
content—computations that have simply never been done before. And
there’s something else: while “pure ChatGPT” is restricted to things it “learned during its training”, by calling us it can get up-to-the-moment data.
wired | The stunning capabilities of ChatGPT, the chatbot from startup OpenAI, have triggered a surge of new interest and investment in artificial intelligence.
But late last week, OpenAI’s CEO warned that the research strategy that
birthed the bot is played out. It's unclear exactly where future
advances will come from.
In recent years, OpenAI
has delivered a series of impressive advances in AI that works with
language by taking existing machine-learning algorithms
and scaling them up to previously unimagined size. GPT-4, the latest of
those projects, was likely trained using trillions of words of text and
many thousands of powerful computer chips. The process cost over $100
million.
But the company’s CEO, Sam Altman, says
further progress will not come from making models bigger. “I think we're
at the end of the era where it's going to be these, like, giant, giant
models,” he told an audience at an event held at MIT late last week.
“We'll make them better in other ways.”
Altman’s
declaration suggests an unexpected twist in the race to develop and
deploy new AI algorithms. Since OpenAI launched ChatGPT in November,
Microsoft has used the underlying technology to add a chatbot to its Bing search engine, and Google has launched a rival chatbot called Bard. Many people have rushed to experiment with using the new breed of chatbot to help with work or personal tasks.
Meanwhile, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI,
are throwing enormous resources into building ever larger algorithms in
an effort to catch up with OpenAI’s technology. The initial version of
ChatGPT was based on a slightly upgraded version of GPT-3, but users can
now also access a version powered by the more capable GPT-4.
Altman’s
statement suggests that GPT-4 could be the last major advance to emerge
from OpenAI’s strategy of making the models bigger and feeding them
more data. He did not say what kind of research strategies or techniques
might take its place. In the paper describing GPT-4,
OpenAI says its estimates suggest diminishing returns on scaling up
model size. Altman said there are also physical limits to how many data
centers the company can build and how quickly it can build them.
Nick
Frosst, a cofounder at Cohere who previously worked on AI at Google,
says Altman’s feeling that going bigger will not work indefinitely rings
true. He, too, believes that progress on transformers, the type of
machine learning model at the heart of GPT-4 and its rivals, lies beyond
scaling. “There are lots of ways of making transformers way, way better
and more useful, and lots of them don’t involve adding parameters to
the model,” he says. Frosst says that new AI model designs, or
architectures, and further tuning based on human feedback are promising
directions that many researchers are already exploring.
theguardian | “And so for me,” he concluded, “a computer has
always been a bicycle of the mind – something that takes us far beyond
our inherent abilities. And I think we’re just at the early stages of
this tool – very early stages – and we’ve come only a very short
distance, and it’s still in its formation, but already we’ve seen
enormous changes, [but] that’s nothing to what’s coming in the next 100
years.”
Well, that was 1990 and here we are,
three decades later, with a mighty powerful bicycle. Quite how powerful
it is becomes clear when one inspects how the technology (not just
ChatGPT) tackles particular tasks that humans find difficult.
Writing computer programs, for instance.
Last
week, Steve Yegge, a renowned software engineer who – like all
uber-geeks – uses the ultra-programmable Emacs text editor, conducted an
instructive experiment. He typed
the following prompt into ChatGPT: “Write an interactive Emacs Lisp
function that pops to a new buffer, prints out the first paragraph of A Tale of Two Cities, and changes all words with ‘i’ in them red. Just print the code without explanation.”
ChatGPT
did its stuff and spat out the code. Yegge copied and pasted it into
his Emacs session and published a screenshot of the result. “In one
shot,” he writes, “ChatGPT has produced completely working code from a
sloppy English description! With voice input wired up, I could have
written this program by asking my computer to do it. And not only does
it work correctly, the code that it wrote is actually pretty decent
Emacs Lisp code. It’s not complicated, sure. But it’s good code.”
Ponder the significance of this for a moment, as tech investors such as Paul Kedrosky are already doing. He likens
tools such as ChatGPT to “a missile aimed, however unintentionally,
directly at software production itself. Sure, chat AIs can perform
swimmingly at producing undergraduate essays, or spinning up marketing
materials and blog posts (like we need more of either), but such
technologies are terrific to the point of dark magic at producing,
debugging, and accelerating software production quickly and almost
costlessly.”
Since, ultimately, our networked
world runs on software, suddenly having tools that can write it – and
that could be available to anyone, not just geeks – marks an important
moment. Programmers have always seemed like magicians: they can make an
inanimate object do something useful. I once wrote that they must
sometimes feel like Napoleon – who was able to order legions, at a
stroke, to do his bidding. After all, computers – like troops – obey
orders. But to become masters of their virtual universe, programmers had
to possess arcane knowledge, and learn specialist languages to converse
with their electronic servants. For most people, that was a pretty high
threshold to cross. ChatGPT and its ilk have just lowered it.
quantamagazine | Recent
investigations like the one Dyer worked on have revealed that LLMs can
produce hundreds of “emergent” abilities — tasks that big models can
complete that smaller models can’t, many of which seem to have little to
do with analyzing text. They range from multiplication to generating
executable computer code to, apparently, decoding movies based on
emojis. New analyses suggest that for some tasks and some models,
there’s a threshold of complexity beyond which the functionality of the
model skyrockets. (They also suggest a dark flip side: As they increase
in complexity, some models reveal new biases and inaccuracies in their
responses.)
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors, including several identified in Dyer’s project. That list continues to grow.
Now, researchers are racing not only to
identify additional emergent abilities but also to figure out why and
how they occur at all — in essence, to try to predict unpredictability.
Understanding emergence could reveal answers to deep questions around AI
and machine learning in general, like whether complex models are truly
doing something new or just getting really good at statistics. It could
also help researchers harness potential benefits and curtail emergent
risks.
“We don’t know how to tell in which sort of
application is the capability of harm going to arise, either smoothly
or unpredictably,” said Deep Ganguli, a computer scientist at the AI startup Anthropic.
The Emergence of Emergence
Biologists, physicists, ecologists and
other scientists use the term “emergent” to describe self-organizing,
collective behaviors that appear when a large collection of things acts
as one. Combinations of lifeless atoms give rise to living cells; water
molecules create waves; murmurations of starlings swoop through the sky
in changing but identifiable patterns; cells make muscles move and
hearts beat. Critically, emergent abilities show up in systems that
involve lots of individual parts. But researchers have only recently
been able to document these abilities in LLMs as those models have grown
to enormous sizes.
Language
models have been around for decades. Until about five years ago, the
most powerful were based on what’s called a recurrent neural network.
These essentially take a string of text and predict what the next word
will be. What makes a model “recurrent” is that it feeds its own
output back into itself: the state produced at each step becomes input for
the next step, so earlier context shapes later predictions.
In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer.
While a recurrent network analyzes a sentence word by word, the
transformer processes all the words at the same time. This means
transformers can process big bodies of text in parallel.
Transformers enabled a rapid scaling up of
the complexity of language models by increasing the number of parameters
in the model, as well as other factors. The parameters can be thought
of as connections between words, and models improve by adjusting these
connections as they churn through text during training. The more
parameters in a model, the more accurately it can make connections, and
the closer it comes to passably mimicking human language. As expected, a
2020 analysis by OpenAI researchers found that models improve in accuracy and ability as they scale up.
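That smooth improvement with scale is usually summarized as a power law: test loss falls steadily as the parameter count grows. Here is a minimal sketch of such a curve with illustrative constants (round numbers chosen for the example, not the fitted values from the 2020 analysis):

```python
def power_law_loss(n_params: float, n_c: float = 1e14, alpha: float = 0.08) -> float:
    """Illustrative scaling curve: loss falls as a power of parameter count.
    n_c and alpha are placeholder constants, not published fit values."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} parameters -> loss ~ {power_law_loss(n):.2f}")
```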
But the debut of LLMs also brought
something truly unexpected. Lots of somethings. With the advent of
models like GPT-3, which has 175 billion parameters — or Google’s PaLM,
which can be scaled up to 540 billion — users began describing more and
more emergent behaviors. One DeepMind engineer even reported
being able to convince ChatGPT that it was a Linux terminal and getting
it to run some simple mathematical code to compute the first 10 prime
numbers. Remarkably, it could finish the task faster than the same code
running on a real Linux machine.
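The “simple mathematical code” in that anecdote would have amounted to something like this (a hypothetical reconstruction, not the engineer’s actual snippet):

```python
def first_primes(count: int) -> list[int]:
    """Return the first `count` prime numbers by trial division."""
    primes: list[int] = []
    candidate = 2
    while len(primes) < count:
        # prime if no smaller prime divides it
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

print(first_primes(10))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```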
As with the movie emoji task, researchers
had no reason to think that a language model built to predict text would
convincingly imitate a computer terminal. Many of these emergent
behaviors illustrate “zero-shot” or “few-shot” learning, which describes
an LLM’s ability to solve problems it has never — or rarely — seen
before. This has been a long-time goal in artificial intelligence
research, Ganguli said. Showing that GPT-3 could solve problems without
any explicit training data in a zero-shot setting, he said, “led me to
drop what I was doing and get more involved.”
He wasn’t alone. A raft of researchers,
detecting the first hints that LLMs could reach beyond the constraints
of their training data, are striving for a better grasp of what
emergence looks like and how it happens. The first step was to
thoroughly document it.
quantamagazine | Imagine going to your local hardware store and seeing a new kind of
hammer on the shelf. You’ve heard about this hammer: It pounds faster
and more accurately than others, and in the last few years it’s rendered
many other hammers obsolete, at least for most uses. And there’s more!
With a few tweaks — an attachment here, a twist there — the tool changes
into a saw that can cut at least as fast and as accurately as any other
option out there. In fact, some experts at the frontiers of tool
development say this hammer might just herald the convergence of all
tools into a single device.
A similar story is playing out among the tools of artificial
intelligence. That versatile new hammer is a kind of artificial neural
network — a network of nodes that “learn” how to do some task by
training on existing data — called a transformer. It was originally
designed to handle language, but has recently begun impacting other AI
domains.
The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.”
In other approaches to AI, the system would first focus on local
patches of input data and then build up to the whole. In a language
model, for example, nearby words would first get grouped together. The
transformer, by contrast, runs processes so that every element in the
input data connects, or pays attention, to every other element.
Researchers refer to this as “self-attention.” This means that as soon
as it starts training, the transformer can see traces of the entire data
set.
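A bare-bones numerical sketch of that self-attention step, in which every position in the input attends to every other position in one parallel matrix operation (single head, no masking, and no learned projections, so it omits most of what a real transformer layer contains):

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Bare-bones, single-head, unmasked self-attention.
    x has shape (sequence_length, model_dim); every position attends to
    every other position at once."""
    d = x.shape[-1]
    # In a real transformer, queries, keys and values come from learned
    # projections of x; here we use x itself to keep the sketch minimal.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                   # pairwise attention scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v                              # weighted mix of all positions

tokens = np.random.randn(5, 8)        # 5 "words", 8-dimensional embeddings
print(self_attention(tokens).shape)   # (5, 8): each output mixes all inputs
```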
Before transformers came along, progress on AI language tasks largely
lagged behind developments in other areas. “In this deep learning
revolution that happened in the past 10 years or so, natural language
processing was sort of a latecomer,” said the computer scientist Anna
Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a
sense, behind computer vision. Transformers changed that.”
Transformers quickly became the front-runner for applications like
word recognition that focus on analyzing and predicting text. This led to a
wave of tools, like OpenAI’s Generative Pre-trained Transformer 3
(GPT-3), which trains on hundreds of billions of words and generates
consistent new text to an unsettling degree.
The success of transformers prompted the AI crowd to ask what else
they could do. The answer is unfolding now, as researchers report that
transformers are proving surprisingly versatile. In some vision tasks,
like image classification, neural nets that use transformers have become
faster and more accurate than those that don’t. Emerging work in other
AI areas — like processing multiple kinds of input at once, or planning
tasks — suggests transformers can handle even more.
“Transformers seem to really be quite transformational across many
problems in machine learning, including computer vision,” said Vladimir
Haltakov, who works on computer vision related to self-driving cars at
BMW in Munich.
Just 10 years ago, disparate subfields of AI had little to say to
each other. But the arrival of transformers suggests the possibility of a
convergence. “I think the transformer is so popular because it implies
the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.
axial | Schrödinger won the Nobel Prize in Physics in 1933 and was exiled
from his native home Austria after the nation was annexed by Nazi
Germany. He moved to Ireland after he was invited to set up the Dublin
Institute for Advanced Studies, echoing Ireland’s earlier role as a storehouse of knowledge during the Dark Ages. After decades
of work, biology was becoming more formalized around the 1940s. Better
tools were emerging to perturb various organisms and samples, and the
increasing number of discoveries was building out the framework of life.
With the rediscovery of Mendel’s work on genetics, scientists - most
importantly Thomas Hunt Morgan with his work on fruit flies (Drosophila) - set up the rules of heredity: genes located on chromosomes, with each cell containing a set of chromosomes. In 1927, a seminal discovery
was made that irradiating fruit flies with X-rays can induce
mutations. But the physical medium of heredity was still unknown as Schrödinger was thinking
through his ideas on biology. At the same time, organic chemistry was
improving, and various macromolecules in the cell, such as enzymes, were
being identified along with the various types of bonds they form. For
Schrödinger, there were no tools to characterize these macromolecules
(i.e. proteins, nucleic acids), such as X-ray crystallography. Really the
only useful tool at the time was centrifugation. At the time, many
people expected proteins to be the store and transmitter of genetic
information. Luckily, Oswald Avery published an incredible paper in 1944 that pointed to DNA, rather than protein, as the likely store.
With this knowledge base, Schrödinger took a beginner’s mind
to biology. In some ways his naivety was incredibly useful. Instead of
being anchored to the widely accepted premise that proteins transmitted
genetic information (although he had a hunch some protein was
responsible), the book reasoned from first principles and identified a
few key concepts in biology that were underappreciated at the time but became very
important. Thankfully Schrödinger was curious - he enjoyed writing
poetry and reading philosophy - so he jumped into biology somewhat
fearlessly. At the beginning of the book, he sets the main question as:
“How
can the events in space and time which take place within the spatial
boundary of a living organism be accounted for by physics and
chemistry?”
Information
In the first chapter,
Schrödinger argues that because organisms behave in an orderly way, they
must follow the laws of physics. Because physics relies on statistics,
life must follow the same rules. He then argues that because biological
properties have some level of permanence, the material that stores this
information must itself be stable. This material must have the ability to
change from one stable state to another (i.e. mutations). Classical
physics is not very useful here, but Schrödinger’s expertise in
quantum mechanics helped him determine that these stable states must be held
together by covalent bonds (a quantum phenomenon) within a
macromolecule. In the early chapters, the book argues that the gene must
be a stable macromolecule.
Through discussion around the
stability of the gene, the book makes its most important breakthrough -
an analogy between a gene and an aperiodic crystal (DNA is aperiodic but
Schrödinger amazingly didn’t know that at the time): “the germ of a
solid.” Simply put, a periodic crystal can store only a small amount of
information even with an infinite number of atoms, while an aperiodic crystal
can store a near infinite amount of information in a
small number of atoms. The latter was more in line with what the
data at the time suggested a gene must be. Max Delbrück had similar ideas, along
with J.B.S. Haldane, but the book was the first to connect this idea to
heredity. But readers at the time - and maybe even now - overextended this
framework to believe that the genetic code contains all of the information
needed to build an organism. This isn’t true: development requires an
environment with some level of randomness.
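Schrödinger’s combinatorial point survives a back-of-the-envelope check: a periodic crystal repeats one unit and so encodes almost nothing however large it grows, while even a short aperiodic sequence has an astronomical number of possible configurations. A rough illustration (four unit types are chosen here to echo the four DNA bases, which Schrödinger of course did not know about):

```python
import math

n_units = 100          # length of the aperiodic sequence
n_types = 4            # distinct building blocks (cf. the four DNA bases)

configurations = n_types ** n_units
bits = n_units * math.log2(n_types)

# A periodic crystal made of one repeating unit encodes essentially nothing,
# no matter how many atoms it contains; an aperiodic sequence of only 100
# units can take roughly 1.6e60 distinct forms, i.e. about 200 bits.
print(f"{configurations:.2e} possible sequences ≈ {bits:.0f} bits")
```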
qz | Interest in panpsychism has grown in part thanks to the increased
academic focus on consciousness itself following on from Chalmers’ “hard
problem” paper. Philosophers at NYU, home to one of the leading
philosophy-of-mind departments, have made panpsychism a feature of serious study. There have been several credible academic books on the subject in recent years, and popular articles taking panpsychism seriously.
One of the most popular and credible contemporary neuroscience theories on consciousness, Giulio Tononi’s Integrated Information Theory, further lends credence to panpsychism.
Tononi argues that something will have a form of “consciousness” if the
information contained within the structure is sufficiently
“integrated,” or unified, and so the whole is more than the sum of its
parts. Because it applies to all structures—not just the human
brain—Integrated Information Theory shares the panpsychist view that physical matter has innate conscious experience.
Goff, who has written an academic book
on consciousness and is working on another that approaches the subject
from a more popular-science perspective, notes that there were credible
theories on the subject dating back to the 1920s. Thinkers including
philosopher Bertrand Russell and physicist Arthur Eddington made a
serious case for panpsychism, but the field lost momentum after World
War II, when philosophy became largely focused on analytic philosophical
questions of language and logic. Interest picked up again in the 2000s,
thanks both to recognition of the “hard problem” and to increased
adoption of the structural-realist approach in physics, explains
Chalmers. This approach views physics as describing structure, and not
the underlying nonstructural elements.
“Physical science tells us a lot less about the nature of matter than
we tend to assume,” says Goff. “Eddington”—the English scientist who
experimentally confirmed Einstein’s theory of general relativity in the
early 20th century—“argued there’s a gap in our picture of the universe.
We know what matter does but not what it is. We can put consciousness into this gap.” Fist tap Dale.
ourfiniteworld | The very thing that should be saving us–technology–has side effects that bring the whole system down.
The only way we can keep adding technology is by adding more capital
goods, more specialization, and more advanced education for selected
members of society. The problem, as we should know from research
regarding historical economies that have collapsed, is that more
complexity ultimately leads to collapse because it leads to huge wage
disparity. (See Tainter; Turchin and Nefedov.)
Ultimately, the people at the bottom of the hierarchy cannot afford the
output of the economy. Added debt at lower interest rates can only
partially offset this problem. Governments cannot collect enough taxes
from the large number of people at the bottom of the hierarchy, even
though the top 1% may flourish. The economy tends to collapse because of
the side effects of greater complexity.
Our economy is a networked system, so it should not be surprising
that there is more than one way for the system to reach its end.
I have described the problem that really brings down the economy
as “too low return on human labor,” at least for those at the bottom of
the hierarchy. The wages of the non-elite are too low to provide an
adequate standard of living. In a sense, this is a situation of too low
EROEI: too low return on human energy. Most energy researchers
have been looking at a very different kind of EROEI: a calculation based
on the investment of fossil fuel energy. The two kinds of EROEI are
related, but not very closely. Many economies have collapsed, without
ever using fossil fuel energy.
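EROEI itself is just a ratio of energy returned to energy invested; the two kinds discussed here differ in what counts as the investment. A toy comparison with made-up numbers, only to show that the two ratios track different things and can move independently:

```python
def eroei(energy_returned: float, energy_invested: float) -> float:
    """Energy returned on energy invested: a simple ratio."""
    return energy_returned / energy_invested

# "Fossil fuel EROEI": energy delivered per unit of fossil energy invested.
print(eroei(energy_returned=50.0, energy_invested=5.0))   # 10.0 (illustrative)

# "Human labor EROEI" in the sense used here: what a worker's effort brings
# back in purchasable goods and services, per unit of effort expended.
# The numbers are placeholders; the point is that this ratio can fall for
# non-elite workers even while the fossil-fuel ratio looks acceptable.
print(eroei(energy_returned=1.2, energy_invested=1.0))    # 1.2 (illustrative)
```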
While what I call “fossil fuel EROEI” was a reasonable starting place
for an analysis of our energy problems back in the 1970s, the
calculation now gets more emphasis than it truly deserves. The limit we are reaching is a different one: falling return on human labor EROEI,
at least for those who are not among the elite. Increasing wage
disparity is becoming a severe problem now; it is the reason we have
very divisive candidates running for political office, and many people
in favor of reduced globalization.
physicsworld | Consciousness appears to arise naturally as a result of a brain
maximizing its information content. So says a group of scientists in
Canada and France, which has studied how the electrical activity in
people's brains varies according to individuals' conscious states. The
researchers find that normal waking states are associated with maximum
values of what they call a brain's "entropy".
Statistical mechanics is very good at explaining the macroscopic
thermodynamic properties of physical systems in terms of the behaviour
of those systems' microscopic constituent particles. Emboldened by this
success, physicists have increasingly been trying to do a similar thing
with the brain: namely, using statistical mechanics to model networks of
neurons. Key to this has been the study of synchronization – how the
electrical activity of one set of neurons can oscillate in phase with
that of another set. Synchronization in turn implies that those sets of
neurons are physically tied to one another, just as oscillating physical
systems, such as pendulums, become synchronized when they are connected
together.
The latest work stems from the observation that consciousness, or at
least the proper functioning of brains, is associated not with high or
even low degrees of synchronicity between neurons but with middling
amounts. Jose Luis Perez Velazquez,
a biochemist at the University of Toronto, and colleagues hypothesized
that what is maximized during consciousness is not connectivity itself
but the number of different ways that a certain degree of connectivity
can be achieved.
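The quantity being maximized can be illustrated with a simple count: for a fixed set of signal pairs, how many distinct ways are there to realize a given number of synchronized pairs? That count, and an entropy proportional to its logarithm, peaks when roughly half of the possible pairs are connected. A toy version of the calculation, with a made-up number of recorded channels:

```python
from math import comb, log

n_channels = 10                                        # illustrative number of recorded signals
possible_pairs = n_channels * (n_channels - 1) // 2    # 45 possible pairings

for connected in (0, 5, 11, 22, 34, 45):
    ways = comb(possible_pairs, connected)   # configurations with this many synchronized pairs
    entropy = log(ways)                      # Boltzmann-style entropy of that count
    print(f"{connected:2d} synchronized pairs: {ways:.2e} configurations, entropy ≈ {entropy:5.1f}")
# The count (and entropy) is largest at middling connectivity, around 22 of 45 pairs.
```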
ourfiniteworld | Does it make a difference if our models of energy and the economy are
overly simple? I would argue that it depends on what we plan to use the
models for. If all we want to do is determine approximately how many
years in the future energy supplies will turn down, then a simple model
is perfectly sufficient. But if we want to determine how we might change
the current economy to make it hold up better against the forces it is
facing, we need a more complex model that explains the economy’s real
problems as we reach limits. We need a model that captures the correct shape of the curve, as well as the approximate timing. I suggest reading my recent post regarding complexity and its effects as background for this post.
The common lay interpretation of simple models is that running out
of energy supplies can be expected to be our overwhelming problem in
the future. A more complete model suggests that our problems as we
approach limits are likely to be quite different: growing wealth
disparity, inability to maintain complex infrastructure, and growing
debt problems. Energy supplies that look easy to extract will not, in fact, be available because prices will not rise high enough.
These problems can be expected to change the shape of the curve of
future energy consumption to one with a fairly fast decline, such as the
Seneca Cliff.
It is not intuitive, but complexity-related issues create a situation
in which economies need to grow, or they will collapse. See my post, The Physics of Energy and the Economy.
The popular idea that we extract 50% of a resource before the peak and 50%
after it will be found not to be true: much of the second 50% will
stay in the ground.
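The difference in curve shape can be sketched numerically: a symmetric bell-shaped profile leaves half the resource for after the peak, while a Seneca-style profile rises slowly and collapses quickly, so much of the nominal second half never gets produced. Illustrative curves only, not a forecast:

```python
import numpy as np

t = np.linspace(0, 100, 1001)

# Symmetric bell-shaped production curve: 50% extracted before the peak
# at t = 50, and 50% after it.
symmetric = np.exp(-((t - 50) / 15) ** 2)

# Seneca-style curve: the same slow rise, but production collapses after
# the peak instead of declining at the pace it rose.
seneca = np.where(t <= 50,
                  np.exp(-((t - 50) / 15) ** 2),
                  np.exp(-(t - 50) / 5))

def share_after_peak(curve: np.ndarray) -> float:
    """Fraction of the total area under the curve that lies after the peak."""
    return curve[t > 50].sum() / curve.sum()

print(f"after-peak share, symmetric curve: {share_after_peak(symmetric):.0%}")  # ~50%
print(f"after-peak share, Seneca curve:    {share_after_peak(seneca):.0%}")     # far less
```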
Some readers may be interested in a new article that I assisted in
writing, relating to the role that price plays in the quantity of oil
extracted. The article is called, “An oil production forecast for China considering economic limits.” This article has been published by the academic journal Energy, and is available as a free download for 50 days.
edge | What kinds of complex systems can evolve by accumulation of successive useful variations? Does selection by itself achieve complex systems able to adapt? Are there lawful properties characterizing such complex systems? The overall answer may be that complex systems constructed so that they're on the boundary between order and chaos are those best able to adapt by mutation and selection.
Chaos is a subset of complexity. It's an analysis of the behavior of continuous dynamical systems — like hydrodynamic systems, or the weather — or discrete systems that show recurrences of features and high sensitivity to initial conditions, such that very small changes in the initial conditions can lead a system to behave in very different ways. A good example of this is the so called butterfly effect: the idea is that a butterfly in Rio can change the weather in Chicago. An infinitesimal change in initial conditions leads to divergent pathways in the evolution of the system. Those pathways are called trajectories. The enormous puzzle is the following: in order for life to have evolved, it can't possibly be the case that trajectories are always diverging. Biological systems can't work if divergence is all that's going on. You have to ask what kinds of complex systems can accumulate useful variation.
We've discovered the fact that in the evolution of life very complex systems can have convergent flow and not divergent flow. Divergent flow is sensitivity to initial conditions. Convergent flow means that even different starting places that are far apart come closer together. That's the fundamental principle of homeostasis, or stability to perturbation, and it's a natural feature of many complex systems. We haven't known that until now. That's what I found out twenty-five years ago, looking at what are now called Kauffman models — random networks exhibiting what I call "order for free."
Complex systems have evolved which may have learned to balance divergence and convergence, so that they're poised between chaos and order. Chris Langton has made this point, too. It's precisely those systems that can simultaneously perform the most complex tasks and evolve, in the sense that they can accumulate successive useful variations. The very ability to adapt is itself, I believe, the consequence of evolution. You have to be a certain kind of complex system to adapt, and you have to be a certain kind of complex system to coevolve with other complex systems. We have to understand what it means for complex systems to come to know one another — in the sense that when complex systems coevolve, each sets the conditions of success for the others. I suspect that there are emergent laws about how such complex systems work, so that, in a global, Gaia- like way, complex coevolving systems mutually get themselves to the edge of chaos, where they're poised in a balanced state. It's a very pretty idea. It may be right, too.
My approach to the coevolution of complex systems is my order-for-free theory. If you have a hundred thousand genes and you know that genes turn one another on and off, then there's some kind of circuitry among the hundred thousand genes. Each gene has regulatory inputs from other genes that turn it on and off. This was the puzzle: What kind of a system could have a hundred thousand genes turning one another on and off, yet evolve by creating new genes, new logic, and new connections?
Suppose we don't know much about such circuitry. Suppose all we know are such things as the number of genes, the number of genes that regulate each gene, the connectivity of the system, and something about the kind of rules by which genes turn one another on and off. My question was the following: Can you get something good and biology-like to happen even in randomly built networks with some sort of statistical connectivity properties? It can't be the case that it has to be very precise in order to work — I hoped, I bet, I intuited, I believed, on no good grounds whatsoever — but the research program tried to figure out if that might be true. The impulse was to find order for free. As it happens, I found it. And it's profound.
One reason it's profound is that if the dynamical systems that underlie life were inherently chaotic, then for cells and organisms to work at all there'd have to be an extraordinary amount of selection to get things to behave with reliability and regularity. It's not clear that natural selection could ever have gotten started without some preexisting order. You have to have a certain amount of order to select for improved variants.
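Kauffman’s “order for free” can be seen in a toy simulation: wire up a random Boolean network in which each gene is switched on or off by a couple of randomly chosen other genes according to a random rule, start it in a random state, and watch it settle onto a short repeating cycle (an attractor) instead of wandering chaotically. A minimal sketch with an arbitrary network size:

```python
import random

def random_boolean_network(n_genes: int, k_inputs: int, seed: int = 0):
    """Build a Kauffman-style random Boolean network: each gene gets
    k random input genes and a random Boolean rule over those inputs."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n_genes), k_inputs) for _ in range(n_genes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k_inputs)]
              for _ in range(n_genes)]

    def step(state):
        new = []
        for gene in range(n_genes):
            index = 0
            for bit, src in enumerate(inputs[gene]):
                index |= state[src] << bit      # encode this gene's inputs
            new.append(tables[gene][index])     # apply its random Boolean rule
        return tuple(new)

    return step

def attractor_cycle_length(step, n_genes: int, seed: int = 1) -> int:
    """Run from a random initial state until a state repeats; the length of
    that loop is the cycle length of the attractor the trajectory fell into."""
    rng = random.Random(seed)
    state = tuple(rng.randint(0, 1) for _ in range(n_genes))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]

step = random_boolean_network(n_genes=20, k_inputs=2)
print(attractor_cycle_length(step, n_genes=20))  # typically a short cycle in the ordered (K = 2) regime
```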
In 1967[1] and 1969,[2] Kauffman proposed applying models of random Boolean networks to simplified genetic circuits. These were very early models of large genetic regulatory networks, proposing that cell types are dynamical attractors of such networks and that cell differentiation steps are transitions between attractors. Recent evidence strongly suggests that cell types in humans and other organisms are indeed attractors.[3] In 1971 he suggested that the zygote may not access all the cell-type attractors in the repertoire of the genetic network's dynamics, hence some of the unused cell types might be cancers.[4] This suggested the possibility of "cancer differentiation therapy".
In 1971, Kauffman proposed the self-organized emergence of collectively autocatalytic sets of polymers, specifically peptides, for the origin of molecular reproduction.[5][6] Reproducing peptide, DNA, and RNA collectively autocatalytic sets have now been made experimentally.[7][8] He is best known for arguing that the complexity of biological systems and organisms might result as much from self-organization and far-from-equilibrium dynamics as from Darwinian natural selection, as discussed in his book Origins of Order (1993). His hypotheses stating that cell types are attractors of such networks, and that genetic regulatory networks are "critical", have found experimental support.[3][9] It now appears that the brain is also dynamically critical.[10]