technologyreview | In a
12-by-20-foot room at a skilled-nursing facility in Menlo Park,
California, researchers are testing the next evolution of the computer
interface inside the soft matter of Dennis DeGray’s motor cortex. DeGray
is paralyzed from the neck down. He was hurt in a freak fall in his
yard while taking out the trash and is, he says, “as laid up as a person
can be.” He steers his wheelchair by puffing into a tube.
But DeGray is a virtuoso at using his brain to control a computer mouse. For the last five years, he has been a participant in BrainGate,
a series of clinical trials in which surgeons have inserted silicon
probes the size of a baby aspirin into the brains of more than 20
paralyzed people. Using these brain-computer interfaces,
researchers can measure the firing of dozens of neurons as people think
of moving their arms and hands. And by sending these signals to a
computer, the scientists have enabled those with the implants to grasp
objects with robot arms and steer planes around in flight simulators.
DeGray
is the world’s fastest brain typist. He first established the mark four
years ago, using his brain signals to roam over a virtual keyboard with
a point-and-click cursor. Selecting letters on a screen, he reached a
rate of eight correct words in a minute. Then, right before the covid-19
pandemic began, he demolished his own record, using a new technique
where he imagined he was hand-writing letters on lined paper. With that
approach, he managed 18 words per minute.
One of the people
responsible for the studies with DeGray is Krishna Shenoy, a Stanford
University neuroscientist and electrical engineer who is among the
leaders of the BrainGate project. While other brain-interface
researchers grabbed the limelight with more spectacular demonstrations,
Shenoy’s group has stayed focused on creating a practical interface that
paralyzed patients can use for everyday computer interactions. “We had
to persevere in the early days, when people said, ‘Ah, it’s cooler to do a robotic arm—it makes a better movie,’” says Shenoy. But “if you can click, then you can use Gmail, surf the Web, and play music.”
Shenoy
says he is developing the technology for people with “the worst
afflictions and the most need.” Those include patients who are utterly
locked in and unable to speak, like those in the end stage of ALS.
But
if the technology allows people like DeGray to link their brain
directly to a computer, why not extend it to others? In 2016, Elon Musk
started a company called Neuralink
that began developing a neural “sewing machine” to implant a new type of
threaded electrode. Musk said his goal was to establish a
high-throughput connection to human brains so that society could keep
pace with artificial intelligence.
nature | [Image: A human brain slice is placed in a microscope to visualize nerve fibres. Credit: Mareen Fischinger]
Imagine looking at Earth from space and being
able to listen in on what individuals are saying to each other. That’s
about how challenging it is to understand how the brain works.
From
the organ’s wrinkled surface, zoom in a million-fold and you’ll see a
kaleidoscope of cells of different shapes and sizes, which branch off
and reach out to each other. Zoom in a further 100,000 times and you’ll
see the cells’ inner workings — the tiny structures in each one, the
points of contact between them and the long-distance connections between
brain areas.
Scientists have made maps such as these for the worm [1] and fly [2] brains, and for tiny parts of the mouse [3] and human [4]
brains. But those charts are just the start. To truly understand how
the brain works, neuroscientists also need to know how each of the
roughly 1,000 types of cell thought to exist in the brain speak to each
other in their different electrical dialects. With that kind of
complete, finely contoured map, they could really begin to explain the
networks that drive how we think and behave.
Such maps are emerging, including in a series of papers published this week
that catalogue the cell types in the brain. Results are streaming in
from government efforts to understand and stem the increasing burden of
brain disorders in their ageing populations. These projects, launched
over the past decade, aim to systematically chart the brain’s
connections and catalogue its cell types and their physiological
properties.
It’s an onerous undertaking. “But knowing all the
brain cell types, how they connect with each other and how they
interact, will open up an entirely new set of therapies that we can’t
even imagine today,” says Josh Gordon, director of the US National
Institute of Mental Health (NIMH) in Bethesda, Maryland.
The
largest projects started in 2013, when the US government and the
European Commission launched ‘moonshot’ efforts to provide services to
researchers that will help to crack the mammalian brain’s code. They
each poured vast resources into large-scale systematic programmes with
different goals. The US effort — which is estimated to cost
US$6.6 billion up until 2027 — has focused on developing and applying
new mapping technologies in its BRAIN (Brain Research through Advancing
Innovative Neurotechnologies) Initiative (see ‘Big brain budgets’). The
European Commission and its partner organizations have spent
€607 million ($703 million) on the Human Brain Project (HBP), which is
aimed mainly at creating simulations of the brain’s circuitry and using
those models as a platform for experiments.
royalsocietyofbiology | Understanding how memories are formed and stored is one of the great
enigmas in neuroscience. After more than a century of research, detailed
knowledge of the mechanisms of memory formation remains elusive.
In the past decade, memory research has been advanced by the study of
neuronal engrams, or networks of neurons that are incorporated into a
memory. In particular brain regions associated with memory, a neuronal
engram is theorised to consist of a subset of neurons within that brain
region that is uniquely activated by a behaviour that leads to memory
formation.
For example, when mice are trained on a simple, initial behavioural
task, a certain subset of neurons within a specific brain region will
become activated. Genetic techniques can be used to ‘tag’ this network
of neurons.
If the mouse is then placed in a different behavioural or
environmental context, and the network of neurons from the initial
behavioural task is artificially activated, the mouse will display
behaviour that it learned in the initial task[1].
The initial behavioural task triggered the incorporation of a subset of
neurons into an engram, which encoded the memory for that task.
Given the vast number of neurons in the brain, the potential
combination of neurons that could make up separate memory engrams is
virtually limitless. So the question that is key to our understanding of
the mechanisms of memory formation is: what causes the incorporation of
one neuron, but not another, into memory engrams?
Research has demonstrated that certain proteins can ‘prime’ neurons for incorporation into an engram[2].
Neurons that naturally express more of these proteins are frequently
found in memory engrams for a behaviour. Artificially inducing more of
these substances to be expressed can encourage neurons to become part of
an engram.
One substance in particular that was found to be important for priming neurons for engram incorporation is known as Arc[3].
This protein is induced rapidly by neuronal activity and regulates
levels of receptors at synapses that are critical for synaptic function
and neuronal communication.
Mice that genetically lack Arc protein are unable to form memories
that last longer than the course of a behavioural training session
(known as long-term memories), although they can learn normally at
short-term time scales. Although these experimental findings suggest
that Arc is an important piece of the memory puzzle, the mechanisms that
regulate Arc at the cellular and molecular level remain unclear.
Recently, research I conducted in the laboratory of Dr Jason Shepherd at the University of Utah[4] revealed something very surprising: Arc structurally and functionally resembles a retrovirus such as HIV. This
is the first time a neuronal protein, much less one underlying a
process as crucial as memory formation, has been shown to have a viral
structure. Evolutionary analysis from our laboratory showed that
Arc protein is distantly related to a class of retrotransposons that
also gave rise to retroviruses such as HIV.
opentheory | I think all neuroscientists, all philosophers, all
psychologists, and all psychiatrists should basically drop whatever
they’re doing and learn Selen Atasoy’s “connectome-specific harmonic
wave” (CSHW) framework. It’s going to be the backbone of how we
understand the brain and mind in the future, and it’s basically where
predictive coding was in 2011, or where blockchain was in 2009. Which is
to say, it’s destined for great things and this is a really good time to get into it.
I described CSHW in my last post as:
Selen Atasoy’s Connectome-Specific Harmonic Waves (CSHW) is
a new method for interpreting neuroimaging which (unlike conventional
approaches) may plausibly measure things directly relevant to
phenomenology. Essentially, it’s a method for combining fMRI/DTI/MRI to
calculate a brain’s intrinsic ‘eigenvalues’, or the neural frequencies
which naturally resonate in a given brain, as well as the way the brain
is currently distributing energy (periodic neural activity) between
these eigenvalues.
This post is going to talk a little more about how CSHW
works, why it’s so powerful, and what sorts of things we could use it
for.
CSHW: the basics
All periodic systems have natural modes: frequencies they
‘like’ to resonate at. A tuning fork is a very simple example of this:
regardless of how it’s hit, most of the vibration energy quickly
collapses to one frequency, the natural resonant frequency of the fork.
All musical instruments work on this principle; when you
change the fingering on a trumpet or flute, you’re changing the natural
resonances of the instrument.
CSHW’s big insight is that brains have these natural resonances too,
although they differ slightly from brain to brain. And instead of some
external musician choosing which notes (natural resonances) to play, the
brain sort of ‘tunes itself,’ based on internal dynamics, external
stimuli, and context.
The beauty of CSHW is that it’s a quantitative model, not
just loose metaphor: neural activation and inhibition travel as an
oscillating wave with a characteristic wave propagation pattern, which
we can reasonably estimate, and the substrate in which they propagate is
the brain’s connectome (map of neural connections), which we can
also reasonably estimate.
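One way to make the “natural resonances of the connectome” concrete is as eigenmodes of a graph Laplacian built from a structural connectivity matrix. The sketch below is a minimal illustration of that idea only, not Atasoy’s published pipeline (which, as I understand it, works on a much larger graph combining cortical-surface geometry from MRI with long-range fibres from DTI); the toy matrix `A` is an assumption standing in for real tractography data.

```python
import numpy as np
from scipy.linalg import eigh

def connectome_harmonics(A: np.ndarray):
    """Eigen-decompose the graph Laplacian L = D - A of a connectivity matrix.

    Columns of the returned eigenvector matrix are the 'harmonics'; the
    corresponding eigenvalues play the role of the intrinsic frequencies
    discussed above. Low eigenvalues = slow, spatially smooth modes.
    """
    degree = A.sum(axis=1)               # total connection weight per node
    L = np.diag(degree) - A              # combinatorial graph Laplacian
    eigenvalues, eigenvectors = eigh(L)  # symmetric solver, sorted ascending
    return eigenvalues, eigenvectors

# Toy stand-in for a measured connectome: 5 regions, random symmetric weights.
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = (A + A.T) / 2                        # symmetrize
np.fill_diagonal(A, 0.0)                 # no self-connections

evals, modes = connectome_harmonics(A)
print(evals)         # first eigenvalue ~0: the trivial constant mode
print(modes[:, 1])   # first non-trivial harmonic across the 5 regions
```

Given such a basis, the “energy distribution” mentioned above would then be estimated by projecting measured activity (e.g. fMRI time series) onto these modes; that projection step is omitted here.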
TechnologyReview | This spring there was a widespread outcry when American Facebook
users found out that information they had posted on the social
network—including their likes, interests, and political preferences—had
been mined by the voter-targeting firm Cambridge Analytica. While it’s
not clear how effective they were, the company’s algorithms may have
helped fuel Donald Trump’s come-from-behind victory in 2016.
But to ambitious data scientists like Pocovi, who has worked with
major political parties in Latin America in recent elections, Cambridge
Analytica, which shut down in May, was behind the curve. Where it gauged
people’s receptiveness to campaign messages by analyzing data they
typed into Facebook, today’s “neuropolitical” consultants say they can
peg voters’ feelings by observing their spontaneous responses: an
electrical impulse from a key brain region, a split-second grimace, or a
moment’s hesitation as they ponder a question. The experts aim to
divine voters’ intent from signals they’re not aware they’re producing. A
candidate’s advisors can then attempt to use that biological data to
influence voting decisions.
Political insiders say campaigns are buying into this prospect in
increasing numbers, even if they’re reluctant to acknowledge it. “It’s
rare that a campaign would admit to using neuromarketing
techniques—though it’s quite likely the well-funded campaigns are,” says
Roger Dooley, a consultant and author of Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing.
While it’s not certain the Trump or Clinton campaigns used
neuromarketing in 2016, SCL—the parent firm of Cambridge Analytica,
which worked for Trump—has reportedly used facial analysis to assess
whether what voters said they felt about candidates was genuine.
But even if US campaigns won’t admit to using neuromarketing, “they
should be interested in it, because politics is a blood sport,” says
Dan Hill, an American expert in facial-expression coding who advised
Mexican president Enrique Peña Nieto’s 2012 election campaign. Fred
Davis, a Republican strategist whose clients have included George W.
Bush, John McCain, and Elizabeth Dole, says that while uptake of these
technologies is somewhat limited in the US, campaigns would use
neuromarketing if they thought it would give them an edge. “There’s
nothing more important to a politician than winning,” he says.
The trend raises a torrent of questions in the run-up to the 2018
midterms. How well can consultants like these use neurological data to
target or sway voters? And if they are as good at it as they claim, can
we trust that our political decisions are truly our own? Will democracy
itself start to feel the squeeze?
WaPo | For many people, leisure time now means screen time.
Mom’s on social media, Dad’s surfing the Web, sister is texting friends,
and brother is playing a multiplayer shooting game like Fortnite.
But are they addicted? In June, the World Health Organization announced
that “gaming disorder” would be included in its disease classification
manual, reigniting debates over whether an activity engaged in by so
many could be classified as a disorder.
Experts were quick to point out that only 1 to 3 percent of gamers are likely to fit the diagnostic criteria,
such as lack of control over gaming, giving gaming priority over other
activities and allowing gaming to significantly impair such important
areas of life as social relationships.
Those low
numbers may give the impression that most people don’t have anything to
worry about. Not true. Nearly all teens, as well as most adults, have
been profoundly affected by the increasing predominance of electronic
devices in our lives. Many people suspect that today’s teens spend much
more time with screens and much less time with their peers face-to-face
than did earlier generations, and my analysis of numerous large
surveys of teens of various ages shows this to be true: The number of
17- and 18-year-olds who get together with their friends every day, for
example, dropped by more than 40 percent between 2000 and 2016. Teens
are also sleeping less, with sleep deprivation spiking
after 2010. Similar to the language in the WHO’s addiction criteria,
they are prioritizing time on their electronic devices over other
activities (and no, it’s not because they are studying more: Teens
actually spend less time on homework
than students did in the 1990s). Regardless of any questions around
addiction, how teens spend their free time has fundamentally shifted.
If teens were doing well, this might be fine. But they are not: Clinical-level depression, self-harm behavior (such as cutting), the number of suicide attempts and the suicide rate
for teens all rose sharply after 2010, when smartphones became common
and the iPad was introduced. Teens who spend excessive amounts of time
online are more likely to be sleep deprived, unhappy and depressed. Nor
are the effects small: For example, teens who spent five or more hours a
day using electronic devices were 66 percent more likely
than those who spent just one hour to have at least one risk factor for
suicide, such as depression or a previous suicide attempt.
Guardian | When I am well, I am happy and popular. It is tough to type these
words when I feel none of it. And sometimes when I am most well I am…
boring. Boring is how I want to be all of the time. This is what I have
been working towards, for 12 years now.
When friends decades older tell me off for saying that I am old, at
28, what I mean is: I haven’t achieved all the things I could have done
without this illness. I should have written a book by now. I should have
done so many things! All the time, I feel I am playing catch-up.
Always. I worry, and most of the literature tells me, that I will have
this problem for life. That it will go on, after the hashtags and the
documentaries and the book deals and Princes Harry and William – while the NHS circles closer to the drain.
Maybe it’s cute now, in my 20s. But it won’t be cute later, when I am
older and wearing tracksuits from 20 years ago and not in an ironic
hipster way but because I no longer wash or engage with the world, and
it’s like: my God, did you not get yourself together already?
When I left appointments and saw the long-term patients, walking
around in hospital-issue pyjamas, dead-eyed (the kind of image of the
mentally ill that has become anathema to refer to as part of the
conversation, but which in some cases is accurate), four emotions rushed
in: empathy, sympathy, recognition, terror. It’s one of those things
you can’t really talk about with authenticity unless you’ve seen it, not
really: the aurora borealis, Prince playing live and the inpatient
wards.
Maybe my prognosis will look up, maybe I’ll leave it all behind. I’ve
noticed a recent thing is for people to declare themselves “proud” of
their mental illness. I guess I don’t understand this. It does not
define me.
It’s not something that, when stable, I feel ashamed of, or that I
hide. But I am not proud of it. I’d rather I didn’t have it – so I
wasn’t exhausted, so I wasn’t bitter about it – despite the fact that I
know some people, in all parts of the world, are infinitely worse off.
I want it gone, so that I am not dealing with it all the time, or
worrying about others having to deal with it all the time. So I don’t
have to read another article, or poster, about how I just need to ask
for help. So that when a campaigner on Twitter says, “To anyone feeling
ashamed of being depressed: there is nothing to be ashamed of. It’s
illness. Like asthma or measles”, I don’t have to grit my teeth and say,
actually, I am not OK, and mental illness couldn’t be less like
measles. So that when someone else moans about being bored with everyone
talking about mental health, and a different campaigner replies,
“People with mental illness aren’t bored with it!” I don’t have to say,
no, I am: I am bored with this Conversation. Because more than talking
about it, I want to get better. I want to live.
archive.is | Musean hypernumbers
are an algebraic concept envisioned by Charles A. Musès
(1919–2000) to form a complete, integrated, connected, and natural number system.[1][2][3][4][5]
Musès sketched certain fundamental types of hypernumbers and arranged them in ten "levels", each with its own associated arithmetic
and geometry.
Mostly
criticized for lack of mathematical rigor and unclear defining
relations, Musean hypernumbers are often perceived as an unfounded
mathematical speculation. This impression was not helped by Musès'
outspoken confidence in applicability to fields far beyond what one
might expect from a number system, including consciousness, religion,
and metaphysics.
The term "M-algebra" was used by Musès for investigation into a subset of his hypernumber concept (the 16 dimensional conic
sedenions
and certain subalgebras thereof), which is at times confused with the
Musean hypernumber level concept itself. The current article separates
this well-understood "M-algebra" from the remaining controversial
hypernumbers, and lists certain applications envisioned by the inventor.
Musès was convinced that the basic laws of
arithmetic
on the reals are in direct correspondence with a concept where numbers
could be arranged in "levels", where fewer arithmetical laws would be
applicable with increasing level number.[3]
However, this concept was not developed much further beyond the initial
idea, and defining relations for most of these levels have not been
constructed.
Higher-dimensional numbers built on the first three levels were called "M-algebra"[6][7]
by Musès if they yielded a distributive
multiplication, unit element, and multiplicative norm. It contains kinds of
octonions
and historical quaternions
(except A. MacFarlane's hyperbolic quaternions) as subalgebras. A proof of completeness of M-algebra has not been provided.
The term "M-algebra" (after C. Musès[6]) refers to number systems that are
vector spaces
over the reals,
whose bases consist of roots of −1 or +1, and which possess a
multiplicative modulus. While the idea of such numbers was far from new
and contains many known isomorphic number systems (like e.g.
split-complex
numbers or tessarines),
certain results from 16 dimensional (conic) sedenions were a novelty.
Musès demonstrated the existence of a logarithm and real powers in
number systems built on non-real roots of +1.
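For readers unfamiliar with what a non-real “root of +1” with a multiplicative modulus looks like, the split-complex numbers cited above are the simplest standard instance (textbook material, not Musès’s own construction):

```latex
% Split-complex numbers: a 2-dimensional vector space over the reals
% whose second basis element j is a non-real root of +1.
\[
  j^{2} = +1, \quad j \neq \pm 1, \qquad z = a + b\,j, \;\; a, b \in \mathbb{R}.
\]
% The quadratic form N is multiplicative, so |N|^{1/2} can serve as the
% kind of multiplicative modulus the M-algebra definition asks for:
\[
  N(z) = a^{2} - b^{2}, \qquad N(z_{1} z_{2}) = N(z_{1})\,N(z_{2}).
\]
```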
cheniere | In the light of other past
researches, we were very much attracted when we first saw his typescript last
year, by the author's perceptive treatment of the operational‑theoretic
significance of measurement, in relation to the broader question of the meaning
of negative entropy. Several years ago 1 we had constructed a pilot
model of an electro‑mechanical machine we described as the Critical Probability
Sequence Calculator, designed and based on considerations stemming from the
mathematical principles of a definite discipline which we later 2
called chronotopology: the topological (not excluding quantitative relations)
and most generalized analysis of the temporal process, of all time series ‑ the
science of time so to speak. To use a popular word in a semi‑popular sense, the
CPSC was a 'time‑machine,' as its input data consist solely of known past
times, and its output solely of most probable future times. That is, like the
Hamiltonian analysis of action in this respect, its operation was concerned
only with more general quantities connected with the structure of the temporal
process itself, rather than with the nature of the particular events or
occurrences involved or in question, although it can tell us many useful things
about those events. However, as an analogue computer, it was built simply to demonstrate
visibly the operation of interdependences already much more exactly stated as
chronotopological relationships.
That situations themselves should have
general laws of temporal structure, quite apart from their particular contents,
is a conclusion that must be meaningful to the working scientist; for it is but
a special example of the truth of scientific abstraction, and a particularly
understandable one in the light of the modern theory of games, which is a
discipline that borders on chronotopology.
One of the bridges from ordinary
physics to chronotopology is the bridge on which Rothstein's excellent analyses
also lie: the generalized conception of entropy. And in some of what follows we
will summarize what we wrote in 1951 in the paper previously referred to, and
in other places. We will dispense with any unnecessary apologies for the
endeavor to make the discussion essentially understandable to the intelligent
layman.
Modern studies in communication theory
(and communications are perhaps the heart of our present civilization) involve
time series in a manner basic to their assumptions. A great deal of 20th
century interest is centering on the more and more exact use and measurement of
time intervals. Ours might be epitomized as the Century of Time ‑ for only since
the 1900's has so much depended on split‑second timing and the accurate
measurement of that timing in fields ranging from electronics engineering to
fast‑lens photography.
Another reflection of the importance
of time in our era is the emphasis on high speeds, i.e. minimum time intervals
for action, and thus more accomplished in less time. Since power can be measured by
energy‑release per time‑unit, the century of time becomes, and so it has
proved, the Century of Power. To the responsible thinker such an equation is
fraught with profound and significant consequences for both science and
humanity. Great amounts of energy delivered in minimal times demand
a) extreme accuracy of knowledge and knowledge application concerning production of the phenomena,
b) full understanding of the nature and genesis of the phenomena involved; since
at such speeds and at such amplitudes of energy a practically irrevocable,
quite easily disturbing set of consequences is assured. That we have mastered
(a) more than (b) deserves at least this parenthetical mention. And yet there
is a far‑reaching connection between the two, whereby any more profound
knowledge will inevitably lead in turn to a sounder basis for actions stemming
from that knowledge.
No longer is it enough simply to take
time for granted and merely apportion and program it in a rather naively
arbitrary fashion. Time must be analyzed, and its nature probed for whatever it
may reveal in the way of determinable sequences of critical probabilities. The
analysis of time per se is due to
become, in approximate language, quite probably a necessity for us as a
principal mode of attack by our science on its own possible shortcomings. For
with our present comparatively careening pace of technical advance and action,
safety factors, emergent from a thorough study and knowledge of the nature of
this critical quantity 'time,' are by that very nature most enabled to be the
source of what is so obviously lacking in our knowledge on so many advanced
levels: adequate means of controlling consequences and hence direction of
advance.
Chronotopology (deriving from Chronos + topos + logia) is the study of
the intra‑connectivity of time (including the inter‑connectivity of time points
and intervals), the nature or structure of time, if you will; how it is
contrived in its various ways of formation and how those structures function,
in operation and interrelation.
It is simple though revealing, and it
is practically important to the development of our subject, to appreciate that
seconds, minutes, days, years, centuries, et
al., are not time, but merely the measures of time; that they are no more
time than rulers are what they measure. Of the nature and structure of time
itself investigations have been all but silent. As with many problems lying at
the foundations of our thought and procedures, it has been taken for granted
and thereby neglected ‑ as for centuries before the advent of mathematical logic
were the foundations of arithmetic. The "but" in the above phrase
"investigations have been all but silent" conveys an indirect point. As
science has advanced, time has had to be used increasingly as a parameter, implicitly
(as in the phase spaces of statistical mechanics) or explicitly.
Birkhoff's improved enunciation of the
ergodic problem 3 actually was one of a characteristic set of modern
efforts to associate a structure with time in a formulated manner. Aside from
theoretical interest, those efforts have obtained a wide justification in
practice and in terms of the greater analytic power they conferred. They lead
directly to chronotopological conceptions as their ideational destination and
basis.
The discovery of the exact formal
congruence of a portion of the theory of probability (that for stochastic
processes) with a portion of the theory of general dynamics is another
significant outcome of those efforts. Such a congruence constitutes more or less a suggestion that probability
theory has been undergoing, ever since its first practical use as the theory of
probable errors by astronomy, a gradual metamorphosis into the actual study of
governing time‑forces and their configurations, into chronotopology. And the
strangely privileged character of the time parameter in quantum mechanics is
well known – another fact pointing in the same direction.
Now
Birkhoff's basic limit theorem may be analyzed as a consequence of the second
law of thermodynamics, since all possible states of change of a given system
will become exhausted with increase of entropy 4 as time proceeds.
It is to the credit of W. S. Franklin to have been the first specifically to point out 5 that
the second law of thermodynamics "relates to the inevitable forward
movement which we call time"; not clock‑time, however, but time more
clearly exhibiting its nature, and measured by what Eddington has termed an
entropy‑clock 6. When we combine this fact with the definition of
increase of entropy established by Boltzmann, Maxwell, and Gibbs as progression
from less to more probable states, we can arrive at a basic theorem in chronotopology:
T1, The movement of time is
an integrated movement toward regions of ever‑increasing probability.
Corollary: It is thus a selective movement in a sense to be
determined by a more accurate understanding of probability, and in what
'probability' actually consists in any given situation.
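The appeal to Boltzmann, Maxwell, and Gibbs a few lines above can be made explicit with the textbook statistical definition of entropy; the following is a standard statement of that definition, not Musès’s own formalism:

```latex
% Boltzmann's statistical entropy and the second law, the ingredients
% behind theorem T1:
\[
  S = k_{B} \ln W, \qquad \frac{dS}{dt} \ge 0
  \;\Longrightarrow\; W(t_{2}) \ge W(t_{1}) \text{ for } t_{2} > t_{1},
\]
% i.e. an isolated system passes through macrostates of non-decreasing
% multiplicity W, which is what "regions of ever-increasing probability"
% amounts to in T1's wording.
```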
This theorem, supported by modern
thermodynamic theory, indicates that it would no longer be correct for the
Kantian purely subjective view of time entirely to dominate modern scientific
thinking, as it has thus far tended to do since Mach. Rather, a truer balance
of viewpoint is indicated whereby time, though subjectively effective too,
nevertheless possesses definite structural and functional characteristics which
can be formulated quantitatively. We shall eventually see that time may be
defined as the ultimate causal pattern of all energy‑release and that this
release is of an oscillatory nature. To put it more popularly, there are time
waves.
edge | Because we use the word queen—the Egyptians use the word king—we have
a misconception of the role of the queen in the society. The queen is
usually the only reproductive in a honey bee colony. She’s specialized
entirely to that reproductive role. It’s not that she’s in any way
directing the society; it’s more accurate to say that the behavior and
activity of the queen is directed by the workers. The queen is
essentially an egg-laying machine. She is fed unlimited high-protein,
high-carbohydrate food by the nurse bees that tend to her. She is
provided with an array of perfectly prepared cells to lay eggs in. She
will lay as many eggs as she can, and the colony will raise as many of
those eggs as they can in the course of the day. But the queen is not
ruling the show. She only flies once in her life. She will leave the
hive on a mating flight; she’ll be mated by up to twenty male bees, in
the case of the honey bee, and then she stores that semen for the rest
of her life. That is the role of the queen. She is the reproductive, but
she is not the ruler of the colony.
Many societies have attached this sense of royalty, and I think that largely reflects that we see the order inside the honey bee society and
we assume that there must be some sort of structure that maintains that
order. We see this one individual who is bigger and we anthropomorphize
that that somehow must be their leader. But no, there is no way that
it’s appropriate to say that the queen has any leadership role in a
honey bee society.
A honey bee queen these days lives two to three years, and it's
getting shorter. It’s not that long ago that if you read the older
books, they would report that queens would live up to seven years. We’re
not seeing queens last that long now. It’s more common for queens to be
replaced every two to three years. All the worker honey bees are female
and the queen is female—it’s a matriarchal society.
An even more recent and exciting revolution happening now is this
connectomic revolution, where we’re able to map in exquisite detail the
connections of a part of the brain, and soon even an entire insect
brain. It’s giving us absolute answers to questions that we would have
debated even just a few years ago; for example, does the insect brain
work as an integrated system? And because we now have a draft of a
connectome for the full insect brain, we can absolutely answer that
question. That completely changes not just the questions that we’re
asking, but our capacity to answer questions. There’s a whole new
generation of questions that become accessible.
When I say a connectome, what I mean is an absolute map of the
neural connections in a brain. That’s not a trivial problem. It's okay
at one level to, for example with a light microscope, get a sense of the
structure of neurons, to reconstruct some neurons and see where they
go, but knowing which neurons connect with other neurons requires
another level of detail. You need electron microscopy to look at the
synapses.
The main question I’m asking myself at the moment is about the nature
of the animal mind, and how minds and conscious minds evolved. The
perspective I’m taking on that is to try to examine the mind's
mechanisms of behavior in organisms that are far simpler than ours.
I’ve got a particular focus on insects, specifically on the honey
bee. For me, it remains a live question as to whether we can think of
the honey bee as having any kind of mind, or if it's more appropriate to
think of it as something more mechanistic, more robotic. I tend to lean
towards thinking of the honey bee as being a conscious agent, certainly
a cognitively effective agent. That’s the biggest question I’m
exploring for myself.
There’s always been an interest in animals, natural history, and
animal behavior. Insects have always had this particular point of
tension because they are unusually inaccessible compared to so many
other animals. When we look at things like mammals and dogs, we are so
drawn to empathize with them that it tends to mask so much. When we’re
looking at something like an insect, they’re doing so much, but their
faces are completely expressionless and their bodies are completely
alien to ours. They operate on a completely different scale. You cannot
empathize or emote. It’s not immediately clear what they are, whether
they’re an entity or whether they’re a mechanism.
Forbes | Throughout the history of science, one of
the prime goals of making sense of the Universe has been to discover
what's fundamental. Many of the things we observe and interact with in
the modern, macroscopic world are composed of, and can be derived from,
smaller particles and the underlying laws that govern them. The idea
that everything is made of elements dates back thousands of years, and
has taken us from alchemy to chemistry to atoms to subatomic particles
to the Standard Model, including the radical concept of a quantum
Universe.
But even though there's very good evidence that all of the
fundamental entities in the Universe are quantum at some level, that
doesn't mean that everything is both discrete and quantized. So long as
we still don't fully understand gravity at a quantum level, space and
time might still be continuous at a fundamental level. Here's what we
know so far.
Quantum mechanics is the idea that, if you go down to a small enough
scale, everything that contains energy, whether it's massive (like an
electron) or massless (like a photon), can be broken down into
individual quanta. You can think of these quanta as energy packets,
which sometimes behave as particles and other times behave as waves,
depending on what they interact with.
Everything in nature obeys the laws of quantum physics, and our
"classical" laws that apply to larger, more macroscopic systems can
always (at least in theory) be derived, or emerge, from the more
fundamental quantum rules. But not everything is necessarily discrete,
or capable of being divided into a localized region of space.
[Figure: The energy-level differences in lutetium-177. Only specific, discrete energy levels are allowed; while the energy levels are discrete, the positions of the electrons are not.]
If you have a conducting band of metal, for example, and ask "where
is this electron that occupies the band," there's no discreteness there.
The electron can be anywhere, continuously, within the band. A free
photon can have any wavelength and energy; no discreteness there. Just
because something is quantized, or fundamentally quantum in nature,
doesn't mean everything about it must be discrete.
The idea that space (or space and time, since they're inextricably
linked by Einstein's theories of relativity) could be quantized goes way
back to Heisenberg himself. Famous for the Uncertainty Principle, which
fundamentally limits how precisely we can measure certain pairs of
quantities (like position and momentum), Heisenberg realized that
certain quantities diverged, or went to infinity, when you tried to
calculate them in quantum field theory.
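For concreteness, the position-momentum form of Heisenberg's Uncertainty Principle referred to here is the standard relation:

```latex
% Position-momentum uncertainty relation:
\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\]
% so sharpening a position measurement (smaller \Delta x) necessarily
% broadens the spread in momentum (larger \Delta p), and vice versa.
```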
ama-assn | In the basement of the Bureau International des Poids et Mesures
(BIPM) headquarters in Sèvres, France, a suburb of Paris, there lies a
piece of metal that has been secured since 1889 in an environmentally
controlled chamber under three bell jars. It represents the world
standard for the kilogram, and all other kilo measurements around the
world must be compared and calibrated to this one prototype. There is no
such standard for the human brain. Search as you might, there is no
brain that has been pickled in a jar in the basement of the Smithsonian
Museum or the National Institutes of Health or elsewhere in the world
that represents the standard to which all other human brains must be
compared. Given that this is the case, how do we decide whether any
individual human brain or mind is abnormal or normal? To be sure,
psychiatrists have their diagnostic manuals. But when it comes to mental
disorders, including autism, dyslexia, attention deficit hyperactivity
disorder, intellectual disabilities, and even emotional and behavioral
disorders, there appears to be substantial uncertainty concerning when a
neurologically based human behavior crosses the critical threshold from
normal human variation to pathology.
A major cause of this ambiguity is the emergence over the past two
decades of studies suggesting that many disorders of the brain or mind
bring with them strengths as well as weaknesses. People diagnosed with
autism spectrum disorder (ASD), for example, appear to have strengths
related to working with systems (e.g., computer languages, mathematical
systems, machines) and in experiments are better than control subjects
at identifying tiny details in complex patterns [1]. They also score
significantly higher on the nonverbal Raven’s Matrices intelligence test
than on the verbal Wechsler Scales [2]. A practical outcome of this new
recognition of ASD-related strengths is that technology companies have
been aggressively recruiting people with ASD for occupations that
involve systemizing tasks such as writing computer manuals, managing
databases, and searching for bugs in computer code [3].
Valued traits have also been identified in people with other mental
disorders. People with dyslexia have been found to possess global
visual-spatial abilities, including the capacity to identify “impossible
objects” (of the kind popularized by M. C. Escher) [4], process
low-definition or blurred visual scenes [5], and perceive peripheral or
diffused visual information more quickly and efficiently than
participants without dyslexia [6]. Such visual-spatial gifts may be
advantageous in jobs requiring three-dimensional thinking such as
astrophysics, molecular biology, genetics, engineering, and computer
graphics [7, 8]. In the field of intellectual disabilities, studies have
noted heightened musical abilities in people with Williams syndrome,
the warmth and friendliness of individuals with Down syndrome, and the
nurturing behaviors of persons with Prader-Willi syndrome [9, 10].
Finally, researchers have observed that subjects with attention deficit
hyperactivity disorder (ADHD) and bipolar disorder display greater
levels of novelty-seeking and creativity than matched controls [11-13].
Such strengths may suggest an evolutionary explanation for why these
disorders are still in the gene pool. A growing number of scientists are
suggesting that psychopathologies may have conferred specific
evolutionary advantages in the past as well as in the present [14]. The
systemizing abilities of individuals with autism spectrum disorder might
have been highly adaptive for the survival of prehistoric humans. As
autism activist Temple Grandin, who herself has autism, surmised: “Some
guy with high-functioning Asperger’s developed the first stone spear; it
wasn’t developed by the social ones yakking around the campfire” [15].
weforum | Neuroscience has offered some evidence-based claims that can be
uncomfortable because they challenge our notions of morality or debunk
the myth about our ‘rational’ brain.
Critically, neuroscience has enlightened us about the physicality
of human emotions. Fear, an emotion we have inherited from our
ancestors, is not an abstract or intangible sense of imminent danger: it
is expressed in neurochemical terms in our amygdala, the almond-shaped
structure on the medial temporal lobe, anterior to the hippocampus. The amygdala
has been demonstrated to be critical in the acquisition, storage and
expression of conditioned fear responses. Certain regions in the
amygdala undergo plasticity – changes in response to emotional stimuli –
triggering other reactions, including endocrine responses.
Similarly, the way our brains produce moral reasoning and then
translate it in the social context can now be studied to some extent in
neuroscientific terms. For instance, the role of serotonin
in prosocial behaviour and moral judgment is now well documented, with a
demonstrably strong correlation between levels of serotonin in the
brain and moral social behaviour.
Neuroscientists have also looked at how political ideologies
are represented in the brain; preliminary research indicates that an
increased gray matter volume in the anterior cingulate cortex can be
correlated with inclinations towards liberalism, while increased gray
matter volume in the amygdala (which is part of the limbic system and
thus concerned with emotions) appears to be associated with conservative
values. These early findings, of course, are not meant to be
reductionist or deterministic, or to politically pigeonhole one group or the
other, nor are they fixed. Rather, they can help explain the deep and
persistent divide that we see in party politics across the world. It
would be very valuable to look into whether these preliminary findings
pre-date political affiliation or occur as a result of repeated exposure
to politically-inspired partisan and emotional debates.
More recently, policy analysis has turned to neuroscience too. For example, in the US 2016 election cycle, some have correlated the appeal of some candidates to the so-called hardwiring
in our brains, and to our primordial needs of group belonging, while
others have explored the insights from neuroscience on the role of emotions in decision-making. Similarly, the attitudes surrounding “Brexit” have also been analysed with references from neuroscience.
Divisive politics – what does neuroscience tell us?
The short answer is: some useful new insights. To be sure, some
findings in neuroscience might be crude at this stage as the discipline
and its tools are evolving. The human brain – despite tremendous
scientific advances – remains to a large extent unknown. We do have,
however, some preliminary findings to draw on. Divisive politics have
taken centre stage and neuroscience may be able to shed some light on how
this is expressed in our brains.
“Us” vs. “them”, cultivating fear and hatred towards
out-groups that are deemed different (ethnically, ideologically,
religiously, etc.), and vicious and virulent attacks against them, are
all part of an unsettling picture of growing ethnic and racial
hostility. Philosopher Martin Buber identified two opposed ways of being
in relation to others: I-It and I-Thou. I-It means perceiving others as objects, whereas I-Thou refers to empathic perceptions of others as subjects.
Cognitive neuroscientists have studied this distinction with brain
imaging techniques and the findings – unsurprisingly – tell us a lot
about our increasingly polarised world today and the ways our brains
process the distinction between us and “others”.
tandfonline | Selection pressures to better understand others’ thoughts and feelings
are seen as a primary driving force in human cognitive evolution. Yet
might the evolution of social cognition be more complex than we assume,
with more than one strategy towards social understanding and developing a
positive pro-social reputation? Here we argue that social buffering of
vulnerabilities through the emergence of collaborative morality
will have opened new niches for adaptive cognitive strategies and
widened personality variation. Such strategies include those that
do not depend on astute social perception or abilities to think
recursively about others’ thoughts and feelings. We particularly
consider how a perceptual style based on logic and detail, bringing
certain enhanced technical and social abilities which compensate for
deficits in complex social understanding, could be advantageous at low
levels in certain ecological and cultural contexts. ‘Traits of autism’
may have promoted innovation in archaeological material culture during
the late Palaeolithic in the context of the mutual interdependence of
different social strategies, which in turn contributed to the rise of
innovation and large scale social networks.
physorg | The
ability to focus on detail, a common trait among people with autism,
allowed realism to flourish in Ice Age art, according to researchers at
the University of York.
Around 30,000
years ago realistic art suddenly flourished in Europe. Extremely
accurate depictions of bears, bison, horses and lions decorate the walls
of Ice Age archaeological sites such as Chauvet Cave in southern
France.
Why our ice age ancestors created exceptionally realistic art rather
than the very simple or stylised art of earlier modern humans has long
perplexed researchers.
Many have argued that psychotropic drugs were behind the detailed
illustrations. The popular idea that drugs might make people better at
art led to a number of ethically-dubious studies in the 60s where
participants were given art materials and LSD.
The authors of the new study discount that theory, arguing instead that individuals with "detail focus", a trait linked to autism, kicked off an artistic movement that led to the proliferation of realistic cave drawings across Europe.
Harpers | I concluded that the internet and the novel were natural enemies.
“Choose your own adventure” stories were not the future of literature.
The author should be a dictator, a tyrant who treated the reader as his
willing slave, not as a cocreator. And high-tech flourishes should be
avoided. Novels weren’t meant to link to Neil Diamond songs or, say,
refer to real plane crashes on the day they happen. Novels were closed
structures, their boundaries fixed, not data-driven, dynamic feedback
loops. Until quite recently, these were my beliefs, and no new works
emerged to challenge my thinking.
Then, late last year, while knocking
around on the internet one night, I came across a long series of posts
originally published on 4chan, an anonymous message board. They
described a sinister global power struggle only dimly visible to
ordinary citizens. On one side of the fight, the posts explained, was a
depraved elite, bound by unholy oaths and rituals, secretly sowing chaos
and strife to create a pretext for their rule. On the other side was
the public, we the people, brave and decent but easily deceived, not
least because the news was largely scripted by the power brokers and
their collaborators in the press. And yet there was hope, I read,
because the shadow directorate had blundered. Aligned during the
election with Hillary Clinton and unable to believe that she could lose,
least of all to an outsider, it had underestimated Donald Trump—as well
as the patriotism of the US military, which had recruited him for a
last-ditch battle against the psychopathic deep-state spooks. The writer
of the 4chan posts, who signed these missives “Q,” invited readers to
join this battle. He—she? it?—promised to pass on orders from a
commander and intelligence gathered by a network of spies.
I was hooked.
Known to its fan base as QAnon, the tale first appeared last year,
around Halloween. Q’s literary brilliance wasn’t obvious at first. His
obsessions were unoriginal, his style conventional, even dull. He
suggested that Washington was being purged of globalist evildoers,
starting with Clinton, who was awaiting arrest, supposedly, but allowed
to roam free for reasons that weren’t clear. Soon a whole roster of
villains had emerged, from John McCain to John Podesta to former
president Obama, all of whom were set to be destroyed by something
called the Storm, an allusion to a remark by President Trump last fall
about “the calm before the storm.” Clinton’s friend and supporter Lynn
Forrester de Rothschild, a member by marriage of the banking family
abhorred by anti-Semites everywhere, came in for special abuse from Q
and Co.—which may have contributed to her decision to delete her Twitter
app. Along with George Soros, numerous other bigwigs, the FBI, the CIA,
and Twitter CEO Jack Dorsey (by whom the readers of Q feel persecuted),
these figures composed a group called the Cabal. The goal of the Cabal
was dominion over all the earth. Its initiates tended to be pedophiles
(or pedophilia apologists), the better to keep them blackmailed and in
line, and its esoteric symbols were everywhere; the mainstream media
served as its propaganda arm. Oh, and don’t forget the pope.
As I read further, the tradition in which Q was working became
clearer. Q’s plot of plots is a retread, for the most part, of Cold
War–era John Birch Society notions found in books such as None Dare Call It Conspiracy.
These Bircher ideas were borrowings, in turn, from the works of a
Georgetown University history professor by the name of Carroll Quigley.
Said to be an important influence on Bill Clinton, Quigley was a
legitimate scholar of twentieth-century Anglo-American politics. His
1966 book Tragedy and Hope, which concerned the power held by
certain elites over social and military planning in the West, is not
itself a paranoid creation, but parts of it have been twisted and
reconfigured to support wild theories of all kinds. Does Q stand for
Quigley? It’s possible, though there are other possibilities (such as
the Department of Energy’s “Q” security clearance). The literature of
right-wing political fear has a canon and a pantheon, and Q, whoever he
is, seems deeply versed in it.
While introducing his cast of fiends, Q also assembled a basic story
line. Justice was finally coming for the Cabal, whose evil deeds were
“mind blowing,” Q wrote, and could never be “fully exposed” lest they
touch off riots and revolts. But just in case this promised “Great
Awakening” caused panic in the streets, the National Guard and the
Marine Corps were ready to step in. So were panels of military judges,
in whose courts the treasonous cabalists would be tried and convicted,
then sent to Guantánamo. In the manner of doomsayers since time began, Q
hinted that Judgment Day was imminent and seemed unabashed when it kept
on not arriving. Q knew full well that making one’s followers wait for a
definitive, cathartic outcome is a cult leader’s best trick—for the
same reason that it’s a novelist’s best trick. Suspense is an irritation
that’s also a pleasure, so there’s a sensual payoff from these delays.
And the more time a devotee invests in pursuing closure and
satisfaction, the deeper her need to trust the person in charge. It’s
why Trump may be in no hurry to build his wall, or to finish it if he
starts. It’s why he announced a military parade that won’t take place
until next fall.
As the posts piled up and Q’s plot thickened, his writing style
changed. It went from discursive to interrogative, from concise and
direct to gnomic and suggestive. This was the breakthrough, the hook,
the innovation, and what convinced me Q was a master, not just a
prankster or a kook. He’d discovered a principle of online storytelling
that had eluded me all those years ago but now seemed obvious: The
audience for internet narratives doesn’t want to read, it wants to
write. It doesn’t want answers provided, it wants to search for them. It
doesn’t want to sit and be amused, it wants to be sent on a mission. It
wants to do.
melmagazine | We know that people on the spectrum can exhibit remarkable mental gifts in addition to their difficulties; Asperger syndrome has been associated with superior IQs
that reach up to the “genius” threshold (4chan trolls use “aspie” and
“autist” interchangeably). In practice, weaponized autism is best
understood as a perversion of these hidden advantages. Think, for
example, of the keen pattern recognition that underlies musical talent
repurposed for doxxing efforts: Among the more “successful” deployments
of weaponized autism, in the alt-right’s view, was a collective attempt
to identify an antifa demonstrator who assaulted several of their own
with a bike lock at a Berkeley rally this past April.
As Berkeleyside reported,
“the amateur detectives” of 4chan’s /pol/ board went about “matching up
his perceived height and hairline with photos of people at a previous
rally and on social media,” ultimately claiming that Eric Clanton, a
former professor at Diablo Valley College, was the assailant in question. Arrested and charged in May,
Clanton faces a preliminary hearing this week, and has condemned the
Berkeley PD for relying on the conjecture of random assholes. “My case
threatens to set a new standard in which rightwing extremists can select
targets for repression and have police enthusiastically and forcefully
pursue them,” he wrote in a statement.
The denizens of /pol/, meanwhile, are terribly proud of their work, and
fellow Trump boosters have used their platforms to applaud it.
Conspiracy theorist Jack Posobiec
called it a new form of “facial recognition,” as if it were in any way
forensic, and lent credence to another dubious victory for the forces of
weaponized autism: supposed coordination with the Russian government to
take out ISIS camps in Syria. 4chan users are now routinely
deconstructing raw videos of terrorist training sites and the like to
make estimations about where they are, then sending those findings to
the Russian Ministry of Defense’s Twitter account. There is zero reason
to believe, as Posobiec and others contend, that 4chan has ever “called in an airstrike,” nor that Russia even bothered to look at the meager “intel” offered, yet the aggrandizing myth persists.
Since “autistic” has become a catchall idiom on 4chan, the self-defined
mentality of anyone willing to spend time reading and contributing to
the site, it’s impossible to know how many users are diagnosed with the
condition, or could be, or earnestly believe that it correlates to their
own experience, regardless of professional medical opinion. They tend
to assume, at any rate, that autistic personalities are readily drawn to
the board as introverted, societal misfits in search of connection. The
badge of “autist” conveys the dueling attitudes of pride and loathing
at work in troll communities: They may be considered and sometimes feel
like failures offline — stereotyped as sexless, jobless and
immature — but this is because they are different, transgressive, in a
sense better, elevated from the realm of polite, neurotypical normies. Their handicap is a virtue.
CounterPunch | Sitting alone in my room watching videos on YouTube, hearing sounds
from across the hall of my roommate watching Netflix, the obvious point
occurs to me that a key element of the demonic genius of late capitalism
is to enforce a crushing passiveness on the populace. With
social atomization comes collective passiveness—and with collective
passiveness comes social atomization. The product (and cause) of this
vicious circle is the dying society of the present, in which despair can
seem to be the prevailing condition. With an opioid epidemic raging and, more generally, mental illness affecting 50 percent of Americans at some point in their lifetime, it’s clear that the late-capitalist evisceration of civil society
has also eviscerated, on a broad scale, the individual’s sense of
self-worth. We have become atoms, windowless monads buffeted by
bureaucracies, desperately seeking entertainment as a tonic for our
angst and ennui.
The old formula of the psychoanalyst D. W. Winnicott is as relevant as it always will be: “It is creative apperception more than anything that makes the individual feel that life is worth living.” If so many have come to feel alienated from life itself, that is largely because they don’t feel creative, free, or active…
Noam Chomsky, in the tradition of Marx, is fond of saying that technology is “neutral,” neither
beneficent nor baleful in itself but only in the context of particular
social relations, but I’m inclined to think television is a partial
exception to that dictum. I recall the Calvin and Hobbes strip in
which, while sitting in front of a TV, Calvin says, “I try to make
television-watching a complete forfeiture of experience. Notice how I
keep my jaw slack, so my mouth hangs open. I try not to swallow either,
so I drool, and I keep my eyes half-focused, so I don’t use any muscles
at all. I take a passive entertainment and extend the passivity to my
entire being. I wallow in my lack of participation and response. I’m
utterly inert.” Where before one might have socialized outside, gone to a
play, or discussed grievances with fellow workers and strategized over
how to resolve them, now one could stay at home and watch a passively
entertaining sitcom that imbued one with the proper values of
consumerism, wealth accumulation, status-consciousness, objectification
of women, subordination to authority, lack of interest in politics, and
other “bourgeois virtues.” The more one cultivated a relationship with
the television, the less one cultivated relationships with people—or
with one’s creative capacities, which “more than anything else make the
individual feel that life is worth living.”
Television is the perfect technology for a mature capitalist society,
and has surely been of inestimable value in keeping the population
relatively passive and obedient—distracted, idle, incurious, separated
yet conformist. Doubtless in a different kind of society it could have a
somewhat more elevated potential—programming could be more edifying,
devoted to issues of history, philosophy, art, culture, science—but in
our own society, in which institutions monomaniacally fixated on
accumulating profit and discouraging critical thought (because it’s
dangerous) have control of it, the outcome is predictable. The average
American watches about five hours of TV a day, while 60 percent of Americans have subscription services like Netflix, Amazon Prime, and Hulu. Sixty-five percent of homes have three or more TV sets.
Movie-watching, too, is an inherently passive pastime. Theodor Adorno
remarked, “Every visit to the cinema, despite the utmost watchfulness,
leaves me dumber and worse than before.” To sit in a movie theater (or
at home) with the lights out, watching electronic images flit by,
hearing blaring noises from huge surround-sound speakers, is to
experience a kind of sensory overload while being almost totally
inactive. And then the experience is over and you rub your eyes and try
to become active and whole again. It’s different from watching a play,
where the performers are present in front of you, the art is enacted
right there organically and on a proper human scale, there is no sensory
overload, no artificial splicing together of fleeting images, no
glamorous cinematic alienation from your own mundane life.
Since the 1990s, of course, electronic media have exploded to the point of utterly dominating our lives. For example, 65 percent of U.S. households include someone who plays video games regularly. Over three-quarters
of Americans own a smartphone, which, from anecdotal observation, we
know tends to occupy an immense portion of their time. The same
proportion has broadband internet service at home, and 70 percent of
Americans use social media. As an arch-traditionalist, I look askance at
all this newfangled electronic technology (even as I use it
constantly). It seems to me that electronic mediation of human
relationships, and of life itself, is inherently alienating and
destructive, insofar as it atomizes or isolates. There’s something
anti-humanistic about having one’s life be determined by algorithms
(algorithms invented and deployed, in many cases, by private
corporations). And the effects on mental functioning are by no means
benign: studies have confirmed the obvious,
that “the internet may give you an addict’s brain,” “you may feel more
lonely and jealous,” and “memory problems may be more likely”
(apparently because of information overload). Such problems manifest a
passive and isolated mode of experience.
But this is the mode of experience of neoliberalism, i.e.,
hyper-capitalism. After the upsurge of protest in the 1960s and early
’70s against the corporatist regime of centrist liberalism, the most
reactionary sectors of big business launched a massive counterattack
to destroy organized labor and the whole New Deal system, which was
eating into their profits and encouraging popular unrest. The
counterattack continues in 2018, and, as we know, has been wildly
successful. The union membership rate in the private sector is a mere 6.5 percent, a little less than it was on the eve of the Great Depression, and the U.S. spends much less on
social welfare than comparable OECD countries. Such facts have had
predictable effects on the cohesiveness of the social fabric.