Sunday, June 17, 2018

Musean Hypernumbers


archive.is |  Musean hypernumbers are an algebraic concept envisioned by Charles A. Musès (1919–2000) to form a complete, integrated, connected, and natural number system.[1][2][3][4][5] Musès sketched certain fundamental types of hypernumbers and arranged them in ten "levels", each with its own associated arithmetic and geometry.
Criticized mostly for a lack of mathematical rigor and unclear defining relations, Musean hypernumbers are often perceived as unfounded mathematical speculation. This impression was not helped by Musès' outspoken confidence in their applicability to fields far beyond what one might expect from a number system, including consciousness, religion, and metaphysics.
The term "M-algebra" was used by Musès for investigation into a subset of his hypernumber concept (the 16 dimensional conic sedenions and certain subalgebras thereof), which is at times confused with the Musean hypernumber level concept itself. The current article separates this well-understood "M-algebra" from the remaining controversial hypernumbers, and lists certain applications envisioned by the inventor.

"M-algebra" and "hypernumber levels"[edit]

Musès was convinced that the basic laws of arithmetic on the reals are in direct correspondence with a concept where numbers could be arranged in "levels", where fewer arithmetical laws would be applicable with increasing level number.[3] However, this concept was not developed much further beyond the initial idea, and defining relations for most of these levels have not been constructed.
Higher-dimensional numbers built on the first three levels were called "M-algebra"[6][7] by Musès if they yielded a distributive multiplication, unit element, and multiplicative norm. It contains kinds of octonions and historical quaternions (except A. MacFarlane's hyperbolic quaternions) as subalgebras. A proof of completeness of M-algebra has not been provided.

Conic sedenions / "16 dimensional M-algebra"[edit]

The term "M-algebra" (after C. Musès[6]) refers to number systems that are vector spaces over the reals, whose bases consist in roots of −1 or +1, and which possess a multiplicative modulus. While the idea of such numbers was far from new and contains many known isomorphic number systems (like e.g. split-complex numbers or tessarines), certain results from 16 dimensional (conic) sedenions were a novelty. Musès demonstrated the existence of a logarithm and real powers in number systems built to non-real roots of +1.
 

Saturday, June 16, 2018

Even Given Eyes To See, We Know Nothing About What We're Looking At...,



cheniere  |  In the light of other past researches, we were very much attracted when we first saw his typescript last year, by the author's perceptive treatment of the operational-theoretic significance of measurement, in relation to the broader question of the meaning of negative entropy. Several years ago [1] we had constructed a pilot model of an electro-mechanical machine we described as the Critical Probability Sequence Calculator, designed and based on considerations stemming from the mathematical principles of a definite discipline which we later [2] called chronotopology: the topological (not excluding quantitative relations) and most generalized analysis of the temporal process, of all time series - the science of time, so to speak. To use a popular word in a semi-popular sense, the CPSC was a 'time-machine,' as its input data consist solely of known past times, and its output solely of most probable future times. That is, like the Hamiltonian analysis of action in this respect, its operation was concerned only with more general quantities connected with the structure of the temporal process itself, rather than with the nature of the particular events or occurrences involved or in question, although it can tell us many useful things about those events. However, as an analogue computer, it was built simply to demonstrate visibly the operation of interdependences already much more exactly stated as chronotopological relationships.


That situations themselves should have general laws of temporal structure, quite apart from their particular contents, is a conclusion that must be meaningful to the working scientist; for it is but a special example of the truth of scientific abstraction, and a particularly understandable one in the light of the modern theory of games, which is a discipline that borders on chronotopology.

One of the bridges from ordinary physics to chronotopology is the bridge on which Rothstein's excellent analyses also lie: the generalized conception of entropy. And in some of what follows we will summarize what we wrote in 1951 in the paper previously referred to, and in other places. We will dispense with any unnecessary apologies for the endeavor to make the discussion essentially understandable to the intelligent layman.

Modern studies in communication theory (and communications are perhaps the heart of our present civilization) involve time series in a manner basic to their assumptions. A great deal of 20th century interest is centering on the more and more exact use and measurement of time intervals. Ours might be epitomized as the Century of Time, for only since the 1900s has so much depended on split-second timing and the accurate measurement of that timing in fields ranging from electronics engineering to fast-lens photography.

Another reflection of the importance of time in our era is the emphasis on high speeds, i.e. minimum time intervals for action, and thus more effected in less time. Since power can be measured by energy‑release per time‑unit, the century of time becomes, and so it has proved, the Century of Power. To the responsible thinker such an equation is fraught with profound and significant consequences for both science and humanity. Great amounts of energy delivered in minimal times demand

a) extreme accuracy of knowledge and knowledge-application concerning production of the phenomena,

b) full understanding of the nature and genesis of the phenomena involved; since at such speeds and at such amplitudes of energy a practically irrevocable, quite easily disturbing set of consequences is assured. That we have mastered (a) more than (b) deserves at least this parenthetical mention. And yet there is a far‑reaching connection between the two, whereby any more profound knowledge will inevitably lead in turn to a sounder basis for actions stemming from that knowledge.

No longer is it enough simply to take time for granted and merely apportion and program it in a rather naively arbitrary fashion. Time must be analyzed, and its nature probed for whatever it may reveal in the way of determinable sequences of critical probabilities. The analysis of time per se is due to become, in approximate language, quite probably a necessity for us as a principal mode of attack by our science on its own possible shortcomings. For with our present comparatively careening pace of technical advance and action, safety factors, emergent from a thorough study and knowledge of the nature of this critical quantity 'time,' are by that very nature most enabled to be the source of what is so obviously lacking in our knowledge on so many advanced levels: adequate means of controlling consequences and hence direction of advance.

Chronotopology (deriving from Chronos + topos + logia) is the study of the intra-connectivity of time (including the inter-connectivity of time points and intervals), the nature or structure of time, if you will; how it is contrived in its various ways of formation and how those structures function, in operation and interrelation.

It is simple though revealing, and it is practically important to the development of our subject, to appreciate that seconds, minutes, days, years, centuries, et al., are not time, but merely the measures of time; that they are no more time than rulers are what they measure. Of the nature and structure of time itself investigations have been all but silent. As with many problems lying at the foundations of our thought and procedures, it has been taken for granted and thereby neglected - as for centuries before the advent of mathematical logic were the foundations of arithmetic. The "but" in the above phrase "investigations have been all but silent" conveys an indirect point. As science has advanced, time has had to be used increasingly as a parameter, implicitly (as in the phase spaces of statistical mechanics) or explicitly.

Birkhoff's improved enunciation of the ergodic problem [3] actually was one of a characteristic set of modern efforts to associate a structure with time in a formulated manner. Aside from theoretical interest, those efforts have obtained a wide justification in practice and in terms of the greater analytic power they conferred. They lead directly to chronotopological conceptions as their ideational destination and basis.

The discovery of the exact formal congruence of a portion of the theory of probability (that for stochastic processes) with a portion of the theory of general dynamics is another significant outcome of those efforts. Such a congruence constitutes more or less of a suggestion that probability theory has been undergoing, ever since its first practical use as the theory of probable errors in astronomy, a gradual metamorphosis into the actual study of governing time-forces and their configurations, into chronotopology. And the strangely privileged character of the time parameter in quantum mechanics is well known – another fact pointing in the same direction.

Now Birkhoff's basic limit theorem may be analyzed as a consequence of the second law of thermodynamics, since all possible states of change of a given system will become exhausted with increase of entropy [4] as time proceeds. It is to the credit of W. S. Franklin to have been the first specifically to point out [5] that the second law of thermodynamics "relates to the inevitable forward movement which we call time"; not clock-time, however, but time more clearly exhibiting its nature, and measured by what Eddington has termed an entropy-clock [6]. When we combine this fact with the definition of increase of entropy established by Boltzmann, Maxwell, and Gibbs as progression from less to more probable states, we can arrive at a basic theorem in chronotopology:

T1. The movement of time is an integrated movement toward regions of ever-increasing probability.

Corollary: It is thus a selective movement, in a sense to be determined by a more accurate understanding of probability and of what 'probability' actually amounts to in any given situation.

This theorem, supported by modern thermodynamic theory, indicates that it would no longer be correct for the Kantian purely subjective view of time entirely to dominate modern scientific thinking, as it has thus far tended to do since Mach. Rather, a truer balance of viewpoint is indicated whereby time, though subjectively effective too, nevertheless possesses definite structural and functional characteristics which can be formulated quantitatively. We shall eventually see that time may be defined as the ultimate causal pattern of all energy‑release and that this release is of an oscillatory nature. To put it more popularly, there are time waves.

John Nash Ott Showed Us How To See A Little Further...,


whale.to  |  John Ott, Sc.D. Hon., a naturalist photographer world famous for his groundbreaking work on the health effects of sunlight and artificial light, died peacefully April 6 at the age of 90 in Sarasota, Florida.

He pioneered the use of rare-earth phosphors in fluorescent tubes to create indoors the effect of natural sunlight, often referred to as full-spectrum light. He also identified that artificial light and cathode-ray-tube radiation, produced by fluorescent tubes and television sets, created plant mutations and unnatural forms of plant development, with potentially corresponding effects in humans.

His wonderful dual legacy to us is the understanding that sunlight is a holistic and essential nutrient to a healthy life and that artificial light, produced by conventional bulbs and fluorescent tubes, is not healthy. He showed that artificial light can be converted to full spectrum to simulate sunlight for indoor use. 

John Ott's passion for photography in studying motion and life led to his involvement with time-lapse photography. He was hired by Walt Disney to film the famous time-lapse plant growth sequences used in Disney Studios' The Secrets Of Life, Nature's Half Acre, the pumpkin-to-coach sequence in the movie Cinderella and other nature films. It was during this period that he identified the biological effects of artificial light on plants and animals and the need for natural light in their growth. Over the next 40 years he continued his groundbreaking research into the effects that natural light, brought indoors, could have on plants, animals and now people. He conducted research on the effects natural sunlight had on the learning and behavior of children, the increased generation of Vitamin D, the production of melatonin/serotonin when light enters the eyes, and the light-deprivation syndrome now widely known as "Seasonal Affective Disorder" or SAD. He advocated the continual need for sunlight in our lives and for replacing standard indoor lighting with a full-spectrum variety that better simulates sunlight.

Dr. Ott received an Honorary Doctorate of Science Degree from Loyola University of Chicago. He was the founder of John Ott Pictures and John Ott Laboratories. He authored many books and articles and gave literally thousands of lectures at conferences, scientific symposiums and to the general public over the years, including lectures to the Cancer Control Society. 

His widely read books include Health And Light: The Effects Of Natural And Artificial Light On Man And Other Living Things; Light, Radiation And You: How To Stay Healthy and My Ivory Cellar: The Story Of Time Lapse Photography. The Cancer Control Society compiled an extensive collection of articles on John Ott's work in their special edition of the Cancer Control Journal titled Let There Be Light.

Friday, June 15, 2018

Flatlanders Squinting At The Connectome


edge |  Because we use the word queen—the Egyptians use the word king—we have a misconception of the role of the queen in the society. The queen is usually the only reproductive in a honey bee colony. She’s specialized entirely to that reproductive role. It’s not that she’s any way directing the society; it’s more accurate to say that the behavior and activity of the queen is directed by the workers. The queen is essentially an egg-laying machine. She is fed unlimited high-protein, high-carbohydrate food by the nurse bees that tend to her. She is provided with an array of perfectly prepared cells to lay eggs in. She will lay as many eggs as she can, and the colony will raise as many of those eggs as they can in the course of the day. But the queen is not ruling the show. She only flies once in her life. She will leave the hive on a mating flight; she’ll be mated by up to twenty male bees, in the case of the honey bee, and then she stores that semen for the rest of her life. That is the role of the queen. She is the reproductive, but she is not the ruler of the colony.

Many societies have attached this sense of royalty, and I think that as much reflects that we see the order inside the honey bee society and we assume that there must be some sort of structure that maintains that order. We see this one individual who is bigger and we anthropomorphize that that somehow must be their leader. But no, there is no way that it’s appropriate to say that the queen has any leadership role in a honey bee society.

A honey bee queen would live these days two to three years, and it's getting shorter. It’s not that long ago that if you read the older books, they would report that queens would live up to seven years. We’re not seeing queens last that long now. It’s more common for queens to be replaced every two to three years. All the worker honey bees are female and the queen is female—it’s a matriarchal society.

An even more recent and exciting revolution happening now is this connectomic revolution, where we’re able to map in exquisite detail the connections of a part of the brain, and soon even an entire insect brain. It’s giving us absolute answers to questions that we would have debated even just a few years ago; for example, does the insect brain work as an integrated system? And because we now have a draft of a connectome for the full insect brain, we can absolutely answer that question. That completely changes not just the questions that we’re asking, but our capacity to answer questions. There’s a whole new generation of questions that become accessible.

When I say a connectome, what I mean is an absolute map of the neural connections in a brain. That’s not a trivial problem. It's okay at one level to, for example with a light microscope, get a sense of the structure of neurons, to reconstruct some neurons and see where they go, but knowing which neurons connect with other neurons requires another level of detail. You need electron microscopy to look at the synapses.
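In data terms, a connectome of the kind described here is just a directed graph of synaptic connections. The toy sketch below (hypothetical neuron names and synapse counts, not real connectomic data) shows the idea:

```python
# Toy sketch of a connectome as a directed, weighted graph.
# Neuron names and synapse counts are invented for illustration only.

connectome = {
    "ORN_1": {"PN_A": 12, "LN_3": 4},   # presynaptic neuron -> {postsynaptic neuron: synapse count}
    "PN_A":  {"KC_7": 9},
    "LN_3":  {"PN_A": 2},
    "KC_7":  {},
}

def downstream(neuron, hops=1):
    """Return neurons reachable from `neuron` within `hops` synaptic steps."""
    frontier, seen = {neuron}, set()
    for _ in range(hops):
        frontier = {t for n in frontier for t in connectome.get(n, {})} - seen
        seen |= frontier
    return seen

print(downstream("ORN_1", hops=2))   # e.g. {'PN_A', 'LN_3', 'KC_7'}
```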

The main question I’m asking myself at the moment is about the nature of the animal mind, and how minds and conscious minds evolved. The perspective I’m taking on that is to try to examine the mind's mechanisms of behavior in organisms that are far simpler than ours.

I’ve got a particular focus on insects, specifically on the honey bee. For me, it remains a live question as to whether we can think of the honey bee as having any kind of mind, or if it's more appropriate to think of it as something more mechanistic, more robotic. I tend to lean towards thinking of the honey bee as being a conscious agent, certainly a cognitively effective agent. That’s the biggest question I’m exploring for myself.

There’s always been an interest in animals, natural history, and animal behavior. Insects have always had this particular point of tension because they are unusually inaccessible compared to so many other animals. When we look at things like mammals and dogs, we are so drawn to empathize with them that it tends to mask so much. When we’re looking at something like an insect, they’re doing so much, but their faces are completely expressionless and their bodies are completely alien to ours. They operate on a completely different scale. You cannot empathize or emote. It’s not immediately clear what they are, whether they’re an entity or whether they’re a mechanism.

Are Space And Time Quantized?


Forbes |  Throughout the history of science, one of the prime goals of making sense of the Universe has been to discover what's fundamental. Many of the things we observe and interact with in the modern, macroscopic world are composed of, and can be derived from, smaller particles and the underlying laws that govern them. The idea that everything is made of elements dates back thousands of years, and has taken us from alchemy to chemistry to atoms to subatomic particles to the Standard Model, including the radical concept of a quantum Universe.

But even though there's very good evidence that all of the fundamental entities in the Universe are quantum at some level, that doesn't mean that everything is both discrete and quantized. So long as we still don't fully understand gravity at a quantum level, space and time might still be continuous at a fundamental level. Here's what we know so far.

Quantum mechanics is the idea that, if you go down to a small enough scale, everything that contains energy, whether it's massive (like an electron) or massless (like a photon), can be broken down into individual quanta. You can think of these quanta as energy packets, which sometimes behave as particles and other times behave as waves, depending on what they interact with.

Everything in nature obeys the laws of quantum physics, and our "classical" laws that apply to larger, more macroscopic systems can always (at least in theory) be derived, or emerge, from the more fundamental quantum rules. But not everything is necessarily discrete, or capable of being divided into a localized region of space.


(Caption from an accompanying figure: the energy level differences in Lutetium-177. Only specific, discrete energy levels are allowed; while the energy levels are discrete, the positions of the electrons are not.)

If you have a conducting band of metal, for example, and ask "where is this electron that occupies the band," there's no discreteness there. The electron can be anywhere, continuously, within the band. A free photon can have any wavelength and energy; no discreteness there. Just because something is quantized, or fundamentally quantum in nature, doesn't mean everything about it must be discrete.
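A textbook contrast makes the point concrete (hydrogen bound states are used below as a standard illustration, not the article's Lutetium-177 example): the allowed bound-state energies form a discrete ladder, while a free photon's energy varies continuously with wavelength.

```python
# Discrete vs. continuous: textbook hydrogen bound-state levels vs. free-photon energy.
# Standard formulas, used only to illustrate the article's distinction.

H, C, RYDBERG_EV = 6.626e-34, 2.998e8, 13.606   # Planck constant, speed of light, hydrogen Rydberg energy

def hydrogen_level_eV(n):
    return -RYDBERG_EV / n**2                   # only n = 1, 2, 3, ... allowed -> discrete ladder

def photon_energy_eV(wavelength_m):
    return H * C / wavelength_m / 1.602e-19     # any positive wavelength allowed -> continuum

print([hydrogen_level_eV(n) for n in (1, 2, 3, 4)])       # ~ -13.6, -3.4, -1.5, -0.85 eV
print(photon_energy_eV(500e-9), photon_energy_eV(500.001e-9))   # arbitrarily close energies are allowed
```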

The idea that space (or space and time, since they're inextricably linked by Einstein's theories of relativity) could be quantized goes way back to Heisenberg himself. Famous for the Uncertainty Principle, which fundamentally limits how precisely we can measure certain pairs of quantities (like position and momentum), Heisenberg realized that certain quantities diverged, or went to infinity, when you tried to calculate them in quantum field theory.

Thursday, June 14, 2018

Time and its Structure (Chronotopology)


intuition |  MISHLOVE: I should mention here, since you've used the term, that chronotopology is the name of the discipline which you founded, which is the study of the structure of time. 

MUSES: Do you want me to comment on that? 

MISHLOVE: Yes, please. 

MUSES: In a way, yes, but in a way I didn't found it. I was thinking cybernetics, for instance, was started formally by Norbert Wiener, but it began with the toilet tank that controlled itself. When I was talking with Wiener at Ravello, he happily agreed with this. 

MISHLOVE: The toilet tank. 

MUSES: He says, "Oh yes." The self-shutting-off toilet tank is the first cybernetic advance of mankind. 

MISHLOVE: Oh. And I suppose chronotopology has an illustrious beginning like this also. 

MUSES: Well, better than the toilet tank, actually. It has a better beginning than cybernetics. 

MISHLOVE: In effect, does it go back to the study of the ancient astrologers? 

MUSES: Well, it goes back to the study of almost all traditional cultures. The word astronomia, even the word mathematicus, meant someone who studied the stars, and in Kepler's sense they calculated the positions to know the qualities of time. But that's an independent hypothesis. The hypothesis of chronotopology is that, whether you have pointers of any kind -- ionospheric disturbances, planetary orbits, or whatnot -- independently of those pointers, time itself has a flux, has a wave motion, the object being to surf on time. 

MISHLOVE: Now, when you talk about the wave motion of time, I'm getting real interested and excited, because in quantum physics there's this notion that the underlying basis for the physical universe are these waves, even probability waves -- nonphysical, nonmaterial waves -- sort of underlying everything. 

MUSES: Very, very astute, because these waves are standing waves. Actually the wave-particle so-called paradox isn't that bad, when you consider that a particle is a wave packet, a packet of standing waves. That's why an electron can go through a plate and leave wavelike things. Actually our bodies are like fountains. The fountain has a shape only because it's being renewed every minute, and our bodies are being renewed. So we are standing waves; we're no exception. 

MISHLOVE: This deep structure of matter, where we can say what we really are in our bodies is not where we appear to be -- you're saying the same thing is true of time. It's not quite what it appears to be. 

MUSES: No, we're a part of this wave structure, and matter and energy all occur in waves, and time is the master control. I will give you an illustration of that. If you'll take a moment of time, this moment cuts through the entire physical universe as we're talking. It holds all of space in itself. But one point of space doesn't hold all of time. In other words, time is much bigger than space. 

MISHLOVE: That thought sort of made me gasp a second -- all of physical space in each now moment -- 

MUSES: Is contained in a point of time, which is a moment. And of course, a line of time is then an occurrence, and a wave of time is a recurrence. And then if you get out from the circle of time, which Nietzsche saw, the eternal recurrence -- if you break that, as we know we do, we develop, and then we're on a helix, because we come around but it's a little different each time. 

MISHLOVE: Well, now you're beginning to introduce the notion of symbols -- point, line, wave, helix, and so on. 

MUSES: Yes, the dimensions of time. 

MISHLOVE: One of the interesting points that you seem to make in your book is that symbols themselves -- words, pictures -- point to the deeper structure of things, including the deeper structure of time. 

MUSES: Yes. Symbols I would regard as pointers to their meanings, like revolving doors. There are some people, however, who have spent their whole lives walking in the revolving door and never getting out of it. 

Time and its Structure (Chronotopology)
Foreword by Charles A. Muses to "Communication, Organization, And Science" by Jerome Rothstein - 1958 

Your Genetic Presence Through Time


counterpunch |  The propagation through time of your personal genetic presence within the genetic sea of humanity can be visualized as a wave that arises out of the pre-conscious past before your birth, moves through the streaming present of your conscious life, and dissipates into the post-conscious future after your death.

You are a pre-conscious genetic concentration drawn out of the genetic diffusion of your ancestors. If you have children who survive you then your conscious life is the time of increase of your genetic presence within the living population. Since your progeny are unlikely to reproduce exponentially, as viruses and bacteria do, your post-conscious genetic presence is only a diffusion to insignificance within the genetic sea of humanity.

During your conscious life, you develop a historical awareness of your pre-conscious past, with a personal interest that fades with receding generations. Also during your conscious life, you can develop a projective concern about your post-conscious future, with a personal interest that fades with succeeding generations and with increasing predictive uncertainty.

Your conscious present is the sum of: your immediate conscious awareness, your reflections on your prior conscious life, your historical awareness of your pre-conscious past, and your concerns about your post-conscious future.

Your time of conscious present becomes increasingly remote in the historical awareness of your succeeding generations.

Your loneliness in old age is just your sensed awareness of your genetic diffusion into the living population of your conscious present and post-conscious future.

Wednesday, June 13, 2018

Globalists: The End of Empire and the Birth of Neoliberalism


BostonReview  |  In 1907, in the waning days of the Austro-Hungarian empire, Austria saw its first elections held under universal male suffrage. For some this was progress, but others felt threatened by the extension of the franchise and the mass demonstrations that had brought it about.

The conservative economist Ludwig von Mises was among the latter. “Unchallenged,” he wrote, “the Social Democrats assumed the ‘right to the street.’” The elections and protests implied a frightening new kind of politics, in which the state’s authority came not from above but from below. When a later round of mass protests was violently suppressed—with dozens of union members killed—Mises was greatly relieved: “Friday’s putsch has cleansed the atmosphere like a thunderstorm.”

In the early twentieth century, there were many people who saw popular sovereignty as a problem to be solved. In a world where dynastic rule had been swept offstage, formal democracy might be unavoidable; and elections served an important role in channeling the demands that might otherwise be expressed through “the right to the street.” But the idea that the people, acting through their political representatives, were the highest authority and entitled to rewrite law, property rights, and contracts in the public interest—this was unacceptable. One way or another, government by the people had to be reined in.

Mises’ writings from a century ago often sound as if they belong in speeches by modern European conservatives such as German Bundestag President Wolfgang Schäuble. The welfare state is unaffordable, Mises says; workers’ excessive wage demands have rendered them unemployable, governments’ uncontrolled spending will be punished by financial markets, and “English and German workers may have to descend to the lowly standard of life of the Hindus and the coolies to compete with them.” 

Quinn Slobodian argues that the similarities between Mises then and Schäuble today are not a coincidence. They are products of a coherent body of thought: neoliberalism, or the Geneva school. His book, Globalists: The End of Empire and the Birth of Neoliberalism, is a history of the “genealogy of thought that linked the neoliberal world economic imaginary from the 1920s to the 1990s.”

The book puts to rest the idea that “neoliberal” lacks a clear referent. As Slobodian meticulously documents, the term has been used since the 1920s by a distinct group of thinkers and policymakers who are unified both by a shared political vision and a web of personal and professional links.
How much did the Geneva school actually shape political outcomes, as opposed to reflecting them? 

John Maynard Keynes famously (and a bit self-servingly) claimed that, “Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist . . . some academic scribbler of a few years back.” Not everyone will share this view, but by highlighting a series of seven “moments”—three before World War II and four after—Slobodian definitively establishes the existence of neoliberalism as a coherent intellectual project—one that, at the very least, has been well represented in the circles of power.

Elites Have No Skin In The Game


mises  |  To review Skin in the Game is a risky undertaking. The author has little use for book reviewers who, he tells us, “are bad middlemen. … Book reviews are judged according to how plausible and well-written they are; never in how they map the book (unless of course the author makes them responsible for misrepresentations).”

The risk is very much worth undertaking, though, because Skin in the Game is an excellent book, filled with insights. These insights stress a central antithesis. Irresponsible people, with what C.D. Broad called “clever silly” intellectuals prominent among them, defend reckless policies that impose risks on others but not on themselves. They have no “skin in the game,” and in this to Taleb lies their chief defect.

Interventionist foreign policy suffers from this defect. “A collection of people classified as interventionistas … who promoted the Iraq invasion of 2003, as well as the removal of the Libyan leader in 2011, are advocating the imposition of additional such regime change on another batch of countries, which includes Syria, because it has a ‘dictator’. So we tried that thing called regime change in Iraq, and failed miserably. … But we satisfied the objective of ‘removing a dictator.’ By the same reasoning, a doctor would inject a patient with ‘moderate’ cancer cells to improve his cholesterol numbers, and proudly claim victory after the patient is dead, particularly if the postmortem showed remarkable cholesterol readings.”

But what has this to do with risk? The fallacy of the interventionists, Taleb tells us, is that they disregard the chance that their schemes will fail to work as planned. A key theme of Taleb’s work is that uncertain outcomes mandate caution.

“And when a blowup happens, they invoke uncertainty, something called a Black Swan (a high-impact unexpected event), … not realizing that one should not mess with a system if the results are fraught with uncertainty, or, more generally, should avoid engaging in an action with a big downside if one has no idea of the outcomes.”

The same mistaken conception of risk affects economic policy. “For instance, bank blowups came in 2008 because of the accumulation of hidden and asymmetric risks in the system: bankers, master risk transferors, could make steady money from a certain class of concealed explosive risks, use academic risk models that don’t work except on paper … then invoke uncertainty after a blowup … and keep past income — what I have called the Bob Rubin trade.”

Instead of relying on mathematical models, economists should realize that the free market works. Why use misguided theory to interfere with success in practice? “Under the right market structure, a collection of idiots produces a well-functioning market. … Friedrich Hayek has been, once again, vindicated. Yet one of the most cited ideas in history, that of the invisible hand, appears to be the least integrated into the modern psyche.”

Upsetting a complex system like the free market can have disastrous consequences. Given this truth, libertarianism is the indicated course of action. "We libertarians share a minimal set of beliefs, the central one being to substitute the rule of law for the rule of authority. Without necessarily realizing it, libertarians believe in complex systems."

Tuesday, June 12, 2018

"Privacy" Isn't What's Really At Stake...,


NewYorker  |  The question about national security and personal convenience is always: At what price? What do we have to give up? On the criminal-justice side, law enforcement is in an arms race with lawbreakers. Timothy Carpenter was allegedly able to orchestrate an armed-robbery gang in two states because he had a cell phone; the law makes it difficult for police to learn how he used it. Thanks to lobbying by the National Rifle Association, federal law prohibits the National Tracing Center from using a searchable database to identify the owners of guns seized at crime scenes. Whose privacy is being protected there?

Most citizens feel glad for privacy protections like the one in Griswold, but are less invested in protections like the one in Katz. In “Habeas Data,” Farivar analyzes ten Fourth Amendment cases; all ten of the plaintiffs were criminals. We want their rights to be observed, but we also want them locked up.

On the commercial side, are the trade-offs equivalent? The market-theory expectation is that if there is demand for greater privacy then competition will arise to offer it. Services like Signal and WhatsApp already do this. Consumers will, of course, have to balance privacy with convenience. The question is: Can they really? The General Data Protection Regulation went into effect on May 25th, and privacy-advocacy groups in Europe are already filing lawsuits claiming that the policy updates circulated by companies like Facebook and Google are not in compliance. How can you ever be sure who is eating your cookies?

Possibly the discussion is using the wrong vocabulary. “Privacy” is an odd name for the good that is being threatened by commercial exploitation and state surveillance. Privacy implies “It’s nobody’s business,” and that is not really what Roe v. Wade is about, or what the E.U. regulations are about, or even what Katz and Carpenter are about. The real issue is the one that Pollak and Martin, in their suit against the District of Columbia in the Muzak case, said it was: liberty. This means the freedom to choose what to do with your body, or who can see your personal information, or who can monitor your movements and record your calls—who gets to surveil your life and on what grounds.

As we are learning, the danger of data collection by online companies is not that they will use it to try to sell you stuff. The danger is that that information can so easily fall into the hands of parties whose motives are much less benign. A government, for example. A typical reaction to worries about the police listening to your phone conversations is the one Gary Hart had when it was suggested that reporters might tail him to see if he was having affairs: “You’d be bored.” They were not, as it turned out. We all may underestimate our susceptibility to persecution. “We were just talking about hardwood floors!” we say. But authorities who feel emboldened by the promise of a Presidential pardon or by a Justice Department that looks the other way may feel less inhibited about invading the spaces of people who belong to groups that the government has singled out as unpatriotic or undesirable. And we now have a government that does that. 


Smarter ____________ WILL NOT Take You With Them....,



nautilus  |  When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after [1]. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.
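A minimal caricature of that brute-force evolutionary search, with bit strings standing in for "programs" and a fixed target standing in for the environment (all parameters below are illustrative assumptions), looks like this:

```python
import random

# Caricature of evolution as brute-force search: bit strings stand in for "programs",
# and fitness is simply the match to a fixed target. Parameters are illustrative.

TARGET = [1] * 64
POP, GENS, MUT = 200, 300, 0.01

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(64)] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 64:
        break
    parents = population[: POP // 5]                     # truncation selection
    population = [
        [(1 - g) if random.random() < MUT else g         # point mutation
         for g in random.choice(parents)]
        for _ in range(POP)
    ]
print("generations:", gen, "best fitness:", max(map(fitness, population)), "/ 64")
```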

But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.
It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.
The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
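The rough arithmetic behind that figure can be sanity-checked with a toy additive model; the 10,000-loci count and equal effect sizes below are illustrative assumptions, not numbers from the article:

```python
import random, statistics

# Toy additive model behind the "~100 standard deviations" claim:
# N loci of small, equal effect, each favorable with probability 0.5.
# N = 10_000 and the equal-effect assumption are illustrative only.

N, PEOPLE = 10_000, 1_000
scores = [sum(random.random() < 0.5 for _ in range(N)) for _ in range(PEOPLE)]  # favorable-allele counts

sd = statistics.stdev(scores)          # ~ sqrt(N)/2 = 50 for p = 0.5
mean = statistics.mean(scores)         # ~ N/2 = 5_000
gain_in_sd = (N - mean) / sd           # flipping every locus to the favorable variant
print(round(sd), round(gain_in_sd))    # roughly 50, and roughly 100 standard deviations
```

The key point of the sketch is scaling: the population standard deviation grows like the square root of the number of loci, while the total achievable range grows linearly, so thousands of small effects can add up to an improvement of order 100 standard deviations.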

 

Proposed Policies For Advancing Embryonic Cell Germline-Editing Technology


niskanencenter |  In a previous post, I touched on the potential social and ethical consequences that will likely emerge in the wake of Dr. Shoukhrat Mitalipov’s recent experiment in germline-edited embryos. The short version: there’s probably no stopping the genetic freight train. However, there are steps we can take to minimize the potential costs, while capitalizing on the many benefits these advancements have to offer us. In order to do that, however, we need to turn our attention away from hyperbolic rhetoric of “designer babies” and focus on the near-term practical considerations—mainly, how we will govern the research, development, and application of these procedures.
Before addressing the policy concerns, however, it’s important to understand the fundamentals of what is being discussed in this debate. In the previous blog, I noted the difference between somatic cell editing and germline editing—one of the major ethical faultlines in this issue space. In order to have a clear perspective of the future possibilities, and current limitations, of genetic modification, let’s briefly examine how CRISPR actually works in practice. 

CRISPR stands for “clustered regularly interspaced short palindromic repeats”—a reference to segments of DNA that function as a defense used by bacteria to ward off foreign infections. That defense system essentially targets specific patterns of DNA in a virus, bacterium, or other threat and destroys it. This approach uses Cas9—an RNA-guided protein—to search through a cell’s genetic material until it finds a genetic sequence that matches the sequence programmed into its guide RNA. Once it finds its target, the protein cuts the two strands of the DNA helix. Repair enzymes can then heal the break in the DNA, or the gap can be filled using new genetic information introduced into the sequence. Conceptually, we can think of CRISPR as the geneticist’s variation of a “surgical laser knife, which allows a surgeon to cut out precisely defective body parts and replace them with new or repaired ones.”
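A toy sketch of that search-and-cut idea is shown below (the sequences are made up; real Cas9 targeting also requires a PAM site and tolerates some mismatches, none of which is modeled here):

```python
# Toy illustration of guide matching: find where a guide sequence matches the DNA
# and "cut" there. Sequences are invented; this is not a model of Cas9 biochemistry.

genome = "ATGCGTACGTTAGCCGATCGGATCCGTACGATCGTTACGGATCC"
guide  = "GATCGGATCCGTACGATCGT"          # made-up 20-nt guide matching a site in `genome`

site = genome.find(guide)                 # locate the sequence matching the guide
if site == -1:
    print("no match: no cut")
else:
    cut = site + len(guide) - 3           # cut a few bases before the guide's 3' end
    left, right = genome[:cut], genome[cut:]
    print(f"cut at position {cut}:")
    print(left)
    print(right)
```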

The technology is still cutting edge, and most researchers are still trying to get a handle on the technical difficulties associated with its use. Right now, we’re still in the Stone Age of genetic research. Even though we’ve made significant advancements in recent years, we’re still a long, long way from editing our children’s IQs on demand. That technology is much further in the future, and some doubt that we’ll ever be able to “program” inheritable traits into our individual genomes. In short, don’t expect any superhumanly intelligent, disease-resistant super soldiers any time soon.

The Parallels Between Artificial Intelligence and Genetic Modification
Few technologies inspire as much fantastical embellishment in popular media as genetic modification. In fact, the only technology that comes close in comparison—and indeed, whose rhetoric actually parallels it quite closely—is artificial intelligence (AI).

Monday, June 11, 2018

Office 365 CRISPR Editing Suite


gizmodo |  The gene-editing technology CRISPR could very well one day rid the world of its most devastating diseases, allowing us to simply edit away the genetic code responsible for an illness. One of the things standing in the way of turning that fantasy into reality, though, is the problem of off-target effects. Now Microsoft is hoping to use artificial intelligence to fix this problem. 

You see, CRISPR is fawned over for its precision. More so than earlier genetic technologies, it can accurately target and alter a tiny fragment of genetic code. But it’s still not always as accurate as we’d like it to be. Thoughts on how often this happens vary, but at least some of the time, CRISPR makes changes to DNA it was intended to leave alone. Depending on what those changes are, they could inadvertently result in new health problems, such as cancer.

Scientists have long been working on ways to fine-tune CRISPR so that fewer of these unintended effects occur. Microsoft thinks that artificial intelligence might be one way to do it. Working with computer scientists and biologists from research institutions across the U.S., the company has developed a new tool called Elevation that predicts off-target effects when editing genes with CRISPR. 

It works like this: If a scientist is planning to alter a specific gene, they enter its name into Elevation. The CRISPR system is made up of two parts, a protein that does the cutting and a synthetic guide RNA designed to match a DNA sequence in the gene they want to edit. Different guides can have different off-target effects depending on how they are used. Elevation will suggest which guide is least likely to result in off-target effects for a particular gene, using machine learning to figure it out. It also provides general feedback on how likely off-target effects are for the gene being targeted. The platform bases its learning both on Microsoft research and publicly available data about how different genetic targets and guides interact. 
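As a hedged sketch of what guide ranking of this general kind involves (toy features, toy labels, and a generic classifier; this is not Elevation's actual model, features, or interface):

```python
# Toy sketch of ranking candidate guides by predicted off-target risk.
# Features, labels, and the tiny training set are invented; this is not
# Elevation's actual model, training data, or API.

from sklearn.linear_model import LogisticRegression

# Each guide -> simple made-up features: [GC fraction, number of near-match off-target sites]
train_X = [[0.40, 1], [0.65, 6], [0.55, 3], [0.70, 8], [0.45, 2], [0.60, 5]]
train_y = [0, 1, 0, 1, 0, 1]              # 1 = off-target activity observed in the toy data

model = LogisticRegression().fit(train_X, train_y)

candidates = {
    "guide_A": [0.50, 2],
    "guide_B": [0.68, 7],
    "guide_C": [0.42, 1],
}
risk = {g: model.predict_proba([f])[0][1] for g, f in candidates.items()}
for g, p in sorted(risk.items(), key=lambda kv: kv[1]):
    print(f"{g}: predicted off-target risk {p:.2f}")   # lowest-risk guide listed first
```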

The work is detailed in a paper published Wednesday in the journal Nature Biomedical Engineering. The tool is publicly available for researchers to use for free. It works alongside a tool released by Microsoft in 2016 called Azimuth that predicts on-target effects.

There is lots of debate over how problematic the off-target effects of CRISPR really are, as well as over how to fix them. Microsoft’s new tool, though, will certainly be a welcome addition to the toolbox. Over the past year, Microsoft has doubled down on efforts to use AI to attack health care problems.

Who Will Have Access To Advanced Reproductive Technology?



futurism |  In November 2017, a baby named Emma Gibson was born in the state of Tennessee. Her birth, to a 25-year-old woman, was fairly typical, but one aspect made her story unique: she was conceived 24 years prior from anonymous donors, when Emma’s mother was just a year old.  The embryo had been frozen for more than two decades before it was implanted into her mother’s uterus and grew into the baby who would be named Emma.

Most media coverage hailed Emma’s birth as a medical marvel, an example of just how far reproductive technology has come in allowing people with fertility issues to start a family.

Yet, the news held a small detail that gave others pause. The organization that provided baby Emma’s embryo to her parents, the National Embryo Donation Center (NEDC), has policies that state they will only provide embryos to married, heterosexual couples, in addition to several health requirements. Single women and non-heterosexual couples are not eligible.

In other industries, this policy would effectively be labeled as discriminatory. But for reproductive procedures in the United States, such a policy is completely legal. Because insurers do not consider reproductive procedures to be medically necessary, the U.S. is one of the few developed nations without formal regulations or ethical requirements for fertility medicine. This loose legal climate also gives providers the power to provide or deny reproductive services at will.

The future of reproductive technology has many excited about its potential to allow biological birth for those who might not otherwise have been capable of it. Experiments going on today, such as testing functional 3D-printed ovaries and incubating animal fetuses in artificial wombs, seem to suggest that future is well on its way, that fertility medicine has already entered the realm of what was once science fiction.

Yet, who will have access to these advances? Current trends seem to suggest that this will depend on the actions of regulators and insurance agencies, rather than the people who are affected the most.

Cognitive Enhancement In The Context Of Neurodiversity And Psychopathology


ama-assn |  In the basement of the Bureau International des Poids et Mesures (BIPM) headquarters in Sèvres, France, a suburb of Paris, there lies a piece of metal that has been secured since 1889 in an environmentally controlled chamber under three bell jars. It represents the world standard for the kilogram, and all other kilo measurements around the world must be compared and calibrated to this one prototype. There is no such standard for the human brain. Search as you might, there is no brain that has been pickled in a jar in the basement of the Smithsonian Museum or the National Institutes of Health or elsewhere in the world that represents the standard to which all other human brains must be compared. Given that this is the case, how do we decide whether any individual human brain or mind is abnormal or normal? To be sure, psychiatrists have their diagnostic manuals. But when it comes to mental disorders, including autism, dyslexia, attention deficit hyperactivity disorder, intellectual disabilities, and even emotional and behavioral disorders, there appears to be substantial uncertainty concerning when a neurologically based human behavior crosses the critical threshold from normal human variation to pathology.

A major cause of this ambiguity is the emergence over the past two decades of studies suggesting that many disorders of the brain or mind bring with them strengths as well as weaknesses. People diagnosed with autism spectrum disorder (ASD), for example, appear to have strengths related to working with systems (e.g., computer languages, mathematical systems, machines) and in experiments are better than control subjects at identifying tiny details in complex patterns [1]. They also score significantly higher on the nonverbal Raven’s Matrices intelligence test than on the verbal Wechsler Scales [2]. A practical outcome of this new recognition of ASD-related strengths is that technology companies have been aggressively recruiting people with ASD for occupations that involve systemizing tasks such as writing computer manuals, managing databases, and searching for bugs in computer code [3].

Valued traits have also been identified in people with other mental disorders. People with dyslexia have been found to possess global visual-spatial abilities, including the capacity to identify “impossible objects” (of the kind popularized by M. C. Escher) [4], process low-definition or blurred visual scenes [5], and perceive peripheral or diffused visual information more quickly and efficiently than participants without dyslexia [6]. Such visual-spatial gifts may be advantageous in jobs requiring three-dimensional thinking such as astrophysics, molecular biology, genetics, engineering, and computer graphics [7, 8]. In the field of intellectual disabilities, studies have noted heightened musical abilities in people with Williams syndrome, the warmth and friendliness of individuals with Down syndrome, and the nurturing behaviors of persons with Prader-Willi syndrome [9, 10]. Finally, researchers have observed that subjects with attention deficit hyperactivity disorder (ADHD) and bipolar disorder display greater levels of novelty-seeking and creativity than matched controls [11-13].

Such strengths may suggest an evolutionary explanation for why these disorders are still in the gene pool. A growing number of scientists are suggesting that psychopathologies may have conferred specific evolutionary advantages in the past as well as in the present [14]. The systemizing abilities of individuals with autism spectrum disorder might have been highly adaptive for the survival of prehistoric humans. As autism activist Temple Grandin, who herself has autism, surmised: “Some guy with high-functioning Asperger’s developed the first stone spear; it wasn’t developed by the social ones yakking around the campfire” [15].

Sunday, June 10, 2018

Cognitive Enhancement of Other Species?



singularityhub |  Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans. 

Others are less convinced. Forbes' Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
 
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own capabilities and others which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

 

The Use of Clustered, Regularly Inter-spaced, Short, Palindromic Repeats


fortunascorner | “CRISPRs are elements of an ancient system that protects bacteria, and other, single-celled organisms from viruses, acquiring immunity to them by incorporating genetic elements from the virus invaders,” Mr. Wadhwa wrote.  “And, this bacterial, antiviral defense serves as an astonishingly cheap, simple, elegant way to quickly edit the DNA of any organism in the lab.  To set up a CRISPR editing capability, a lab only needs to order an RNA fragment (costing about $10) and purchase off-the-shelf chemicals and enzymes for $30 or less.”  
 
“Because CRISPR is cheap, and easy to use, it has both revolutionized, and democratized genetic research,” Mr. Wadhwa observes.  “Hundreds, if not thousands of labs are now experimenting with CRISPR-based editing projects.” And access to the World Wide Web provides instantaneous know-how for a would-be terrorist bent on killing hundreds of millions of people.  As Mr. Wadhwa warns, “though a nuclear weapon can cause tremendous, long-lasting damage, the ultimate biological doomsday machine — is bacteria, because they can spread so quickly, and quietly.”
 
“No one is prepared for an era, when editing DNA is as easy as editing a Microsoft Word document.”
 
This observation, and warning, is why the current scientific efforts are aimed at developing a vaccine for the plague and, hopefully, courses of action against any number of doomsday biological weapons.  With the proliferation of drones as a potential method of delivery, the threat seems overwhelming.  Even if we were successful in ridding the world of the cancer known as militant Islam, there would still be the demented soul bent on killing as many people as possible, in the shortest amount of time, no matter if their doomsday bug kills them as well.  That’s why the research currently being done on the plague is so important.  
 
As the science fiction/horror writer Stephen King once wrote, “God punishes us for what we cannot imagine.”

The Ghettoization of Genetic Disease


gizmodo |  Today in America, if you are poor, you are also more likely to suffer from poor health. Low socioeconomic status—and the lack of access to healthcare that often accompanies it—has been tied to mental illness, obesity, heart disease and diabetes, to name just a few. 

Imagine now, that in the future, being poor also meant you were more likely than others to suffer from major genetic disorders like cystic fibrosis, Tay–Sachs disease, and muscular dystrophy. That is a future, some experts fear, that may not be all that far off.

Most genetic diseases are non-discriminating, blind to either race or class. But for some parents, prenatal genetic testing has turned what was once fate into choice. There are tests that can screen for hundreds of disorders, including rare ones like Huntington’s disease and 1p36 deletion syndrome. Should a prenatal diagnosis bring news of a genetic disease, parents can either arm themselves with information on how best to prepare, or make the difficult decision to terminate the pregnancy. That is, if they can pay for it. Without insurance, the costs of a single prenatal test can range from a few hundred dollars up to $2,000. 

And genome editing, should laws ever be changed to allow for legally editing a human embryo in the United States, could also be a far-out future factor. It’s difficult to imagine how much genetically engineering an embryo might cost, but it’s a safe bet that it won’t be cheap.

“Reproductive technology is technology that belongs to certain classes,” Laura Hercher, a genetic counselor and professor at Sarah Lawrence College, told Gizmodo. “Restricting access to prenatal testing threatens to turn existing inequalities in our society into something biological and permanent.”

Hercher raised this point earlier this month in the pages of Genome magazine, in a piece provocatively titled “The Ghettoization of Genetic Disease.” Within the genetics community, it caused quite a stir. It wasn’t that no one had ever considered the idea. But for a community of geneticists and genetic counselors focused on how to help curb the impact of devastating diseases, it was a difficult thing to see articulated in writing.

Prenatal testing is a miraculous technology that has drastically altered the course of a woman’s pregnancy since it was first developed in the 1960s. The more recent advent of noninvasive prenatal tests has made screening even less risky and more widely available. Today, most women are offered screening for conditions like Down syndrome that result from an abnormal number of chromosomes, and targeted testing of the parents can also hunt for inherited disease traits, like Huntington’s, that are at risk of being passed on to a child.

But there is a dark side to this miracle of modern medicine, which is that choice is exclusive to those who can afford and access it.


Saturday, June 09, 2018

Genetics in the Madhouse: The Unknown History of Human Heredity


nature  |  Who founded genetics? The line-up usually numbers four. William Bateson and Wilhelm Johannsen coined the terms genetics and gene, respectively, at the turn of the twentieth century. In 1910, Thomas Hunt Morgan began showing genetics at work in fruit flies (see E. Callaway Nature 516, 169; 2014). The runaway favourite is generally Gregor Mendel, who, in the mid-nineteenth century, crossbred pea plants to discover the basic rules of heredity.

Bosh, says historian Theodore Porter. These works are not the fount of genetics, but a rill distracting us from a much darker source: the statistical study of heredity in asylums for people with mental illnesses in late-eighteenth- and early-nineteenth-century Britain, wider Europe and the United States. There, “amid the moans, stench, and unruly despair of mostly hidden places where data were recorded, combined, and grouped into tables and graphs”, the first systematic theory of mental illness as hereditary emerged.

For more than 200 years, Porter argues in Genetics in the Madhouse, we have failed to recognize this wellspring of genetics — and thus to fully understand this discipline, which still dominates many individual and societal responses to mental illness and diversity.

The study of heredity emerged, Porter argues, not as a science drawn to statistics, but as an international endeavour to mine data for associations that might explain mental illness. Few recall most of the discipline’s early leaders, such as the French psychiatrist, or ‘alienist’, Étienne Esquirol, and the physician John Thurnam, who made the York Retreat in England a “model of statistical recording”. Better-known figures, such as statistician Karl Pearson and zoologist Charles Davenport — both ardent eugenicists — come later.

Inevitably, study methods changed over time. The early handwritten correlation tables and pedigrees of patients gave way to more elaborate statistical tools, genetic theory and today’s massive gene-association studies. Yet the imperatives and assumptions of that scattered early network of alienists remain intact in the big-data genomics of precision medicine, asserts Porter. And whether applied in 1820 or 2018, this approach too readily elevates biology over culture and statistics over context — and opens the door to eugenics.

Tipping point for large-scale social change


sciencedaily  |  According to a new paper published in Science, there is a quantifiable answer to the question of how many people it takes to spark large-scale social change: roughly 25% need to take a stand before the change occurs. This idea of a social tipping point applies to standards in the workplace and to any type of movement or initiative.

Online, people develop norms about everything from what type of content is acceptable to post on social media to how civil or uncivil to be in their language. We have recently seen how public attitudes can and do shift on issues like gay marriage, gun laws, or race and gender equality, as well as on what beliefs are or aren't publicly acceptable to voice.

During the past 50 years, many studies of organizations and community change have attempted to identify the critical size needed for a tipping point, purely based on observation. These studies have speculated that tipping points can range anywhere between 10 and 40%.

The problem for scientists has been that real-world social dynamics are complicated, and it isn't possible to replay history in precisely the same way to accurately measure how outcomes would have been different if an activist group had been larger or smaller.

"What we were able to do in this study was to develop a theoretical model that would predict the size of the critical mass needed to shift group norms, and then test it experimentally," says lead author Damon Centola, Ph.D., associate professor at the University of Pennsylvania's Annenberg School for Communication and the School of Engineering and Applied Science.

Drawing on more than a decade of experimental work, Centola has developed an online method to test how large-scale social dynamics can be changed.

In this study, "Experimental Evidence for Tipping Points in Social Convention," co-authored by Joshua Becker, Ph.D., Devon Brackbill, Ph.D., and Andrea Baronchelli, Ph.D., 10 groups of 20 participants each were given a financial incentive to agree on a linguistic norm. Once a norm had been established, a group of confederates -- a coalition of activists that varied in size -- then pushed for a change to the norm.

When a minority group pushing change was below 25% of the total group, its efforts failed. But when the committed minority reached 25%, there was an abrupt change in the group dynamic, and very quickly the majority of the population adopted the new norm. In one trial, a single person accounted for the difference between success and failure.
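The mechanism is easy to explore in a toy simulation. The sketch below is not the authors' published model or code; it is a loose, illustrative variant of the memory-based coordination ("naming game") family of models that tipping-point theories of this kind draw on, and every parameter in it (group size, memory length, number of rounds, the names "A" and "B") is an assumption chosen for the example. Ordinary agents repeat whichever name dominates their recent memory, committed activists always use the new name, and sweeping the size of the committed minority gives a rough feel for where the old convention gives way.

import random

def simulate(n_agents=20, committed_frac=0.25, memory_len=12,
             rounds=2000, seed=0):
    """Return the fraction of ordinary agents using the new norm 'B'
    after `rounds` random pairwise interactions. All parameter values
    are illustrative assumptions, not those of the published study."""
    rng = random.Random(seed)
    n_committed = int(round(committed_frac * n_agents))
    n_ordinary = n_agents - n_committed
    # Ordinary agents (indices 0..n_ordinary-1) start with memories full
    # of the established norm 'A'; higher indices are committed activists.
    memories = [["A"] * memory_len for _ in range(n_ordinary)]

    def speak(agent):
        if agent >= n_ordinary:
            return "B"  # committed activists always push the new norm
        mem = memories[agent]
        return max(set(mem), key=mem.count)  # most frequent name heard recently

    for _ in range(rounds):
        i, j = rng.sample(range(n_agents), 2)  # random interacting pair
        said_i, said_j = speak(i), speak(j)
        # Each ordinary participant records what its partner just said,
        # keeping only the most recent `memory_len` utterances.
        if i < n_ordinary:
            memories[i] = (memories[i] + [said_j])[-memory_len:]
        if j < n_ordinary:
            memories[j] = (memories[j] + [said_i])[-memory_len:]

    return sum(speak(a) == "B" for a in range(n_ordinary)) / n_ordinary

if __name__ == "__main__":
    # Sweep the committed fraction to see roughly where the convention flips.
    for frac in (0.10, 0.20, 0.25, 0.30, 0.40):
        print(f"committed {frac:.0%} -> adoption {simulate(committed_frac=frac):.0%}")

In this simplified, well-mixed version the flip is neither guaranteed to be sharp nor pinned to exactly 25%; in the published work the predicted critical mass depends on details such as how long agents' memories are and how interactions are structured. The sketch only illustrates the qualitative claim: below some committed fraction the old norm survives, and above it the majority converts quickly.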
