Showing posts with label implicate order.

Thursday, September 03, 2020

Are Riots Counterproductive?


opendemocracy  |  When some of the recent Black Lives Matter protests against the murder of George Floyd ended in riots, the pushback was immediate and predictable: different visions of Martin Luther King’s legacy were fought over, rival interpretations of the Civil Rights Movement were deployed, and contrasting lessons were identified.

There can be no single interpretation of the turbulent 1960s, but there is much we can learn from historical work on this period. In particular, Omar Wasow’s recent analysis of the tactics of the Civil Rights Movement makes a provocative argument that “nonviolent” protest helped to shape a national conversation which raised the profile of the civil rights agenda and led to electoral gains for the Democrats in the early 1960s.

By contrast, he argues, rioting in US cities after the assassination of Martin Luther King pushed white Americans towards the rhetoric of ‘law and order,’ causing large shifts among white voters towards the Republican Party and helping Richard Nixon to win the 1968 presidential election shortly thereafter.

This is a controversial argument, even costing political analyst David Shor his job when he recently tweeted about Wasow’s thesis and received an angry response from those who saw it as a “tone-deaf” attack on legitimate protest. At the root of this controversy are important questions about whether framing riots as a ‘tactical choice’ is appropriate, who that framing makes responsible for ongoing racial injustice, and what the fact that we’re having this debate says about people’s views of politics and priorities. As King warned in 1968:

“A riot is the language of the unheard. And what is it America has failed to hear?...it has failed to hear that large segments of white society are more concerned about tranquillity and the status quo than about justice and humanity.”

But social movements can’t afford to ignore these arguments completely. The idea that violent protests might be risky is not surprising, since in societies that pride themselves on being ‘peaceful,’ riots violate many taken-for-granted liberal values. Wasow’s rigorous, quantitative analysis gives this argument a historical foundation but it also has obvious resonances for today, at a time when President Trump is running for re-election on a ‘law and order’ platform against the background of street protests in cities like Portland and Kenosha.

However, the implications of Wasow’s arguments are not as straightforward as they might appear. One immediate issue concerns his methodology and the size of the effects he estimates. The models reported in Wasow’s paper don’t include any controls for time, which are normally included in statistical analyses to account for broad trends affecting society as a whole, trends that would likely have unfolded anyway.
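As a rough illustration of what such a control does, here is a minimal sketch using made-up county-level numbers and hypothetical variable names (none of them drawn from Wasow’s paper); the year fixed effect soaks up shifts common to every county in a given election, so the riot coefficient reflects only within-year differences:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: county-level Democratic vote share, riot exposure, election year.
df = pd.DataFrame({
    "dem_vote_share": [0.52, 0.48, 0.55, 0.46, 0.50, 0.44],
    "riot_exposure":  [0,    1,    0,    1,    0,    1],
    "year":           [1964, 1964, 1966, 1966, 1968, 1968],
})

# Model 1: no time controls -- the riot coefficient can absorb nationwide trends.
m1 = smf.ols("dem_vote_share ~ riot_exposure", data=df).fit()

# Model 2: year fixed effects, C(year), remove shifts shared by all counties.
m2 = smf.ols("dem_vote_share ~ riot_exposure + C(year)", data=df).fit()

print(m1.params["riot_exposure"], m2.params["riot_exposure"])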


Monday, August 10, 2020

No DISC - Only Ass-Kissing Lackeys Of The Status Quo Establishment

And this was my next Weinstein moment. It was that “Eureka” moment with negative undertones, which I guess can be called a “Dysreka.” Just as Bret and Eric, years later, saw their advancements being used and pushed by someone else, I was getting the exact same confirmation about my strategy (although he had not stolen it from me). This guy who had been at it longer, whose chapter was the inspiration for the Caucus itself, told me that I had stumbled upon the exact plan I was supposed to have for my chapter.

I don’t think this elected official from Hillsborough or this gentleman from Wake have ever met each other. Nevertheless, they quickly moved to shut down threats to the establishment of the Democratic Party here in North Carolina as soon as they detected them, in a manner much like what Weinstein has described. They were manifestations of the DISC, of an autoimmune response in the Democratic Party, and they moved in indirect, defamatory ways that played upon uninformed and ignorant crowds to derail those in their paths.

These events, and others I could recount, really hurt. The chapter I had organized was like a baby of mine or a work of art (and you guys can see some of my art here on Medium to know what I mean by that). All the work I did to make the Democratic Party more accountable, while also trying to plant seeds to make it more electable, just blew up in my face because actors who want to defend the system acted swiftly. The corruption and abuse of power in the Democratic Party exists beyond the DNC. It also manifests itself through brutal patronage relationships at the grassroots level, relationships that allow for decentralized policing and, frankly, sabotage.

I still think the Democratic Party can fix this country, but it needs a lot of home repairs before it can do so. We need something to break the Gated Institutional Narrative (another Weinstein term) that enables this. At the moment I am mostly out of ideas, but I am attempting another run for the NC House, here in Chapel Hill.

I hope this story informs you all decently and that it motivates you to do something good and productive, even though I know something like this is likely to produce more anger. We really do not need more anger. We need people who are more excited about the utopia and less about the revolution. I also hope it inspires you to share your own encounters with the DISC.

I also hope Eric comes across this and can get a few ideas on what to do. He is a Democrat like me, however begrudgingly, and he does have a role to play in reforming the party. For those of you Republicans out there, I hope you are also noticing where the DISC exists in your party and are thinking of how to counter it. Fixing America is going to be a bipartisan job, after all.

Friday, June 19, 2020

What You Call Meritocracy Probably Really Isn't...,


ipsnews |  Since the 1960s, many institutions, the world over, have embraced the notion of meritocracy. With post-Cold War neoliberal ideologies enabling growing wealth concentration, the rich, the privileged and their apologists invoke variants of ‘meritocracy’ to legitimize economic inequality. 

Instead, corporations and other social institutions, which used to be run by hereditary elites, increasingly recruit and promote on the basis of qualifications, ability, competence and performance. Meritocracy is thus supposed to democratize and level society.

Ironically, British sociologist Michael Young pejoratively coined the term meritocracy in his 1958 dystopian satire, The Rise of the Meritocracy. With his intended criticism rejected as no longer relevant, the term is now used in the English language without the negative connotations Young intended. 

It has been uncritically embraced by supporters of a social philosophy of meritocracy in which influence is supposedly distributed according to the intellectual ability and achievement of individuals. 

Many appreciate meritocracy’s two core virtues. First, the meritocratic elite is presumed to be more capable and effective as their status, income and wealth are due to their ability, rather than their family connections. 

Second, ‘opening up’ the elite supposedly on the basis of individual capacities and capabilities is believed to be consistent with and complementary to ‘fair competition’. They may claim the moral high ground by invoking ‘equality of opportunity’, but are usually careful to stress that ‘equality of outcome’ is to be eschewed at all costs.

As Yale Law School Professor Daniel Markovits argues in The Meritocracy Trap, unlike the hereditary elites preceding them, meritocratic elites must often work long and hard, e.g., in medicine, finance or consulting, to enhance their own privileges, and to pass them on to their children, siblings and other close relatives, friends and allies.

Gaming meritocracy
Meritocracy is supposed to function best when an insecure ‘middle class’ constantly strives to secure, preserve and augment their income, status and other privileges by maximizing returns to their exclusive education. But access to elite education – that enables a few of modest circumstances to climb the social ladder – waxes and wanes. 

Most middle class families cannot afford the privileged education that wealth can buy, while most ordinary, government financed and run schools have fallen further behind exclusive elite schools, including some funded with public money. In recent decades, the resource gap between better and poorer public schools has also been growing.

Elite universities and private schools still provide training and socialization, mainly to children of the wealthy, privileged and connected. Huge endowments, obscure admissions policies and tax exemption allow elite US private universities to spend much more than publicly funded institutions.

Meanwhile, technological and social changes have transformed the labour force and economies, greatly increasing economic returns to the cognitive, ascriptive and other attributes as well as credentials of ‘the best’ institutions, especially universities and professional guilds, which effectively remain exclusive and elitist.

As ‘meritocrats’ captured growing shares of the education pie, the purported value of ‘schooling’ increased, legitimized by the bogus notion of ‘human capital’. While meritocracy transformed elites over time, it has also increasingly inhibited, not promoted, social mobility.

Sunday, December 15, 2019

What Did the Ancient Messages Say?


technologyreview |  In 1886, the British archaeologist Arthur Evans came across an ancient stone bearing a curious set of inscriptions in an unknown language. The stone came from the Mediterranean island of Crete, and Evans immediately traveled there to hunt for more evidence. He quickly found numerous stones and tablets bearing similar scripts and dated them from around 1400 BCE.

Linear B deciphering
That made the inscription one of the earliest forms of writing ever discovered. Evans argued that its linear form was clearly derived from rudely scratched line pictures belonging to the infancy of art, thereby establishing its importance in the history of linguistics.

He and others later determined that the stones and tablets were written in two different scripts. The oldest, called Linear A, dates from between 1800 and 1400 BCE, when the island was dominated by the Bronze Age Minoan civilization.

 The other script, Linear B, is more recent, appearing only after 1400 BCE, when the island was conquered by Mycenaeans from the Greek mainland.

Evans and others tried for many years to decipher the ancient scripts, but the lost languages resisted all attempts. The problem remained unsolved until 1953, when an amateur linguist named Michael Ventris cracked the code for Linear B.

His solution was built on two decisive breakthroughs. First, Ventris conjectured that many of the repeated words in the Linear B vocabulary were names of places on the island of Crete. That turned out to be correct.

His second breakthrough was to assume that the writing recorded an early form of ancient Greek. That insight immediately allowed him to decipher the rest of the language. In the process, Ventris showed that ancient Greek first appeared in written form many centuries earlier than previously thought.

Ventris’s work was a huge achievement. But the more ancient script, Linear A, has remained one of the great outstanding problems in linguistics to this day.

It’s not hard to imagine that recent advances in machine translation might help. In just a few years, the study of linguistics has been revolutionized by the availability of huge annotated databases, and techniques for getting machines to learn from them. Consequently, machine translation from one language to another has become routine. And although imperfect, these methods have provided an entirely new way to think about language.

Enter Jiaming Luo and Regina Barzilay from MIT and Yuan Cao from Google’s AI lab in Mountain View, California. This team has developed a machine-learning system capable of deciphering lost languages, and they’ve demonstrated it by having it decipher Linear B—the first time this has been done automatically. The approach they used was very different from the standard machine translation techniques.

First, some background. The big idea behind machine translation is the understanding that words are related to each other in similar ways, regardless of the language involved.
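The MIT team’s system is considerably more elaborate, but the core intuition, that words relate to one another in similar ways across languages, can be sketched with an orthogonal Procrustes alignment of two embedding spaces. Everything below is synthetic and purely illustrative, not the authors’ code:

import numpy as np

rng = np.random.default_rng(0)

# Pretend "language A" embeddings: 50 words in 10 dimensions.
emb_a = rng.normal(size=(50, 10))

# Pretend "language B" preserves the same relational geometry: a rotated copy of A.
q, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # random orthogonal map
emb_b = emb_a @ q

# Given a small seed dictionary of known word pairs, recover the map with the
# orthogonal Procrustes solution W = U V^T, where U S V^T is the SVD of A^T B.
seed = slice(0, 20)
u, _, vt = np.linalg.svd(emb_a[seed].T @ emb_b[seed])
w = u @ vt

# The recovered map aligns the rest of the vocabulary as well.
print(np.allclose(emb_a @ w, emb_b, atol=1e-8))  # True: the shared geometry carries over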

Tuesday, September 04, 2018

Cymatics - Insights Into the Invisible World of Sound


soundtravels |  We live in a vast ocean of sound, whose infinite waves ripple the shores of our awareness in myriad patterns of intricate design and immeasurably complex vibrations … permeating our bodies, our psyches, to the very core of our being.

So begins the program, Of Sound Mind and Body: Music and Vibrational Healing and so begins this whirlwind account, unveiling the mysteries of sound. Perhaps because it is invisible, less attention has been paid to this sea of sound constantly flowing around and through us than to the denser objects with which we routinely interact. To those of us for whom ‘seeing is believing’, Cymatics, the science of wave phenomena, can be a portal into this invisible world and its myriad effects on matter, mind and emotions.

The long and illustrious lineage of scientific inquiry into the physics of sound can be traced back to Pythagoras, but this article will focus on more recent explorations into the effects that sound has upon matter. However, a brief summary of the last three centuries of acoustic research will help to highlight a few of the pioneers who blazed the trail so that Cymatics could emerge as a distinct discipline in the 1950s.


Sunday, June 17, 2018

Musean Hypernumbers


archive.is |  Musean hypernumbers are an algebraic concept envisioned by Charles A. Musès (1919–2000) to form a complete, integrated, connected, and natural number system.[1][2][3][4][5] Musès sketched certain fundamental types of hypernumbers and arranged them in ten "levels", each with its own associated arithmetic and geometry.
Mostly criticized for lack of mathematical rigor and unclear defining relations, Musean hypernumbers are often perceived as unfounded mathematical speculation. This impression was not helped by Musès' outspoken confidence in their applicability to fields far beyond what one might expect from a number system, including consciousness, religion, and metaphysics.
The term "M-algebra" was used by Musès for investigation into a subset of his hypernumber concept (the 16 dimensional conic sedenions and certain subalgebras thereof), which is at times confused with the Musean hypernumber level concept itself. The current article separates this well-understood "M-algebra" from the remaining controversial hypernumbers, and lists certain applications envisioned by the inventor.

"M-algebra" and "hypernumber levels"[edit]

Musès was convinced that the basic laws of arithmetic on the reals are in direct correspondence with a concept where numbers could be arranged in "levels", where fewer arithmetical laws would be applicable with increasing level number.[3] However, this concept was not developed much further beyond the initial idea, and defining relations for most of these levels have not been constructed.
Higher-dimensional numbers built on the first three levels were called "M-algebra"[6][7] by Musès if they yielded a distributive multiplication, unit element, and multiplicative norm. It contains kinds of octonions and historical quaternions (except A. MacFarlane's hyperbolic quaternions) as subalgebras. A proof of completeness of M-algebra has not been provided.

Conic sedenions / "16 dimensional M-algebra"

The term "M-algebra" (after C. Musès[6]) refers to number systems that are vector spaces over the reals, whose bases consist in roots of −1 or +1, and which possess a multiplicative modulus. While the idea of such numbers was far from new and contains many known isomorphic number systems (like e.g. split-complex numbers or tessarines), certain results from 16 dimensional (conic) sedenions were a novelty. Musès demonstrated the existence of a logarithm and real powers in number systems built to non-real roots of +1.
 

Friday, June 15, 2018

Are Space And Time Quantized?


Forbes |  Throughout the history of science, one of the prime goals of making sense of the Universe has been to discover what's fundamental. Many of the things we observe and interact with in the modern, macroscopic world are composed of, and can be derived from, smaller particles and the underlying laws that govern them. The idea that everything is made of elements dates back thousands of years, and has taken us from alchemy to chemistry to atoms to subatomic particles to the Standard Model, including the radical concept of a quantum Universe.

But even though there's very good evidence that all of the fundamental entities in the Universe are quantum at some level, that doesn't mean that everything is both discrete and quantized. So long as we still don't fully understand gravity at a quantum level, space and time might still be continuous at a fundamental level. Here's what we know so far.

Quantum mechanics is the idea that, if you go down to a small enough scale, everything that contains energy, whether it's massive (like an electron) or massless (like a photon), can be broken down into individual quanta. You can think of these quanta as energy packets, which sometimes behave as particles and other times behave as waves, depending on what they interact with.

Everything in nature obeys the laws of quantum physics, and our "classical" laws that apply to larger, more macroscopic systems can always (at least in theory) be derived, or emerge, from the more fundamental quantum rules. But not everything is necessarily discrete, or capable of being divided into a localized region of space.


The energy level differences in Lutetium-177. Note how there are only specific, discrete energy levels that are acceptable. While the energy levels are discrete, the positions of the electrons are not.

If you have a conducting band of metal, for example, and ask "where is this electron that occupies the band," there's no discreteness there. The electron can be anywhere, continuously, within the band. A free photon can have any wavelength and energy; no discreteness there. Just because something is quantized, or fundamentally quantum in nature, doesn't mean everything about it must be discrete.
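A quick worked contrast makes the distinction concrete: a free photon's energy varies continuously with wavelength (E = hc/λ), while a bound electron in hydrogen is restricted to discrete levels (E_n = -13.6 eV / n^2). The constants below are standard; the sketch only illustrates continuous versus quantized:

PLANCK  = 6.626e-34   # J*s
C_LIGHT = 2.998e8     # m/s
EV      = 1.602e-19   # J per eV

def photon_energy_ev(wavelength_m):
    # Any wavelength is allowed: a free photon's spectrum is continuous.
    return PLANCK * C_LIGHT / wavelength_m / EV

def hydrogen_level_ev(n):
    # Bound states are discrete: only integer n is allowed.
    return -13.6 / n ** 2

# A photon can sit anywhere on a continuum of energies...
for wl in (500e-9, 500.0001e-9, 500.0002e-9):
    print(f"{wl * 1e9:.4f} nm -> {photon_energy_ev(wl):.6f} eV")

# ...while the hydrogen electron can only occupy quantized rungs.
print([round(hydrogen_level_ev(n), 3) for n in (1, 2, 3, 4)])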

The idea that space (or space and time, since they're inextricably linked by Einstein's theories of relativity) could be quantized goes way back to Heisenberg himself. Famous for the Uncertainty Principle, which fundamentally limits how precisely we can measure certain pairs of quantities (like position and momentum), Heisenberg realized that certain quantities diverged, or went to infinity, when you tried to calculate them in quantum field theory.

Wednesday, April 25, 2018

Grasshopper - You Will NEVER Overcome The Money Power!!!


techcrunch |  A new — and theoretical — system for blockchain-based data storage could ensure that hackers will not be able to crack cryptocurrencies once the quantum era starts. The idea, proposed by researchers at the Victoria University of Wellington in New Zealand, would secure cryptocurrency futures for decades using a blockchain technology that is like a time machine.


To understand what’s going on here we have to define some terms. A blockchain stores every transaction in a system on what amounts to an immutable record of events. The work necessary for maintaining and confirming this immutable record is what is commonly known as mining. But this technology — which the paper’s co-author Del Rajan claims will make up “10 percent of global GDP… by 2027” — will become insecure in an era of quantum computers.
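For orientation, the classical "immutable record" works by hash-chaining blocks, so altering any old entry breaks every link after it. The toy sketch below is an ordinary classical chain for illustration, not the paper's quantum construction:

import hashlib, json

def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                        # this block was altered after the fact
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                        # link to the previous block is broken
    return True

chain = [make_block("genesis", "0")]
for tx in ("alice->bob 5", "bob->carol 2"):
    chain.append(make_block(tx, chain[-1]["hash"]))

print(chain_is_valid(chain))         # True
chain[1]["data"] = "alice->bob 500"  # tamper with an old record
print(chain_is_valid(chain))         # False: the hashes no longer line up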

The proposed solution for storing a blockchain in the quantum era is therefore a quantum blockchain built from a series of entangled photons. Further, Spectrum writes: “Essentially, current records in a quantum blockchain are not merely linked to a record of the past, but rather a record in the past, one that does not exist anymore.”

Yeah, it’s weird.

From the paper intro:
Our method involves encoding the blockchain into a temporal GHZ (Greenberger–Horne–Zeilinger) state of photons that do not simultaneously coexist. It is shown that the entanglement in time, as opposed to an entanglement in space, provides the crucial quantum advantage. All the subcomponents of this system have already been shown to be experimentally realized. Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.
In short, the quantum blockchain is immutable because the photons that it contains do not exist at the current time but are still extant and readable. This means the entire blockchain is visible but cannot be “touched” and the only entry you would be able to try to tamper with is the most recent one. In fact, the researchers write, “In this spatial entanglement case, if an attacker tries to tamper with any photon, the full blockchain would be invalidated immediately.”

Is this possible? The researchers note that the technology already exists.

Tuesday, November 28, 2017

Knowledge Engineering: Human "Intelligence" Mirrors That of Eusocial Insects


Cambridge |  The World Wide Web has had a notable impact on a variety of epistemically-relevant activities, many of which lie at the heart of the discipline of knowledge engineering. Systems like Wikipedia, for example, have altered our views regarding the acquisition of knowledge, while citizen science systems such as Galaxy Zoo have arguably transformed our approach to knowledge discovery. Other Web-based systems have highlighted the ways in which the human social environment can be used to support the development of intelligent systems, either by contributing to the provision of epistemic resources or by helping to shape the profile of machine learning. In the present paper, such systems are referred to as ‘knowledge machines’. In addition to providing an overview of the knowledge machine concept, the present paper reviews a number of issues that are associated with the scientific and philosophical study of knowledge machines. These include the potential impact of knowledge machines on the theory and practice of knowledge engineering, the role of social participation in the realization of intelligent systems, and the role of standardized, semantically enriched data formats in supporting the ad hoc assembly of special-purpose knowledge systems and knowledge processing pipelines.

Knowledge machines are a specific form of social machine that is concerned with the sociotechnical realization of a broad range of knowledge processes. These include processes that are the traditional focus of the discipline of knowledge engineering, for example, knowledge acquisition, knowledge modeling and the development of knowledge-based systems.

In the present paper, I have sought to provide an initial overview of the knowledge machine concept, and I have highlighted some of the ways in which the knowledge machine concept can be applied to existing areas of research. In particular, the present paper has identified a number of examples of knowledge machines (see Section 3), discussed some of the mechanisms that underlie their operation (see Section 5), and highlighted the role of Web technologies in supporting the emergence of ever-larger knowledge processing organizations (see Section 8). The paper has also highlighted a number of opportunities for collaboration between a range of disciplines. These include the disciplines of knowledge engineering, WAIS, sociology, philosophy, cognitive science, data science, and machine learning.

Given that our success as a species is, at least to some extent, predicated on our ability to manufacture, represent, communicate and exploit knowledge (see Gaines 2013), there can be little doubt about the importance and relevance of knowledge machines as a focus area for future scientific and philosophical enquiry. In addition to their ability to harness the cognitive and epistemic capabilities of the human social environment, knowledge machines provide us with a potentially important opportunity to scaffold the development of new forms of machine intelligence. Just as much of our own human intelligence may be rooted in the fact that we are born into a superbly structured and deliberately engineered environment (see Sterelny 2003), so too the next generation of synthetic intelligent systems may benefit from a rich and structured informational environment that houses the sum total of human knowledge. In this sense, knowledge machines are important not just with respect to the potential transformation of our own (human) epistemic capabilities, they are also important with respect to the attempt to create the sort of environments that enable future forms of intelligent system to press maximal benefit from the knowledge that our species has managed to create and codify.

Saturday, October 07, 2017

Emotional Sentience and the Nature of Phenomenal Experience


emotionalsentience |  When phenomenal experience is examined through the lens of physics, several conundrums come to light including: Specificity of mind-body interactions, feelings of free will in a deterministic universe, and the relativity of subjective perception. The new biology of “emotion” can shed direct light upon these issues, via a broadened categorical definition that includes both affective feelings and their coupled (yet often subconscious) hedonic motivations. In this new view, evaluative (good/bad) feelings that trigger approach/avoid behaviors emerged with life itself, a crude stimulus-response information loop between organism and its environment, a semiotic signaling system embodying the first crude form of “mind”.
 
Emotion serves the ancient function of sensory-motor self-regulation and affords organisms, at every level of complexity, an active, adaptive role in evolution. A careful examination of the biophysics involved in emotional “self-regulatory” signaling, however, acknowledges constituents that are incompatible with classical physics. This requires a further investigation, proposed herein, of the fundamental nature of “the self” as the subjective observer central to the measurement process in quantum mechanics, and ultimately as an active, unified, self-awareness with a centrally creative role in “self-organizing” processes and physical forces of the classical world. In this deeper investigation, a new phenomenological dualism is proposed: The flow of complex human experience is instantiated by both a classically embodied mind and a deeper form of quantum consciousness that is inherent in the universe itself, implying much deeper, more Whiteheadian, interpretations of the “self-regulatory” and “self-relevant” nature of emotional stimulus. A broad stroke, speculative, intuitive sketch of this new territory is then set forth, loosely mapped to several theoretical models of consciousness, potentially relevant mathematical devices and pertinent philosophical themes, in an attempt to acknowledge the myriad questions (and limitations) implicit in the quest to understand “sentience” in any ontologically pansentient universe.

Friday, September 29, 2017

Why the Future Doesn't Need Us


ecosophia |  Let’s start with the concept of the division of labor. One of the great distinctions between a modern industrial society and other modes of human social organization is that in the former, very few activities are taken from beginning to end by the same person. A woman in a hunter-gatherer community, as she is getting ready for the autumn tuber-digging season, chooses a piece of wood, cuts it, shapes it into a digging stick, carefully hardens the business end in hot coals, and then puts it to work getting tubers out of the ground. Once she carries the tubers back to camp, what’s more, she’s far more likely than not to take part in cleaning them, roasting them, and sharing them out to the members of the band.

A woman in a modern industrial society who wants to have potatoes for dinner, by contrast, may do no more of the total labor involved in that process than sticking a package in the microwave. Even if she has potatoes growing in a container garden out back, say, and serves up potatoes she grew, harvested, and cooked herself, odds are she didn’t make the gardening tools, the cookware, or the stove she uses. That’s division of labor: the social process by which most members of an industrial society specialize in one or another narrow economic niche, and use the money they earn from their work in that niche to buy the products of other economic niches.

Let’s say it up front: there are huge advantages to the division of labor.  It’s more efficient in almost every sense, whether you’re measuring efficiency in terms of output per person per hour, skill level per dollar invested in education, or what have you. What’s more, when it’s combined with a social structure that isn’t too rigidly deterministic, it’s at least possible for people to find their way to occupational specialties for which they’re actually suited, and in which they will be more productive than otherwise. Yet it bears recalling that every good thing has its downsides, especially when it’s pushed to extremes, and the division of labor is no exception.

Crackpot realism is one of the downsides of the division of labor. It emerges reliably whenever two conditions are in effect. The first condition is that the task of choosing goals for an activity is assigned to one group of people and the task of finding means to achieve those goals is left to a different group of people. The second condition is that the first group needs to be enough higher in social status than the second group that members of the first group need pay no attention to the concerns of the second group.

Consider, as an example, the plight of a team of engineers tasked with designing a flying car.  People have been trying to do this for more than a century now, and the results are in: it’s a really dumb idea. It so happens that a great many of the engineering features that make a good car make a bad aircraft, and vice versa; for instance, an auto engine needs to be optimized for torque rather than speed, while an aircraft engine needs to be optimized for speed rather than torque. Thus every flying car ever built—and there have been plenty of them—performed just as poorly as a car as it did as a plane, and cost so much that for the same price you could buy a good car, a good airplane, and enough fuel to keep both of them running for a good long time.

Engineers know this. Still, if you’re an engineer and you’ve been hired by some clueless tech-industry godzillionaire who wants a flying car, you probably don’t have the option of telling your employer the truth about his pet project—that is, that no matter how much of his money he plows into the project, he’s going to get a clunker of a vehicle that won’t be any good at either of its two incompatible roles—because he’ll simply fire you and hire someone who will tell him what he wants to hear. Nor do you have the option of sitting him down and getting him to face what’s behind his own unexamined desires and expectations, so that he might notice that his fixation on having a flying car is an emotionally charged hangover from age eight, when he daydreamed about having one to help him cope with the miserable, bully-ridden public school system in which he was trapped for so many wretched years. So you devote your working hours to finding the most rational, scientific, and utilitarian means to accomplish a pointless, useless, and self-defeating end. That’s crackpot realism.

You can make a great party game out of identifying crackpot realism—try it sometime—but I’ll leave that to my more enterprising readers. What I want to talk about right now is one of the most glaring examples of crackpot realism in contemporary industrial society. Yes, we’re going to talk about space travel again.

Tuesday, September 26, 2017

Concepts in Kron's Later Papers....,


stackexchange |  Gabriel Kron was an important research electrical engineer known for applying differential geometry and algebraic topology to the study of electrical systems. Towards the end of his career he published a number of unusual, even by his standards, papers on concepts with names like polyhedral networks, self organizing automata, wave automata, multidimensional space filters and crystal computer, which I think are more or less synonymous. I have obtained a few of these papers and did not understand them at all. If they were not written by Kron, I would be suspicious of them.

I have not been able to find any significant secondary literature on these ideas. The few citations I have tracked down only mention them tangentially, but I have also found no refutations of these papers and no suggestions that Kron had gone off the rails. The papers were published in respectable journals.

I am looking for an understandable exposition or refutation of these ideas, or pointers to such. Also, pointers to follow-on research by others, possibly using different terminology.

I am not looking for explanations of Kron's other ideas like diakoptics and tensor analysis of networks.

Some of the relevant papers are:
  • G. Kron, Multi-dimensional space filters. Matrix and Tensor Quarterly, 9, 40 - 43 (1958).
  • G. Kron, Basic concepts of multi-dimensional space filters. AIEE Transactions, 78, 554 - 561 (1959).
  • G. Kron, Self-organizing, dynamo-type automata. Matrix and Tensor Quarterly, 11, 42 - 52 (1960).
  • G. Kron, Power-system type self-organizing automata. RAAG Memoirs, III, 392 - 417 (1962).
  • G. Kron, Multi-dimensional curve-fitting with self-organizing automata. Journal of Mathematical Analysis and Applications, 5, 46 - 49 (1962).
I have mainly looked at the last one and material at the end of
  • Diakoptics; the piecewise solution of large-scale systems. MacDonald, London, 1963. 166 pp.

Saturday, February 18, 2017

Is Google Deep Mind Exhibiting Greed and Aggression?


theantimedia |  Will artificial intelligence get more aggressive and selfish the more intelligent it becomes? A new report out of Google’s DeepMind AI division suggests this is possible based on the outcome of millions of video game sessions it monitored. The results of the two games indicate that as artificial intelligence becomes more complex, it is more likely to take extreme measures to ensure victory, including sabotage and greed.

The first game, Gathering, is a simple one that involves gathering digital fruit. Two DeepMind AI agents were pitted against each other after being trained in the ways of deep reinforcement learning. After 40 million turns, the researchers began to notice something curious. Everything was ok as long as there were enough apples, but when scarcity set in, the agents used their laser beams to knock each other out and seize all the apples.

The aggression, they determined, was the result of higher levels of complexity in the AI agents themselves. When they tested the game on less intelligent AI agents, they found that the laser beams were left unused and equal amounts of apples were gathered. The simpler AIs seemed to naturally gravitate toward peaceful coexistence.

Researchers believe the more advanced AI agents learn from their environment and figure out how to use available resources to manipulate their situation — and they do it aggressively if they need to.
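A back-of-the-envelope model shows why the learned behaviour flips with scarcity: when apples respawn faster than one agent can collect them, time spent shooting is wasted, but once supply is the bottleneck, monopolizing it pays. The structure and numbers below are invented for illustration and are not DeepMind's Gathering environment:

def expected_apples(respawn_rate, zap, horizon=100, zap_cost=5, pick_cap=1.0):
    # respawn_rate : apples appearing per step, shared by two agents
    # pick_cap     : most apples one agent can physically collect per step
    # zap=False    : both agents harvest side by side, splitting the supply
    # zap=True     : spend zap_cost steps knocking the rival out, then harvest alone
    if not zap:
        return horizon * min(pick_cap, respawn_rate / 2.0)
    return (horizon - zap_cost) * min(pick_cap, respawn_rate)

for rate in (3.0, 1.5, 0.6, 0.2):    # from plentiful apples down to scarce ones
    share = expected_apples(rate, zap=False)
    fight = expected_apples(rate, zap=True)
    verdict = "zapping pays" if fight > share else "sharing pays"
    print(f"respawn {rate:>3}: share={share:6.1f}  zap={fight:6.1f}  -> {verdict}")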

“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” a DeepMind team member, Joel Z Leibo, told Wired.

Sunday, January 08, 2017

Other Minds


If we think the rest of the universe is without awareness we have to invent a disembodied "God" to replace what is missing. And then we treat the planet as if it were a mindless object - resources to strip (as if that caused no harm or pain) .... and a place to dump our toxic chemicals and trash.

Imagine what our thought would be like, Dogen says, if we had no separate words for "mind" and "nature."

WaPo |  “The two of you look at each other. This one is small, about the size of a tennis ball. You reach forward a hand and stretch out one finger, and one octopus arm slowly uncoils and comes out to touch you. The suckers grab your skin, and the hold is disconcertingly tight. Having attached the suckers, it tugs your finger, pulling you gently in. . . . Behind the arm, large round eyes watch you the whole time.

Encountering an octopus in the wild, as Peter Godfrey-Smith argues in his fascinating book, “Other Minds,” is as close as we will get to meeting an intelligent alien. The octopus and its near relatives — squid, cuttlefish and nautilus — belong to a vast and eclectic group of creatures that lack backbones, the invertebrates. Collectively known as cephalopods (head-footed), they are related to snails and clams, sharing with them the unfortunate characteristic of tasting wonderful. Don’t read this book, though, if you want to continue eating calamari with an untroubled conscience, for living cephalopods are smart, beautiful and possessed with extraordinary personalities.

Sunday, December 11, 2016

the semiosis of evolution


springer |  Most contemporary evolutionary biologists consider perception, cognition, and communication just like any other adaptation to the environmental selection pressures. A biosemiotic approach adds an unexpected turn to this Neo-Darwinian logic and focuses not so much on the evolution of semiosis as it does on the semiosis of evolution. What is meant here, is that evolutionary forces are themselves semiotically constrained and contextualized. The effect of environmental conditions is always mediated by the responses of organisms, who select their developmental pathways and actions based on heritable or memorized past experience and a variety of external and internal signals. In particular, recognition and categorization of objects, learning, and communication (both intraspecific and interspecific) can change the evolutionary fate of lineages. Semiotic selection, an effect of choice upon other species (Maran and Kleisner 2010), active habitat preference (Lindholm 2015), making use of and reinterpreting earlier semiotic structures – known as semiotic co-option (Kleisner 2015), and semiotic scaffolding (Hoffmeyer 2015; Kull 2015), are some further means by which semiosis makes evolution happen.

Semiotic processes are easily recognized in animals that communicate and learn, but it is difficult to find directly analogous processes in organisms without nerves and brains. Molecular biologists are used to talking about information transfer via cell-to-cell communication, DNA replication, RNA or protein synthesis, and signal transduction cascades within cells. However, these informational processes are difficult to compare with perception-related sign processes in animals because information requires interpretation by some agency, and it is not clear where the agency in cells is. In bacterial cells, all molecular processes appear deterministic, with every signal, such as the presence of a nutrient or toxin, launching a pre-defined cascade of responses targeted at confronting new conditions. These processes lack an element of learning during the bacterial life span, and thus cannot be compared directly with complex animal and human semiosis, where individual learning plays a decisive role.

The determinism of the molecular clockwork was summarized in the dogma that genes determine the phenotype and not the other way around. As a result, the Modern Synthesis (MS) theory presented evolution as a mechanical process that starts with blind random variation of the genome, and ends with automatic selection of the fittest phenotypes. Although this theory may explain quantitative changes in already existing features, it certainly cannot describe the emergence of new organs or signaling pathways. The main deficiency of such explanations is that the exact correspondence between genotypes and phenotypes is postulated a priori. In other words, MS was built like Euclidean geometry, where questioning the foundational axioms will make the whole system fall, like a house of cards.

The discipline of biosemiotics has generated a new platform for explaining biological evolution. It considers that evolution is semiosis, a process of continuous interpretation and re-interpretation of hereditary signs alongside other signs that originate in the environment or the body.

Saturday, December 10, 2016

Distances Between Nucleotide Sequences Contain Biologically Relevant Information


g3journal |  Enhancers physically interact with transcriptional promoters, looping over distances that can span multiple regulatory elements. Given that enhancer-promoter (EP) interactions generally occur via common protein complexes, it is unclear whether EP pairing is predominantly deterministic or proximity guided. Here we present cross-organismic evidence suggesting that most EP pairs are compatible, largely determined by physical proximity rather than specific interactions. By re-analyzing transcriptome datasets, we find that the transcription of gene neighbors is correlated over distances that scale with genome size. We experimentally show that non-specific EP interactions can explain such correlation, and that EP distance acts as a scaling factor for the transcriptional influence of an enhancer. We propose that enhancer sharing is commonplace among eukaryotes, and that EP distance is an important layer of information in gene regulation.
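The kind of re-analysis described, correlating the expression of gene neighbours as a function of their genomic separation, can be sketched on synthetic data as follows; the numbers are invented and only illustrate the procedure, not the paper's datasets:

import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a transcriptome: 200 genes x 30 samples in which
# neighbouring genes share a smoothed component, so co-expression decays
# with genomic distance.
n_genes, n_samples, window = 200, 30, 10
positions = np.sort(rng.uniform(0, 1e6, n_genes))     # gene coordinates (bp)
base = rng.normal(size=(n_genes + window, n_samples))
expr = np.stack([base[i:i + window].mean(axis=0) for i in range(n_genes)])
expr = expr + 0.3 * rng.normal(size=(n_genes, n_samples))

# Mean pairwise expression correlation, binned by gene-to-gene distance.
corr = np.corrcoef(expr)
i_idx, j_idx = np.triu_indices(n_genes, k=1)
dist = positions[j_idx] - positions[i_idx]
bins = np.arange(0, 2.0e5 + 1, 2.5e4)
for lo_edge, hi_edge in zip(bins[:-1], bins[1:]):
    sel = (dist >= lo_edge) & (dist < hi_edge)
    mean_r = corr[i_idx[sel], j_idx[sel]].mean()
    print(f"{lo_edge / 1e3:>5.0f}-{hi_edge / 1e3:>5.0f} kb  mean r = {mean_r:+.3f}")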

Monday, January 25, 2016

order for free?


edge |  What kinds of complex systems can evolve by accumulation of successive useful variations? Does selection by itself achieve complex systems able to adapt? Are there lawful properties characterizing such complex systems? The overall answer may be that complex systems constructed so that they're on the boundary between order and chaos are those best able to adapt by mutation and selection.

Chaos is a subset of complexity. It's an analysis of the behavior of continuous dynamical systems — like hydrodynamic systems, or the weather — or discrete systems that show recurrences of features and high sensitivity to initial conditions, such that very small changes in the initial conditions can lead a system to behave in very different ways. A good example of this is the so called butterfly effect: the idea is that a butterfly in Rio can change the weather in Chicago. An infinitesimal change in initial conditions leads to divergent pathways in the evolution of the system. Those pathways are called trajectories. The enormous puzzle is the following: in order for life to have evolved, it can't possibly be the case that trajectories are always diverging. Biological systems can't work if divergence is all that's going on. You have to ask what kinds of complex systems can accumulate useful variation.

We've discovered the fact that in the evolution of life very complex systems can have convergent flow and not divergent flow. Divergent flow is sensitivity to initial conditions. Convergent flow means that even different starting places that are far apart come closer together. That's the fundamental principle of homeostasis, or stability to perturbation, and it's a natural feature of many complex systems. We haven't known that until now. That's what I found out twenty-five years ago, looking at what are now called Kauffman models — random networks exhibiting what I call "order for free."

Complex systems have evolved which may have learned to balance divergence and convergence, so that they're poised between chaos and order. Chris Langton has made this point, too. It's precisely those systems that can simultaneously perform the most complex tasks and evolve, in the sense that they can accumulate successive useful variations. The very ability to adapt is itself, I believe, the consequence of evolution. You have to be a certain kind of complex system to adapt, and you have to be a certain kind of complex system to coevolve with other complex systems. We have to understand what it means for complex systems to come to know one another — in the sense that when complex systems coevolve, each sets the conditions of success for the others. I suspect that there are emergent laws about how such complex systems work, so that, in a global, Gaia-like way, complex coevolving systems mutually get themselves to the edge of chaos, where they're poised in a balanced state. It's a very pretty idea. It may be right, too.

My approach to the coevolution of complex systems is my order-for-free theory. If you have a hundred thousand genes and you know that genes turn one another on and off, then there's some kind of circuitry among the hundred thousand genes. Each gene has regulatory inputs from other genes that turn it on and off. This was the puzzle: What kind of a system could have a hundred thousand genes turning one another on and off, yet evolve by creating new genes, new logic, and new connections?

Suppose we don't know much about such circuitry. Suppose all we know are such things as the number of genes, the number of genes that regulate each gene, the connectivity of the system, and something about the kind of rules by which genes turn one another on and off. My question was the following: Can you get something good and biology-like to happen even in randomly built networks with some sort of statistical connectivity properties? It can't be the case that it has to be very precise in order to work — I hoped, I bet, I intuited, I believed, on no good grounds whatsoever — but the research program tried to figure out if that might be true. The impulse was to find order for free. As it happens, I found it. And it's profound.
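The objects being described are random Boolean networks: N genes, each receiving K random inputs and a random on/off rule. A minimal sketch (a toy version, not Kauffman's code) shows the signature of "order for free" in the K = 2 regime: trajectories through an astronomically large state space settle onto short cycles:

import random

random.seed(2)
N, K = 20, 2                 # 20 "genes", each regulated by 2 randomly chosen genes

inputs = [random.sample(range(N), K) for _ in range(N)]
# Each gene gets a random Boolean rule: a lookup table over its 2**K input patterns.
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    new = []
    for g in range(N):
        idx = sum(state[inp] << b for b, inp in enumerate(inputs[g]))
        new.append(rules[g][idx])
    return tuple(new)

def attractor_length(state):
    seen = {}
    t = 0
    while state not in seen:     # iterate until the trajectory revisits a state
        seen[state] = t
        state = step(state)
        t += 1
    return t - seen[state]       # length of the cycle the system settled into

starts = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(5)]
print([attractor_length(s) for s in starts])
# Despite 2**20 possible states, random starting points fall onto short cycles.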

One reason it's profound is that if the dynamical systems that underlie life were inherently chaotic, then for cells and organisms to work at all there'd have to be an extraordinary amount of selection to get things to behave with reliability and regularity. It's not clear that natural selection could ever have gotten started without some preexisting order. You have to have a certain amount of order to select for improved variants.

Saturday, December 05, 2015

are attorneys general the godfathers in this thing of ours?



techdirt | Earlier this year, Judge Alex Kozinski went much further than his one-off comments in judicial opinions to take the prosecutors to task for… well, pretty much everything. The "epidemic of Brady [exonerating evidence] violations" he noted in 2013's USA v. Olsen decision was just the leadoff. Kozinski teed off on faulty forensic evidence (comparing arson "specialists" to "witch doctors"), the way the "first impression" almost always favors prosecutors (who get to present their case first in criminal trials), and the general unreliability of eyewitness testimony, which is often portrayed as infallible when it's the government presenting the witnesses.

Several months later, the Department of Justice -- home to a great many prosecutors -- has finally responded. And its feelings are terribly hurt.
Federal prosecutors, whom Judge Kozinski actually described in glowing terms, took offense at the fact that they are not considered infallible by the Judge. And in the last few weeks, they have made their hurt feelings known.

Andrew Goldsmith, National Criminal Discovery Coordinator at the Department of Justice, and John Walsh, United States Attorney for the District of Colorado, wrote a letter to the Georgetown Law Journal expressing their displeasure with Kozinski’s contribution to the journal. Rather than take the opportunity to join in Kozinski’s call for a more careful justice system, Goldsmith and Walsh demonstrated a stunning lack of awareness about what they do and how often it goes wrong these days.
According to this defensive group of prosecutors, Kozinski's "provocative preface" was certainly food for thought for whoever Kozinski was referring to, but not them, because federal prosecutors are upstanding men and women whom the judge has insulted deeply.
While the preface raises several points that merit discussion, such as the reliability of certain forms of evidence, Judge Kozinski goes too far in casting aspersions on the men and women responsible for the administration of justice in this country. His preface seemed to question not only the integrity of our agents and prosecutors, but also the government’s capacity to self-correct in the (very small) minority of cases when someone falls short.
The problem is, Kozinski is one of a very few judges to question the integrity of prosecutors. And for all the umbrage being hauled in by the semi-truckful, Kozinski was rather restrained when discussing federal prosecutors. Still, the DOJ cannot sit idly by while someone suggests a few prosecutors don't play by the rules and that the rules themselves are faulty. So, it does what the DOJ always does in these situations: defends the honor of the (not even directly) accused. When the DOJ takes down a local police force for misconduct or abuse, it always makes sure to rub the bellies of the police force at large before getting to the bad stuff. 

In this case, the bad stuff preceded the defensive statements from the DOJ, which now have to stand alone.
We have both worked with many prosecutors during our combined thirty-three years at the Justice Department. We have served as line prosecutors and supervisors, and now hold positions with national responsibility. Throughout our careers, what has always struck us is the professionalism, integrity, and decency of our colleagues. They care deeply about the work that they do, not because they are trying to rack up convictions or long sentences, but because they seek to ensure that justice is done in each and every case they handle. This extends to the seriousness with which they take their discovery obligations. Our prosecutors comply with these obligations—because they are required to do so and because it is the right thing to do. It is a principle embedded not only in the Department’s internal rules, but in the Department’s culture.
And being so good is oh so exhausting.
At the Department of Justice, we recognize our responsibility to work tirelessly to improve the work that we do, and to enhance the fair administration of justice.
In support of its assertions, the DOJ claims only a small handful of prosecutions have resulted in the courts calling it out for abusive actions. But that does nothing to diminish Kozinski's points.

Wednesday, October 21, 2015

microcosmic reality mechanics and rao's hyparchic folding machine...,


theatlantic |  Rao and fellow student Adrian Sanborn think that the key to this process is a cluster of proteins called an “extrusion complex,” which looks like a couple of Polo mints stuck together. The complex assembles on a stretch of DNA so that the long molecule threads through one hole, forms a very short loop, and then passes through the other one. Then, true to its name, the complex extrudes the DNA, pushing both strands outwards so that the loop gets longer and longer. And when the complex hits one of the CTCF landing sites, it stops, but only if the sites are pointing in the right direction.

This explanation is almost perfect. It accounts for everything that the team have seen in their work: why the loops don’t get tangled, and why the CTCF landing sites align the way they do. “This is an important milestone in understanding the three dimensional structure of chromosomes, but like most great papers, it raises more questions than it provides answers,” says Kim Nasmyth, a biochemist at the University of Oxford who first proposed the concept of an extrusion complex in 2001.

The big mystery, he says, is how the loops actually grow. Is there some kind of ratcheting system that stops the DNA from sliding back? Is such a system even necessary? And “even when we understand how loops are created, we still need to understand what they are doing for the genome,” Nasmyth adds. “It’s very early days.”

And then there’s the really big problem: No one knows if the extrusion complex exists.

Since Nasmyth conceived of it, no one has yet proved that it’s real, let alone worked out which proteins it contains. CTCF is probably part of it, as is a related protein called cohesin. Beyond that, it’s a mystery. It’s like a ghostly lawnmower, whose presence is inferred by looking at a field of freshly shorn grass, or the knife that we only know about by studying the stab wounds. It might not actually be a thing.

Except: The genome totally behaves as if the extrusion complex was a thing. Rao and Sanborn created a simulation that predicts the structure of the genome on the basis that the complex is real and works the way they think it does.
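Rao and Sanborn's actual simulation is physics-based; the cartoon below only encodes the rule it rests on: both ends of the loop extrude outward until each hits a CTCF site whose motif points back toward the loop interior. Positions and orientations here are made up for illustration:

# A made-up 1-D chromosome: CTCF sites as (position, orientation).
# '>' means the motif points downstream, '<' means it points upstream.
ctcf = [(100, '>'), (240, '<'), (300, '>'), (520, '<'), (610, '>'), (900, '<')]

def extrude(load_pos):
    # Each end walks outward and halts at the first CTCF site oriented
    # toward the loop interior (the convergent rule).
    left = next((p for p, o in reversed(ctcf) if p <= load_pos and o == '>'), 0)
    right = next((p for p, o in ctcf if p >= load_pos and o == '<'), 1000)
    return left, right

for load in (150, 350, 700):
    print(f"complex loaded at {load:>3} -> loop anchored at {extrude(load)}")
# Anchors come out as convergent ('>' ... '<') pairs, matching the orientation
# bias observed in the Hi-C maps.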

These predictions were so accurate that the team could even re-sculpt the genome at will. They started playing around with the CTCF landing pads, deleting, flipping, and editing these sequences using a powerful gene-editing technique called CRISPR. In every case, their simulation predicted how the changes would alter the 3-D shape of the genome, and how it would create, move, or remove the existing loops. And in every case, it was right.

“Our model requires very little knowledge beyond where CTCF is binding, but it tells us where the loops will be,” says Rao. “It now allows us to do genome surgery, where we can reengineer the genome on a large scale.”

This predictive power has several applications. Remember that loops allow seemingly innocuous stretches of DNA to control the activity of distant genes. If biologists can understand the principles behind these interactions, and predict their outcomes, they can more efficiently engineer new genetic circuits.

“There’s a growing appreciation that some diseases are related to how the genome is oriented rather than just a mutation,” adds Rao. “This is a little speculative, but there might be diseases where you could go in, put a loop back, and fix the problem.”

Friday, September 04, 2015

a previously unguessed mathematical secret of how the world works?


WaPo |  In nature, the relationship between predators and their prey seems like it should be simple: The more prey that’s available to be eaten, the more predators there should be to eat them. 

If a prey population doubles, for instance, we would logically expect its predators to double too. But a new study, published Thursday in the journal Science, turns this idea on its head with a strange discovery: There aren’t as many predators in the world as we expect there to be. And scientists aren’t sure why.

By conducting an analysis of more than a thousand studies worldwide, researchers found a common theme in just about every ecosystem across the globe: Predators don’t increase in numbers at the same rate as their prey. In fact, the faster you add prey to an ecosystem, the slower predators’ numbers grow. 

“When you double your prey, you also increase your predators, but not to the same extent,” says Ian Hatton, a biologist and the study’s lead author. “Instead they grow at a much diminished rate in comparison to prey.” This was true for large carnivores on the African savanna all the way down to the tiniest microbe-munching fish in the ocean.

Even more intriguing, the researchers noticed that the ratio of predators to prey in all of these ecosystems could be predicted by the same mathematical function — in other words, the way predator and prey numbers relate to each other is the same for different species all over the world.
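The "same mathematical function" is a power law with an exponent below one (the study reports values around three-quarters): predator density grows roughly as prey density raised to ~0.75, so the predator-to-prey ratio falls as ecosystems get richer. A short sketch with invented numbers shows how such an exponent is read off a log-log fit:

import numpy as np

rng = np.random.default_rng(3)

# Invented biomass data (kg per km^2) following predators ~ prey**0.75, with scatter.
prey = np.logspace(1, 5, 40)
predators = 0.1 * prey ** 0.75 * rng.lognormal(sigma=0.2, size=prey.size)

# Fitting a straight line in log-log space recovers the exponent:
# log(pred) = log(c) + k * log(prey)
k, log_c = np.polyfit(np.log(prey), np.log(predators), 1)
print(f"fitted exponent k ~ {k:.2f}  (k < 1: predators grow more slowly than prey)")

# Consequence: the predator-to-prey ratio shrinks as prey become more abundant.
ratio = predators / prey
print(f"ratio at low prey: {ratio[0]:.4f}   ratio at high prey: {ratio[-1]:.4f}")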
