Showing posts with label co-evolution. Show all posts

Friday, July 01, 2022

Quantum Physics And Engineered Viruses

MIT  | Nature has had billions of years to perfect photosynthesis, which directly or indirectly supports virtually all life on Earth. In that time, the process has achieved almost 100 percent efficiency in transporting the energy of sunlight from receptors to reaction centers where it can be harnessed — a performance vastly better than even the best solar cells.

One way plants achieve this efficiency is by making use of the exotic effects of quantum mechanics — effects sometimes known as “quantum weirdness.” These effects, which include the ability of a particle to exist in more than one place at a time, have now been used by engineers at MIT to achieve a significant efficiency boost in a light-harvesting system.

Surprisingly, the researchers at MIT and Eni, the Italian energy company, developed this new approach to solar energy not with high-tech materials or microchips, but with genetically engineered viruses.

This achievement in coupling quantum research and genetic manipulation, described this week in the journal Nature Materials, was the work of MIT professors Angela Belcher, an expert on engineering viruses to carry out energy-related tasks, and Seth Lloyd, an expert on quantum theory and its potential applications; research associate Heechul Park; and 14 collaborators at MIT, Eni, and Italian universities.

Lloyd, the Nam Pyo Suh Professor in the Department of Mechanical Engineering, explains that in photosynthesis, a photon hits a receptor called a chromophore, which in turn produces an exciton — a quantum particle of energy. This exciton jumps from one chromophore to another until it reaches a reaction center, where that energy is harnessed to build the molecules that support life.

But the hopping pathway is random and inefficient unless it takes advantage of quantum effects that allow it, in effect, to take multiple pathways at once and select the best ones, behaving more like a wave than a particle.

This efficient movement of excitons has one key requirement: The chromophores have to be arranged just right, with exactly the right amount of space between them. This, Lloyd explains, is known as the “Quantum Goldilocks Effect.”

That’s where the virus comes in. By engineering a virus that Belcher has worked with for years, the team was able to get it to bond with multiple synthetic chromophores — or, in this case, organic dyes. The researchers were then able to produce many varieties of the virus, with slightly different spacings between those synthetic chromophores, and select the ones that performed best.

In the end, they were able to more than double excitons’ speed, increasing the distance they traveled before dissipating — a significant improvement in the efficiency of the process.
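
The screen-and-select loop in that last step is easy to caricature in code. The sketch below is a toy, not the team's model: it assumes a made-up efficiency curve peaked at an arbitrary "Goldilocks" spacing of 3 nm, generates a batch of variants with random spacings, and keeps the best, which is the shape of the procedure described above.

    import math
    import random

    # Toy stand-in for exciton transport efficiency, peaked at an assumed
    # "Goldilocks" spacing of 3.0 nm with width 0.5 nm. Both numbers are
    # illustrative, not values from the Nature Materials paper.
    def transport_efficiency(spacing_nm):
        return math.exp(-((spacing_nm - 3.0) ** 2) / (2 * 0.5 ** 2))

    def screen_variants(n_variants=50, seed=1):
        """Mimic the screen: make variants with slightly different
        chromophore spacings and keep the best performer."""
        rng = random.Random(seed)
        spacings = [rng.uniform(1.0, 6.0) for _ in range(n_variants)]
        return max(spacings, key=transport_efficiency)

    best = screen_variants()
    print(f"best spacing: {best:.2f} nm, "
          f"toy efficiency: {transport_efficiency(best):.3f}")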

 

Thursday, June 30, 2022

Ancient Viruses And The Origins Of Complex Life On Earth

scitechdaily |  Scientists at The University of Texas at Austin (UT Austin) report in Nature Microbiology the first discovery of viruses that infect a group of microbes which may include the ancestors of all complex life. The discovery offers tantalizing clues about the origins of complex life and suggests new directions for investigating the hypothesis that viruses were essential to the evolution of humans and other complex life forms.

There is a well-supported hypothesis that all complex life forms such as humans, starfish, and trees — which feature cells with a nucleus and are called eukaryotes — originated when archaea and bacteria merged to form a hybrid organism. Recent research suggests the first eukaryotes are direct descendants of so-called Asgard archaea. The latest research, by Ian Rambo (a former doctoral student at UT Austin) and other members of Brett Baker's lab, sheds light on how viruses, too, may have played a role in this history stretching back billions of years.


 

Comparison of all known virus genomes. Viruses with similar genomes are grouped together, including those that infect bacteria (on the left) and eukaryotes (on the right and bottom center). The viruses that infect Asgard archaea are unlike any described before. Credit: University of Texas at Austin

“This study is opening a door to better resolving the origin of eukaryotes and understanding the role of viruses in the ecology and evolution of Asgard archaea,” Rambo said. “There is a hypothesis that viruses may have contributed to the emergence of complex cellular life.”

Rambo is referring to a hotly debated hypothesis called viral eukaryogenesis. It suggests that, in addition to bacteria and archaea, viruses might have contributed some genetic component to the development of eukaryotes. While this latest discovery does not settle that debate, it does offer some interesting clues.

The newly discovered viruses that infect currently living Asgard archaea do have some features similar to viruses that infect eukaryotes, including the ability to copy their own DNA and hijack protein modification systems of their hosts. The fact that these recovered Asgard viruses display characteristics of both viruses that infect eukaryotes and prokaryotes, which have cells without a nucleus, makes them unique since they are not exactly like those that infect other archaea or complex life forms.

“The most exciting thing is they are completely new types of viruses that are different from those that we’ve seen before in archaea and eukaryotes, infecting our microbial relatives,” said Baker, associate professor of marine science and integrative biology and corresponding author of the study.

The Asgard archaea, which probably evolved more than 2 billion years ago and whose descendants are still living, have been discovered in deep-sea sediments and hot springs around the world, but so far only one strain has been successfully grown in the lab. To identify them, scientists collect their genetic material from the environment and then piece together their genomes. In this latest study, the researchers scanned the Asgard genomes for repeating DNA regions known as CRISPR arrays, which contain small pieces of viral DNA that can be precisely matched to viruses that previously infected these microbes. These genetic “fingerprints” allowed them to identify these stealthy viral invaders that infect organisms with key roles in the complex origin story of eukaryotes.
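
The CRISPR-matching step lends itself to a compact illustration. The sketch below uses invented sequences and exact substring search; a real pipeline would align spacers against viral contigs with a tool such as BLAST and tolerate mismatches.

    # Minimal sketch of CRISPR-spacer-to-virus matching. Sequences are
    # invented; real pipelines align with tools like BLAST and allow
    # mismatches instead of exact substring hits.
    spacers = {
        "asgard_host_1": ["ATTGCCGTAGGA", "CCGGATTACAGT"],
        "asgard_host_2": ["TTGACCAGGTCA"],
    }
    viral_contigs = {
        "virus_A": "GGGATTGCCGTAGGATTTCAACCGG",
        "virus_B": "ACGTTTGACCAGGTCAAGGTACGTA",
    }

    def match_spacers(spacers, contigs):
        """Link each host to viruses whose genomes contain one of its
        spacers -- the genetic 'fingerprint' of a past infection."""
        for host, host_spacers in spacers.items():
            for spacer in host_spacers:
                for virus, seq in contigs.items():
                    if spacer in seq:
                        yield host, virus, spacer

    for host, virus, spacer in match_spacers(spacers, viral_contigs):
        print(f"{host} was once infected by {virus} (spacer {spacer})")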

Monday, June 11, 2018

Cognitive Enhancement In The Context Of Neurodiversity And Psychopathology


ama-assn |  In the basement of the Bureau International des Poids et Mesures (BIPM) headquarters in Sèvres, France, a suburb of Paris, there lies a piece of metal that has been secured since 1889 in an environmentally controlled chamber under three bell jars. It represents the world standard for the kilogram, and all other kilo measurements around the world must be compared and calibrated to this one prototype. There is no such standard for the human brain. Search as you might, there is no brain pickled in a jar in the basement of the Smithsonian Museum or the National Institutes of Health or elsewhere in the world that represents the standard to which all other human brains must be compared. Given that this is the case, how do we decide whether any individual human brain or mind is abnormal or normal? To be sure, psychiatrists have their diagnostic manuals. But when it comes to mental disorders, including autism, dyslexia, attention deficit hyperactivity disorder, intellectual disabilities, and even emotional and behavioral disorders, there appears to be substantial uncertainty concerning when a neurologically based human behavior crosses the critical threshold from normal human variation to pathology.

A major cause of this ambiguity is the emergence over the past two decades of studies suggesting that many disorders of the brain or mind bring with them strengths as well as weaknesses. People diagnosed with autism spectrum disorder (ASD), for example, appear to have strengths related to working with systems (e.g., computer languages, mathematical systems, machines) and in experiments are better than control subjects at identifying tiny details in complex patterns [1]. They also score significantly higher on the nonverbal Raven’s Matrices intelligence test than on the verbal Wechsler Scales [2]. A practical outcome of this new recognition of ASD-related strengths is that technology companies have been aggressively recruiting people with ASD for occupations that involve systemizing tasks such as writing computer manuals, managing databases, and searching for bugs in computer code [3].

Valued traits have also been identified in people with other mental disorders. People with dyslexia have been found to possess global visual-spatial abilities, including the capacity to identify “impossible objects” (of the kind popularized by M. C. Escher) [4], process low-definition or blurred visual scenes [5], and perceive peripheral or diffused visual information more quickly and efficiently than participants without dyslexia [6]. Such visual-spatial gifts may be advantageous in jobs requiring three-dimensional thinking such as astrophysics, molecular biology, genetics, engineering, and computer graphics [7, 8]. In the field of intellectual disabilities, studies have noted heightened musical abilities in people with Williams syndrome, the warmth and friendliness of individuals with Down syndrome, and the nurturing behaviors of persons with Prader-Willi syndrome [9, 10]. Finally, researchers have observed that subjects with attention deficit hyperactivity disorder (ADHD) and bipolar disorder display greater levels of novelty-seeking and creativity than matched controls [11-13].

Such strengths may suggest an evolutionary explanation for why these disorders are still in the gene pool. A growing number of scientists are suggesting that psychopathologies may have conferred specific evolutionary advantages in the past as well as in the present [14]. The systemizing abilities of individuals with autism spectrum disorder might have been highly adaptive for the survival of prehistoric humans. As Temple Grandin, an autism activist who is herself autistic, surmised: “Some guy with high-functioning Asperger’s developed the first stone spear; it wasn’t developed by the social ones yakking around the campfire” [15].

Wednesday, May 16, 2018

Did Autistic Attention To Detail And Collaborative Morality Drive Human Evolution?


tandfonline |  Selection pressures to better understand others’ thoughts and feelings are seen as a primary driving force in human cognitive evolution. Yet might the evolution of social cognition be more complex than we assume, with more than one strategy towards social understanding and developing a positive pro-social reputation? Here we argue that social buffering of vulnerabilities through the emergence of collaborative morality will have opened new niches for adaptive cognitive strategies and widened personality variation. Such strategies include those that do not depend on astute social perception or abilities to think recursively about others’ thoughts and feelings. We particularly consider how a perceptual style based on logic and detail, bringing certain enhanced technical and social abilities which compensate for deficits in complex social understanding, could be advantageous at low levels in certain ecological and cultural contexts. ‘Traits of autism’ may have promoted innovation in archaeological material culture during the late Palaeolithic in the context of the mutual interdependence of different social strategies, which in turn contributed to the rise of innovation and large-scale social networks.

physorg | The ability to focus on detail, a common trait among people with autism, allowed realism to flourish in Ice Age art, according to researchers at the University of York. 



Around 30,000 years ago realistic art suddenly flourished in Europe. Extremely accurate depictions of bears, bison, horses and lions decorate the walls of Ice Age archaeological sites such as Chauvet Cave in southern France.

Why our ice age ancestors created exceptionally realistic art rather than the very simple or stylised art of earlier modern humans has long perplexed researchers.

Many have argued that psychotropic drugs were behind the detailed illustrations. The popular idea that drugs might make people better at art led to a number of ethically-dubious studies in the 60s where participants were given art materials and LSD.

The authors of the new study discount that theory, arguing instead that individuals with "detail focus", a trait linked to autism, kicked off an artistic movement that led to the proliferation of realistic cave drawings across Europe.

Sunday, January 14, 2018

Posing With a Rifle IS NOT Fighting the Money Power...,


subrealism |  However, money does not emerge from barter-based economic activities, but rather from the sovereign's desire to organize economic activity. The state issues currency and then imposes taxes. Because citizens are forced to use the state's currency to pay their taxes, they can trust that the currency will carry value in day-to-day economic activities. Governments with their own currency and a floating exchange rate (sovereign currency issuers like the United States) do not have to borrow from "bond vigilantes" to spend. They themselves first spend the money into existence and then collect it through taxation to enforce its usage. The state can spend unlimited amounts of money. It is only constrained by biophysical resources, and if the state spends beyond the availability of resources, the result is inflation, which can be mitigated by taxation. 
These simple facts carry radical policy implications.

theroot |  I first met Zac Henson a few years ago when we were both invited to a forum in Birmingham, Ala., to talk about economic development. He has an unkempt beard and talks with a Southern accent as thick as Karo Syrup. He looks like a redneck. He sounds like a redneck. I figured that I would be the lone voice railing against the gentrification of one of the blackest cities in America, until he spoke up.

It turns out that Henson is a redneck. It also turns out that Henson is a UC Berkeley-educated economist and scholar with a Ph.D. in environmental science, policy and management who heads the Cooperative New School for Urban Studies and Environmental Justice. Henson doesn’t consider the term “redneck” a pejorative, and defines a redneck simply as “a white working-class Southerner.” He has been working for years to separate redneck culture from its neo-Confederate, racist past and redefine it according to its working-class roots.

“The only culture that white people and upper-middle-class white people have is whiteness,” Henson explains. “To fit in that class, you must strip yourself of everything else. What I would like to do is show working-class whites that the neo-Confederate bullshit is a broken ideology. ... A lot of the activism in anti-racism is all about white people giving up their privilege in regards to white supremacy. I believe that will never work with working-class whites. You have to find a way to show working folks that anti-racism is within the self-interests of working-class white people. And you have to do that with a culture.”

Henson is one of the people trying to renew the legacy of the Young Patriots and build the anti-racism redneck movement. He is one of the people trying to spread the message and history of the Young Patriots Organization and its connection to redneck culture.

The original YPO was led by William “Preacherman” Fesperman and made up of “hillbillies” from Chicago’s South Side. They saw the similarity in how the Chicago machine treated blacks and how it treated poor whites. Preacherman believed that solidarity was the only answer.

“Let racism become a disease,” he said at the 1969 conference. “I’m talking to the white brothers and sisters because I know what it’s done. I know what it’s done to me. I know what it does to people every day. … It’s got to stop, and we’re doing it.”

Modeled after the Black Panther Party, the YPO adapted the Panthers’ ideas into its platform. It used an 11-point plan (pdf) similar to the Panther Party’s 10-point plan. It opened a free health clinic like the Panthers. The YPO, too, was raided by the “pigs” (pdf).

Today the Young Patriots Organization is looking to build on the legacy interrupted by the death of Fred Hampton. It embraces the term “redneck” as a cultural term and wants to build a movement that fights racism the same way as the Black Panthers it modeled itself after almost five decades ago.
Hy Thurman, an original member of the YPO who is looking to resurrect the organization, says: “Racism was a demon that had to be driven out and slain if we were going to have unity with other groups and to believe that all people have a right to self-determination and freedom. … We had to change to make life tolerable, and for life to have some sort of meaning.”

Henson, Thurman and the YPO chapters across the country are using their history with the Panthers to fight racism, class warfare and oppression on all fronts, and they are rounding up unafraid rednecks willing to fight the power structure in any way possible.

Monday, September 18, 2017

The Promise and Peril of Immersive Technologies


weforum |  The best place from which to draw inspiration for how immersive technologies may be regulated is the regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law necessitate unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of their global annual turnover for breaches. Furthermore, enshrined in the bill is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for an innovative start-up to compete with the biggest players. We may see similar regulatory norms for immersive technologies develop as well.

Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type, and IP address. Furthermore, communication with other users in VR environments is being stored, and aggregated data is shared with third parties and used to customize products for marketing purposes.

Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.

Managing consumption
Short-term measures also exist to address overuse, in the form of stopping mechanisms. Pop-up usage warnings, issued once healthy limits are approached or exceeded, are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on content types that their children are exposed to, as well as time limits on usage across apps.

All of these could be transferred to immersive technologies, and they are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one can easily imagine an enterprising child ‘borrowing’ an adult’s device after hours to work around the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.
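
As a sketch only (the law's actual enforcement plumbing is not described here, so the interfaces below are assumptions), the shutdown rule reduces to two checks: the age derived from the registered date of birth, and the hour of the session.

    from datetime import date, datetime

    CURFEW_START, CURFEW_END = 0, 6   # midnight to 6 am
    MIN_AGE = 16                      # threshold cited for the Shutdown Law

    def age_on(birth: date, today: date) -> int:
        """Age in whole years on a given date."""
        had_birthday = (today.month, today.day) >= (birth.month, birth.day)
        return today.year - birth.year - (0 if had_birthday else 1)

    def session_allowed(birth: date, now: datetime) -> bool:
        """Block under-16 users between midnight and 6 am, else allow."""
        if age_on(birth, now.date()) >= MIN_AGE:
            return True
        return not (CURFEW_START <= now.hour < CURFEW_END)

    print(session_allowed(date(2010, 5, 1), datetime(2022, 7, 1, 2, 30)))  # False
    print(session_allowed(date(2010, 5, 1), datetime(2022, 7, 1, 9, 0)))   # True
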
Rethinking success metrics for digital technology
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with a service.
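
For concreteness, a net promoter score is conventionally computed from 0-10 "would you recommend" ratings as the percentage of promoters (ratings 9-10) minus the percentage of detractors (0-6); a minimal version:

    def net_promoter_score(ratings):
        """NPS: % promoters (ratings 9-10) minus % detractors (0-6)."""
        promoters = sum(r >= 9 for r in ratings)
        detractors = sum(r <= 6 for r in ratings)
        return 100.0 * (promoters - detractors) / len(ratings)

    print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0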

The real challenge, however, is to find measures that align with business policy and user objectives. As Tristan Harris, founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. These are examples where users can pay to enjoy advertising-free experiences, at no cost to the content developers, since they receive revenue in the form of paid subscriptions.

More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

This is part of a series of articles related to the disruptive effects of several technologies (virtual/augmented reality, artificial intelligence and blockchain) on the creative economy.


Virtual Reality Health Risks...,


medium |  Two decades ago, our research group made international headlines when we published research showing that virtual reality systems could damage people’s health.

Our demonstration of side-effects was not unique — many research groups were showing that virtual reality could cause health problems. Our work was newsworthy because we showed that there were fundamental problems that needed to be tackled when designing virtual reality systems — and these problems needed engineering solutions that were tailored for the human user.

In other words, it was not enough to keep producing ever faster computers and higher definition displays — a fundamental change in the way systems were designed was required.

So why do virtual reality systems need a new approach? The answer to this question lies in the very definition of how virtual reality differs from how we traditionally use a computer.

Natural human behaviour is based on responses elicited by information detected by a person’s sensory systems. For example, rays of light bouncing off a shiny red apple can indicate that there’s a good source of food hanging on a tree.

A person can then use the information to guide their hand movements and pick the apple from the tree. This use of ‘perception’ to guide ‘motor’ actions defines a feedback loop that underpins all of human behaviour. The goal of virtual reality systems is to mimic the information that humans normally use to guide their actions, so that humans can interact with computer generated objects in a natural way.

The problems come when the normal relationship between the perceptual information and the corresponding action is disrupted. One way of thinking about such disruption is that a mismatch between perception and action causes ‘surprise’. It turns out that surprise is really important for human learning and the human brain appears to be engineered to minimise surprise.

This means that the challenge for the designers of virtual reality is that they must create systems that minimise the surprise experienced by the user when using computer generated information to control their actions.

Of course, one of the advantages of virtual reality is that the computer can create new and wonderful worlds. For example, a completely novel fruit — perhaps an elppa — could be shown hanging from a virtual tree. The elppa might have a completely different texture and appearance to any other previously encountered fruit — but it’s important that the information used to specify the location and size of the elppa allows the virtual reality user to guide their hand to the virtual object in a normal way.

If there is a mismatch between the visual information and the hand movements then ‘surprise’ will result, and the human brain will need to adapt if future interactions between vision and action are to maintain their accuracy. The issue is that the process of adaptation may cause difficulties — and these difficulties might be particularly problematic for children as their brains are not fully developed. 
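
A generic way to model that adaptation (a standard delta-rule toy, not the authors' model) is an internal bias that is nudged a little after every mismatched reach; all numbers below are illustrative assumptions.

    # Toy visuomotor adaptation: a VR display shifts targets by a constant
    # offset; the controller reduces reach error by updating an internal
    # bias after every trial (a delta rule). Numbers are illustrative.
    DISPLAY_OFFSET = 5.0   # degrees of mismatch introduced by the headset
    LEARNING_RATE = 0.3

    bias = 0.0             # internal correction, starts naive
    for trial in range(1, 11):
        error = DISPLAY_OFFSET - bias    # how far the reach misses
        bias += LEARNING_RATE * error    # adapt to reduce the "surprise"
        print(f"trial {trial:2d}: error = {error:5.2f} deg")
    # Error shrinks geometrically; after removing the headset the leftover
    # bias now causes errors in the *real* world -- the aftereffect that
    # makes adaptation potentially problematic, especially for children.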

This issue affects all forms of information presented within a virtual world (so hearing and touch as well as vision), and all of the different motor systems (so postural control as well as arm movement systems). One good example of the problems that can arise can be seen through the way our eyes react to movement.

In 1993, we showed that virtual reality systems had a fundamental design flaw when they attempted to show three dimensional visual information. This is because the systems produce a mismatch between where the eyes need to focus and where the eyes need to point. In everyday life, if we change our focus from something close to something far away our eyes will need to change focus and alter where they are pointing.

The change in focus is necessary to prevent blur, and the change in eye direction is necessary to stop double images. In reality, the changes in focus and direction are physically linked: a change in fixation distance changes both the focus of the images and where they fall at the back of the eyes.
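
In a standard thin-lens idealization (a textbook formulation, offered as an illustration rather than something taken from the article), the two demands on eyes separated by interpupillary distance a, fixating a point at distance d, are:

    \theta = 2\arctan\!\left(\frac{a}{2d}\right), \qquad A = \frac{1}{d}

where \theta is the vergence angle and A is the accommodative demand in diopters. In natural viewing both quantities are driven by the same d and therefore co-vary; in a stereoscopic headset A is pinned to the fixed screen distance while the simulated d keeps changing \theta, which is exactly the focus-versus-pointing mismatch described above.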

Tuesday, April 25, 2017

The Magical Technosignatures of Truly Intelligent Species


space |  Space.com: So, intelligence can be considered on a planetary scale?

Grinspoon: The basic ability to not wipe oneself out, to endure, to use your technological interaction with the world in such a way that has the possibility of the likelihood of lasting and not being temporary — that seems like a pretty good definition of intelligence. I talk about true intelligence, planetary intelligence. It's part and parcel of this notion of thinking of us as an element of a planet. And when we think in that way, then you can discriminate between one type of interaction with the planet that we would have that would not be sustainable, that would mark us as a temporary kind of entity, and another type in which we use our knowledge to integrate into planetary systems [in] some kind of long-term graceful way. That distinction seems to me a worthwhile definition of a kind of intelligence.

Especially then going back to the SETI [search for extraterrestrial intelligence] question, because longevity is so important in the logic and the math of SETI. There may be a bifurcation or subset [of life] that doesn't make this leap to this type of intelligence. The ones that do make that leap have a very long lifetime. And they're the ones that in my view are intelligent. Using your knowledge of the universe to prolong your lifetime seems like an obviously reasonable criterion [of intelligence]. If you use that criterion, then it's not obvious that we have intelligence on Earth yet, but we can certainly glimpse it.

Space.com: You also wrote that sustainable alien populations could be harder to detect. What would that mean?

Grinspoon: One possible answer to the Fermi Paradox, which asks "Where are they?" is that they're all over the place, but they're not obviously detectable in ways that we imagine they would be. Truly intelligent life may not be wasteful and profligate and highly physical. Arthur C. Clarke said that the best technology would be indistinguishable from magic. What if really highly advanced technology is indistinguishable from nature? Or is hard to distinguish.

There's the set of assumptions embedded in [the search for extraterrestrial intelligence] that the more advanced a civilization is the more energy they'll use, the more they'll expand. It's funny to think about that and realize that we're talking about this while realizing things about our own future, that there is no future in this thoughtless, cancerous expansion of material energy use. That's a dead end. So why would an advanced civilization value that? You can understand why a primitive organism would value that — there's a biological imperative that makes sense for Darwinian purposes for us to multiply as much as possible, that's how you avoid becoming extinct. But in a finite container, that's a trap. I assume that truly intelligent species would not be bound by that primitive biological imperative. Maybe intelligent life actually questions its value and realizes that quality is more important than quantity.

I'm not claiming to know that this is true about advanced aliens because I don't think anybody can know anything about advanced aliens, but I think it's an interesting possibility. That could be why the universe isn't full of obviously advanced civilizations: there's something in their nature that makes them not obvious.

Tuesday, November 29, 2016

The Mysterious Interlingua


slashdot |  After a little over a month of learning more languages to translate beyond Spanish, Google's recently announced Neural Machine Translation system has used deep learning to develop its own internal language. TechCrunch reports: GNMT's creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa... could it translate Korean to Japanese, without resorting to English as a bridge between them? They made this helpful gif to illustrate the idea of what they call "zero-shot translation" (it's the orange one). As it turns out -- yes! It produces "reasonable" translations between two languages that it has not explicitly linked in any way. Remember, no English allowed. But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked... does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another? In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google's language and AI boffins think that it has. The paper describing the researchers' work (primarily on efficient multi-language translation but touching on the mysterious interlingua) can be read at arXiv.
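
Mechanically, the system steers one shared model with an artificial token that names the target language (the convention described in the arXiv paper); zero-shot translation is then just a request for a language pair absent from training. A sketch of that data convention, with the model stubbed out since training a real NMT system is out of scope:

    # Sketch of the multilingual convention behind zero-shot translation:
    # one shared model, target language chosen by a prepended token. Only
    # the data convention is illustrated; train_pairs is not actually used
    # to train the stub below.
    train_pairs = [
        ("<2ko> How are you?", "잘 지내세요?"),    # English -> Korean
        ("<2en> 잘 지내세요?", "How are you?"),    # Korean  -> English
        ("<2ja> How are you?", "お元気ですか?"),   # English -> Japanese
        ("<2en> お元気ですか?", "How are you?"),   # Japanese -> English
    ]

    def stub_model(prompt: str) -> str:
        # Placeholder for a trained multilingual NMT model.
        return f"[translation of: {prompt}]"

    def translate(model, text, target_lang):
        """Request any direction by token -- including Korean -> Japanese,
        a pair never seen together in train_pairs (the zero-shot case)."""
        return model(f"<2{target_lang}> {text}")

    print(translate(stub_model, "잘 지내세요?", "ja"))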

Sunday, December 27, 2015

associated consciousness wars...,


realitysandwich |  The esoteric, Hermetic tradition, forced underground by the rise of material, mechanical science, has suffered, I believe, a full-scale, no-holds-barred assault by the left brain and the deficient mode of the mental-rational structure. Its right brain worldview, with its sense of a living, intelligent universe with which we can participate through our imagination, was targeted for attack by its left brain antagonist. It is not the case, as is generally accepted, that the Hermetic/esoteric view, anchored in what it erroneously believed was a profound “ancient wisdom,” was, with the rise of reason, rationality and the Enlightenment, simply superseded by a more correct view. It was not simply a case of “superstition” giving way to “science,” or of dogma dissolving in the face of free thought. That “more correct view,” informed by the proselytizing zeal of a competing form of consciousness, seems to have purposely and ruthlessly set out to consciously obliterate its rival. This was, indeed, a real war, one carried out on the fields of consciousness.

In the early stages of its campaign, the anti-esoteric view enjoyed many victories, and it eventually established itself as the sole arbiter of what is true, and what is “real” knowledge and what is not. But now, some four hundred years after Hermes Trismegistus, the thrice-great sage of magic and the ancient wisdom, was dethroned, his usurper’s position seems threatened – or at least the foundations on which it established its supremacy seem somewhat less secure. In our time, the deficient mode of Gebser’s mental-rational consciousness structure has reached its peak, as it were. Developments like “deconstructionism” and “post-modernism” suggest that the western intellectual tradition has begun to take itself apart, with the left brain’s obsession with analysis turning on itself. Even earlier than these, the rise of the “new physics” of quantum theory and related fields in the early part of the last century showed that the neat nineteenth-century vision of a perfectly explainable mechanical universe is no longer tenable. But there are more pressing concerns. We’ve seen that Gebser in his last days believed that we were heading toward a “global catastrophe,” and the various crises – ecological, environmental, economic, social, political, religious, and cultural – that fill our daily news reports suggest he was not far wrong. Our era has had no shortage of Cassandras, and it would be easy to lump Gebser’s concerns together with other, less eloquent — not to mention researched — jeremiads. But there is a tension, an anxiety about our time that somehow seems to suggest that something will happen, that some dike will burst, and that we will have a flood. As the philosopher Richard Tarnas remarked, “late modern man” — that is, ourselves — is “the incongruously sensitive denizen of an implacable vastness devoid of meaning,” living in a world in which “gigantism and turmoil, excessive noise, speed and complexity dominate the human environment.” Things, many believe, cannot stay this way much longer. As Yeats said long ago, “the center cannot hold.”

Friday, November 20, 2015

syraq's speed freaks, jihad junkies, and captagon cartels...,


foreignpolicy |  In a dank garage in a poor neighborhood in south Beirut, young men are hard at work. Industrial equipment hums in the background as they put on their surgical masks and form assembly lines, unpacking boxes of caffeine and quinine, in powder and liquid form. They have turned the garage into a makeshift illegal drug factory, where they produce the Middle East’s most popular illicit drug: an amphetamine called Captagon.

For at least a decade, the multimillion-dollar Captagon trade has been a fixture of the Middle East’s black markets. It involves everyone from criminal gangs, to Hezbollah, to members of the Saudi royal family. On Oct. 26, Lebanese police arrested Saudi prince Abdel Mohsen Bin Walid Bin Abdulaziz at Beirut’s Rafik Hariri International Airport for allegedly trying to smuggle 40 suitcases full of Captagon (along with some cocaine) to Riyadh aboard a private jet.

Recent years have seen the global trade in illegal Captagon skyrocket, as authorities across the region have observed a major spike in police seizures of the drug. Local law enforcement, Interpol, and the U.N. Office on Drugs and Crime (UNODC) all agree on the catalyst: the conflict in Syria. Captagon now links addicts in the Gulf to Syrian drug lords and to brigades fighting Syrian President Bashar al-Assad, who are funded by the profits and, after years of fighting, are now hooked on the product.

Captagon began as a pharmaceutical-grade amphetamine called fenethylline. Patented by the German pharmaceutical giant Degussa in the 1960s, it was used by doctors to treat a range of disorders, from narcolepsy to depression. But the drug fell out of favor in the 1970s, when the U.S. Food and Drug Administration deemed it too addictive to justify its use, with the World Health Organization following suit and recommending a worldwide ban in the 1980s.

This is where the free market history of Captagon ends and the hazier black market story — one told by drug lords, smugglers, and law enforcement — begins.

Friday, May 15, 2015

offloading cognition onto cognitive technology


arxiv |  "Cognizing" (e.g., thinking, understanding, and knowing) is a mental state. Systems without mental states, such as cognitive technology, can sometimes contribute to human cognition, but that does not make them cognizers. Cognizers can offload some of their cognitive functions onto cognitive technology, thereby extending their performance capacity beyond the limits of their own brain power. Language itself is a form of cognitive technology that allows cognizers to offload some of their cognitive functions onto the brains of other cognizers. Language also extends cognizers' individual and joint performance powers, distributing the load through interactive and collaborative cognition. Reading, writing, print, telecommunications and computing further extend cognizers' capacities. And now the web, with its network of cognizers, digital databases and software agents, all accessible anytime, anywhere, has become our 'Cognitive Commons,' in which distributed cognizers and cognitive technology can interoperate globally with a speed, scope and degree of interactivity inconceivable through local individual cognition alone. And as with language, the cognitive tool par excellence, such technological changes are not merely instrumental and quantitative: they can have profound effects on how we think and encode information, on how we communicate with one another, on our mental states, and on our very nature.  Cognition Distributed

Wednesday, August 13, 2014

microbial colonization...,


thescientist |  Infants start out mostly microbe-free but quickly acquire gut bacteria, which take root in three successive groups. First, Bacilli dominate. Then Gammaproteobacteria surge, followed by Clostridia. But the pace at which these bacterial groups colonize the gastrointestinal tract depends on the time since the babies were conceived, not on when they were born. And time since conception appears to have more of an influence on the infant gut microbiome than other factors, such as exposure to antibiotics, whether babies were born vaginally or by cesarean section, and whether they were breastfed. These are a few of the findings from a survey of 922 fecal samples collected from 58 premature babies, published today (August 11) in PNAS.

“It is an interesting study that provides useful data regarding temporal changes in microbial composition in the infant gut that can be mined further,” Shyamal Peddada, a biostatistician at the National Institute of Environmental Health Sciences who was not involved in the study, wrote in an e-mail to The Scientist.

“I think the paper does a nice job of showing that premature babies develop differently from full-term babies . . . it is not just a function of colonization after birth,” Rob Knight from the University of Colorado, Boulder, told The Scientist in an e-mail. “Differences in gut physiology or in the infant immune system could explain this pattern.”

Researchers at the Washington University School of Medicine in St. Louis embarked on this survey in an effort to better understand the role of the microbiota in the development of gut disorders common in premature infants, such as necrotizing enterocolitis. Without first defining the premature infant gut in the absence of gastrointestinal issues, the researchers struggled to identify potentially pathogenic bacterial patterns.

The researchers collected stool samples each time the babies defecated until they were thirty days old, and sampled every third elimination after that. They then sequenced 16S ribosomal RNA genes to identify the bacterial composition of each sample.
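
Downstream of sequencing, the core tally is simple in outline. The sketch below invents read-level class labels for two samples purely to show the shape of the computation; real composition calls come from classifying 16S rRNA sequences, not hand-typed labels.

    from collections import Counter

    # Invented class assignments for two stool samples, shaped to echo the
    # Bacilli -> Gammaproteobacteria -> Clostridia succession described above.
    samples = {
        "infant_A_day10": ["Bacilli"] * 60
                          + ["Gammaproteobacteria"] * 30 + ["Clostridia"] * 10,
        "infant_A_day40": ["Bacilli"] * 15
                          + ["Gammaproteobacteria"] * 35 + ["Clostridia"] * 50,
    }

    for name, reads in samples.items():
        counts = Counter(reads)
        dominant, n = counts.most_common(1)[0]
        print(f"{name}: dominant class {dominant} ({n / len(reads):.0%})")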

Monday, September 23, 2013

darwin's apple: the evolutionary biology of religion

sonic | The various theories explaining the evolutionary and biological origin of religion do not show comprehensively how religion is adaptive and increases individual fitness. The Split hypothesis proposes the co-evolution of religion and consciousness—that intrinsic religion is a compensating mechanism for higher-order consciousness. It unifies all ritual and religious behaviors under one umbrella and explains how all these behaviors are adaptive and increase evolutionary success. Empirical science is the basis for this inquiry. 

Contemporary scientists such as Daniel Wegner, Antonio Damasio, V.S. Ramachandran, Joseph LeDoux, and John Bargh attest that consciousness, the rational mind, is not solely, or even mostly, the seat of volition as many believe. Humans remain tethered to their biological heritage, which behaviorally is the emotional system as Jaak Panksepp helps reinforce. Consciousness gives humans distinct advantages (culture), but there is no absolute truth accessible solely to consciousness. It has limited inherent intelligence. Consciousness also evolved to be far noisier and distracting than is useful and carries a distinct and significant downside. The limbic system’s reward and pleasure centers are the primary arbiters of behavior and serve homeostasis, the physiological activities that ensure survival and reproductive success. Cognitive biases are the unconscious drivers of “conscious” decision-making and, themselves, are derived from innate emotional predispositions and conditioning. Religion serves to rein in overbearing consciousness and promotes instinctual knowledge. Religion quiets consciousness to let emotions and homeostasis run the show, hence religion is adaptive. 

All religious behaviors including music, art, dance, mythology, and prayer evolved to elicit emotions and suppress or quiet consciousness to various degrees. Contemporary brain and behavioral research show these ritual activities stimulate the brain’s reward systems, and people benefit from them. This would not have happened had they not been evolutionarily adaptive. These ritual behaviors are utilized by psychologists and psychotherapists to help people integrate emotions, which strongly supports their adaptive functions. 

Read the introduction that lays out The Split hypothesis of religion in more detail.

Saturday, January 05, 2013

welcome to 2013: centennial anniversary of the irs and the fed...,

seekingalpha | The IRS was Conceived 100 Years Ago Next Month
On February 3, 1913, Delaware became the 36th state to ratify the proposed 16th Amendment authorizing income taxes. With three-fourths of the 48 states backing the resolution, the 16th Amendment became an official part of the U.S. Constitution on February 25, while Republican William Taft was a lame duck President awaiting the inauguration of Democrat Woodrow Wilson a week later on March 4, 1913.

Six months later, the Revenue Act of 1913 was signed into law on October 13, 1913 authorizing tax rates ranging up to 7% for those earning $500,000 or more. The lowest (1%) income tax rate kicked in for single taxpayers making $3,000 per year or couples making $4,000 or more. Therefore, fewer than 5% of U.S. workers were obligated to pay any income tax at first. Businessmen, proud of their success, showed off their tax bill in bars as if to say "I'm one of the top 5%," a badge of honor in a capitalist economy.
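
Using only the figures quoted above (a $3,000 single exemption, a 1% entry rate, and 7% on income past $500,000), a deliberately simplified marginal calculator looks like this; the real Revenue Act of 1913 interposed several surtax brackets between the 1% and 7% rates, which are omitted here.

    # Simplified 1913 income tax using only the figures quoted above:
    # $3,000 single exemption, 1% entry rate, 7% past $500,000. The actual
    # act had intermediate surtax brackets that are omitted here.
    EXEMPTION = 3_000
    BRACKETS = [(500_000, 0.01), (float("inf"), 0.07)]  # (upper bound, rate)

    def tax_1913(income):
        taxable = max(0.0, income - EXEMPTION)
        tax, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            if taxable > lower:
                tax += (min(taxable, upper) - lower) * rate
                lower = upper
        return tax

    print(tax_1913(10_000))    # 70.0 -> 1% of the $7,000 above the exemption
    print(tax_1913(600_000))   # 11790.0, with the top rate biting past $500k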

World War I changed all that. By 1917, President Woodrow Wilson raised the top tax rates tenfold.

In 1916, President Wilson campaigned against joining the "war to end all wars," but just one month after his second inauguration, he pushed us into World War I and used the income tax to fund that war effort. In 1917, the top income tax rate grew nearly tenfold, from 7% to 67% on top income earners. The new income tax tool was powerful enough to fund America's first entry into a major European conflict.

Unlike most politicians, who tend to mask their views in patriotic pieties, Wilson clearly stated the pragmatism of his politics much earlier in his 1889 book The State:
We are not bound to adhere to the doctrines held by the signers of the Declaration of Independence … Government does now whatever experience permits or the times demand.
Wow! Those last 10 words form a chilling expression of raw unprincipled power. They are also applicable to today's fiscal cliff debate: "Whatever experience permits or the times demand" is a fair description of raising tax rates to fund runaway spending needs.

The Federal Reserve was also Born a Century Ago, in 1913
In a parallel track, the Federal Reserve was conceived and born a century ago this year. On March 31, 1913, J.P. Morgan, America's unofficial one-man central bank, died in his sleep in Rome. Like any good banking man, he died on the closing day of a financial quarter, handing new President Woodrow Wilson the opening to create a central bank. After a close call in the Panic of 1907, J.P. Morgan, then entering his 70s, told the nation he was retiring from the central banking business, saying that the next panic would sink him - and the country - even if other syndicate members joined him (as they usually did).

J.P. Morgan's death led, almost nine months later, to the centralized solution everyone seemed to favor. At 6:02 pm on December 23, 1913, the Federal Reserve Act, authorizing the creation of the Federal Reserve, was signed into law by President Woodrow Wilson using four golden pens in a lightly-attended ceremony during the Christmas break. Like income taxes, the Fed quickly grew quite powerful.

The Federal Reserve took shape in stages, throughout 1914, with an official launch date of November 16, 1914. Ironically, the Fed was formed for the express purpose of avoiding the financial panics so painful in recent memory - 1893 and 1907 - but the Fed merely continued the same kind of boom-bust cycle of panics, ranging from a short, sharp shock in 1920-21, to the long-term Great Depression of 1929 to 1941.
In particular, the Fed fueled a huge wave of inflation after providing liquidity for World War I spending. That was followed by a sharp cutoff in liquidity and a "flash" depression in 1920. The Fed then fueled too much liquidity throughout the 1920s, leading to a real estate and stock market crash, followed by a sharp (33%) cut in liquidity between 1929 and 1932. The Fed just couldn't seem to find a balance.

The early Fed was quite clear in its mission. In its 1923 Annual Report, the Federal Reserve described its role clearly:
The Federal Reserve banks are…the source to which the member banks turn when the demands of the business community have outrun their own unaided resources.
This is why the Fed increased credit 61% in the 1920s, from $45.3 billion on June 30, 1921 to over $73 billion in July 1929.

The Fed's inflationary monetary policies led to a nearly 99% decline in the purchasing power of the U.S. dollar in gold terms. In 1913, gold traded for $20.67 per ounce vs. around $1,690 today. Our official cost of living increase since 1913 is +2,261%, meaning that an item costing $1 in 1913 costs $23.61 now. The Fed's policies have also led to a series of stock market booms and busts over the century, raising the question of whether the Fed has been any more effective than J.P. Morgan and his big-banker syndicate.
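
Both depreciation figures in that paragraph follow directly from its own quoted numbers; a quick check:

    # Reproduce the paragraph's two depreciation figures from its numbers.
    gold_1913, gold_now = 20.67, 1_690.0   # dollars per ounce, as quoted
    cpi_pct_increase = 2_261               # official rise since 1913, percent

    print(f"decline in gold terms: {1 - gold_1913 / gold_now:.1%}")  # ~98.8%
    multiplier = 1 + cpi_pct_increase / 100
    print(f"$1 in 1913 buys what ${multiplier:.2f} buys today")      # $23.61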

Saturday, December 29, 2012

the future status of modernity's chief cognitive object

ala | As e-books and the emerging digital library occupy today’s headlines, there appears to be a tacit consensus emerging from the discourse among academics, journalists, and librarians about the future of the book. That vision of the future, as portrayed in the trade literature and popular press, consigns this centuries-old technology to obsolescence, as if it were merely another information format.

This report explores alternative scenarios, where the technology of the printed book does not disappear or become extinct, but occupies a different position in a technological ecology characterized by the proliferation of e-books and digital libraries. The printed book has for centuries been the chief cognitive object of the library. The future status of that object should be of interest to all librarians, especially as they plan for the future; therefore, this report intentionally favors the continued existence of the printed book as a viable technology.

The goal of this report is to draw attention to our assumptions about the future of the book, assumptions that are grounded in our current e-book zeitgeist. Strategic decisions are often based on underlying—and often unexamined—assumptions about the larger environment in which those decisions will be carried out. The future often turns out not as expected because we do not entertain alternative possibilities and base strategic thinking and actions on one specific belief about the future. Much of our current thinking about the future of libraries appears based on the assumption that printed books will give way to e-books and the digital transmission of textual objects.

This research report presents four scenarios so that academic and research librarians may expand their thinking about the future to include a richer set of environmental conditions:

  1. Consensus: a scenario where e-books overwhelm and make obsolete the printed book
  2. Nostalgic: a scenario where printed books are still highly in demand and e-books have proven to be a fad
  3. Privatization of the book: a scenario where printed books are vestigial to an ecology dominated by e-books
  4. Printed books thrive: a scenario where e-books and printed books exist in balance and have equal importance
Scenario thinking exercises can help to develop situational awareness. Mica R. Endsley defines situational awareness as “the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.”

 Futuring is an exercise in expanding situational awareness by developing greater comprehension of the elements that make up the larger environment of libraries—indeed, viewing the library as a complex dynamic system affected not only by operational elements such as collections and user services but also by political, economic, social, and technological elements of the environment within which the library is situated. Beyond comprehending these elements and understanding the complex ways in which they interact, academic and research librarians must also be able to envision the future status of that system. We assume that the complex system that is the library will itself undergo change, and librarians must be able to anticipate those changes. Thus, using the language of situational awareness, scenarios should be viewed as one effort to describe a future state of the system in which decisions will need to be carried out. As academic and research librarians undertake strategic planning for their organizations, awareness of the larger environment and understanding the potential for changes in that environment will prove critical to improved decision making.

After reviewing each of the scenarios, those involved in strategic decision making should then consider their own plans—and their budgets— with respect to these questions:
  • Which state of the system do you believe best describes the environment in which your library’s strategic thinking and planning will unfold?
  • Which of these models of the future currently guides your strategic thinking and actions regarding printed books?

the impact of technology on the recording industry

jakobgoesblogging | Enablers are the development of advanced hardware and software products, as well as the increased importance of the internet and especially social networks. During my research, I found a number of interesting statistics about recent changes in the music industry. Based on this data I will try to analyze each step of the value chain to explore how new technologies and the change in customer behavior affect the companies’ business models as well as the music industry as a whole.

Approach: After a short description of the music industry as it was some years ago, I will have a look at the most recent trends. Based on that I will point out the changes in the business model of record labels and identify some important areas in which companies have to act in order to stay competitive. To make this analysis more practical, I want to include the income statement of Warner Music Group, a leading record label, to show how it is affected by recent industry changes.

The typical value chain in the music industry shows five steps. It starts with creation of the content by the artist. Traditionally, the artist tried to raise awareness by sending demo tapes to the record companies and participating in band contests. The artist and repertoire (A&R) unit is the division of a record label that is responsible for talent scouting, contracting and overseeing the artistic development. Once the contract is signed, the record company takes care of the financing and records the songs. The next step is the promotion and PR of the album, done by the record company. The distribution traditionally was done through merchants and retail stores. Most of them were independent, but there were also big retail chains owned by the major record labels.

Monday, November 19, 2012

currencies of the future...,

lfb | Banking industry insiders are upset with Amex and Wal-Mart, which are offering prepaid cards, because these prepaid accounts would amount to uninsured deposits, according to Andrew Kahr, who wrote a scathing piece on the issue for American Banker.
Kahr rips into the idea with this analogy:
“To provide even lower ‘discount prices,’ should Wal-Mart rent decaying buildings that don’t satisfy local fire laws and building codes — and offer still better deals to consumers? And why should Walmart have to honor the national minimum wage law, any more than Amex honors state banking statutes? With Bluebird, Amex can already violate both the Bank Holding Company Act and many state banking statutes.”
Kahr is implying that regulated fractionalized banking is safe and sound, while prepaid cards provided by huge companies like Amex and Wal-Mart are a shady scheme set up to rip off consumers. The fact is, in the case of IndyMac, panicked customers forced regulators to close the S&L by withdrawing only 7% of the huge S&L’s deposits. It was much the same for WaMu and Wachovia, whose sales regulators engineered while those banks were being run on. Bitcoin supporters, unlike the general public, are well aware of fractionalized banking’s fragility.
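
The fragility being pointed at is mechanical and easy to put in miniature. The 5% liquid-reserve layer below is an assumption chosen for illustration; only the 7% withdrawal share comes from the IndyMac episode above.

    # Toy fractional-reserve liquidity squeeze. The 5% liquid layer is an
    # assumption; the 7% withdrawal share echoes the IndyMac episode above.
    deposits = 19e9                      # illustrative deposit base, dollars
    liquid_reserves = 0.05 * deposits    # thin layer of cash-like assets
    withdrawals = 0.07 * deposits        # panicked depositors demand 7%

    shortfall = withdrawals - liquid_reserves
    print(f"demanded ${withdrawals / 1e9:.2f}B vs "
          f"${liquid_reserves / 1e9:.2f}B on hand")
    print(f"shortfall ${shortfall / 1e9:.2f}B -> fire sales or closure")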

Maybe what the banking industry is really afraid of is the Amexes and Wal-Marts of the world creating their own currencies and banking systems. Wal-Mart has tried to get approval to open a bank for years, and bankers have successfully stopped the retail giant from competing with them.

However, prepaid credit cards might be just the first step toward Wal-Mart issuing its own currency — Marts — that might initially be used only for purchases in Wal-Mart stores. But over time, it’s not hard to imagine Marts being traded all over town and easily converted to dollars, pesos, yuan, or other currencies traded where Wal-Mart has stores. Fist tap Dale.

Monday, August 06, 2012

religious conservatism: an evolutionarily evoked disease-avoidance strategy

tandfonline | Issues of purity and symbolic cleansing (e.g., baptism) play an important role in most religions, especially Christianity. The purpose of the current research was to provide an evolutionary framework for understanding the role of disgust in religiosity, which may help elucidate the relationship between religious conservatism and non-proscribed prejudice (e.g., prejudice toward sexual minorities). The behavioral immune system (BIS) is a cluster of psychological mechanisms (e.g., disgust) that encourage disease-avoidance (Schaller, 2006). Out-group members have historically been a source of contamination. Consequently, evidence suggests that the BIS predicts negative attitudes toward out-groups (Faulkner, Schaller, Park, & Duncan, 2004). The purpose of the current research is to investigate whether religious conservatism mediates the relationship between the BIS and prejudice toward sexual minorities. Study 1 demonstrated that the disease-avoidant components of disgust (e.g., sexual and pathogen disgust), but not moral disgust, were positively correlated with religious conservatism. Additionally, the data supported a model in which religious conservatism mediated the relationship between disgust and prejudice toward homosexuals. In Study 2, the correlations and mediation model were replicated with a more diverse sample and different measures. The current research suggests that religious conservatism may be in part an evolutionarily evoked disease-avoidance strategy.
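
The mediation claim in the abstract follows a familiar regression logic: the disgust-to-prejudice path should shrink once religious conservatism is controlled for. A synthetic-data sketch of that logic (not the authors' data or their exact estimator):

    import numpy as np

    # Synthetic data wired so conservatism mediates disgust -> prejudice.
    rng = np.random.default_rng(0)
    n = 1_000
    disgust = rng.normal(size=n)
    conservatism = 0.6 * disgust + rng.normal(scale=0.8, size=n)
    prejudice = 0.5 * conservatism + rng.normal(scale=0.8, size=n)

    def slopes(y, *xs):
        """OLS slopes of y on an intercept plus the given predictors."""
        X = np.column_stack([np.ones(len(y)), *xs])
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    c = slopes(prejudice, disgust)[0]                      # total effect
    c_prime = slopes(prejudice, disgust, conservatism)[0]  # direct effect
    print(f"total c = {c:.2f}, direct c' = {c_prime:.2f}")
    # c' collapsing toward zero relative to c is the mediation signature.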

ethnic nepotism: its proponents don't just study it, they practice it

wikipedia | In sociology, the term ethnic nepotism describes a human tendency for in-group bias or in-group favouritism, expressed as nepotism toward people of the same ethnicity within a multi-ethnic society.

The theory views ethnocentrism and racism as nepotism toward extended kin and an extension of kin selection. In other words, ethnic nepotism points toward a biological basis for the phenomenon of people preferring others of the same ethnicity or race; it explains the tendency of humans to favor members of their own racial group by postulating that all animals evolve toward being more altruistic toward kin in order to propagate more copies of their common genes.

"The myth of common descent", proposed by many social scientists as a prominent ethnic marker, is in the view of the theory's proponents often not a myth at all: "Ethnicity is defined by common descent and maintained by endogamy".[2]

Frank Salter notes that altruism toward one's co-ethnics serves to guard one's genetic interests:
Hamilton's 1975 model of a genetic basis for tribal altruism shows that it is theoretically possible to defend ethnic genetic interests in an adaptive manner, even when the altruism entails self-sacrifice. He argued mathematically that an act of altruism directed towards the tribe was adaptive if it protected the aggregate of distant relatives in the tribe. In sexually reproducing species a population's genetic isolation leads to rising levels of interrelatedness of its members and thus makes greater altruism adaptive. Low levels of immigration between tribes allow growing relatedness of tribal members, which in turn permits selection of altruistic acts directed at tribal members, but only if these acts "actually aid in group fitness in some way...." Closely related individuals are less likely to free ride and more likely to invest in and thus strengthen the group as a whole, improving the fitness of its members.[3]
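
The mathematics Salter is invoking is Hamilton's rule, the standard condition for altruism to be favored by selection, stated here in textbook form rather than as a formula from Salter's own text:

    r\,b > c

Here b is the fitness benefit to the recipient, c the cost to the altruist, and r the coefficient of relatedness between them. Low immigration raises the average within-group r, so costlier acts of tribal altruism can still satisfy the inequality; that is the sense in which growing relatedness makes greater altruism adaptive.
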
Regarding how this translates into politics and why homogeneous societies are more altruistic, Frank Salter writes:
Relatively homogeneous societies invest more in public goods, indicating a higher level of public altruism. For example, the degree of ethnic homogeneity correlates with the government's share of gross domestic product as well as the average wealth of citizens. Case studies of the United States, Africa and South-East Asia find that multi-ethnic societies are less charitable and less able to cooperate to develop public infrastructure. Moscow beggars receive more gifts from fellow ethnics than from other ethnics. A recent multi-city study of municipal spending on public goods in the United States found that ethnically or racially diverse cities spend a smaller portion of their budgets and less per capita on public services than do the more homogenous cities.

What Is France To Do With The Thousands Of Soldiers Expelled From Africa?

SCF  |    Russian President Vladimir Putin was spot-on this week in his observation about why France’s Emmanuel Macron is strutting around ...