Showing posts with label neuromancy.

Thursday, March 10, 2016

most theories of consciousness are worse than wrong...,


theatlantic |  According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.

The phlegm theory has more problems than just a few factual errors. After all, suppose you had a beaker of phlegm and injected it into a person. What exactly is the mechanism that leads to a lazy personality? The proposal resonates seductively with our intuitions and biases, but it doesn’t explain anything.

In the modern age we can chuckle over medieval naiveté, but we often suffer from similar conceptual confusions. We have our share of phlegm theories, which flatter our intuitions while explaining nothing. They’re compelling, they often convince, but at a deeper level they’re empty.

One corner of science where phlegm theories proliferate is the cognitive neuroscience of consciousness. The brain is a machine that processes information, yet somehow we also have a conscious experience of at least some of that information. How is that possible? What is subjective experience? It’s one of the most important questions in science, possibly the most important, the deepest way of asking: What are we? Yet many of the current proposals, even some that are deep and subtle, are phlegm theories.

the tides of mind

thescientist |  Many scientists who study the mind live in fantasyland. They ought to move back to reality: neuroscientists, psychologists, computer scientists pursuing artificial intelligence, and the philosophers of mind who are, in many cases, the sharpest thinkers in the room.
The mind makes us rational. That mind is the one we choose to study. When we study sleep or dreaming, we isolate them first—as the specialized topics they are. But, as I argue in my new book, The Tides of Mind, we will never reach a deep understanding of mind unless we start with an integrated view, stretching from rational, methodical thought to nightmares.
Integrating dreaming with the rest of mind is something like being asked to assemble a car from a large pile of metal, plastic, rubber, glass, and an ocelot. Dreaming is hallucination, centering on a radically different self from our waking selves, within unreal settings and stories. Dreams can please or scare us far more vividly than our ordinary thoughts. And they are so slippery, so hard to grasp, that we start losing them the moment we wake up.
But dreaming fits easily into the big picture of mind; and we will make no basic progress on understanding the mind until we see how. Dreaming is the endpoint of the spectrum of consciousness, the smooth progression from one type of consciousness to the next, that we each experience daily.

Sunday, December 20, 2015

cathedralized WEIRD-ness will shape the algorithmic baselines for normalcy



telegraph |  Yes, we now live in a world where your phone might observe you to help assess your mental health. If you don’t find that prospect disturbing, you’re either fantastically trusting of companies and governments or you haven’t thought about it enough.

But that feeling of unease should not determine our response to technology in mental health. In fact, we should embrace and encourage the tech giants as they seek to chart the mind and its frailties, albeit on the condition that we can overcome the enormous challenge of devising rules and regulations protecting privacy and consent.

Because, simply, existing healthcare systems are failing and will continue to fail on mental health. Even if the current model of funding the NHS was sustainable, the stigma that prevents us discussing mental health problems would ensure their prevention and treatment got a disproportionately small slice of the pie.

We pour ever more billions into dealing with the worst problems of physical health, and with considerable success. Death rates from cancer and heart disease have fallen markedly over the last 40 years. Over the same period, suicide rates have gone up. 

Even as the NHS budget grows, NHS trusts’ spending on mental health is falling. If someone with cancer went untreated, we’d say it was a scandal. Some estimates suggest one in five people who need “talking therapies” don’t get them. In a rare bit of enlightened thinking, some NHS trusts are supporting Big White Wall, an online service where people can anonymously report stress, anxiety and depression, take simple clinical tests and talk to therapists.

Technology will never be a panacea for mental illnesses, or our social failure to face up to them. But anything that makes them cheaper and easier and more mundane to deal with should be encouraged.

If you think the idea of Google assessing your state of mind and your phone monitoring you for depression is worrying, you’re right. But what’s more worrying is that allowing these things is the least bad option on mental health. Fist tap Arnach.

Thursday, November 05, 2015

broad spectrum bioweapons exploiting narrowband degeneracy as delivery system...,


independent |  The rise of 'chemsex' - sex under the influence of illegal drugs - is putting people at risk of HIV and other STIs, health experts have warned.

People who are taking GHB, GBL and crystal meth to enhance sexual pleasure and reduce inhibitions are jeopardising both their sexual and mental health, a study found.

Its growing popularity, particularly among gay men, has led doctors to warn that rates of HIV and other sexually transmitted diseases are rising rapidly.

Sex during the illegal drug-fuelled sessions is often unprotected - with those having chemsex reporting an average five sexual partners per session, according to the British Medical Journal. 

'These drugs are often used in combination to facilitate sexual sessions lasting several hours or days with multiple sexual partners,' it reported.

'Mephedrone and crystal meth are physiological stimulants, increasing heart rate and blood pressure, as well as triggering euphoria and sexual arousal.

'GHB (and its precursor GBL) is a powerful psychological disinhibitor and also a mild anaesthetic.'

The experts said the increase in chemsex was also putting people at risk of serious mental health problems caused by drug dependencies.

The authors of the report said there were many barriers for people who want to get help, including the shame and stigma often associated with drug use and ignorance of available drug services.

Some services are now developing specific chemsex and 'party drug' clinics, with specialist mental health support and help for withdrawing from the drugs.

But they warned that users often describe 'losing days' - not sleeping or eating for up to 72 hours - which 'may harm their general health'.

original wudan at war with the west - on an undeclared front - don zaluchi style...,


miamiherald |  The drug deaths of the three young men this year shared a common thread, one that ties them to scores of other overdose, suicide, accident and even murder victims in Miami-Dade and Broward counties: The synthetic substances medical examiners found in their bodies most likely arrived through the China Pipeline, which delivers illegal drugs, sold as bulk research chemicals on the Internet, to stateside dealers through the mail.

Authorities are scrambling to shut down the pipeline, but they acknowledge that it remains the primary source of an array of dangerous so-called designer drugs flowing into South Florida. The grim result: a rising number of addicts, emergency room visits and deaths — particularly related to newer, more potent synthetics like the infamous flakka and the lesser-known, but even more lethal, fentanyl.

“This is Breaking Bad gone wild,” said George Hime, assistant director of the Miami-Dade County Medical Examiner’s toxicology lab. “There is no quality control. They don’t even know what they’ve created. Is it something that can cause pleasure for a short period of time? Yes. But it could also kill you.”

Flakka has run rampant among the homeless and in poor corners of Broward, offering a cheap and powerful rush aptly described as “$5 insanity.” Flakka, street slang for a chemical called alpha-PVP, induced one man up the coast in Brevard County to strip, proclaim himself the Norse god Thor and try to have sex with a tree. Two other men, suffering a serious flakka-fueled lapse in judgment, tried to break into Fort Lauderdale police headquarters.

Fentanyl users haven’t produced such attention-grabbing crazy rages, but the drug has quietly proven even deadlier in South Florida, according to a Miami Herald review of medical examiner records in both Miami-Dade and Broward counties. A fast-acting painkiller 50 times more potent than heroin, it has been used as a surgical analgesic for decades.

But investigators believe that underground labs in China fueling the synthetics pipeline have concocted illegal fentanyl as well as chemically tweaked “analogs” that are typically sold as heroin or mixed with it.

“Fentanyl and its analogs are often laced in heroin and are extremely dangerous, more so than alpha-PVP,” said Diane Boland, director of the Miami-Dade Medical Examiner’s toxicology lab. “People are dying at an alarming rate, especially those who believe they are using heroin when it’s in fact fentanyl. A small dose is enough to cause death.”


Sunday, September 13, 2015

understanding technology is vital to understanding contemporary policy decisions...,


Take a look at how detailed this photo becomes when you enlarge it.

If you ever wondered why drones are so successful in hitting the right target, wonder no more.

Rather hard to disappear in a crowd nowadays… The Man can see you everywhere. Pick a small part of the crowd, place your computer’s cursor in the mass of people, click a couple of times, wait, click a few more times, and see how clear each individual face becomes.

This picture was taken with a 70,000 x 30,000 pixel camera (2,100 megapixels). These cameras are not sold to the general public, of course.

wikipedia |  A gigapixel image is a digital image bitmap composed of one billion (10⁹) pixels (picture elements), 1000 times the information captured by a 1 megapixel digital camera. Current technology for creating such very high-resolution images usually involves either making mosaics of a large number of high-resolution digital photographs or using a film negative as large as 12" × 9" (30 cm × 23 cm) up to 18" × 9" (46 cm × 23 cm), which is then scanned with a high-end large-format film scanner with at least 3000 dpi resolution. Only a few cameras are capable of creating a gigapixel image in a single sweep of a scene, such as the Pan-STARRS PS1 and the Gigapxl Camera.[1][2]
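
As a sanity check on the numbers quoted above, the arithmetic is easy to run. Below is a minimal sketch; the ~20-megapixel frame size used for the mosaic comparison is an assumed figure for a typical consumer camera, not something stated in the excerpt.

```python
# Pixel arithmetic for the crowd photo described above.
# Assumption: an "ordinary" consumer camera frame of 5472 x 3648 pixels (~20 MP);
# only the 70,000 x 30,000 figure comes from the text.

def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count, expressed in megapixels."""
    return width_px * height_px / 1_000_000

crowd = megapixels(70_000, 30_000)      # 2100.0 MP, i.e. 2.1 gigapixels
frame = megapixels(5_472, 3_648)        # ~20 MP (assumed consumer camera)

print(f"Crowd photo: {crowd:,.0f} MP ({crowd / 1000:.1f} gigapixels)")
print(f"Ordinary frames needed for an equivalent mosaic: {crowd / frame:,.0f}")
```

Which is why, as the excerpt notes, such images are usually stitched together from a great many ordinary photographs rather than captured in a single exposure.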

A gigamacro image is a gigapixel image which is a close up or macro image.
Gigapixel images may be of particular interest to the following:

Sunday, August 23, 2015

Strengthening Human Adaptive Reasoning and Problem-Solving (SHARP)


iarpa.gov |  Adaptive reasoning and problem-solving are increasingly valuable for information-oriented workplaces, where inferences from sparse, voluminous, or conflicting data must be drawn, validated, and communicated—often under stressful, time-sensitive conditions. In such contexts, one’s ability to accurately update one’s mental models, make valid conclusions, and effectively deploy attention and other cognitive resources is critical. Accordingly, optimizing an analyst’s adaptive reasoning could pay large dividends in the quality of their analytic conclusions and information products. Given adaptive reasoning tests’ high predictive value for performance and productivity, proven methods for strengthening adaptive reasoning and problem-solving could have significant benefits for society in general, as well as for individuals whose work is both analytical and cognitively demanding. Intriguingly, some recent research suggests that these capabilities may be strengthened, even among high-performing adults. Despite some promising results, however, there are methodological and practical shortcomings that currently limit the direct applicability of this research for the Intelligence Community.

Therefore, the Strengthening Human Adaptive Reasoning and Problem-Solving (SHARP) Program is seeking to fund rigorous, high-quality research to address these limitations and advance the science on optimizing human adaptive reasoning and problem-solving. The goal of the program is to test and validate interventions that have the potential to significantly improve these capabilities, leading to improvements in performance for high-performing adults in information-rich environments.

The research funded in this program will use innovative and promising approaches from a variety of fields with an emphasis on collecting data from a set of cognitive, behavioral, and biological outcome measures in order to determine convergent validity of successful approaches. It is anticipated that successful teams will be multidisciplinary, and may include (but not be limited to) research expertise in cognitive and behavioral neuroscience; psychology and psychometrics; human physiology and neurophysiology; structural and functional imaging; molecular biology and genetics; human subjects research design, methodology, and regulations; mathematical statistics and modeling; and data visualization and analytics.

transcranial direct current stimulation


newyorker |  “What does this part of the brain do, again?” I asked, pointing to the electrode on my right temple.

“That’s the right inferior frontal cortex,” said Vince Clark, the director of the University of New Mexico Psychology Clinical Neuroscience Center, in Albuquerque. “It does a lot of things. It evaluates rules. People get thrown in jail when it’s impaired. It might help solve math problems. You can’t really isolate what it does. It has emotional components.”

It was early December, and night was falling, though it was barely five. The shadows were getting longer in the lab. My legs felt unusually calm. Something somewhere was buzzing. Outside the window, a tree stood black against the deepening sky.

“Verbal people tend to get really quiet,” Clark said softly. “That’s one effect we noticed. And it can do funny things with your perception of time.”

The device administering the current started to beep, and I saw that twenty minutes had passed. As the current returned to zero, I felt a slight burning under the electrodes—both the one on my right temple and another, on my left arm. Clark pressed some buttons, trying to get the beeping to stop. Finally, he popped out the battery, the nine-volt rectangular kind.

This was my first experience of transcranial direct-current stimulation, or tDCS—a portable, cheap, low-tech procedure that involves sending a low electric current (up to two milliamps) to the brain. Research into tDCS is in its early stages. A number of studies suggest that it may improve learning, vigilance, intelligence, and working memory, as well as relieve chronic pain and the symptoms of depression, fibromyalgia, Parkinson’s, and schizophrenia. However, the studies have been so small and heterogeneous that meta-analyses have failed to prove any conclusive effects, and long-term risks have not been established. The treatment has yet to receive F.D.A. approval, although a few hospitals, including Beth Israel, in New York, and Beth Israel Deaconess, in Boston, have used it to treat chronic pain and depression.
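
For a sense of scale, the two-milliamp ceiling mentioned above works out to a tiny current density at the scalp. Here is a minimal sketch of the arithmetic, assuming a 5 cm × 7 cm sponge electrode (a commonly used size in the tDCS literature, not a figure from the article):

```python
# Rough current-density arithmetic for tDCS as described above.
# Assumption: a 5 cm x 7 cm sponge electrode; the 2 mA maximum is from the text.

current_mA = 2.0                # maximum current mentioned in the article
electrode_area_cm2 = 5 * 7      # assumed sponge-electrode area (35 cm^2)

density = current_mA / electrode_area_cm2
print(f"Current density at the electrode: {density:.3f} mA/cm^2")   # ~0.057 mA/cm^2
```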

“What’s the plan now?” Clark asked, unhooking the electrodes. I could see he was ready to answer more questions. But, as warned, I felt almost completely unable to speak. It wasn’t like grasping for words; it was like no longer knowing what words were good for.

Clark offered to drive me back to my hotel. Everything was mesmerizing: a dumpster in the rear-view camera, the wide roads, the Route 66 signs, the Land of Enchantment license plates.

After some effort, I managed to ask about a paper I’d read regarding the use of tDCS to treat tinnitus. My father has tinnitus; the ringing in his ears is so loud it wakes him up at night. I had heard that some people with tinnitus were helped by earplugs, but my father wasn’t, so where in the head was tinnitus, and were there different kinds?

“There are different kinds,” Clark said. “Sometimes, there’s a real noise. It’s rare, but it happens with dogs.” He told me a story about a dog with this rare affliction. When a microphone was placed in its ear, everyone could hear a ringing tone—the result, it turned out, of an oversensitive tympanic membrane. “The poor dog,” he said.

We drove the rest of the way in silence.

Wednesday, July 29, 2015

of course no laws were broken (except the "peasant, play at your level!" law protocol...,)


courant |  With the viral video of a gun-firing drone making national headlines, Connecticut advocates are re-energized to pass a law next year that would ban such weapons.

The state Senate unanimously passed a bill this year that would have banned weapons on drones used by both the police and the general public. But the bill never came to a vote in the state House of Representatives as time ran out in the legislative session. Advocates say it will be a top priority when the new legislative session begins in February.

Lawmakers have been studying the issue for the past two years, including forming a task force to better understand the new technology.

The latest interest spiked when 18-year-old Austin Haughwout of Clinton released a video that showed a drone carrying a gun firing bullets — which has been shown on television news shows and viewed more than 2.8 million times on YouTube. He was not charged in the case after police said he had not violated any state laws.

"We do not want to see drones with weapons on them, as in this incident, where we can't take any legal action," said Cromwell chief Anthony Salvatore, who has represented the Connecticut police chiefs at the state Capitol for the past two decades. "From law enforcement's perspective, now, probably more than ever, we need to bring the bill back and address this type of situation."

Salvatore has been studying the issue for the past two years, and police have said from the start that they wanted to ban weapons and bombs from drones. But Salvatore said he was surprised at the speed of the change in the technology.

"I didn't think, this soon, that we would have somebody to this extent putting a semi-automatic pistol on a drone," Salvatore said Friday in an interview. "It certainly causes us great concern that it was done, and there were no laws broken. The whole thing causes law enforcement great concern."
He added, "Outside of the military, I cannot see any beneficial use. You wouldn't hunt this way. It's not something I would endorse."

David J. McGuire, a staff attorney for the American Civil Liberties Union of Connecticut, and others said that legislators were so tied up with the last-minute scramble on the two-year, $40 billion budget that they never debated the drone bill in the House.

"It essentially ran out of time," McGuire said. "The dysfunction of the legislature got to it. ... Everyone was expecting it to pass. It had a lot of momentum."

The video has resharpened the spotlight on the issue.

Friday, February 20, 2015

biology is technology, time is precious, and stupid only gets in the way...,


H+ |  DARPA, the Defense Advanced Research Projects Agency, is perhaps best known for its role as progenitor of computer networking and the Internet. Formed in the wake of the Soviet Union’s surprise launch of Sputnik, DARPA’s objective was to ensure that the United States would avoid technological surprises in the future. This role was later expanded to causing technological surprises as well.

And although DARPA is and has been the leading source of funding for artificial intelligence and a number of other transhumanist projects, it has been missing in action for a while. Nothing DARPA has worked on since then seems to have had the societal impact of the invention of the Internet. But that is about to change.

The current director of DARPA is Dr. Arati Prabhakar. She is the second female director of the organization, following the previous and controversial director Regina Dugan who left the government to work at Google. The return to big visions and big adventures was apparent and in stark contrast to Dugan’s leadership of the organization.

Quoted in WIRED, Dugan had, for example, stated that “There is a time and a place for daydreaming. But it is not at DARPA,” and she told a congressional panel in March 2011, “Darpa is not the place of dreamlike musings or fantasies, not a place for self-indulging in wishes and hopes. DARPA is a place of doing.”

Those days are gone. DARPA’s new vision is simply to revolutionize the human situation and it is fully transhumanist in its approach.

The Biological Technologies Office or BTO was announced with little fanfare in the spring of 2014. This announcement didn’t get that much attention, perhaps because the press release announcing the BTO was published on April Fool’s Day.

But DARPA is determined to turn that around, and to help make that happen, they held a two-day event in the Silicon Valley area to facilitate and communicate about radical changes ahead in the area of biotechnologies. Invitees included some of the top biotechnology scientists in the world. And the audience was a mixed group of scientists, engineers, inventors, investors, and futurists, along with a handful of government contractors and military personnel.

Wednesday, November 19, 2014

you know who you are...,


WaPo |  Some people just can't seem to keep a beat.

You know the ones: They seem to be swaying to their own music or clapping along to a beat only they can hear. You may even think that describes you.

The majority of humans, however, do this very well. We clap, dance, march in unison with few problems; that ability is part of what sets us apart from other animals.

But it is true that rhythm — specifically, coordinating your movement with something you hear — doesn't come naturally to some people. Those people represent a very small sliver of the population and have a real disorder called "beat deafness."

Unfortunately, your difficulty dancing or keeping time in band class probably doesn't quite qualify.
A new study by McGill University researchers looked more closely at what might be going on with "beat deaf" individuals, and the findings may shed light on why some people seem to be rhythm masters while others struggle.

processing structure in language and music


springer |  The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674–681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.

Tuesday, November 11, 2014

reflective elaborations on intuitions - what sort of beliefs are religious beliefs?


wustl.edu |  Religious beliefs are, from a cognitive standpoint, a puzzling phenomenon. They are not empirically motivated and often contradict the believers’ own assumption that the world obeys a set of stable rules. They are also apparently very different from one cultural group to another. Assuming that we have cognitive systems because these provide us with reliable information to navigate our environment, it would seem that being strongly committed to hugely variable, non-empirical beliefs is wasteful if not downright damaging (McKay & Dennett, 2009). While some evolutionary anthropologists and psychologists conjecture that such beliefs may actually have adaptive advantages (Bulbulia, 2004; Irons, 2001), others see them mostly as by-products of other, adaptive cognitive functions (Boyer, 1994b, 2001). This debate is orthogonal to the question, How do such beliefs occur in human minds? One possibility is that religious representations consist in post-hoc explicit elaborations on common intuitions (Boyer, 1994a, 2001). In this perspective, beliefs about ancestors are parasitic on intuitions about persons, as notions of contagious magic are on intuitions about pathogens, and beliefs about the afterworld are on intuitions about dead people (Boyer, 2001; Pyysiainen, 2001). But, if religious beliefs are derivative, what sorts of “beliefs” are they, and how do they get triggered and sustained in cognitive systems?

Monday, November 10, 2014

which cheek did jesus turn?


tandfonline |  In portraiture, subjects are mostly depicted with a greater portion of the left side of their face (left hemiface) facing the viewer. This bias may be induced by the right hemisphere's dominance for emotional expression and agency. Since negative emotions are particularly portrayed by the left hemiface, and since asymmetrical hemispheric activation may induce alterations of spatial attention and action-intention, we posited that paintings of the painful and cruel crucifixion of Jesus would be more likely to show his left hemiface than observed in portraits of other people. By analyzing depictions of Jesus's crucifixion from book and art gallery sources, we determined a significantly greater percent of these crucifixion pictures showed the left hemiface of Jesus facing the viewer than found in other portraits. In addition to the facial expression and hemispatial attention-intention hypotheses, there are other biblical explanations that may account for this strong bias, and these alternatives will have to be explored in future research.

In portraits, most subjects are depicted with their head rotated rightward, with more of the left than right side of the subject's face being shown. For example, in the largest study of facial portraiture, McManus and Humphrey (1973) studied 1474 portraits and found a 60% bias to portray a greater portion of the subjects’ left than right hemiface. Nicholls, Clode, Wood, and Wood (1999) found the same left hemiface bias even when accounting for the handedness of the painter.
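
To get a feel for how far the reported 60% figure sits from chance, a quick binomial check helps. This is a minimal sketch under the assumption that the left-facing count is exactly 60% of the 1474 portraits, since the exact tally is not given here:

```python
# Normal-approximation check of the McManus and Humphrey (1973) result cited above.
# Assumption: the left-hemiface count is round(0.60 * 1474); only n = 1474 and the
# 60% figure come from the text.

import math

n = 1474
k = round(0.60 * n)             # assumed left-hemiface count (~884)

mean = n * 0.5                  # expected count under a 50/50 split
sd = math.sqrt(n * 0.25)
z = (k - mean) / sd

print(f"Left-hemiface portraits: {k}/{n}, z = {z:.1f} relative to chance")
# z of roughly 7.7: a 60% bias across 1474 portraits is vanishingly unlikely by chance alone.
```
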
Multiple theories have been proposed in an attempt to explain the genesis of this left hemiface bias in portraits. One hypothesis is that the right hemisphere is dominant for mediating facial emotional expressions. In an initial study, Buck and Duffy (1980) reported that patients with right hemisphere damage were less capable of facially expressing emotions than those with left hemisphere damage when viewing slides of familiar people, unpleasant scenes, and unusual pictures. These right-left hemispheric differences in facial expressiveness have been replicated in studies involving the spontaneous and voluntary expression of emotions in stroke patients with focal lesions (Borod, Kent, Koff, Martin, & Alpert, 1988; Borod, Koff, Lorch, & Nicholas, 1985; Borod, Koff, Perlman Lorch, & Nicholas, 1986; Richardson, Bowers, Bauer, Heilman, & Leonard, 2000).
Hemispheric asymmetries are even reported in more “naturalistic” settings outside the laboratory. For example, Blonder, Burns, Bowers, Moore, and Heilman (1993) videotaped interviews with patients and spouses in their homes and found that patients with right hemisphere damage were rated as less facially expressive than left hemisphere-damaged patients and normal control patients. These lesion studies suggest that the right hemisphere has a dominant role in mediating emotional facial expressions. Whereas corticobulbar fibers that innervate the forehead are bilateral, the contralateral hemisphere primarily controls the lower face. Thus, these lesion studies suggest that the left hemiface below the forehead, which is innervated by the right hemisphere, may be more emotionally expressive.
This right hemisphere-left hemiface dominance postulate has been further supported by studies of normal subjects portraying emotional facial expressions. For example, Borod et al. (1988) asked subjects to portray emotions, either by verbal command or visual imitation. The judges who rated these facial expressions ranked the left face as expressing stronger emotions. Sackeim and Gur (1978) showed normal subjects photographs of normal people facially expressing their emotions and asked participants to rate the intensity of the emotion being expressed. However, before showing these pictures of people making emotional faces, Sackeim and Gur altered the photographs. They either paired the left hemiface with a mirror image of this photograph's left hemiface to form a full face made up of two left hemifaces or formed full faces from right hemifaces. Normal participants found that the composite photographs of the left hemiface were more emotionally expressive than the right hemiface. Triggs, Ghacibeh, Springer, and Bowers (2005) administered transcranial magnetic stimulation (TMS) to the motor cortex of 50 subjects during contraction of bilateral orbicularis oris muscles and analyzed motor evoked potentials (MEPs). They found that the MEPs elicited in the left lower face were larger than the right face, and thus the left face might appear to be more emotionally expressive because it is more richly innervated.
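
The left-left and right-right composites Sackeim and Gur used are straightforward to construct. Here is a minimal sketch using Pillow; the filename is a placeholder, the face is assumed to be frontal and roughly centred, and which half of the frame corresponds to the subject's left hemiface depends on the photograph's orientation:

```python
# Minimal sketch of a chimeric-face composite: one hemiface joined to its
# own mirror image. "face.jpg" is a placeholder, not an image from the study.

from PIL import Image, ImageOps

face = Image.open("face.jpg")
w, h = face.size

half = face.crop((0, 0, w // 2, h))      # one half of the frame
mirror = ImageOps.mirror(half)           # its left-right mirror image

composite = Image.new(face.mode, (2 * (w // 2), h))
composite.paste(half, (0, 0))
composite.paste(mirror, (w // 2, 0))
composite.save("hemiface_composite.jpg")
```
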
Another reason portraits often have the subjects rotated to the right may be related to the organization of the viewer's brain. Both lesion studies (e.g., Adolphs, Damasio, Tranel, & Damasio, 1996; Bowers, Bauer, Coslett, & Heilman, 1985; DeKosky, Heilman, Bowers, & Valenstein, 1980) and physiological and functional imaging studies (e.g., Davidson & Fox, 1982; Puce, Allison, Asgari, Gore, & McCarthy, 1996; Sergent, Ohta, & Macdonald, 1992) have revealed that the right hemisphere is dominant for the recognition of emotional facial expressions and the recognition of previously viewed faces (Hilliard, 1973; Jones, 1979). In addition, studies of facial recognition and the recognition of facial emotional expressions have demonstrated that facial pictures shown in the left visual field and left hemispace are better recognized than those viewed on the right (Conesa, Brunold-Conesa, & Miron, 1995). Since the right hemisphere is dominant for facial recognition and the perception of facial emotions when viewing faces, the normal viewer of portraits may attend more to the left than right visual hemispace and hemifield. When the head of a portrait is turned to the right and the observer focuses on the middle of the face (midsagittal plane), more of the subject's face would fall in the viewer's left visual hemispace and thus be more likely to project to the right hemisphere.
Agency is another concept that may influence the direction of facial deviation in portraiture. Chatterjee, Maher, Gonzalez Rothi, and Heilman (1995) demonstrated that when right-handed individuals view a scene with more than one figure, they are more likely to see the left figure as being the active agent and the right figure as being the recipient of action or the patient. From this perspective the artist is the agent, and perhaps he or she is more likely to paint the left hemiface of the subject, which from the artist's perspective is more to the right, the position of the patient. Support for this agency hypothesis comes from studies in which individuals rated traits of left- versus right-profiled patients, and found that those with the right cheek exposed were considered more “active” (Chatterjee, 2002).
Taking this background information into account and applying it to depictions of the crucifixion of Jesus Christ highlights the various influences on profile painting in portraiture. Specifically, we confirm the predilection to display the left hemiface in portraiture and predict the same in portraits of Jesus’ crucifixion.
The strongest artistic portrayals of a patient being subject to cruel and painful agents are images of the crucifixion of Jesus. The earliest depiction of Christ on the cross dates back to around 420 AD. As Christianity existed for several centuries before that, this seems to be a late onset for this type of art. Because of the strong focus on Christ's resurrection and the disgrace of his agony and death, art historians postulate that there was a hesitation for early followers to show Christ on the cross. The legalization of Christianity also may have lifted the stigma. Based on the artwork still in existence from that period, Jesus was often pictured alive during the crucifixion scene. Several centuries later, from the end of the seventh century to the beginning of the eighth century, Christ is more often shown dead on the cross (Harries, 2005).

Sunday, October 26, 2014

why we need a neuroscience of creativity and psychopathology


frontiersin |  Individuals with a predisposition to mental disorder may utilize different strategies, or they may use familiar strategies in unusual ways, to solve creative tasks. For over a century, knowledge of psychopathological states in the brain has illuminated our knowledge of normal brain states, and that should also be the case with the study of the creative brain. Neuroscience can approach this study in two ways. First, it can identify genetic variations that may underlie both creativity and psychopathology. This molecular biology approach is already underway, with several studies indicating polymorphisms of the DRD2 and DRD4 genes (Reuter et al., 2006; Mayseless et al., 2013), the 5HT2a gene (Ott et al., 2005) and the NRG1 gene (Kéri, 2009) that have been associated with both creativity and certain forms of psychopathology.

Second, brain imaging work can be applied to the study of the cognitive mechanisms that may be commonly shared between creativity and psychopathology. For example, psychologists have long suggested that both schizotypal and highly creative individuals tend to utilize states of cognitive disinhibition to access associations that are ordinarily hidden from conscious awareness (e.g., Kris, 1952; Koestler, 1964; Eysenck, 1995). Research is revealing that indeed both highly creative subjects and subjects who are high in schizotypy demonstrate more disinhibition during creative tasks than less creative or less schizotypal subjects (see Martindale, 1999; Carson et al., 2003; Abraham and Windmann, 2008; Dorfman et al., 2008). However, the neural substrates of cognitive disinhibition, as applied to creativity, need to be further studied.

My colleagues and I have found that cognitive disinhibition (in the form of reduced latent inhibition) combined with very high IQ levels predicts extraordinary creative achievement (Carson et al., 2003). These results have since been replicated (Kéri, 2011). We hypothesized that cognitive disinhibition allows a broadening of stimuli available to consciousness while high IQ affords the cognitive resources to process and manipulate that increased stimuli to form novel and creative ideas without the individual becoming overwhelmed and confused. What we did not test is whether the high creative achievers in our studies exhibited phasic changes in latent inhibition, or whether their reduced inhibition was more trait-like, as is seen in persons at risk for psychosis. Because latent inhibition tasks are compatible with neuroimaging, the study of controlled cognitive disinhibition is one area of potential study for the neuroscience of creativity.
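
The "combined with" claim above is, statistically, an interaction between latent inhibition and IQ. A minimal sketch of that model form follows; the numbers are fabricated purely for illustration and are not data from Carson et al. (2003):

```python
# Sketch of modelling creative achievement with a latent-inhibition x IQ
# interaction. All values below are fabricated for illustration only.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "achievement":       [2, 5, 3, 9, 1, 8, 4, 12, 2, 10],
    "latent_inhibition": [0.8, 0.3, 0.7, 0.2, 0.9, 0.3, 0.6, 0.1, 0.8, 0.2],
    "iq":                [105, 130, 110, 140, 100, 135, 115, 145, 108, 138],
})

# "latent_inhibition * iq" expands to both main effects plus their interaction;
# the interaction term asks whether low latent inhibition matters more at high IQ.
model = smf.ols("achievement ~ latent_inhibition * iq", data=df).fit()
print(model.params)
```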

Additional areas of study are suggested by the shared vulnerability model of creativity and psychopathology (Carson, 2011, 2013). The shared vulnerability model suggests that creativity and psychopathology may share genetically-influenced factors that are expressed as either pathology or creativity depending upon the presence or absence of other moderating factors (see Figure 1). The shared vulnerability components that have been identified, in addition to cognitive disinhibition, include novelty salience, neural hyperconnectivity, and emotional lability.

Friday, September 19, 2014

socioeconomic status and structural brain development..,


frontiersin |  Recent advances in neuroimaging methods have made accessible new ways of disentangling the complex interplay between genetic and environmental factors that influence structural brain development. In recent years, research investigating associations between socioeconomic status (SES) and brain development has found significant links between SES and changes in brain structure, especially in areas related to memory, executive control, and emotion. This review focuses on studies examining links between structural brain development and SES disparities of the magnitude typically found in developing countries. We highlight how highly correlated measures of SES are differentially related to structural changes within the brain.

Introduction
Human development does not occur within a vacuum. The environmental contexts and social connections a person experiences throughout his or her lifetime significantly impact the development of both cognitive and social skills. The incorporation of neuroscience into topics more commonly associated with the social sciences, such as culture or socioeconomic status (SES), has led to an increased understanding of the mechanisms that underlie development across the lifespan. However, more research is necessary to disentangle the complexities surrounding early environmental variation and neural development. This review highlights studies examining links between structural brain development and SES disparities of the magnitude typically found in developing countries. We do not include studies examining children who have experienced extreme forms of early adversity, such as institutionalization or severe abuse. We also limit this review to findings concerning socioeconomic disparities in brain structure, as opposed to brain function.

Friday, September 12, 2014

sgt. connolly is as different from overseer principe as chalk is different from cheese...,

start at 5:48

bbcnews |  Monkeys at the top and bottom of the social pecking order have physically different brains, research has found.

A particular network of brain areas was bigger in dominant animals, while other regions were bigger in subordinates.

The study suggests that primate brains, including ours, can be specialised for life at either end of the hierarchy.

The differences might reflect inherited tendencies toward leading or following, or the brain adapting to an animal's role in life - or a little of both.

Neuroscientists made the discovery, which appears in the journal Plos Biology, by comparing brain scans from 25 macaque monkeys that were already "on file" as part of ongoing research at the University of Oxford.

"We were also looking at learning and memory and decision-making, and the changes that are going on in your brain when you're doing those things," explained Dr MaryAnn Noonan, the study's first author. 

The decision to look at the animals' social status produced an unexpectedly clear result, Dr Noonan said. 

"It was surprising. All our monkeys were of different ages and different genders - but with fMRI (functional magnetic resonance imaging) you can control for all of that. And we were consistently seeing these same networks coming out."

The monkeys live in groups of up to five, so the team identified their social status by watching their behaviour, then compared it to different aspects of the brain data.

In monkeys at the top of their social group, three particular bits of the brain tended to be larger (specifically the amygdala, the hypothalamus and the raphe nucleus). In subordinate monkeys, the tendency was for a different cluster of regions to be bigger (all within the striatum).
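
The kind of comparison Dr Noonan describes, relating a region's size to social rank while controlling for age and sex, looks roughly like the sketch below. The data frame is fabricated to show the model structure only; it is not the Oxford macaque data:

```python
# Sketch of a covariate-adjusted comparison: region volume vs. dominance rank,
# controlling for age and sex. All values are fabricated for illustration.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "volume": [1.02, 0.98, 1.10, 1.21, 0.95, 1.18, 1.05, 1.25],  # e.g. amygdala volume
    "rank":   [1, 2, 3, 4, 1, 4, 2, 5],                          # dominance rank within the group
    "age":    [4.0, 6.5, 5.0, 7.0, 3.5, 8.0, 5.5, 6.0],
    "sex":    ["m", "f", "m", "f", "m", "m", "f", "f"],
})

# Volume regressed on rank, with age and sex entered as covariates.
model = smf.ols("volume ~ rank + age + C(sex)", data=df).fit()
print(model.summary())
```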

Monday, September 08, 2014

are humans wired for bad news, angry faces, and sad memories?



aeon |  I have good news and bad news. Which would you like first? If it’s bad news, you’re in good company – that’s what most people pick. But why?

Negative events affect us more than positive ones. We remember them more vividly and they play a larger role in shaping our lives. Farewells, accidents, bad parenting, financial losses and even a random snide comment take up most of our psychic space, leaving little room for compliments or pleasant experiences to help us along life’s challenging path. The staggering human ability to adapt ensures that joy over a salary hike will abate within months, leaving only a benchmark for future raises. We feel pain, but not the absence of it.

Hundreds of scientific studies from around the world confirm our negativity bias: while a good day has no lasting effect on the following day, a bad day carries over. We process negative data faster and more thoroughly than positive data, and they affect us longer. Socially, we invest more in avoiding a bad reputation than in building a good one. Emotionally, we go to greater lengths to avoid a bad mood than to experience a good one. Pessimists tend to assess their health more accurately than optimists. In our era of political correctness, negative remarks stand out and seem more authentic. People – even babies as young as six months old – are quick to spot an angry face in a crowd, but slower to pick out a happy one; in fact, no matter how many smiles we see in that crowd, we will always spot the angry face first.

The machinery by which we recognise facial emotion, located in a brain region called the amygdala, reflects our nature as a whole: two-thirds of neurons in the amygdala are geared toward bad news, immediately responding and storing it in our long-term memory, points out neuropsychologist Rick Hanson, Senior Fellow of the Greater Good Science Center at University of California, Berkeley. This is what causes the ‘fight or flight’ reflex – a survival instinct based on our ability to use memory to quickly assess threats. Good news, by comparison, takes 12 whole seconds to travel from temporary to long-term memory. Our ancient ancestors were better off jumping away from every stick that looked like a snake than carefully examining it before deciding what to do.

Saturday, July 12, 2014

european neuroscientists boycotting the european version of BRAIN?

scientificamerican |  The Human Brain Project is an attempt to create a computerized facsimile of the entire brain, down to the level of individual molecules, within a 10-year time frame. It has always been viewed with skepticism by some neuroscientists, who regard its objectives as impossibly ambitious. The project is principally the brainchild of neuroscientist Henry Markram, who wrote for Scientific American on the topic. (A detailed story on the protest by Ian Sample appears in The Guardian. Also, check out this great book excerpt by Sebastian Seung that we ran on Markram and his desire to create a digital brain. Our coverage on this has been ongoing. Maybe also give a look here and here.)

Here’s a snippet of the letter that was drafted by scientists to convey their discontent:
...we wish to express the view that the HBP [Human Brain Project] is not on course and that the European Commission must take a very careful look at both the science and the management of the HBP before it is renewed. We strongly question whether the goals and implementation of the HBP are adequate to form the nucleus of the collaborative effort in Europe that will further our understanding of the brain.
The letter calls for an independent review of the Human Brain Project or perhaps a reallocation of funding to an array of broad-based neuroscience projects that do not just focus on a SimBrain. If either of these options is not forthcoming, the scientists who signed the letter pledge to not participate in the project.

Not your usual Big Science outing, eh?

Markram got back to me with a comment:
It seems that it will take decades more for the neuroscience community to mature to the level of other disciplines. This is such an exciting direction that can bring everyone together to take on this grand challenge. Just so sad that it gets torn apart by scientists that don’t  want to understand, that believe second-hand rumors and just want money for their next experiment. For the first time in my career as a neuroscientist, I lose hope of neuroscience ever answering any real questions about how the brain works and its many diseases.
Then Zach Mainen, a principal investigator at the Champalimaud Neuroscience Programme in Portugal, got back about the reasons for the upsurge of criticism:
A large group (now more than 250) neuroscientists in Europe are trying to send a wake up call to the European Commission to say that the Human Brain Project is not an effective vehicle to form the hub of European neuroscience. Unlike the U.S. Brain initiative, the HBP is a narrowly focused information computing technology effort that, contrary to how it was sold, does not have a realistic plan for understanding brain function. We want the public to know that neuroscience research is not represented by the HBP. We hope that the open message to the EC can help to initiate a dialogue and find a better solution.
The Obama Administration’s brain initiative—possibly a multi-billion dollar undertaking, if fully funded—has also met with some grumbling, but at least some of that has subsided as major neuroscientists have assumed an important advisory role.

It will be interesting to see what happens next. One of the trendiest fields in all of science now has to contend with an unprecedented rebellion in the ranks that festers away as the whole world watches.
