Showing posts with label neuromancy.

Thursday, February 01, 2018

We Know A Little About What Matter Does, But Nothing About What It Is...,


qz |  Interest in panpsychism has grown in part thanks to the increased academic focus on consciousness itself following on from Chalmers’ “hard problem” paper. Philosophers at NYU, home to one of the leading philosophy-of-mind departments, have made panpsychism a feature of serious study. There have been several credible academic books on the subject in recent years, and popular articles taking panpsychism seriously.

One of the most popular and credible contemporary neuroscience theories on consciousness, Giulio Tononi’s Integrated Information Theory, further lends credence to panpsychism. Tononi argues that something will have a form of “consciousness” if the information contained within the structure is sufficiently “integrated,” or unified, and so the whole is more than the sum of its parts. Because it applies to all structures—not just the human brain—Integrated Information Theory shares the panpsychist view that physical matter has innate conscious experience.

Goff, who has written an academic book on consciousness and is working on another that approaches the subject from a more popular-science perspective, notes that there were credible theories on the subject dating back to the 1920s. Thinkers including philosopher Bertrand Russell and physicist Arthur Eddington made a serious case for panpsychism, but the field lost momentum after World War II, when philosophy became largely focused on analytic philosophical questions of language and logic. Interest picked up again in the 2000s, thanks both to recognition of the “hard problem” and to increased adoption of the structural-realist approach in physics, explains Chalmers. This approach views physics as describing structure, and not the underlying nonstructural elements.

“Physical science tells us a lot less about the nature of matter than we tend to assume,” says Goff. “Eddington”—the English scientist who experimentally confirmed Einstein’s theory of general relativity in the early 20th century—“argued there’s a gap in our picture of the universe. We know what matter does but not what it is. We can put consciousness into this gap.”  Fist tap Dale.

Monday, January 01, 2018

Is Ideology The Original Augmented Reality?


nautil.us |  Released in July 2016, Pokémon Go is a location-based, augmented-reality game for mobile devices, typically played on mobile phones; players use the device’s GPS and camera to capture, battle, and train virtual creatures (“Pokémon”) who appear on the screen as if they were in the same real-world location as the player: As players travel the real world, their avatar moves along the game’s map. Different Pokémon species reside in different areas—for example, water-type Pokémon are generally found near water. When a player encounters a Pokémon, AR (Augmented Reality) mode uses the camera and gyroscope on the player’s mobile device to display an image of a Pokémon as though it were in the real world.* This AR mode is what makes Pokémon Go different from other PC games: Instead of taking us out of the real world and drawing us into the artificial virtual space, it combines the two; we look at reality and interact with it through the fantasy frame of the digital screen, and this intermediary frame supplements reality with virtual elements which sustain our desire to participate in the game, push us to look for them in a reality which, without this frame, would leave us indifferent. Sound familiar? Of course it does. What the technology of Pokémon Go externalizes is simply the basic mechanism of ideology—at its most basic, ideology is the primordial version of “augmented reality.”

The first step in this direction of technology imitating ideology was taken a couple of years ago by Pranav Mistry, a member of the Fluid Interfaces Group at the Massachusetts Institute of Technology Media Lab, who developed a wearable “gestural interface” called “SixthSense.”** The hardware—a small webcam that dangles from one’s neck, a pocket projector, and a mirror, all connected wirelessly to a smartphone in one’s pocket—forms a wearable mobile device. The user begins by handling objects and making gestures; the camera recognizes and tracks the user’s hand gestures and the physical objects using computer vision-based techniques. The software processes the video stream data, reading it as a series of instructions, and retrieves the appropriate information (texts, images, etc.) from the Internet; the device then projects this information onto any physical surface available—all surfaces, walls, and physical objects around the wearer can serve as interfaces. Here are some examples of how it works: In a bookstore, I pick up a book and hold it in front of me; immediately, I see projected onto the book’s cover its reviews and ratings. I can navigate a map displayed on a nearby surface, zoom in, zoom out, or pan across, using intuitive hand movements. I make a sign of @ with my fingers and a virtual PC screen with my email account is projected onto any surface in front of me; I can then write messages by typing on a virtual keyboard. And one could go much further here—just think how such a device could transform sexual interaction. (It suffices to concoct, along these lines, a sexist male dream: Just look at a woman, make the appropriate gesture, and the device will project a description of her relevant characteristics—divorced, easy to seduce, likes jazz and Dostoyevsky, good at fellatio, etc., etc.) In this way, the entire world becomes a “multi-touch surface,” while the whole Internet is constantly mobilized to supply additional data allowing me to orient myself.
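
The gadget is easier to picture as a loop: the camera reads the scene, software recognizes a gesture or an object, the Internet supplies the relevant data, and the projector paints it onto whatever surface is handy. Here is a deliberately toy sketch of that loop in Python, offered purely as an illustration; every name in it is a hypothetical stand-in, not anything from Mistry's actual SixthSense code.

    from dataclasses import dataclass
    from typing import Optional

    # Toy sketch of the wearable's loop: read the scene, fetch data, project it.
    # All names here are hypothetical stand-ins, not the real SixthSense software.

    @dataclass
    class Frame:
        gesture: Optional[str] = None    # e.g. an "@" drawn with the fingers
        object_id: Optional[str] = None  # e.g. a book recognized by its cover

    def retrieve_info(object_id: str) -> str:
        # Stand-in for the Internet lookup (reviews, ratings, maps, email, etc.).
        return f"reviews and ratings for {object_id}"

    def project(surface: str, content: str) -> None:
        # Stand-in for the pocket projector: any nearby surface becomes the display.
        print(f"[{surface}] {content}")

    def step(frame: Frame) -> None:
        # One pass of the loop described above: camera output in, projection out.
        if frame.object_id:
            project(surface=frame.object_id, content=retrieve_info(frame.object_id))
        elif frame.gesture == "@":
            project(surface="nearest wall", content="virtual email screen and keyboard")

    step(Frame(object_id="book:dostoyevsky"))  # holding a book up to the camera
    step(Frame(gesture="@"))                   # making the @ sign with one's fingers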

Mistry emphasized the physical aspect of this interaction: Until now, the Internet and computers have isolated the user from the surrounding environment; the archetypal Internet user is a geek sitting alone in front of a screen, oblivious to the reality around him. With SixthSense, I remain engaged in physical interaction with objects: The alternative “either physical reality or the virtual screen world” is replaced by a direct interpenetration of the two. The projection of information directly onto the real objects with which I interact creates an almost magical and mystifying effect: Things appear to continuously reveal—or, rather, emanate—their own interpretation. This quasi-animist effect is a crucial component of the IoT: “Internet of things? These are nonliving things that talk to us, although they really shouldn’t talk. A rose, for example, which tells us that it needs water.”1 (Note the irony of this statement. It misses the obvious fact: a rose is alive.) But, of course, this unfortunate rose does not do what it “shouldn’t” do: It is merely connected with measuring apparatuses that let us know that it needs water (or they just pass this message directly to a watering machine). The rose itself knows nothing about it; everything happens in the digital big Other, so the appearance of animism (we communicate with a rose) is a mechanically generated illusion.

Monday, September 18, 2017

The Promise and Peril of Immersive Technologies


weforum |  The best place from which to draw inspiration for how immersive technologies may be regulated is the regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law necessitate unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of their global annual turnover for breaches. Furthermore, enshrined in the bill is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for an innovative start-up to compete with the biggest players. We may see similar regulatory norms for immersive technologies develop as well.

Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type and IP address. Furthermore, communication with other users in VR environments is being stored and aggregated data is shared with third parties and used to customize products for marketing purposes.

Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.

Managing consumption
Short-term measures also exist to address overuse, in the form of stopping mechanisms. Pop-up warnings that appear once healthy usage limits are approached or exceeded are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on the types of content their children are exposed to, as well as time limits on usage across apps.

All of these could be transferred to immersive technologies, and are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one could easily imagine an enterprising child ‘borrowing’ an adult’s device after hours to work around the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.

Rethinking success metrics for digital technology
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with the service.
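
For readers unfamiliar with the metric, a net promoter score is conventionally computed from 0-10 "would you recommend this?" ratings, with 9-10 counted as promoters and 0-6 as detractors. A minimal sketch of that conventional calculation follows; the sample ratings are invented for illustration and the cutoffs are the standard ones, not anything prescribed by the article.

    def net_promoter_score(ratings):
        # Promoters score 9-10, detractors 0-6; NPS = %promoters - %detractors,
        # giving a value between -100 and +100.
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Example: a small, made-up sample of user ratings for a hypothetical VR app.
    print(net_promoter_score([10, 9, 9, 8, 7, 6, 4, 10]))  # -> 25.0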

The real challenge, however, is to find measures that align with business policy and user objectives. As Tristan Harris, founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. In both cases users pay to enjoy an advertising-free experience, and this does not come at a cost to content developers, since they receive revenue in the form of paid subscriptions.

More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

This is part of a series of articles related to the disruptive effects of several technologies (virtual/augmented reality, artificial intelligence and blockchain) on the creative economy.


Virtual Reality Health Risks...,


medium |  Two decades ago, our research group made international headlines when we published research showing that virtual reality systems could damage people’s health.

Our demonstration of side-effects was not unique — many research groups were showing that it could cause health problems. The reason that our work was newsworthy was because we showed that there were fundamental problems that needed to be tackled when designing virtual reality systems — and these problems needed engineering solutions that were tailored for the human user.

In other words, it was not enough to keep producing ever faster computers and higher definition displays — a fundamental change in the way systems were designed was required.

So why do virtual reality systems need a new approach? The answer to this question lies in the very definition of how virtual reality differs from how we traditionally use a computer.

Natural human behaviour is based on responses elicited by information detected by a person’s sensory systems. For example, rays of light bouncing off a shiny red apple can indicate that there’s a good source of food hanging on a tree.

A person can then use the information to guide their hand movements and pick the apple from the tree. This use of ‘perception’ to guide ‘motor’ actions defines a feedback loop that underpins all of human behaviour. The goal of virtual reality systems is to mimic the information that humans normally use to guide their actions, so that humans can interact with computer generated objects in a natural way.

The problems come when the normal relationship between the perceptual information and the corresponding action is disrupted. One way of thinking about such disruption is that a mismatch between perception and action causes ‘surprise’. It turns out that surprise is really important for human learning and the human brain appears to be engineered to minimise surprise.

This means that the challenge for the designers of virtual reality is that they must create systems that minimise the surprise experienced by the user when using computer generated information to control their actions.

Of course, one of the advantages of virtual reality is that the computer can create new and wonderful worlds. For example, a completely novel fruit — perhaps an elppa — could be shown hanging from a virtual tree. The elppa might have a completely different texture and appearance to any other previously encountered fruit — but it’s important that the information used to specify the location and size of the elppa allows the virtual reality user to guide their hand to the virtual object in a normal way.

If there is a mismatch between the visual information and the hand movements then ‘surprise’ will result, and the human brain will need to adapt if future interactions between vision and action are to maintain their accuracy. The issue is that the process of adaptation may cause difficulties — and these difficulties might be particularly problematic for children as their brains are not fully developed. 

This issue affects all forms of information presented within a virtual world (so hearing and touch as well as vision), and all of the different motor systems (so postural control as well as arm movement systems). One good example of the problems that can arise can be seen through the way our eyes react to movement.

In 1993, we showed that virtual reality systems had a fundamental design flaw when they attempted to show three dimensional visual information. This is because the systems produce a mismatch between where the eyes need to focus and where the eyes need to point. In everyday life, if we change our focus from something close to something far away our eyes will need to change focus and alter where they are pointing.

The change in focus is necessary to prevent blur and the change in eye direction is necessary to stop double images. In reality, the changes in focus and direction are physically linked (a change in fixation distance changes both the focus of the images and where they fall at the back of the eyes).

Tuesday, March 28, 2017

What IS the Current State of Psychotronic Research?


wired |  The Voice of God weapon — a device that projects voices into your head to make you think God is speaking to you — is the military’s equivalent of an urban myth.  Meaning, it’s mentioned periodically at defense workshops (ironically, I first heard about it at the same defense conference where I first met Noah), and typically someone whispers about it actually being used. Now Steven Corman, writing at the COMOPS journal, describes his own encounter with this urban myth:
At a government workshop some time ago I heard someone describe a new tool that was described as the “voice of Allah.” This was said to be a device that would operate at a distance and would deliver a message that only a single person could hear. The story was that it was tested in a conflict situation in Iraq and pointed at one insurgent in a group, who whipped around looking in all directions, and began a heated conversation with his compatriots, who did not hear the message. At the time I greeted this story with some skepticism.
Is there any basis to this technology? Well, Holosonic Research Labs and American Technology Corporation both have versions of directed sound, which can allow a single person to hear a message that others around don’t hear. DARPA appears to be working on its own sonic projector. Intriguingly, Strategy Page reports that troops are using the Long Range Acoustic Device as a modified Voice of God weapon:
It appears that some of the troops in Iraq are using "spoken" (as opposed to "screeching") LRAD to mess with enemy fighters. Islamic terrorists tend to be superstitious and, of course, very religious. LRAD can put the "word of God" into their heads. If God, in the form of a voice that only you can hear, tells you to surrender, or run away, what are you gonna do?
And as Corman also notes, CNET recently wrote about an advertisement in New York for A&E’s TV show Paranormal State, which uses some of this technology. Beyond directed sound, it’s long been known that microwaves at certain frequencies can produce an auditory effect that sounds like it’s coming from within someone’s head (and there’s the nagging question of classified microwave work at Brooks Air Force Base, which the Air Force stubbornly refuses to talk about).

That brings us back to the Voice of God/Allah Weapon. Is it real or bogus? In one version — related to me by another defense reporter — it’s not just Allah’s voice — but an entire holographic image projected above (um, who decides what Allah looks like?). 

Does it exist? I’m not sure, but it’s funny that when you hear it brought up at defense conferences, no one ever asks the obvious question: does anybody think this thing will actually convince people God is speaking to them? I’m thinking, not.

Thursday, March 02, 2017

Music as a Means of Social Control


PenelopeGouk |  Ancient models: Plato and Aristotle

Anyone thinking about music and social control in the early modern period would tend to look for precedents in antiquity, the most significant authors in this regard being Plato and Aristotle. The two crucial texts by Plato are his Republic and the Laws, both of which are concerned with the nature of the best form of political organisation and the proper kind of education for individuals that lead to a stable and harmonious community.

Education of the republic's citizens includes early training in both gymnastics and mousike, which Andrew Barker defines as 'primarily an exposure to poetry and to the music that is its key vehicle.' (For the rest of this discussion I will simply refer to 'music' but will be using it in the broader sense of poetry set to musical accompaniment.) The crucial point is that within Plato's ideal society the kinds of 'music' that are performed must be firmly controlled by the law givers, the argument being that freedom of choice in music and novelty in its forms will inevitably lead to corruption and a breakdown of society. 

Plato's distrust of musical innovation is made concrete in his Laws where he describes what he thinks actually happened once in Greek society, namely that the masses had the effrontery to suppose they were capable of judging music themselves, the result being that 'from a starting point in music, everyone came to believe in their own wisdom about everything, and to reject the law, and liberty followed immediately'. (To Plato liberty is anathema since some people have much greater understanding and knowledge than others.) 

This close association between the laws of music and laws of the state exists because according to Plato music imitates character, and has a direct effect on the soul which itself is a harmonia, the consequence being that bad music results in bad citizens. To achieve a good state some form of regulation must take place, the assumption being that if the right musical rules are correctly followed this will result in citizens of good character. It is fascinating to discover that Plato looks to Egypt with approval for its drastic control of music in society, claiming that its forms had remained unchanged for ten thousand years because of strict regulation that 'dedicated all dancing and all melodies to religion'. To prescribe melodies that possessed a 'natural correctness' he thinks 'would be a task for a god, or a godlike man, just as in Egypt they say that the melodies that have been preserved for this great period of time were the compositions of Isis.' (In fact as we shall see there were similar arguments made for the divine origins of sacred music in the Hebrew tradition.) 

Perhaps thinking himself to be a 'godlike man', Plato lays down a series of strict rules governing musical composition and performance, a prescription that if correctly followed would ensure the virtue of citizens and the stability of the state, as well as the banishment of most professional musicians from society. First, Plato wants to limit the kind of poetry that is set to music at all because songs have such a direct and powerful effect on people's morals. Thus any poems that portray wickedness, immorality, mourning or weakness of any kind must be banned, leaving only music that encourages good and courageous behaviour among citizens. The next thing to be curtailed is the range of musical styles allowed in the city, which Plato would confine to the Dorian and the Phrygian 'harmoniai', a technical term for organisations of musical pitch that for the purposes of this paper need not be discussed in any more detail. Between them these two 'harmoniai' appropriately 'imitate the sounds of the self-restrained and the brave man, each of them both in good fortune and bad.' Thirdly, as well as controlling the words to be sung and the manner in which they are performed, Plato would also regulate the kinds of instrument used for accompaniment, the two most important being the lyra and the kithara. Those that are forbidden include the aulos as well as a range of multi-stringed instruments capable of playing in a variety of different modes. Finally, Plato is emphatic in stating that the metrical foot and the melody must follow the words properly for the right effect to be achieved, rather than the other way around. 

Of course these rules are intrinsically interesting, since they tell us about what Plato thought was wrong with the music of his own time. However, for my purposes they are also interesting because they seem to have had a discernible influence on would-be reformers of music and society in the early modern period (that is, between the sixteenth and eighteenth centuries), which I will come to further on in my paper.

Saturday, November 12, 2016

A Different Theory of Quantum Consciousness


theatlantic |  The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain—which would essentially enable the brain to function like a quantum computer.

As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.

Fisher’s hypothesis faces the same daunting obstacle that has plagued microtubules: a phenomenon called quantum decoherence. To build an operating quantum computer, you need to connect qubits—quantum bits of information—in a process called entanglement. But entangled qubits exist in a fragile state. They must be carefully shielded from any noise in the surrounding environment. Just one photon bumping into your qubit would be enough to make the entire system “decohere,” destroying the entanglement and wiping out the quantum properties of the system. It’s challenging enough to do quantum processing in a carefully controlled laboratory environment, never mind the warm, wet, complicated mess that is human biology, where maintaining coherence for sufficiently long periods of time is well nigh impossible.

Over the past decade, however, growing evidence suggests that certain biological systems might employ quantum mechanics. In photosynthesis, for example, quantum effects help plants turn sunlight into fuel. Scientists have also proposed that migratory birds have a “quantum compass” enabling them to exploit Earth’s magnetic fields for navigation, or that the human sense of smell could be rooted in quantum mechanics.

Fisher’s notion of quantum processing in the brain broadly fits into this emerging field of quantum biology. Call it quantum neuroscience. He has developed a complicated hypothesis, incorporating nuclear and quantum physics, organic chemistry, neuroscience and biology. While his ideas have met with plenty of justifiable skepticism, some researchers are starting to pay attention. “Those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy,” wrote John Preskill, a physicist at the California Institute of Technology, after Fisher gave a talk there. “He may be on to something. At least he’s raising some very interesting questions.”

We Were Wrong About Consciousness Disappearing in Dreamless Sleep


sciencealert |  When it comes to dreamlessness, conventional wisdom states that consciousness disappears when we fall into a deep, dreamless sleep. 

But researchers have come up with a new way to define the different ways that we experience dreamlessness, and say there’s no evidence to suggest that our consciousness 'switches off' when we stop dreaming. In fact, they say the state of dreamlessness is way more complicated than we’d even imagined.

"[T]he idea that dreamless sleep is an unconscious state is not well-supported by the evidence," one of the researchers, Evan Thompson from the University of British Columbia in Canada, told Live Science.

Instead, he says the evidence points to the possibility of people having conscious experiences during all states of sleep - including deep sleep - and that could have implications for those accused of committing a crime while sleepwalking.

But first off, what exactly is dreamlessness?

Traditionally, dreamlessness is defined as that part of sleep that occurs between bouts of dreams - a time of deep sleep when your conscious experience is temporarily switched off. This is different from those times when you simply cannot remember your dreams once you've woken up.

As dream researchers from the University of California, Santa Cruz explain, most people over the age of 10 dream at least four to six times per night during a stage of sleep called REM, or Rapid Eye Movement. (Studies suggest that children under age 10 only dream during roughly 20 percent of their REM periods.)

Considering that REM periods can vary in length, from 5 to 10 minutes for the first REM period of the night to as long as 30-34 minutes later in the night, researchers have suggested that each dream probably lasts no longer than 34 minutes.

While there's some evidence that we can dream during the non-REM sleep that occurs 1 or 2 hours before waking up, if you’re getting your 7 hours of sleep each night, that still leaves a lot of room for dreamlessness.
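
A back-of-the-envelope calculation shows just how much room that is, taking the upper end of the figures quoted above; this is only an illustration of the quoted numbers, not a claim from the research itself.

    # Rough upper bound on nightly dreaming, using the figures quoted above.
    dreams_per_night = 6            # upper end of the 4-6 dreams-per-night range
    max_minutes_per_dream = 34      # the suggested ceiling per dream
    total_sleep_minutes = 7 * 60    # the article's assumed 7 hours of sleep

    max_dreaming = dreams_per_night * max_minutes_per_dream   # 204 minutes
    dreamless = total_sleep_minutes - max_dreaming            # 216 minutes, about 3.6 hours
    print(f"at most ~{max_dreaming} min dreaming; ~{dreamless} min ({dreamless/60:.1f} h) left over")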

Thompson and his colleagues suggest that the traditional view of dreamlessness as an unconscious state of deep sleep is far too simplistic, arguing that it's not a uniform state of unconsciousness, but actually includes a range of experiences involving certain stimuli and cognitive activity.

Can Science Crack Consciousness?

thescientist |  Ever since I switched my research focus from theoretical physics to neuroscience many years ago, my professional life has focused on the “easy problem” of consciousness—exploring relationships between brain activity and mind. So-called signatures of consciousness, such as increased blood oxygen or electrical activity patterns in different brain regions, are recorded using several different imaging methods, including electroencephalography (EEG) and functional magnetic resonance imaging (fMRI).

The “hard problem”— how and why neural activity produces our conscious awareness—presents a much more profound puzzle. Like many scientists and nonscientists alike, I have a long-running fascination with the mystery of consciousness, which serves as the inspiration for my latest book, The New Science of Consciousness.

A new approach to studying consciousness is emerging based on collaborations between neuroscientists and complexity scientists. Such partnerships encompass subfields of mathematics, physics, psychology, psychiatry, philosophy, and more. This cross-disciplinary effort aims to reveal fresh insights into the major challenges of both the easy and the hard problems. How does human consciousness differ from the apparent consciousness of other animals? Do we enjoy genuine free will or are we slaves to unconscious systems? Above all, how can the interactions of a hundred billion nerve cells lead to the mysterious condition called consciousness?

A Better Way to Crack the Brain


nature |  At least half a dozen major initiatives to study the mammalian brain have sprung up across the world in the past five years. This wave of national and international projects has arisen in part from the realization that deciphering the principles of brain function will require collaboration on a grand scale.

Yet it is unclear whether any of these mega-projects, which include scientists from many subdisciplines, will be effective. Researchers with complementary skill sets often team up on grant proposals. But once funds are awarded, the labs involved often return to work on their parts of the project in relative isolation.

We propose an alternative strategy: grass-roots collaborations involving researchers who may be distributed around the globe, but who are already working on the same problems. Such self-motivated groups could start small and expand gradually over time. But they would essentially be built from the ground up, with those involved encouraged to follow their own shared interests rather than responding to the strictures of funding sources or external directives.

This may seem obvious, but such collaboration is stymied by technical and sociological barriers. And the conventional strategies — constructing collaborations top-down or using funding strings to incentivize them — do not overcome those barriers.

Wednesday, November 02, 2016

Contrary to Popular Belief, Mathematical Ability Is Not Innate


cambridge |  In this review, we are pitting two theories against each other: the more accepted theory—the ‘number sense’ theory—suggesting that a sense of number is innate and non-symbolic numerosity is being processed independently of continuous magnitudes (e.g., size, area, density); and the newly emerging theory suggesting that (1) both numerosities and continuous magnitudes are processed holistically when comparing numerosities, and (2) a sense of number might not be innate. In the first part of this review, we discuss the ‘number sense’ theory. Against this background, we demonstrate how the natural correlation between numerosities and continuous magnitudes makes it nearly impossible to study non-symbolic numerosity processing in isolation from continuous magnitudes, and therefore the results of behavioral and imaging studies with infants, adults and animals can be explained, at least in part, by relying on continuous magnitudes. In the second part, we explain the ‘sense of magnitude’ theory and review studies that directly demonstrate that continuous magnitudes are more automatic and basic than numerosities. Finally, we present outstanding questions. Our conclusion is that there is not enough convincing evidence to support the number sense theory anymore. Therefore, we encourage researchers not to assume that number sense is simply innate, but to put this hypothesis to the test, and to consider if such an assumption is even testable in light of the correlation of numerosity and continuous magnitudes.

Saturday, October 15, 2016

the outgroup intolerance hypothesis for schizophrenia


rpsych |  This article proposes a reformulation of the social brain theory of schizophrenia. Contrary to those who consider schizophrenia to be an inherently human condition, we suggest that it is a relatively recent phenomenon, and that the vulnerability to it remained hidden among our hunter-gatherer ancestors. Hence, we contend that schizophrenia is the result of a mismatch between the post-Neolithic human social environment and the design of the social brain. We review the evidence from human evolutionary history of the importance of the distinction between in-group and out-group membership that lies at the heart of intergroup conflict, violence, and xenophobia. We then review the evidence for the disparities in schizophrenia incidence around the world and for the higher risk of this condition among immigrants and city dwellers. Our hypothesis explains a range of epidemiological findings on schizophrenia related to the risk of migration and urbanization, the improved prognosis in underdeveloped countries, and variations in the prevalence of the disorder. However, although this hypothesis may identify the ultimate causation of schizophrenia, it does not specify the proximate mechanisms that lead to it. We conclude with a number of testable and refutable predictions for future research.

Tuesday, May 31, 2016

your brain is not a computer...,


aeon |  No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.

To see how vacuous this idea is, consider the brains of babies. Thanks to evolution, human neonates, like the newborns of all other mammalian species, enter the world prepared to interact with it effectively. A baby’s vision is blurry, but it pays special attention to faces, and is quickly able to identify its mother’s. It prefers the sound of voices to non-speech sounds, and can distinguish one basic speech sound from another. We are, without doubt, built to make social connections.

A healthy newborn is also equipped with more than a dozen reflexes – ready-made reactions to certain stimuli that are important for its survival. It turns its head in the direction of something that brushes its cheek and then sucks whatever enters its mouth. It holds its breath when submerged in water. It grasps things placed in its hands so strongly it can nearly support its own weight. Perhaps most important, newborns come equipped with powerful learning mechanisms that allow them to change rapidly so they can interact increasingly effectively with their world, even if that world is unlike the one their distant ancestors faced.

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word.

Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’.
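
For readers who want to see the letter-by-letter encoding for themselves, a few lines of Python (used here purely as an illustration, not anything from the author's article) print the actual bit patterns behind the word dog:

    # The letter-by-letter encoding described above, made concrete.
    # Whether a byte pattern "means" the letter d or part of an image depends
    # entirely on surrounding conventions, which is the author's point about computers.
    word = "dog"
    data = word.encode("ascii")              # three bytes, one per letter
    for letter, byte in zip(word, data):
        print(letter, format(byte, "08b"))   # prints each byte as 8 bits

    # Output:
    # d 01100100
    # o 01101111
    # g 01100111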

Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.

Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?

the minecraft generation


NYTimes |  Since its release seven years ago, Minecraft has become a global sensation, captivating a generation of children. There are over 100 million registered players, and it’s now the third-best-selling video game in history, after Tetris and Wii Sports. In 2014, Microsoft bought Minecraft — and Mojang, the Swedish game studio behind it — for $2.5 billion.

There have been blockbuster games before, of course. But as Jordan’s experience suggests — and as parents peering over their children’s shoulders sense — Minecraft is a different sort of phenomenon.

For one thing, it doesn’t really feel like a game. It’s more like a destination, a technical tool, a cultural scene, or all three put together: a place where kids engineer complex machines, shoot videos of their escapades that they post on YouTube, make art and set up servers, online versions of the game where they can hang out with friends. It’s a world of trial and error and constant discovery, stuffed with byzantine secrets, obscure text commands and hidden recipes. And it runs completely counter to most modern computing trends. Where companies like Apple and Microsoft and Google want our computers to be easy to manipulate — designing point-and-click interfaces under the assumption that it’s best to conceal from the average user how the computer works — Minecraft encourages kids to get under the hood, break things, fix them and turn mooshrooms into random-number generators. It invites them to tinker.

In this way, Minecraft culture is a throwback to the heady early days of the digital age. In the late ’70s and ’80s, the arrival of personal computers like the Commodore 64 gave rise to the first generation of kids fluent in computation. They learned to program in Basic, to write software that they swapped excitedly with their peers. It was a playful renaissance that eerily parallels the embrace of Minecraft by today’s youth. As Ian Bogost, a game designer and professor of media studies at Georgia Tech, puts it, Minecraft may well be this generation’s personal computer.

At a time when even the president is urging kids to learn to code, Minecraft has become a stealth gateway to the fundamentals, and the pleasures, of computer science. Those kids of the ’70s and ’80s grew up to become the architects of our modern digital world, with all its allures and perils. What will the Minecraft generation become?

“Children,” the social critic Walter Benjamin wrote in 1924, “are particularly fond of haunting any site where things are being visibly worked on. They are irresistibly drawn by the detritus generated by building, gardening, housework, tailoring or carpentry.”

Tuesday, May 24, 2016

doing the most: eavesdropping neuronal snmp < hacking dreams


ted |  Now, if you were interested in studying dreams, I would recommend starting first by just looking at people's thoughts when they are awake, and this is what I do. So I am indeed a neuroscientist, but I study the brain in a very non-traditional way, partially inspired by my background. Before I became a neuroscientist, I was a computer hacker. I used to break into banks and government institutes to test their security. And I wanted to use the same techniques that hackers use to look inside black boxes when I wanted to study the brain, looking from the inside out.
Now, neuroscientists study the brain in one of two typical methods. Some of them look at the brain from the outside using imaging techniques like EEG or fMRI. And the problem there is that the signal is very kind of blurry, coarse. So others look at the brain from the inside, where they stick electrodes inside the brain and listen to brain cells speaking their own language. This is very precise, but this obviously can be done only with animals. Now, if you were to peek inside the brain and listen to it speak, what you would see is that it has this electrochemical signal that you can translate to sound, and this sound is the common currency of the brain. It sounds something like this.
(Clicking)
So I wanted to use this in humans, but who would let you do that? Patients who undergo brain surgery. So I partner with neurosurgeons across the globe who employ this unique procedure where they open the skull of patients, they stick electrodes in the brain to find the source of the problem, and finding the source can take days or sometimes weeks, so this gives us a unique opportunity to eavesdrop on the brains of patients while they are awake and behaving and they have their skull open with electrodes inside.
So now that we do that, we want to find what triggers those cells active, what makes them tick. So what we do is we run studies like this one. This is Linda, one of our patients. She is sitting here and watching those clips.
(Video) ... can't even begin to imagine.

mebbe it's the fast talking, but this guy seems more full of shit than a christmas goose?



zdnet |  I'm part of a team that runs studies on humans while they are being monitored with electrodes implanted deep inside their brains. This is unique, allowing us to eavesdrop on the activity of individual nerve cells inside a human brain. We work with patients who have severe problems that require brain surgery, for potential resection of the focus of an epileptic seizure. Most people with epilepsy take medication to reduce the seizures, but a small number of patients are candidates for an invasive surgery [resection] that removes the seizure focus and stops the seizures. You want to find the smallest amount of brain you can resect to stop the seizure. The surgeons put electrodes around the part of the brain that is suspected as the seizure onset site. Then the neurologists can monitor the activity inside the patient's brain and wait until the patient has experienced a number of seizures in the course of a few days while they are in the hospital. One can then monitor the flow of the seizures and isolate the exact source before resecting the site that causes the seizures. Then the surgeons remove the electrodes and resect the part of the brain where the seizures originate. The patient walks away seizure-free.

As researchers, we use this unique opportunity to work with a patient who is awake with electrodes deep inside his or her brain to study cognition. The patients who are in the hospital waiting to have seizures for the doctor are happy to help science by participating in studies. These studies allow us unique access to the building blocks of thought, memories and emotions in a way that is rarely accessible otherwise in humans. There are only a small number of people in the world who have had their brains opened and have participated in studies where scientists recorded directly from within their brain. We ask the patients about their feelings, for example, while looking inside the brain using those micro-electrodes, and we can see how their answers indicate how the brain works. We can map the brain and learn how the brain operates slowly using this unique way, by looking inside the brain of a person who is sitting in front of us.

In one study, we had people look at images. When you look at a picture of, say, your mother, there is a part of your brain that becomes active as you recognize her. Other parts come to life when you think about something else (say, Marilyn Monroe or Big Ben in London). We can decode these thoughts by looking at the patterns that become active when you see an image of one thing and when you later think about that thing voluntarily. We then are able to see what they're thinking of as they think. At the same time, we can decode their current thought about these things and effectively project those thoughts back to the patients in front of their eyes. You can actually show patients their thoughts. Even more interesting for us is that we can look at competing thoughts. We can put two images on the screen and tell them to think of only one of them and see how this competition is resolved inside the brain.
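
As a rough illustration of what "decoding by looking at the patterns" can mean in practice, here is a deliberately simplified sketch: build an average activity "template" per concept from viewing trials, then label a new activity pattern by its correlation with each template. The data and the classifier are toys invented for this sketch, not the group's actual analysis pipeline.

    import numpy as np

    # Toy decoder: correlate a new firing-rate pattern against per-concept templates.
    rng = np.random.default_rng(0)
    n_cells = 50

    # Pretend templates learned while the patient viewed labeled images.
    templates = {
        "mother":  rng.normal(size=n_cells),
        "marilyn": rng.normal(size=n_cells),
    }

    def decode(activity):
        # Return the concept whose template best correlates with the activity pattern.
        scores = {name: float(np.corrcoef(activity, tmpl)[0, 1])
                  for name, tmpl in templates.items()}
        return max(scores, key=scores.get)

    # Simulate the patient voluntarily thinking of "mother": her template plus noise.
    thought = templates["mother"] + 0.5 * rng.normal(size=n_cells)
    print(decode(thought))  # expected to print "mother"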

Tuesday, May 17, 2016

brainjacking: the future of security for neural implants


boingboing |  In a new scientific review paper published in World Neurosurgery, a group of Oxford neurosurgeons and scientists round up a set of dire, terrifying warnings about the way that neural implants are vulnerable to networked attacks. 

Most of the article turns on deep brain stimulation devices, which can be used to stimulate or suppress activity in different parts of the brain, already used to treat some forms of mental illness, chronic pain and other disorders. The researchers round up a whole dystopia's worth of potential attacks on these implants, including tampering with the victim's reward system "to exert substantial control over a patient's behaviour"; pain attacks that induce "severe pain in these patients"; and attacks on impulse control that could induce "Mania, hypersexuality, and pathological gambling." 

The researchers discuss some of the ways in which the (dismal) state of medical implant security could be improved. I recently co-authored a set of comments to the FDA asking them to require manufacturers to promise not to use the DMCA to intimidate and silence security researchers who come forward with warnings about dangerous defects in their products. 

The paper has a delightful bibliography, which cites books like Neuromancer, anime like Ghost in the Shell, as well as papers in Nature, Brain, The Journal of Neurosurgery, and Brain Stimulation.

Friday, March 11, 2016

the creepy inevitable and inescapable definition of virtual reality...,

WaPo |  When cookie giant Oreo wanted to promote its latest flavors, its marketing heads decided to spice up its traditional TV ads with something not just new, but otherworldly: a virtual-reality-style fly-through of a whimsical, violet-skied fantasyland, where cream filling flows like a river and cookie pieces rocket past the viewer's head.

The 360-degree “Wonder Vault” animation allowed viewers to look around this world by turning their smartphone, moving their mouse on a screen or gazing through a virtual-reality headset. And many did: In the minute-long sugary utopia’s two weeks of existence, it has enticed nearly 3 million YouTube viewers — about as big as the 12-to-34-year-old audience for “The Big Bang Theory,” the most-watched sitcom on TV.

“Look at the Cinnamon Bun world: There are cinnamon buns, but there are also ice skaters. It evokes that sort of emotional connection,” said Elise Burditt, brand manager for Oreo North America. “It’s all about taking people inside this world we’ve created ... and back to that feeling of being a kid again.”

As VR technology has rapidly grown more vivid, affordable and widespread, its artists and fans have championed the dramatic ways it could change movies, news, video games, on-the-job training and the creative arts. But many newcomers will take their first virtual steps via a more quintessentially American medium — advertising. And companies now are investing heavily in a race to shape those worlds to their design.

the inner-trainment industry...,


timtyler |  The entertainment industry wastes billions of dollars a year on films, games, pornography and escapism.

As such it is like a cancerous growth on humanity, sapping our collective resources and strength.

These funds typically do not produce anything worthwhile. They do not feed anyone. No housing or shelter is provided. The world does not wind up better irrigated as a result. No more useful elements or minerals come into circulation. Scientific knowledge is not advanced.

It is not just the funds that are wasted. Precious natural resources are needlessly depleted as well. Human time and effort - which could usefully be spent in other areas - are also used up. Both the consumers and the producers are affected.

All that is produced as a result of all this expenditure is entertainment.

What is entertainment?

Entertainment is a type of stimulation designed to trigger drug-like states of euphoria.

Upon receipt of certain kinds of sensory input, the human brain produces drug-like compounds associated with positive behavioural reinforcement.

Various types of entertainment cause different types of stimulation. Comedy activates the nucleus accumbens - a brain area which is known to be involved in the rewarding feelings that follow monetary gain or the use of some addictive drugs. The shock-relief cycle that horror movies repeatedly put the viewer through works as another type of drug-based conditioning - based on endorphins. Action adventure games are fuelled by adrenaline. Pornography works on the brain's sexual reward centres - and so on.

The result of all this drug-related stimulation is a high level of fantasy addiction in the population.

Addicts tend to become couch potatoes, often with various other associated pathologies: eye strain, back problems, malnutrition, RSI - and so on.

Some exposure to storytelling and fantasies may be beneficial - since it allows humans to gain exposure to the experiences of others quickly and in relative safety. This explains why humans are attracted to this sort of thing in the first place. However, today's fantasies often tend to go beyond what is healthy and beneficial. They typically represent a super-stimulus, in order to encourage a rapid response and subsequent addiction.

We see the same thing with sugars. Some sugar is useful - so humans are genetically programmed to eat it. However, in the modern environment, food is plentiful, and there is a huge food marketing industry - and the result is an obesity epidemic. This wastes billions of dollars in unwanted food production and healthcare bills, and is a complete and unmitigated managerial disaster.

Similarly, some exposure to fantasies is beneficial. It is when there is a whole marketing industry pushing consumers to consume fantasies at the maximum possible rate - in order to satisfy its own selfish goals - that problems with over-production and over-consumption arise.

Thursday, March 10, 2016

deepmind stays winning...,


NYTimes |  Computer, one. Human, zero.

A Google computer program stunned one of the world’s top players on Wednesday in a round of Go, which is believed to be the most complex board game ever created.

The match — between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol — was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans.

“I am very surprised because I have never thought I would lose,” Mr. Lee said at a news conference in Seoul. “I didn’t know that AlphaGo would play such a perfect Go.”

Mr. Lee acknowledged defeat after three and a half hours of play.

Demis Hassabis, the founder and chief executive of Google’s artificial intelligence team DeepMind, the creator of AlphaGo, called the program’s victory a “historic moment.”

Does AlphaGo Mean Artificial Intelligence Is the Real Deal?



