Showing posts with label transbiological.

Tuesday, March 27, 2018

With "Platform" Capitalism - Value Creation Depends on Privacy Invasion


opendemocracy |  The current social mobilization against Facebook resembles the actions of activists who, in opposition to neoliberal globalization, smash a McDonald’s window during a demonstration. 

On March 17, The Observer of London and The New York Times announced that Cambridge Analytica, the London-based political and corporate consulting group, had harvested private data from the Facebook profiles of more than 50 million users without their consent. The data was collected through a Facebook-based quiz app called thisisyourdigitallife, created by Aleksandr Kogan, a University of Cambridge psychologist who had requested and gained access to information from 270,000 Facebook members after they had agreed to use the app to undergo a personality test, for which they were paid through Kogan’s company, Global Science Research.

But as Christopher Wylie, a twenty-eight-year-old Canadian coder and data scientist and a former employee of Cambridge Analytica, stated in a video interview, the app could also collect all kinds of personal data from users, such as the content that they consulted, the information that they liked, and even the messages that they posted.

In addition, the app provided access to information on the profiles of the friends of each of those users who agreed to take the test, which enabled the collection of data from more than 50 million people.

All this data was then shared by Kogan with Cambridge Analytica, which was working with Donald Trump’s election team and which allegedly used this data to target US voters with personalised political messages during the presidential campaign. As Wylie told The Observer, “we built models to exploit what we knew about them and target their inner demons.”

These platforms differ significantly in terms of the services that they offer: some, like eBay or Taobao, simply allow the exchange of products between buyers and sellers; others, like Uber or TaskRabbit, allow independent service providers to find customers; yet others, like Apple or Google, allow developers to create and market apps.

However, what is common to all these platforms is the central role played by data: not just continuous data collection, but its ever more refined analysis, used to create detailed user profiles and rankings that better match customers and suppliers or increase efficiency.

All this is done in order to use data to create value in some way or another (to monetize it by selling it to advertisers or other firms, to increase sales, or to increase productivity). Data has become ‘the new oil’ of the global economy, a new commodity to be bought and sold at massive scale, and with this development, as former Harvard Business School professor Shoshana Zuboff has argued, global capitalism has become ‘surveillance capitalism’.

What this means is that the platform economy is a model of value creation that is completely dependent on continuous privacy invasion. What is alarming is that we are gradually becoming used to this.

Saturday, February 17, 2018

Future Genomics: Don't Edit A Rough Copy When You Can Print A Fresh New One


technologyreview  |  It took Boeke and his team eight years before they were able to publish their first fully artificial yeast chromosome. The project has since accelerated. Last March, the next five synthetic yeast chromosomes were described in a suite of papers in Science, and Boeke says that all 16 chromosomes are now at least 80 percent done. These efforts represent the largest amount of genetic material ever synthesized and then joined together.

It helps that the yeast genome has proved remarkably resilient to the team’s visions and revisions. “Probably the biggest headline here is that you can torture the genome in a multitude of different ways, and the yeast just laughs,” says Boeke.

Boeke and his colleagues aren’t simply replacing the natural yeast genome with a synthetic one (“Just making a copy of it would be a stunt,” says Church). Throughout the organism’s DNA they have also placed molecular openings, like the invisible breaks in a magician’s steel rings. These let them reshuffle the yeast chromosomes “like a deck of cards,” as Cai puts it. The system is known as SCRaMbLE, for “synthetic chromosome recombination and modification by LoxP-mediated evolution.”
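To get a feel for what that reshuffling means combinatorially, here is a toy Python sketch (segment names, event count, and probabilities are all invented for illustration; this models the card-shuffling idea, not the recombinase biology): between recombination sites, stretches of the synthetic chromosome can be deleted or inverted at random, so repeated rounds generate a huge space of variant genomes.

```python
import random

def scramble(segments, events=3, seed=7):
    """Toy SCRaMbLE-like shuffle: randomly delete or invert runs of segments."""
    segs = list(segments)
    random.seed(seed)
    for _ in range(events):
        if len(segs) < 2:
            break
        i, j = sorted(random.sample(range(len(segs) + 1), 2))
        if random.random() < 0.5:
            # inversion: reverse the run and mark each segment with a prime
            segs[i:j] = [s + "'" for s in reversed(segs[i:j])]
        else:
            # deletion: drop the run entirely
            del segs[i:j]
    return segs

print(scramble(["A", "B", "C", "D", "E", "F"]))  # one variant genome per run
```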

The result is high-speed, human-driven evolution: millions of new yeast strains with different properties can be tested in the lab for fitness and function in applications like, eventually, medicine and industry. Mitchell predicts that in time, Sc2.0 will displace all the ordinary yeast in scientific labs.

The ultimate legacy of Boeke’s project could be decided by what genome gets synthesized next. The GP-write group originally imagined that making a synthetic human genome would have the appeal of a “grand challenge.” Some bioethicists disagreed and sharply criticized the plan. Boeke emphasizes that the group will “not do a project aimed at making a human with a synthetic genome.” That means no designer people.

Ethical considerations aside, synthesizing a full human genome—which is over 250 times larger than the yeast genome—is impractical with current methods. The effort to advance the technology also lacks funding. Boeke’s yeast work has been funded by the National Science Foundation and by academic institutions, including partners in China, but the larger GP-write initiative has not attracted major support, other than a $250,000 initial donation from the computer design company Autodesk. Compare that with the Human Genome Project, which enjoyed more than $3 billion in US funding.

Watch The Edge Video, It's About The "Church Approach" To Global Warming



edge  |  I would say that there are two things I’m obsessing about recently. One is global warming and the other is augmentation. Global warming is something that strikes me as an interesting social phenomenon and scientific challenge. From the social side, you’ve got denialism, which, to me, is more important. You have denialism on a bunch of fronts. You've got denial of the Holocaust and evolution, but those aren’t things that necessarily in and of themselves impact our lives. It’s very heartrending and callous that anyone would deny the Holocaust, but as long as they don’t add to that a lot of other racism, nobody’s going to get hurt by it.

I imagine that we could probably populate my company enEvolv, which has evolution in the title, mostly with creationists and they would still get the products out. You just follow a recipe. Even though you’re doing evolution, you don’t need to believe it. Maybe it would help if the very top scientists believed in neo-Darwinism or something. Those are curious things that people fight about and have deep feelings about, but they don’t affect day-to-day life.

Global warming is something that could be catastrophic. You could argue that it’s in the same category because you can’t prove that my life today is worse because of global warming, but it’s something where it could be exponential. The odds are against it, but we don’t even know how to calculate the odds. It’s not like we’re playing blackjack or something like that. There’s more carbon in the Arctic tundra than in the entire atmosphere plus all the rain forests put together. And that carbon, unlike the rain forest where you have to burn the rain forest to release it, goes into the atmosphere as soon as you get melting. It’s already many gigatons per year going up. That’s something that could spiral out of control.

Even for the ultra-concerned citizens, almost all the suggestions are not about how to prevent an exponential release, but how to slow down the inevitable. It's like the extinction problem: If you don’t have a way of reversing it, then you’re fighting a losing battle. That’s not psychologically a good thing, it’s hard to get enthusiastic funding for it, and you will ultimately fail. Whether it’s solar panels, or not using your SUVs as much, or not buying SUVs, or having smaller houses—all of these things are slowing down the inevitable. It’s hard to get excited about that.

The other thing that is problematic socially is the whole idea that it’s an "inconvenient truth." To some extent Gore’s phrase is brilliant, but it’s also counterproductive because the people for whom it is inconvenient don’t want to believe it’s inconvenient. People don’t want to give up their SUVs and their steak meals. It would be better to talk about a convenient solution, whether or not that’s the real solution or the best solution, just talk about it so you get acceptance first. You need acceptance before you can get to the best solution.

The other part that makes acceptance difficult is blame. People will say, "It’s not my fault," and that gets confused for "it’s not anybody’s fault." You could make an argument that it’s not your fault because you weren’t around during the Industrial Revolution. You didn’t personally do that much; you’re just one seven billionth of the problem at most. You could make an argument that you’re not personally to blame, but then expanding that to no human being has had anything to do with it is where things go off the tracks. The thing that got us into the position of denial was the blame game. 

You want everybody to be inconvenienced because it’s their fault. That’s two strikes against you.

I don't know if you’ve read The Righteous Mind, but Jon Haidt makes the point that even people who consider themselves very rational are not using a rational argument when making decisions. They’re making decisions and then using the rational argument to rationalize. A lot of what he says sounds obvious once you restate it, but I found the way he says it and backs it up with social science research very illuminating, if not compelling.

The elephant, as he refers to it, the thing that’s making your decisions in your life, is deciding that this person is telling you that you’re responsible for something you don’t feel responsible for. It's telling you that you have to sacrifice many things that you don’t want to sacrifice. From your viewpoint, that person is inconvenient, incorrect, and you’re going to ignore them. The more they insult you and your way of life, the less you’re going to listen to them, and then you’re going to make a bunch of rationalizations about that. This is why we have problems. 

Digital Biological Conversion


news.com.au  |   THE human race has come a very long way in a short amount of time, but what is coming around the corner will change everything we thought we knew about mankind. 

Modern medicine and rapidly advancing technology have seen us greatly evolve from the early days of hunter-gatherers, and now the same factors are working toward seeing the introduction of “superhumans” into our society.

At the core of the development are designer bodies using DNA manipulation and human/AI hybrids, both of which were highlighted during the World Government Summit in Dubai.

CHANGING YOUR DNA
Imagine being able to choose whether your unborn child will be male or female, along with their height, weight and even athletic prowess.

Now imagine hacking our memories or making our bodies able to thrive in extreme environments in which survival was previously impossible.

These are both quickly becoming a reality, according to founding director of the Life Sciences Project at Harvard Business School, Juan Enriquez.

Allowing humans to become masters of their DNA is something that can be achieved using a gene editing technique known as CRISPR — a simple yet powerful tool used to easily alter DNA sequences and modify gene function.

“These instruments, like CRISPR, are allowing us to, in real-time, edit life on a grand scale,” he said, according to Futurism. “We are rewriting the sentences of life to our purposes.”

Mr Enriquez said these techniques will soon see us living in a world of “unrandom selection”.

“Instead of letting nature select what lives here, I’m going to select what lives here,” he said. “Science used to be about discovery, now it is about creation.”

The academic said that, more than being able to create athletes from birth, the technology would greatly increase the number of lives that could be saved on a daily basis.

“You can make the world’s flu vaccine in a week instead of a year. And by the way, this is no longer theoretical,” Mr Enriquez said.

With the likes of Elon Musk and NASA working toward getting humans to colonise Mars, he said gene editing will play a vital role in this effort. Fist tap Big Don.



Wednesday, January 10, 2018

Money As Tool, Money As Drug: The Biological Psychology of a Strong Incentive


nih.gov |  Why are people interested in money? Specifically, what could be the biological basis for the extraordinary incentive and reinforcing power of money, which seems to be unique to the human species? We identify two ways in which a commodity which is of no biological significance in itself can become a strong motivator. The first is if it is used as a tool, and by a metaphorical extension this is often applied to money: it is used instrumentally, in order to obtain biologically relevant incentives. Second, substances can be strong motivators because they imitate the action of natural incentives but do not produce the fitness gains for which those incentives are instinctively sought. The classic examples of this process are psychoactive drugs, but we argue that the drug concept can also be extended metaphorically to provide an account of money motivation. From a review of theoretical and empirical literature about money, we conclude that (i) there are a number of phenomena that cannot be accounted for by a pure Tool Theory of money motivation; (ii) supplementing Tool Theory with a Drug Theory enables the anomalous phenomena to be explained; and (iii) the human instincts that, according to a Drug Theory, money parasitizes include trading (derived from reciprocal altruism) and object play.

Monday, January 01, 2018

MindSmash Pipes Up Into The Digital Catheter...,


engadget |  The new replay tools offered in PlayerUnknown's Battlegrounds are so much more than standard video-capture technology. In fact, it isn't video capture at all -- it's data capture. The 3D replay tools allow players to zoom around the map after a match, tracking their own character, following enemies' movements, slowing down time and setting up cinematic shots of their favorite kills, all within a 1-kilometer radius of their avatar. It's filled with statistics, fresh perspectives and infinite data points to dissect. This isn't just a visual replay; it's a slice of the actual game, perfectly preserved, inviting combatants to play God.
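The distinction is easy to make concrete: instead of storing pixels, the engine logs timestamped gameplay events and re-simulates them later, so a replay camera can go anywhere. A minimal sketch of the idea in Python (the event names and fields are hypothetical, not Minkonet's actual format):

```python
import json

replay_log = []

def record(t, actor, event, **data):
    """Gameplay code calls this instead of (or alongside) rendering."""
    replay_log.append({"t": t, "actor": actor, "event": event, **data})

record(12.05, "player7",  "move", pos=[140.0, 88.5, 3.2])
record(12.31, "player7",  "fire", weapon="AKM", target="player42")
record(12.32, "player42", "hit",  damage=49)

def replay(log, focus=None):
    """Re-run the event stream; a renderer could draw it from any viewpoint."""
    for e in sorted(log, key=lambda e: e["t"]):
        if focus is None or e["actor"] == focus:
            print(json.dumps(e))

replay(replay_log, focus="player7")  # a "slice of the actual game"
```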

PUBG is an ideal test case. It's a massively popular online game where up to 100 players parachute onto a map, scavenge for supplies, upgrade weapons and attempt to be the last person standing. Even though it technically came out in December, PUBG has been available in early access since March and it's picked up a considerable number of accolades -- and players -- in the process. Just last week, SteamDB reported PUBG hit 3 million concurrent players on PC, vastly outstripping its closest competitor, Dota 2, which has a record of 1.29 million simultaneous players.

Part of PUBG's success stems from developers' relentless focus on making the game fun to watch. Live streaming is now a major part of the video-game world, with sites like Twitch and YouTube Gaming growing in prominence and eSports bursting into the mainstream.

Kim says PUBG creator Brendan Greene and CEO Chang Han Kim built the idea of data-capture into the game from the beginning, and Minkonet's tech is a natural evolution of this focus. Minkonet and PUBG developers connected in late 2016 and started working together on the actual software earlier this year.

"One of their first visions was to have PUBG as not just a great game to play, but a great game to watch," Kim says. "So they were already from the very beginning focused on having PUBG as a great live streaming game; esports was also one of their sort of long-term visions."


Is Ideology The Original Augmented Reality?


nautil.us |  Released in July 2016, Pokémon Go is a location-based, augmented-reality game for mobile devices, typically played on mobile phones; players use the device’s GPS and camera to capture, battle, and train virtual creatures (“Pokémon”) who appear on the screen as if they were in the same real-world location as the player: As players travel the real world, their avatar moves along the game’s map. Different Pokémon species reside in different areas—for example, water-type Pokémon are generally found near water. When a player encounters a Pokémon, AR (Augmented Reality) mode uses the camera and gyroscope on the player’s mobile device to display an image of a Pokémon as though it were in the real world.* This AR mode is what makes Pokémon Go different from other PC games: Instead of taking us out of the real world and drawing us into the artificial virtual space, it combines the two; we look at reality and interact with it through the fantasy frame of the digital screen, and this intermediary frame supplements reality with virtual elements which sustain our desire to participate in the game, push us to look for them in a reality which, without this frame, would leave us indifferent. Sound familiar? Of course it does. What the technology of Pokémon Go externalizes is simply the basic mechanism of ideology—at its most basic, ideology is the primordial version of “augmented reality.”

The first step in this direction of technology imitating ideology was taken a couple of years ago by Pranav Mistry, a member of the Fluid Interfaces Group at the Massachusetts Institute of Technology Media Lab, who developed a wearable “gestural interface” called “SixthSense.”** The hardware—a small webcam that dangles from one’s neck, a pocket projector, and a mirror, all connected wirelessly to a smartphone in one’s pocket—forms a wearable mobile device. The user begins by handling objects and making gestures; the camera recognizes and tracks the user’s hand gestures and the physical objects using computer vision-based techniques. The software processes the video stream data, reading it as a series of instructions, and retrieves the appropriate information (texts, images, etc.) from the Internet; the device then projects this information onto any physical surface available—all surfaces, walls, and physical objects around the wearer can serve as interfaces. Here are some examples of how it works: In a bookstore, I pick up a book and hold it in front of me; immediately, I see projected onto the book’s cover its reviews and ratings. I can navigate a map displayed on a nearby surface, zoom in, zoom out, or pan across, using intuitive hand movements. I make a sign of @ with my fingers and a virtual PC screen with my email account is projected onto any surface in front of me; I can then write messages by typing on a virtual keyboard. And one could go much further here—just think how such a device could transform sexual interaction. (It suffices to concoct, along these lines, a sexist male dream: Just look at a woman, make the appropriate gesture, and the device will project a description of her relevant characteristics—divorced, easy to seduce, likes jazz and Dostoyevsky, good at fellatio, etc., etc.) In this way, the entire world becomes a “multi-touch surface,” while the whole Internet is constantly mobilized to supply additional data allowing me to orient myself.

Mistry emphasized the physical aspect of this interaction: Until now, the Internet and computers have isolated the user from the surrounding environment; the archetypal Internet user is a geek sitting alone in front of a screen, oblivious to the reality around him. With SixthSense, I remain engaged in physical interaction with objects: The alternative “either physical reality or the virtual screen world” is replaced by a direct interpenetration of the two. The projection of information directly onto the real objects with which I interact creates an almost magical and mystifying effect: Things appear to continuously reveal—or, rather, emanate—their own interpretation. This quasi-animist effect is a crucial component of the IoT: “Internet of things? These are nonliving things that talk to us, although they really shouldn’t talk. A rose, for example, which tells us that it needs water.”1 (Note the irony of this statement. It misses the obvious fact: a rose is alive.) But, of course, this unfortunate rose does not do what it “shouldn’t” do: It is merely connected with measuring apparatuses that let us know that it needs water (or they just pass this message directly to a watering machine). The rose itself knows nothing about it; everything happens in the digital big Other, so the appearance of animism (we communicate with a rose) is a mechanically generated illusion.

Thursday, December 07, 2017

3-D Printed, WiFi Connected, No Electronics...,


Washington |  Imagine a bottle of laundry detergent that can sense when you’re running low on soap — and automatically connect to the internet to place an order for more.

University of Washington researchers are the first to make this a reality by 3-D printing plastic objects and sensors that can collect useful data and communicate with other WiFi-connected devices entirely on their own.

With CAD models that the team is making available to the public, 3-D printing enthusiasts will be able to create objects out of commercially available plastics that can wirelessly communicate with other smart devices. That could include a battery-free slider that controls music volume, a button that automatically orders more cornflakes from Amazon or a water sensor that sends an alarm to your phone when it detects a leak.
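Under the hood, per the researchers' description, the printed objects communicate by backscatter: a plastic mechanism toggles an antenna between reflecting and absorbing ambient WiFi, which a receiver observes as small swings in signal strength. A toy decoder for that kind of on-off keying might look like the following (the RSSI values and framing are invented; this is not the UW team's pipeline):

```python
def decode_ook(rssi_samples, samples_per_bit=4):
    """Recover on-off-keyed bits from received signal strength readings."""
    threshold = sum(rssi_samples) / len(rssi_samples)  # midpoint between states
    bits = []
    for i in range(0, len(rssi_samples), samples_per_bit):
        chunk = rssi_samples[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > threshold else 0)
    return bits

# reflective (stronger) chunks read as 1s, absorptive (weaker) chunks as 0s
rssi = [-40, -41, -40, -39,  -52, -51, -53, -52,  -40, -40, -41, -39]
print(decode_ook(rssi))  # -> [1, 0, 1]
```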

“Our goal was to create something that just comes out of your 3-D printer at home and can send useful information to other devices,” said co-lead author and UW electrical engineering doctoral student Vikram Iyer. “But the big challenge is how do you communicate wirelessly with WiFi using only plastic? That’s something that no one has been able to do before.”

The system is described in a paper presented Nov. 30 at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.

3-D Printed Living Tattoos


MIT |  MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells.

The cells are engineered to light up in response to a variety of stimuli. When mixed with a slurry of hydrogel and nutrients, the cells can be printed, layer by layer, to form three-dimensional, interactive structures and devices.

The team then demonstrated its technique by printing a “living tattoo” — a thin, transparent patch patterned with live bacteria cells in the shape of a tree. Each branch of the tree is lined with cells sensitive to a different chemical or molecular compound. When the patch is adhered to skin that has been exposed to the same compounds, corresponding regions of the tree light up in response.

The researchers, led by Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering, and Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science, say that their technique can be used to fabricate “active” materials for wearable sensors and interactive displays. Such materials can be patterned with live cells engineered to sense environmental chemicals and pollutants as well as changes in pH and temperature.

What’s more, the team developed a model to predict the interactions between cells within a given 3-D-printed structure, under a variety of conditions. The team says researchers can use the model as a guide in designing responsive living materials.

Zhao, Lu, and their colleagues have published their results today in the journal Advanced Materials. The paper’s co-authors are graduate students Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, and postdoc Cesar de la Fuente-Nunez.

Sunday, October 01, 2017

Quantum Criticality in Living Systems


phys.org  |  Stuart Kauffman, from the University of Calgary, and several of his colleagues have recently published a paper on the Arxiv server titled 'Quantum Criticality at the Origins of Life'. The idea of quantum criticality, and more generally of quantum critical states, comes, perhaps not surprisingly, from solid-state physics. It describes unusual electronic states that are balanced somewhere between conduction and insulation. More specifically, under certain conditions, current flow at the critical point becomes unpredictable. When it does flow, it tends to do so in avalanches that vary by several orders of magnitude in size.

Ferromagnetic metals, like iron, are one familiar example of a material that has a classical critical point. Above a temperature of 1043 degrees K, the magnetization of iron is completely lost. In the narrow range approaching this point, however, thermal fluctuations in the electron spins that underlie the magnetic behavior extend over all length scales of the sample—that's the scale invariance we mentioned. In this case we have a continuous phase transition that is thermally driven, as opposed to being driven by something else like external pressure, magnetic field, or some kind of chemical influence.
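That scale invariance has a compact standard statement: as the temperature approaches the Curie point, the correlation length of the spin fluctuations diverges, so fluctuations occur on every length scale at once (here ν is a positive critical exponent):

```latex
% correlation length near a classical (thermal) critical point
\xi(T) \;\propto\; |T - T_c|^{-\nu}, \qquad \nu > 0,
\quad T_c \approx 1043\,\mathrm{K} \text{ for iron}
```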

Quantum criticality, on the other hand, is usually associated with stranger electronic behaviors—things like high-temperature superconductivity or so-called heavy fermion metals like CeRhIn5. One strange behavior in the case of heavy fermions, for example, is the observation of large 'effective mass'—mass up to 1000 times normal—for the conduction electrons as a consequence of their narrow electronic bands. These kinds of phenomena can only be explained in terms of the collective behavior of highly correlated electrons, as opposed to more familiar theory based on decoupled electrons. 

Experimental evidence for critical points in materials like CeRhIn5 has only recently been found. In this case the so-called "Fermi surface," a three-dimensional map representing the collective energy states of all electrons in the material, was seen to have large instantaneous shifts at the critical points. When electrons across the entire Fermi surface are strongly coupled, unusual physics like superconductivity is possible.

The potential existence of quantum criticality in proteins is a new idea that will need some experimental evidence to back it up. Kauffman and his group eloquently describe the major differences between current flow in proteins as compared to metallic conductors. They note that in metals charges 'float' due to voltage differences. Here, an electric field accelerates electrons while scattering off impurities dissipates their energy, fixing a constant average propagation velocity.
By contrast, this kind of mechanism would appear to be uncommon in biological systems. The authors note that charges entering a critically conducting biomolecule will be under the joint influence of the quantum Hamiltonian and the excessive decoherence caused by the environment. Currently a huge focus in quantum biology, this kind of conductance has been seen, for example, for excitons in light-harvesting systems. As might already be apparent here, the logical flow of the paper, at least to nonspecialists, quickly devolves into the more esoteric world of quantum Hamiltonians and niche concepts like 'Anderson localization.'

To try to catch a glimpse of what might be going on without becoming steeped in formalism I asked Luca Turin, who actually holds the patent for semiconductor structures using proteins as their active element, for his take on the paper. He notes that the question of how electrons get across proteins is one of the great unsolved problems in biophysics, and that the Kauffman paper points in a novel direction to possibly explain conduction. Quantum tunnelling (which is an essential process, for example, in the joint special ops of proteins of the respiratory chain) works fine over small distances. However, rates fall precipitously with distance. Traditional hole and electron transport mechanisms butt against the high bandgap and absence of obvious acceptor impurities. Yet at rest our body's fuel cell generates 100 amps of electron current.
 
In suggesting that biomolecules, or at least most of them, are quantum critical conductors, Kauffman and his group are claiming that their electronic properties are precisely tuned to the transition point between a metal and an insulator. An even stronger reading of this would have it that there is a universal mechanism of charge transport in living matter which can exist only in highly evolved systems. To back all this up, the group took a closer look at the electronic structure of a few of our standard issue proteins like myoglobin, profilin, and apolipoprotein E.

In particular, they selected NMR spectra from the Protein Data Bank and used a technique known as the extended Hückel Hamiltonian method to calculate HOMO/LUMO orbitals for the proteins. For more comments on HOMO/LUMO orbital calculations you might look at our post on Turin's experiments on electron spin changes as a potential general mechanism of anesthesia. To fully appreciate what such calculations might imply in this case, we have to toss out another fairly abstract concept, namely, Hofstadter's butterfly.
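For a sense of what such orbital calculations involve, here is a minimal sketch in the same spirit (plain Hückel theory on benzene's six-carbon π system rather than extended Hückel on a protein; the α and β values are the usual textbook placeholders, not the paper's parameters):

```python
import numpy as np

alpha, beta = 0.0, -1.0      # site energy and hopping term, in units of |beta|
n = 6                        # six carbons in the benzene ring
A = np.zeros((n, n))
for i in range(n):           # ring connectivity (each carbon bonds to its neighbors)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Hueckel Hamiltonian: H = alpha*I + beta*A; its eigenvalues are orbital energies
energies = np.sort(np.linalg.eigvalsh(alpha * np.eye(n) + beta * A))

n_electrons = 6              # one pi electron per carbon, two electrons per orbital
homo = energies[n_electrons // 2 - 1]
lumo = energies[n_electrons // 2]
print(f"HOMO = {homo:+.2f}, LUMO = {lumo:+.2f}, gap = {lumo - homo:.2f}")
# -> HOMO = -1.00, LUMO = +1.00, gap = 2.00 (i.e., 2|beta|)
```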

What is Life?


scribd |  Schrödinger unleashed modern molecular biology with his “What Is Life?”.[1] The order in biology must be due, not to statistical processes attributable to statistical mechanics, but to the stability of the chemical bond. In one brilliant intuition, he said, “It will not be a periodic crystal, for these are dull. “Genes” will be an aperiodic crystal containing a microcode for the organism.” (my quotes around “genes”.) He was brilliantly right, but insufficient.

The structure of DNA followed, the code and genes turning one another on and off in some vast genetic regulatory network. Later work, including my own,[2] showed that such networks could behave with sufficient order for ontogeny or be enormously chaotic and no life could survive that chaos.

We biologists continue to think largely in terms of classical physics and chemistry, even about the origins of life, and life itself, despite Schrödinger’s clear message that life depends upon quantum mechanics.
 
In this short article, I wish to explore current “classical physics” ideas about the origin of life, then introduce the blossoming field of quantum biology and, with it, a newly discovered state of matter, the Poised Realm, hovering reversibly between the quantum and “classical” worlds, which may be fundamental to life. Life may be lived in the Poised Realm, with wide implications.

The widest implications are a hope for a union of the objective and subjective poles; the latter lost since Descartes’ Res cogitans failed and Newton triumphed with classical physics and Descartes’ Res extensa. What I shall say here is highly speculative.

2 Classical Physics and Chemistry Ideas about the Origin of Life
There are four broad views about the origin of life:
1) The RNA world view, dominant in the USA.
2) The spontaneous emergence of a “collectively autocatalytic set”, which might be RNA, peptides, both, or other molecular species.
3) Budding liposomes or other self-reproducing vesicles.
4) Metabolism first, with linked sets of chemical reaction cycles, which are autocatalytic in the sense that each produces an extra copy of at least one product per cycle. 

Almost all workers agree that however molecular reproduction may have occurred, it is plausibly the case that housing such a system in a liposome or similar vesicle is one way to confine reactants. Recent work suggests that a dividing liposome and reproducing molecular system will synchronize divisions, so could form a protocell, hopefully able to evolve to some extent.[3]


Thursday, September 28, 2017

Trans-Turing Machines



bigsmartdata |  From the Poised Realm, the embodiment of Trans-Turing Systems, as a real invention, doth flow. Of course you are familiar with the Turing Machine: the theoretical paper tape compute engine to which all modern processors are obliged to worship every Sunday…you are, of course, familiar with the Turing Machine.
The Turing Machine
The work of Alan Turing is the rock from which the quest for a congruent theoretical computer science was launched. Totally awesome quantum computing heavy Scott Aaronson has written that when it comes to AI, we can divide everything that’s been said about it into two categories: the 70% that was covered in Turing’s 1950 paper Computing Machinery and Intelligence, and the remaining 30% that has followed in the decades since then. 

Turing Machine = Foundation of Computer Science 
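For anyone who wants the baseline stated in code rather than prose, here is a minimal single-tape simulator (a generic sketch; the unary-increment transition table is just a toy program):

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: {(state, symbol): (new_state, write_symbol, move)}, move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# toy program: append one more '1' to a unary number, then halt
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("halt",  "1",  0),   # write a final 1 and halt
}

print(run_turing_machine(increment, "111"))  # -> ('halt', '1111')
```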

So whoa — a Trans-Turing System? What?? I must know more!
That was the third thing that drove me to Tucson. I had to ask Kauffman what he meant — what he saw, what he imagined. I found a description in the patent that Kauffman, et al, filed in 2014:
"Further disclosed herein is a Trans-Turing machine that includes a plurality of nodes, each node comprising at least one quantum degree of freedom that is coupled to at least one quantum degree of freedom in another node and at least one classical degree of freedom that is coupled to at least one classical degree of freedom in another node, wherein the nodes are configured such that the quantum degrees of freedom decohere to classicity and thereby alter the classical degrees of freedom, which then alter the decoherence rate of remaining quantum degrees of freedom; at least one input signal generator configured to produce an input signal that recoheres classical degrees of freedom to quantum degrees of freedom; and a detector configured to receive quantum or classical output signals from the nodes."

Sweet. I got it. Quantum computing nodes working in tandem with classical compute (Turing Machine) systems and what emerges is a Trans-Turing Machine, not constrained nor otherwise entailed by a bothersome set of NP-complete limits. Polynomial hierarchy collapse ensues, at long last P = NP, and we are full throttle to ride warp drive engines to the stars! Maybe? Maybe. Maybe not.
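My reading of that claim, reduced to a caricature you can run (every rate and rule below is invented; this is a toy of the feedback loop in the patent language, emphatically not the invention itself): quantum degrees of freedom randomly decohere into classical values, the classical state modulates the decoherence rate of whatever remains coherent, and a periodic input signal re-coheres some of the classical degrees of freedom.

```python
import random

def trans_turing_toy(n_nodes=8, steps=40, recohere_every=10, seed=1):
    random.seed(seed)
    coherent = [True] * n_nodes    # which degrees of freedom are still "quantum"
    classical = [0] * n_nodes      # values fixed by past decoherence events
    history = []
    for t in range(steps):
        # classical state feeds back on the decoherence rate of the rest
        rate = 0.05 + 0.4 * (sum(classical) / n_nodes)
        for i in range(n_nodes):
            if coherent[i] and random.random() < rate:
                coherent[i] = False
                classical[i] = random.choice([0, 1])   # "measurement" outcome
        # the input signal periodically re-coheres classical degrees of freedom
        if t % recohere_every == recohere_every - 1:
            for i in range(n_nodes):
                if not coherent[i] and random.random() < 0.5:
                    coherent[i], classical[i] = True, 0
        history.append(sum(coherent))
    return history

print(trans_turing_toy())  # count of still-coherent nodes at each step
```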

I had to ask Kauffman.

After I spotted him at the outdoor mixer on Thursday night, after I got over my fanboy flutters, after I introduced myself, chatted with him for a bit about his new book and how much I liked it, after I explained my own thoughts from my field in computer science, and how his book from a decade and a half earlier had so deeply influenced me, I did finally ask. 

“So how do we build the Trans-Turing Machine?”
A wry smile crossed his face. His eyes lit. For a moment he stopped being the intellectual giant I had come to revere, and revealed the mischievous, inquisitive, childlike spirit that must have driven him his entire life.

“I have no idea,” he replied with a grin.

I was satisfied. I knew he did not mean that he could not conceive of one, nor did he mean that he could not describe one, define the attributes it might require, or imagine how it might function. What he meant was that we still don’t know enough about quantum computing to imbue an instrument of our own creation with something akin to consciousness — whatever that means.

Today we all harvest the ample fruits from the first baby steps into the Network Age. We are still painting a digital patina over the planet. More stuff soon will think. We are clearly well into the age of pervasive computing, but computing is not yet ubiquitous, though soon it will be. Soon — within a decade — everything will be engineered to connect with everything, and almost all those systems are and will be awesome Turing Machines, programmable systems all, that will link us all together in a transcendent fine-grained meshed digital fabric of increasing value. Yet on the fringes, there is quantum computing, playfully peeking through from behind the classical physics curtain. And therein lies the unpredictable. It could be that Here There Be Monsters. Or not. That’s the beauty and the bizarre of where we are. Both terror and elation are on the rise, though neither is as appropriate nor as compelling as the raw, robust curiosity that drives us ever forward.

Is the ineffable thing to come a D-Wave progeny? Maybe. Will Scott Aaronson explain and extend the exploding adjacent possible? Probably. Did Kauffman and Hameroff lead us to the brink? Absolutely. And from the wily Trans-Turing Machine, will Machine Consciousness one day emerge … whatever that means? 

I have no idea.

Monday, September 18, 2017

The Promise and Peril of Immersive Technologies


weforum |  The best place from which to draw inspiration for how immersive technologies may be regulated is the regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law necessitate unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of their global annual turnover for breaches. Furthermore, enshrined in the bill is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for an innovative start-up to compete with the biggest players. We may see similar regulatory norms for immersive technologies develop as well.

Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type, and IP address. Furthermore, communication with other users in VR environments is being stored, and aggregated data is shared with third parties and used to customize products for marketing purposes.

Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.

Managing consumption
Short-term measures also exist to address overuse in the form of stopping mechanisms. Pop-up usage warnings once healthy limits are approached or exceeded are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on content types that their children are exposed to, as well as time limits on usage across apps.

All of these could be transferred to immersive technologies, and are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one could easily imagine an enterprising child might ‘borrow’ an adult’s device after-hours to find a workaround to the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.

Rethinking success metrics for digital technology
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with a service.
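The net promoter score floated here has a standard definition that is easy to state precisely: on a 0-10 "would you recommend this?" scale, promoters score 9-10, detractors 0-6, and NPS is the percentage-point gap between them. A minimal sketch:

```python
def nps(ratings):
    """Net promoter score from 0-10 'would you recommend?' ratings."""
    promoters  = sum(r >= 9 for r in ratings)   # 9s and 10s
    detractors = sum(r <= 6 for r in ratings)   # 0 through 6
    return 100 * (promoters - detractors) / len(ratings)

print(nps([10, 9, 9, 8, 7, 6, 4]))  # 3 promoters, 2 detractors -> ~14.3
```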

The real challenge, however, is to find measures that align with business policy and user objectives. As Tristan Harris, Founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. These are examples where users can pay to enjoy advertising-free experiences, and which do not come at a cost to the content developers, since they receive revenue in the form of paid subscriptions.

More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

This is part of a series of articles related to the disruptive effects of several technologies (virtual/augmented reality, artificial intelligence and blockchain) on the creative economy.


Virtual Reality Health Risks...,


medium |  Two decades ago, our research group made international headlines when we published research showing that virtual reality systems could damage people’s health.

Our demonstration of side-effects was not unique — many research groups were showing that it could cause health problems. The reason that our work was newsworthy was because we showed that there were fundamental problems that needed to be tackled when designing virtual reality systems — and these problems needed engineering solutions that were tailored for the human user.

In other words, it was not enough to keep producing ever faster computers and higher definition displays — a fundamental change in the way systems were designed was required.

So why do virtual reality systems need a new approach? The answer to this question lies in the very definition of how virtual reality differs from how we traditionally use a computer.

Natural human behaviour is based on responses elicited by information detected by a person’s sensory systems. For example, rays of light bouncing off a shiny red apple can indicate that there’s a good source of food hanging on a tree.

A person can then use the information to guide their hand movements and pick the apple from the tree. This use of ‘perception’ to guide ‘motor’ actions defines a feedback loop that underpins all of human behaviour. The goal of virtual reality systems is to mimic the information that humans normally use to guide their actions, so that humans can interact with computer generated objects in a natural way.

The problems come when the normal relationship between the perceptual information and the corresponding action is disrupted. One way of thinking about such disruption is that a mismatch between perception and action causes ‘surprise’. It turns out that surprise is really important for human learning and the human brain appears to be engineered to minimise surprise.

This means that the challenge for the designers of virtual reality is that they must create systems that minimise the surprise experienced by the user when using computer generated information to control their actions.
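"Surprise" here can be read in its information-theoretic sense (my gloss on the authors' wording, borrowed from the predictive-processing literature): the surprisal of a sensory outcome o under the brain's internal model m, which adaptation works to keep small:

```latex
% surprisal of outcome o under internal model m
S(o) = -\ln p(o \mid m)
```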

Of course, one of the advantages of virtual reality is that the computer can create new and wonderful worlds. For example, a completely novel fruit — perhaps an elppa — could be shown hanging from a virtual tree. The elppa might have a completely different texture and appearance to any other previously encountered fruit — but it’s important that the information used to specify the location and size of the elppa allows the virtual reality user to guide their hand to the virtual object in a normal way.

If there is a mismatch between the visual information and the hand movements then ‘surprise’ will result, and the human brain will need to adapt if future interactions between vision and action are to maintain their accuracy. The issue is that the process of adaptation may cause difficulties — and these difficulties might be particularly problematic for children as their brains are not fully developed. 

This issue affects all forms of information presented within a virtual world (so hearing and touch as well as vision), and all of the different motor systems (so postural control as well as arm movement systems). One good example of the problems that can arise can be seen through the way our eyes react to movement.

In 1993, we showed that virtual reality systems had a fundamental design flaw when they attempted to show three-dimensional visual information. This is because the systems produce a mismatch between where the eyes need to focus and where the eyes need to point. In everyday life, if we change our focus from something close to something far away, our eyes will need to change focus and alter where they are pointing.

The change in focus is necessary to prevent blur and the change in eye direction is necessary to stop double images. In reality, the changes in focus and direction are physically linked (a change in fixation distance causes a change in the focus of the images and in where they fall at the back of the eyes).
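The geometry behind that linkage is simple to write down. For a target at distance d, the accommodation demand in diopters and the vergence angle for interpupillary distance p are, respectively:

```latex
A = \frac{1}{d}, \qquad
\theta = 2\arctan\!\left(\frac{p}{2d}\right)
```

A stereoscopic headset fixes accommodation at the optical distance of its display while disparity drives vergence toward a virtual distance; whenever the two distances differ, the normally locked signals conflict (this framing is standard optics, not a quotation from the authors).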

Sunday, September 17, 2017

Artificial Intelligence is Lesbian


thenewyorker |  “The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, told the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy. The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.
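The "shown two photos" figure is a pairwise (two-alternative) accuracy, which is the same quantity as the area under the ROC curve: the probability that the model scores a randomly chosen gay man's photo above a randomly chosen straight man's. A brute-force sketch with made-up classifier scores:

```python
def pairwise_accuracy(pos_scores, neg_scores):
    """Fraction of (pos, neg) pairs ranked correctly; ties count half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

scores_gay      = [0.9, 0.7, 0.65, 0.55]   # hypothetical model outputs
scores_straight = [0.8, 0.5, 0.3, 0.2]
print(f"{pairwise_accuracy(scores_gay, scores_straight):.2f}")  # -> 0.81
```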

The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote. True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”

Thursday, August 31, 2017

Toward Ubiquitous Robotic Organisms


bbvaopenmind |  Nature has always found ways to exploit and adapt to differences in environmental conditions. Through evolutionary adaptation a myriad of organisms have developed that operate and thrive in diverse and often extreme conditions. For example, the tardigrade (Schokraie et al., 2012) is able to survive pressures greater than those found in the deepest oceans and in space, can withstand temperatures from 1K (-272 °C) to 420K (150 °C), and can go without food for thirty years. Organisms often operate in symbiosis with others. The average human, for example, has about 30 trillion cells, but contains about 40 trillion bacteria (Sender et al., 2016). They cover scales from the smallest free-living bacteria, Pelagibacter ubique, at around 0.5µm long, to the blue whale at around thirty meters long. That is a length range of 7 orders of magnitude and approximately 15 orders of magnitude in volume! What these astonishing facts show is that if nature can use the same biological building blocks (DNA, amino acids, etc.) for such an amazing range of organisms, we too can use our robotic building blocks to cover a much wider range of environments and applications than we currently do. In this way we may be able to match the ubiquity of natural organisms.

To achieve robotic ubiquity requires us not only to study and replicate the feats of nature but to go beyond them with faster (certainly faster than evolutionary timescales!) development and more general and adaptable technologies. Another way to think of future robots is as artificial organisms. Instead of a conventional robot which can be decomposed into mechanical, electrical, and computational domains, we can think of a robot in terms of its biological counterpart and having three core components: a body, a brain, and a stomach. In biological organisms, energy is converted in the stomach and distributed around the body to feed the muscles and the brain, which in turn controls the organisms. There is thus a functional equivalence between the robot organism and the natural organism: the brain is equivalent to the computer or control system; the body is equivalent to the mechanical structure of the robot; and the stomach is equivalent to the power source of the robot, be it battery, solar cell, or any other power source. The benefit of the artificial organism paradigm is that we are encouraged to exploit, and go beyond, all the characteristics of biological organisms. These embrace qualities largely unaddressed by current robotics research, including operation in varied and harsh conditions, benign environmental integration, reproduction, death, and decomposition. All of these are essential to the development of ubiquitous robotic organisms.

The realization of this goal is only achievable by concerted research in the areas of smart materials, synthetic biology, artificial intelligence, and adaptation. Here we will focus on the development of novel smart materials for robotics, but we will also see how materials development cannot occur in isolation of the other much-needed research areas.

IoT Extended Sensoria


bbvaopenmind |  In George Orwell’s 1984,(39) it was the totalitarian Big Brother government who put the surveillance cameras on every television—but in the reality of 2016, it is consumer electronics companies who build cameras into the common set-top box and every mobile handheld. Indeed, cameras are becoming a commodity, and as video feature extraction gets to lower power levels via dedicated hardware, and other micropower sensors determine the necessity of grabbing an image frame, cameras will become even more common as generically embedded sensors. The first commercial, fully integrated CMOS camera chips came from VVL in Edinburgh (now part of ST Microelectronics) back in the early 1990s.(40) At the time, pixel density was low (e.g., the VVL “Peach” with 312 x 287 pixels), and the main commercial application of their devices was the “BarbieCam,” a toy video camera sold by Mattel. I was an early adopter of these digital cameras myself, using them in 1994 for a multi-camera precision alignment system at the Superconducting Supercollider(41) that evolved into the hardware used to continually align the forty-meter muon system at micron-level precision for the ATLAS detector at CERN’s Large Hadron Collider.

This technology was poised for rapid growth: now, integrated cameras peek at us everywhere, from laptops to cellphones, with typical resolutions of scores of megapixels and bringing computational photography increasingly to the masses. ASICs for basic image processing are commonly embedded with or integrated into cameras, giving increasing video processing capability for ever-decreasing power. The mobile phone market has been driving this effort, but increasingly static situated installations (e.g., video-driven motion/context/gesture sensors in smart homes) and augmented reality will be an important consumer application, and the requisite on-device image processing will drop in power and become more agile. We already see this happening at extreme levels, such as with the recently released Microsoft HoloLens, which features six cameras, most of which are used for rapid environment mapping, position tracking, and image registration in a lightweight, battery-powered, head-mounted, self-contained AR unit.

3D cameras are also becoming ubiquitous, breaking into the mass market via the original structured-light-based Microsoft Kinect a half-decade ago. Time-of-flight 3D cameras (pioneered in CMOS in the early 2000s by researchers at Canesta(42)) have evolved to recently displace structured light approaches, and developers worldwide race to bring the power and footprint of these devices down sufficiently to integrate into common mobile devices (a very small version of such a device is already embedded in the HoloLens). As pixel timing measurements become more precise, photon-counting applications in computational photography, as pursued by my Media Lab colleague Ramesh Raskar, promise to usher in revolutionary new applications that can do things like reduce diffusion and see around corners.(43)

My research group began exploring this penetration of ubiquitous cameras over a decade ago, especially applications that ground the video information with simultaneous data from wearable sensors. Our early studies were based around a platform called the “Portals”:(44) using an embedded camera feeding a TI DaVinci DSP/ARM hybrid processor, surrounded by a core of basic sensors (motion, audio, temperature/humidity, IR proximity) and coupled with a Zigbee RF transceiver, we scattered forty-five of these devices all over the Media Lab complex, interconnected through the wired building network. One application that we built atop them was “SPINNER,”(45) which labelled video from each camera with data from any wearable sensors in the vicinity. The SPINNER framework was based on the idea of being able to query the video database with higher-level parameters, lifting sensor data up into a social/affective space,(46) then trying to effectively script a sequential query as a simple narrative involving human subjects adorned with the wearables. Video clips from large databases sporting hundreds of hours of video would then be automatically selected to best fit given timeslots in the query, producing edited videos that observers deemed coherent.(47) Naively pointing to the future of reality television, this work aims further, looking to enable people to engage sensor systems via human-relevant query and interaction.
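The labelling step SPINNER depends on is essentially a time-window join between clips and wearable readings. A sketch of that step (the field names and five-second window are invented for illustration):

```python
def label_clips(clips, sensor_events, window_s=5.0):
    """Attach each wearable reading to clips from the same place and time."""
    labelled = []
    for clip in clips:
        tags = [
            ev["label"]
            for ev in sensor_events
            if ev["location"] == clip["location"]
            and abs(ev["t"] - clip["t"]) <= window_s
        ]
        labelled.append({**clip, "tags": tags})
    return labelled

clips  = [{"camera": 12, "location": "atrium", "t": 100.0},
          {"camera": 30, "location": "cafe",   "t": 104.0}]
events = [{"location": "atrium", "t": 102.5, "label": "high-motion"},
          {"location": "cafe",   "t": 180.0, "label": "talking"}]

for c in label_clips(clips, events):
    print(c)  # the atrium clip is tagged; the cafe event falls outside the window
```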

Rather than try to extract stories from passive ambient activity, a related project from our team devised an interactive camera with a goal of extracting structured stories from people.(48) Taking the form factor of a small mobile robot, “Boxie” featured an HD camera in one of its eyes: it would rove our building and get stuck, then plea for help when people came nearby. It would then ask people successive questions and request that they fulfill various tasks (e.g., bring it to another part of the building, or show it what they do in the area where it was found), making an indexed video that can be easily edited to produce something of a documentary about the people in the robot’s abode.
In the next years, as large video surfaces cost less (potentially being roll-to-roll printed) and are better integrated with responsive networks, we will see the common deployment of pervasive interactive displays. Information coming to us will manifest in the most appropriate fashion (e.g., in your smart eyeglasses or on a nearby display)—the days of pulling your phone out of your pocket and running an app are numbered. To explore this, we ran a project in my team called “Gestures Everywhere”(49) that exploited the large monitors placed all over the public areas of our building complex.(50) Already equipped with RFID to identify people wearing tagged badges, we added a sensor suite and a Kinect 3D camera to each display site. As an occupant approached a display and was identified via RFID or video recognition, information most relevant to them would appear on the display. We developed a recognition framework for the Kinect that parsed a small set of generic hand gestures (e.g., signifying “next,” “more detail,” “go-away,” etc.), allowing users to interact with their own data at a basic level without touching the screen or pulling out a mobile device. Indeed, proxemic interactions(51) around ubiquitous smart displays will be common within the next decade.
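The display-side logic of such a system reduces to identify-then-dispatch: work out who is standing in front of the screen, then route a small vocabulary of generic gestures to actions on that person's data. A hedged sketch, with the gesture names, event format, and session object all invented for illustration:

```python
class DisplaySession:
    """State for the identified occupant standing in front of one display."""
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.page = 0

    def advance(self):
        self.page += 1

    def expand(self):
        print(f"showing more detail for {self.user_id}")

    def dismiss(self):
        print(f"hiding {self.user_id}'s data")

# Map each recognized gesture to an action on the current session.
ACTIONS = {
    "next": DisplaySession.advance,
    "more_detail": DisplaySession.expand,
    "go_away": DisplaySession.dismiss,
}

def on_gesture(session: DisplaySession, gesture: str) -> None:
    """Dispatch a gesture event from the recognizer; unknown gestures are ignored."""
    handler = ACTIONS.get(gesture)
    if handler is not None:
        handler(session)

session = DisplaySession("badge-42")   # identified via RFID or face recognition
on_gesture(session, "more_detail")
```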

The plethora of cameras that we sprinkled throughout our building during our SPINNER project produced concerns about privacy (interestingly enough, the Kinects for Gestures Everywhere did not evoke the same response—occupants either did not see them as “cameras” or were becoming used to the idea of ubiquitous vision). Accordingly, we put an obvious power switch on each Portal that enabled it to be easily switched off. This is a very artificial solution, however—in the near future, there will simply be too many cameras and other invasive sensors in the environment to switch off. These devices must answer verifiable and secure protocols that dynamically and appropriately throttle streaming sensor data to meet user privacy demands. To study solutions to such concerns, we designed a small wireless token that controlled our Portals.(52) It broadcasts a beacon to the vicinity that dynamically deactivates the transmission of proximate audio, video, and other derived features according to the user’s stated privacy preferences; the device also features a large “panic” button that can be pushed at any time when immediate privacy is desired, blocking audio and video from emanating from nearby Portals.
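The token's behavior can be sketched as a small message protocol: broadcast the wearer's preferences, let each portal subtract the blocked streams from what it transmits, and let a panic flag suppress everything. The message fields below are assumptions for illustration, not the actual Portal protocol:

```python
import json
import time

def make_beacon(user_id: str, block: set, panic: bool = False) -> bytes:
    """Encode the wearer's privacy preferences as a broadcast beacon."""
    return json.dumps({
        "user": user_id,
        "block": sorted(block),   # e.g. {"audio", "video"}
        "panic": panic,           # panic suppresses everything nearby
        "ts": time.time(),        # lets portals ignore stale beacons
    }).encode()

def allowed_streams(portal_streams: set, beacon: bytes) -> set:
    """What a portal may still transmit while this beacon is in range."""
    msg = json.loads(beacon)
    if msg["panic"]:
        return set()
    return portal_streams - set(msg["block"])

print(allowed_streams({"audio", "video", "motion"},
                      make_beacon("badge-17", {"video"})))
# -> {'audio', 'motion'} (set ordering may vary)
```

A real deployment would of course need the verifiability mentioned above: signed beacons and attested portal firmware, so the wearer can trust that suppression actually happened.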

Rather than block the video stream entirely, we have explored just removing the privacy-desiring person from the video image. By using information from wearable sensors, we can more easily identify the appropriate person in the image,(53) and blend them into the background. We are also looking at the opposite issue—using wearable sensors to detect environmental parameters that hint at potentially hazardous conditions for construction workers and rendering that data in different ways atop real-time video, highlighting workers in situations of particular concern.(54)
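One plausible way to implement such redaction, sketched under the assumption that the wearable-matched person arrives as a simple bounding box: maintain a slowly adapting background estimate and paste it over the person's region. Real matting would use a proper segmentation mask rather than a box:

```python
import numpy as np

class Redactor:
    """Keep a running background estimate; paste it over the identified person."""
    def __init__(self, frame_shape, alpha: float = 0.05):
        self.background = np.zeros(frame_shape, dtype=np.float32)
        self.alpha = alpha  # adaptation rate of the background model

    def observe(self, frame: np.ndarray) -> None:
        """Fold the current frame into the running background estimate."""
        self.background = (1 - self.alpha) * self.background + self.alpha * frame

    def redact(self, frame: np.ndarray, box) -> np.ndarray:
        """Replace the boxed region with the background estimate."""
        x0, y0, x1, y1 = box
        out = frame.astype(np.float32).copy()
        out[y0:y1, x0:x1] = self.background[y0:y1, x0:x1]
        return out.astype(frame.dtype)
```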

Wednesday, August 30, 2017

The Weaponization of Artificial Intelligence


acq |  Recognizing that no machine—and no person—is truly autonomous in the strict sense of the word, we will sometimes speak of autonomous capabilities rather than autonomous systems.2 The primary intellectual foundation for autonomy stems from artificial intelligence (AI), the capability of computer systems to perform tasks that normally require human intelligence (e.g., perception, conversation, decisionmaking).

Advances in AI are making it possible to cede to machines many tasks long regarded as impossible for machines to perform. Intelligent systems aim to apply AI to a particular problem or domain—the implication being that the system is programmed or trained to operate within the bounds of a defined knowledge base. Autonomous function occurs at the system level rather than the component level. The study considered two categories of intelligent systems: those employing autonomy at rest and those employing autonomy in motion. In broad terms, systems incorporating autonomy at rest operate virtually, in software, and include planning and expert advisory systems, whereas systems incorporating autonomy in motion have a presence in the physical world and include robotics and autonomous vehicles.

As illustrated in Figure 1, many DoD and commercial systems are already operating with varying kinds of autonomous capability. Robotics typically adds additional kinds of sensors, actuators, and mobility to intelligent systems. While early robots were largely automated, recent advances in AI are enabling increases in autonomous functionality.

One of the less well-known ways that autonomy is changing the world is in applications that include data compilation, data analysis, web search, recommendation engines, and forecasting. Given the limitations of human abilities to rapidly process the vast amounts of data available today, autonomous systems are now required to find trends and analyze patterns. There is no need to solve the long-term AI problem of general intelligence in order to build high-value applications that exploit limited-scope autonomous capabilities dedicated to specific purposes. DoD’s nascent Memex program is one of many examples in this category.3
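A trivial example of such a limited-scope capability: a stream analyzer that autonomously flags readings far outside the recent norm, with no general intelligence anywhere in sight. A minimal sketch; the window and threshold values are arbitrary:

```python
from collections import deque
import statistics

def flag_anomalies(stream, window: int = 50, z_threshold: float = 3.0):
    """Yield (index, value) for readings far outside the recent norm."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) >= 10:  # wait for a minimal baseline
            mu = statistics.fmean(recent)
            sigma = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
            if abs(x - mu) / sigma > z_threshold:
                yield i, x
        recent.append(x)

readings = [10.0, 10.1, 9.9, 10.2, 9.8] * 6 + [55.0, 10.0]
print(list(flag_anomalies(readings)))  # -> [(30, 55.0)]
```

Narrow as it is, this is exactly the pattern behind much of the data-compilation and forecasting autonomy described above: a special-purpose rule applied tirelessly at a scale no human analyst could match.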

Rapid global market expansion for robotics and other intelligent systems to address consumer and industrial applications is stimulating increasing commercial investment and delivering a diverse array of products. At the same time, autonomy is being embedded in a growing array of software systems to enhance speed and consistency of decision-making, among other benefits. Likewise, governmental entities, motivated by economic development opportunities in addition to security missions and other public sector applications, are investing in related basic and applied research.

Applications include commercial endeavors, such as IBM’s Watson, the use of robotics in ports and mines worldwide, autonomous vehicles (from autopilot drones to self-driving cars), automated logistics and supply chain management, and many more. Japanese and U.S. companies invested more than $2 billion in autonomous systems in 2014, led by Apple, Facebook, Google, Hitachi, IBM, Intel, LinkedIn, NEC, Yahoo, and Twitter.4

A vibrant startup ecosystem is spawning advances in response to commercial market opportunities; innovations are occurring globally, as illustrated in Figure 2 (top). Startups are targeting opportunities that drive advances in critical underlying technologies. As illustrated in Figure 2 (bottom), machine learning—both application-specific and general purpose—is of high interest. The market-pull for machine learning stems from a diverse array of applications across an equally diverse spectrum of industries, as illustrated in Figure 3.


Friday, August 04, 2017

The Search for Extraterrestrial Life and Post-Biological Intelligence


SETI |  Is it time to re-think ET?

For well over a half-century, a small number of scientists have conducted searches for artificially produced signals that would indicate the presence of intelligence elsewhere in the cosmos. This effort, known as SETI (Search for Extraterrestrial Intelligence), has yet to find any confirmed radio transmissions or pulsing lasers from other beings. But the hunt continues, recently buoyed by the discovery of thousands of exoplanets. For many, the abundance of habitable real estate makes it difficult to believe that Earth is the only world where life and intelligence have arisen.

SETI practitioners mostly busy themselves with refining their equipment and their lists of target solar systems. They seldom consider the nature of their prey – what form extraterrestrial intelligence might take. Their premise is that any technically sophisticated species will eventually develop signaling technology, irrespective of their biology or physiognomy.

This view may not seem anthropocentric, for it makes no overt assumptions about the biochemistry of extraterrestrials; only that intelligence will arise on at least some worlds with life. However, the trajectory of our own technology now suggests that within a century or two of our development of radio transmitters and lasers, we are likely to build machines with artificial, generalized intelligence. We are engineering our successors, and the next intelligent species on Earth is not only certain to dwarf our own cognitive abilities, but will be able to engineer its own, superior descendants by design, rather than counting on uncertain, Darwinian processes. Assuming that something similar happens to other technological societies, then the implications for SETI are profound.

In September 2015, the John Templeton Foundation’s Humble Approach Initiative sponsored a three-day symposium entitled “Exploring Exoplanets: The Search for Extraterrestrial Life and Post-Biological Intelligence.” The venue for the meeting was the Royal Society’s Chicheley Hall, north of London, where a dozen researchers gave informal presentations and engaged in the type of lively dinner-table conversations that such meetings inevitably spawn.

The subject matter was broad, ranging from the multi-pronged search for habitable planets and how we might detect life, to the impact of both the search and an eventual discovery. However, the matter of post-biological intelligence – briefly described above – and the possibility of non-Darwinian evolutionary processes were an impetus for many of the symposium contributions.

We present here short write-ups of seven of these talks. They are more than simply interesting: they suggest a revolution in how we should think about, and search for, our intellectual peers. Indeed, they suggest that “peers” may be too generous to Homo sapiens. As these essays argue, the majority of the cognitive capability in the cosmos may be far beyond our own.
-- Seth Shostak

This symposium was chaired by Martin J. Rees, OM, Kt, FRS and Paul C.W. Davies, AM, and organized by Mary Ann Meyers, JTF’s Senior Fellow. Also present was B. Ashley Zauderer, Assistant Director of Math and Physical Sciences at the Templeton Foundation.

What Do You Think About Machines That Think?


netzpolitik |  Artificial intelligence is one of the topics that science, and now society as well, has been discussing, researching, and arguing about for decades. But the debate begins with the term itself: would "designed intelligence" be a better name? For unlike intelligence in humans, an "intelligent" computer program has been deliberately designed and created in a particular form. This is one of the suggestions found in a book by John Brockman, as stimulating as it is entertaining, which is now available in German: "What Should We Think About Artificial Intelligence?"

Artificial intelligence (AI) began as a scientific research field that sought to use computer technology to imitate human abilities in software: learning, understanding, acting. Research has now been under way for more than sixty years. The literature professor Thomas A. Bass writes in his contribution "More Funk, More Soul, More Poetry and Art":
We have numerous problems to tackle and solutions to find. [...] We need more artist-programmers and artistic programming. It is time for our mind machines to grow out of an adolescence that has lasted sixty years. (Thomas A. Bass, p. 552)
This "youth period" is certainly over. Because since the new millennium, the academic questions, which were usually only academic, have become interesting for many more people simply because they come into contact with AI in everyday life. They help with the information search, the navigation and now also with the creative cooking .
 
The most tangible example is so-called Natural Language Processing (NLP), the processing of human language by software. Of course, today's computers do not "understand" what people have said; they possess no such "human-level competence" (Rodney Brooks, p. 152), but they can process spoken words in meaningful ways.
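That distinction, processing without understanding, is easy to illustrate: a program can map utterances to useful actions by pure pattern matching, with no model of meaning at all. The intents and phrasings below are invented for the sketch:

```python
import re

# Each "intent" is just a regular expression; a match implies no
# understanding of meaning whatsoever.
INTENTS = {
    "navigation": re.compile(r"\b(?:route|directions|navigate) to (?P<place>.+)", re.I),
    "search": re.compile(r"\b(?:search|look) (?:for|up) (?P<query>.+)", re.I),
}

def parse(utterance: str):
    """Map a transcript to (intent, slots), or ('unknown', {})."""
    for intent, pattern in INTENTS.items():
        m = pattern.search(utterance)
        if m:
            return intent, m.groupdict()
    return "unknown", {}

print(parse("Please navigate to the station"))
# -> ('navigation', {'place': 'the station'})
```

Production NLP replaces the regular expressions with statistical models, but the point stands: useful language behavior does not require human-level comprehension.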
 
Brockman's book provides a unique and multi-faceted view of the field of AI: as its editor, he has assembled more than one hundred and eighty authors who illuminate every conceivable aspect of the subject. He poses the fundamental question: what should we think about artificial intelligence? And the authors answer it in very different ways, each in a brief essay of only two to four pages.
In the book's introduction, Brockman adds further questions:
Shouldn't we ask what thinking machines might think about? Will they want, and expect, civil rights? Will they have consciousness? What kind of government would an AI choose for us? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we and the AIs include one another in our respective circles of empathy?
Even this short list of questions makes clear what Brockman's book is after: not only the issues pressing on us today, but also questions that reach much further.
