ScienceDirect | Genome sequencing has revealed that signal transduction in bacteria makes use of a limited number of different devices, such as two-component systems, LuxI–LuxR quorum-sensing systems, phosphodiesterases, Ser-Thr (serine-threonine) kinases, OmpR-type regulators, and sigma factor–anti-sigma factor pathways. These systems use modular proteins with a large variety of input and output domains, yet strikingly conserved transmission domains. This conservation might lead to redundancy of output function, for example, via crosstalk (i.e. phosphoryl transfer from a non-cognate sensory kinase). The number of similar devices in a single cell, particularly of the two-component type, might amount to several dozen, and most of these operate in parallel. This could bestow bacteria with cellular intelligence if the network of two-component systems in a single cell fulfils the requirements of a neural network. Testing these ideas poses a great challenge for prokaryotic systems biology.
WorldScience | To properly assess whether bacterial signals constitute intelligence, whether of a social or an individual kind, Hellingwerf and other researchers work from the inside out.
Rather than focusing on behaviors, which are open to differing interpretations, they focus on the systems of interactions among the molecules. These systems, it is hoped, have distinct properties that can be measured and compared against similar interactions in known intelligent beings.
For instance, if these bacterial systems operate similarly to networks in the brain, it would provide a weighty piece of evidence in favor of bacterial intelligence.
Hellingwerf has set himself a more modest goal, comparing bacterial signaling not to the brain, but to brain-like, human-made neural network devices. Such an effort has a simple motivation. Demonstrating that bacterial signaling possesses every important feature of neural networks would suggest, at the least, that microbial capabilities rival those of devices with a proven ability to tackle simple problems using known rules of brain function, rather than the very different, robot-like calculations of ordinary computers.
To understand how one could do such a comparison requires a brief explanation of how neural networks work, and how they differ from traditional computers.
Computers are good at following precise instructions, but terrible at even simple, common-sense tasks that lack definite rules, such as recognizing the difference between male and female.
Neural networks, like humans, can do this because they are more flexible, and they learn—even though they can be built using computers. They are a set of simulated “brain cells” set to pass “signals” among themselves through simulated “connections.”
Some information that can be represented as a set of numbers, such as a digitized photograph, is fed to a first set of "cells" in such a way that each cell gets a number. Each cell is then set to "transmit" all, part or none of that number to one or more other cells. How big a portion of the number is passed on to each depends on the simulated "strength" of the connections programmed into the system.
Each of those cells, in turn, is set to do something with the numbers it receives, such as adding or averaging them, and then to transmit all or part of the result to yet another cell.
Numbers ricochet through the system this way until they arrive at a final set of “output” cells. These cells are set to give out a final answer—based on the numbers in them—in the form of yet another number. For example, the answer could be 0 for male, 1 for female.
Such a system, when new, will give random answers, because the connections are initially set at random. However, after each attempt at the problem, a human “tells” the system whether it was right or wrong. The system is designed to then change the strength of the connections to improve the answer for the next try.
To do this, the system calculates how much a change in the strength of each connection contributed to the previous right or wrong answer. This information tells the system how to adjust the strengths to give better results. Over many attempts, the system's accuracy gradually improves, often reaching nearly human-like performance on a given task.
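The training loop described above can be sketched as a tiny simulated network. This is an illustrative toy, not Hellingwerf's model or any real bacterial system: the network size, the logical-OR task, the learning rate, and the number of training attempts are all arbitrary choices made for the sketch.

```python
import math
import random

random.seed(0)  # fixed seed so the random starting strengths are reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: given two input numbers (0 or 1), answer 1 if either is 1.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Two input numbers feed two hidden "cells", which feed one "output" cell.
# Connection "strengths" (weights, plus a bias per cell) start out random,
# so the new system's early answers are essentially random.
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    # Each hidden cell combines its weighted inputs and passes the result on.
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

lr = 0.5  # how strongly each wrong answer changes the connection strengths
for _ in range(5000):  # many attempts at the problem
    for x, target in data:
        h, y = forward(x)
        # Work out how much each connection contributed to the error...
        d_out = (y - target) * y * (1 - y)
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(2)]
        # ...and nudge each strength to give a better answer next time.
        for i in range(2):
            w_out[i] -= lr * d_out * h[i]
        w_out[2] -= lr * d_out
        for i in range(2):
            w_hid[i][0] -= lr * d_hid[i] * x[0]
            w_hid[i][1] -= lr * d_hid[i] * x[1]
            w_hid[i][2] -= lr * d_hid[i]

# After training, the rounded outputs match the task.
predictions = [round(forward(x)[1]) for x, _ in data]
print(predictions)
```

The key point is that nothing in the code "knows" the rule in advance: the correct behavior emerges purely from repeated adjustment of connection strengths after each graded attempt.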
Not only do such systems work quite well for simple problems; many researchers believe they capture all the key features of real brain cells, though in a drastically simplified way.
The devices also have similarities to the messaging systems in bacteria. But how deep are the resemblances? To answer this, Hellingwerf looked at four properties that neural-network experts have identified as essential for such devices to work. He then examined whether bacterial signaling fits each of the criteria.
The four properties are as follows.
First, a neural network must have multiple sub-systems that work simultaneously, or "in parallel." Neural networks do this, because signals follow multiple pathways at once, in effect carrying out multiple calculations at once. Traditional computers can't do this; they conduct one calculation at a time. Bacteria do fit this standard, though, because they can contain many messaging networks acting simultaneously, Hellingwerf observes.
Second, key components of the network must carry out logical operations. This means, in the case of a neural network, that single elements of the network combine signals from two or more other elements, and pass the result on to a third according to some mathematical rule. Regular computers also have this feature. Bacteria probably do too, Hellingwerf argues, based on the way that parts of their signaling systems add up inputs from different sources.
The third property is “auto-amplification.” This describes the way some network elements can boost the strength of their own interactions. Hellingwerf maintains that bacteria show this property, as when, for example, some of their signaling systems create more copies of themselves as they run.
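Auto-amplification amounts to positive feedback: an element's current activity increases its own future activity. The sketch below is a generic feedback loop with made-up numbers, not a model of any particular signaling pathway; it contrasts a self-amplifying element with one that grows at a fixed rate.

```python
def autoamplify(x0, gain, steps):
    """Positive feedback: each step, the element's own activity x adds
    gain * x to itself, so the signal grows in proportion to its size."""
    x = x0
    trace = [x]
    for _ in range(steps):
        x = x + gain * x
        trace.append(x)
    return trace

def fixed_rate(x0, increment, steps):
    """For contrast: an element that grows by a constant amount per step,
    independent of its own current activity."""
    x = x0
    trace = [x]
    for _ in range(steps):
        x = x + increment
        trace.append(x)
    return trace

amplified = autoamplify(1.0, 0.5, 4)   # grows faster the bigger it gets
steady = fixed_rate(1.0, 0.5, 4)       # grows by the same amount each step
print(amplified, steady)
```

The self-amplifying trace pulls away from the fixed-rate one because its growth feeds on itself, the hallmark of auto-amplification.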
The fourth property is where the rub lies for bacteria. This feature, called crosstalk, means that the system must not consist just of separate chain reactions: rather, different chain reactions have to connect, so that the way one operates can change the way another runs.
Crosstalk is believed to underlie an important form of memory called associative memory, the ability to mentally connect two things with no obvious relationship. A famous example is the Russian scientist Ivan Pavlov's dog, which drooled at the ringing of a bell because experience had taught it that food invariably followed the sound.
Crosstalk has been found many times in bacteria, Hellingwerf wrote, but the crosstalk "signals" are hundreds or thousands of times weaker than those that follow the main tracks of the chain reactions. Moreover, "clear demonstrations of associative memory have not yet been detected in any single bacterial cell," he added, and this is an area ripe for further research. If bacteria can indeed communicate, it seems they may be holding quite a bit back from us.