Showing posts with label computationalism.

Tuesday, May 17, 2016

more material than materialism...,


NYTimes |  Those who make the Very Large Mistake (of thinking they know enough about the nature of the physical to know that consciousness can’t be physical) tend to split into two groups. Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become “dualists.” Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become “eliminativists.”

This amazing phenomenon (the denial of the existence of consciousness) is a subject for another time. The present point — it’s worth repeating many times — is that no one has to react in either of these ways. All they have to do is grasp the fundamental respect in which we don’t know the intrinsic nature of physical stuff in spite of all that physics tells us. In particular, we don’t know anything about the physical that gives us good reason to think that consciousness can’t be wholly physical. It’s worth adding that one can fully accept this even if one is unwilling to agree with Russell that in having conscious experience we thereby know something about the intrinsic nature of physical reality.

So the hard problem is the problem of matter (physical stuff in general). If physics made any claim that couldn’t be squared with the fact that our conscious experience is brain activity, then I believe that claim would be false. But physics doesn’t do any such thing. It’s not the physics picture of matter that’s the problem; it’s the ordinary everyday picture of matter. It’s ironic that the people who are most likely to doubt or deny the existence of consciousness (on the ground that everything is physical, and that consciousness can’t possibly be physical) are also those who are most insistent on the primacy of science, because it is precisely science that makes the key point shine most brightly: the point that there is a fundamental respect in which the ultimate intrinsic nature of the stuff of the universe is unknown to us — except insofar as it is consciousness.

Thursday, March 10, 2016

deepmind stays winning...,


NYTimes |  Computer, one. Human, zero.

A Google computer program stunned one of the world’s top players on Wednesday in a round of Go, which is believed to be the most complex board game ever created.
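
One standard way to make "most complex" concrete (textbook ballpark figures, not numbers from the article): Go offers roughly 250 legal moves per position over games of about 150 moves, so its game tree holds on the order of 250^150 ≈ 10^360 sequences, versus roughly 10^120 for chess and about 10^80 atoms in the observable universe. Brute-force search is hopeless at that scale, which is why AlphaGo's neural-network approach drew such attention.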

The match — between Google DeepMind’s AlphaGo and the South Korean Go master Lee Se-dol — was viewed as an important test of how far research into artificial intelligence has come in its quest to create machines smarter than humans.

“I am very surprised because I have never thought I would lose,” Mr. Lee said at a news conference in Seoul. “I didn’t know that AlphaGo would play such a perfect Go.”

Mr. Lee acknowledged defeat after three and a half hours of play.

Demis Hassabis, the founder and chief executive of Google’s artificial intelligence team DeepMind, the creator of AlphaGo, called the program’s victory a “historic moment.”

most theories of consciousness are worse than wrong...,


theatlantic |  According to medieval medicine, laziness is caused by a build-up of phlegm in the body. The reason? Phlegm is a viscous substance. Its oozing motion is analogous to a sluggish disposition.

The phlegm theory has more problems than just a few factual errors. After all, suppose you had a beaker of phlegm and injected it into a person. What exactly is the mechanism that leads to a lazy personality? The proposal resonates seductively with our intuitions and biases, but it doesn’t explain anything.

In the modern age we can chuckle over medieval naivetĂ©, but we often suffer from similar conceptual confusions. We have our share of phlegm theories, which flatter our intuitions while explaining nothing. They’re compelling, they often convince, but at a deeper level they’re empty.

One corner of science where phlegm theories proliferate is the cognitive neuroscience of consciousness. The brain is a machine that processes information, yet somehow we also have a conscious experience of at least some of that information. How is that possible? What is subjective experience? It’s one of the most important questions in science, possibly the most important, the deepest way of asking: What are we? Yet many of the current proposals, even some that are deep and subtle, are phlegm theories.

Sunday, October 04, 2015

hyparchic folding and reality mechanics at the microcosmic scale


pbs |  Zeno’s paradox is solved, but the question of whether there is a smallest unit of length hasn’t gone away. Today, some physicists think that the existence of an absolute minimum length could help avoid another kind of logical nonsense: the infinities that arise when physicists attempt a quantum version of Einstein’s General Relativity, that is, a theory of “quantum gravity.” When physicists attempted to calculate probabilities in the new theory, the integrals just returned infinity, a result that couldn’t be more useless. In this case, the infinities were not mistakes but demonstrably a consequence of applying the rules of quantum theory to gravity. But by positing a smallest unit of length, just as Zeno did, theorists can reduce the infinities to manageable finite numbers. And one way to get a finite length is to chop up space and time into chunks, thereby making it discrete: Zeno would be pleased.
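
For a sense of the scale involved (a standard dimensional-analysis estimate, not a figure from the excerpt), the minimal length usually invoked here is the Planck length, built from Planck's constant, Newton's constant, and the speed of light:

\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \text{m}

Near this scale, quantum fluctuations of geometry are expected to swamp any classical picture of smooth space-time.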

He would also be confused. While almost all approaches to quantum gravity bring in a minimal length one way or the other, not all approaches do so by means of “discretization”—that is, by “chunking” space and time. In some theories of quantum gravity, the minimal length emerges from a “resolution limit,” without the need for discreteness. Think of studying samples with a microscope, for example. Magnify too much, and you encounter a resolution limit beyond which images remain blurry. And if you zoom into a digital photo, you eventually see single pixels: further zooming will not reveal any more detail. In both cases there is a limit to resolution, but only in the latter case is it due to discretization.

In these examples the limits could be overcome with better imaging technology; they are not fundamental. But a resolution limit due to quantum behavior of space-time would be fundamental. It could not be overcome with better technology.

So a resolution limit seems necessary to avoid the problem with infinities in the development of quantum gravity. But does space-time remain smooth and continuous even on the shortest distance scales, or does it become coarse and grainy? Researchers cannot agree.

Monday, September 28, 2015

CRISPieR



MIT News | A team including the scientist who first harnessed the CRISPR-Cas9 system for mammalian genome editing has now identified a different CRISPR system with the potential for even simpler and more precise genome engineering.

In a study published today in Cell, Feng Zhang and his colleagues at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT, with co-authors Eugene Koonin at the National Institutes of Health, Aviv Regev of the Broad Institute and the MIT Department of Biology, and John van der Oost at Wageningen University, describe the unexpected biological features of this new system and demonstrate that it can be engineered to edit the genomes of human cells.

“This has dramatic potential to advance genetic engineering,” says Eric Lander, director of the Broad Institute. “The paper not only reveals the function of a previously uncharacterized CRISPR system, but also shows that Cpf1 can be harnessed for human genome editing and has remarkable and powerful features. The Cpf1 system represents a new generation of genome editing technology.”

CRISPR sequences were first described in 1987, and their natural biological function was initially described in 2010 and 2011. The application of the CRISPR-Cas9 system for mammalian genome editing was first reported in 2013, by Zhang and separately by George Church at Harvard University.

In the new study, Zhang and his collaborators searched through hundreds of CRISPR systems in different types of bacteria, looking for enzymes with useful properties that could be engineered for use in human cells. Two promising candidates were the Cpf1 enzymes from the bacterial species Acidaminococcus and Lachnospiraceae, which Zhang and his colleagues then showed can target genomic loci in human cells.

“We were thrilled to discover completely different CRISPR enzymes that can be harnessed for advancing research and human health,” says Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering in MIT’s Department of Brain and Cognitive Sciences.

The newly described Cpf1 system differs in several important ways from the previously described Cas9, with significant implications for research and therapeutics, as well as for business and intellectual property:
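
One frequently cited difference can serve as a concrete illustration: Cpf1 recognizes a T-rich PAM (commonly given as TTTV) on the 5′ side of its target, whereas Cas9 requires an NGG PAM on the 3′ side. Below is a minimal Python sketch of scanning a sequence for candidate Cpf1 sites; the demo sequence, function name, and 23-nt spacer length are illustrative assumptions, not code or parameters from the paper:

```python
# Hypothetical illustration: locate candidate Cpf1 (Cas12a) target sites.
# Cpf1 uses a T-rich PAM (here TTTV, V = A/C/G) 5' of the protospacer,
# unlike Cas9's NGG PAM on the 3' side. Forward strand only, for brevity.

import re

PAM = re.compile(r"TTT[ACG]")  # TTTV
SPACER_LEN = 23                # approximate spacer length (assumption)

def cpf1_candidate_sites(seq):
    """Yield (pam_position, protospacer) pairs on the forward strand."""
    seq = seq.upper()
    for m in PAM.finditer(seq):
        proto = seq[m.end():m.end() + SPACER_LEN]
        if len(proto) == SPACER_LEN:  # skip sites truncated by sequence end
            yield m.start(), proto

if __name__ == "__main__":
    demo = "ACGTTTAATGCATGCATGCATGCATGCATGCAGGTTTCAAAAAA"
    for pos, proto in cpf1_candidate_sites(demo):
        print(f"PAM at {pos}: protospacer {proto}")
```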

some folding bits - just because...,


sciencedaily |  A new analysis supports the hypothesis that viruses are living entities that share a long evolutionary history with cells, researchers report. The study offers the first reliable method for tracing viral evolution back to a time when neither viruses nor cells existed in the forms recognized today, the researchers say.

The new findings appear in the journal Science Advances.

Until now, viruses have been difficult to classify, said University of Illinois crop sciences and Carl R. Woese Institute for Genomic Biology professor Gustavo Caetano-Anollés, who led the new analysis with graduate student Arshan Nasir. In its latest report, the International Committee on the Taxonomy of Viruses recognized seven orders of viruses, based on their shapes and sizes, genetic structure and means of reproducing.

"Under this classification, viral families belonging to the same order have likely diverged from a common ancestral virus," the authors wrote. "However, only 26 (of 104) viral families have been assigned to an order, and the evolutionary relationships of most of them remain unclear."

Part of the confusion stems from the abundance and diversity of viruses. Fewer than 4,900 viruses have been identified and sequenced so far, even though scientists estimate there are more than a million viral species. Many viruses are tiny -- significantly smaller than bacteria or other microbes -- and contain only a handful of genes. Others, like the recently discovered mimiviruses, are huge, with genomes bigger than those of some bacteria.

The new study focused on the vast repertoire of protein structures, called "folds," that are encoded in the genomes of all cells and viruses. Folds are the structural building blocks of proteins, giving them their complex, three-dimensional shapes. By comparing fold structures across different branches of the tree of life, researchers can reconstruct the evolutionary histories of the folds and of the organisms whose genomes code for them.
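
To make the fold-comparison idea concrete, here is a minimal sketch (invented data and names, not the study's method or code) that reduces each genome to the set of fold identifiers it encodes and scores repertoire overlap with a Jaccard distance, the kind of pairwise signal a tree reconstruction can be built on:

```python
# Illustrative sketch: compare protein-fold repertoires across genomes.
# Each genome is reduced to the set of fold identifiers it encodes;
# Jaccard distance between sets gives a crude evolutionary signal that
# real analyses would feed into phylogenetic tree-building methods.

from itertools import combinations

def jaccard_distance(a, b):
    """Return 1 - |A & B| / |A | B| for two fold sets (0.0 if both empty)."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

# Hypothetical fold repertoires keyed by genome name (invented data).
repertoires = {
    "virus_A": {"fold1", "fold2", "fold7"},
    "virus_B": {"fold1", "fold7", "fold9"},
    "cell_X":  {"fold1", "fold2", "fold3", "fold4", "fold7"},
}

for g1, g2 in combinations(sorted(repertoires), 2):
    d = jaccard_distance(repertoires[g1], repertoires[g2])
    print(f"{g1} vs {g2}: Jaccard distance {d:.2f}")
```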

Sunday, September 27, 2015

fractally uncompressing terraforming machinery - some bits doing the unfolding, other bits waiting to be unfolded



Wiley | The nature of the role played by mobile elements in host genome evolution is reassessed in light of numerous recent developments in many areas of biology. It is argued that easy popular appellations such as “selfish DNA” and “junk DNA” may be either inaccurate or misleading and that a more enlightened view of the transposable element-host relationship encompasses a continuum from extreme parasitism to mutualism. Transposable elements are potent, broad-spectrum, endogenous mutators that are subject to the influence of chance as well as selection at several levels of biological organization. Of particular interest are transposable element traits that initially evolve neutrally at the host level but are co-opted for new host functions at a later stage of evolution [Emphasis mine, Ed.].

where there is water, there will be a deduplicated and compressed backup fractal terraforming system...,


space |  A giant slab of ice as big as California and Texas combined lurks just beneath the surface of Mars between its equator and north pole, researchers say.

This ice may be the result of snowfall tens of millions of years ago on Mars, scientists added.

Mars is now dry and cold, but lots of evidence suggests that rivers, lakes and seas once covered the planet. Scientists have discovered life virtually wherever there is liquid water on Earth, leading some researchers to believe that life might have evolved on Mars when it was wet, and that life could be there even now, hidden in subterranean aquifers.

The amount of water on Mars has shifted dramatically over the eons because of the Red Planet's unstable obliquity — the degree to which the planet tilts on its axis of rotation. Unlike Earth, Mars does not have a large moon to keep it from wobbling, and so the direction its axis points wanders in a chaotic, unpredictable manner, regularly leading to ice ages.

Although researchers have long known that vast amounts of ice lie trapped in high latitudes around the Martian poles, scientists have recently begun to discover that ice also is hidden in mid-latitudes, and even at low latitudes around the Martian equator.

Learning more about past Martian climates and where its water once was "could help us understand if locations on Mars were once habitable," study lead author Ali Bramson, a planetary scientist at the University of Arizona in Tucson, told Space.com.

To look at ice hidden beneath the Martian surface, Bramson and her colleagues focused on strange craters in a region called Arcadia Planitia. This area lies in the mid-latitudes of Mars, analogous to Earthly latitudes falling between the U.S.-Canadian border and Kansas.

Sunday, May 17, 2015

quantum computing fitna _______________?


WaPo |  There is a race to build quantum computers, and (as far as we know) it isn’t the NSA that is in the lead. Competing are big tech companies such as IBM, Google, and Microsoft; start-ups; defense contractors; and universities. One Canadian start-up says that it has already developed a first version of a quantum computer. A physicist at Delft University of Technology in the Netherlands, Ronald Hanson, told Scientific American that he will be able to make the building blocks of a universal quantum computer in just five years, and a fully functional demonstration machine in a little more than a decade.

These will change the balance of power in business and cyber-warfare. They have profound national-security implications, because they are the technology equivalent of a nuclear weapon.

Let me first explain what a quantum computer is and where we are.

In a classical computer, information is represented in bits, binary digits, each of which can be a 0 or 1. Because they have only two values, long sequences of 0s and 1s are necessary to form a number or to do a calculation. A quantum bit (called a qubit), however, can hold a value of 0 or 1 or both values at the same time — a superposition denoted as “0+1.” The power of a quantum computer increases exponentially with the number of qubits. Rather than doing computations sequentially as classical computers do, quantum computers can solve problems by laying out all of the possibilities simultaneously and measuring the results.
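
A toy calculation makes the exponential growth vivid (an illustrative sketch, not from the article): an n-qubit register is described by 2^n complex amplitudes, so even storing the state of a few dozen qubits overwhelms classical memory.

```python
# Toy state-vector picture of a qubit register (illustrative, not from
# the article). An n-qubit state is a vector of 2**n complex amplitudes;
# an equal superposition puts weight 1/sqrt(2**n) on every basis state.

import numpy as np

def equal_superposition(n_qubits):
    """State vector with equal amplitude on all 2**n basis states."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

print("2-qubit superposition:", equal_superposition(2))  # four 0.5 amplitudes

for n in (1, 2, 10, 30):
    dim = 2 ** n
    mem_gb = dim * 16 / 1e9  # complex128 = 16 bytes per amplitude
    print(f"{n:>2} qubits -> {dim:,} amplitudes (~{mem_gb:.3g} GB to store)")
```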

Imagine being able to open a combination lock by trying every possible number and sequence at the same time. Though the analogy isn’t perfect — because of the complexities in measuring the results of a quantum calculation — it gives you an idea of what is possible.

There are many complexities in building a quantum computer: challenges in finding the best materials from which to generate entangled photon pairs; new types of logic gates and their fabrication on computer chips; creation and control of qubits; designs for storage mechanisms; and error detection. But breakthroughs are being announced every month. IBM, for example, has just announced that it has found a new way to detect and measure quantum errors and has designed a new qubit circuit that, in sufficient numbers, will form the large chips that quantum computers will need.

Most researchers I have spoken to say that it is a matter of when — not whether — quantum computing will be practical. Some believe that this will be as soon as five years; others say 20 years. IBM said in April that we’ve entered a golden era of quantum-computing research, and predicted that the company would be the first to develop a practical quantum computer.

first computational genomics, now computational ethology...,


academia |  Abstract: For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. The intention of this paper is to progress the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested.

 Keywords: Minimalism; Labeling effects; cognome; animal cognition; formal language theory; language evolution
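
For readers unfamiliar with the two operations named in the abstract, a minimal sketch of the standard textbook construal (not necessarily the paper's formalism): Merge combines two syntactic objects into an unordered set, and Label designates one member as the head that determines the category of the whole.

```python
# Toy construal of Merge and Label (illustrative only; the paper's
# formal definitions may differ).

def merge(a, b):
    """Merge(a, b) = {a, b}: structure building with no order and no label."""
    return frozenset([a, b])

def label(merged, head):
    """Pick one member of a merged set as its head/label."""
    assert head in merged, "the label must be one of the merged objects"
    return (head, merged)

# Building "the book" and labeling it by the determiner (an assumption
# made for the example, not a claim of the paper):
dp = label(merge("the", "book"), "the")
print(dp)  # e.g. ('the', frozenset({'the', 'book'}))
```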
