Showing posts with label computationalism.

Tuesday, November 14, 2017

Is Something Wrong With These Interwebs?


medium  |  Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Some of the time it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and it sometimes feels overwhelmingly so, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. 

Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.

Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.

I’ve written enough, too much, but I feel like I actually need to justify all this raving about violence and abuse and automated systems with an example that sums it up. Maybe after everything I’ve said you won’t think it’s so bad. I don’t know what to think any more.

This video, BURIED ALIVE Outdoor Playground Finger Family Song Nursery Rhymes Animation Education Learning Video, contains all of the elements we’ve covered above, and takes them to another level. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams. And of course there are vast, vast numbers of these videos. Channel after channel after channel of similar content, churned out at the rate of hundreds of new videos every week. Industrialised nightmare production.

For the final time: There is more violent and more sexual content like this available. I’m not going to link to it. I don’t believe in traumatising other people, but it’s necessary to keep stressing it, and not dismiss the psychological effect on children of things which aren’t overtly disturbing to adults, just incredibly dark and weird.

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.  Fist tap Dale.

Saturday, July 15, 2017

The Robots are Just Us


BostonGlobe  |  Even AI giants like Google can’t escape the impact of bias. In 2015, the company’s facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users’ skin in their pictures. The company had dubbed it the “hotness” filter.

In these cases, the error grew from data sets that didn’t have enough dark-skinned people, which limited the machine’s ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design on his peer group, coworkers, and family, he’s limited what the machine can learn and imbues it with whichever biases shape his own life. 

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they do with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided if these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn’t consider a convict’s gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as journalist Julia Angwin reported in ProPublica, these algorithms are using biased questionnaires to come to their determinations and yielding flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that, among defendants who did not re-offend, 23.5 percent of the white men had been labeled at elevated risk of getting into trouble again, compared with 44.9 percent of the black men, showing how these scores are inaccurate and skewed in favor of white men.

While the questionnaires don’t ask specifically about skin color, data scientists say they “back into race” by asking questions like: When was your first encounter with police? 

The assumption is that someone who comes in contact with police as a young teenager is more prone to criminal activity than someone who doesn’t. But this hypothesis doesn’t take into consideration that policing practices vary and therefore so does the police’s interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas where African-Americans are more likely to live than whites. This measure doesn’t calculate guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just modeling the individual’s behavior; it is also encoding the police’s behavior.
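
A toy simulation makes the proxy mechanism visible. Everything here is invented for illustration (the neighborhood names, the stop rates, the score weights); this is not any real instrument, just a sketch of how a "race-blind" questionnaire can back into race through policing intensity.

```python
import random

# Invented numbers throughout: a sketch of the mechanism, not a real instrument.
def early_police_contact(neighborhood: str) -> bool:
    # In the heavily policed neighborhood, early contact is common
    # regardless of underlying behavior.
    stop_rate = 0.6 if neighborhood == "heavily_policed" else 0.1
    return random.random() < stop_rate

def risk_score(contact: bool) -> int:
    # The questionnaire treats early contact as a risk factor.
    return 5 + (4 if contact else 0)

for hood in ("heavily_policed", "lightly_policed"):
    scores = [risk_score(early_police_contact(hood)) for _ in range(10_000)]
    print(hood, round(sum(scores) / len(scores), 2))
# Average "risk" diverges by neighborhood (and, through residential
# segregation, by race) even though behavior is identical by construction.
```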

“I’ve talked to prosecutors who say, ‘Well, it’s actually really handy to have these risk scores because you don’t have to take responsibility if someone gets out on bail and they shoot someone. It’s the machine, right?’” says Joi Ito, director of the Media Lab at MIT.

Tuesday, June 27, 2017

Google "Invests" in Bitcoin


marketslant |  Right now the Bitcoin group is running into what we call "floor trader fear". The voting members are chafing at the idea of scaling their supply by adding servers and/or server power. This would disrupt their own little empires, not unlike the trading floor fearing Globex back in the day. And so many exchanges held out and protected the floor. And in the end they died. PHLX, AMEX, COMEX, PCOAST, CSCE: all gone or absorbed because they were late to adopt new technology and protect their liquidity pools. If Bitcoin removes power from its voting members' control by demutualizing and uses those proceeds to increase server power, it will likely excel. But Google and Amazon are now playing, and they are all about unlimited server power. Plus they have the eyeballs already. This is not unlike having the "market makers" already trading on a screen at Globex. The "liquidity pool" of buyers and sellers is already on Amazon and Google. Bitcoin does not have that beyond "early adopters". Remember Palm?

When, not if, those behemoths are up and running, they will immediately have an embedded network of both customers AND service providers at their disposal in the form of search eyeballs (Google) and buyers (Amazon). They will be set up to crush the opposition if they choose to create their own currency. Imagine Amazon offering Amazon money for Amazon purchases. Now imagine them offering 20% discounts if you use their money. The choices at this point boggle the mind. Tactical choices thought long abandoned will come into play again. Some examples: freemium, coupons, customer loyalty, vertical client integration (P.O.S.), traveler's checks and more.
To be fair, Google has invested in Bitcoin as well. What smart trader would not hedge himself? But just as Netflix is Amazon's biggest cloud customer even though Amazon will eventually put Netflix out of business (after Netflix kills Hollywood's distribution network), so will Google, Amazon, and Apple attempt to obviate the need for any currency but their own.

Blockchain is the railroad. Amazon and Google have the oil. As with Rockefeller before, the railroad will be made "exclusive" to their products.


Saturday, April 22, 2017

Secret Societies, Ancient Ciphers, Machine Translation


wired |  The master wears an amulet with a blue eye in the center. Before him, a candidate kneels in the candlelit room, surrounded by microscopes and surgical implements. The year is roughly 1746. The initiation has begun.

The master places a piece of paper in front of the candidate and orders him to put on a pair of eyeglasses. “Read,” the master commands. The candidate squints, but it’s an impossible task. The page is blank.

 The candidate is told not to panic; there is hope for his vision to improve. The master wipes the candidate’s eyes with a cloth and orders preparation for the surgery to commence. He selects a pair of tweezers from the table. The other members in attendance raise their candles.

The master starts plucking hairs from the candidate’s eyebrow. This is a ritualistic procedure; no flesh is cut. But these are “symbolic actions out of which none are without meaning,” the master assures the candidate. The candidate places his hand on the master’s amulet. Try reading again, the master says, replacing the first page with another. This page is filled with handwritten text. Congratulations, brother, the members say. Now you can see.

For more than 260 years, the contents of that page—and the details of this ritual—remained a secret. They were hidden in a coded manuscript, one of thousands produced by secret societies in the 18th and 19th centuries. At the peak of their power, these clandestine organizations, most notably the Freemasons, had hundreds of thousands of adherents, from colonial New York to imperial St. Petersburg. Dismissed today as fodder for conspiracy theorists and History Channel specials, they once served an important purpose: Their lodges were safe houses where freethinkers could explore everything from the laws of physics to the rights of man to the nature of God, all hidden from the oppressive, authoritarian eyes of church and state. But largely because they were so secretive, little is known about most of these organizations. Membership in all but the biggest died out over a century ago, and many of their encrypted texts have remained uncracked, dismissed by historians as impenetrable novelties.

It was actually an accident that brought to light the symbolic “sight-restoring” ritual. The decoding effort started as a sort of game between two friends that eventually engulfed a team of experts in disciplines ranging from machine translation to intellectual history. Its significance goes far beyond the contents of a single cipher. Hidden within coded manuscripts like these is a secret history of how esoteric, often radical notions of science, politics, and religion spread underground. At least that’s what experts believe. The only way to know for sure is to break the codes.

In this case, as it happens, the cracking began in a restaurant in Germany.

Thirteen years later, in January 2011, Schaefer attended an Uppsala conference on computational linguistics. Ordinarily talks like this gave her a headache. She preferred musty books to new technologies and didn’t even have an Internet connection at home. But this lecture was different. The featured speaker was Kevin Knight, a University of Southern California specialist in machine translation—the use of algorithms to automatically translate one language into another. With his stylish rectangular glasses, mop of prematurely white hair, and wiry surfer’s build, he didn’t look like a typical quant. Knight spoke in a near whisper yet with intensity and passion. His projects were endearingly quirky too. He built an algorithm that would translate Dante’s Inferno based on the user’s choice of meter and rhyme scheme. Soon he hoped to cook up software that could understand the meaning of poems and even generate verses of its own.

Knight was part of an extremely small group of machine-translation researchers who treated foreign languages like ciphers—as if Russian, for example, were just a series of cryptological symbols representing English words. In code-breaking, he explained, the central job is to figure out the set of rules for turning the cipher’s text into plain words: which letters should be swapped, when to turn a phrase on its head, when to ignore a word altogether. Establishing that type of rule set, or “key,” is the main goal of machine translators too. Except that the key for translating Russian into English is far more complex. Words have multiple meanings, depending on context. Grammar varies widely from language to language. And there are billions of possible word combinations.
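
A minimal sketch of that cipher framing: recover the key of a Caesar-shifted text by scoring candidate plaintexts against English letter frequencies. This is a toy (a 26-key shift cipher, not a learned translation model), but the find-the-key logic is the same one Knight scales up.

```python
from collections import Counter

ENGLISH_BY_FREQUENCY = "etaoinshrdlcumwfgypbvkjxqz"
RANK = {c: i for i, c in enumerate(ENGLISH_BY_FREQUENCY)}

def score(text: str) -> int:
    # Higher is better: frequent English letters carry low ranks.
    return -sum(RANK.get(c, 26) * n
                for c, n in Counter(text).items() if c.isalpha())

def shift(text: str, k: int) -> str:
    return "".join(chr((ord(c) - 97 + k) % 26 + 97) if c.isalpha() else c
                   for c in text)

ciphertext = shift("the only way to know for sure is to break the codes", 7)
key = max(range(26), key=lambda k: score(shift(ciphertext, -k)))
print(key, shift(ciphertext, -key))   # 7, and the plaintext reappears
```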


Sunday, April 16, 2017

Artificial Intelligence Will Disclose Cetacean Souls



Scientists have struggled to understand dolphin vocalizations, but new computer tools to both track dolphins and decode their complex vocalizations are now emerging. Dr. Denise Herzing has been studying Atlantic spotted dolphins, Stenella frontalis, in the Bahamas for over three decades. Her video and acoustic database encompasses a myriad of complex vocalizations and dolphin behavior. Dr. Thad Starner works on mining this dataset and decoding dolphin sounds, and has created a wearable underwater computer, CHAT (Cetacean Hearing and Telemetry), to help establish a bridge for communication between humans and dolphins. Starner and Herzing will present this cutting-edge work and recent results, including perspectives on the challenges of studying this aquatic society, and decoding their communication signals using the latest technology.

qz |  The possibility of talking to animals has tickled popular imaginations for years, and with good reason. Who wouldn’t want to live in a Dr. Dolittle world where we could understand what our pets and animal neighbors are saying?

Animal cognition researchers have also been fascinated by the topic. Their work typically focuses on isolating animal communication to see if language is uniquely human, or if it could have evolved in other species as well. One of their top candidates is an animal known to communicate with particularly high intelligence: dolphins.

Dolphins—like many animals including monkeys, birds, cats, and dogs—clearly do relay messages to one another. They emit sounds in three broad categories: clicks, whistles, and more complex chirps used for echolocation, a technique they use to track prey and other objects by interpreting ricocheting sound waves. Researchers believe these sounds can help dolphins communicate: Whistles can serve as unique identifiers, similar to names, and can alert the pod to sources of food or danger.

Communication is most certainly a part of what helps these animals live in social pods. But proving that dolphins use language—the way that you’re reading this article, or how you might talk to your friends about it later—is a whole different kettle of fish.

Physical Basis for Morphogenesis: On Growth and Form


nature |  Still in print, On Growth and Form was more than a decade in the planning. Thompson would regularly tell colleagues and students — he taught at what is now the University of Dundee, hence the local media interest — about his big idea before he wrote it all down. In part, he was reacting against one of the biggest ideas in scientific history. Thompson used his book to argue that Charles Darwin’s natural selection was not the only major influence on the origin and development of species and their unique forms: “In general no organic forms exist save such as are in conformity with physical and mathematical laws.”

Biological response to physical forces remains a live topic for research. In a research paper, for example, researchers report how physical stresses generated at defects in the structures of epithelial cell layers cause excess cells to be extruded.

In a separate online publication (K. Kawaguchi et al. Nature http://dx.doi.org/10.1038/nature22321; 2017), other scientists show that topological defects have a role in cell dynamics, as a result of the balance of forces. In high-density cultures of neural progenitor cells, the direction in which cells travel around defects affects whether cells become more densely packed (leading to pile-ups) or spread out (leading to a cellular fast-lane where travel speeds up).

A Technology Feature investigates in depth the innovative methods developed to detect and measure forces generated by cells and proteins. Such techniques help researchers to understand how force is translated into biological function.

Thompson’s influence also flourishes in other active areas of interdisciplinary research. A research paper offers a mathematical explanation for the colour changes that appear in the scales of ocellated lizards (Timon lepidus) during development (also featured on this week’s cover). It suggests that the patterns are generated by a system called a hexagonal cellular automaton, and that such a discrete system can emerge from the continuous reaction-diffusion framework developed by mathematician Alan Turing to explain the distinctive patterning on animals, such as spots and stripes. (Some of the research findings are explored in detail in the News and Views section.) To complete the link to Thompson, Turing cited On Growth and Form in his original work on reaction-diffusion theory in living systems.
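
For readers who have never watched a reaction-diffusion system produce a pattern, here is a minimal one-dimensional sketch (Gray-Scott kinetics with textbook parameter values; nothing here is taken from the lizard paper): two diffusing chemicals, fed and consumed at different rates, settle from a near-uniform start into stable bands, Turing's spots and stripes in one dimension.

```python
import numpy as np

n, steps = 200, 10_000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # classic Gray-Scott values

u, v = np.ones(n), np.zeros(n)
seed = slice(n // 2 - 10, n // 2 + 10)
u[seed], v[seed] = 0.50, 0.25             # a small perturbation seeds the pattern

def laplacian(x):                          # periodic 1-D diffusion term
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    uvv = u * v * v                        # the nonlinear reaction term
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(np.round(v[::10], 2))                # alternating high/low bands
```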

Finally, we have also prepared an online collection of research and comment from Nature and the Nature research journals in support of the centenary, some of which we have made freely available to view for one month.

Saturday, April 15, 2017

Hacking and Reprogramming Cells Like Computers


wired |  Cells are basically tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.

Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.

A cell could be programmed, for example, with a so-called NOT logic gate. This is one of the simplest logic instructions: Do NOT do something whenever you receive the trigger. This study’s authors used this function to create cells that light up on command. Biologist Wilson Wong of Boston University, who led the research, refers to these engineered cells as “genetic circuits.”
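
In software terms, the recombinase NOT gate fits in one function. The sketch below is a cartoon of the logic only; the trigger and reporter are placeholders, not the actual constructs from the paper.

```python
def recombinase_not_gate(trigger_present: bool) -> bool:
    """Cell lights up only when the trigger is absent."""
    # A triggered recombinase excises (or flips) the DNA segment driving
    # the reporter, so expression is the negation of the trigger.
    segment_intact = not trigger_present
    return segment_intact   # reporter expressed iff the segment is intact

for trigger in (False, True):
    print(f"trigger={trigger} -> lights up: {recombinase_not_gate(trigger)}")
```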

A Programming Language For Living Cells?



MIT |  MIT biological engineers have created a programming language that allows them to rapidly design complex, DNA-encoded circuits that give new functions to living cells.

Using this language, anyone can write a program for the function they want, such as detecting and responding to certain environmental conditions. They can then generate a DNA sequence that will achieve it.

“It is literally a programming language for bacteria,” says Christopher Voigt, an MIT professor of biological engineering. “You use a text-based language, just like you’re programming a computer. Then you take that text and you compile it and it turns it into a DNA sequence that you put into the cell, and the circuit runs inside the cell.”

Voigt and colleagues at Boston University and the National Institute of Standards and Technology have used this language, which they describe in the April 1 issue of Science, to build circuits that can detect up to three inputs and respond in different ways. Future applications for this kind of programming include designing bacterial cells that can produce a cancer drug when they detect a tumor, or creating yeast cells that can halt their own fermentation process if too many toxic byproducts build up.

The researchers plan to make the user design interface available on the Web.
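
The gate library such a compiler targets is built from repressor proteins, which naturally implement NOT and NOR; everything else is composed from those. A hedged sketch of that composition (the inducer names IPTG and aTc are common lab signals used here for illustration, not necessarily the paper's inputs):

```python
# NOR is the native primitive of a repressor gate: the output promoter is
# active only when neither input represses it.
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a: bool) -> bool:
    return NOR(a, a)

def AND(a: bool, b: bool) -> bool:   # composed the way a compiler would
    return NOR(NOT(a), NOT(b))

# A cell meant to act only when both chemical signals are present:
for iptg in (False, True):
    for atc in (False, True):
        print(f"IPTG={iptg} aTc={atc} -> output gene ON: {AND(iptg, atc)}")
```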

Friday, March 31, 2017

Wikileaks Vault 7 Marble Framework


wikileaks | Today, March 31st 2017, WikiLeaks releases Vault 7 "Marble" -- 676 source code files for the CIA's secret anti-forensic Marble Framework. Marble is used to hamper forensic investigators and anti-virus companies from attributing viruses, trojans and hacking attacks to the CIA.

Marble does this by hiding ("obfuscating") text fragments used in CIA malware from visual inspection. This is the digital equivalent of a specialized CIA tool to place covers over the English language text on U.S.-produced weapons systems before giving them to insurgents secretly backed by the CIA.

Marble forms part of the CIA's anti-forensics approach and the CIA's Core Library of malware code. It is "[D]esigned to allow for flexible and easy-to-use obfuscation" as "string obfuscation algorithms (especially those that are unique) are often used to link malware to a specific developer or development shop."
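
To make "string obfuscation" concrete, here is a generic example of the technique; this XOR scheme is ours for illustration, not Marble's actual algorithm. The point is that a telltale string never appears in the compiled binary, defeating naive signature matching:

```python
KEY = 0x5A   # illustrative single-byte key; real schemes are more elaborate

def obfuscate(s: str) -> bytes:
    return bytes(b ^ KEY for b in s.encode())

def deobfuscate(blob: bytes) -> str:
    return bytes(b ^ KEY for b in blob).decode()

hidden = obfuscate("C2_SERVER_ADDRESS")
print(hidden)                # what an analyst's `strings` dump would see
print(deobfuscate(hidden))   # what the malware reconstructs at runtime
```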

The Marble source code also includes a deobfuscator to reverse CIA text obfuscation. Combined with the revealed obfuscation techniques, a pattern or signature emerges which can assist forensic investigators in attributing previous hacking attacks and viruses to the CIA. Marble was in use at the CIA during 2016. It reached 1.0 in 2015.

The source code shows that Marble has test examples not just in English but also in Chinese, Russian, Korean, Arabic and Farsi. This would permit a forensic attribution double game, for example by pretending that the spoken language of the malware creator was not American English but Chinese, then showing attempts to conceal the use of Chinese, drawing forensic investigators even more strongly to the wrong conclusion. There are other possibilities as well, such as hiding fake error messages.

The Marble Framework is used for obfuscation only and does not contain any vulnerabilities or exploits by itself.

Sunday, March 12, 2017

From Bacteria to Bach and Back


nybooks |  As for the underlying mechanisms, we now have a general idea of how they might work because of another strange inversion of reasoning, due to Alan Turing, the creator of the computer, who saw how a mindless machine could do arithmetic perfectly without knowing what it was doing. This can be applied to all kinds of calculation and procedural control, in natural as well as in artificial systems, so that their competence does not depend on comprehension. Dennett’s claim is that when we put these two insights together, we see that
all the brilliance and comprehension in the world arises ultimately out of uncomprehending competences compounded over time into ever more competent—and hence comprehending—systems. This is indeed a strange inversion, overthrowing the pre-Darwinian mind-first vision of Creation with a mind-last vision of the eventual evolution of us, intelligent designers at long last.
And he adds:
Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are….
An essential, culminating stage of this process is cultural evolution, much of which, Dennett believes, is as uncomprehending as biological evolution. He quotes Peter Godfrey-Smith’s definition, from which it is clear that the concept of evolution can apply more widely:
Evolution by natural selection is change in a population due to (i) variation in the characteristics of members of the population, (ii) which causes different rates of reproduction, and (iii) which is heritable.
In the biological case, variation is caused by mutations in DNA, and it is heritable through reproduction, sexual or otherwise. But the same pattern applies to variation in behavior that is not genetically caused, and that is heritable only in the sense that other members of the population can copy it, whether it be a game, a word, a superstition, or a mode of dress.
This is the territory of what Richard Dawkins memorably christened “memes,” and Dennett shows that the concept is genuinely useful in describing the formation and evolution of culture. He defines “memes” thus:
They are a kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed.
They include such things as the meme for wearing your baseball cap backward or for building an arch of a certain shape; but the best examples of memes are words. A word, like a virus, needs a host to reproduce, and it will survive only if it is eventually transmitted to other hosts, people who learn it by imitation:
Like a virus, it is designed (by evolution, mainly) to provoke and enhance its own replication, and every token it generates is one of its offspring. The set of tokens descended from an ancestor token form a type, which is thus like a species.

Sunday, March 05, 2017

Artificial Intelligence and Creative Work



cnbc |  To the average person, the billboard on the bus stop on London's Oxford Street was a standard coffee-brand ad.  Every few seconds, the digital poster would change. Sometimes, it would feature a wide range of drab grays and blocks of text. Other times, it was a minimalistic image with a short saying.

What was unique about this particular poster, which ran in two locations at the end of July 2015, wasn't the fact that people were looking at it. Rather, it was looking at them — and learning. Using facial tracking technology and genetics-based algorithms, the poster took the aspects that people looked at the longest and then incorporated that into the next design evolution.

"We were surprised how quickly it learned," said Sam Ellis, business director of innovation at M&C Saatchi. "It got to a state of where it felt like it was in the right place a bit faster than we thought."

In less than 72 hours, the M&C Saatchi advertisement was creating posters in line with the current best practices of the advertising industry, practices developed over decades of human trial and error, like the realization that three-to-five-word slogans work best.
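
The "genetics-based algorithm" here is a genetic algorithm: keep the designs people looked at longest, breed and mutate them, repeat. A toy version of that loop (the design genes and the dwell-time model are invented; the real system scored posters with facial-tracking data):

```python
import random

GENES = {"words": list(range(2, 12)),
         "image": ["photo", "minimal", "abstract"],
         "palette": ["drab", "warm", "bold"]}

def random_poster():
    return {k: random.choice(v) for k, v in GENES.items()}

def dwell_time(poster):
    # Stand-in for gaze measurements: pretend viewers linger on short
    # copy and minimal images.
    score = -abs(poster["words"] - 4)
    score += 2 if poster["image"] == "minimal" else 0
    return score + random.random()            # measurement noise

population = [random_poster() for _ in range(20)]
for generation in range(30):
    population.sort(key=dwell_time, reverse=True)
    parents = population[:5]                  # selection
    children = []
    for _ in range(15):
        mom, dad = random.sample(parents, 2)
        child = {k: random.choice((mom[k], dad[k])) for k in GENES}  # crossover
        if random.random() < 0.2:             # mutation
            g = random.choice(list(GENES))
            child[g] = random.choice(GENES[g])
        children.append(child)
    population = parents + children

print(max(population, key=dwell_time))        # drifts toward short, minimal designs
```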

"We thought [our employees] would be nervous about it: Is this going to kill off creative?" Ellis said. "What they started to realize is that it could be really, really useful based on its insight."

M&C Saatchi's Ellis believes eventually ad agencies will be smaller, because AI will be able to accomplish tasks with a high degree of accuracy — for much less money than now — and will make outsourcing tasks a lot more effective.

As our machines become more sophisticated and more details about our lives are recorded as data points, AI is getting to the point where it knows a tremendous amount about humans. It can tell what a person is feeling. It knows the difference between a truth and a lie. It can go through millions of permutations in a second, coming up with more variations than a human could think of. It knows your daily routine, like when you're most likely going to want a cold beer on a hot summer day.

Saturday, March 04, 2017

"Computer Virus" Will Cease Being a Metaphor During My Lifetime


thescientist |  Yaniv Erlich and colleagues encoded large media files in DNA, copied the DNA multiple times, and still managed to retrieve the files without any errors, they reported in Science today (March 2). Compared with cassette tapes and 8 mm film, DNA is far less likely to become obsolete, and its storage density is roughly 215 petabytes of data per gram of genetic material, the researchers noted.

To test DNA’s media-storage capabilities, Erlich, an assistant professor of computer science at Columbia University in New York City, and Dina Zielinski, a senior associate scientist at the New York Genome Center, encoded six large files—including a French film and a computer operating system (OS), complete with word-processing software—into DNA. They then recovered the data from PCR-generated copies of that DNA. The Scientist spoke with Erlich about the study, and other potential data-storage applications for DNA.

The Scientist: Why is DNA a good place to store information?

Yaniv Erlich: First, we’re starting to reach the physical limits of hard drives. DNA is much more compact than magnetic media—about 1 million times more compact. Second, it can last for a much longer time. Think about your CDs from the ’90s; they’re probably scratched by now. [Today] we can read DNA from a skeleton [that is] 4,000 years old. Third, one of the nice features about DNA is that it is not subject to digital obsolescence. Think about videocassettes or 8 mm movies. It’s very hard these days to watch these movies because the hardware changes so fast. DNA—that hardware isn’t going anywhere. It’s been around for the last 3 billion years. If humanity loses its ability to read DNA, we have much bigger problems than data storage.

TS: Have other researchers tried to store information in DNA?

YE: There are several groups that have already done this process, and they inspired us, but our approach has several advantages. Ours is 60 percent more efficient than previous strategies and our results are very immune to noise and error. Most previous studies reported some issues getting the data back from the DNA, some gaps [in the information retrieved], but we show it’s easy. We even tried to make it harder for ourselves . . . so we tried to copy the data, and the enzymatic reaction [involved in copying DNA] introduces errors. We copied the data, and then copied that copy, and then copied a copy of that copy—nine times—and we were still able to recover the data without one error. We also . . . achieved a density of 215 petabytes per one gram of DNA. Your laptop has probably one terabyte. Multiply that by 200,000, and we could fit all that information into one gram of DNA.
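
The naive encoding behind those density figures is two bits per base. A sketch (the study itself used a more elaborate fountain-code scheme for error tolerance, but the arithmetic lands in the same place):

```python
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(FROM_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"OS+film")
assert decode(dna) == b"OS+film"
print(dna)                       # 4 bases per byte

# Erlich's density claim, checked: 215 petabytes per gram vs. a 1 TB laptop.
print(215e15 / 1e12)             # 215000.0, i.e. roughly "multiply by 200,000"
```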

Tuesday, February 28, 2017

J.P. Morgan Chase & Co. Embracing AI and Shedding Useless Humans



bloomberg |  At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients -- a growing part of the firm’s $9.6 billion technology budget -- is a core theme as the company hosts its annual investor day on Tuesday.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Sunday, December 11, 2016

GOD Cares Not About Your Suffering, Only Your Ability To Overcome It


NYTimes |  The point is that delivering deep and lasting reductions in inequality may be impossible absent catastrophic events beyond anything any of us would wish for. 

History — from Ancient Rome through the Gilded Age; from the Russian Revolution to the Great Compression of incomes across the West in the middle of the 20th century — suggests that reversing the trend toward greater concentrations of income, in the United States and across the world, might be, in fact, nearly impossible.

That’s the bleak argument of Walter Scheidel, a professor of history at Stanford, whose new book, “The Great Leveler” (Princeton University Press), is due out next month. He goes so far as to state that “only all-out thermonuclear war might fundamentally reset the existing distribution of resources.” If history is anything to go by, he writes, “peaceful policy reform may well prove unequal to the growing challenges ahead.”

Professor Scheidel does not offer a grand unified theory of inequality. But scouring through the historical record, he detects a pattern: From the Stone Age to the present, ever since humankind produced a surplus to hoard, economic development has almost always led to greater inequality. There is one big thing with the power to stop this dynamic, but it’s not pretty: violence.

The big equalizing moments in history may not always have the same cause, he writes, “but they shared one common root: massive and violent disruptions of the established order.”

Saturday, December 10, 2016

Distances Between Nucleotide Sequences Contain Biologically Relevant Information


g3journal |  Enhancers physically interact with transcriptional promoters, looping over distances that can span multiple regulatory elements. Given that enhancer-promoter (EP) interactions generally occur via common protein complexes, it is unclear whether EP pairing is predominantly deterministic or proximity guided. Here we present cross-organismic evidence suggesting that most EP pairs are compatible, largely determined by physical proximity rather than specific interactions. By re-analyzing transcriptome datasets, we find that the transcription of gene neighbors is correlated over distances that scale with genome size. We experimentally show that non-specific EP interactions can explain such correlation, and that EP distance acts as a scaling factor for the transcriptional influence of an enhancer. We propose that enhancer sharing is commonplace among eukaryotes, and that EP distance is an important layer of information in gene regulation.
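
A cartoon of the proximity model in this abstract: if every enhancer influences every promoter with strength falling off in genomic distance, nearby genes share inputs and distance itself becomes regulatory information. The coordinates, strengths, and 1/d decay below are invented for illustration, not fitted to the paper's data.

```python
enhancers = [(100_000, 1.0), (105_000, 0.8)]   # (position in bp, strength)

def influence(promoter_pos: int) -> float:
    # Non-specific EP interaction: every enhancer contributes, scaled by distance.
    return sum(s / abs(pos - promoter_pos) for pos, s in enhancers)

near, far = 110_000, 300_000
print(influence(near) / influence(far))   # the nearby promoter sees ~30x more
```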

Friday, December 09, 2016

Like Genomics - Reality is Computational


edgarlowen |  A computational model is by far the most reasonable and fruitful approach to reality. The computational model of Universal Reality is both internally consistent and consistent with science and the scientific method. This may initially seem counterintuitive, but there are all sorts of convincing reasons supporting it.

There is overwhelming evidence that everything in the universe is its information or data only and that the observable universe is a computational system:

1. To be comprehensible, which it self-evidently is, reality must be a logically consistent structure. To be logical and to continually happen it must be computable. To be computable it must consist of data because only data is computable. Therefore the content of the observable universe must consist only of programs computing data.

2. The laws of science which best describe reality are themselves logico-mathematical information forms. Why would the equations of science be the best description of reality if reality itself didn’t also consist of similar information structures? This explains the so-called “unreasonable effectiveness of mathematics” in describing the universe (Wigner, 1960).

3. By recognizing that reality is a logico-mathematical structure the laws of nature immediately assume their natural place as an intrinsic part of reality. No longer do they somehow stand outside a physical world while mysteriously controlling it. A physical model of the universe is unable to explain where the laws of nature reside or what their status is (Penrose, 2005).

4. Physical mechanisms to produce effects become unnecessary in a purely computational world. It’s enough to have a consistent logico-mathematical program that computes them in accordance with experimental evidence.

5. When everything that mind adds to our perception of reality is recognized and subtracted all that remains of reality is a computational data structure. This is explained in detail below and can actually be confirmed by carefully analyzed direct experience.

6. We know that our internal simulation of reality exists as neurochemical data in the circuits of our brain. Yet this world appears perfectly real to us. If our cognitive model of reality consists only of data and seems completely real then it’s reasonable to assume that the actual external world could also consist only of data. How else could it be so effectively modeled as data in our brains if it weren’t data itself?

7. This view of reality is tightly consistent with the other insights of Universal Reality, which are cross-consistent with modern science. Total consistency across maximum scope is the test of validity, truth and knowledge (Owen, 2016).

8. This view of reality leads to simple elegant solutions of many of the perennial problems of science and the nature of reality and leads directly to many new insights. Specifically it leads to a clear understanding of the nature of consciousness and also enables a new understanding of spacetime that conceptually unifies quantum theory and general relativity and resolves the paradoxical nature of the quantum world (Owen, 2016).

9. These insights complete the progress of science itself in reducing everything to data by revealing how both mass-energy and spacetime, the last remaining bastions of physicality, can be reduced to data as explained in Universal Reality (Owen, 2016).

10. Viewing the universe as running programs computing its data changes nothing about the universe which continues exactly as before. It merely completes the finer and finer analysis of all things including us into their most elemental units. It’s simply a new way of looking at what already exists in which even the elementary particles themselves consist entirely of data while everything around us remains the same. Reality remained exactly the same when everything was reduced to its elementary particles, and it continues to remain the same when those particles are further reduced to their data.

Tuesday, November 29, 2016

The Mysterious Interlingua


slashdot |  After a little over a month of learning more languages to translate beyond Spanish, Google's recently announced Neural Machine Translation system has used deep learning to develop its own internal language. TechCrunch reports: GNMT's creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa... could it translate Korean to Japanese, without resorting to English as a bridge between them? They made this helpful gif to illustrate the idea of what they call "zero-shot translation" (it's the orange one). As it turns out -- yes! It produces "reasonable" translations between two languages that it has not explicitly linked in any way. Remember, no English allowed. But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked... does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another? In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google's language and AI boffins think that it has. The paper describing the researchers' work (primarily on efficient multi-language translation but touching on the mysterious interlingua) can be read at arXiv.
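
A toy picture of what "interlingua" means geometrically: if all languages are encoded into one shared vector space, a Korean word and its Japanese counterpart can land near each other even though no Korean-Japanese pairs were ever trained. The vectors below are hand-invented stand-ins for learned embeddings.

```python
import numpy as np

shared_space = {
    ("en", "water"): np.array([0.90, 0.10]),   # learned from En<->Ko pairs
    ("ko", "mul"):   np.array([0.88, 0.12]),
    ("en", "fire"):  np.array([0.10, 0.90]),   # learned from En<->Ja pairs
    ("ja", "hi"):    np.array([0.12, 0.88]),
    ("ja", "mizu"):  np.array([0.87, 0.15]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def zero_shot(query, target_lang):
    # Nearest neighbor in the shared space, restricted to the target language.
    q = shared_space[query]
    candidates = [k for k in shared_space if k[0] == target_lang]
    return max(candidates, key=lambda k: cosine(q, shared_space[k]))

# Korean -> Japanese was never trained directly; the geometry bridges it:
print(zero_shot(("ko", "mul"), "ja"))   # ('ja', 'mizu'), Japanese for "water"
```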

Friday, June 03, 2016

The Genome Project - Write


NYTimes |  “By focusing on building the 3Gb of human DNA, HGP-write would push current conceptual and technical limits by orders of magnitude and deliver important scientific advances,” they write, referring to three gigabases, the three billion letters in the human genome.

Scientists already can change DNA in organisms or add foreign genes, as is done to make medicines like insulin or genetically modified crops. New “genome editing” tools, like one called Crispr, are making it far easier to re-engineer an organism’s DNA blueprint.

But George Church, a professor of genetics at Harvard Medical School and one of the organizers of the new project, said that if the changes desired are extensive, at some point it becomes easier to synthesize the needed DNA from scratch.

“Editing doesn’t scale very well,” he said. “When you have to make changes to every gene in the genome it may be more efficient to do it in large chunks.”

Besides Dr. Church, the other organizers of the project are Jef Boeke, director of the Institute for Systems Genetics at NYU Langone Medical Center; Andrew Hessel, a futurist at the software company Autodesk; and Nancy J. Kelley, who works raising money for projects. The paper in Science lists a total of 25 authors, many of them involved in DNA engineering.

Autodesk, which has given $250,000 to the project, is interested in selling software to help biologists design DNA sequences to make organisms perform particular functions. Dr. Church is a founder of Gen9, a company that sells made-to-order strands of DNA.

Dr. Boeke of N.Y.U. is leading an international project to synthesize the complete genome of yeast, which has 12 million base pairs. It would be the largest genome synthesized to date, though still much smaller than the human genome.

Tuesday, May 31, 2016

the minecraft generation


NYTimes |  Since its release seven years ago, Minecraft has become a global sensation, captivating a generation of children. There are over 100 million registered players, and it’s now the third-best-selling video game in history, after Tetris and Wii Sports. In 2014, Microsoft bought Minecraft — and Mojang, the Swedish game studio behind it — for $2.5 billion.

There have been blockbuster games before, of course. But as Jordan’s experience suggests — and as parents peering over their children’s shoulders sense — Minecraft is a different sort of phenomenon.

For one thing, it doesn’t really feel like a game. It’s more like a destination, a technical tool, a cultural scene, or all three put together: a place where kids engineer complex machines, shoot videos of their escapades that they post on YouTube, make art and set up servers, online versions of the game where they can hang out with friends. It’s a world of trial and error and constant discovery, stuffed with byzantine secrets, obscure text commands and hidden recipes. And it runs completely counter to most modern computing trends. Where companies like Apple and Microsoft and Google want our computers to be easy to manipulate — designing point-and-click interfaces under the assumption that it’s best to conceal from the average user how the computer works — Minecraft encourages kids to get under the hood, break things, fix them and turn mooshrooms into random-number generators. It invites them to tinker.

In this way, Minecraft culture is a throwback to the heady early days of the digital age. In the late ’70s and ’80s, the arrival of personal computers like the Commodore 64 gave rise to the first generation of kids fluent in computation. They learned to program in Basic, to write software that they swapped excitedly with their peers. It was a playful renaissance that eerily parallels the embrace of Minecraft by today’s youth. As Ian Bogost, a game designer and professor of media studies at Georgia Tech, puts it, Minecraft may well be this generation’s personal computer.

At a time when even the president is urging kids to learn to code, Minecraft has become a stealth gateway to the fundamentals, and the pleasures, of computer science. Those kids of the ’70s and ’80s grew up to become the architects of our modern digital world, with all its allures and perils. What will the Minecraft generation become?

“Children,” the social critic Walter Benjamin wrote in 1924, “are particularly fond of haunting any site where things are being visibly worked on. They are irresistibly drawn by the detritus generated by building, gardening, housework, tailoring or carpentry.”

Wednesday, May 25, 2016

the great transbiological leap of digitalization...,



sciencenews |   Before anybody even had a computer, Claude Shannon figured out how to make computers worth having.

As an electrical engineering graduate student at MIT, Shannon played around with a “differential analyzer,” a crude forerunner to computers. But for his master’s thesis, he was more concerned with relays and switches in electrical circuits, the sorts of things found in telephone exchange networks. In 1937 he produced, in the words of mathematician Solomon Golomb, “one of the greatest master’s theses ever,” establishing the connection between symbolic logic and the math for describing such circuitry. Shannon’s math worked not just for telephone exchanges or other electrical devices, but for any circuits, including the electronic circuitry that in subsequent decades would make digital computers so powerful.

It’s now a fitting time to celebrate Shannon’s achievements, on the occasion of the centennial of his birth (April 30) in Petoskey, Michigan, in 1916. Based on the pervasive importance of computing in society today, it wouldn’t be crazy to call the time since then “Shannon’s Century.”

“It is no exaggeration,” wrote Golomb, “to refer to Claude Shannon as the ‘father of the information age,’ and his intellectual achievement as one of the greatest of the twentieth century.”

Shannon is most well-known for creating an entirely new scientific field — information theory — in a pair of papers published in 1948. His foundation for that work, though, was built a decade earlier, in his thesis. There he devised equations that represented the behavior of electrical circuitry. How a circuit behaves depends on the interactions of relays and switches that can connect (or not) one terminal to another. Shannon sought a “calculus” for mathematically representing a circuit’s connections, allowing scientists to be able to design circuits effectively for various tasks. (He provided examples of the circuit math for an electronic combination lock and some other devices.)
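
Shannon's thesis insight fits in a few lines: series wiring is AND, parallel wiring is OR, and once circuits are Boolean formulas they can be designed and checked symbolically. A sketch, using the combination-lock example the article mentions (the particular switch pattern is our invention):

```python
def series(a: bool, b: bool) -> bool:     # current flows only through both
    return a and b

def parallel(a: bool, b: bool) -> bool:   # current flows through either
    return a or b

# A relay combination lock that closes only for the pattern (on, off, on).
def lock(s1: bool, s2: bool, s3: bool) -> bool:
    return series(s1, series(not s2, s3))

# Shannon's calculus lets us verify the design exhaustively:
for s1 in (False, True):
    for s2 in (False, True):
        for s3 in (False, True):
            if lock(s1, s2, s3):
                print(f"unlocks at s1={s1}, s2={s2}, s3={s3}")
```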

H.R. 6408 Terminating The Tax Exempt Status Of Organizations We Don't Like

nakedcapitalism  |   This measure is so far under the radar that, to date, only Friedman and Matthew Petti at Reason seem to have noticed it...