Showing posts with label AI. Show all posts

Thursday, January 16, 2020

Do We REALLY Have a Long Way to Go and a Short Time to Get There?


bbc |  At the start of the 2010s, one of the world leaders in AI, DeepMind, often spoke of something called AGI, or "artificial general intelligence", being developed at some point in the future.

Machines that possess AGI - widely thought of as the holy grail in AI - would be just as smart as humans across the board, it promised. 

DeepMind's lofty AGI ambitions caught the attention of Google, which paid around £400m for the London-based AI lab in 2014, when it had the following mission statement splashed across its website: "Solve intelligence, and then use that to solve everything else."

Several others started to talk about AGI becoming a reality, including Elon Musk's $1bn AI lab, OpenAI, and academics like MIT professor Max Tegmark. 

In 2014, Nick Bostrom, a philosopher at Oxford University, went one step further with his book Superintelligence. It predicts a world where machines are firmly in control.

But those conversations were taken less and less seriously as the decade went on. At the end of 2019, the smartest computers could still only excel at a "narrow" selection of tasks. 

Gary Marcus, an AI researcher at New York University, said: "By the end of the decade there was a growing realisation that current techniques can only carry us so far."

He thinks the industry needs some "real innovation" to go further.

"There is a general feeling of plateau," said Verena Rieser, a professor in conversational AI at Edinburgh's Heriot-Watt University. 

One AI researcher who wishes to remain anonymous said we're entering a period where we are especially sceptical about AGI. 

"The public perception of AI is increasingly dark: the public believes AI is a sinister technology," they said. 

For its part, DeepMind has a more optimistic view of AI's potential, suggesting that as yet "we're only just scratching the surface of what might be possible".

"As the community solves and discovers more, further challenging problems open up," explained Koray Kavukcuoglu, its vice president of research.

"This is why AI is a long-term scientific research journey.

"We believe AI will be one of the most powerful enabling technologies ever created - a single invention that could unlock solutions to thousands of problems. The next decade will see renewed efforts to generalise the capabilities of AI systems to help achieve that potential - both building on methods that have already been successful and researching how to build general-purpose AI that can tackle a wide range of tasks."

Friday, August 03, 2018

The Modeling Religion Project


theatlantic |  Another project, Forecasting Religiosity and Existential Security with an Agent-Based Model, examines questions about nonbelief: Why aren’t there more atheists? Why is America secularizing at a slower rate than Western Europe? Which conditions would speed up the process of secularization—or, conversely, make a population more religious?

Shults’s team tackled these questions using data from the International Social Survey Program conducted between 1991 and 1998. They initialized the model in 1998 and then allowed it to run all the way through 2008. “We were able to predict from that 1998 data—in 22 different countries in Europe, and Japan—whether and how belief in heaven and hell, belief in God, and religious attendance would go up and down over a 10-year period. We were able to predict this in some cases up to three times more accurately than linear regression analysis,” Shults said, referring to a general-purpose method of prediction that prior to the team’s work was the best alternative.


Using a separate model, Future of Religion and Secular Transitions (FOREST), the team found that people tend to secularize when four factors are present: existential security (you have enough money and food), personal freedom (you’re free to choose whether to believe or not), pluralism (you have a welcoming attitude to diversity), and education (you’ve got some training in the sciences and humanities). If even one of these factors is absent, the whole secularization process slows down. This, they believe, is why the U.S. is secularizing at a slower rate than Western and Northern Europe.
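The FOREST finding is easy to caricature in code. Below is a minimal agent-based sketch of the qualitative claim only, not Shults's actual model: the agent counts, the drift rate, and the all-or-nothing gate on the four factors are illustrative assumptions.

```python
import random

random.seed(0)

FACTORS = ["existential_security", "personal_freedom", "pluralism", "education"]

def make_agent(p_factor):
    # Each agent independently has or lacks each of the four factors.
    return {f: random.random() < p_factor for f in FACTORS}

def run(p_factor, years=20, n=1000):
    agents = [make_agent(p_factor) for _ in range(n)]
    religiosity = [1.0] * n
    for _ in range(years):
        for i, agent in enumerate(agents):
            # The all-or-nothing gate: secularization proceeds only
            # when all four factors are present; otherwise it stalls.
            if all(agent.values()):
                religiosity[i] = max(0.0, religiosity[i] - 0.1)
    return sum(religiosity) / n  # mean religiosity after the run

# A population where the factors are widespread secularizes further
# than one where any one factor is often missing.
secure = run(p_factor=0.9)
insecure = run(p_factor=0.5)
```

Even this toy version reproduces the headline effect: because the gate requires all four factors at once, a modest drop in each factor's prevalence slows secularization disproportionately.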

“The U.S. has found ways to limit the effects of education by keeping it local, and in private schools, anything can happen,” said Shults’s collaborator, Wesley Wildman, a professor of philosophy and ethics at Boston University. “Lately, there’s been encouragement from the highest levels of government to take a less than welcoming cultural attitude to pluralism. These are forms of resistance to secularization.”

Tuesday, March 27, 2018

Governance Threat Is Not Russians, Cambridge Analytica, Etc, But Surveillance Capitalism Itself...,


newstatesman |  It’s been said in some more breathless quarters of the internet that this is the “data breach” that could have “caused Brexit”. Given it was a US-focused bit of harvesting, that would be the most astonishing piece of political advertising success in history – especially as among the big players in the political and broader online advertising world, Cambridge Analytica are not well regarded: some of the people who are best at this regard them as little more than “snake oil salesmen”. 

One of the key things this kind of data would be useful for – and what the original academic study it came from looked into – is finding which Facebook Likes correlate with personality traits, or with other Facebook Likes. 

The dream scenario for this would be to find that every woman in your sample who liked “The Republican Party” also liked “Chick-Fil-A”, “Taylor Swift” and “Nascar racing”. That way, you could target ads at people who liked the latter three – but not the former – knowing you had a good chance of reaching people likely to appreciate the message you’ve got. This is a pretty widely used, but crude, bit of Facebook advertising. 
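That targeting trick can be sketched in a few lines. The profiles below are toy data, and the "shared by every seed user" rule is a deliberately crude stand-in for a real correlation measure; none of this is Cambridge Analytica's actual method.

```python
# Toy profiles: user -> set of page Likes (all data illustrative).
profiles = {
    "u1": {"The Republican Party", "Chick-Fil-A", "Taylor Swift", "Nascar racing"},
    "u2": {"The Republican Party", "Chick-Fil-A", "Nascar racing"},
    "u3": {"Chick-Fil-A", "Taylor Swift", "Nascar racing"},
    "u4": {"Taylor Swift"},
    "u5": {"Chick-Fil-A", "Nascar racing", "Taylor Swift"},
}

seed = "The Republican Party"

# 1. Collect Likes that co-occur with the seed Like.
seed_users = {u for u, likes in profiles.items() if seed in likes}
proxies = set()
for u in seed_users:
    proxies |= profiles[u]
proxies.discard(seed)

# Keep only proxies shared by every seed user (a crude "correlation").
proxies = {p for p in proxies
           if all(p in profiles[u] for u in seed_users)}

# 2. Target users who have the proxy Likes but not the seed itself.
audience = {u for u, likes in profiles.items()
            if seed not in likes and proxies <= likes}
```

On this toy data the proxies come out as Chick-Fil-A and Nascar racing, and the ad audience is the users who like both but never declared the seed Like – exactly the "reach people likely to appreciate the message" pattern described above.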

When people talk about it being possible Cambridge Analytica used this information to build algorithms which could still be useful after all the original data was deleted, this is what they’re talking about – and that’s possible, but missing a much, much bigger bit of the picture.

So, everything’s OK then?

No. Look at it this way: the data we’re all getting excited about here is a sample of public profile information from 50 million users, harvested from 270,000 people. 

Facebook itself, daily, has access to all of that public information, and much more, from a sample of two billion people – a sample around 7,000 times larger than the Cambridge Analytica one, and one much deeper and richer thanks to its real-time updating status. 

If Facebook wants to offer sales based on correlations – for advertisers looking for an audience open to their message – its data would be infinitely more powerful and useful than a small (in big data terms) four-year-out-of-date bit of Cambridge Analytica data. 

Facebook aren’t anywhere near alone in this world: every day your personal information is bought and sold, bundled and retraded. You won’t know the names of the brands, but the actual giants in this business don’t deal in tens of millions of records; they deal in hundreds of millions, or even billions – one advert I saw today referred to a company which claimed real-world identification of 340 million people. 

This is how lots of real advertising targeting works: people can buy up databases of thousands or millions of users, from all sorts of sources, and turn them into the ultimate custom audience – match the IDs of these people and show them this advert. Or they can do the tricks Cambridge Analytica did, but refined and with much more data behind them (there’s never been much evidence Cambridge Analytica’s model worked very well, despite their sales pitch boasts). 

The media has a model when reporting on “hacks” or on “breaches” – and on reporting on when companies in the spotlight have given evidence to public authorities, and most places have been following those well-trod routes. 

But doing so is like doing forensics on the burning of a twig, in the middle of a raging forest fire. You might get some answers – but they’ll do you no good. We need to think bigger. 

Friday, March 23, 2018

Facebook the Surveillance and Social Control Grail NOT Under Deep State Control


NewYorker |  Twelve years later, the fixation on data as the key to political persuasion has exploded into scandal. For the past several days, the Internet has been enveloped in outrage over Facebook and Cambridge Analytica, the shadowy firm that supposedly helped Donald Trump win the White House. As with the Maoist rebels, this appears to be a tale of data-lust gone bad. In order to fulfill the promises that Cambridge Analytica made to its clients—it claimed to possess cutting-edge “psychographic profiles” that could judge voters’ personalities better than their own friends could—the company had to harvest huge amounts of information. It did this in an ethically suspicious way, by contracting with Aleksandr Kogan, a psychologist at the University of Cambridge, who built an app that collected demographic data on tens of millions of Facebook users, largely without their knowledge. “This was a scam—and a fraud,” Paul Grewal, Facebook’s deputy general counsel, told the Times over the weekend. Kogan has said that he was assured by Cambridge Analytica that the data collection was “perfectly legal and within the limits of the terms of service.”

Despite Facebook’s performance of victimization, it has endured a good deal of blowback and blame. Even before the story broke, Trump’s critics frequently railed at the company for contributing to his victory by failing to rein in fake news and Russian propaganda. To them, the Cambridge Analytica story was another example of Facebook’s inability, or unwillingness, to control its platform, which allowed bad actors to exploit people on behalf of authoritarian populism. Democrats have demanded that Mark Zuckerberg, the C.E.O. of Facebook, testify before Congress. Antonio Tajani, the President of the European Parliament, wants to talk to him, too. “Facebook needs to clarify before the representatives of five hundred million Europeans that personal data is not being used to manipulate democracy,” he said. On Wednesday afternoon, after remaining conspicuously silent since Friday night, Zuckerberg pledged to restrict third-party access to Facebook data in an effort to win back user trust. “We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you,” he wrote on Facebook.

But, as some have noted, the furor over Cambridge Analytica is complicated by the fact that what the firm did wasn’t unique or all that new. In 2012, Barack Obama’s reëlection campaign used a Facebook app to target users for outreach, giving supporters the option to share their friend lists with the campaign. These efforts, compared with those of Kogan and Cambridge Analytica, were relatively transparent, but users who never gave their consent had their information sucked up anyway. (Facebook has since changed its policies.) As the sociologist Zeynep Tufekci has written, Facebook itself is a giant “surveillance machine”: its business model demands that it gather as much data about its users as possible, then allow advertisers to exploit the information through a system so complex and opaque that misuse is almost guaranteed.

Thursday, February 01, 2018

MIT Intelligence Quest


IQ.MIT |  We are setting out to answer two big questions: How does human intelligence work, in engineering terms? And how can we use that deep grasp of human intelligence to build wiser and more useful machines, to the benefit of society?

Drawing on MIT’s deep strengths and signature values, culture, and history, MIT IQ promises to make important contributions to understanding the nature of intelligence, and to harnessing it to make a better world.

This is our quest.
Sixty years ago, at MIT and elsewhere, big minds lit the fuse on a big question: What is intelligence, and how does it work? The result was an explosion of new fields — artificial intelligence, cognitive science, neuroscience, linguistics, and more. They all took off at MIT and have produced remarkable offshoots, from computational neuroscience, to neural nets, to empathetic robots.

And today, by tapping the united strength of these and other interlocking fields and capitalizing on what they can teach each other, we seek to answer the deepest questions about intelligence — and to deliver transformative new gifts for humankind.

Some of these advances may be foundational in nature, involving new insight into human intelligence, and new methods to allow machines to learn effectively. Others may be practical tools for use in a wide array of research endeavors, such as disease diagnosis, drug discovery, materials and manufacturing design, automated systems, synthetic biology, and finance.

Along with developing and advancing the technologies of intelligence, MIT IQ researchers will also investigate the societal and ethical implications of advanced analytical and predictive tools. There are already active projects and groups at the Institute investigating autonomous systems, media and information quality, labor markets and the work of the future, innovation and the digital economy, and the role of AI in the legal system.

In all its activities, MIT IQ is intended to take advantage of — and strengthen — the Institute’s culture of collaboration. MIT IQ will connect and amplify existing excellence across labs and centers already engaged in intelligence research.

Join our quest.

Still Not Decoded...,


Smithsonian | The Voynich Manuscript has baffled cryptographers ever since the early 15th-century document was rediscovered by a Polish book dealer in 1912. The handwritten, 240-page screed, now housed in Yale University’s Beinecke Rare Book & Manuscript Library, is written from left to right in an unknown language. On top of that, the text itself is likely to have been scrambled by an unknown code. Despite numerous attempts to crack the code by some of the world’s best cryptographers, including Alan Turing and the Bletchley Park team, the contents of the enigmatic book have long remained a mystery. But that hasn’t stopped people from trying. The latest to give it a stab? The Artificial Intelligence Lab at the University of Alberta.

Bob Weber at the Canadian Press reports that natural language processing expert Greg Kondrak and grad student Bradley Hauer have attempted to identify the language the manuscript was written in using AI. According to a press release, the team originally believed that the manuscript was written in Arabic. But after the text was fed to an AI trained to recognize 380 languages with 97 percent accuracy, analysis of the letter frequencies suggested the text was likely written in Hebrew. 

“That was surprising,” Kondrak says. They then hypothesized that the words were alphagrams, in which the letters are shuffled and the vowels are dropped. When they unscrambled the first line of text using that method, they found that 80 percent of the words created were found in the Hebrew dictionary. The research appears in the journal Transactions of the Association for Computational Linguistics.
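The alphagram hypothesis is simple enough to sketch. The toy English lexicon below stands in for the Hebrew dictionary the researchers used; the encoding itself (drop the vowels, sort what remains) follows their description.

```python
# Toy stand-in lexicon (the real work matched against a Hebrew dictionary).
lexicon = ["made", "dame", "mead", "priest", "house", "people", "man"]

VOWELS = set("aeiou")

def alphagram(word):
    # The hypothesized encoding: drop vowels, then sort the letters,
    # so a ciphered token loses both vowels and letter order.
    return "".join(sorted(c for c in word if c not in VOWELS))

# Index dictionary words by their alphagram.
index = {}
for w in lexicon:
    index.setdefault(alphagram(w), []).append(w)

def candidates(cipher_token):
    # All lexicon words consistent with a ciphered token.
    return index.get(alphagram(cipher_token), [])
```

Note the ambiguity this creates: "made", "dame" and "mead" all collapse to the same alphagram, which is one reason the decoded first line needed a human Hebrew speaker (and some spelling repair) before it read as anything at all.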

Neither of the researchers are schooled in ancient Hebrew, so George Dvorsky at Gizmodo reports they took their deciphered first line to computer scientist Moshe Koppel, a colleague and native Hebrew speaker. He said it didn’t form a coherent sentence. After the team fixed some funky spelling errors and ran it through Google Translate, they came up with something readable, even if it doesn’t make much sense: “She made recommendations to the priest, man of the house and me and people.”

Thursday, December 14, 2017

Backpropagation: The Beginning of a Revolution or the End of One?


technologyreview |  I’m standing in what is soon to be the center of the world, or is perhaps just a very large room on the seventh floor of a gleaming tower in downtown Toronto. Showing me around is Jordan Jacobs, who cofounded this place: the nascent Vector Institute, which opens its doors this fall and which is aiming to become the global epicenter of artificial intelligence.

We’re in Toronto because Geoffrey Hinton is in Toronto, and Geoffrey Hinton is the father of “deep learning,” the technique behind the current excitement about AI. “In 30 years we’re going to look back and say Geoff is Einstein—of AI, deep learning, the thing that we’re calling AI,” Jacobs says. Of the researchers at the top of the field of deep learning, Hinton has more citations than the next three combined. His students and postdocs have gone on to run the AI labs at Apple, Facebook, and OpenAI; Hinton himself is a lead scientist on the Google Brain AI team. In fact, nearly every achievement in the last decade of AI—in translation, speech recognition, image recognition, and game playing—traces in some way back to Hinton’s work.

The Vector Institute, this monument to the ascent of ­Hinton’s ideas, is a research center where companies from around the U.S. and Canada—like Google, and Uber, and Nvidia—will sponsor efforts to commercialize AI technologies. Money has poured in faster than Jacobs could ask for it; two of his cofounders surveyed companies in the Toronto area, and the demand for AI experts ended up being 10 times what Canada produces every year. Vector is in a sense ground zero for the now-worldwide attempt to mobilize around deep learning: to cash in on the technique, to teach it, to refine and apply it. Data centers are being built, towers are being filled with startups, a whole generation of students is going into the field.

The impression you get standing on the Vector floor, bare and echoey and about to be filled, is that you’re at the beginning of something. But the peculiar thing about deep learning is just how old its key ideas are. Hinton’s breakthrough paper, with colleagues David Rumelhart and Ronald Williams, was published in 1986. The paper elaborated on a technique called backpropagation, or backprop for short. Backprop, in the words of Jon Cohen, a computational psychologist at Princeton, is “what all of deep learning is based on—literally everything.”

When you boil it down, AI today is deep learning, and deep learning is backprop—which is amazing, considering that backprop is more than 30 years old. It’s worth understanding how that happened—how a technique could lie in wait for so long and then cause such an explosion—because once you understand the story of backprop, you’ll start to understand the current moment in AI, and in particular the fact that maybe we’re not actually at the beginning of a revolution. Maybe we’re at the end of one.
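For readers who have never seen backprop spelled out, here is a minimal sketch: a two-layer network trained on XOR by applying the chain rule layer by layer. The architecture, learning rate, and iteration count are illustrative choices, not anything taken from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 2 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # XOR targets

def loss_and_grads():
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)
    # Backward pass: the chain rule applied layer by layer ("backprop").
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    return loss, (dW1, db1, dW2, db2)

lr = 1.0
losses = []
for _ in range(2000):
    loss, (dW1, db1, dW2, db2) = loss_and_grads()
    losses.append(loss)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The backward pass is the whole trick: the error signal at the output is pushed back through each layer's weights, telling every unit how much it contributed to the mistake. That credit-assignment step is what Cohen means by "what all of deep learning is based on."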


Sunday, December 10, 2017

Next Up: Ekmanized Pre-Cog Face-Reading AI...,

https://s3-us-west-1.amazonaws.com/emogifs/map.html


berkeley |  Using novel statistical models to analyze the responses of more than 800 men and women to over 2,000 emotionally evocative video clips, UC Berkeley researchers identified 27 distinct categories of emotion and created a multidimensional, interactive map to show how they’re connected.

Their findings are published this week in the early edition of the Proceedings of the National Academy of Sciences journal.

“We found that 27 distinct dimensions, not six, were necessary to account for the way hundreds of people reliably reported feeling in response to each video,” said study senior author Dacher Keltner, a UC Berkeley psychology professor and expert on the science of emotions.

Moreover, in contrast to the notion that each emotional state is an island, the study found that “there are smooth gradients of emotion between, say, awe and peacefulness, horror and sadness, and amusement and adoration,” Keltner said.

“We don’t get finite clusters of emotions in the map because everything is interconnected,” said study lead author Alan Cowen, a doctoral student in neuroscience at UC Berkeley. “Emotional experiences are so much richer and more nuanced than previously thought.”

“Our hope is that our findings will help other scientists and engineers more precisely capture the emotional states that underlie moods, brain activity and expressive signals, leading to improved psychiatric treatments, an understanding of the brain basis of emotion and technology responsive to our emotional needs,” he added.
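The underlying question (how many dimensions are needed to account for the ratings) can be illustrated with a toy principal-components analysis. The synthetic data, the planted dimensionality, and the 95 percent variance threshold below are all assumptions for illustration; the study's own statistical models were more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the real data: each row is one video, each column
# a mean rating on one emotion category (numbers are synthetic).
n_videos, n_categories = 200, 10
latent_dims = 3                       # the true dimensionality we plant

latent = rng.normal(size=(n_videos, latent_dims))
mixing = rng.normal(size=(latent_dims, n_categories))
ratings = latent @ mixing + 0.05 * rng.normal(size=(n_videos, n_categories))

# PCA by hand: how many dimensions are needed to explain the ratings?
centered = ratings - ratings.mean(0)
cov = centered.T @ centered / (n_videos - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = np.cumsum(eigvals) / eigvals.sum()

# The first `latent_dims` components account for nearly all the
# variance, mirroring the study's question of how many dimensions
# (27, it turned out) real self-reports require.
dims_needed = int(np.searchsorted(explained, 0.95)) + 1
```

In the toy data the planted dimensions recover the variance almost exactly; in the real study the analogous analysis is what yielded 27 dimensions rather than the classic 6.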

Tuesday, November 28, 2017

Knowledge Engineering: Human "Intelligence" Mirrors That of Eusocial Insects


Cambridge |  The World Wide Web has had a notable impact on a variety of epistemically-relevant activities, many of which lie at the heart of the discipline of knowledge engineering. Systems like Wikipedia, for example, have altered our views regarding the acquisition of knowledge, while citizen science systems such as Galaxy Zoo have arguably transformed our approach to knowledge discovery. Other Web-based systems have highlighted the ways in which the human social environment can be used to support the development of intelligent systems, either by contributing to the provision of epistemic resources or by helping to shape the profile of machine learning. In the present paper, such systems are referred to as ‘knowledge machines’. In addition to providing an overview of the knowledge machine concept, the present paper reviews a number of issues that are associated with the scientific and philosophical study of knowledge machines. These include the potential impact of knowledge machines on the theory and practice of knowledge engineering, the role of social participation in the realization of intelligent systems, and the role of standardized, semantically enriched data formats in supporting the ad hoc assembly of special-purpose knowledge systems and knowledge processing pipelines.

Knowledge machines are a specific form of social machine that is concerned with the sociotechnical realization of a broad range of knowledge processes. These include processes that are the traditional focus of the discipline of knowledge engineering, for example, knowledge acquisition, knowledge modeling and the development of knowledge-based systems.

In the present paper, I have sought to provide an initial overview of the knowledge machine concept, and I have highlighted some of the ways in which the knowledge machine concept can be applied to existing areas of research. In particular, the present paper has identified a number of examples of knowledge machines (see Section 3), discussed some of the mechanisms that underlie their operation (see Section 5), and highlighted the role of Web technologies in supporting the emergence of ever-larger knowledge processing organizations (see Section 8). The paper has also highlighted a number of opportunities for collaboration between a range of disciplines. These include the disciplines of knowledge engineering, WAIS, sociology, philosophy, cognitive science, data science, and machine learning.

Given that our success as a species is, at least to some extent, predicated on our ability to manufacture, represent, communicate and exploit knowledge (see Gaines 2013), there can be little doubt about the importance and relevance of knowledge machines as a focus area for future scientific and philosophical enquiry. In addition to their ability to harness the cognitive and epistemic capabilities of the human social environment, knowledge machines provide us with a potentially important opportunity to scaffold the development of new forms of machine intelligence. Just as much of our own human intelligence may be rooted in the fact that we are born into a superbly structured and deliberately engineered environment (see Sterelny 2003), so too the next generation of synthetic intelligent systems may benefit from a rich and structured informational environment that houses the sum total of human knowledge. In this sense, knowledge machines are important not just with respect to the potential transformation of our own (human) epistemic capabilities, they are also important with respect to the attempt to create the sort of environments that enable future forms of intelligent system to press maximal benefit from the knowledge that our species has managed to create and codify.

Monday, November 20, 2017

Slaughterbots


sfgate |   In the video above, the technology is initially developed with the intention of combating crime and terrorism, but the drones are taken over by unknown forces who use the powerful weapons to murder a group of senators and college students. The video does contain some graphic content.

Russell, an expert on artificial intelligence, appears at the end of the video and warns against humanity's development of autonomous weapons.

"This short film is more than just speculation," Russell says. "It shows the results of integrating and militarizing technologies that we already have."  Fist tap Big Don.

The Human Strategy


edge |  The big question that I'm asking myself these days is how can we make a human artificial intelligence? Something that is not a machine, but rather a cyber culture that we can all live in as humans, with a human feel to it. I don't want to think small—people talk about robots and stuff—I want this to be global. Think Skynet. But how would you make Skynet something that's really about the human fabric?

The first thing you have to ask is what's the magic of the current AI? Where is it wrong and where is it right?

The good magic is that it has something called the credit assignment function. What that lets you do is take stupid neurons, these little linear functions, and figure out, in a big network, which ones are doing the work and encourage them more. It's a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn't. It sounds pretty simple, but it's got some complicated math around it. That's the magic that makes AI work.

The bad part of that is, because those little neurons are stupid, the things that they learn don't generalize very well. If it sees something that it hasn't seen before, or if the world changes a little bit, it's likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it's as far from Wiener's original notion of cybernetics as you can get because it's not contextualized: it's this little idiot savant.

But imagine that you took away these limitations of current AI. Instead of using dumb neurons, you used things that embedded some knowledge. Maybe instead of linear neurons, you used neurons that were functions in physics, and you tried to fit physics data. Or maybe you put in a lot of stuff about humans and how they interact with each other, the statistics and characteristics of that. When you do that and you add this credit assignment function, you take your set of things you know about—either physics or humans, and a bunch of data—in order to reinforce the functions that are working, then you get an AI that works extremely well and can generalize.

In physics, you can take a couple of noisy data points and get something that's a beautiful description of a phenomenon because you're putting in knowledge about how physics works. That's in huge contrast to normal AI, which takes millions of training examples and is very sensitive to noise. Or the things that we've done with humans, where you can put in things about how people come together and how fads happen. Suddenly, you find you can detect fads and predict trends in spectacularly accurate and efficient ways.

Human behavior is determined as much by the patterns of our culture as by rational, individual thinking. These patterns can be described mathematically, and used to make accurate predictions. We’ve taken this new science of “social physics” and expanded upon it, making it accessible and actionable by developing a predictive platform that uses big data to build a predictive, computational theory of human behavior.

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?
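One classic way to "reinforce the ones that were helping", whether the units are neurons or people, is the multiplicative-weights scheme sketched below. The experts, examples, and learning rate are toy assumptions, not Pentland's actual platform.

```python
# A pool of simple predictors ("experts" -- they could be neurons,
# or people), each weighted down whenever it fails to help.
def multiplicative_weights(experts, examples, eta=0.5):
    weights = [1.0] * len(experts)
    for x, truth in examples:
        for i, expert in enumerate(experts):
            if expert(x) != truth:
                weights[i] *= (1 - eta)   # discourage the unhelpful
    return weights

# Three toy experts: one reliable, one always wrong, one hit-or-miss.
experts = [
    lambda x: x > 0,          # matches the ground truth below
    lambda x: x <= 0,         # always wrong
    lambda x: x % 2 == 0,     # right only by coincidence
]
examples = [(x, x > 0) for x in range(-5, 6)]

weights = multiplicative_weights(experts, examples)
```

After one pass the reliable expert keeps its full weight, the always-wrong expert's weight has nearly vanished, and the hit-or-miss one sits in between: the credit assignment function has sorted out "which ones are doing the work" without knowing anything about the experts themselves.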

Way of the Future


wired |  The new religion of artificial intelligence is called Way of the Future. It represents an unlikely next act for Anthony Levandowski, the Silicon Valley robotics wunderkind at the center of a high-stakes legal battle between Uber and Waymo, Alphabet’s autonomous-vehicle company. Papers filed with the Internal Revenue Service in May name Levandowski as the leader (or “Dean”) of the new religion, as well as CEO of the nonprofit corporation formed to run it.

The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year.”

That timeline may be overly ambitious, given that the Waymo-Uber suit, in which Levandowski is accused of stealing self-driving car secrets, is set for an early December trial. But the Dean of the Way of the Future, who spoke last week with Backchannel in his first comments about the new religion and his only public interview since Waymo filed its suit in February, says he’s dead serious about the project.

“What is going to be created will effectively be a god,” Levandowski tells me in his modest mid-century home on the outskirts of Berkeley, California. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

During our three-hour interview, Levandowski made it absolutely clear that his choice to make WOTF a church rather than a company or a think tank was no prank.

“I wanted a way for everybody to participate in this, to be able to shape it. If you’re not a software engineer, you can still help,” he says. “It also removes the ability for people to say, ‘Oh, he’s just doing this to make money.’” Levandowski will receive no salary from WOTF, and while he says that he might consider an AI-based startup in the future, any such business would remain completely separate from the church.

“The idea needs to spread before the technology,” he insists. “The church is how we spread the word, the gospel. If you believe [in it], start a conversation with someone else and help them understand the same things.”

Levandowski believes that a change is coming—a change that will transform every aspect of human existence, disrupting employment, leisure, religion, and the economy, and possibly deciding our very survival as a species.

“If you ask people whether a computer can be smarter than a human, 99.9 percent will say that’s science fiction,” he says. “Actually, it’s inevitable. It’s guaranteed to happen.”

Monday, October 09, 2017

How Can You Harness Machine Intelligence If Cognitive Elites Struggle to Understand It?


Booz-Allen  |  On Thursday, July 20, China’s State Council released the New Generation of Artificial Intelligence Development Plan. Numbering nearly 40 pages, the plan lays out China’s aspirations in impressive detail. It introduces massive investment that aims to position China at the forefront of technological achievement by cultivating the governmental, economic, and academic ecosystems to drive breakthroughs in machine intelligence. To achieve these goals, the Council aims to harness the data produced by the internet-connected devices of more than a billion Chinese citizens, a vast web of “intelligent things.”

The plan also details the strategic situation precipitating the need for a bold new vision: “Machine intelligence [is] the strategic technology that will lead in the future; the world’s major developed countries are taking the development of AI as a major strategy… [We] must, looking at the world, take the initiative to plan [and] firmly seize the [technology] in this new stage of international competition.”[i]

The China State Council’s plan evokes a document that marked the beginning of the defining global technological competition of the last century—the space race. In August 1958, ten months after watching the Soviet Union launch Sputnik 1, President Dwight Eisenhower’s administration released the U.S. Policy on Outer Space. In it, the U.S. National Security Council (NSC) urged massive investment to cultivate the talent and technology base necessary to exceed the Soviet Union’s achievements in space.
The NSC included an urgent mandate to act, declaring, “The starkest facts which confront the United States in the immediate and foreseeable future are [that] the USSR has surpassed the U.S. and the Free World in scientific and technological accomplishments in outer space, [and] if it maintains its present superiority…will be able to use [it] as a means of undermining the prestige and leadership of the United States.”[ii]

The United States is now at the precipice of another defining moment in history. The world’s greatest powers are entering a technological contest that will parallel or exceed the space race in the magnitude of its economic, geopolitical, and cultural consequences. Maintaining our role as a global superpower requires us to achieve parity, and ideally dominance, in the race to a future powered by intelligent machines. Moreover, we must develop a comprehensive national strategy for maintaining this technological advantage while also advancing our economy, preserving our social norms and values, and protecting our citizens’ dignity, privacy, and equality.

Friday, October 06, 2017

Some Will Be Human, Others Will Just Do What The "Machine" Suggests...,



bloombergview |  "Politics" might even be the wrong word for it. As Bishop describes it in his book (and Andrew Sullivan does in a new essay in New York magazine), the divide is really more about identities, or tribes. Policy disputes aren't what separates us so much as differences in attitude and language. Which is why Donald Trump, despite his New York County address and all-over-the-political-map statements on the campaign trail, seems to be mainly just deepening the divide. Attitude and language -- and loyalty -- are what he cares about most, so those are the buttons that he keeps pushing.

How do we counter this growing tribalism, and growing conviction that those outside our tribes are motivated by Satanic impulses? I don't know! If I did, I would have told you already. Still, as a starting place, I recommend this: Just assume that those who disagree with you politically are really, really stupid.

Wednesday, October 04, 2017

Access the Guardian Through a Raspberry Pi? Of Course...,


wikipedia |  Main article: Wolfram Language

In June 2014, Wolfram officially announced the Wolfram Language as a new general multi-paradigm programming language.[65] The documentation for the language was pre-released in October 2013 to coincide with the bundling of Mathematica and the Wolfram Language on every Raspberry Pi computer. While the Wolfram Language has existed for over 25 years as the primary programming language used in Mathematica, it was not officially named until 2014.[66] Wolfram's son, Christopher Wolfram, appeared on the program of SXSW giving a live-coding demonstration using Wolfram Language[67] and has blogged about Wolfram Language for Wolfram Research.[68]

On 8 December 2015, Wolfram published the book "An Elementary Introduction to the Wolfram Language" to introduce people, with no knowledge of programming, to the Wolfram Language and the kind of computational thinking it allows.[69] The release of the second edition of the book[70] coincided with a "CEO for hire" competition during the 2017 Collision tech conference.[71]

Both Stephen Wolfram and Christopher Wolfram were involved in helping create the alien language for the film Arrival, for which they used the Wolfram Language.[72][73][74]

An Introduction to the Wolfram Language Online

A New Kind of Science


wikipedia |  The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. Since its beginnings in the 1930s, computation has been primarily approached from two traditions: engineering, which seeks to build practical systems using computation; and mathematics, which seeks to prove theorems about computation. However, as recently as the 1970s, computing was described as being at the crossroads of mathematical, engineering, and empirical traditions.[2][3]

Wolfram introduces a third tradition that seeks to empirically investigate computation for its own sake: He argues that an entirely new method is needed to do so because traditional mathematics fails to meaningfully describe complex systems, and that there is an upper limit to complexity in all systems.[4]

Simple programs

The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases (after repeated iterations in which the same simple set of rules is applied to its own output). This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in one and two dimensions; several varieties of substitution and network systems; primitive recursive functions; nested recursive functions; combinators; tag systems; register machines; and reversal-addition. For a program to qualify as simple, there are several requirements:
  1. Its operation can be completely explained by a simple graphical illustration.
  2. It can be completely explained in a few sentences of human language.
  3. It can be implemented in a computer language using just a few lines of code.
  4. The number of its possible variations is small enough so that all of them can be computed.
Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems.

The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence.

A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.

Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.
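The phenomenon described above is easy to reproduce. Here is a minimal sketch in Python (not from the book, which works in Mathematica/Wolfram Language) of an elementary cellular automaton, using Rule 30, one of the book's central examples; a single black cell and one eight-entry lookup table already generate an intricate, seemingly random triangle of structure:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton.

    The 8-bit rule number encodes the new cell value for each of the
    eight possible (left, center, right) neighborhoods.
    """
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# A single black cell evolved under one simple rule.
row = [0] * 63
row[31] = 1
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Because the rule is just an integer from 0 to 255, the same function also supports the brute-force enumeration the text mentions: loop `rule` over all 256 values and inspect what each one does.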

Mapping and mining the computational universe

In order to study simple rules and their often complex behaviour, Wolfram argues that it is necessary to systematically explore all of these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.

The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible. There are also some forms of production that are in some ways unique to this field of study. For example, the discovery of computational mechanisms that emerge in different systems but in bizarrely different forms.

Another kind of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.

Wolfram has since expressed "A central lesson of A New Kind of Science is that there’s a lot of incredible richness out there in the computational universe. And one reason that’s important is that it means that there’s a lot of incredible stuff out there for us to 'mine' and harness for our purposes."[5]

Stephen Wolfram


wikipedia |  As a young child, Wolfram initially struggled in school and had difficulties learning arithmetic.[28] At the age of 12, he wrote a dictionary on physics.[29] By 13 or 14, he had written three books on particle physics.[30][31][32] They have not been published.

Particle physics 
Wolfram was a wunderkind. By age 15, he had begun research in applied quantum field theory and particle physics and was publishing scientific papers. Topics included matter creation and annihilation, the fundamental interactions, elementary particles and their currents, hadronic and leptonic physics, and the parton model, published in professional peer-reviewed scientific journals including Nuclear Physics B, Australian Journal of Physics, Nuovo Cimento, and Physical Review D.[33] Working independently, Wolfram published a widely cited paper on heavy quark production at age 18[2] and nine other papers,[18] and continued to research and publish on particle physics into his early twenties. Wolfram's work with Geoffrey C. Fox on the theory of the strong interaction is still used in experimental particle physics.[34]

He was educated at Eton College, but left prematurely in 1976.[35] He entered St. John's College, Oxford at age 17 but found lectures "awful",[18] and left in 1978[36] without graduating[37][38] to attend the California Institute of Technology the following year, where he received a PhD[39] in particle physics on November 19, 1979 at age 20.[40] Wolfram's thesis committee was composed of Richard Feynman, Peter Goldreich, Frank J. Sciulli and Steven Frautschi, and chaired by Richard D. Field.[40][41]

A 1981 letter from Feynman to Gerald Freund giving reference for Wolfram for the MacArthur grant appears in Feynman's collective letters, Perfectly Reasonable Deviations from the Beaten Track. Following his PhD, Wolfram joined the faculty at Caltech and became the youngest recipient[42] of the MacArthur Fellowships in 1981, at age 21.[37]

Sunday, October 01, 2017

Quantum Criticality in Living Systems


phys.org  |  Stuart Kauffman, from the University of Calgary, and several of his colleagues have recently published a paper on the Arxiv server titled 'Quantum Criticality at the Origins of Life'. The idea of quantum criticality, and more generally of quantum critical states, comes, perhaps not surprisingly, from solid state physics. It describes unusual electronic states that are balanced somewhere between conduction and insulation. More specifically, under certain conditions, current flow at the critical point becomes unpredictable. When it does flow, it tends to do so in avalanches that vary by several orders of magnitude in size.

Ferromagnetic metals, like iron, are one familiar example of a material that has a classical critical point. Above a Curie temperature of 1043 degrees K the magnetization of iron is completely lost. In the narrow range approaching this point, however, thermal fluctuations in the electron spins that underlie the magnetic behavior extend over all length scales of the sample—that's the scale invariance we mentioned. In this case we have a continuous phase transition that is thermally driven, as opposed to being driven by something else like external pressure, magnetic field, or some kind of chemical influence.

Quantum criticality, on the other hand, is usually associated with stranger electronic behaviors—things like high-temperature superconductivity or so-called heavy fermion metals like CeRhIn5. One strange behavior in the case of heavy fermions, for example, is the observation of large 'effective mass'—mass up to 1000 times normal—for the conduction electrons as a consequence of their narrow electronic bands. These kinds of phenomena can only be explained in terms of the collective behavior of highly correlated electrons, as opposed to more familiar theory based on decoupled electrons. 

Experimental evidence for quantum critical points in materials like CeRhIn5 has only recently been found. In this case the so-called "Fermi surface," a three-dimensional map representing the collective energy states of all electrons in the material, was seen to have large instantaneous shifts at the critical points. When electrons across the entire Fermi surface are strongly coupled, unusual physics like superconductivity is possible.

The potential existence of quantum criticality in proteins is a new idea that will need some experimental evidence to back it up. Kauffman and his group eloquently describe the major differences between current flow in proteins as compared to metallic conductors. They note that in metals charges 'float' due to voltage differences. Here, an electric field accelerates electrons while scattering on impurities dissipates their energy, fixing a constant average propagation velocity.
By contrast, this kind of mechanism would appear to be uncommon in biological systems. The authors note that charges entering a critically conducting biomolecule will be under the joint influence of the quantum Hamiltonian and the excessive decoherence caused by the environment. Currently a major focus in quantum biology, this kind of conductance has been seen, for example, for excitons in light-harvesting systems. As might already be apparent here, the logical flow of the paper, at least to nonspecialists, quickly devolves into the more esoteric world of quantum Hamiltonians and niche concepts like 'Anderson localization.'

To try to catch a glimpse of what might be going on without becoming steeped in formalism I asked Luca Turin, who actually holds the patent for semiconductor structures using proteins as their active element, for his take on the paper. He notes that the question of how electrons get across proteins is one of the great unsolved problems in biophysics, and that the Kauffman paper points in a novel direction to possibly explain conduction. Quantum tunnelling (which is an essential process, for example, in the joint special ops of proteins of the respiratory chain) works fine over small distances. However, rates fall precipitously with distance. Traditional hole and electron transport mechanisms butt against the high bandgap and absence of obvious acceptor impurities. Yet at rest our body's fuel cell generates 100 amps of electron current.
 
In suggesting that biomolecules, or at least most of them, are quantum critical conductors, Kauffman and his group are claiming that their electronic properties are precisely tuned to the transition point between a metal and an insulator. An even stronger reading of this would hold that there is a universal mechanism of charge transport in living matter which can exist only in highly evolved systems. To back all this up the group took a closer look at the electronic structure of a few of our standard-issue proteins like myoglobin, profilin, and apolipoprotein E.

In particular, they selected NMR spectra from the Protein Data Bank and used a technique known as the extended Hückel Hamiltonian method to calculate HOMO/LUMO orbitals for the proteins. For more comments on HOMO/LUMO orbital calculations you might look at our post on Turin's experiments on electron spin changes as a potential general mechanism of anesthesia. To fully appreciate what such calculations might imply in this case, we have to toss out another fairly abstract concept, namely, Hofstadter's butterfly as seen in the picture below.

What is Life?


scribd |  Schrödinger unleashed modern molecular biology with his “What Is Life?”.[1] The order in biology must be due not to statistical processes attributable to statistical mechanics, but to the stability of the chemical bond. In one brilliant intuition, he said, “It will not be a periodic crystal, for these are dull. “Genes” will be an aperiodic crystal containing a microcode for the organism.” (my quotes around “genes”.) He was brilliantly right, but insufficient.

The structure of DNA followed, the code and genes turning one another on and off in some vast genetic regulatory network. Later work, including my own,[2] showed that such networks could behave with sufficient order for ontogeny or be enormously chaotic and no life could survive that chaos.

We biologists continue to think largely in terms of classical physics and chemistry, even about the origins of life, and life itself, despite Schrodinger’s clear message that life depends upon quantum mechanics.
 
In this short article, I wish to explore current “classical physics” ideas about the origin of life then introduce the blossoming field of quantum biology and within a newly discovered state of matter, The Poised Realm, hovering reversibly between quantum and “classical” worlds that may be fundamental to life. Life may be lived in the Poised Realm, with wide implications.

The widest implications are a hope for a union of the objective and subjective poles; the latter lost since Descartes’ Res cogitans failed and Newton triumphed with classical physics and Descartes’ Res extensa. What I shall say here is highly speculative.

2 Classical Physics and Chemistry Ideas about the Origin of Life
There are four broad views about the origin of life:
1) The RNA world view, dominant in the USA.
2) The spontaneous emergence of a “collectively autocatalytic set”, which might be RNA, peptides, both, or other molecular species.
3) Budding liposomes or other self-reproducing vesicles.
4) Metabolism first, with linked sets of chemical reaction cycles, which are autocatalytic in the sense that each produces an extra copy of at least one product per cycle. 

Almost all workers agree that however molecular reproduction may have occurred, it is plausibly the case that housing such a system in a liposome or similar vesicle is one way to confine reactants. Recent work suggests that a dividing liposome and reproducing molecular system will synchronize divisions, so could form a protocell, hopefully able to evolve to some extent.[3]
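The formal core of option (2), Kauffman's collectively autocatalytic set, can be sketched as a toy check in Python. This is only an illustrative sketch with made-up molecule names, not Kauffman's actual formalism (the full theory, e.g. RAF-set algorithms, is considerably richer): the idea is that no molecule need catalyze its own formation, so long as every reaction is catalyzed by something the set as a whole produces.

```python
def is_collectively_autocatalytic(food, reactions):
    """Toy check: every reaction's reactants are available (from the food
    set or as a product of the set), and every reaction is catalyzed by a
    molecule the set itself supplies.

    Each reaction is a (reactants, product, catalyst) tuple.
    """
    available = set(food) | {product for _, product, _ in reactions}
    return all(set(reactants) <= available and catalyst in available
               for reactants, product, catalyst in reactions)

# Two dimers that each catalyze the other's formation: the pair is
# collectively autocatalytic although neither catalyzes its own synthesis.
reactions = [
    (("a", "b"), "ab", "ba"),   # a + b -> ab, catalyzed by ba
    (("b", "a"), "ba", "ab"),   # b + a -> ba, catalyzed by ab
]
print(is_collectively_autocatalytic({"a", "b"}, reactions))  # True
```

Replacing either catalyst with a molecule the set never produces makes the check fail, which is the sense in which catalytic closure is a property of the set, not of any single molecule.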


Friday, September 29, 2017

Why the Future Doesn't Need Us


ecosophia |  Let’s start with the concept of the division of labor. One of the great distinctions between a modern industrial society and other modes of human social organization is that in the former, very few activities are taken from beginning to end by the same person. A woman in a hunter-gatherer community, as she is getting ready for the autumn tuber-digging season, chooses a piece of wood, cuts it, shapes it into a digging stick, carefully hardens the business end in hot coals, and then puts it to work getting tubers out of the ground. Once she carries the tubers back to camp, what’s more, she’s far more likely than not to take part in cleaning them, roasting them, and sharing them out to the members of the band.

A woman in a modern industrial society who wants to have potatoes for dinner, by contrast, may do no more of the total labor involved in that process than sticking a package in the microwave. Even if she has potatoes growing in a container garden out back, say, and serves up potatoes she grew, harvested, and cooked herself, odds are she didn’t make the gardening tools, the cookware, or the stove she uses. That’s division of labor: the social process by which most members of an industrial society specialize in one or another narrow economic niche, and use the money they earn from their work in that niche to buy the products of other economic niches.

Let’s say it up front: there are huge advantages to the division of labor.  It’s more efficient in almost every sense, whether you’re measuring efficiency in terms of output per person per hour, skill level per dollar invested in education, or what have you. What’s more, when it’s combined with a social structure that isn’t too rigidly deterministic, it’s at least possible for people to find their way to occupational specialties for which they’re actually suited, and in which they will be more productive than otherwise. Yet it bears recalling that every good thing has its downsides, especially when it’s pushed to extremes, and the division of labor is no exception.

Crackpot realism is one of the downsides of the division of labor. It emerges reliably whenever two conditions are in effect. The first condition is that the task of choosing goals for an activity is assigned to one group of people and the task of finding means to achieve those goals is left to a different group of people. The second condition is that the first group needs to be enough higher in social status than the second group that members of the first group need pay no attention to the concerns of the second group.

Consider, as an example, the plight of a team of engineers tasked with designing a flying car.  People have been trying to do this for more than a century now, and the results are in: it’s a really dumb idea. It so happens that a great many of the engineering features that make a good car make a bad aircraft, and vice versa; for instance, an auto engine needs to be optimized for torque rather than speed, while an aircraft engine needs to be optimized for speed rather than torque. Thus every flying car ever built—and there have been plenty of them—performed just as poorly as a car as it did as a plane, and cost so much that for the same price you could buy a good car, a good airplane, and enough fuel to keep both of them running for a good long time.

Engineers know this. Still, if you’re an engineer and you’ve been hired by some clueless tech-industry godzillionaire who wants a flying car, you probably don’t have the option of telling your employer the truth about his pet project—that is, that no matter how much of his money he plows into the project, he’s going to get a clunker of a vehicle that won’t be any good at either of its two incompatible roles—because he’ll simply fire you and hire someone who will tell him what he wants to hear. Nor do you have the option of sitting him down and getting him to face what’s behind his own unexamined desires and expectations, so that he might notice that his fixation on having a flying car is an emotionally charged hangover from age eight, when he daydreamed about having one to help him cope with the miserable, bully-ridden public school system in which he was trapped for so many wretched years. So you devote your working hours to finding the most rational, scientific, and utilitarian means to accomplish a pointless, useless, and self-defeating end. That’s crackpot realism.

You can make a great party game out of identifying crackpot realism—try it sometime—but I’ll leave that to my more enterprising readers. What I want to talk about right now is one of the most glaring examples of crackpot realism in contemporary industrial society. Yes, we’re going to talk about space travel again.

Leaving Labels Aside For A Moment - Netanyahu's Reality Is A Moral Abomination

This video will be watched in schools and Universities for generations to come, when people will ask the question: did we know what was real...