Showing posts with label FRANK.

Tuesday, June 12, 2018

"Privacy" Isn't What's Really At Stake...,


NewYorker  |  The question about national security and personal convenience is always: At what price? What do we have to give up? On the criminal-justice side, law enforcement is in an arms race with lawbreakers. Timothy Carpenter was allegedly able to orchestrate an armed-robbery gang in two states because he had a cell phone; the law makes it difficult for police to learn how he used it. Thanks to lobbying by the National Rifle Association, federal law prohibits the National Tracing Center from using a searchable database to identify the owners of guns seized at crime scenes. Whose privacy is being protected there?

Most citizens feel glad for privacy protections like the one in Griswold, but are less invested in protections like the one in Katz. In “Habeas Data,” Cyrus Farivar analyzes ten Fourth Amendment cases; all ten of the plaintiffs were criminals. We want their rights to be observed, but we also want them locked up.

On the commercial side, are the trade-offs equivalent? The market-theory expectation is that if there is demand for greater privacy then competition will arise to offer it. Services like Signal and WhatsApp already do this. Consumers will, of course, have to balance privacy with convenience. The question is: Can they really? The General Data Protection Regulation went into effect on May 25th, and privacy-advocacy groups in Europe are already filing lawsuits claiming that the policy updates circulated by companies like Facebook and Google are not in compliance. How can you ever be sure who is eating your cookies?

Possibly the discussion is using the wrong vocabulary. “Privacy” is an odd name for the good that is being threatened by commercial exploitation and state surveillance. Privacy implies “It’s nobody’s business,” and that is not really what Roe v. Wade is about, or what the E.U. regulations are about, or even what Katz and Carpenter are about. The real issue is the one that Pollak and Martin, in their suit against the District of Columbia in the Muzak case, said it was: liberty. This means the freedom to choose what to do with your body, or who can see your personal information, or who can monitor your movements and record your calls—who gets to surveil your life and on what grounds.

As we are learning, the danger of data collection by online companies is not that they will use it to try to sell you stuff. The danger is that that information can so easily fall into the hands of parties whose motives are much less benign. A government, for example. A typical reaction to worries about the police listening to your phone conversations is the one Gary Hart had when it was suggested that reporters might tail him to see if he was having affairs: “You’d be bored.” They were not, as it turned out. We all may underestimate our susceptibility to persecution. “We were just talking about hardwood floors!” we say. But authorities who feel emboldened by the promise of a Presidential pardon or by a Justice Department that looks the other way may feel less inhibited about invading the spaces of people who belong to groups that the government has singled out as unpatriotic or undesirable. And we now have a government that does that. 


Wednesday, May 23, 2018

Afrikan Liberation Movement - Amazon Giving RealTime Facial Rekognition To Law Enforcement


WaPo |  Amazon has been essentially giving away facial recognition tools to law enforcement agencies in Oregon and Orlando, according to documents obtained by the American Civil Liberties Union of Northern California, paving the way for a rollout of technology that is causing concern among civil rights groups.

Amazon is providing the technology, known as Rekognition, as well as consulting services, according to the documents, which the ACLU obtained through a Freedom of Information Act request.

A coalition of civil rights groups, in a letter released Tuesday, called on Amazon to stop selling the program to law enforcement because it could lead to the expansion of surveillance of vulnerable communities.

“We demand that Amazon stop powering a government surveillance infrastructure that poses a grave threat to customers and communities across the country,” the groups wrote in the letter.

Amazon spokeswoman Nina Lindsey did not directly address the concerns of civil rights groups. “Amazon requires that customers comply with the law and be responsible when they use AWS services,” she said, referring to Amazon Web Services, the company’s cloud software division that houses the facial recognition program. “When we find that AWS services are being abused by a customer, we suspend that customer’s right to use our services.”

She said that the technology has many useful purposes, including finding abducted people. Amusement parks have used it to locate lost children. During the royal wedding this past weekend, clients used Rekognition to identify wedding attendees, she said. (Amazon founder Jeffrey P. Bezos is the owner of The Washington Post.)

The details about Amazon’s program illustrate the proliferation of cutting-edge technologies deep into American society — often without public vetting or debate. Axon, the maker of Taser electroshock weapons and the wearable body cameras for police, has voiced interest in pursuing face recognition for its body-worn cameras, prompting a similar backlash from civil rights groups.  Hundreds of Google employees protested last month to demand that the company stop providing artificial intelligence to the Pentagon to help analyze drone footage.

Urban Reconnaissance Through Supervised Autonomy (URSA)


DARPA |  DARPA’s Tactical Technology Office is hosting a Proposers Day to provide information to potential applicants on the structure and objectives of the new Urban Reconnaissance through Supervised Autonomy (URSA) program. URSA aims to develop technology to enable autonomous systems operated and supervised by U.S. ground forces to detect hostile forces and establish positive identification of combatants before U.S. troops encounter them. The URSA program seeks to overcome the inherent complexity of the urban environment by combining new knowledge about human behaviors, autonomy algorithms, integrated sensors, multiple sensor modalities, and measurable human responses to discriminate the subtle differences between hostile individuals and noncombatants. Additional details are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-18-48/listing.html

To register, visit https://www.client-meeting.net/Proposers-Day-May-2018. Registration closes at 4:00 PM ET on April 25, 2018.

Please address administrative questions to DARPA-SN-18-48@darpa.mil, and refer to the URSA Proposers Day (DARPA-SN-18-48) in all correspondence. 

DARPA hosts Proposers Days to provide potential performers with information on whether and how they might respond to the Government’s research and development solicitations and to increase efficiency in proposal preparation and evaluation. Therefore, the URSA Proposers Day is open only to registered potential applicants, and not to the media or general public.

Full URSA program details will be made available in a forthcoming Broad Agency Announcement posted to the Federal Business Opportunities website. 



Saturday, April 28, 2018

Silly Peasants, Open Facebook Got NOTHING On Open "Consumer" DNA...,



NYTimes |  The California police had the Golden State Killer’s DNA and recently found an unusually well-preserved sample from one of the crime scenes. The problem was finding a match.

But these days DNA is stored in many places, and a near-match ultimately was found in a genealogy website beloved by hobbyists called GEDmatch, created by two volunteers in 2011.

Anyone can set up a free profile on GEDmatch. Many customers upload to the site DNA profiles they have already generated on larger commercial sites like 23andMe.

The detectives in the Golden State Killer case uploaded the suspect’s DNA sample. But they would have had to check a box online certifying that the DNA was their own or belonged to someone for whom they were legal guardians, or that they had “obtained authorization” to upload the sample.

“The purpose was to make these connections and to find these relatives,” said Blaine Bettinger, a lawyer affiliated with GEDmatch. “It was not intended to be used by law enforcement to identify suspects of crimes.”

But joining for that purpose does not technically violate site policy, he added.

Erin Murphy, a law professor at New York University and expert on DNA searches, said that using a fake identity might raise questions about the legality of the evidence.

The matches found in GEDmatch were to relatives of the suspect, not the suspect himself.

Since the site provides family trees, detectives also were able to look for relatives who might not have uploaded genetic data to the site themselves. 
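
To make the "near-match" idea concrete, here is a toy Python sketch of relative-finding by genotype comparison. It is an assumption-level illustration only: real services such as GEDmatch infer relatedness from the lengths of shared DNA segments measured in centimorgans, and the profiles, thresholds, and labels below are invented.

# Toy sketch: estimate relatedness from the fraction of matching SNP genotypes.
# Real matching uses shared-segment (IBD) analysis; these cutoffs are invented.

def shared_fraction(profile_a, profile_b):
    """Fraction of SNP positions where two profiles carry the same genotype."""
    common = set(profile_a) & set(profile_b)
    if not common:
        return 0.0
    matches = sum(1 for snp in common if profile_a[snp] == profile_b[snp])
    return matches / len(common)

def likely_relationship(fraction):
    if fraction > 0.99:
        return "same person or identical twin"
    if fraction > 0.85:
        return "close relative (parent/child/sibling range)"
    if fraction > 0.70:
        return "distant relative worth investigating"
    return "no meaningful match"

crime_scene = {"rs1": "AG", "rs2": "CC", "rs3": "TT", "rs4": "AG"}
site_user   = {"rs1": "AG", "rs2": "CC", "rs3": "CT", "rs4": "AG"}

f = shared_fraction(crime_scene, site_user)
print(round(f, 2), "->", likely_relationship(f))  # 0.75 -> distant relative worth investigating

The investigative leverage is the point of the sketch: a partial match to any one relative who uploaded a profile can place you in the suspect pool, which is how the Golden State Killer search proceeded.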

Tuesday, March 27, 2018

Governance Threat Is Not Russians, Cambridge Analytica, Etc, But Surveillance Capitalism Itself...,


newstatesman |  It’s been said in some more breathless quarters of the internet that this is the “data breach” that could have “caused Brexit”. Given it was a US-focused bit of harvesting, that would be the most astonishing piece of political advertising success in history – especially as among the big players in the political and broader online advertising world, Cambridge Analytica are not well regarded: some of the people who are best at this regard them as little more than “snake oil salesmen”. 

One of the key things this kind of data would be useful for – and what the original academic study it came from looked into – is finding what Facebook Likes correlate with personality traits, or other Facebook likes. 

The dream scenario for this would be to find that every woman in your sample who liked “The Republican Party” also liked “Chick-Fil-A”, “Taylor Swift” and “Nascar racing”. That way, you could target ads at people who liked the latter three – but not the former – knowing you had a good chance of reaching people likely to appreciate the message you’ve got. This is a pretty widely used, but crude, bit of Facebook advertising. 
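
As a concrete illustration of that proxy-audience trick, here is a minimal Python sketch. The data is invented and this is not Facebook's ad API; it just shows the set logic of targeting people who like the three correlated pages but not the explicitly political one.

# Invented like-data; the targeting logic is simple set arithmetic.
users = {
    "u1": {"The Republican Party", "Chick-Fil-A", "Taylor Swift", "Nascar racing"},
    "u2": {"Chick-Fil-A", "Taylor Swift", "Nascar racing"},
    "u3": {"Taylor Swift", "Knitting"},
    "u4": {"Chick-Fil-A", "Taylor Swift", "Nascar racing", "Knitting"},
}
proxies = {"Chick-Fil-A", "Taylor Swift", "Nascar racing"}
explicit = "The Republican Party"

# Target users who like all three proxy pages but not the explicit one.
audience = [u for u, likes in users.items()
            if proxies <= likes and explicit not in likes]
print(audience)  # ['u2', 'u4']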

When people talk about it being possible Cambridge Analytica used this information to build algorithms which could still be useful after all the original data was deleted, this is what they’re talking about – and that’s possible, but missing a much, much bigger bit of the picture.

So, everything’s OK then?

No. Look at it this way: the data we’re all getting excited about here is a sample of public profile information from 50 million users, harvested from 270,000 people. 

Facebook itself, daily, has access to all of that public information, and much more, from a sample of two billion people – a sample around 7,000 times larger than the Cambridge Analytica one, and one much deeper and richer thanks to its real-time updating status. 

If Facebook wants to offer sales based on correlations – for advertisers looking for an audience open to their message – its data would be infinitely more powerful and useful than a small (in big data terms) four-year-out-of-date bit of Cambridge Analytica data. 

Facebook aren’t anywhere near alone in this world: every day your personal information is bought and sold, bundled and retraded. You won’t know the names of the brands, but the actual giants in this business don’t deal in tens of millions of records; they deal in hundreds of millions, or even billions – one advert I saw today referred to a company which claimed real-world identification of 340 million people. 

This is how lots of real advertising targeting works: people can buy up databases of thousands or millions of users, from all sorts of sources, and turn them into the ultimate custom audience – match the IDs of these people and show them this advert. Or they can do the tricks Cambridge Analytica did, but refined and with much more data behind them (there’s never been much evidence Cambridge Analytica’s model worked very well, despite their sales pitch boasts). 
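
For readers unfamiliar with how a "custom audience" match works mechanically, here is the common industry pattern in sketch form: identifiers are normalized and hashed on both sides, then intersected. Details vary by platform; the names and data below are assumptions for illustration, not any vendor's real API.

import hashlib

def norm_hash(email):
    # Normalize, then hash, so neither side exchanges raw identifiers.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A purchased contact list (one side of the match).
purchased_list = ["Alice@example.com ", "bob@example.com", "carol@example.com"]

# The ad platform's own hashed user table (the other side).
platform_users = {norm_hash(e): uid for uid, e in [
    ("uid-1", "alice@example.com"),
    ("uid-2", "dave@example.com"),
    ("uid-3", "bob@example.com"),
]}

# Intersect: everyone on the bought list who is also a platform user
# becomes the "custom audience" that sees the advert.
audience = [platform_users[h] for h in map(norm_hash, purchased_list)
            if h in platform_users]
print(audience)  # ['uid-1', 'uid-3']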

The media has a model for reporting on “hacks” or “breaches” – and for reporting on when companies in the spotlight have given evidence to public authorities – and most places have been following those well-trod routes. 

But doing so is like doing forensics on the burning of a twig, in the middle of a raging forest fire. You might get some answers – but they’ll do you no good. We need to think bigger. 

Monday, November 20, 2017

The Human Strategy


edge |  The big question that I'm asking myself these days is how can we make a human artificial intelligence? Something that is not a machine, but rather a cyber culture that we can all live in as humans, with a human feel to it. I don't want to think small—people talk about robots and stuff—I want this to be global. Think Skynet. But how would you make Skynet something that's really about the human fabric?

The first thing you have to ask is what's the magic of the current AI? Where is it wrong and where is it right?

The good magic is that it has something called the credit assignment function. What that lets you do is take stupid neurons, these little linear functions, and figure out, in a big network, which ones are doing the work and encourage them more. It's a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn't. It sounds pretty simple, but it's got some complicated math around it. That's the magic that makes AI work.
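
Before getting to what's wrong with it, it may help to see that magic in its most minimal form. The sketch below is ordinary gradient descent on a single linear unit, the simplest instance of credit assignment: each weight is nudged in proportion to how much it contributed to the error, so the connections doing useful work get reinforced. (Real networks push the same feedback through many layers via backpropagation.)

import random

# Learn y = 2*x1 + 3*x2 from examples, starting from random weights.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
lr = 0.01
for _ in range(5000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    target = 2 * x[0] + 3 * x[1]
    pred = w[0] * x[0] + w[1] * x[1]
    err = pred - target
    # Credit assignment: each weight moves in proportion to its own
    # contribution to the error (err * its input). What works is kept.
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]

print([round(wi, 2) for wi in w])  # approaches [2.0, 3.0]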

The bad part of that is, because those little neurons are stupid, the things that they learn don't generalize very well. If it sees something that it hasn't seen before, or if the world changes a little bit, it's likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it's as far from Wiener's original notion of cybernetics as you can get because it's not contextualized: it's this little idiot savant.

But imagine that you took away these limitations of current AI. Instead of using dumb neurons, you used things that embedded some knowledge. Maybe instead of linear neurons, you used neurons that were functions in physics, and you tried to fit physics data. Or maybe you put in a lot of stuff about humans and how they interact with each other, the statistics and characteristics of that. When you do that and you add this credit assignment function, you take your set of things you know about—either physics or humans, and a bunch of data—in order to reinforce the functions that are working, then you get an AI that works extremely well and can generalize.

In physics, you can take a couple of noisy data points and get something that's a beautiful description of a phenomenon because you're putting in knowledge about how physics works. That's in huge contrast to normal AI, which takes millions of training examples and is very sensitive to noise. Or the things that we've done with humans, where you can put in things about how people come together and how fads happen. Suddenly, you find you can detect fads and predict trends in spectacularly accurate and efficient ways.

Human behavior is determined as much by the patterns of our culture as by rational, individual thinking. These patterns can be described mathematically, and used to make accurate predictions. We’ve taken this new science of “social physics” and expanded upon it, making it accessible and actionable by developing a predictive platform that uses big data to build a predictive, computational theory of human behavior.

The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren't?

Way of the Future


wired |  The new religion of artificial intelligence is called Way of the Future. It represents an unlikely next act for the Silicon Valley robotics wunderkind at the center of a high-stakes legal battle between Uber and Waymo, Alphabet’s autonomous-vehicle company. Papers filed with the Internal Revenue Service in May name Anthony Levandowski as the leader (or “Dean”) of the new religion, as well as CEO of the nonprofit corporation formed to run it.

The documents state that WOTF’s activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” That includes funding research to help create the divine AI itself. The religion will seek to build working relationships with AI industry leaders and create a membership through community outreach, initially targeting AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The filings also say that the church “plans to conduct workshops and educational programs throughout the San Francisco/Bay Area beginning this year.”

That timeline may be overly ambitious, given that the Waymo-Uber suit, in which Levandowski is accused of stealing self-driving car secrets, is set for an early December trial. But the Dean of the Way of the Future, who spoke last week with Backchannel in his first comments about the new religion and his only public interview since Waymo filed its suit in February, says he’s dead serious about the project.

“What is going to be created will effectively be a god,” Levandowski tells me in his modest mid-century home on the outskirts of Berkeley, California. “It’s not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?”

During our three-hour interview, Levandowski made it absolutely clear that his choice to make WOTF a church rather than a company or a think tank was no prank.

“I wanted a way for everybody to participate in this, to be able to shape it. If you’re not a software engineer, you can still help,” he says. “It also removes the ability for people to say, ‘Oh, he’s just doing this to make money.’” Levandowski will receive no salary from WOTF, and while he says that he might consider an AI-based startup in the future, any such business would remain completely separate from the church.

“The idea needs to spread before the technology,” he insists. “The church is how we spread the word, the gospel. If you believe [in it], start a conversation with someone else and help them understand the same things.”

Levandowski believes that a change is coming—a change that will transform every aspect of human existence, disrupting employment, leisure, religion, and the economy, and possibly deciding our very survival as a species.

“If you ask people whether a computer can be smarter than a human, 99.9 percent will say that’s science fiction,” he says. “Actually, it’s inevitable. It’s guaranteed to happen.”

Sunday, November 19, 2017

Weaponization of Monetization? Nah, Just Look In the Mirror For the Problem...,


DailyMail  |  The wildly popular Toy Freaks YouTube channel featuring a single dad and his two daughters has been deleted, amid a broader crackdown on disturbing children's content on the video streaming platform.

Toy Freaks, founded two years ago by landscaper Greg Chism of Granite City, Illinois, had 8.53 million subscribers and was among the 100 most-viewed YouTube channels before it was shut down on Friday. 

Though it's unclear what exact policy the channel violated, the videos showed the girls in unusual situations that often involved gross-out food play and simulated vomiting. The channel invented the 'bad baby' genre, and some videos showed the girls pretending to urinate on each other or fishing pacifiers out of the toilet.

Another series of videos showed the younger daughter Annabelle wiggling her loose teeth out while shrieking and spitting blood. 

'He is profiting off of his children's pain and suffering,' one indignant Reddit user wrote about the channel last year. 'It's barf inducing and no mentally stable person or child should ever have to watch it.'

A YouTube spokesperson said in a statement: 'It's not always clear that the uploader of the content intends to break our rules, but we may still remove their videos to help protect viewers, uploaders and children. We've terminated the Toy Freaks channel for violation of our policies.'

Wednesday, November 15, 2017

CIA Blog Agrees - Something Indeed Wrong With These Interwebs...,


WaPo |  “Something is wrong on the internet,” declares an essay trending in tech circles. But the issue isn’t Russian ads or Twitter harassers. It’s children’s videos. 

The piece, by tech writer James Bridle, was published on the heels of a report from the New York Times that described disquieting problems with the popular YouTube Kids app. Parents have been handing their children an iPad to watch videos of Peppa Pig or Elsa from “Frozen,” only for the supposedly family-friendly platform to offer up some disturbing versions of the same. In clips camouflaged among more benign videos, Peppa drinks bleach instead of naming vegetables. Elsa might appear as a gore-covered zombie or even in a sexually compromising position with Spider-Man. 

The phenomenon is alarming, to say the least, and YouTube has said that it’s in the process of implementing new filtering methods. But the source of the problem will remain. In fact, it’s the site’s most important tool — and increasingly, ours. 

YouTube suggests search results and “up next” videos using proprietary algorithms: computer programs that, based on a particular set of guidelines and trained on vast sets of user data, determine what content to recommend or to hide from a particular user. They work well enough — the company claims that in the past 30 days, only 0.005 percent of YouTube Kids videos have been flagged as inappropriate. But as these latest reports show, no piece of code is perfect.
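
YouTube's real ranking system is proprietary, but the general shape of an "up next" recommender can be sketched with co-watch statistics. The sessions below are invented; the point is that a knockoff video camouflaged among benign ones gets recommended by pure association, with no human ever judging its content.

from collections import Counter

# Invented watch sessions (each list is one viewer's sequence).
watch_sessions = [
    ["peppa_ep1", "peppa_ep2", "elsa_songs"],
    ["peppa_ep1", "peppa_knockoff_x"],
    ["peppa_ep1", "peppa_knockoff_x", "elsa_songs"],
]

def up_next(just_watched, k=2):
    # Score candidates by how often they were co-watched with this video.
    co_watched = Counter()
    for session in watch_sessions:
        if just_watched in session:
            co_watched.update(v for v in session if v != just_watched)
    return [video for video, _ in co_watched.most_common(k)]

print(up_next("peppa_ep1"))  # ['elsa_songs', 'peppa_knockoff_x'] -- the knockoff surfaces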

Local Grandstanding Blowhard Whoops Gums About Googol...,


WaPo  |  Missouri’s attorney general said Monday that he has launched an investigation into whether Google has mishandled private customer data and manipulated its search results to favor its own products, a further sign that Silicon Valley’s political fortunes may be on the descent.

The probe comes after European antitrust regulators levied a $2.7 billion fine against Google in June and as Washington is taking a harder look into the influence of dominant tech companies in American society.

Attorney General Josh Hawley said that the investigation will focus on three issues: the scope of Google's data collection, whether it has abused its market position as a dominant search engine and whether the company used its competitors' content as its own in search results. The state has issued Google a subpoena seeking information about its business practices.

Hawley, who recently announced his candidacy for the U.S. Senate, said that the investigation was prompted in part by the fine levied against Google by European officials for favoring its own search results, as well as concerns that Google was engaging in similar behavior in the United States. Hawley said that a preliminary investigation suggests that Google may not be accurately disclosing how much data it collects about customers and that people don't have a meaningful choice to opt out of Google's data collection.

Tuesday, November 14, 2017

Is Something Wrong With These Interwebs?


medium  |  Here are a few things which are disturbing me:

The first is the level of horror and violence on display. Sometimes it’s troll-y gross-out stuff; most of the time it seems deeper, and more unconscious than that. The internet has a way of amplifying and enabling many of our latent desires; in fact, it’s what it seems to do best. I spend a lot of time arguing for this tendency, with regards to human sexual freedom, individual identity, and other issues. Here, and overwhelmingly it sometimes feels, that tendency is itself a violent and destructive one.

The second is the levels of exploitation, not of children because they are children but of children because they are powerless. Automated reward systems like YouTube algorithms necessitate exploitation in the same way that capitalism necessitates exploitation, and if you’re someone who bristles at the second half of that equation then maybe this should be what convinces you of its truth. 

Exploitation is encoded into the systems we are building, making it harder to see, harder to think and explain, harder to counter and defend against. Not in a future of AI overlords and robots in the factories, but right here, now, on your screen, in your living room and in your pocket.

Many of these latest examples confound any attempt to argue that nobody is actually watching these videos, that these are all bots. There are humans in the loop here, even if only on the production side, and I’m pretty worried about them too.

I’ve written enough, too much, but I feel like I actually need to justify all this raving about violence and abuse and automated systems with an example that sums it up. Maybe after everything I’ve said you won’t think it’s so bad. I don’t know what to think any more.

This video, BURIED ALIVE Outdoor Playground Finger Family Song Nursery Rhymes Animation Education Learning Video, contains all of the elements we’ve covered above, and takes them to another level. Familiar characters, nursery tropes, keyword salad, full automation, violence, and the very stuff of kids’ worst dreams. And of course there are vast, vast numbers of these videos. Channel after channel after channel of similar content, churned out at the rate of hundreds of new videos every week. Industrialised nightmare production.

For the final time: There is more violent and more sexual content like this available. I’m not going to link to it. I don’t believe in traumatising other people, but it’s necessary to keep stressing it, and not dismiss the psychological effect on children of things which aren’t overtly disturbing to adults, just incredibly dark and weird.

A friend who works in digital video described to me what it would take to make something like this: a small studio of people (half a dozen, maybe more) making high volumes of low quality content to reap ad revenue by tripping certain requirements of the system (length in particular seems to be a factor). According to my friend, online kids’ content is one of the few alternative ways of making money from 3D animation because the aesthetic standards are lower and independent production can profit through scale. It uses existing and easily available content (such as character models and motion-capture libraries) and it can be repeated and revised endlessly and mostly meaninglessly because the algorithms don’t discriminate — and neither do the kids.

These videos, wherever they are made, however they come to be made, and whatever their conscious intention (i.e. to accumulate ad revenue) are feeding upon a system which was consciously intended to show videos to children for profit. The unconsciously-generated, emergent outcomes of that are all over the place.

To expose children to this content is abuse. We’re not talking about the debatable but undoubtedly real effects of film or videogame violence on teenagers, or the effects of pornography or extreme images on young minds, which were alluded to in my opening description of my own teenage internet use. Those are important debates, but they’re not what is being discussed here. What we’re talking about is very young children, effectively from birth, being deliberately targeted with content which will traumatise and disturb them, via networks which are extremely vulnerable to exactly this form of abuse. It’s not about trolls, but about a kind of violence inherent in the combination of digital systems and capitalist incentives. It’s down to that level of the metal.  Fist tap Dale.

Friday, November 03, 2017

Corporatist Conformity Was Normative For Big Social Media To Begin With...,


Counterpunch |  The depressing fact of the matter is, in our brave new Internet-dominated world, corporations like Google, Twitter, and Facebook (not to mention Amazon), are, for elitist wankers like me, in the immortal words of Colonel Kurtz, “either friends or they are truly enemies to be feared.” If you are in the elitist wanker business, regardless of whether you’re Jonathan Franzen, Garth Risk Hallberg, Margaret Atwood, or some “mid-list” or “emerging” author, there is no getting around these corporations. So it’s kind of foolish, professionally speaking, to write a bunch of essays that will piss them off, and then publish these essays in CounterPunch. Literary agents advise against this. Other elitist literary wankers, once they discover what you’ve been doing, will avoid you like the bubonic plague. Although it’s perfectly fine to write books and movies about fictional evil corporations, writing about how real corporations are using their power to mold societies into self-policing virtual prisons of politically-correct, authoritarian consumers is … well, it’s something that is just not done in professional elitist wanker circles.

Normally, all this goes without saying, as these days most elitist wankers are trained how to write, and read, and think, in MFA conformity factories, where they screen out any unstable weirdos with unhealthy interests in political matters. This is to avoid embarrassing episodes like Harold Pinter’s Nobel Prize lecture (which, if you haven’t read it, you probably should), and is why so much of contemporary literature is so well-behaved and instantly forgettable. This institutionalized screening system is also why the majority of journalists employed by mainstream media outlets understand, without having to be told, what they are, and are not, allowed to report. Chomsky explains how this system operates in What Makes Mainstream Media Mainstream. It isn’t a question of censorship … the system operates on rewards and punishments, financial and emotional coercion, and subtler forms of intimidation. Making examples of non-cooperators is a particularly effective tactic. Ask any one of the countless women whose careers have been destroyed by Harvey Weinstein, or anyone who’s been to graduate school, or worked at a major corporation.

Thursday, August 10, 2017

Google Is Not What It Seems


wikileaks |  There was nothing politically hapless about Eric Schmidt. I had been too eager to see a politically unambitious Silicon Valley engineer, a relic of the good old days of computer science graduate culture on the West Coast. But that is not the sort of person who attends the Bilderberg conference four years running, who pays regular visits to the White House, or who delivers “fireside chats” at the World Economic Forum in Davos.43 Schmidt’s emergence as Google’s “foreign minister”—making pomp and ceremony state visits across geopolitical fault lines—had not come out of nowhere; it had been presaged by years of assimilation within US establishment networks of reputation and influence.   
 
On a personal level, Schmidt and Cohen are perfectly likable people. But Google's chairman is a classic “head of industry” player, with all of the ideological baggage that comes with that role.44 Schmidt fits exactly where he is: the point where the centrist, liberal, and imperialist tendencies meet in American political life. By all appearances, Google's bosses genuinely believe in the civilizing power of enlightened multinational corporations, and they see this mission as continuous with the shaping of the world according to the better judgment of the “benevolent superpower.” They will tell you that open-mindedness is a virtue, but all perspectives that challenge the exceptionalist drive at the heart of American foreign policy will remain invisible to them. This is the impenetrable banality of “don’t be evil.” They believe that they are doing good. And that is a problem.

Google is "different". Google is "visionary". Google is "the future". Google is "more than just a company". Google "gives back to the community". Google is "a force for good".

Even when Google airs its corporate ambivalence publicly, it does little to dislodge these items of faith.45 The company’s reputation is seemingly unassailable. Google’s colorful, playful logo is imprinted on human retinas just under six billion times each day, 2.1 trillion times a year—an opportunity for respondent conditioning enjoyed by no other company in history.46 Caught red-handed last year making petabytes of personal data available to the US intelligence community through the PRISM program, Google nevertheless continues to coast on the goodwill generated by its “don’t be evil” doublespeak. A few symbolic open letters to the White House later and it seems all is forgiven. Even anti-surveillance campaigners cannot help themselves, at once condemning government spying but trying to alter Google’s invasive surveillance practices using appeasement strategies.47

Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt’s tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation. But Google has always been comfortable with this proximity. Long before company founders Larry Page and Sergey Brin hired Schmidt in 2001, their initial research upon which Google was based had been partly funded by the Defense Advanced Research Projects Agency (DARPA).48 And even as Schmidt’s Google developed an image as the overly friendly giant of global tech, it was building a close relationship with the intelligence community.

In 2003 the US National Security Agency (NSA) had already started systematically violating the Foreign Intelligence Surveillance Act (FISA) under its director General Michael Hayden.49 These were the days of the “Total Information Awareness” program.50 Before PRISM was ever dreamed of, under orders from the Bush White House the NSA was already aiming to “collect it all, sniff it all, know it all, process it all, exploit it all.”51 During the same period, Google—whose publicly declared corporate mission is to collect and “organize the world’s information and make it universally accessible and useful”52—was accepting NSA money to the tune of $2 million to provide the agency with search tools for its rapidly accreting hoard of stolen knowledge.53

In 2004, after taking over Keyhole, a mapping tech startup cofunded by the National Geospatial-Intelligence Agency (NGA) and the CIA, Google developed the technology into Google Maps, an enterprise version of which it has since shopped to the Pentagon and associated federal and state agencies on multimillion-dollar contracts.54 In 2008, Google helped launch an NGA spy satellite, the GeoEye-1, into space. Google shares the photographs from the satellite with the US military and intelligence communities.55 In 2010, NGA awarded Google a $27 million contract for “geospatial visualization services.”56

In 2010, after the Chinese government was accused of hacking Google, the company entered into a “formal information-sharing” relationship with the NSA, which was said to allow NSA analysts to “evaluate vulnerabilities” in Google’s hardware and software.57 Although the exact contours of the deal have never been disclosed, the NSA brought in other government agencies to help, including the FBI and the Department of Homeland Security.

Censorship is for Losers...,


NYTimes |  Silicon Valley’s politics have long skewed left, with a free-market philosophy and a dash of libertarianism. But that goes only so far, with recent episodes putting the tech industry under the microscope for how it penalizes people for expressing dissenting opinions. Mr. Damore’s firing has now plunged the nation’s technology capital into some of the same debates that have engulfed the rest of the country.

Such fractures have been building in Silicon Valley for some time, reaching even into its highest echelons. The tensions became evident last year with the rise of Donald J. Trump, when a handful of people from the industry who publicly supported the then-presidential candidate faced blowback for their political decisions.

At Facebook, Peter Thiel, an investor and member of the social network’s board of directors, was told he would receive a negative evaluation of his board performance for supporting Mr. Trump by a peer, Reed Hastings, the chief executive of Netflix. And Palmer Luckey, a founder of Oculus VR, a virtual reality start-up owned by Facebook, was pressured to leave the company after it was revealed that he had secretly funded a pro-Trump organization.

Julian Assange, the founder of WikiLeaks, said on Twitter that “censorship is for losers” and offered to hire Mr. Damore. Steven Pinker, a Harvard University cognitive scientist, said on Twitter that Google’s actions could increase support for Mr. Trump in the tech industry.

“Google drives a big sector of tech into the arms of Trump: fires employee who wrote memo about women in tech jobs,” Dr. Pinker wrote.

One of the most outspoken supporters of Mr. Trump in Silicon Valley has been Mr. Thiel, a founder of PayPal, who has since faced derision from other people working in tech for his political stance. In a sign of how deep that ill feeling runs, Netflix’s Mr. Hastings warned Mr. Thiel last August, a few weeks after Mr. Trump had accepted the Republican nomination for president, that he would face consequences for backing Mr. Trump.

Mr. Thiel, also one of the original investors in Facebook, had given a prime-time speech supporting Mr. Trump at the Republican convention. In contrast, Mr. Hastings, a supporter of Hillary Clinton, said earlier last year that Mr. Trump, if elected, “would destroy much of what is great about America.”

Mr. Hastings, the chairman of a committee that evaluates Facebook’s board members, told Mr. Thiel in an email dated Aug. 14 that the advocacy would reflect badly on Mr. Thiel during a review of Facebook directors scheduled for the next day.

“I see our board being about great judgment, particularly in unlikely disaster where we have to pick new leaders,” Mr. Hastings wrote in the email to Mr. Thiel, a copy of which was obtained by The New York Times. “I’m so mystified by your endorsement of Trump for our President, that for me it moves from ‘different judgment’ to ‘bad judgment.’ Some diversity in views is healthy, but catastrophically bad judgment (in my view) is not what anyone wants in a fellow board member.”

Mr. Thiel and Mr. Hastings declined to comment through their spokesmen; neither challenged the authenticity of the email. Both of the men remain on Facebook’s board.

Another prominent Trump supporter affiliated with Facebook, Mr. Luckey, did not last at the company.



Sunday, July 23, 2017

Race, Surveillance, and Empire


isreview |  The following month, Jeremy Scahill and Ryan Devereaux published another story for The Intercept, which revealed that under the Obama administration the number of people on the National Counterterrorism Center’s no-fly list had increased tenfold to 47,000. Leaked classified documents showed that the NCTC maintains a database of terrorism suspects worldwide—the Terrorist Identities Datamart Environment—which contained a million names by 2013, double the number four years earlier, and increasingly includes biometric data. This database includes 20,800 persons within the United States who are disproportionately concentrated in Dearborn, Michigan, with its significant Arab American population.2

By any objective standard, these were major news stories that ought to have attracted as much attention as the earlier revelations. Yet the stories barely registered in the corporate media landscape. The “tech community,” which had earlier expressed outrage at the NSA’s mass digital surveillance, seemed to be indifferent when details emerged of the targeted surveillance of Muslims. The explanation for this reaction is not hard to find. While many object to the US government collecting private data on “ordinary” people, Muslims tend to be seen as reasonable targets of suspicion. A July 2014 poll for the Arab American Institute found that 42 percent of Americans think it is justifiable for law enforcement agencies to profile Arab Americans or American Muslims.3

In what follows, we argue that the debate on national security surveillance that has emerged in the United States since the summer of 2013 is woefully inadequate, due to its failure to place questions of race and empire at the center of its analysis. It is racist ideas that form the basis for the ways national security surveillance is organized and deployed, racist fears that are whipped up to legitimize this surveillance to the American public, and the disproportionately targeted racialized groups that have been most effective in making sense of it and organizing opposition. This is as true today as it has been historically: race and state surveillance are intertwined in the history of US capitalism. Likewise, we argue that the history of national security surveillance in the United States is inseparable from the history of US colonialism and empire. 

The argument is divided into two parts. The first identifies a number of moments in the history of national security surveillance in North America, tracing its imbrication with race, empire, and capital, from the settler-colonial period through to the neoliberal era. Our focus here is on how race as a sociopolitical category is produced and reproduced historically in the United States through systems of surveillance. We show how throughout the history of the United States the systematic collection of information has been interwoven with mechanisms of racial oppression. From Anglo settler-colonialism, the establishment of the plantation system, the post–Civil War reconstruction era, the US conquest of the Philippines, and the emergence of the national security state in the post-World War II era, to neoliberalism in the post-Civil Rights era, racialized surveillance has enabled the consolidation of capital and empire.  

It is, however, important to note that the production of the racial “other” at these various moments is conjunctural and heterogeneous. That is, the racialization of Native Americans, for instance, during the settler-colonial period took different forms from the racialization of African Americans. Further, the dominant construction of Blackness under slavery is different from the construction of Blackness in the neoliberal era; these ideological shifts are the product of specific historic conditions. In short, empire and capital, at various moments, determine who will be targeted by state surveillance, in what ways, and for how long.

Friday, July 14, 2017

You Know It's True...,


theantimedia |  Already, the Department of Defense has created the Sentient World Simulation, a real-time “synthetic mirror of the real world with automated continuous calibration with respect to current real-world information, such as major events, opinion polls, demographic statistics, economic reports, and shifts in trends,” according to a working paper on the system.

In recent years, other scientists have conducted research and even experimentation in attempts to show actual evidence of the Simulation. Heads turned last year when theoretical physicist S. James Gates announced he had found strange computer code in his String Theory research. Bound inside the equations we use to describe our universe, he says, is peculiar self-dual linear binary error-correcting block code.

A team of German physicists has also set out to show that the numerical constraints we see in our universe are consistent with the kinds of limitations we would see in a simulated universe. These physicists have invoked a non-perturbative approach known as lattice quantum chromodynamics to try to discover whether there is an underlying grid to the space/time continuum.

So far their efforts have recreated a minuscule region of the known universe, a sliver of a corner that is but a few femtometers across. But this corner simulates the hypothetical lattice of the universal grid, and their search for a corresponding physical constraint turned up a theoretical upper limit on high-energy particles known as the Greisen–Zatsepin–Kuzmin, or GZK, cutoff. In other words, there are aspects of our universe that look and behave as a simulation might.
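
For context, the GZK cutoff is the threshold energy above which a cosmic-ray proton loses energy by producing pions against cosmic microwave background photons. A standard back-of-envelope form, stated here from general physics rather than from the article, is

\[
E_{\mathrm{th}} \;\approx\; \frac{(m_p + m_\pi)^2 - m_p^2}{4\,\varepsilon_\gamma}
\;\approx\; 7 \times 10^{19}\ \mathrm{eV}
\qquad \text{for a head-on CMB photon of } \varepsilon_\gamma \approx 10^{-3}\ \mathrm{eV},
\]

which is why the predicted suppression sits near the observed scale of roughly 5 × 10^19 eV.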

With news that there are two anonymous tech billionaires working on a secret project to break us out of the Matrix, it’s hard to know whether we should laugh or scream in horror. Simulation talk is great epistemological fun and metaphysical amusement of the highest order, but it may speak to an underlying anxiety regarding the merging of our reality with machines, or an underlying existential loneliness. It’s even been posited as a solution to the Fermi Paradox: Why haven’t we met aliens? Well, because we live inside a world they built.

Tuesday, June 27, 2017

Google "Invests" in Bitcoin


marketslant |  Right now the BitCoin group is running into what we call "floor trader fear". The voting members are chafing at the idea of scaling their supply by adding servers and/or server power. This would disrupt their own little empires, not unlike the trading floor fearing Globex back in the day. And so many exchanges held out and protected the floor. And in the end they died. PHLX, AMEX, COMEX, PCOAST, CSCE, all gone or absorbed because they were late to adopt new technology and protect their liquidity pools. If Bitcoin removes power from its voting members' control by demutualizing and uses those proceeds to increase server power, it will likely excel. But Google and Amazon are now playing, and they are all about unlimited server power. Plus they have the eyeballs already. This is not unlike having the "marketmakers" already trading on a screen at Globex. The "liquidity pool" of buyers and sellers is already on Amazon and Google. Bitcoin does not have that past "early adopters". Remember Palm?

When, not if, those behemoths are up and running, they will immediately have an embedded network of both customers AND service providers at their disposal, in the form of search eyeballs (Google) and buyers (Amazon). They will be set up to crush the opposition if they choose to create their own currency. Imagine Amazon offering Amazon money for Amazon purchases. Now imagine them offering 20% discounts if you use their money. The choices at this point boggle the mind. Tactical choices long thought obsolete will come into play again. Some examples: Freemium, Coupons, Customer Loyalty, Vertical Client Integration (P.O.S.), Travelers checks and more.

To be fair, Google has invested in Bitcoin as well. What smart trader would not hedge himself? But just as Netflix is Amazon's biggest cloud customer even though Amazon will eventually put Netflix out of business (after Netflix kills Hollywood's distribution network), so will Google, Amazon, and Apple attempt to obviate the need for any currency but their own. 

Blockchain is the railroad. Amazon and Google have the oil. Like Rockefeller before them, the railroad will be made "exclusive" to their products.


Don't Comprehend "Real" Currency But Steady Yapping About Cryptocurrency


paecon |  Despite the fact that the goal of capitalists is to accumulate ever more money, the classical political economists largely took the analysis of money for granted.4 To be sure, from Adam Smith to Karl Marx, we can certainly find passages on money, but two things are of general note. First, the classical political economists as well as Karl Marx thought gold and silver were “real” money. In other words, money was understood as “commodity money” and therefore to expand the money supply meant finding new mines, plundering it from others, or selling goods or services on the world market to obtain it from others who possessed it. Indeed, a considerable portion of the history of slavery and colonial violence can be traced back to the elite concern for acquiring gold and silver (Di Muzio and Robbins, 2016; Graeber, 2012; Kwarteng, 2014; Vilar, 1986). Second, because gold and silver were thought to be money, the classics failed to understand the scale or level of credit creation that began with the institutionalization of the Bank of England in 1694. Many argue that the Bank of England was inspired by the Bank of Amsterdam and the success of Dutch finance. But this is not the case. While the Bank of Amsterdam did make loans from time to time, its primary function was to maintain the quality of the paper notes in circulation that represented coin. Moreover, the bank was owned by the city, not private social forces as came to be the case with the Bank of England (Wennerlind, 2011: 69; Vilar, 1986: 206; Zarlenga, 2002: 238ff). Whereas the notes issued by the Bank of Amsterdam mostly reflected the exact value of gold and silver in the city’s vault, the Bank of England expanded the English money supply by extending paper notes as credit (Desan, 2014: 311ff). 

The Bank of England’s largest customer was the Crown in Parliament, which used the initial loan of £1,200,000 to finance war with France. Indeed, the main reason why the Royal Charter was granted to the Bank of England’s 1509 investors was to provide the finance for organized violence against a dynastic rival (Davies, 2002: 261). The slave trade, colonization and continuous wars in the next two centuries led to a mounting and unpayable “national” debt that solidified the Bank’s role as the government’s permanent debt manager. But the investors in the Bank of England did not only profit from war and debt; they also benefited from the interest received on loans to individuals and companies. As Wennerlind underscores, the Bank of England’s notes became “Europe’s first widely circulating credit currency” (2011: 109). Theoretically, however, the issued notes remained tethered to a metallic hoard of silver, and later only gold from 1861 (Davies, 2002: 315). No one knows for certain how much metal coin backed up the notes in circulation at any one time. In one study, Rubini argued that the Bank of England had a shifting reserve of silver for all notes in circulation of about 2.8 percent to 14.2 percent (1970: 696). Another study by Wennerlind argued that the founder of the Bank, William Paterson, proposed that 15 to 20 percent in silver for all notes outstanding would suffice to assure sufficient confidence in the Bank of England (2011: 128).5 This ambiguity, and the fact that the Bank of England was privileged by the government, likely helped the Bank gain confidence among the users of its notes. As long as citizens thought they could eventually cash in their notes for silver/gold coins, faith in this system of money creation could continue (Kim, 2011). This uncertainty need not delay us, for what is definite is that the notes in circulation were of a far higher value than the actual metallic hoard at the Bank. To sum up this brief history of the world’s first widely circulating credit currency, we can argue that new money was created as loans to customers – primarily to the British Crown in Parliament, and primarily to finance an apparatus of international violence and Empire. 

By the early 19th century, the British politician Samson Ricardo realized the absurdity of granting private social forces the power to create money:
“It is evident therefore that if the Government itself were to be the sole issuer of paper money instead of borrowing it of the bank, the only difference would be with respect to interest: the Bank would no longer receive interest and the government would no longer pay it…It is said that Government could not with safety be entrusted with the power of issuing paper money – that it would most certainly abuse it... I propose to place this trust in the hands of three Commissioners” (Ricardo, 1838: 50). 
Ricardo’s proposal that the public take control of new money creation was ignored. In the 1844 Bank Charter Act, the Bank of England was given the exclusive right to issue banknotes in London. Country banks that were already issuing notes could continue to do so provided they were outside London (by a 65 mile radius) and backed their notes with some kind of credible security. Under this Act, the Bank of England was also divided into two distinct units, the Issue Department and the Banking Department. Davies highlights this important provision of the Act:
“The Issue Department was to receive from the Banking Department some £14 million of government securities to back its fiduciary issue of notes, any issue above that [was] to be fully backed by gold and silver, the latter not to exceed one quarter of the gold” (2002: 315). 
Thus, while the Bank of England had the exclusive right to issue banknotes in London, its ability to create new money appeared to be circumscribed by the new laws. Existing banks outside of London were also seemingly bounded in their ability to create money. However, while official note issuance was restricted, this did not stop the Bank of England and other provincial banks from merely recording new loans on their balance sheets and issuing cheques to borrowers (Davies, 2002: 317). In other words, the bankers found a convenient way around the legislation and continued to expand the money supply regardless of gold reserves, which were never publicly known anyway. This changed the nature of banking in Britain and, as we shall discuss, its legacy largely remains with us today. With this in mind, we now move to examine two theories of money creation: the heavily taught fractional reserve theory, known popularly as the money multiplier model, and the underappreciated credit creation theory. 
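
Since the excerpt names the two theories without working them through, the Python sketch below contrasts them with illustrative numbers (assumed for this post, not taken from the paper). The first function is the textbook multiplier story, in which an initial deposit is re-lent repeatedly with a reserve fraction held back; the second shows the credit-creation view, in which a bank simply books a loan and a matching deposit, creating new money at a stroke.

# 1) Money multiplier model: total deposits approach deposit / reserve_ratio.
def multiplier_expansion(initial_deposit, reserve_ratio, rounds=200):
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the re-lent portion returns as a new deposit
    return total

print(round(multiplier_expansion(100.0, 0.10)))  # ~1000, i.e. 100 / 0.10

# 2) Credit creation theory: the loan itself creates the deposit.
balance_sheet = {"assets": {"loans": 0.0}, "liabilities": {"deposits": 0.0}}

def make_loan(amount):
    balance_sheet["assets"]["loans"] += amount          # borrower's IOU
    balance_sheet["liabilities"]["deposits"] += amount  # newly created money

make_loan(1_200_000.0)  # cf. the Bank of England's founding loan to the Crown
print(balance_sheet)

On the credit-creation view, no prior hoard of silver is required, which is consistent with the single-digit reserve percentages cited above.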

Tuesday, May 02, 2017

Automating Suspicion


theintercept |  When civil liberties advocates discuss the dangers of new policing technologies, they often point to sci-fi films like “RoboCop” and “Minority Report” as cautionary tales. In “RoboCop,” a massive corporation purchases Detroit’s entire police department. After one of its officers gets fatally shot on duty, the company sees an opportunity to save on labor costs by reanimating the officer’s body with sleek weapons, predictive analytics, facial recognition, and the ability to record and transmit live video.

Although intended as a grim allegory of the pitfalls of relying on untested, proprietary algorithms to make lethal force decisions, “RoboCop” has long been taken by corporations as a roadmap. And no company has been better poised than Taser International, the world’s largest police body camera vendor, to turn the film’s ironic vision into an earnest reality.

In 2010, Taser’s longtime vice president Steve Tuttle “proudly predicted” to GQ that once police can search a crowd for outstanding warrants using real-time face recognition, “every cop will be RoboCop.” Now Taser has announced that it will provide any police department in the nation with free body cameras, along with a year of free “data storage, training, and support.” The company’s goal is not just to corner the camera market, but to dramatically increase the video streaming into its servers.

With an estimated one-third of departments using body cameras, police officers have been generating millions of hours of video footage. Taser stores terabytes of such video on Evidence.com, in private servers operated by Microsoft, to which police agencies must continuously subscribe for a monthly fee. Data from these recordings is rarely analyzed for investigative purposes, though, and Taser — which recently rebranded itself as a technology company and renamed itself “Axon” — is hoping to change that.

Taser has started to get into the business of making sense of its enormous archive of video footage by building an in-house “AI team.” In February, the company acquired a computer vision startup called Dextro and a computer vision team from Fossil Group Inc. Taser says the companies will allow agencies to automatically redact faces to protect privacy, extract important information, and detect emotions and objects — all without human intervention. This will free officers from the grunt work of manually writing reports and tagging videos, a Taser spokesperson wrote in an email. “Our prediction for the next few years is that the process of doing paperwork by hand will begin to disappear from the world of law enforcement, along with many other tedious manual tasks.” 
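
Taser has not published its redaction pipeline, but the automatic face-redaction step described above can be sketched with off-the-shelf tools. The snippet below is an assumption-level illustration using OpenCV's bundled Haar cascade detector, not the company's actual model; the file names are hypothetical.

import cv2

# Load OpenCV's stock frontal-face detector (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("bodycam_frame.png")  # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur every detected face region in place.
for (x, y, w, h) in faces:
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("bodycam_frame_redacted.png", img)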

Analytics will also allow departments to observe historical patterns in behavior for officer training, the spokesperson added. “Police departments are now sitting on a vast trove of body-worn footage that gives them insight for the first time into which interactions with the public have been positive versus negative, and how individuals’ actions led to it.”

But looking to the past is just the beginning: Taser is betting that its artificial intelligence tools might be useful not just to determine what happened, but to anticipate what might happen in the future.

“We’ve got all of this law enforcement information with these videos, which is one of the richest treasure troves you could imagine for machine learning,” Taser CEO Rick Smith told PoliceOne in an interview about the company’s AI acquisitions. “Imagine having one person in your agency who would watch every single one of your videos — and remember everything they saw — and then be able to process that and give you the insight into what crimes you could solve, what problems you could deal with. Now, that’s obviously a little further out, but based on what we’re seeing in the artificial intelligence space, that could be within five to seven years.”

As video analytics and machine vision have made rapid gains in recent years, the future long dreaded by privacy experts and celebrated by technology companies is quickly approaching. No longer is the question whether artificial intelligence will transform the legal and lethal limits of policing, but how and for whose profits.

The Stigma of Systemic Racism Handed Over to "Machine Intelligence"...,


NYTimes |  When Chief Justice John G. Roberts Jr. visited Rensselaer Polytechnic Institute last month, he was asked a startling question, one with overtones of science fiction.

“Can you foresee a day,” asked Shirley Ann Jackson, president of the college in upstate New York, “when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?”

The chief justice’s answer was more surprising than the question. “It’s a day that’s here,” he said, “and it’s putting a significant strain on how the judiciary goes about doing things.”

He may have been thinking about the case of a Wisconsin man, Eric L. Loomis, who was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.

In March, in a signal that the justices were intrigued by Mr. Loomis’s case, they asked the federal government to file a friend-of-the-court brief offering its views on whether the court should hear his appeal.

The report in Mr. Loomis’s case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes.

The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”

The Wisconsin Supreme Court ruled against Mr. Loomis. The report added valuable information, it said, and Mr. Loomis would have gotten the same sentence based solely on the usual factors, including his crime — fleeing the police in a car — and his criminal history.

At the same time, the court seemed uneasy with using a secret algorithm to send a man to prison. Justice Ann Walsh Bradley, writing for the court, discussed, for instance, a report from ProPublica about Compas that concluded that black defendants in Broward County, Fla., “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism.”
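
The ProPublica finding the court cited reduces to a simple audit: among defendants who did not reoffend, compare how often each group was flagged high risk. The records below are invented for illustration (the real analysis used Broward County data and Compas score bands), but the computation is the same.

# (group, flagged_high_risk, actually_reoffended) -- invented records.
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", False, False), ("B", True,  False), ("B", False, False), ("B", True,  True),
]

def false_positive_rate(group):
    # Of the people in this group who did NOT reoffend, what share was flagged?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))  # A 0.67, B 0.33: the disparity at issue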

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...