Showing posts with label Minority Report. Show all posts

Monday, April 01, 2024

Online Dissent Is PreCrime

nakedcapitalism  |  Philip K. Dick’s 1956 novella The Minority Report created “precrime,” the clairvoyant foreknowledge of criminal activity as forecast by mutant “precogs.” The book was a dystopian nightmare, but a 2015 Fox television series transforms the story into one in which a precog works with a cop and shows that data is actually effective at predicting future crime.

Canada just tried to enact a precrime law along the lines of the 2015 show, but it was panned about as much as the television series. Ottawa’s now-tabled online harms bill included a provision to impose house arrest on someone who is feared to commit a hate crime in the future. From The Globe and Mail:

The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says. Mr. Virani, who is Attorney-General as well as Justice Minister, said it is important that any peace bond be “calibrated carefully,” saying it would have to meet a high threshold to apply.

But he said the new power, which would require the attorney-general’s approval as well as a judge’s, could prove “very, very important” to restrain the behaviour of someone with a track record of hateful behaviour who may be targeting certain people or groups…

People found guilty of posting hate speech could have to pay victims up to $20,000 in compensation. But experts including internet law professor Michael Geist have said even a threat of a civil complaint – with a lower burden of proof than a court of law – and a fine could have a chilling effect on freedom of expression.

While the Canadian bill is shelved for now, it wouldn’t be surprising to see it resurface after some future hate crime. I wonder if this is where burgeoning “anti-hate” programs across the US are headed. The Canadian bill would have also allowed “people to file complaints to the Canadian Human Rights Commission over what they perceive as hate speech online – including, for example, off-colour jokes by comedians.”

There are now programs in multiple US states to do just that –  encourage people to snitch on anyone doing anything perceived as “hateful.”

The 2021 federal COVID-19 Hate Crimes Act began to dole out money to states to help them respond to hate incidents. Oregon now has its Bias Response Hotline to track “bias incidents.” In December of 2022, New York launched its Hate and Bias Prevention Unit. Maryland, too, has its system – its hate incidents examples include “offensive jokes” and “malicious complaints of smell or noise.”

Maryland also has its Emmett Till Alert System that sends out three levels of alerts for specific acts of hate. For now, they only go to black lawmakers, civil rights activists, the media and other approved outlets, but expansion to the general populace is under consideration.

California vs. Hate, a multilingual statewide hotline and website that encourages people to report all acts of “hate,” is coming up on its one-year anniversary, reportedly receiving a mere 823 calls from 79% of California’s 58 counties during its first nine months of operation. It looks like the program is rolling out even more social media graphics in a bid to get more reports:

Thursday, August 13, 2020

Cautionary Submission In The Context Of The Vulnerable World Hypothesis Predictive Panopticon Proposal


I don't believe it's controversial to state that President Donald John Trump is one of THE WHYTEST WHYDTE MEN IN AMERICA. He's like an exemplar. Whatever else one might opine about the man, he's also a low-level baller, something at least approaching billionaire, and not a No Lives Matter, Left Behind, Little Man like you and me.  That said, these 9% muhuggahs here done put DJT through the wringer and then some, seriously.  The level of sustained, public ni****ization to which he has been subject is unprecedented in U.S. history. If what has been done to Trump is any indication of what the panopticon is willing to do to a political adversary, then TRUST and BELIEVE that you and I don't have even the barest iota of a prayer.

Shouldn't Sally Yates, Rod Rosenstein, Jim Comey and everyone who signed the Carter Page FISA application also be indicted for perjury? They signed a FISA application and made representations to the secret FISC on the basis of false information. Shouldn't representations to FISC need double verification since the accused has no opportunity to defend themselves or confront their accuser?

An average American doesn't get the option of saying I signed under penalty of perjury but I didn't know what I was signing.

What about James Clapper, who lied under oath to Congress? That's the same crime for which Roger Stone was indicted and convicted.

And the United States Foreign Intelligence Surveillance Court had no idea that they were involved in anything out of the ordinary? As long as they dotted the i's and crossed the t's, this was just a routine case like hundreds of others; how could they have known the thing was a fix? Poor trusting souls, misled so badly by such bad people.

Utter bullshit. They were only dealing with what must have been the most explosively sensitive issue ever to come before them. We're expected to believe they were innocents misled? 

Sometimes not asking the right questions, searching questions at that, in such a high-profile case as this shows complicity just as much as if they'd been assisting.

McCabe's wife was an out-of-the-blue candidate who ran for public office (VA State Senator) in 2015, during which she reportedly received over $650,000 in support from Clinton crony, then VA Gov. Terry McAuliffe. Her candidacy was suspicious in that she had no previous political experience (she's a physician who was on record as having voted in a Republican primary!) and it was promoted over the local VA Democratic Party's recommended candidate, a well-known retired Army colonel, attorney and party activist.

And yet McCabe, during this same time, was rapidly promoted to #3 in the FBI and didn't recuse himself from the Hillary Clinton email scandal investigation until one week before the 2016 election (and months after the infamous Comey press briefing in July when he declared Clinton would not be prosecuted), after the $650,000 donation came to light.

It's obvious why there are some who would think the very generous political contribution to McCabe's wife was in fact a backdoor bribe to her husband.

turcopelier |  I will be very clear up front--I have no inside information about what John Durham is going to do. But if he is simply following the facts and the evidence, Andrew McCabe will be one of the first to fall in the probe into the failed coup to destroy the Presidency of Donald Trump. The record on this is indisputable. He lied in three separate instances--1) He lied to FBI investigators, according to Michael Horowitz, 2) He lied to the House Permanent Select Committee on Intelligence, and 3) He lied to the Senate Select Committee on Intelligence.

McCabe's record of lying starts with questions put to him by FBI investigators about leaks of sensitive FBI evidence to the media in the fall of 2016:

Former FBI deputy director Andrew McCabe faced scorching criticism and potential criminal prosecution for changing his story about a conversation he had with a Wall Street Journal reporter. Now newly released interview transcripts show McCabe expressed remorse to internal FBI investigators when they pressed him on the about-face. 

In the final weeks of the 2016 presidential campaign, the Journal broke news about an FBI investigation involving then-candidate Hillary Clinton, describing internal discussions among senior FBI officials.

The apparent leak drew scrutiny from the bureau’s internal investigation team, which interviewed McCabe on May 9, 2017, the day President Donald Trump fired James Comey from his post as FBI director. The agents interviewed him as part of an investigation regarding a different media leak to the online publication Circa, and also asked him about the Journal story. 

In that interview, McCabe said he did not know how the Journal story came to be. But a few months later, his story changed after he reviewed his answer. 

McCabe's actions as an Artful Liar did not result in a prosecution. The Trump Justice Department reportedly decided to take a pass on that front, conceding that McCabe might prevail by insisting he just misremembered.

But subsequent statements by McCabe before the House and Senate Intelligence Committees expose him as a terminal liar.

Monday, January 01, 2018

Is Ideology The Original Augmented Reality?


nautil.us |  Released in July 2016, Pokémon Go is a location-based, augmented-reality game for mobile devices, typically played on mobile phones; players use the device’s GPS and camera to capture, battle, and train virtual creatures (“Pokémon”) who appear on the screen as if they were in the same real-world location as the player: As players travel the real world, their avatar moves along the game’s map. Different Pokémon species reside in different areas—for example, water-type Pokémon are generally found near water. When a player encounters a Pokémon, AR (Augmented Reality) mode uses the camera and gyroscope on the player’s mobile device to display an image of a Pokémon as though it were in the real world.* This AR mode is what makes Pokémon Go different from other PC games: Instead of taking us out of the real world and drawing us into the artificial virtual space, it combines the two; we look at reality and interact with it through the fantasy frame of the digital screen, and this intermediary frame supplements reality with virtual elements which sustain our desire to participate in the game, push us to look for them in a reality which, without this frame, would leave us indifferent. Sound familiar? Of course it does. What the technology of Pokémon Go externalizes is simply the basic mechanism of ideology—at its most basic, ideology is the primordial version of “augmented reality.”

The first step in this direction of technology imitating ideology was taken a couple of years ago by Pranav Mistry, a member of the Fluid Interfaces Group at the Massachusetts Institute of Technology Media Lab, who developed a wearable “gestural interface” called “SixthSense.”** The hardware—a small webcam that dangles from one’s neck, a pocket projector, and a mirror, all connected wirelessly to a smartphone in one’s pocket—forms a wearable mobile device. The user begins by handling objects and making gestures; the camera recognizes and tracks the user’s hand gestures and the physical objects using computer vision-based techniques. The software processes the video stream data, reading it as a series of instructions, and retrieves the appropriate information (texts, images, etc.) from the Internet; the device then projects this information onto any physical surface available—all surfaces, walls, and physical objects around the wearer can serve as interfaces. Here are some examples of how it works: In a bookstore, I pick up a book and hold it in front of me; immediately, I see projected onto the book’s cover its reviews and ratings. I can navigate a map displayed on a nearby surface, zoom in, zoom out, or pan across, using intuitive hand movements. I make a sign of @ with my fingers and a virtual PC screen with my email account is projected onto any surface in front of me; I can then write messages by typing on a virtual keyboard. And one could go much further here—just think how such a device could transform sexual interaction. (It suffices to concoct, along these lines, a sexist male dream: Just look at a woman, make the appropriate gesture, and the device will project a description of her relevant characteristics—divorced, easy to seduce, likes jazz and Dostoyevsky, good at fellatio, etc., etc.) In this way, the entire world becomes a “multi-touch surface,” while the whole Internet is constantly mobilized to supply additional data allowing me to orient myself.

Mistry emphasized the physical aspect of this interaction: Until now, the Internet and computers have isolated the user from the surrounding environment; the archetypal Internet user is a geek sitting alone in front of a screen, oblivious to the reality around him. With SixthSense, I remain engaged in physical interaction with objects: The alternative “either physical reality or the virtual screen world” is replaced by a direct interpenetration of the two. The projection of information directly onto the real objects with which I interact creates an almost magical and mystifying effect: Things appear to continuously reveal—or, rather, emanate—their own interpretation. This quasi-animist effect is a crucial component of the IoT: “Internet of things? These are nonliving things that talk to us, although they really shouldn’t talk. A rose, for example, which tells us that it needs water.”1 (Note the irony of this statement. It misses the obvious fact: a rose is alive.) But, of course, this unfortunate rose does not do what it “shouldn’t” do: It is merely connected with measuring apparatuses that let us know that it needs water (or they just pass this message directly to a watering machine). The rose itself knows nothing about it; everything happens in the digital big Other, so the appearance of animism (we communicate with a rose) is a mechanically generated illusion.
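The "talking rose" above reduces to a sensor, a threshold, and a message: the plant knows nothing, a moisture reading is compared against a setpoint and a string is emitted by the digital big Other. A minimal sketch of that mechanically generated "animism" (names and the threshold value are invented for illustration):

```python
# The 'rose' is just telemetry plus a rule. The flower itself plays
# no part; a soil sensor reports a number and this code decides
# whether a human-sounding message goes out.

DRY_THRESHOLD = 0.30  # fraction of saturation below which the 'rose' speaks

def rose_message(soil_moisture: float):
    """Return the message the 'rose' sends, or None if it stays silent."""
    if soil_moisture < DRY_THRESHOLD:
        return "I need water"
    return None
```

The same reading could just as easily be routed straight to a watering machine, with no message and no illusion of a speaking flower at all.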

Thursday, August 31, 2017

IoT Extended Sensoria


bbvaopenmind |  In George Orwell’s 1984,(39) it was the totalitarian Big Brother government who put the surveillance cameras on every television—but in the reality of 2016, it is consumer electronics companies who build cameras into the common set-top box and every mobile handheld. Indeed, cameras are becoming a commodity, and as video feature extraction gets to lower power levels via dedicated hardware, and other micropower sensors determine the necessity of grabbing an image frame, cameras will become even more common as generically embedded sensors. The first commercial, fully integrated CMOS camera chips came from VVL in Edinburgh (now part of ST Microelectronics) back in the early 1990s.(40) At the time, pixel density was low (e.g., the VVL “Peach” with 312 x 287 pixels), and the main commercial application of their devices was the “BarbieCam,” a toy video camera sold by Mattel. I was an early adopter of these digital cameras myself, using them in 1994 for a multi-camera precision alignment system at the Superconducting Supercollider(41) that evolved into the hardware used to continually align the forty-meter muon system at micron-level precision for the ATLAS detector at CERN’s Large Hadron Collider. This technology was poised for rapid growth: now, integrated cameras peek at us everywhere, from laptops to cellphones, with typical resolutions of scores of megapixels and bringing computational photography increasingly to the masses. ASICs for basic image processing are commonly embedded with or integrated into cameras, giving increasing video processing capability for ever-decreasing power. The mobile phone market has been driving this effort, but increasingly static situated installations (e.g., video-driven motion/context/gesture sensors in smart homes) and augmented reality will be an important consumer application, and the requisite on-device image processing will drop in power and become more agile.
We already see this happening at extreme levels, such as with the recently released Microsoft HoloLens, which features six cameras, most of which are used for rapid environment mapping, position tracking, and image registration in a lightweight, battery-powered, head-mounted, self-contained AR unit. 3D cameras are also becoming ubiquitous, breaking into the mass market via the original structured-light-based Microsoft Kinect a half-decade ago. Time-of-flight 3D cameras (pioneered in CMOS in the early 2000s by researchers at Canesta(42)) have evolved to recently displace structured light approaches, and developers worldwide race to bring the power and footprint of these devices down sufficiently to integrate into common mobile devices (a very small version of such a device is already embedded in the HoloLens). As pixel timing measurements become more precise, photon-counting applications in computational photography, as pursued by my Media Lab colleague Ramesh Raskar, promise to usher in revolutionary new applications that can do things like reduce diffusion and see around corners.(43)
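The principle behind a time-of-flight pixel is simple: emit light, time how long it takes to bounce back, and halve the round trip to get distance. A minimal sketch of that conversion (function names are illustrative, not any vendor's API):

```python
# Depth from photon round-trip time: d = c * t / 2.
# Each time-of-flight pixel measures the round-trip travel time of
# emitted light; the factor of two accounts for the out-and-back path.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Convert a measured round-trip time (seconds) to depth in meters."""
    return C * round_trip_s / 2.0

def depth_map(times_s):
    """Apply the conversion to a 2D grid of per-pixel timings."""
    return [[tof_depth_m(t) for t in row] for row in times_s]
```

The numbers make the engineering challenge obvious: a target one meter away returns light in roughly 6.7 nanoseconds, so millimeter-scale depth precision demands picosecond-scale timing, which is exactly why tighter pixel timing opens the door to the photon-counting applications mentioned above.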

My research group began exploring this penetration of ubiquitous cameras over a decade ago, especially applications that ground the video information with simultaneous data from wearable sensors. Our early studies were based around a platform called the “Portals”:(44) using an embedded camera feeding a TI DaVinci DSP/ARM hybrid processor, surrounded by a core of basic sensors (motion, audio, temperature/humidity, IR proximity) and coupled with a Zigbee RF transceiver, we scattered forty-five of these devices all over the Media Lab complex, interconnected through the wired building network. One application that we built atop them was “SPINNER,”(45) which labelled video from each camera with data from any wearable sensors in the vicinity. The SPINNER framework was based on the idea of being able to query the video database with higher-level parameters, lifting sensor data up into a social/affective space,(46) then trying to effectively script a sequential query as a simple narrative involving human subjects adorned with the wearables. Video clips from large databases sporting hundreds of hours of video would then be automatically selected to best fit given timeslots in the query, producing edited videos that observers deemed coherent.(47) Naively pointing to the future of reality television, this work aims further, looking to enable people to engage sensor systems via human-relevant query and interaction.
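The SPINNER idea, querying a video database in a social/affective vocabulary rather than by pixel content, can be caricatured in a few lines. This is a toy sketch under invented assumptions (the data model, label names, and greedy scoring are mine, not the actual system's):

```python
# Toy SPINNER-style narrative query: each clip carries labels lifted
# from nearby wearable sensors, and each slot of a scripted narrative
# is filled with the not-yet-used clip whose labels best match.

def label_score(clip_labels, wanted):
    """Count how many requested labels a clip satisfies."""
    return len(set(clip_labels) & set(wanted))

def fill_narrative(clips, slots):
    """clips: list of (clip_id, labels); slots: list of wanted-label lists.
    Greedily pick the best-matching unused clip for each slot, in order."""
    used, story = set(), []
    for wanted in slots:
        best = max(
            (c for c in clips if c[0] not in used),
            key=lambda c: label_score(c[1], wanted),
        )
        used.add(best[0])
        story.append(best[0])
    return story
```

With hundreds of hours of labelled footage, the interesting work is in the lifting step, turning raw accelerometer and audio features into labels like "excited" that a human-relevant query can mention at all.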

Rather than try to extract stories from passive ambient activity, a related project from our team devised an interactive camera with a goal of extracting structured stories from people.(48) Taking the form factor of a small mobile robot, “Boxie” featured an HD camera in one of its eyes: it would rove our building and get stuck, then plead for help when people came nearby. It would then ask people successive questions and request that they fulfill various tasks (e.g., bring it to another part of the building, or show it what they do in the area where it was found), making an indexed video that can be easily edited to produce something of a documentary about the people in the robot’s abode.
In the next years, as large video surfaces cost less (potentially being roll-roll printed) and are better integrated with responsive networks, we will see the common deployment of pervasive interactive displays. Information coming to us will manifest in the most appropriate fashion (e.g., in your smart eyeglasses or on a nearby display)—the days of pulling your phone out of your pocket and running an app are numbered. To explore this, we ran a project in my team called “Gestures Everywhere”(49) that exploited the large monitors placed all over the public areas of our building complex.(50) Already equipped with RFID to identify people wearing tagged badges, we added a sensor suite and a Kinect 3D camera to each display site. As an occupant approached a display and was identified via RFID or video recognition, information most relevant to them would appear on the display. We developed a recognition framework for the Kinect that parsed a small set of generic hand gestures (e.g., signifying “next,” “more detail,” “go-away,” etc.), allowing users to interact with their own data at a basic level without touching the screen or pulling out a mobile device. Indeed, proxemic interactions(51) around ubiquitous smart displays will be common within the next decade.
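The display-side logic described above amounts to a small dispatch table: once the occupant is identified and a gesture is recognized, the gesture maps onto an action over their personal view. A sketch with invented gesture names and actions (the real recognition framework is the hard part and is elided here):

```python
# Gesture-to-action dispatch for a public smart display. The view is
# modeled as a simple page index; None means the personal view has
# been dismissed. Unrecognized gestures deliberately change nothing.

GESTURES = {
    "next":        lambda view: view + 1,   # advance to the next page
    "more_detail": lambda view: view,       # stay put (expansion stubbed out)
    "go_away":     lambda view: None,       # clear personal data from the screen
}

def handle_gesture(gesture: str, current_view: int):
    """Return the new view index, or None to clear the display."""
    action = GESTURES.get(gesture)
    if action is None:
        return current_view
    return action(current_view)
```

Keeping the vocabulary this small is a design choice the excerpt hints at: a handful of generic gestures can be recognized robustly across many users without per-user training.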

The plethora of cameras that we sprinkled throughout our building during our SPINNER project produced concerns about privacy (interestingly enough, the Kinects for Gestures Everywhere did not evoke the same response—occupants either did not see them as “cameras” or were becoming used to the idea of ubiquitous vision). Accordingly, we put an obvious power switch on each portal that enabled them to be easily switched off. This is a very artificial solution, however—in the near future, there will just be too many cameras and other invasive sensors in the environment to switch off. These devices must answer verifiable and secure protocols to dynamically and appropriately throttle streaming sensor data to answer user privacy demands. We designed a small, wireless token that controlled our portals in order to study solutions to such concerns.(52) It broadcast a beacon to the vicinity that dynamically deactivated the transmission of proximate audio, video, and other derived features according to the user’s stated privacy preferences—this device also featured a large “panic” button that could be pushed at any time when immediate privacy was desired, blocking audio and video from emanating from nearby Portals.
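The portal side of such a token protocol is easy to state: listen for beacons, honor the most restrictive preference heard, and treat the panic button as an override that blocks everything. A sketch under invented assumptions (the message fields and stream names are mine; the actual protocol is in the cited work):

```python
# A portal throttling its outgoing streams against nearby privacy
# beacons. Restrictions compose: any one token can remove a stream,
# and a panic signal from anyone silences the portal entirely.

ALL_STREAMS = {"audio", "video", "features"}

def allowed_streams(beacons):
    """beacons: list of dicts like {"block": {"video"}, "panic": False}.
    Return the set of streams the portal may still transmit."""
    allowed = set(ALL_STREAMS)
    for b in beacons:
        if b.get("panic"):
            return set()              # panic button: emit nothing at all
        allowed -= set(b.get("block", ()))
    return allowed
```

Note the fail-safe asymmetry: preferences can only subtract streams, never add them back, so the presence of one privacy-conscious person restricts the feed for everyone in range.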

Rather than block the video stream entirely, we have explored just removing the privacy-desiring person from the video image. By using information from wearable sensors, we can more easily identify the appropriate person in the image,(53) and blend them into the background. We are also looking at the opposite issue—using wearable sensors to detect environmental parameters that hint at potentially hazardous conditions for construction workers and rendering that data in different ways atop real-time video, highlighting workers in situations of particular concern.(54)
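The redaction described above, removing one person rather than killing the stream, comes down to a per-pixel choice: wherever a mask says "this is the privacy-requesting person," substitute a stored background model. A deliberately pure-Python sketch (a real system would do this per-frame on dedicated hardware, and building the mask from wearable data is the actual research problem):

```python
# Blend a masked person out of a frame. frame and background are 2D
# grids of pixel values; person_mask is a 2D grid of booleans marking
# pixels attributed to the person who asked not to be recorded.

def blend_out(frame, person_mask, background):
    """Return frame with masked pixels replaced by the background model."""
    return [
        [bg if masked else px
         for px, masked, bg in zip(f_row, m_row, b_row)]
        for f_row, m_row, b_row in zip(frame, person_mask, background)
    ]
```

The wearable sensors earn their keep in mask construction: motion signatures from an accelerometer can disambiguate which of several silhouettes belongs to the person requesting privacy, something video alone does poorly.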

Tuesday, May 02, 2017

Automating Suspicion


theintercept |  When civil liberties advocates discuss the dangers of new policing technologies, they often point to sci-fi films like “RoboCop” and “Minority Report” as cautionary tales. In “RoboCop,” a massive corporation purchases Detroit’s entire police department. After one of its officers gets fatally shot on duty, the company sees an opportunity to save on labor costs by reanimating the officer’s body with sleek weapons, predictive analytics, facial recognition, and the ability to record and transmit live video.

Although intended as a grim allegory of the pitfalls of relying on untested, proprietary algorithms to make lethal force decisions, “RoboCop” has long been taken by corporations as a roadmap. And no company has been better poised than Taser International, the world’s largest police body camera vendor, to turn the film’s ironic vision into an earnest reality.

In 2010, Taser’s longtime vice president Steve Tuttle “proudly predicted” to GQ that once police can search a crowd for outstanding warrants using real-time face recognition, “every cop will be RoboCop.” Now Taser has announced that it will provide any police department in the nation with free body cameras, along with a year of free “data storage, training, and support.” The company’s goal is not just to corner the camera market, but to dramatically increase the video streaming into its servers.

With an estimated one-third of departments using body cameras, police officers have been generating millions of hours of video footage. Taser stores terabytes of such video on Evidence.com, on private servers operated by Microsoft, to which police agencies must continuously subscribe for a monthly fee. Data from these recordings is rarely analyzed for investigative purposes, though, and Taser — which recently rebranded itself as a technology company and renamed itself “Axon” — is hoping to change that.

Taser has started to get into the business of making sense of its enormous archive of video footage by building an in-house “AI team.” In February, the company acquired a computer vision startup called Dextro and a computer vision team from Fossil Group Inc. Taser says the companies will allow agencies to automatically redact faces to protect privacy, extract important information, and detect emotions and objects — all without human intervention. This will free officers from the grunt work of manually writing reports and tagging videos, a Taser spokesperson wrote in an email. “Our prediction for the next few years is that the process of doing paperwork by hand will begin to disappear from the world of law enforcement, along with many other tedious manual tasks.” 
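The "automatic redaction" Taser describes decomposes into two steps: detect face regions, then destroy the detail inside them. Detection is the machine-learning half; the redaction half is nearly trivial, as this sketch shows by overwriting a given bounding box with its mean value (a toy stand-in, not Axon's actual pipeline):

```python
# Redact a rectangular region of a grayscale frame by flattening it
# to its mean value. A face detector would supply the box; here the
# box is given. frame is a 2D grid of integer pixel values.

def redact_box(frame, x0, y0, x1, y1):
    """Replace pixels in the half-open box [y0:y1), [x0:x1) with their mean."""
    region = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(region) // len(region)
    out = [row[:] for row in frame]          # leave the input frame intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = mean
    return out
```

The asymmetry is worth noticing: the same detector that finds a face in order to hide it is the component that, pointed the other way, enables the real-time face search Tuttle predicted.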

Analytics will also allow departments to observe historical patterns in behavior for officer training, the spokesperson added. “Police departments are now sitting on a vast trove of body-worn footage that gives them insight for the first time into which interactions with the public have been positive versus negative, and how individuals’ actions led to it.”

But looking to the past is just the beginning: Taser is betting that its artificial intelligence tools might be useful not just to determine what happened, but to anticipate what might happen in the future.
“We’ve got all of this law enforcement information with these videos, which is one of the richest treasure troves you could imagine for machine learning,” Taser CEO Rick Smith told PoliceOne in an interview about the company’s AI acquisitions. “Imagine having one person in your agency who would watch every single one of your videos — and remember everything they saw — and then be able to process that and give you the insight into what crimes you could solve, what problems you could deal with. Now, that’s obviously a little further out, but based on what we’re seeing in the artificial intelligence space, that could be within five to seven years.”

As video analytics and machine vision have made rapid gains in recent years, the future long dreaded by privacy experts and celebrated by technology companies is quickly approaching. No longer is the question whether artificial intelligence will transform the legal and lethal limits of policing, but how and for whose profits.

The Stigma of Systemic Racism Handed Over to "Machine Intelligence"...,


NYTimes |  When Chief Justice John G. Roberts Jr. visited Rensselaer Polytechnic Institute last month, he was asked a startling question, one with overtones of science fiction.

“Can you foresee a day,” asked Shirley Ann Jackson, president of the college in upstate New York, “when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?”

The chief justice’s answer was more surprising than the question. “It’s a day that’s here,” he said, “and it’s putting a significant strain on how the judiciary goes about doing things.”

He may have been thinking about the case of a Wisconsin man, Eric L. Loomis, who was sentenced to six years in prison based in part on a private company’s proprietary software. Mr. Loomis says his right to due process was violated by a judge’s consideration of a report generated by the software’s secret algorithm, one Mr. Loomis was unable to inspect or challenge.

In March, in a signal that the justices were intrigued by Mr. Loomis’s case, they asked the federal government to file a friend-of-the-court brief offering its views on whether the court should hear his appeal.

The report in Mr. Loomis’s case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes.
The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”

The Wisconsin Supreme Court ruled against Mr. Loomis. The report added valuable information, it said, and Mr. Loomis would have gotten the same sentence based solely on the usual factors, including his crime — fleeing the police in a car — and his criminal history.

At the same time, the court seemed uneasy with using a secret algorithm to send a man to prison. Justice Ann Walsh Bradley, writing for the court, discussed, for instance, a report from ProPublica about Compas that concluded that black defendants in Broward County, Fla., “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism.”
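The ProPublica finding quoted above is, stated concretely, a gap in false positive rates: among defendants who did not reoffend, what fraction did the tool nonetheless flag as high risk, computed per group? A minimal sketch of that metric (the record format and toy data are invented for illustration; the actual analysis used Broward County court records):

```python
# False positive rate per group. Each record is a tuple:
# (group, flagged_high_risk, reoffended). The FPR asks: of the people
# who did NOT reoffend, how many were incorrectly flagged as high risk?

def false_positive_rate(records, group):
    """FPR = wrongly flagged non-reoffenders / all non-reoffenders in group."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    if not non_reoffenders:
        return 0.0
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)
```

A tool can match overall accuracy across groups and still show very different false positive rates, which is precisely why "the report added valuable information" and "the tool mislabels one group more often" can both be true at once.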

Fuck Robert Kagan And Would He Please Now Just Go Quietly Burn In Hell?

politico | The Washington Post on Friday announced it will no longer endorse presidential candidates, breaking decades of tradition in a...