Monday, January 01, 2018

Is Ideology The Original Augmented Reality?


nautil.us |  Released in July 2016, Pokémon Go is a location-based, augmented-reality game for mobile devices, typically played on mobile phones; players use the device’s GPS and camera to capture, battle, and train virtual creatures (“Pokémon”) who appear on the screen as if they were in the same real-world location as the player: As players travel the real world, their avatar moves along the game’s map. Different Pokémon species reside in different areas—for example, water-type Pokémon are generally found near water. When a player encounters a Pokémon, AR (Augmented Reality) mode uses the camera and gyroscope on the player’s mobile device to display an image of a Pokémon as though it were in the real world.* This AR mode is what makes Pokémon Go different from other computer games: Instead of taking us out of the real world and drawing us into the artificial virtual space, it combines the two; we look at reality and interact with it through the fantasy frame of the digital screen, and this intermediary frame supplements reality with virtual elements which sustain our desire to participate in the game and push us to look for them in a reality which, without this frame, would leave us indifferent.

Sound familiar? Of course it does. What the technology of Pokémon Go externalizes is simply the basic mechanism of ideology—at its most basic, ideology is the primordial version of “augmented reality.”
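To make the mechanic concrete, here is a minimal Python sketch of how a location-based AR layer of this kind can work, under two simplifying assumptions: the GPS position decides which creatures are "nearby" (water types near water), and the device's orientation decides where on the camera image the creature is drawn. All names (Spawn, nearby_spawns, project_to_screen) and numbers are invented for illustration; this is not the game's actual engine.

```python
# Hypothetical sketch of a location-based AR overlay; names and numbers are invented.
import math
from dataclasses import dataclass

@dataclass
class Spawn:
    name: str
    lat: float
    lon: float
    habitat: str          # e.g. "water"; water-type creatures are placed near water

def nearby_spawns(player_lat, player_lon, spawns, radius_m=200.0):
    """Return spawns within radius_m of the player (flat-earth approximation)."""
    def dist_m(a_lat, a_lon, b_lat, b_lon):
        dy = (a_lat - b_lat) * 111_320.0                                  # metres per degree latitude
        dx = (a_lon - b_lon) * 111_320.0 * math.cos(math.radians(a_lat))  # metres per degree longitude
        return math.hypot(dx, dy)
    return [s for s in spawns if dist_m(player_lat, player_lon, s.lat, s.lon) <= radius_m]

def project_to_screen(device_heading_deg, bearing_to_spawn_deg, fov_deg=60.0, screen_w=1080):
    """Map the angle between the camera's heading (from the phone's orientation sensors)
    and the creature's bearing onto a horizontal pixel position; None if outside the field of view."""
    offset = (bearing_to_spawn_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return None
    return int((offset / fov_deg + 0.5) * screen_w)

# Usage: a water-type spawn by a lake is "seen" only when the player is close
# and the camera points roughly toward it.
spawns = [Spawn("Magikarp", 46.0500, 14.5100, "water")]
print(nearby_spawns(46.0501, 14.5102, spawns))
print(project_to_screen(device_heading_deg=90.0, bearing_to_spawn_deg=100.0))
```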

The first step in this direction of technology imitating ideology was taken a couple of years ago by Pranav Mistry, a member of the Fluid Interfaces Group at the Massachusetts Institute of Technology Media Lab, who developed a wearable “gestural interface” called “SixthSense.”** The hardware—a small webcam that dangles from one’s neck, a pocket projector, and a mirror, all connected wirelessly to a smartphone in one’s pocket—forms a wearable mobile device. The user begins by handling objects and making gestures; the camera recognizes and tracks the user’s hand gestures and the physical objects using computer vision-based techniques. The software processes the video stream data, reading it as a series of instructions, and retrieves the appropriate information (texts, images, etc.) from the Internet; the device then projects this information onto any physical surface available—all surfaces, walls, and physical objects around the wearer can serve as interfaces. Here are some examples of how it works: In a bookstore, I pick up a book and hold it in front of me; immediately, I see projected onto the book’s cover its reviews and ratings. I can navigate a map displayed on a nearby surface, zoom in, zoom out, or pan across, using intuitive hand movements. I make a sign of @ with my fingers and a virtual PC screen with my email account is projected onto any surface in front of me; I can then write messages by typing on a virtual keyboard. And one could go much further here—just think how such a device could transform sexual interaction. (It suffices to concoct, along these lines, a sexist male dream: Just look at a woman, make the appropriate gesture, and the device will project a description of her relevant characteristics—divorced, easy to seduce, likes jazz and Dostoyevsky, good at fellatio, etc., etc.) In this way, the entire world becomes a “multi-touch surface,” while the whole Internet is constantly mobilized to supply additional data allowing me to orient myself.
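Described abstractly, SixthSense is a loop: a camera frame is interpreted as a gesture, the gesture is mapped to a query, and the result is projected onto whatever surface is at hand. The toy Python sketch below shows only that control flow; recognize_gesture, fetch_book_reviews, and the other names are placeholder stand-ins, not Mistry's actual code or API.

```python
# Toy sketch of the gesture -> lookup -> projection loop described above.
# All functions are placeholders, not the real SixthSense software.

def recognize_gesture(frame):
    """Stand-in for the computer-vision step: return a symbolic gesture label."""
    return frame.get("gesture")            # e.g. "hold_book", "at_sign", "map_zoom_in"

def fetch_book_reviews(isbn):
    return f"reviews and ratings for ISBN {isbn}"      # would normally query the Internet

def open_virtual_inbox(user):
    return f"virtual screen with {user}'s email inbox"

HANDLERS = {
    "hold_book": lambda ctx: fetch_book_reviews(ctx["isbn"]),
    "at_sign":   lambda ctx: open_virtual_inbox(ctx["user"]),
}

def project(surface, content):
    """Stand-in for the pocket projector: draw content onto the nearest surface."""
    print(f"[{surface}] {content}")

def step(frame, context, surface="book cover"):
    gesture = recognize_gesture(frame)
    handler = HANDLERS.get(gesture)
    if handler is None:
        return                              # unrecognized gesture: project nothing
    project(surface, handler(context))

# Usage: holding a book triggers a review lookup projected onto its cover.
step({"gesture": "hold_book"}, {"isbn": "978-0140449136", "user": "alice"})
```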

Mistry emphasized the physical aspect of this interaction: Until now, the Internet and computers have isolated the user from the surrounding environment; the archetypal Internet user is a geek sitting alone in front of a screen, oblivious to the reality around him. With SixthSense, I remain engaged in physical interaction with objects: The alternative “either physical reality or the virtual screen world” is replaced by a direct interpenetration of the two. The projection of information directly onto the real objects with which I interact creates an almost magical and mystifying effect: Things appear to continuously reveal—or, rather, emanate—their own interpretation. This quasi-animist effect is a crucial component of the Internet of Things (IoT): “Internet of things? These are nonliving things that talk to us, although they really shouldn’t talk. A rose, for example, which tells us that it needs water.”1 (Note the irony of this statement: it misses the obvious fact that a rose is alive.) But, of course, this unfortunate rose does not do what it “shouldn’t” do: It is merely connected with measuring apparatuses that let us know that it needs water (or they just pass this message directly to a watering machine). The rose itself knows nothing about it; everything happens in the digital big Other, so the appearance of animism (we communicate with a rose) is a mechanically generated illusion.
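The point about the rose can be restated in code: the “message from the rose” is composed entirely inside the measuring-and-watering system, and the plant knows nothing of it. Below is a generic Python illustration with made-up names (read_soil_moisture, Valve), not any particular IoT product.

```python
# Generic illustration of the "talking rose": a moisture sensor plus a threshold,
# with the anthropomorphic message generated entirely by the system, not the plant.
# read_soil_moisture() and Valve are hypothetical stand-ins for real sensor/actuator drivers.
import random
import time

DRY_THRESHOLD = 0.30        # soil-moisture level below which we water

def read_soil_moisture():
    """Stand-in for a soil-moisture sensor driver (returns a value in 0.0-1.0)."""
    return random.uniform(0.1, 0.6)

class Valve:
    def open_for(self, seconds):
        print(f"watering for {seconds}s")

def check_rose(valve):
    moisture = read_soil_moisture()
    if moisture < DRY_THRESHOLD:
        # The "message from the rose" is composed here, in the digital big Other:
        print(f'rose says: "I need water" (moisture={moisture:.2f})')
        valve.open_for(5)
    else:
        print(f"rose is fine (moisture={moisture:.2f})")

if __name__ == "__main__":
    valve = Valve()
    for _ in range(3):
        check_rose(valve)
        time.sleep(0.1)
```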

