But the raw data that Facebook uses to create user-interest inferences is not available to users. It’s data about them, but it’s not their data. One European Facebook user has been petitioning to see this data—and Facebook acknowledged that it exists—but so far has been unable to obtain it.
When he responded to Kennedy, Zuckerberg did not acknowledge any of this, but he did admit that Facebook has other types of data that it uses to increase the efficiency of its ads. He said:
My understanding is that the targeting options that are available for advertisers are generally things that are based on what people share. Now once an advertiser chooses how they want to target something, Facebook also does its own work to help rank and determine which ads are going to be interesting to which people. So we may use metadata or other behaviors of what you’ve shown that you’re interested in News Feed or other places in order to make our systems more relevant to you, but that’s a little bit different from giving that as an option to an advertiser.
Kennedy responded: “I don’t understand how users then own that data.”
This apparent contradiction relies on the company’s distinction between the content someone has intentionally shared—which Facebook mines for valuable targeting information—and the data that Facebook quietly collects around the web, gathers from physical locations, and infers about users based on people who have a similar digital profile. As the journalist Rob Horning put it, that second set of data is something of a “product” that Facebook makes, a “synthetic” mix of actual data gathered, data purchased from outsiders, and data inferred by machine intelligence.
With Facebook, the concept of owning your data begins to verge on meaninglessness if it doesn’t include that second, more holistic concept: not just the data users create and upload explicitly, but all the other information that has become attached to their profiles by other means.
But one can see, from Facebook’s perspective, how complicated that would be. Its techniques for placing users into particular buckets or assigning them certain targeting parameters are literally the basis for the company’s valuation. In a less techno-pessimistic time, Zuckerberg described people’s data in completely different terms. In October 2013, he told investors that this data helps Facebook “build the clearest models of everything there is to know in the world.”
Facebook puts out a series of interests for users to peruse or turn off, but it keeps the models to itself. The models make Facebook ads work well, and that helps small and medium-size businesses compete more effectively with megacorporations on this one particular score. Yet the models introduce new asymmetries into the world. Gullible people can be targeted over and over with ads for businesses that stop just short of scams. People prone to believing hoaxes and conspiracies can be hit with ads that reinforce their most corrosive beliefs. Politicians can use blizzards of ads to precisely target different voter types.
As with all advertising, one has to ask: When does persuasion become manipulation or coercion? If Facebook advertisers crossed that line, would the company even know it? Dozens of times throughout the proceedings, Zuckerberg testified that he wasn’t sure about the specifics of his own service. It seemed preposterous, but with billions of users and millions of advertisers, who exactly could know what was happening?
Most of the ways that people think they protect their privacy can’t account for this new and more complex reality, which Kennedy recognized in his closing remark.
“You focus a lot of your testimony ... on the individual privacy aspects of this, but we haven’t talked about the societal implications of it ... The underlying issue here is that your platform has become a mix of ... news, entertainment, and social media that is up for manipulation,” he said. “The changes to individual privacy don’t seem to be sufficient to address that underlying issue.”