For years, YouTube profited off all kinds of extremist content; its
three-strike policy was directed at copyright infringement. Its current
and newly aggressive posture towards content stems from the advertiser
revolt that erupted following Trump’s surprise victory. Within weeks of
the 2016 election, brands like Johnson & Johnson, and ad-tech
companies like AppNexus, began taking steps
to distance themselves from Breitbart and other purveyors of "fake
news" and extremist content. In early 2017, companies like Starbucks and
Walmart started pulling their ads from YouTube, worried that their marketing was sandwiched between clips featuring foaming-at-the-mouth racists and child abusers. In a watershed moment, the global buying agency Havas pulled its ads from Google and YouTube in the U.K. after the Times of London
detailed how ads for well-known charities were supporting neo-Nazi
articles and videos. When the influential research group Pivotal
downgraded Google stock from a buy to a hold, Google suddenly grew
concerned about the kind of content its proprietary algorithms had been
promoting for years – intentionally and by design.
This is not a
conspiracy theory worthy of a "strike," but the testimony of a former
YouTube engineer named Guillaume Chaslot, who was profiled by the Guardian
in early February. Chaslot, who has a Ph.D. in artificial intelligence,
explained how his team at YouTube was tasked with designing algorithms
that prioritized “watch time” alone. “Watch time was the priority,” he
told the paper. “Everything else was considered a distraction… There are
many ways YouTube can change its algorithms to suppress fake news and
improve the quality and diversity of videos people see… I tried to
change YouTube from the inside, but it didn’t work.”
When Chaslot
conducted an independent study of how his algorithms worked in the real
world, he found that during recent elections in France, Germany and the
U.K., YouTube "systematically amplifie[d] videos that are divisive,
sensational and conspiratorial." (His findings can be seen at Algotransparency.org.) At the height of the advertising revolt, in March of last year, YouTube announced that
it was "taking a hard look at our existing community guidelines to
determine what content is allowed on the platform – not just what
content can be monetized." CEO Susan Wojcicki announced the company
would hire thousands of human moderators to watch and judge all content
on the site.
YouTube's new policies were part of an industry-wide
course correction. Over the past year, under the banner of combatting
hate speech and fake news, Google and Facebook began to cut off search
traffic and monetized content-creator accounts, not only to dangerous
scam artists like Jones but to any site that garnered complaints or
didn't meet newly enforced and vaguely defined criteria of
"credible" and "quality."