NYTimes | Responding to complaints that not enough is being done to keep extremist content off social media platforms, Facebook said Thursday that it would begin using artificial intelligence to help remove inappropriate content.
Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook.
One of the first applications for the technology is identifying content that clearly violates Facebook’s terms of use, such as photos and videos of beheadings or other gruesome images, and stopping users from uploading them to the site.
“Tragically, we have seen more terror attacks recently,” Ms. Bickert said. “As we see more attacks, we see more people asking what social media companies are doing to keep this content offline.”
In a blog post published Thursday, Facebook described how an artificial-intelligence system would, over time, teach itself to identify key phrases that were previously flagged for being used to bolster a known terrorist group.
The same system, the company wrote, could learn to identify Facebook users who associate with clusters of pages or groups that promote extremist content, or who return to the site again and again, creating fake accounts in order to spread such content online.
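The phrase-flagging approach described above can be sketched in outline. This is a minimal illustration, not Facebook’s actual system: the phrase list, function names, and review threshold here are all hypothetical.

```python
# Hypothetical sketch of flagging posts that contain previously flagged
# phrases, routing matches to human moderators for case-by-case review.

# Hypothetical examples of phrases a reviewer previously flagged.
FLAGGED_PHRASES = {"join our cause", "martyrdom operation"}


def score_post(text: str, flagged: set = FLAGGED_PHRASES) -> int:
    """Count how many previously flagged phrases appear in a post."""
    lowered = text.lower()
    return sum(1 for phrase in flagged if phrase in lowered)


def should_review(text: str, threshold: int = 1) -> bool:
    """Send the post to human moderators if it matches enough phrases."""
    return score_post(text) >= threshold
```

In a real deployment the flagged-phrase list would be learned and updated from moderator decisions rather than hard-coded, which is the self-teaching behavior the blog post describes.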
“One day our technology will address everything,” Ms. Bickert said. “It’s in development right now.” But human moderators, she added, are still needed to review content for context.