Saturday, November 26, 2022

Nothing Says "Family-Friendly" Like Being Censored By An Ugly Dude In Misapplied Make-Up

theatlantic  |  Everyone I spoke with believes that the very future of how the internet works is at stake. Accordingly, this case is likely to head to the Supreme Court. Part of this fiasco touches on the debate around Section 230 of the Communications Decency Act, which, despite its political-lightning-rod status, makes it extremely clear that websites have editorial control. “Section 230 tells platforms, ‘You’re not the author of what people on your platform put up, but that doesn’t mean you can’t clean up your own yard and get rid of stuff you don’t like.’ That has served the internet very well,” Dan Novack, a First Amendment attorney, told me. In effect, it allows websites that host third-party content to determine whether they want a family-friendly community or an edgy and chaotic one. This, Mike Masnick argued, is what makes the internet useful, and Section 230 has “set up the ground rules in which all manner of experimentation happens online,” even if it’s also responsible for quite a bit of the internet’s toxicity.

But the full editorial control that Section 230 protects isn’t just a boon for giants such as Facebook and YouTube. Take spam: Every online community—from large platforms to niche forums—has the freedom to build the environment that makes sense to it, and part of that freedom is deciding how to deal with bad actors (for example, bot accounts that spam you with offers for natural male enhancement). Daphne Keller suggested that the law may have a carve-out for spam—which is often filtered because of the way it’s disseminated, not because of its viewpoint (though this gets complicated with spammy political emails). But one way to look at content moderation is as a constant battle for online communities, where bad actors are always a step ahead. The Texas law would kneecap platforms’ ability to respond to a dynamic threat.
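One way to picture the distinction Keller draws: a filter that flags accounts by how they post (volume and repetition), never by what they say. The Python sketch below is a purely illustrative toy, not any platform's actual system; the function, names, and thresholds are all invented for this example.

    # Toy dissemination-based spam filter (hypothetical; thresholds invented).
    # An account is flagged for posting too fast or for blasting identical
    # text repeatedly. The message's viewpoint is never inspected.
    from collections import defaultdict, deque
    import time

    RATE_LIMIT = 5        # max posts per account per window (assumed)
    WINDOW_SECONDS = 60   # sliding-window length in seconds (assumed)
    DUPLICATE_LIMIT = 3   # identical messages tolerated before flagging (assumed)

    recent_posts = defaultdict(deque)  # account -> timestamps of recent posts
    message_counts = defaultdict(int)  # exact text -> times seen anywhere

    def is_spam(account, message, now=None):
        """Flag by how the message is disseminated, not by its viewpoint."""
        now = time.time() if now is None else now
        timestamps = recent_posts[account]
        while timestamps and now - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()  # drop posts outside the sliding window
        timestamps.append(now)
        message_counts[message] += 1
        return (len(timestamps) > RATE_LIMIT
                or message_counts[message] > DUPLICATE_LIMIT)

    # A burst of identical offers from one bot account trips both signals,
    # while the same sentence posted once by a human passes untouched.
    for _ in range(10):
        flagged = is_spam("bot_account", "Natural male enhancement, click now!")
    print(flagged)  # True

Nothing in is_spam reads the message for politics or ideology, which is why this kind of filtering is usually treated as viewpoint-neutral; the spammy-political-email case gets hard precisely because the same text can be both high-volume and political.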

“It says, ‘Hey, the government can decide how you deal with content and how you decide what community you want to build or who gets to be a part of that community and how you can deal with your bad actors,’” Masnick said. “Which sounds fundamentally like a totally different idea of the internet.”

“A lot of people envision the First Amendment in this affirmative way, where it is about your right to say what you want to say,” Novack told me. “But the First Amendment is just as much about protecting your right to be silent. And it’s not just about speech but things adjacent to your speech—like what content you want to be associated or not associated with. This law and the conservative support of it shreds those notions into ribbons.”

The implications are terrifying and made all the worse by the language of Judge Oldham’s ruling. Perhaps the best example of this brazen obtuseness is Oldham’s argument about “the Platforms’ obsession with terrorists and Nazis,” concerns that he suggests are “fanciful” and “hypothetical.” Of course, such concerns are not hypothetical; they’re a central issue for any large-scale platform’s content-moderation team. In 2015, for example, the Brookings Institution issued a 68-page report titled “The ISIS Twitter Census,” mapping the network of terrorist supporters flooding the platform. The report found that in 2014, there were at least 46,000 ISIS accounts on Twitter posting graphic violent content and using the platform to recruit and collect intelligence for the Islamic State.

