Showing posts with label Diversity Training. Show all posts

Tuesday, February 15, 2022

Two Weeks Ago Sloly Asked For Help, Today He Got A Foot In His Ass...,

cbc  |   "Chief Sloly and the Ottawa Police Service have been working, with our policing partners, around the clock for three weeks to end this illegal occupation of our city," the statement said.

"This unprecedented situation, well beyond the experience of any municipal policing body in Canada, has put tremendous strain on all our officers."

The statement said the Ottawa Police Service is working with the OPP and RCMP to establish a joint incident command that it says will see more resources and expertise made available to help end what many are calling the occupation of the nation's capital.

"In future there will be an opportunity for a full review of the operation, but right now it is time to work together with our partners and focus on ending this illegal occupation," the statement said.

OPS media relations told CBC News no one was available for an interview.

The Globe and Mail recently noted that while Sloly has faced criticism for his handling of some issues, he was not known in policing circles as someone quick to resort to heavy-handed measures.

During a special meeting of the Ottawa Police Services Board Friday, police board chair Coun. Diane Deans defended Sloly's response to the crisis, saying that despite requests for help issued to the province and the federal government the OPS still did not have the resources it needed to end the occupation of the city. 

The Ottawa Police Service is "working tirelessly with the resources they have and there has been some progress. There have been over 1,700 tickets issued, there have been at least 25 arrests, police have been working to seize fuel, they've made progress on clamping down on the encampment at Coventry Rd. and in Confederation Park, but it's not enough," Deans said at the meeting.

"We do not have the resource requirement that we have asked for at this point."

Deans declined an interview request from CBC News Monday when asked about specific allegations related to Sloly's behaviour as chief of police.

Wednesday, June 09, 2021

Festus And Cooter Are Endangered Pissants - Google IS White Supremacy

wired |  The repercussions of Gebru’s termination quickly radiated out from her team to the rest of Google and, beyond that, to the entire discipline of AI fairness research.

Some Google employees, including David Baker, a director who’d been at the company for 16 years, publicly quit over its treatment of Gebru. Google’s research department was riven by mistrust and rumors about what happened and what might happen next. Even people who believed Gebru had behaved in ways unbecoming of a corporate researcher saw Google’s response as ham-handed. Some researchers feared their work would now be policed more closely. One of them, Nicholas Carlini, sent a long internal email complaining of changes that company lawyers made to another paper involving large language models, published after Gebru was fired, likening the intervention to “Big Brother stepping in.” The changes downplayed the problems the paper reported and removed references to Google’s own technology, the email said.

Soon after, Google rolled out its response to the roiling scandal and sketched out a more locked-down future for in-house research probing AI’s power. Marian Croak, the executive who had shown interest in Gebru’s work, was given the task of consolidating the various teams working on what the company called responsible AI, including Mitchell and Gebru’s. Dean sent around an email announcing that a review of Gebru’s ouster had concluded; he was sorry, he said, that the company had not “handled this situation with more sensitivity.”

Dean also announced that progress on improving workforce diversity would now be considered in top executives’ performance reviews—perhaps quietly conceding Gebru’s assertion that leaders were not held accountable for their poor showing on this count. And he informed researchers that they would be given firmer guidance on “Google’s research goals and priorities.” A Google source later explained that this meant future projects touching on sensitive or commercial topics would require more input from in-house legal experts, product teams, and others within Google who had relevant expertise. The outlook for open-minded, independent research on ethical AI appeared gloomy. Google claimed that it still had hundreds of people working on responsible AI, and that it would expand those teams; the company painted Gebru and Mitchell’s group as a tiny and relatively unimportant cog in a big machine. But others at Google said the Ethical AI leaders and their frank feedback would be missed. “For me, it’s the most critical voices that are the most important and where I have learned the most,” says one person who worked on product changes with Gebru and Mitchell’s input. Bengio, the women’s manager, turned his back on 14 years of working on AI at Google and quit to join Apple.

Outside of Google, nine Democrats in Congress wrote to Pichai questioning his commitment to preventing AI’s harms. Mitchell had at one point tried to save the “Stochastic Parrots” paper by telling executives that publishing it would bolster arguments that the company was capable of self-policing. Quashing it was now undermining those arguments.

Some academics announced that they had backed away from company events or funding. The fairness and technology conference’s organizers stripped Google of its status as a sponsor of the event. Luke Stark, who studies the social impacts of AI at the University of Western Ontario, turned down a $60,000 grant from Google in protest of its treatment of the Ethical AI team. When he applied for the money in December 2020, he had considered the team a “strong example” of how corporate researchers could do powerful work. Now he wanted nothing to do with Google. Tensions built into the field of AI ethics, he saw, were beginning to cause fractures.

“The big tech companies tried to steal a march on regulators and public criticism by embracing the idea of AI ethics,” Stark says. But as the research matured, it raised bigger questions. “Companies became less able to coexist with internal critical research,” he says. One person who runs an ethical AI team at another tech company agrees. “Google and most places did not count on the field becoming what it did.”

To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs. “We had been trying to poke on this issue already, but when Timnit got fired it catapulted into a more mainstream conversation,” says Savannah Thais, a researcher at Princeton on the journal’s board who contributed to the paper. “Now a lot more people are questioning: Is it possible to do good ethics research in a corporate AI setting?”

If that mindset takes hold, in-house ethical AI research may forever be held in suspicion—much the way industrial research on pollution is viewed by environmental scientists. Jeff Dean admitted in a May interview with CNET that the company had suffered a real “reputational hit” among people interested in AI ethics work. The rest of the interview dealt mainly with promoting Google’s annual developer conference, where it was soon announced that large language models, the subject of Gebru’s fateful critique, would play a more central role in Google search and the company’s voice assistant. Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies. “What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it,’” she says. (Whittaker herself once worked at Google, where she clashed with management over AI ethics and the Maven Pentagon contract before leaving in 2019.)

Any such divide is unlikely to be neat, given how the field of AI ethics sprouted in a tech industry hothouse. The community is still small, and jobs outside big companies are sparser and much less well paid, particularly for candidates without computer science PhDs. That’s in part because AI ethics straddles the established boundaries of academic departments. Government and philanthropic funding is no match for corporate purses, and few institutions can rustle up the data and computing power needed to match work from companies like Google.

For Gebru and her fellow travelers, the past five years have been vertiginous. For a time, the period seemed revolutionary: Tech companies were proactively exploring flaws in AI, their latest moneymaking marvel—a sharp contrast to how they’d faced up to problems like spam and social network moderation only after coming under external pressure. But now it appeared that not much had changed after all, even if many individuals had good intentions.

Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017, and who now works as a fellow at the Mozilla Foundation, says that Google’s treatment of its own researchers demands a permanent shift in perceptions. “There was this hope that some level of self-regulation could have happened at these tech companies,” Raji says. “Everyone’s now aware that the true accountability needs to come from the outside—if you’re on the inside, there’s a limit to how much you can protect people.”

Gebru, who recently returned home after her unexpectedly eventful road trip, has come to a similar conclusion. She’s raising money to launch an independent research institute modeled on her work on Google’s Ethical AI team and her experience in Black in AI. “We need more support for external work so that the choice is not ‘Do I get paid by the DOD or by Google?’” she says.

Gebru has had offers, but she can’t imagine working within the industry anytime in the near future. She’s been thinking back to conversations she’d had with a friend who warned her not to join Google, saying it was harmful to women and impossible to change. Gebru had disagreed, claiming she could nudge things, just a little, toward a more beneficial path. “I kept on arguing with her,” Gebru says. Now, she says, she concedes the point.

Sunday, May 09, 2021

Deplorables Understand That Wokeism Is A PsyOp...,

theatlantic |  Nonprofit organizations that provide these training sessions argued that the order violated their free-speech rights and hampered their ability to conduct their business. In December, a federal judge agreed; President Joe Biden rescinded the order the day he took office. But by then, critical race theory was already a part of the conservative lexicon. Since Trump’s executive order, Rufo told me, he has provided his analysis “to a half-dozen state legislatures, the United States House of Representatives, and the United States Senate.” One such state legislature was New Hampshire’s; on February 18, the lower chamber held a hearing to discuss Keith Ammon’s bill. Rufo was among those who testified in support of it.

Concerned that the measure might fail on its own, Republicans have now included its language in a must-pass budget bill. In March, Republican Governor Chris Sununu signaled that he would object to “divisive concepts” legislation because he believes it is unconstitutional, but he has since tempered his stand. “The ideas of critical race theory and all of this stuff—I personally don’t think there’s any place for that in schools,” he said in early April. But, he added, “when you start turning down the path of the government banning things, I think that’s a very slippery slope.” Almost everyone I spoke with for this article assumed that Sununu would sign the budget bill, and that the divisive-concepts ban would become law.   

Although free-speech advocates are confident that bills like Ammon’s will not survive challenges in court, they believe the real point is to scare off companies, schools, and government agencies from discussing systemic racism. “What these bills are designed to do is prevent conversations about how racism exists at a systemic level in that we all have implicit biases that lead to decisions that, accumulated, lead to significant racial disparities,” Gilles Bissonnette, the legal director of the ACLU of New Hampshire, told me. “The proponents of this bill want none of those discussions to happen. They want to suppress that type of speech.”

Conservatives are not the only critics of diversity training. For years, some progressives, including critical race theorists, have questioned its value: Is it performative? Is it the most effective way to move toward equity, or is it simply an effective way of restating the obvious and stalling meaningful action? But that is not the fight that has materialized over the past nine months. Instead, it is a confrontation with a cartoonish version of critical race theory.

For Republicans, the end goal of all these bills is clear: initiating another battle in the culture wars and holding on to some threadbare mythology of the nation that has been challenged in recent years. What’s less clear is whether average voters care much about the debate. In a recent Atlantic/Leger poll, 52 percent of respondents who identified as Republicans said that states should pass laws banning schools from teaching critical race theory, but just 30 percent of self-identified independents were willing to say the same. Meanwhile, a strong majority of Americans, 78 percent, either had not heard of critical race theory or were unsure whether they had.

Last week, after President Biden’s first joint address to Congress—and as Idaho was preparing to pass its bill—Senator Tim Scott stood in front of United States and South Carolina flags to deliver the Republican response. “From colleges to corporations to our culture, people are making money and gaining power by pretending we haven’t made any progress,” Scott said. “You know this stuff is wrong. Hear me clearly: America is not a racist country.” Rufo immediately knew what he meant. “Senator Tim Scott denounces critical race theory in his response to Biden’s speech tonight,” he tweeted. “We have turned critical race theory into a national issue and conservative political leaders are starting to fight.”


Sunday, February 21, 2021

Google Diversity Thinner Than Skimmed Piss - Old Marian Croak The Only Responsible Negroe On Deck...,

theverge |  Google has fired Margaret Mitchell, co-lead of the ethical AI team, after she used an automated script to look through her emails in order to find evidence of discrimination against her coworker Timnit Gebru. The news was first reported by Axios.

Mitchell’s firing comes one day after Google announced a reorganization to its AI teams working on ethics and fairness. Marian Croak, a vice president in the engineering organization, is now leading “a new center of expertise on responsible AI within Google Research,” according to a blog post.

Mitchell joined Google in 2016 as a senior research scientist, according to her LinkedIn. Two years later, she helped start the ethical AI team alongside Gebru, a renowned researcher known for her work on bias in facial recognition technology.

In December 2020, Mitchell and Gebru were working on a paper about the dangers of large language models when Megan Kacholia, vice president of Google Brain, asked that the article be retracted. Gebru pushed back, saying the company needed to be more open about why the research wasn’t acceptable. Shortly afterwards, she was fired, though Google characterized her departure as a resignation.

After Gebru’s termination, Mitchell became openly critical of Google executives, including Google AI division head Jeff Dean and Google CEO Sundar Pichai. In January, she lost her corporate email access after Google began investigating her activity.

“After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees,” Google said in a statement to Axios about Mitchell’s firing.

Wednesday, January 27, 2021

Diagnosing Critical Race Theory, Diversity Training, And Social Justice...,

pitt.edu |  For a long time, philosophers of science have expressed little interest in the so-called demarcation project that occupied the pioneers of their field, and most now concur that terms like “pseudoscience” cannot be defined in any meaningful way. However, recent years have witnessed a revival of philosophical interest in demarcation. In this paper, I argue that, though the demarcation problem of old leads to a dead-end, the concept of pseudoscience is not going away anytime soon, and deserves a fresh look. My approach proposes to naturalize and down-size the concept, anchoring it to real-life doctrines and fields of inquiry. First, I argue against the definite article “the” in “the demarcation problem”, distinguishing between territorial and normative demarcation, and between different failures and shortcomings in science apart from pseudoscience (such as fraudulent or faulty research). Next, I argue that pseudosciences can be fruitfully regarded as simulacra of science, doctrines that are not epistemically warranted but whose proponents try to create the impression that they are. In this element of imitation or mimicry, I argue, lies the clue to their common identity. Despite the huge variety of doctrines gathered under the rubric of “pseudoscience”, and the wide range of defects from which they suffer, pseudosciences all engage in similar strategies to create an impression of epistemic warrant. The indirect, symptomatic approach defended here leads to a general characterization of pseudosciences in all domains of inquiry, and to a useful diagnostic tool.

Weak People Are Open, Empty, and Easily Occupied By Evil...,

Tucker Carlson: "Here's the illusion we fall for time and again. We imagine that evil comes like fully advertised as such, like evi...