Showing posts with label Go Ogle. Show all posts

Friday, December 11, 2020

Supporting The 2020 U.S. Election Bishes....,

youtube |  Yesterday was the safe harbor deadline for the U.S. Presidential election and enough states have certified their election results to determine a President-elect. Given that, we will start removing any piece of content uploaded today (or anytime after) that misleads people by alleging that widespread fraud or errors changed the outcome of the 2020 U.S. Presidential election, in line with our approach towards historical U.S. Presidential elections. For example, we will remove videos claiming that a Presidential candidate won the election due to widespread software glitches or counting errors. We will begin enforcing this policy today, and will ramp up in the weeks to come. As always, news coverage and commentary on these issues can remain on our site if there’s sufficient education, documentary, scientific or artistic context.

Connecting people to authoritative information

While only a small portion of watch time is election-related content, YouTube continues to be an important source of election news. On average, 88% of the videos in the top 10 search results related to elections came from authoritative news sources (among the rest are things like newsy late-night shows, creator videos, and commentary). And the most-viewed channels and videos are from news channels like NBC and CBS.

We also showed information panels linking both to Google’s election results feature, which sources election results from The Associated Press, and to the Cybersecurity & Infrastructure Security Agency’s (CISA) “Rumor Control” page for debunking election integrity misinformation, alongside these and over 200,000 other election-related videos. Collectively, these information panels have been shown over 4.5 billion times. Starting today, we will update this information panel, linking to the “2020 Electoral College Results” page from the Office of the Federal Register, noting that as of December 8, states have certified Presidential election results, with Joe Biden as the President-elect. It will also continue to include a link to CISA, explaining that states certify results after ensuring ballots are properly counted and correcting irregularities and errors.

Additionally, since Election Day, relevant fact-check information panels from third-party fact-checkers have been triggered over 200,000 times above relevant election-related search results, including for voter-fraud narratives such as “Dominion voting machines” and “Michigan recount.”




Monday, December 07, 2020

Timnit Gebru: Google Definitely Has A "Type" When It Comes To Diversity And Inclusion...,

technologyreview |  The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here.

Environmental and financial costs

Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

Strubell’s study found that one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a roundtrip flight between New York City and San Francisco.
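The pound and metric-ton figures quoted above can be sanity-checked with the standard pound-to-kilogram conversion factor. A minimal sketch (the variable names and the comparison baselines are taken from the article; only the conversion constant is assumed):

```python
# Sanity-check the CO2 figures quoted from Strubell et al.'s study.
# Assumes the standard conversion 1 lb = 0.45359237 kg.

LB_TO_KG = 0.45359237

def pounds_to_metric_tons(lb: float) -> float:
    """Convert pounds to metric tons (1 metric ton = 1000 kg)."""
    return lb * LB_TO_KG / 1000.0

nas_lb = 626_155   # neural architecture search run, per the article
bert_lb = 1_438    # one BERT training run, per the article

print(round(pounds_to_metric_tons(nas_lb)))      # ~284 metric tons
print(round(pounds_to_metric_tons(bert_lb), 2))  # ~0.65 metric tons
```

The conversion confirms the article's figures are internally consistent: 626,155 pounds does come to roughly 284 metric tons, and the BERT run is about three orders of magnitude smaller.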

Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models mean they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.

Massive data, inscrutable models

Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there's a risk that racist, sexist, and otherwise abusive language ends up in the training data.

An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, [...] undocumented training data perpetuates harm without recourse.”

Research opportunity costs

The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).

Illusions of meaning

The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.

Thursday, December 07, 2017

Peasants Will Be Matched and Bred Via eHarmony and 23andMe...,


DailyMail |   Location-based apps like Tinder have transformed the dating world.
But how will technology help us find Mr or Mrs Right 25 years from now?

According to a new report, the future of romance could lie in virtual reality, wearable technology and DNA matching.

These technologies are set to take the pain out of dating by saving single people time and effort, while giving them better matches, according to the research.

Students from Imperial College London were commissioned by relationship website eHarmony.co.uk to produce a report on what online dating and relationships could look like by 2040.

They put together a report based on analysis of how people's lifestyle habits have evolved over the past 100 years.

Sunday, December 03, 2017

Power And Control Over Your Mind, Attention, Resources...,


Counterpunch  |  When a system enters the final stage of its deterioration – whether that is an institutional system, a state, an empire, or the human body – all the important information flows that support coherent communication break down. If this situation is not corrected, the system will collapse and die.

It has become obvious to nearly everyone that we have reached this stage on the planet and in our democratic institutions. We see how the absolute dysfunction of the global information architecture — represented in the intersection of mainstream media outlets, social technology platforms and giant digital aggregators — is generating widespread apathy, despair, insanity and madness at a scale that is terrifying.

And we are right to be terrified, because this situation is paralyzing us from taking the action required to solve global and local challenges. While liberals fight conservatives and conservatives fight liberals we lose precious time.

While progressives fight government, the corporations and the super-rich, we drown in despair. While philanthropists, fueled by their own certainty and wealth, fight for justice or equality or for some poor hamlet in Africa, we become apathetic and distracted from the real source of the problem. And while the president fights everyone and everyone fights the president, the collective goes mad.

In the background, however, the game of hoarding resources and not redistributing them accelerates, absorbing the sum total of our collective actions and commitments into a singular unacceptable future. There is only one way to avoid this fate: uncover the source of the disease and cure it by mobilizing solutions.

We are about to break down for you the source of this disease of information that is accelerating us toward ecological and institutional collapse, because once you see it, you will be free to act and build something else.
