Sunday, April 02, 2023

Disingenuously Shaping The Narrative Around Large Language Model Computing

vice  |  More than 30,000 people—including Tesla’s Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang, and a few leading AI researchers—have signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. 

The letter immediately caused a furor as signatories walked back their positions, some notable signatories turned out to be fake, and many more AI researchers and experts vocally disagreed with the letter’s proposal and approach. 

The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion embraced by many members of the Silicon Valley tech elite because it preaches amassing massive wealth to direct toward problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried.

Specifically, the institute focuses on mitigating long-term "existential" risks to humanity such as superintelligent AI. Musk, who has expressed longtermist beliefs, donated $10 million to the institute in 2015.  

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter states. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter clarifies, referring to the arms race between big tech companies like Microsoft and Google, which in the past year have released a number of new AI products.

Other notable signatories include Stability AI CEO Emad Mostaque, author and historian Yuval Noah Harari, and Pinterest co-founder Evan Sharp. A number of people who work at companies participating in the AI arms race, including Google DeepMind and Microsoft, have also signed. The Future of Life Institute told Motherboard that all signatories were “independently verified through direct communication.” No one from OpenAI, which develops and commercializes the GPT series of AI models, has signed the letter.

Despite this verification process, the letter initially carried a number of false signatories, including people impersonating OpenAI CEO Sam Altman, Chinese president Xi Jinping, and Meta Chief AI Scientist Yann LeCun, before the institute cleaned up the list and paused the public display of new signatures while it verified each one.

The letter has been scrutinized by many AI researchers and even its own signatories since it was published on Tuesday. Gary Marcus, a professor of psychology and neural science at New York University, told Reuters that “the letter isn’t perfect, but the spirit is right.” Similarly, Emad Mostaque, the CEO of Stability AI, who has pitted his firm against OpenAI as a truly “open” AI company, tweeted, “So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.”

AI experts have criticized the letter as furthering the “AI hype” cycle rather than listing, or calling for, concrete action on harms that exist today. Some argued that it promotes longtermism, a worldview that has been criticized as harmful and anti-democratic because it valorizes the uber-wealthy and can excuse morally dubious actions in the name of humanity’s far future.

Emily M. Bender, a professor in the Department of Linguistics at the University of Washington and the co-author of the first paper the letter cites, tweeted that this open letter is “dripping with #AIhype” and that the letter misuses her research. The letter says, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research,” but Bender counters that her research specifically points to current large language models and their use within oppressive systems—which is much more concrete and pressing than hypothetical future AI.

“We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about ‘too powerful AI’,” she tweeted. “Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).” 

“It's essentially misdirection: bringing everyone's attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard.

